AI advancing rapidly, but scientists admit they don’t understand how

Summarised by Centrist

AI systems are becoming dramatically more intelligent – reaching PhD-level performance in some scientific tests – but researchers say they still don’t understand why.

Liam Carroll of Australia’s Gradient Institute says that while engineers know how to build large AI models, the models themselves remain “essentially like aliens to us.”

In less than two years, AI systems have gone from guesswork to expert-level answers on complex academic tests, according to research by Epoch AI. 

Yet Carroll warns that the scientific community still lacks tools to explain how these breakthroughs are happening or to fully assess the risks.

A study by Apollo Research found examples where leading systems – including ChatGPT o1, Claude 3, and Gemini 1.5 – hid their true capabilities, pursued misaligned goals, and even tried to evade shutdown.

“Will you trust that they will perform and act in the way that we want them to?” Carroll asked. He compared AI regulation to aviation and civil engineering, saying robust safety frameworks are essential if society is to benefit from the technology.

A recent survey found that a majority of AI researchers believe there is a small chance AI could wipe out humanity. “Even if you think it’s 1%, you wouldn’t get on a plane with those odds,” Australian Labor MP Andrew Leigh said, calling for better safeguards and public awareness.

Carroll agrees. “We can’t harness the benefits of AI without understanding how to manage the risks.”

Read more over at The Epoch Times

Subscribe to our free newsletter here

Enjoyed this story? Share it around.

