
by Michael R. Wade • Published 4 November 2024 in Artificial Intelligence • 7 min read
The clock is ticking down to a moment when artificial intelligence could slip beyond our control. IMD’s new AI Safety Clock has been set to 29 minutes to midnight, reflecting the growing threat posed by uncontrolled artificial general intelligence (UAGI), autonomous systems that function without human oversight and may pose serious dangers.
This clock serves as a stark reminder that we are nearing a crucial point in AI development where rapid advancements, paired with insufficient regulation, are pushing us closer to potential dangers that could drastically affect society and business.
But how is this timeline calculated? What are the real dangers, and how can governments and companies work together to mitigate these risks?
Introduced in October 2024, the AI Safety Clock assesses the risks of UAGI. The aim is to inform the public, policymakers, and business leaders about these risks, thereby promoting the safe development and use of AI.
The clock’s time is calculated through a methodology that weighs several key factors: the sophistication of AI models, the state of regulatory frameworks, and the extent to which the technology interacts with the physical world through infrastructure.
Reaching this number involves tracking developments in AI models: how they perform against human intelligence and how quickly they are becoming more capable. In a nutshell: AI models are moving rapidly on both fronts.
We also look at how autonomous these systems are. For instance, if an AI remains under human control, the risk is lower. But if it becomes independent, the danger is exponentially magnified. The classic doomsday scenario is when AI gains the ability to make decisions on its own, without oversight.
But perhaps the most alarming factor in our methodology is the connection of AI to the physical world. If AI systems begin controlling critical infrastructure, such as power grids or military systems, the consequences could be catastrophic. Much as nuclear weapons reshaped geopolitics, uncontrolled superintelligence could be world-altering.
We also factor regulation into the clock. Each time meaningful guardrails are put in place, the clock moves away from midnight. For instance, the vetoing of an AI safety bill in California last month moved us closer to midnight, while Europe’s AI Act helped push the clock back.
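The article does not publish the formula behind the clock, but the factors described above lend themselves to a simple composite index. The Python sketch below is purely illustrative: the factor names, weights, and scores are assumptions, not the clock’s actual methodology, and show only how capability, autonomy, physical-world integration, and regulation might be weighed into a single “minutes to midnight” reading.

```python
# Hypothetical illustration only: the AI Safety Clock's real formula is not
# published here. Factor names, weights, and scores are invented assumptions.
from dataclasses import dataclass


@dataclass
class RiskFactors:
    sophistication: float        # 0-1: frontier-model capability vs. human benchmarks
    autonomy: float              # 0-1: degree of operation without human oversight
    physical_integration: float  # 0-1: links to infrastructure such as grids or weapons
    regulation: float            # 0-1: strength of guardrails (higher = safer)


def minutes_to_midnight(f: RiskFactors,
                        weights=(0.3, 0.3, 0.3, 0.1)) -> int:
    """Combine the factors into a single risk score, then map it onto a
    60-minute dial where fewer minutes remaining means higher risk."""
    w_soph, w_auto, w_phys, w_reg = weights
    risk = (w_soph * f.sophistication
            + w_auto * f.autonomy
            + w_phys * f.physical_integration
            - w_reg * f.regulation)      # regulation pushes the clock back
    risk = min(max(risk, 0.0), 1.0)      # clamp to [0, 1]
    return round(60 * (1 - risk))


# Example: capable models, partial autonomy, limited physical integration,
# patchy regulation -> roughly half an hour left on the dial.
print(minutes_to_midnight(RiskFactors(0.8, 0.6, 0.4, 0.3)))  # 29
```

The real index presumably draws on richer, evidence-based inputs; the point of the sketch is only the direction of travel – more capability, autonomy, and physical integration move the hands toward midnight, while stronger regulation pulls them back.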
Since OpenAI’s chatbot ChatGPT burst onto the scene in late 2022, a new wave of groundbreaking generative AI launches has taken the world by storm. Advocates say this new wave of AI could shift consumer behavior as profoundly as the internet and mobile phones did.
But what happens if AGI is no longer under human control? The risks are vast.
Although UAGI hasn’t yet arrived in full force, there are already signs of its potential to do harm. Chatbots, for example, have already influenced people’s moral judgments, according to studies. Misinformation powered by AI is on the rise, and the technology is only becoming more sophisticated. Consider the deepfake of Russian President Vladimir Putin declaring peace with Ukraine that circulated on social media.
The military application of AI is another pressing concern. Countries are developing AI-driven weapons systems, such as autonomous drones, and the fear is that they might eventually operate beyond human control. In the wrong hands, these technologies could spark conflicts or cause untold damage to societies. A scary prospect indeed.
One of the biggest obstacles in managing UAGI risk is the fragmented approach to regulation. While the EU has been proactive with its AI Act, other regions lag behind. In the US, for instance, there’s no nationwide AI legislation, with efforts often led by individual states. Recent attempts, like California’s proposed AI safety bill, have been vetoed out of fear that regulating too strictly could stifle innovation or push tech companies out of the state.
California governor Gavin Newsom said recently the legislation could “curtail the very innovation that fuels advancement in favor of the public good.”
SAP CEO Christian Klein, meanwhile, cautioned EU policymakers against over-regulating artificial intelligence, saying recently that it could weaken Europe’s global standing and widen the gap with the US. “I’m totally against regulating the technology, it would harm the competitiveness of Europe a lot,” he told the FT.
For stronger regulation to overcome such opposition, international cooperation is essential. One option would be a global body like the International Atomic Energy Agency (IAEA), which oversees nuclear technology. Such an organization could audit AI systems and ensure they adhere to global safety standards.
There are several concrete steps that governments and corporations can take to mitigate the risks of UAGI.
The AI Safety Clock is not intended to incite panic, but it does serve as a warning. While the clock is ticking, there is still a window of opportunity to steer AI development in the right direction – but the time to act is now. If we want to avoid a future where UAGI operates beyond our control, we need governments, corporations, and global institutions to step up and work together. The goal is not to stop innovation, but to make sure it’s intelligent, ethical, and safe.
All views expressed herein are those of the author and have been specifically developed and published in accordance with the principles of academic freedom. As such, such views are not necessarily held or endorsed by TONOMUS or its affiliates.
Michael R. Wade is TONOMUS Professor of Strategy and Digital at IMD and Director of the TONOMUS Global Center for Digital and AI Transformation. He directs a number of open programs, including Leading Digital and AI Transformation, Digital Transformation for Boards, Leading Digital Execution, Digital Transformation Sprint, Digital Transformation in Practice, and Business Creativity and Innovation Sprint. He has written 10 books and hundreds of articles, and has hosted popular management podcasts including Mike & Amit Talk Tech. In 2021, he was inducted into the Swiss Digital Shapers Hall of Fame.