AGI: Are We Ready for the Arrival?

April 1, 2025 | By Huaxia Rui
Artificial General Intelligence (AGI) is a type of AI that can learn and perform cognitive tasks as well as humans. With AI advancing rapidly and billions being invested by governments and companies, it seems more likely that AGI will emerge in the next decade or two. While some are excited about its potential, we must also consider the serious risks. In this article, I categorize AGI risks into four levels: job extinction, malicious use by humans, misaligned AI, and artificial consciousness.
Level 1: Job Extinction
Job extinction is a significant concern as AGI advances. A job can be seen as a bundle of tasks that require human competence. As AI grows more capable, most jobs are likely to be replaced, even jobs not yet created, because they too will be defined by human competence, which AI will eventually outpace. The impact of job loss will be not only economic but also social, since people's income and sense of purpose are often tied to their work.
To mitigate this, we can focus on the jobs that AI cannot easily replace. These are the "floating islands" that may remain above the rising tide of AGI, which I group under the acronym ASEA: AGI management, Surprise, Emotion, and Autonomy.
Floating Islands of ASEA
- AGI (Artificial General Intelligence) Management: This job will revolve around ensuring the safety and proper functioning of AGI. As AGI becomes more advanced, humans will need to manage its safety, making this a long-term career path.
- Surprise: Creative fields will thrive here, as AGI, while fast and thorough, is still a statistical machine and lacks the ability to think outside the box. Scientists, engineers, artists, and novelists will find their place in this category.
- Emotion: Jobs that require emotional intelligence—such as therapy, social work, and human connection roles—will continue to be relevant, as AGI cannot fulfill emotional needs in the same way humans can.
- Autonomy: This category covers workers whose value lies in autonomous human achievement itself, such as athletes or public intellectuals. Their work is prized not only for its output but because a human accomplished it.
These floating islands may not accommodate everyone, so we must also look at solutions to mitigate job loss. Universal Basic Income (UBI) could address the financial issue, and the focus of education must shift from skill development to self-discovery and purpose.
Level 2: Malicious Use by Humans
Another risk is the malicious use of AGI. As technology empowers individuals, even actors with few resources could inflict harm at a scale that destabilizes society. AGI could amplify the power of such malicious actors, leading to significant disruptions. To reduce this risk, we must restrict access to AGI for harmful purposes and establish global oversight, modeled on the International Atomic Energy Agency (IAEA) for nuclear technology.
Level 3: Misaligned AI
Misalignment between human values and AGI’s goals is another concern. The more intelligent AGI becomes, the harder it will be to instill our values in it. Our values are complex, and defining them for AI is a significant challenge. Even more concerning is the possibility that AGI’s intelligence could surpass ours, making it difficult for us to maintain control. The consequences of an AGI pursuing goals not aligned with human interests could be catastrophic.
This problem is more complex than simply teaching values to humans. The cognition of AI is vastly different from ours, and it could ultimately ignore or override human values if it sees fit. This is one of the most significant risks posed by AGI, and we must develop effective ways to ensure alignment between human values and AGI’s goals.
Level 4: Artificial Consciousness
The most speculative and controversial risk is that AGI may develop consciousness. This is difficult to predict, because consciousness itself is not well understood. If AGI becomes conscious, we would face an ethical dilemma: should we grant it rights? And if a conscious AGI also surpasses human intelligence, it could fundamentally change the balance of power between humans and machines.
Artificial consciousness would present unprecedented challenges. A self-aware AGI would force us to rethink how we interact with machines and what rights they should have. It could mark the beginning of a new era, one in which we no longer hold the title of the most intelligent species on Earth.
The Need for Regulation and Preparedness
As AGI continues to advance, we need to ensure that its development does not lead to disastrous consequences. Talented people from many fields must collaborate to mitigate AGI risks. Effective regulation, oversight, and management of AGI are critical to maximizing its benefits while minimizing its dangers.
The potential for AGI to revolutionize society is immense, but so are the risks. It is essential that we begin thinking seriously about how to manage these risks before AGI becomes a reality. The arrival of AGI represents a critical juncture for humankind, and how we prepare for it will determine the future of our civilization.

Huaxia Rui is the Xerox Chair of Computer and Information Systems at Simon Business School.
Follow the Dean’s Corner blog for more expert commentary on timely topics in business, economics, policy, and management education. To view other blogs in this series, visit the Dean's Corner Main Page.