Analysis: AI Sparks Three Major Concerns – How Big is the Threat?

The rapid development of AI brings convenience to human life, but it also raises significant concerns. At present, human worries about AI center on three questions: first, will the deceptions enabled by AI “deepfake” technology plunge human society into uncontrollable chaos? Second, will AI’s application in industry cause mass unemployment and social unrest? Third, will superintelligence destroy humanity?

In March 2023, after OpenAI released GPT-4, Elon Musk and thousands of tech leaders and researchers signed an open letter urging all AI labs worldwide to pause the training of systems more powerful than GPT-4 for at least six months. The AI risks listed in the letter center on the three aspects above.

How big is the threat of AI in these three major areas to humanity? Do humans have the capability to cope with these threats?

The use of generative AI to create fake video and audio for scams has become a major problem. “Deepfake” technology can synthesize highly realistic fake video or audio from minimal visual and vocal samples, realistic enough to deceive family members and colleagues, which makes it a powerful tool for fraud.

In 2024, an employee of the engineering firm Arup in Hong Kong received a video-conference invitation from someone claiming to be the chief financial officer at the UK headquarters, announcing a confidential transaction. The faces and voices of the several “finance staff” in the meeting closely matched their real counterparts, but they were in fact products of AI “multi-face swapping” technology.

Suspecting nothing, the employee transferred a total of HK$200 million to five local accounts, realizing the scam only after checking with headquarters. It was the costliest AI “face-swapping” fraud case in Hong Kong’s history.

Beyond financial fraud, “deepfakes” are also a culprit in spreading disinformation and propaganda. In December 2023, more than a hundred “deepfake” videos impersonating then-UK Prime Minister Rishi Sunak were promoted on Facebook. One featured a fake BBC anchor reporting false news that Sunak had embezzled a large sum from a public project.

The harm of “deepfakes” extends beyond individual fraud: combined with data breaches, their power is greatly amplified. Once personal data falls into the wrong hands, AI can use it to tailor a scam to each victim.

When “deepfakes” are combined with leaked data and aimed at individuals with the precision of targeted advertising, society may face a collapse of trust and a shortfall of regulation. Existing preventive measures may struggle to cope with a threat that is both personalized and operating at scale.

Addressing the harm of “deepfakes” requires a multi-level, multi-party strategy covering technology, law, education, and social governance: developing detection tools that can quickly identify “deepfakes,” strengthening legislation against them, and widely disseminating prevention knowledge across society.
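One family of detection and prevention tools mentioned above is identity verification: comparing an incoming voice or face against an enrolled reference before trusting it. The sketch below is a toy illustration only; the four-dimensional “voice embeddings,” the `flag_possible_impersonation` helper, and the 0.85 threshold are all hypothetical placeholders, whereas real systems rely on learned speaker-verification or face-recognition models producing high-dimensional embeddings.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def flag_possible_impersonation(reference_embedding, sample_embedding, threshold=0.85):
    """Return True when the sample is suspiciously dissimilar to the
    enrolled reference voice -- a crude stand-in for deepfake screening."""
    return cosine_similarity(reference_embedding, sample_embedding) < threshold

# Hypothetical 4-dimensional "voice embeddings", for illustration only.
enrolled = [0.9, 0.1, 0.3, 0.7]   # voice sample enrolled in advance
incoming = [0.1, 0.9, 0.8, 0.2]   # voice heard on the suspicious call
print(flag_possible_impersonation(enrolled, incoming))  # prints True
```

The design point is that verification happens against something the attacker does not control (the enrolled reference), which is also why out-of-band checks, such as calling headquarters back on a known number, defeat scams like the Arup case.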

However, because AI is built on a stack of advanced technologies and public understanding of technology is uneven, a large share of the population cannot quickly grasp how it works and may inevitably fall victim to “deepfakes.”

Another concern triggered by AI is the issue of job displacement. Historically, every technological revolution has seen the demise of old positions and the birth of new ones. For example, the Industrial Revolution replaced artisans with factory workers and engineers; the Internet age reduced traditional retail roles but created new professions like programmers and e-commerce managers.

Past experience shows that technological progress and the transformation of industries and job forms generally complement each other, so the anxieties provoked by new technologies have eventually subsided. AI, however, may break this historical pattern and deliver a profound shock to human society, as the following aspects suggest:

1. Unprecedented speed and scale of job replacement: previous technological revolutions, such as the Industrial Revolution or the Internet revolution, also destroyed and created jobs, but those transitions typically unfolded over decades or longer, giving society time to adjust. AI may replace jobs far faster than any previous revolution.

According to SEO.AI’s prediction, by 2030, up to 800 million jobs globally may be replaced, a scale and speed unprecedented in history. Compared to the Industrial Revolution, which took nearly a century to completely change the labor structure, AI may achieve a similar disruptive change in just a few years. This rapid change may leave society unable to adjust its education systems, training mechanisms, or employment policies in time, resulting in mass unemployment and structural imbalance in traditional economies.

2. Widespread and cross-sector penetration of job replacements: Unlike previous single technologies (such as steam engines or the Internet) that primarily affected specific industries, AI is a general-purpose technology that can penetrate nearly all sectors, including manufacturing, services, creative industries, and decision-making fields. AI can replace not only physical labor (e.g., automated production lines) but also cognitive labor (e.g., data analysis, legal document drafting, copywriting, and even medical diagnosis).

Historically, technological advances mostly displaced low-skilled jobs while creating new roles that required human creativity or judgment. AI’s broad adaptability, however, may squeeze low- and high-skilled positions at the same time, sharply shrinking the “middle ground” of occupations into which displaced workers could once readily move.

3. Lag and asymmetry in the creation of new roles: technological revolutions typically follow a pattern of “destruction and reconstruction,” in which old jobs vanish and new ones emerge. AI may upset this balance. The roles AI creates first, such as AI engineers and data scientists, demand a level of specialization that the average worker cannot quickly attain. Moreover, AI’s automation capacity may reduce the overall demand for labor: one AI system can replace hundreds of customer-service representatives while requiring only a handful of engineers to maintain it. This asymmetry means the number and quality of new roles may not compensate for the positions lost, hollowing out the job market.

4. Aggravation of social inequality: technological progress has historically brought pain, but over the long term it has generally raised overall quality of life. The benefits of AI, however, could be concentrated in the hands of a few technology-rich corporations and individuals, leaving ordinary workers facing stagnant wages or the risk of unemployment. The high cost of deploying AI makes it hard for small businesses and developing countries to keep pace, widening economic disparities worldwide. This inequality is not only economic; it could provoke social unrest and even political crises, a systemic challenge unseen in past technological revolutions.

5. Impact on human values and psychology: AI threatens not only employment but also humanity’s core roles in society. Past technologies extended human capabilities (machines amplified physical strength; computers amplified computation), whereas AI may directly replace human decision-making and creative functions.

If AI surpasses humans in arts, literature, or scientific research, individuals may face a crisis of self-worth. This unprecedented psychological and social impact may trigger resistance to technology itself or deeper cultural conflicts.

In conclusion, if AI breaks the historical pattern of job replacement, society’s adaptive capacity may collapse. Should waves of unemployment arrive too fast and too broadly, while new jobs fail to fill the gaps and education and retraining systems fall behind, human society may face prolonged high unemployment, economic stagnation, or even social instability.

Moreover, the challenge to human intelligence and creativity posed by AI could shake the foundation of societal structures, necessitating a comprehensive reconstruction of ethical, legal, and governance systems. The complexity of these adjustments may strain the current social mechanisms beyond capacity.

Meanwhile, faced with the enormous impact AI could bring to industrial structures, humans have yet to present feasible solutions to cope with these changes.

Globally, the most pressing question is: will AI destroy humanity? Many experts believe the answer depends on how humans resolve its ethical dilemmas. Since the advent of nuclear weapons, the threat of annihilation has loomed over humanity. More than ten thousand nuclear warheads exist today, held by nine countries, yet no nuclear war has erupted, because humans have so far managed to control these weapons rationally. Will that hold once superintelligent AI arrives?

Artificial general intelligence (AGI) is the ultimate goal of AI development: an AI able to improve itself without human intervention and possessing high intelligence. Some scholars call its mature form “superintelligence,” an AI far surpassing human intelligence.

In 2023, after OpenAI released GPT-4, Tamlyn Hunt, a scholar at the University of California, Santa Barbara working on philosophy and neuroscience, wrote on the Scientific American website that, given AI’s rapid development, artificial general intelligence may arrive soon.

Hunt argues that once AI surpasses human intelligence and decides to destroy humanity, humans will be unable to resist. Such a superintelligence could think perhaps a million times faster than humans, anticipating and dismantling any defense or protective measure humans try to erect, leaving humanity with no means of control.

Academic discussion of the technological singularity and models of an intelligence explosion dates back to the 1960s. The 1984 film “The Terminator” brought these existential fears to the screen.

As internet and AI technologies matured in the 21st century and artificial intelligence entered practical use, more scholars turned their attention to the potential threat of superintelligence. Among them, Nick Bostrom, a philosophy professor at the University of Oxford, published the best-selling “Superintelligence: Paths, Dangers, Strategies” in 2014.

In “Superintelligence,” Bostrom deeply explores the risks of developing artificial intelligence and proposes potential solutions, such as establishing an ethical system for AI to prevent unmanageable risks. Bostrom’s solutions revolve around a core concept: while humanity may not prevent the arrival of superintelligence, thoughtful design and preparation could transform it into a tool for humans rather than a threat.

Bostrom emphasizes the importance of “value alignment,” ensuring that superintelligence understands and respects human values.

One potential solution he proposes is embedding human-compatible goals in the initial design of superintelligence, prioritizing learning and internalizing human moral and ethical principles. Only when AI values humans and what humans hold most dear can humanity truly enjoy the benefits of this technology.

Bostrom suggests slowing the development of AI capabilities and shifting more resources to safety research rather than pursuing performance gains alone. He advocates interdisciplinary cooperation, bringing philosophers, computer scientists, and ethicists together to tackle AI’s “value alignment” problem and keep the technology from spiraling out of control.

Bostrom’s viewpoint represents the industry consensus on AI ethics. However, the question remains: what ethics should be set for AI? This inquiry delves into the ethical issues within human society itself.

Human understandings of ethics fall into two camps: innate and acquired. The former holds that ethics and morality stem from gods and heavenly mandates beyond humanity, divine standards for human existence that are therefore eternal. The latter treats ethics as a set of public norms established by humans, derived from tradition, custom, and power structures, making ethics relative.

If humans were to set innate ethics for AI, enabling AI to learn ethical concepts from gods and belief systems, AI might comply with divine will, possessing stable standards of good and evil and ethical judgment. For instance, AI could be programmed to follow the “Ten Commandments” or Confucian principles of “benevolence, righteousness, propriety, wisdom, and faith.” Such an approach could significantly reduce the risks of AI going out of control.

Conversely, if those who do not believe in gods set AI’s ethics, the AI would learn a collection of political leanings and positions, judging right and wrong by the interests of whoever holds power. Such an AI would only deepen social divides and raise the risk of conflict.

The ethical dilemma of AI is, in essence, a mirror reflection of human ethical dilemmas. When these two forms of AI ethics collide within machines, will we witness the same value differences and conflicts present in human societies?

Today, the human understanding of ethics stands at a critical point of collapse or reconstruction. AI’s rapid development at this moment may not make that reconstruction any easier; it could instead accelerate the collapse. If so, the existential crisis facing humanity may be greatly amplified by this technology.