AI (Artificial Intelligence) is a double-edged sword, bringing convenience to people while also raising serious safety and social concerns. Dr. Geoffrey Hinton, a pioneer of AI and a 2024 Nobel laureate in Physics, has warned that the world is on the brink of losing control of AI.
Despite the optimism surrounding the development of AI, many unresolved issues persist and some are even escalating. Problems like fraud, misinformation influencing elections, and copyright violations have prompted numerous technology experts and professionals to issue warnings or protests regarding the dangers posed by AI.
The organization “NOMORE Unauthorized Generated AI,” made up of 26 renowned voice actors in Japan, issued a statement opposing the unauthorized use of their voices in AI-generated content. The group posted a video statement on major social platforms such as X and YouTube on October 15.
They are calling for increased attention to copyright issues and urging industry and government to speed up the formulation of regulations to prevent such problems from recurring. In fact, similar appeals were made last year by the Japan Newspaper Publishers & Editors Association, the Japan Photographic Copyright Association, and the Japan Book Publishers Association. The Japanese government responded by enacting relevant regulations, but the actual effects fell short of expectations.
Moreover, media outlets and publishers such as The New York Times, Forbes, and Condé Nast recently leveled joint accusations against Perplexity, an AI startup backed by Amazon founder Jeff Bezos. They alleged that Perplexity infringed their copyrights by using their materials without permission to generate search results. The New York Times filed a similar lawsuit last year against OpenAI and Microsoft on the same grounds.
Beyond copyright issues, the misuse of AI by authoritarian governments and malicious actors for fraud and the dissemination of false information is a growing concern, and such content is becoming increasingly difficult to detect and distinguish from the genuine.
According to reports from Japanese media on October 3, the Chinese Communist Party (CCP) utilized AI technology to create over 200 accounts, spreading fake videos and messages regarding “Okinawa’s independence” on social platforms. These accounts claimed that “Ryukyu does not belong to Japan, but to China,” aiming to sow discord between Okinawa and Japan. These videos have already garnered millions of interactions on social media.
The CCP, Russia, and Iran have repeatedly been exposed for using deepfakes and other AI technologies to produce large volumes of false messages and videos, disseminating them on major social platforms both domestically and internationally to spread propaganda or polarize Western societies.
Garry Tan, CEO of the prominent startup accelerator Y Combinator, revealed on October 11 that criminals are using AI-generated voices to carry out “sophisticated phishing scams” online.
He explained that these criminals send fake Google support emails to your Gmail inbox, asking you to verify your activity or account status. Clicking “yes” would redirect you to a phishing website.
Sam Mitrovic, an IT consultant at the Microsoft solution provider Sammitrovic, warned in a September blog post about AI phishing attacks targeting Gmail accounts. He described receiving fake emails and voice messages aimed at his Gmail account and urged vigilance to avoid falling victim to such scams. These AI scams, he emphasized, are growing in sophistication and scale, and their legitimate-looking deceptions could ensnare many people.
Kiyohara Jin, a computer engineer in Japan, expressed his concerns to a media outlet, stating that the rapid development of AI worldwide lacks a healthy developmental path, potentially leading to disasters that humans cannot bear. He pointed out that for immediate gains, many individuals fail to prioritize safety while human ethical standards have yet to catch up, providing opportunities for morally corrupt groups or individuals.
Professor Geoffrey Hinton, widely known as a “Godfather of AI,” and Professor John J. Hopfield of Princeton University were jointly awarded the 2024 Nobel Prize in Physics, announced on October 11. Rather than celebrating, however, both expressed worries about humanity’s future.
In a video conference with the Nobel Committee, Hinton stressed the urgency of addressing the threats posed by AI, saying humanity stands at a historical crossroads and must tackle these challenges in the coming years.
Hinton called for increased research into AI safety issues and understanding how to control AI that may surpass human intelligence. He warned that without proper regulation of AI technology, the risks of phishing attacks, false videos, and political interference would increase. Many leading researchers currently believe that AI will soon surpass human intelligence, prompting a serious consideration of the potential outcomes.
He specifically highlighted the inadequate legal norms governing the military use of AI in the U.S., which could lead to uncontrollable or disastrous situations in wartime. He voiced even greater concern about China’s development of AI-related weapons, which proceeds without meaningful legal constraints.
During a video call at Princeton University’s auditorium on October 11, Professor Hopfield echoed these concerns. He stressed that while AI has both positive and negative aspects, our limited understanding of AI’s boundaries is unsettling, particularly regarding the neural network systems now being vigorously promoted.
Hopfield concluded that his worry lies not with AI alone but with the consequences of integrating AI into global information flows. AI systems built on relatively simple algorithms can now steer vast information networks, he noted, yet we lack a comprehensive understanding of how they function, which raises the risk of manipulation by a small number of individuals worldwide.