The Risks With Generative AI, and the Possible Solutions
“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production,” Elon Musk said in a recent Fox News interview. “In the sense that it has the potential – however small one may regard that probability, but it is non-trivial – it has the potential of civilization destruction,” Musk added.
Musk has repeatedly warned about the negative impact of AI. In March, Musk and a number of well-known AI researchers signed a letter, published by the nonprofit Future of Life Institute, which notes that AI labs are currently locked in an “out-of-control race” to develop and deploy machine learning systems “that no one – not even their creators – can understand, predict, or reliably control… Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” says the letter. “This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
Signatories include author Yuval Noah Harari, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, politician Andrew Yang, and several well-known AI researchers and CEOs, including Stuart Russell, Yoshua Bengio, Gary Marcus, and Emad Mostaque.
The letter may not have any effect on the current arms race in AI research, especially with big tech companies rushing to deploy new products. It is a sign, though, of growing public awareness of the need to look carefully at the risks behind the hype of generative AI products.
Shortly after this letter was published, the Center for AI and Digital Policy (CAIDP) filed a complaint with the Federal Trade Commission (FTC) claiming that GPT-4 is “a risk to public safety” and urging the U.S. government to investigate its maker, OpenAI, for endangering consumers. The complaint cited GPT-4’s potential for abuse in categories such as “disinformation,” “proliferation of conventional and unconventional weapons,” and “cybersecurity.”
In an interview with ABC News, OpenAI CEO Sam Altman expressed his concerns about the potential dangers of advanced AI, saying that despite its "tremendous benefits," he also fears the potentially unprecedented scope of its risks. "The current worries that I have are that there are going to be disinformation problems or economic shocks, or something else at a level far beyond anything we're prepared for," he added. "And that doesn't require superintelligence."
Altman locates the peril not in ChatGPT itself but in its competitors: "A thing that I do worry about is... we're not going to be the only creator of this technology. There will be other people who don't put some of the safety limits that we put on it.” He added: "There will be tremendous benefits, but, you know, tools do wonderful good and real bad, and we will minimize the bad and maximize the good."
This is indeed the intention. But how can we achieve this goal? Should we leave it in the hands of companies and developers, or should we strive for universal standards? Before we answer these questions, let’s understand some of the concerns with generative AI products.
The concerns
Deepfake images of Donald Trump and Pope Francis generated by AI have recently created a stir online. One viral image showing the Pope in a stylish white puffer coat and a bejeweled crucifix was made with an AI program called Midjourney, which creates images based on textual descriptions provided by users. It has also been used to produce misleading images of former president Donald Trump being arrested.
Those arrest images were created and posted on Twitter by Eliot Higgins, a British journalist and founder of Bellingcat, an open-source investigative organization. He used Midjourney to imagine the former president’s arrest, trial, imprisonment in an orange jumpsuit, and escape through a sewer, and posted the images on Twitter, making clear that they were AI creations and not real photos. The images weren’t meant to fool anyone; Higgins wanted to draw attention to the tool’s power and alert the public to the dire consequences if it is misused.
It is plausible, however, to imagine governments or other nefarious actors manufacturing images to harass or discredit their enemies, or, in the worst-case scenario, to trigger a Third World War. While Higgins made it clear that the Trump images were generated by AI, the Pope Francis image was posted with no such disclosure and fooled people. As word spread across the internet that the Pope’s image was generated by AI, many expressed surprise.
“I thought the pope’s puffer jacket was real and didn't give it a second thought,” Chrissy Teigen tweeted, “no way am I surviving the future of technology.”
Fake or manipulated images are nothing new. But the ease with which they can be generated has changed dramatically. "The only way that realistic fakery has been possible in the past to the level we're seeing now daily was in Hollywood studios," said Henry Ajder, an AI expert, in an interview with Business Insider. "This was kind of the cream of the crop of VFX and CGI work, whereas now many people have the power of a Hollywood studio in the palm of their hands."
It is not limited to images. Such software can create deepfake videos and voice clones. Evidence already exists that fraudsters can use these tools to generate realistic but fake content quickly and cheaply, disseminating it to large groups or targeting certain communities or specific individuals. We shouldn’t be too concerned if someone uses ChatGPT to help write an email, but we should be very concerned when AI is used for scams, where the technology makes it easier and cheaper for bad actors to mimic voices and convince people, often the elderly, that their loved ones are in distress.
Social media and the impact of AI on youth and young adults
Let’s consider the following statistics: According to the Pew Research Center, 69% of adults and 81% of teenagers in the U.S. use social media. Approximately 86% of 18- to 29-year-olds use some type of social media platform and 97% of teenagers ages 13 to 17 have at least one social media account.
People ages 16 to 24 spend an average of three hours and one minute on social media daily, and research reported in the journal JAMA Psychiatry found that adolescents who use social media more than three hours per day may have an increased risk of mental health problems. This is consistent with statistics showing almost 25% of teens view social media as having a negative effect.
Young adults aged 18 to 25 have the highest prevalence of mental illness of any adult age group: 25.8%. Young adults ages 26 to 49 have a 22.2% prevalence, and adults ages 50 and older have a 13.8% prevalence.
Why are these statistics important? Bear in mind that fake images, videos, and even voice clones are disseminated via social media. Since teenagers and young adults use social media most heavily, they will be impacted first and most by misinformation.
Furthermore, teenagers and young adults often pull pranks, sometimes as a joke and sometimes to bully someone. With the AI tools available today, anyone, including kids, can create fake images or videos easily and at almost no cost, then spread embarrassing fakes on social media. These fake creations can have dire consequences for the targets of such “jokes.”
Young adults and teenagers are already the most affected by mental illness. The ease of misusing generative AI may exacerbate the prevalence and intensity of mental illness among young adults. If AI tools are not safeguarded against misuse or inappropriate use, mental illness might become a major problem in our future society, with acute economic and health implications. How can we protect society and our future generations from going down this path of destruction?
Identifying fake AI creations
One sign that an image was generated with Midjourney is a "plasticky" appearance, but as the technology advances, the platform may fix this issue. For now, it is one indicator to look for.
AI programs generally struggle with semantic consistency in areas such as lighting, shapes, and subtlety. Check whether the lighting on a person in an image falls in the right place, and whether someone's head is slightly too big or has over-exaggerated eyebrows and bone structure. Other inconsistencies include a subject smiling with the lower set of teeth (people usually smile with their top teeth, not their bottom) and oddly rendered hands.
Not every single image will have these signs, but they could be useful pointers.
Aesthetic factors are not always enough to identify deepfakes, especially as AI tools become more sophisticated. Hence, context is critical – it is worth trying to find an authoritative source and asking questions like, “Who’s sharing this image? Where has it been shared? Can it be cross-referenced to a more established source with known fact-checking capabilities?”
If all else fails, you can use a reverse image search tool to find the context of an image; Google Lens and Yandex’s visual search function are two options. A reverse image search on the images of Trump being arrested, for example, might have taken you to the news websites where they had been shared in articles. It is essentially a way to trace an image back to its source.
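For readers who want to go a step further, a related technique (distinct from the reverse image search services named above) is perceptual hashing, which fingerprints an image in a way that survives resizing and recompression. The minimal sketch below, assuming the third-party Pillow and imagehash Python packages and two hypothetical local files, shows how a suspect copy can be matched against a known reference:

```python
# A minimal sketch: compare a suspect image against a known reference copy
# using perceptual hashing (pip install Pillow imagehash).
# "reference.jpg" and "suspect.jpg" are hypothetical local files.
from PIL import Image
import imagehash

reference_hash = imagehash.phash(Image.open("reference.jpg"))
suspect_hash = imagehash.phash(Image.open("suspect.jpg"))

# Subtracting two hashes gives the Hamming distance between their 64-bit
# fingerprints; a small distance suggests the same underlying picture.
distance = reference_hash - suspect_hash
print(f"Hamming distance: {distance}")

if distance <= 8:  # illustrative threshold, not a hard rule
    print("Likely the same image, possibly re-encoded, resized, or re-shared.")
else:
    print("Likely a different image; check the context and the source.")
```

This does not replace human judgment or fact-checking; it only helps trace whether a widely shared copy matches an image that has already been verified or debunked.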
These might be good measures to consider when investigating such images. But they put the onus on the receiver of the information instead of on the sender, not to mention the companies creating these AI tools. It seems that we are left to our own devices to protect ourselves. We cannot expect kids or teenagers to thoroughly investigate the information they receive. There must be other measures to guard us against misinformation and inappropriate content.
A better solution would be for the generating algorithm to embed a mark at creation time, such as a watermark backed by a cryptographic seal, that immediately identifies the content as AI-generated rather than authentic.
Dutch company Revel.ai and Truepic, a California company, have been exploring broader digital content verification. The companies have been working on a stamp that identifies an image or video as computer-generated, making it transparent that the content is a deepfake.
The data is cryptographically sealed into the file; tampering with the image breaks the digital signature and prevents the credentials from appearing when using trusted software. The companies hope the badge, which will come with a fee for commercial clients, will be adopted by other content creators to help create a standard of trust involving AI.
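To make the idea concrete, here is a minimal sketch of how such a cryptographic seal can work in principle, binding a provenance record to the image bytes with a digital signature. It illustrates the general technique, not Revel.ai’s or Truepic’s actual implementation; the placeholder image bytes and the JSON record are hypothetical. It uses the Python cryptography package:

```python
# Sketch: binding a provenance record to image bytes with a digital signature
# (pip install cryptography). Not any vendor's real implementation.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The content creator (or the AI tool itself) holds a signing key.
signing_key = Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

image_bytes = b"...raw bytes of the generated image..."  # placeholder content
provenance = b'{"generator": "example-ai-tool", "ai_generated": true}'  # hypothetical record

# Sign the image bytes together with the provenance record.
signature = signing_key.sign(image_bytes + provenance)

# A viewer with trusted software verifies the seal before showing credentials.
def credentials_valid(image: bytes, record: bytes, sig: bytes) -> bool:
    try:
        verify_key.verify(sig, image + record)
        return True
    except InvalidSignature:
        return False

print(credentials_valid(image_bytes, provenance, signature))                    # True
print(credentials_valid(image_bytes + b"edited", provenance, signature))        # False: tampering breaks the seal
```

Because any change to the image or its record invalidates the signature, trusted software can refuse to show the credentials for a tampered file, which is exactly the behavior described above.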
Ideally, this sort of badge or mark would become a universal standard, set by a standards body such as the National Institute of Standards and Technology (NIST) or an international equivalent, that all companies and developers building AI tools such as ChatGPT would be required to conform to.
As Sam Altman said in the quote above, "there will be tremendous benefits, but, you know, tools do wonderful good and real bad, and we will minimize the bad and maximize the good." Let us all put our best foot forward to minimize the bad and maximize the good.
The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.