John Oliver Is Right: AI Should Be Regulated. But How?
The hype around ChatGPT has gotten so intense that John Oliver dedicated a whole segment to Artificial Intelligence (AI) in a recent episode. He explains how AI has become commonplace in modern life, used in almost every industry and application, from self-driving cars and spam filters to training software for therapists. He acknowledges AI’s great potential and how it could change research, bioengineering, medicine and more. In his words, “AI will change everything.”
After acknowledging the benefits, Oliver spends most of the segment discussing the perils of AI – mainly its biases, ethical issues and misuse. He provides examples ranging from hiring software, medical research and art to malfunctioning autonomous cars and discriminatory algorithms. He calls for “explainable” AI and AI regulation, and believes that the EU’s proposed AI Act is a move in the right direction.
Oliver's concluding remarks are particularly relevant: “AI clearly has tremendous potential and could do great things. But if it is anything like most technological advances over the past few centuries, unless we are very careful, it may hurt the underprivileged, enrich the powerful and widen the gap between them...AI is a mirror and will reflect exactly who we are – from the best of us to the worst of us.”
The challenge is how to support and encourage this technology and the benefits it can bring to our lives, our global economy and our society, while controlling for its biases and ethical issues and mitigating harmful, nefarious uses. This is an uphill challenge, and it should be addressed with careful examination and an understanding of the full spectrum of the technology’s capabilities and benefits as well as its limitations and drawbacks.
But before we discuss this challenge and offer some suggestions, let’s first understand how AI works and why it might produce biases and unethical outcomes.
Is AI "smart" or "stupid"?
John Oliver said that "the problem with AI is not that it is smart but that it is stupid in ways we cannot predict."
As much as we would like to call it “artificial intelligence,” there is still a lot of human input involved in the creation of these algorithms. Humans write the code, humans decide which methods to use, and humans decide which data to use and how to use it. Most importantly, the algorithm and the data it is fed are very much subject to human error. Therefore, AI is only as smart as the person(s) who coded it and the data it was trained on.
Humans inherently have biases – conscious and unconscious. These biases can make their way into the code, the choice of data, the way the model is trained on that data, and the way the algorithm is tested and audited before launch. If we encounter problems with the output of these algorithms, the humans who created them should be accountable for the biases and ethical concerns embedded in them.
The tech world has known about algorithms’ flaws for years. In 2013, a Harvard University study found that ads for arrest records, which appear alongside the results of Google searches of names, were significantly more likely to show up on searches for distinctively African American names. The Federal Trade Commission has reported on algorithms that allow advertisers to target people who live in low-income neighborhoods with high-interest loans.
The problems are not new. They are simply intensifying as technology advances. It is unfortunate that it takes hyped applications such as ChatGPT to bring them to our attention, but that doesn't have to be the case. We should discuss these issues and address them as soon as they surface, or even earlier.
This is the reason that, even though the metaverse is not yet a reality, I have been advocating that it is not too soon to discuss ethics, and I have been covering, at length, why data concerns – such as the biases we’ve witnessed with AI – should be discussed now rather than later. These concerns and problems will only be exacerbated in the metaverse, where AI will be integrated with other technologies and data sources, such as brain-wave and biometric data.
The case of the Apple Card algorithm and lessons to be learned
Apple Card, which launched in August 2019, ran into major problems in November of that year, when users noticed that it seemed to offer smaller lines of credit to women than to men. David Heinemeier Hansson, a prominent software developer, vented on Twitter that even though his spouse, Jamie Hansson, had a better credit score and other factors in her favor, her application for a credit line increase had been denied. His complaints went viral, with others chiming in to recount similar experiences. Apple’s own co-founder Steve Wozniak said he had a similar experience: he was offered 10 times the credit limit his wife was.
Black box algorithms, like the one behind Apple Card, are indeed capable of discrimination. They may not require human intelligence to operate, but they are created by humans. Although they are thought to be objective because they are automated, they are not necessarily so.
An algorithm depends on: (1) the code, created by humans, who might be consciously or unconsciously biased; (2) the methods and the data used, which are decided by the creators of the algorithm; (3) the way the algorithm is tested and audited, which is, again, decided by the algorithm’s creators.
The algorithm might be a “black box” to the users and customers of these applications, but it is not a “black box” to its creators.
How biases can enter the algorithm
Goldman Sachs, the issuing bank for the Apple Card, insisted right away that there wasn't any gender bias in the algorithm, but it failed to offer any proof. Goldman then defended the algorithm by saying it had been vetted for potential bias by a third party and that, moreover, it doesn’t even use gender as an input. How could the bank discriminate if no one ever tells it which customers are women and which are men?
This explanation was somewhat misleading. It is entirely possible for algorithms to discriminate on gender, even when they are programmed to be “blind” to that variable. Imposing willful blindness to something as critical as gender only makes it harder for a company to detect, prevent, and reverse bias on exactly that variable.
A gender-blind algorithm could end up biased against women as long as it’s drawing on any input or inputs that happen to correlate with gender. There’s ample research showing how such proxies can lead to unwanted biases in different algorithms. Studies have shown, for example, that creditworthiness can be predicted by something as simple as whether you use a Mac or a PC. But other variables, such as a home address, can serve as a proxy for race. Similarly, where a person shops might conceivably overlap with information about their gender.
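To make the proxy problem concrete, here is a minimal sketch with synthetic data. The feature names, weights and numbers are hypothetical and not taken from any real credit model; the point is simply that a scoring rule that never sees gender can still produce gender-skewed outcomes when one of its inputs correlates with gender.

```python
# Minimal illustrative sketch (synthetic data, hypothetical feature names):
# a "gender-blind" credit rule can still skew by gender via a correlated input.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

gender = rng.integers(0, 2, n)              # 0 = men, 1 = women (never given to the model)
# A proxy input: strongly correlated with gender, only loosely related to real risk.
shopping_category = gender + rng.normal(0, 0.3, n)
income = rng.normal(60_000, 15_000, n)      # a legitimate input, independent of gender here

# Hypothetical scoring rule learned from biased historical data;
# note that gender never appears in it, only the proxy does.
credit_limit = 0.2 * income - 5_000 * shopping_category

print("Average limit, men:  ", credit_limit[gender == 0].mean().round())
print("Average limit, women:", credit_limit[gender == 1].mean().round())
# Women end up with systematically lower limits, even though gender was never an input.
```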
The book “Weapons of Math Destruction,” published in 2016 by Cathy O’Neil, a former Wall Street quant, describes many situations where proxies have helped create horribly biased and unfair automated systems, not just in finance but also in education, criminal justice, and health care.
The idea that removing an input eliminates bias is a very common and dangerous misconception. This means algorithms need to be carefully audited to make sure bias hasn’t somehow crept in. Goldman said it did just that, but the very fact that customers’ gender is not collected would make such an audit less effective. Companies should actively measure protected attributes like gender and race to be sure their algorithms are not biased against them.
Without knowing a person’s gender, though, such tests are far more difficult. It may be possible for an auditor to infer gender from known variables and then test for bias on that. But this would not be 100 percent accurate. Companies should examine the data fed to an algorithm as well as its output to check whether it treats, for example, women differently from men on average, or whether there are different error rates for men and women.
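As a rough illustration of what such an examination might look like, the sketch below compares approval rates and error rates across two groups. The data, function name and decision rule are invented for illustration; a real audit would involve far more variables, statistical testing and domain review.

```python
# A minimal sketch of a group-level audit, assuming the auditor has (or has
# inferred) the protected attribute. All names and data here are hypothetical.
import numpy as np

def audit_by_group(y_true, y_pred, group):
    """Compare approval rates and error rates across two groups (0 and 1)."""
    report = {}
    for g in (0, 1):
        mask = group == g
        approval_rate = y_pred[mask].mean()  # do the groups get approved at similar rates?
        fnr = ((y_pred[mask] == 0) & (y_true[mask] == 1)).sum() / max((y_true[mask] == 1).sum(), 1)
        fpr = ((y_pred[mask] == 1) & (y_true[mask] == 0)).sum() / max((y_true[mask] == 0).sum(), 1)
        report[g] = {"approval_rate": approval_rate,
                     "false_negative_rate": fnr,
                     "false_positive_rate": fpr}
    return report

# Synthetic example: a decision rule that quietly approves one group less often.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, 1000)             # 0 = men, 1 = women (collected or inferred)
y_true = rng.integers(0, 2, 1000)            # 1 = actually creditworthy
y_pred = (rng.random(1000) > 0.4 + 0.1 * group).astype(int)

for g, metrics in audit_by_group(y_true, y_pred, group).items():
    print(g, {k: round(v, 3) for k, v in metrics.items()})
# Large gaps in approval or error rates between groups are red flags worth investigating.
```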
If these examinations and tests are not done with careful attention, we’ll see more of the likes of Amazon pulling a hiring algorithm due to gender bias, Google criticized for racist autocomplete suggestions, and both IBM and Microsoft embarrassed by facial recognition algorithms that turned out to be better at recognizing men than women, and white people than people of other races.
Sensible regulations and policies
AI should be regulated, and policies to mitigate misuse and bias should be put in place. But the question is how. We must understand that AI is a tool – a means, not an end. In other words, do you regulate the tool? Do you regulate the hammer? Or do you regulate the use of the hammer?
In the case of ChatGPT, where there are plausible concerns about chatbots – such as the spread of misinformation or toxic content – legislators should deal with these risks in sectoral legislation, such as the Digital Services Act, which requires platforms and search engines to tackle misinformation and harmful content, and not as proposed in the European Union's AI Act, in a way that entirely ignores the risk profiles of different use cases.
We ought not treat AI as an automated “black box,” especially if it produces biases that could widen social and economic inequalities. We should require individuals and organizations to follow policies and rules on how to use and implement AI and Generative AI, and on how to test and audit their algorithms to make sure they are ethical, free of bias, and generate meaningful results that benefit users, customers, and our global society.
Remember that AI is only as smart as the person(s) who coded it and the data it was trained on. Policies on auditing the code and the data it is fed should be common practice at any company that uses AI. In regulated areas such as employment, financial services and healthcare, these policies and algorithms should also be subject to regulatory compliance and auditing.
We shouldn’t be too concerned if someone uses ChatGPT to assist in writing an email, but we should be very concerned if AI is used for scams, where the technology is making it easier and cheaper for bad actors to mimic voices, convincing people, often the elderly, that their loved ones are in distress.
We should be mindful and consider the broad spectrum of AI use cases – supporting the ones that benefit our future and putting in place rules and policies that mitigate biases and unethical, harmful, nefarious activities. As John Oliver said: “AI is a mirror and will reflect exactly who we are – from the best of us to the worst of us.” Let’s make sure we are putting our best face forward when it comes to artificial intelligence!
The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.