Artificial Intelligence

The Path to Ethical AI Starts With Collaboration


By Leif-Nissen Lundbaek, Co-Founder & CEO of XAIN

To the layman, the phrase "ethical AI" can sound like a misnomer. AI still conjures visions of a dystopian future in which artificial intelligence runs rampant and dominates humankind. Thanks to films such as 2001: A Space Odyssey (HAL 9000) and The Terminator, public perception of AI has long been shaped by these fictional depictions. So it should come as no surprise that when we talk about ethical AI, people assume its opposite involves robots, lasers, and a war to end humanity. But it’s not quite that dramatic.

In truth, the conversation around ethical AI typically boils down to societal issues such as data collection, cyberattacks on critical infrastructure, and inherent bias in code. When we talk about ethical AI, we are talking about preventing these harms. We are talking about creating AI systems and rules that help manage these issues rather than create them.

To develop ethical AI, we must first understand what it is we’re trying to protect ourselves from. The story starts with data. Data is both the lifeblood of AI and the fuel that drives it to its next functional iteration. To understand how AI operates in practice, we need to be cognizant of the fact that it requires massive amounts of data to be useful, and collecting enough of it for an AI system to collate and make sense of takes time.

Think about the data you generate on a daily basis. AI can be trained to analyze everything from the trips you take in an Uber to the last thing you bought on Facebook Marketplace. Nearly everything we do online is tracked, monitored, and analyzed by teams of data scientists who collect it and feed it to ever-more-sophisticated algorithms. It’s here that we start to ponder the ethics of doing so: what data belongs to us, and what should happen to that data once our actions create it? That debate is ongoing and shows no sign of ending.

Yet the conversation often turns to the AI itself rather than to the collection methods. This is where we create the ethical rules that govern AI systems.

Creating ethical AI means considering the human impact of what we’re building and how it shapes our future.

Current data collection practices simply aren’t sustainable. We’re reaching a tipping point, one where both consumers and regulators seem legitimately worried about who is watching us, and why. It has become one of the few bipartisan issues, as big tech companies face a reckoning over data collection. While some may feel it’s too late, tech companies are still muddling along, trying to decide what to do with all this collected data and how to secure it.

To that end, when we talk about building ethical AI, we must first reach a compromise about not only what is collected, but who is collecting it and why it’s being collected at all. Is all data collection necessary? Is our privacy still protected by measures such as anonymization? These are the questions that linger as we move into a new era of AI-driven data processing.
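To make the anonymization question concrete, here is a minimal, hypothetical sketch (the records, column names, and threshold are invented for illustration, not drawn from any real system) that checks whether a "de-identified" dataset still satisfies k-anonymity on quasi-identifiers such as ZIP code, age, and gender. Rows that end up in a group of one remain re-identifiable even though names were stripped, which is why anonymization alone may not settle the privacy question.

```python
from collections import Counter

# Hypothetical "anonymized" records: names removed, but quasi-identifiers remain.
records = [
    {"zip": "10115", "age": 34, "gender": "F", "diagnosis": "flu"},
    {"zip": "10115", "age": 34, "gender": "F", "diagnosis": "asthma"},
    {"zip": "10117", "age": 51, "gender": "M", "diagnosis": "diabetes"},
]

QUASI_IDENTIFIERS = ("zip", "age", "gender")

def k_anonymity(rows, quasi_ids=QUASI_IDENTIFIERS):
    """Return the size of the smallest group of rows sharing the same
    quasi-identifier values. A result of 1 means at least one person is
    uniquely identifiable despite the removal of direct identifiers."""
    groups = Counter(tuple(row[q] for q in quasi_ids) for row in rows)
    return min(groups.values())

if __name__ == "__main__":
    k = k_anonymity(records)
    print(f"k-anonymity of this dataset: {k}")
    # k == 1 here: the 51-year-old male in ZIP 10117 is unique, so his
    # diagnosis could be re-linked to him using outside data sources.
```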

Ethical AI starts with a set of standards that promote trust and strive for industry-wide consensus on data collection practices. While some countries have started to outline their own regulatory principles (as the U.S. did at CES this year), in an ideal world these standards would be a non-partisan, multinational agreement built with consumers in mind. These are the kinds of solutions that come from non-profits and public bodies such as the Knight Foundation, the AI Ethics Initiative, and the EU Commission's guidelines for trustworthy AI, all of which have advocated for exactly this. Borders shouldn’t come into play when it comes to regulating the behavior of AI systems.

Companies like Bosch, for example, have even outlined their own ethical guidelines for how artificial intelligence should be deployed. But creating this kind of plan company by company is not tenable: the moral standards of one company might not match those of the next. We must begin to explore broader approaches to AI regulation. This is where game theory comes into play, creating a sort of AI arms race as companies and nations follow the not-always-ethical path of those that came before them.

Consensus is important, because AI knows no borders and is not beholden to the whims of one regulatory body over another. AI is only as ethical as the humans who create it. In the digital age, it’s all too easy to move servers to a favorable jurisdiction to avoid unwelcome legislation, as Facebook did in 2018 when it migrated 1.5 billion users out of the reach of GDPR. This kind of maneuvering could be what keeps AI systems siloed and out of reach of a truly ethical solution.

Facebook’s problem of serving relevant ads, though, is only the tip of the iceberg. There are far more sinister uses of AI than those employed on social media to squeeze additional revenue out of users.

AI is also responsible for tracking more than a billion Chinese citizens and sorting them into social classes that limit movement, housing prospects, and job opportunities based on an algorithmically generated score. There is AI that tells police where to deploy resources based on past crime statistics, algorithms so broken that systemic racism is effectively built into the system, seemingly by design. Or consider something simpler, like Microsoft's chatbot Tay, which humans trained within hours to become a Hitler-quoting Nazi. In each case, the ethical failures were created by humans and merely implemented and learned by AI systems.

That’s not all. There are also questions about how to retain control over self-taught AI, and whether it should be allowed to kill in military operations. There’s the issue of how to strip code of bias so that we don’t pass our worst behaviors on to machines. And then there’s a problem all of us can see coming: how do we slow job loss and wealth inequality as robots and AI snatch an increasing number of jobs out from under us?

Building ethical AI means considering all of these scenarios, as well as anticipating the ones that could dominate the conversation in years to come. It means weighing consequences before implementing an AI system just because we can, a problem Silicon Valley in particular has faced for decades.

So how do we get there? How do we reach a place of ethical consideration that is globally and fully agreed upon? That’s a difficult question to answer, but it starts with diversity and transparency. Building AI means building it for everyone, yet the vast majority of programmers are Western white males with similar socio-economic backgrounds. This is why facial recognition technology, for example, is quite accurate at detecting the faces of white men yet remarkably bad at detecting the faces of people of color, and even white women. Bias is passed on to AI even when it’s unintentional: a person’s worldview shapes how they interpret data, which in turn shapes what machines are fed and what they learn.
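As a concrete illustration of how such bias can be surfaced, here is a minimal, hypothetical sketch (the group labels, ground truth, and predictions are invented for illustration, not taken from any real system or study) that audits a classifier’s accuracy per demographic group. A large gap between the best- and worst-served groups is the kind of disparity the facial recognition findings above describe.

```python
from collections import defaultdict

# Hypothetical evaluation data: (demographic_group, true_label, predicted_label).
# In a real audit these would come from a held-out test set with group annotations.
results = [
    ("white_male", 1, 1), ("white_male", 0, 0), ("white_male", 1, 1),
    ("white_female", 1, 0), ("white_female", 0, 0), ("white_female", 1, 1),
    ("woman_of_color", 1, 0), ("woman_of_color", 1, 0), ("woman_of_color", 0, 0),
]

def accuracy_by_group(rows):
    """Compute accuracy separately for each demographic group."""
    correct, total = defaultdict(int), defaultdict(int)
    for group, truth, prediction in rows:
        total[group] += 1
        correct[group] += int(truth == prediction)
    return {group: correct[group] / total[group] for group in total}

if __name__ == "__main__":
    per_group = accuracy_by_group(results)
    for group, acc in sorted(per_group.items(), key=lambda kv: kv[1]):
        print(f"{group:>15}: {acc:.0%} accurate")
    gap = max(per_group.values()) - min(per_group.values())
    print(f"Accuracy gap between best and worst group: {gap:.0%}")
```

A check like this only reveals the disparity; closing it still requires better data and more diverse teams deciding what gets measured in the first place.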

To combat this, businesses must deploy a diverse workforce that is representative of society at large. Computers can make complex decisions in fractions of a second, but can we trust that they will always make the right one? And for that matter, what is the right one? Subjectivity comes into play, which demands alternative thinkers beyond engineers and developers: philosophers, psychologists, and ethicists are needed to design truly inclusive AI systems. In the end, AI systems depend on the data we feed them and on how we combat the biases contained within that data.

The best solution, in a perfect world, would be a landmark agreement between major companies about how to build AI moving forward, though it may be naive to assume companies could ever collaborate in a way that pushes aside self-interest. Which leaves legislation. If private enterprise doesn’t address these issues, legislators will… eventually. Though it may take a while for them to get their heads around the technical aspects of it all. Considering the trouble U.S. Senators had understanding how Facebook is monetized, and the pace of innovation in the AI space, it’s hard to believe these are the legislators who will lead the way.

It’s really going to come down to collaboration. The future of AI, as TechCrunch once said, relies on a code of ethics. But more than that, it depends on the ability to reach a compromise that satisfies both businesses and the consumers who rely on them. It won’t be an easy task.

Leif-Nissen Lundbæk (Ph.D.) is Co-Founder and CEO of the technology company XAIN AG. His work focuses mainly on algorithms and applications for privacy-preserving artificial intelligence. In 2017, he founded XAIN AG together with Professor Michael Huth and Felix Hahmann. The Berlin-based company aims to solve the challenge of combining AI with privacy, with an emphasis on Federated Learning.

The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.