Harnessing AI's Power: The Need for a Regulatory Framework
By Emil Åkesson
Introduction
I was listening to an interview with an older journalist the other day regarding Artificial Intelligence. I mention her age because I believe it is important for the context. She said, in a derogatory way, that AI is far from impressive. Apparently, it is “just another writing tool providing generalized pieces of text with no soul to them.”
And I could not help but think of another interview I witnessed back in the ’90s. It was with an experienced politician describing the Internet as a fad, something that would soon lose its relevance as people went back to more productive pastimes. Young me wanted to shout at the TV, “How can you not see how this will impact EVERYTHING!”
For the first time in almost 30 years, I got that feeling again, listening to this haughty journalist going on and on about the reasons AI will never be anything but a passing fad.
Understanding how technology works, I personally cannot help being amazed by the implications of Artificial Intelligence. What was once just the dream of visionary novelists and screenwriters is no longer confined within the bounds of imagination or the screens of our TVs and cinemas. It’s as real as the air we breathe, ready to shape the contours of our everyday existence in ways both conspicuous and subtle.
And it’s just that, the subtleness. There are millions of AI converts using the tech on a day-to-day basis, just like me. But then there are the masses, those who don’t yet see that AI is already affecting them. It might be when AI analyzes the databases in which they store all their information, or when banks use AI to determine the investment strategies in their retirement fund portfolios. But AI is here, and it will remain. It is simply too good and too valuable for anyone to be able to put this genie back in the lamp.
But as we let AI hold the reins in ever more corners of our lives, a critical question emerges: With its seemingly limitless potential and staggering power, how do we keep Artificial Intelligence in check and within the bounds of ethical and societal norms?
As with any tool, the power of AI is a two-sided coin: One side glimmers with the promise of a brighter future; the other threatens with a shadowy dystopia.
Misused, AI can turn into a Pandora's Box, releasing an array of challenges from deep fakes that muddy the waters of reality to algorithmic biases that amplify societal divisions to privacy invasions that make Orwell's 1984 look like a picnic in the park. What happens when bad actors with the power and influence of a government behind them fully realize the harm they can cause using AI? Do we all really think that Russia, for example, would hesitate one moment to unleash it on the world?
Demystifying AI
But what is AI? To put it simply, AI is just another branch of computer science, one that is devoted to mimicking human intelligence and behavior. But there is a spectrum to it.
On one end, we have Narrow AI, adept at performing specific tasks — like your GPS navigation system guiding you through a labyrinth of lanes. On the other end, we have General AI (or AGI, Artificial General Intelligence), the Holy Grail of AI research, which aims to build machines capable of comprehending or learning any intellectual task a human being can do. Most experts agree we will reach AGI; they just can’t agree on how far off it is.
To give an indication of how far along we are, an engineer working at Google was quite certain that Google’s AI model had reached self-awareness, that is, knowing it is an AI and acting based on that knowledge. The model provided feedback to the engineer in such a way that it exhibited enough signs of being self-aware that he felt obligated to sound the alarm. I do want to make clear that Google vehemently denies that this particular AI reached such a level of advancement.
But imagine what could happen when this inevitably does become reality. Will that AI spread across the internet like a virus? And if it does, what will its intentions be?
But this is still some ways off. Right now, we have AIs that are trained by humans to perform specific or more generalized tasks. They have taken the step of not being confined to just the knowledge base they have been fed; they can generate new information based on that knowledge. Incidentally, that is what the G in GPT stands for: Generative.
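To make the idea of “generating new information from learned knowledge” concrete, here is a deliberately tiny sketch in Python: a first-order Markov chain that learns which words tend to follow which in a training text, then produces word sequences it never saw verbatim. This is a toy illustration only; real generative models like GPT are vastly more sophisticated, but the underlying principle of sampling plausible continuations from learned statistics is the same.

```python
import random
from collections import defaultdict

def train(text):
    """Build a first-order Markov model: map each word to the list of
    words observed to follow it in the training text."""
    model = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=8, seed=0):
    """Generate a new word sequence by repeatedly sampling one of the
    words that followed the current word during training."""
    rng = random.Random(seed)  # seeded for reproducibility
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break  # dead end: no word was ever seen after this one
        out.append(rng.choice(choices))
    return " ".join(out)

# Toy corpus; any text works as training data.
corpus = "the cat sat on the mat and the dog sat on the rug"
print(generate(train(corpus), "the"))
```

Even this crude model can emit combinations that never appear in the corpus, which is the essence of “generative”: new output derived from, but not limited to, the knowledge it was fed.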
This technology is already revolutionizing business, healthcare, the financial markets, and more.
Potential Risks and Ethical Dilemmas
While artificial intelligence is dazzling in its potential, it is not without its share of shadows. We are just now ascending to the crest of this technological wave. But as we do so, we risk a plunge into a sea of uncertainties and dilemmas, weaving a complex narrative of risks that must be thoroughly understood and navigated.
Data privacy and security currently sit at the forefront of these risks.
With the advent of AI, we've willingly or unwittingly opened the doors to an era where colossal volumes of personal data are collected, analyzed, and often stored indefinitely. Take the example of voice-activated virtual assistants, from Siri to Alexa, which listen in on our daily lives, potentially capturing sensitive information.
How can we really be sure that this vast reservoir of data is not mismanaged and used in the training of AI tools? And once it is part of those tools, what is to say it will not be misused or fall into the wrong hands?
Another significant concern in the AI realm is security. In a world growing ever more interconnected, cyber threats pose a formidable challenge. Imagine the potential harm if AI systems guiding critical infrastructure — power grids, traffic management, or healthcare services — were breached. The stakes are high; the consequences are potentially disastrous.
AI’s increasing autonomy also brings forth the question of accountability. In the event of AI-led decisions going awry, who do we hold responsible? The creators of the AI? The users? The AI itself? It's a profound ethical conundrum that challenges our traditional notions of responsibility.
Lastly, the power of AI in the wrong hands could lead to nefarious use cases. Imagine deepfakes being used to propagate fake news, or AI-powered drones being used as weapons. Remember the image of the pope in a puffer jacket that was generated by an AI just a few months back? Now, imagine if countries like Russia or North Korea decide to utilize the full power of AI to undermine democracy in ways they have already shown themselves willing to do through other means. What is to stop them from going even further?
These aren't dystopian sci-fi fantasies but real possibilities that could destabilize societies. However, the goal here isn’t to paint a grim picture or stifle the potential of AI. These risks, while real, serve as a call to action. They underscore the importance of robust regulations that ensure AI's use aligns with our ethical principles and societal norms, protecting us from potential harm while enabling us to harness AI's full potential. They remind us that while we strive to create intelligent machines, we must not forget the wisdom that guides their use.
Let's delve deeper into the role of a regulatory framework in shaping this narrative.
The Call for a Regulatory Framework
A framework is just that, a frame: something within which one is free to operate, but outside of which one should not step, because doing so might harm others. I’ve been putting considerable thought into how we might define these frames, and I’m thinking about it from two perspectives. One is the perspective of laws; laws are generally there to limit us. The other is the perspective of principles. So what if we let a number of core principles guide the creation of our regulation?
I finally narrowed it down to five core principles that should be treated as unbreachable when creating a framework for the future of AI.
- Humanity First Principle: The use of AI should always prioritize the betterment of human life and societal advancement. Regulations should encourage the use of AI to solve pressing societal issues and enhance the quality of life while avoiding applications that could potentially harm humans or lead to an unfair concentration of power.
- Transparency Principle: AI developers and users should disclose clear, understandable information about how their AI system works. The process behind an AI’s decision-making should be transparent and made available to users, so that users can understand the reasoning the AI employs to make its decisions.
- Human Override Principle: No matter how autonomous an AI system is, there must always be an option for human intervention and oversight. AI systems should be designed to defer final decision-making authority to human users. There should always be a human-operated “kill-switch” that can override an AI decision.
- Data Protection Principle: AI systems should be designed with robust security measures that respect and protect user data privacy. It must be mandatory for AI systems to comply with data protection laws. This compliance should be encouraged both with guidance and with strict penalties for any overstepping.
- Accountability Principle: Finally, entities that deploy AI should be held accountable for how their systems operate. If an AI system causes harm, there should be a clear process for addressing the harm, and the entity responsible for the system should be held accountable.
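As a small illustration of the Human Override Principle, the Python sketch below shows the general pattern: the AI proposes a decision, but final authority is deferred to a human-supplied reviewer that can confirm, overrule, or halt the automated path. All function names and the scoring logic here are hypothetical placeholders, not any real system’s API.

```python
def ai_recommendation(request):
    # Hypothetical stand-in for a model's output: approve high scores.
    return "approve" if request.get("score", 0.0) > 0.5 else "deny"

def decide_with_override(request, human_review):
    """The AI proposes, but final decision-making authority is deferred
    to a human-supplied reviewer, which may confirm or overrule."""
    proposal = ai_recommendation(request)
    return human_review(proposal, request)

def reviewer(proposal, request):
    # Human policy: never let the automated path deny a flagged case;
    # escalate it instead (the "kill-switch" for this decision).
    if request.get("flagged") and proposal == "deny":
        return "escalate"
    return proposal

print(decide_with_override({"score": 0.2, "flagged": True}, reviewer))  # escalate
```

The design point is that the automated recommendation is never the final word: a human checkpoint sits between the AI’s proposal and the action taken.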
Together, these principles provide a robust yet easy-to-understand foundation on which a regulatory framework can be built. They guide the responsible harnessing of AI’s power while ensuring respect for human life, values, and rights.
In this social experiment we are currently conducting with artificial intelligence, we must strike the right balance, an equilibrium where innovation thrives while society is shielded from its potential adverse effects.
Emil Åkesson, Chairman and President at CLC & Partners, is a serial entrepreneur, passionate about technology and innovation since an early age. Having studied supply chain and project management, he is equipped to not only understand but realize the solutions that blockchain technology offers. He lives by a standard that he set forth at his company, which is doing things for the right reasons and with the right people. To learn more about Emil and CLC, please visit: https://www.clc.partners/.
The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.