While Washington is stuck in gridlock over how to handle artificial intelligence, the European Union is setting the pace for the rest of the world.
The EU has ratified a provisional agreement on AI rules that could set the tone for how the U.S. (and other countries) act next. A vote by the full legislative assembly will come in April.
The AI Act, as it is called, sets rules for a number of industries, from banking to transportation, as well as guidelines for how law enforcement can use AI. It also covers how large language models can be built, with an eye toward protecting both individual privacy and corporate secrets.
While government officials seem pleased with the AI Act, big tech firms in Europe are less enthusiastic, saying the wording, as it stands, is vague and could have broader impacts than intended.
Sam Altman, CEO of OpenAI, has called for the creation of an international body, much like the International Atomic Energy Agency, to oversee AI. (Altman, who has watched the technology's rapid advancement firsthand, has been an impassioned proponent of government regulation of AI systems.)
At the World Governments Summit earlier this month, he suggested the creation of a “regulatory sandbox” to test the technologies and set limits on their use.
“It’s very hard to get all the regulatory ideas right in a vacuum,” he said. “And if there was a contained way that I could give people the future and let them experiment with it and then see what makes sense, what went wrong, what went right, that seems like an interesting experiment.”
Altman suggested that the United Arab Emirates, which hosted the conference, would be a good place for that. (OpenAI, it should be noted, is currently seeking investors in the region.)
The AI Act, however, is more than a proposal, or it will be after what is expected to be a largely rubber-stamp vote. The bill takes a “risk-based approach,” regulating not the AI itself but the products and services that use it: the bigger the risk, the more restrictive the regulation.
So, for instance, if a company is using AI to weed out spam in email systems, the regulation would be minimal, but products that use AI in the medical or financial fields would face more severe restrictions. And some uses of AI would be largely banned, such as real-time public facial recognition, except in extreme cases such as kidnappings and terrorist incidents.
The AI Act will go into effect in two years, assuming it is passed.
In the U.S., President Biden, in October, issued an executive order on AI that required AI companies to share safety test results and other information with the government, but it left it to the technology's creators to set their own safety standards. Other legislative efforts are in the works, but none are close to passing at present.
The EU has increasingly led the world on changes in the tech industry. Apple abandoned its Lightning cable after EU rules required its products to adopt the industry-standard charging plug. And Meta has changed its Messenger service on Facebook as scrutiny from the EU has increased. While the AI Act might not become a global standard, it could well serve as a blueprint that other legislatures build on.
The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.