Artificial Intelligence

AI Faces EU Regulators: Positive or Problematic for European Investment?


By Gemma Allen, VP of B2C technology at IDA Ireland

The noise surrounding Artificial Intelligence is now so pervasive across society that it is difficult to take in. In what seems like a lightning strike, it has become front and center of every conversation about technology and the future, fueling arguably grandiose projections of what may now be possible. It has turned into something of a media, social and political frenzy. The hype is not unwarranted: AI is expected to see an annual growth rate of 37.3% from 2023 to 2030, and it is being heralded as the next great technological revolution, compared to the advent of electricity and the internet.

Unsurprisingly, it is also a cause of broad, sweeping political and sociological concern, less about its advancement than about the speed with which it has outpaced regulation. Technology titans like Microsoft, Google and AWS have announced what they deem a race to the top, while governments and society are eager and, perhaps, anxious spectators. One thing, however, has been abundantly clear: there have been no explicit rules or criteria. Until now.

The EU Announces a Major Change

Earlier this year the European Parliament announced a major change in the regulation and monitoring of AI use via the AI Act. The planned law puts new constraints on some of the technology's riskiest uses. It blocks the use of facial recognition software and forces technology companies to disclose more about the data used to build their programs and large language models. This is the first law of its kind and a significantly different approach from that of the USA, where state governments are still struggling to define and implement legal regulations.

The EU is taking a frontier position, and the question remains whether there will be a series of fast followers. For European tech investors and for global MNCs continuing to scale and grow, this creates an interesting dynamic. AI governance is a necessity, and being at the forefront of its inception is a positive step toward directing research at global challenges, strengthening common oversight, and playing a lead role in the sharing of best practices and data.

The cornerstone of the EU’s policy framework for AI is the aspiration of what are termed the Digital Decade targets. This is a public declaration by EU member states who together aim ‘to empower businesses and people in a human-centered, sustainable and more prosperous digital future.’ They are explicit that, although this is a continental initiative, it will have global reach, facilitated through partnerships with private industry. The aim is to create a safe, digitally inclusive future for all while also enhancing global supply chains and delivering worldwide solutions to societal and economic challenges.

For example, countries like Ireland have set out clear strategies for a holistic ecosystem involvement, forming advisory councils with individuals from academia, business, law, security, social sciences, economics and civil society to provide independent expert advice to the government on AI policy, with a specific focus on building public trust and promoting the development of trustworthy, person-centered AI.

Lessons from the Past

The AI Act’s timing, alongside big tech’s unanimous commitment to risk reduction and to AI’s positive impact, forms an intriguing intersection. In a bid to keep up and stay ahead, R&D spending across the tech industry is reaching unprecedented levels. The tech industry has been down this road before with the mass integration of social media, the lack of regulation, and the subsequent broad-scale societal and political impacts. Worryingly, AI has potentially far more disastrous consequences. Partnering with bodies like the EU and putting subject matter experts and technologists at the forefront of the conversation can produce a more informed and valuable outcome and mitigate future risk. It is not a question of regulation versus agility but of protecting human values and pioneering a path to integral progress.

The Future of AI Regulation

As the EU leads the way on AI regulation, there remains an open question as to whether the USA will be quick to follow. What is clear is that the topic is of major importance on Capitol Hill, as evidenced by Senator Chuck Schumer’s invitation to tech leaders to meet on September 13. While there is a notable difference in legislative pace, the implications for transatlantic business are looming. Companies scaling and developing AI products and solutions in Europe have an opportunity to be part of a testbed built on viable future outcomes and to be early adopters in the effort to reconcile technology and integrity.

The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.

Gemma Allen

Gemma Allen is the vice president of B2C technology at IDA Ireland, the agency responsible for the attraction and retention of inward foreign direct investment into Ireland. In her current role with IDA Ireland, she is responsible for building relations with business leaders, political stakeholders and key industry players furthering foreign direct investment into Ireland. Allen is based in New York City and has over 15 years’ experience working with the world’s largest technology companies in the U.S. and Europe.