
Emerging Technologies: What Will 2023 Be Remembered For?


No doubt 2023 will mostly be remembered as the year of Generative Artificial Intelligence, or GenAI. OpenAI's ChatGPT, launched in late 2022, became the fastest-growing consumer application in history, reaching a million users in less than a week. The hyped media coverage that followed prompted tech and finance titans to pivot to AI and GenAI.

ChatGPT got so much attention that Google parent Alphabet, which had seen itself as an AI-first company for almost a decade, worried that it was losing ground to rival Microsoft (reported to hold a 49% stake in OpenAI's for-profit arm), and hastily released a promotional demo of its own GenAI bot, Bard, in early February. The move backfired: during the demo, Bard gave a wrong answer, and Alphabet stock lost $100 billion in value that day.

Alphabet has continued to play catch-up with OpenAI, often making announcements or launching features at around the same time as OpenAI, and running into controversy as it does so. Beyond Alphabet, other companies are trying to ride the AI wave:

  • Microsoft has integrated GPT capabilities across its products, including Azure and Microsoft 365;
  • Amazon launched its own GenAI assistant for business, Amazon Q;
  • Elon Musk, a former OpenAI board member, unveiled a GenAI bot named Grok, which could pose a serious challenge to ChatGPT;
  • Mark Zuckerberg announced in March that “we're creating a new top-level product group at Meta focused on generative AI to turbocharge our work in this area”; and
  • Tim Cook said in February, “[AI] is a major focus of ours,” adding that “Apple sees an enormous potential in this space to affect virtually everything we do.”

Big tech companies have been investing in AI development and utilizing it in their services and products for many years, but in 2023, the focus became more intense and the competition fiercer.

AI and the Financial Sector

This intensified focus on AI and its utilization is not limited to big tech companies; financial institutions are also zeroing in on advanced AI products and services:

  • JPMorgan CEO Jamie Dimon acknowledged AI's benefits during the bank's last annual meeting, calling the technology “extraordinary and groundbreaking,” further indicating that “AI has helped us to significantly decrease risk in our retail business and improve trading optimization and portfolio construction.”
  • JPMorgan applied to trademark a product called IndexGPT and is developing a ChatGPT-like software service that leans on AI to select investments for customers.
  • Bloomberg earlier this year released a research paper detailing the development of BloombergGPT, a new large-scale language model trained on financial data.
  • In early December, Mastercard unveiled a GenAI retail assistant tool that offers shoppers a personalized experience, translating customers' colloquial language into tailored product recommendations.
  • BlackRock announced in early December that it plans to roll out a GenAI tool that will let clients use a large language model (LLM) to extract information from Aladdin, its AI-powered investment platform.
  • CaixaBank has assembled a multi-disciplinary task force of more than 100 people to exclusively work on and deploy GenAI in specific areas of internal and customer-related services, working in partnership with Microsoft and Accenture.

Financial institutions have acknowledged the power of AI and have been implementing it in various services and products for decades, and the models and services have become more sophisticated as the technology advanced over the years. Now the financial industry is taking it to the next level with the new capabilities GenAI can offer.

Challenges and Risks

As with any technology, the benefits come with challenges and risks. The most discussed concern has been bias in AI systems. AI models are data-driven, and the data they learn from is data we humans have created. Every subtlety of the bias we hold, consciously or unconsciously, is reflected (and can be amplified) in the data we feed into the algorithm. That bias results in discrimination: an algorithm that systematically favors or disfavors a sub-group of people is a discriminatory algorithm.
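To make the bias concern concrete, here is a minimal, purely illustrative sketch (not from any real system) of one common fairness metric, the "demographic parity difference": the gap in positive-outcome rates between two groups. The data and group labels are hypothetical toy values.

```python
def demographic_parity_difference(decisions, groups):
    """Gap between the highest and lowest group approval rates.

    decisions: list of 0/1 outcomes (e.g., 1 = loan approved)
    groups:    parallel list of group labels
    Returns 0.0 under perfect parity; larger values mean a bigger gap.
    """
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(decisions[i] for i in idx) / len(idx)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Toy loan-approval data: group A is approved 75% of the time, group B 25%.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(f"demographic parity difference: {demographic_parity_difference(decisions, groups):.2f}")
# prints: demographic parity difference: 0.50
```

A gap like this does not prove intent; it flags a disparity in outcomes that warrants investigating the training data and model for the kind of inherited bias described above.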

Another highlighted concern has been the misuse of AI for disseminating misinformation and scams, especially via deepfakes. Unscrupulous behavior has been around since the dawn of civilization; this technology has simply made it easier and cheaper for criminals. Technology, after all, is a tool, and like any tool, it can be used for good or evil.

That may not even be our biggest concern: Elon Musk has repeatedly warned that unrestrained development of AI poses a potential existential threat to humanity and has called on governments to develop clear safety guardrails for AI technology. Legislators around the world are taking note, and while tech companies competed for supremacy, government officials were working on how to regulate AI.

AI Regulation

U.S.

In late October, the Biden administration issued an Executive Order on Safe, Secure and Trustworthy Development and Use of AI. It demonstrates that the administration is taking seriously its responsibility not only to foster a vibrant AI ecosystem but also to harness and govern AI. The order commits to trustworthy AI for the American people and the broader global community by directing agencies to ensure safety and security, promote rights-respecting development and international collaboration, and protect against discrimination.

The order imposes a set of obligations that builds on a recent wave of White House initiatives on foundation models. Foundation models are AI systems trained on massive amounts of data to attain broad capabilities that can be adapted to a wide range of different, more specific purposes. In other words, the original model provides a base (hence “foundation”) on which other things can be built. It is therefore important to ensure that foundation models are safe and protect people's and creators' rights.
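The "base plus adaptation" relationship can be sketched schematically. This is a toy illustration of the architecture, not any real model: one expensively trained base is reused by cheap, task-specific layers built on top of it. All class names and the trivial "features" are hypothetical.

```python
class FoundationModel:
    """Stands in for a large model trained once, at great cost, on massive data."""

    def embed(self, text: str) -> list:
        # Placeholder for a learned representation: here, just text length
        # and a word count. Real models learn far richer features.
        return [len(text), text.count(" ") + 1]


class TaskHead:
    """A small task-specific adaptation built on the shared base."""

    def __init__(self, base: FoundationModel):
        self.base = base  # reuses the foundation rather than retraining it

    def predict(self, text: str) -> str:
        features = self.base.embed(text)
        return "long" if features[0] > 20 else "short"


base = FoundationModel()          # trained once
classifier = TaskHead(base)       # adapted cheaply for one narrow task
print(classifier.predict("hello world"))
# prints: short
```

The regulatory point follows from this structure: a flaw in the base propagates into every downstream adaptation, which is why the obligations target the foundation layer.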

The White House also launched a red-teaming initiative in August and secured voluntary commitments from tech companies in July and September. The White House's efforts on foundation models accompany the simultaneous release of the G7 principles and code of conduct on GenAI and foundation models.

European Union (EU)

In early December, Europe reached a provisional deal on landmark EU rules governing the use of AI, including governments' use of AI in biometric surveillance and how to regulate GenAI. The final votes on the AI Act are expected to take place in early 2024; the law will then have a gradual transition period before it becomes fully applicable.

First proposed in April 2021, the Act took a risk-based approach, classifying AI systems by the potential risk they pose to citizens' safety and fundamental rights: minimal, limited, high and unacceptable. But when OpenAI launched ChatGPT and unleashed a global furor over chatbots, the European Commission had to reconsider its approach to the foundation models that power these chatbots and other GenAI applications.

The Commission's original proposal did not include any provisions for foundation models, forcing lawmakers to add an entirely new article with an extensive list of obligations. A special requirement also applies to high-impact foundation models posing systemic risk: their providers will have to conduct model evaluations to assess and mitigate that risk.

Blockchain

Legislators can ask companies to adhere to responsible AI and ensure safety, security, and trust in AI systems and applications. But how can they enforce this? This is where blockchain technology can assist in implementing auditable, responsible AI that is safe, secure, and trustworthy.
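One way blockchain can support auditability is through the tamper-evident data structure at its core: a hash chain, where each record's hash commits to everything before it. The sketch below is a minimal, hypothetical illustration (not any real product or standard) of logging AI audit events this way; all event names and fields are invented.

```python
import hashlib
import json


def _link_hash(entry: dict, prev_hash: str) -> str:
    """Hash an entry together with the previous link's hash."""
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()


class AuditChain:
    """A toy hash-chained log of AI audit events."""

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []           # list of (entry, hash) pairs
        self.prev_hash = self.GENESIS

    def append(self, entry: dict) -> None:
        h = _link_hash(entry, self.prev_hash)
        self.entries.append((entry, h))
        self.prev_hash = h

    def verify(self) -> bool:
        """Recompute every link; any edit to an earlier record breaks the chain."""
        prev = self.GENESIS
        for entry, h in self.entries:
            if _link_hash(entry, prev) != h:
                return False
            prev = h
        return True


chain = AuditChain()
chain.append({"event": "model_evaluation", "model": "demo-v1", "passed": True})
chain.append({"event": "bias_audit", "model": "demo-v1", "gap": 0.02})
print(chain.verify())   # True

chain.entries[0][0]["passed"] = False   # tamper with an earlier record
print(chain.verify())   # False: the tampering is detectable
```

A real deployment would distribute such a log across independent parties so no single company could rewrite its own audit history, which is what makes the structure useful for regulatory enforcement.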

And speaking of blockchain technology, 2023 will also be remembered for its strides in regulated tokenized assets:

  • Several major traditional financial institutions, such as Citigroup, JPMorgan, Swift, UBS and LSEG, announced tokenized assets services and products.
  • PayPal launched its own regulated stablecoin, PayPal USD (PYUSD). The support for the launch of PayPal stablecoin by the Chairman of the House Financial Services Committee may indicate a positive sentiment among legislators for regulated tokenized assets.
  • In late October, global regulators in Japan, Singapore, the UK, and Switzerland formed a tokenized assets policy forum to explore tokenized assets and funds use cases. The project aims to share knowledge and examine the benefits, regulatory challenges, and commercial use cases of tokenized assets and funds.
  • Fed Governor Christopher J. Waller gave a speech on “Innovations and the Future of Finance,” explicitly talking about tokenization and the advantages that blockchain technology offers relative to traditional approaches to conducting transactions.

Coming Full Circle

2022 ended with the collapse of FTX, the centralized crypto exchange, and the arrest of its founder and ex-CEO, Sam Bankman-Fried, for fraud. The collapse deepened the “crypto winter” and fueled misconceptions about what blockchain technology and Web3 are. In November, Bankman-Fried was convicted and faces a maximum sentence of 115 years in prison. Maybe now we can put this fiasco behind us and focus on what is truly important: how technology can benefit society.

But even as we make tremendous strides forward and continue to innovate, it is important to innovate responsibly. Innovators should not halt innovation and wait for regulators. They need to develop and nourish a “Responsible Innovation” mindset, understanding that being responsible will gain the trust of stakeholders and regulators, and help to create a more optimistic future.

The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.

Merav Ozair, PhD

Dr. Merav Ozair is a global leading expert on Web3 technologies, with a background as a data scientist and quant strategist. She has in-depth knowledge and experience in global financial markets and their market microstructure.
