Emerging Technologies: What to Expect in 2024 and Beyond


In 2023, Generative Artificial Intelligence (GenAI) exploded onto center stage, bringing with it unprecedented hype and concern, along with burgeoning regulation. ChatGPT and similar GenAI-powered tools, with their easy-to-use conversational interfaces, made the technology accessible to everyone, carrying AI from the tech fringes into the mainstream. It's clear that GenAI won't be going away any time soon.

Let’s put things in perspective. AI has been around for eight decades, evolving and advancing over the years. GenAI represents the latest advance in AI technology, and it won’t stop there: it will only continue to evolve exponentially.

Tech companies and the financial industry have been using AI in many of their products and services for decades, and the technology was already shaping our lives well before GenAI: think email and text corrections and suggestions, spam filtering, Google Maps, search engines, credit card fraud alerts, personalized recommendations on the sites you visit, customer service chatbots, the Apple Watch, Fitbit – and the list goes on.

But hardly anyone spoke about the fact that these things are AI-powered. In short, we have already been using AI in our daily lives. And that is what will happen with GenAI: services and products will be GenAI-powered, impacting every aspect of our lives without us being aware of what happens behind the scenes. That’s the most powerful state of any technology, when it becomes unconsciously part of our everyday lives.

As GenAI continues to evolve and be adopted, other technologies will innovate at an accelerated pace to fulfill an integrated “full system” solution.

Let’s look at what lies ahead, not just for AI, but all aspects of innovative technologies:

AI and GenAI

Everyday Automation

If 2023 was the year that GenAI burst into the mainstream, then 2024 will be the year when everyone starts to understand just how transformative GenAI will be to our lives. More and more companies will adopt GenAI, looking to increase productivity and efficiency by handing over menial tasks like obtaining information, scheduling, managing compliance, organizing ideas and structuring projects, freeing us to apply our truly human skills.

We will spend more time being creative, exploring new ideas and original thinking, or communicating with humans rather than programming machines. There are still ethical and regulatory issues to be resolved, and these will be addressed as the technology evolves.

Small Will be the New Big

Current Large Language Models (LLMs) will continue to thrive, but the diverse needs of enterprises will drive the rise of smaller, more flexible and more efficient language models. These models will keep shrinking to run on low-footprint installations with limited processing capabilities. As AI decentralizes and algorithms evolve to support smaller models and smaller data sets, right-sized, less power-hungry infrastructure becomes increasingly important.

Architecture techniques that make GenAI outcomes more accessible, such as Retrieval-Augmented Generation (RAG), will evolve rapidly. A good use case is when your model performs the task you want but could benefit from specific knowledge drawn from a custom document dataset – a knowledge base, for example. The benefit is that the RAG process doesn’t require time-consuming training runs: the LLM is trained before engaging in the process, and you simply make new domain-specific knowledge easier for the LLM to digest.
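The RAG pattern described above can be sketched in a few lines. This is a deliberately minimal illustration – real systems use embedding similarity and a vector store rather than word overlap, and all names here are hypothetical:

```python
# Toy sketch of the RAG pattern: retrieve relevant passages from a
# custom knowledge base, then prepend them to the prompt so a
# pre-trained LLM can use domain knowledge without retraining.
# Illustrative only: production RAG uses embeddings, not word overlap.

def score(query: str, passage: str) -> int:
    """Crude relevance score: count words shared between query and passage."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, knowledge_base: list[str], k: int = 2) -> list[str]:
    """Return the k passages most relevant to the query."""
    return sorted(knowledge_base, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str, knowledge_base: list[str]) -> str:
    """Augment the user query with retrieved context before sending it to the LLM."""
    context = "\n".join(retrieve(query, knowledge_base))
    return f"Context:\n{context}\n\nQuestion: {query}"

kb = [
    "Refunds are processed within 5 business days.",
    "Our headquarters are in Lisbon.",
    "Premium support is available 24/7 for enterprise plans.",
]
print(build_prompt("How long do refunds take to be processed?", kb))
```

The key point the sketch shows is that the model itself never changes – only the prompt is enriched with retrieved domain knowledge.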

This will lead, in the long run, to the creation of interconnected networks of models designed and fine-tuned for specific tasks, and to the development of true multi-agent generative ecosystems.

GenAI Becomes Culturally Aware

Culture influences everything: It is the foundation for how each one of us exists within a community. LLMs trained on culturally diverse data will gain a more nuanced understanding of human experience and complex societal challenges. This cultural fluency promises to make GenAI more accessible to users worldwide.

In the past few months, non-Western LLMs have started to emerge: Jais, trained on Arabic and English data; Yi-34B, a bilingual Chinese/English model; and Japanese-large-lm, trained on an extensive Japanese web corpus. These are signs that culturally accurate non-Western models will open GenAI to hundreds of millions of people with impacts ranging far and wide, from education to medical care.

Assistant AI and Agents

In November, OpenAI announced the Assistants API, the first step toward helping developers build agent-like experiences within their own applications. An assistant is a purpose-built AI that has specific instructions, leverages extra knowledge, and can call models and tools to perform tasks. AI assistants for developers will evolve from basic code generators into teachers and tireless collaborators that provide support throughout the software development lifecycle.
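The assistant-with-tools pattern can be illustrated with a small sketch. This is not OpenAI's actual API – it is a hypothetical, self-contained mock of the shape such systems take: an assistant holds instructions and a registry of tools it can invoke to perform tasks.

```python
# Minimal sketch of the "assistant" pattern: a purpose-built agent
# with instructions and a registry of callable tools. All names are
# illustrative; in a real assistant, the model itself decides which
# tool to call based on the user's request.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Assistant:
    instructions: str
    tools: dict[str, Callable[..., str]] = field(default_factory=dict)

    def register_tool(self, name: str, fn: Callable[..., str]) -> None:
        """Make a tool available to the assistant under a given name."""
        self.tools[name] = fn

    def run(self, tool_name: str, **kwargs) -> str:
        """Dispatch a task to the named tool."""
        if tool_name not in self.tools:
            return f"Unknown tool: {tool_name}"
        return self.tools[tool_name](**kwargs)

assistant = Assistant(instructions="You help schedule meetings.")
assistant.register_tool(
    "schedule", lambda day, time: f"Meeting booked for {day} at {time}."
)
print(assistant.run("schedule", day="Tuesday", time="10:00"))
```

The design choice worth noting is the separation of concerns: the assistant's instructions shape its behavior, while tools give it the ability to act on the outside world.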

AI assistants, or agents, will evolve beyond the developer community. As Bill Gates writes in a blog post, agents will change the way we use computers today: “You won’t have to use different apps for different tasks. You’ll simply tell your device, in everyday language, what you want to do. And depending on how much information you choose to share with it, the software will be able to respond personally because it will have a rich understanding of your life. In the near future, anyone who’s online will be able to have a personal assistant powered by AI that’s far beyond today’s technology.”

Gates believes that within five years, we’ll have access to this type of agent. That remains to be seen. But in the meantime, we’ll see GenAI getting things done for us – making reservations, planning a trip or connecting to other services. We’ll take steps toward multimedia GenAI applications, beyond text and images, with advances in multimodal models (e.g., Gemini).

The Future of Jobs

In a report earlier this year, Goldman Sachs predicted that if GenAI lives up to its promise, “extrapolating our estimates globally suggests that generative AI could expose the equivalent of 300mn full-time jobs to automation.” But “the good news is that worker displacement from automation has historically been offset by creation of new jobs, and the emergence of new occupations following technological innovations accounts for the vast majority of long-run employment growth.”

This can be expected. We have been through different cycles of technological evolution, and every cycle displaced jobs and created new jobs that did not previously exist. A study shows that more than 60% of the jobs done in the United States in 2018 did not yet exist in 1940.

We are going to experience how GenAI will affect knowledge workers, people who have been largely spared by the computer revolution of the past 30 years. Creative workers, lawyers, finance professionals and more are going to see their jobs change. It should make their jobs better, allowing them to do new things they couldn't have done before. Rarely will it completely automate a job – mostly it will augment and extend what we can do.

JPMorgan CEO Jamie Dimon told Bloomberg TV that "people have to take a deep breath. Technology has always replaced jobs. Your children are going to live to 100 and not have cancer because of technology, and literally they’ll probably be working three-and-a-half days a week." This is consistent with the McKinsey report stating that employees could scale back on their working hours thanks to the technology being used to automate some of their activities.

Data and Computing

Data is the driving force of AI and GenAI models. An organization's outcomes are only as good as the data on which they rely – garbage in, garbage out. IDC research found that organizations are only able to extract 38% of the value of their data, likely because locating, accessing, processing and protecting data throughout its lifecycle and across disparate environments is way too difficult.

Organizations will be pressed to get better control of their data and thus drive the need for a multicloud storage approach – using at least two different cloud providers to run applications.

Database Structure

As GenAI models evolve, and especially with agents, there will be a need for new types of database structures that can capture all the nuances and complex relationships, and quickly recall information while maintaining privacy. We are already seeing new ways of storing information, such as vector databases – databases that store data as high-dimensional vectors, mathematical representations of features or attributes, which may be better suited to storing data generated by AI models.
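The core operation of a vector database can be sketched in a few lines: store items as vectors and retrieve the nearest neighbors by cosine similarity. This is a toy illustration with hypothetical names – real vector databases add approximate-nearest-neighbor indexing to make this fast at scale:

```python
# Toy sketch of a vector store: items are kept as high-dimensional
# vectors, and queries retrieve the most similar items by cosine
# similarity. Illustrative only; production systems use ANN indexes.

import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

class VectorStore:
    def __init__(self):
        self.items: list[tuple[str, list[float]]] = []

    def add(self, key: str, vector: list[float]) -> None:
        self.items.append((key, vector))

    def nearest(self, query: list[float], k: int = 1) -> list[str]:
        """Return the keys of the k vectors most similar to the query."""
        ranked = sorted(self.items, key=lambda kv: cosine(query, kv[1]), reverse=True)
        return [key for key, _ in ranked[:k]]

store = VectorStore()
store.add("cat", [1.0, 0.1, 0.0])
store.add("dog", [0.9, 0.2, 0.0])
store.add("car", [0.0, 0.1, 1.0])
print(store.nearest([1.0, 0.0, 0.0], k=2))
```

Because similar content maps to nearby vectors, this kind of lookup retrieves items by meaning rather than by exact keyword match – which is what makes vector stores a natural fit for AI-generated data.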

Internet of Things (IoT)

As 5G networks blanket most major cities globally, they will amplify the potential of IoT applications such as self-driving cars, smart cities and remote surgery, all magnified by 5G’s unparalleled bandwidth and minimal latency. The real transformation will come when AI and GenAI integrate into this framework. The efficiency of IoT applications will be poised for a significant uplift, with AI meticulously analyzing vast data volumes to deduce meaningful patterns.

Edge Computing for Better Data Outcomes

With the vast computational needs of GenAI models, the cloud’s extensive resources will remain in demand, but edge computing will emerge as the linchpin for applications craving instant responsiveness. By processing data close to its origin, edge computing drastically reduces latency, making it crucial for real-time operations like autonomous driving and advanced medical imaging.

It’s critical for organizations to use their own data to train and tune new models and run inference where data is created. Think about a smart factory doing heavy process automation and creating real-time data. More and more resources are going to be at the edge of the network doing computation and storing information.

Quantum Computing

If a supercomputer gets stumped, that's probably because the big classical machine was asked to solve a problem with a high degree of complexity.

Complex problems are problems with many variables interacting in complicated ways. AI is already assisting in solving complex problems, but when problems are too complex, AI models alone are not enough; we also need appropriate computing power.

The combination of AI and quantum computing in compute-heavy fields holds great potential, including drug discovery, genome sequencing, cryptography, meteorology, material science, optimization of complex systems such as traffic flow through large cities, and even the search for extraterrestrial life.

The financial industry has been one of the early investors in quantum computing to enhance the power of AI systems developed for purposes such as fraud detection, risk management or high-frequency trading. As more organizations and industries utilize AI and GenAI and as those systems evolve, we’ll see the benefits of quantum computing applied across various compute-heavy fields other than finance.

Regulations

United States

As we enter an election year, we most likely should not expect much legislation in 2024. In the coming year we will assess the effectiveness of the Biden administration's Executive Order on Safe, Secure and Trustworthy Development and Use of AI, especially as it applies to the foundation models that power GenAI applications. When legislators convene again after the election, they’ll have the opportunity to use the lessons learned from the Biden executive order to structure AI regulations.

By mid-2024, two U.S. states – California and Colorado – will have adopted regulations addressing automated decision-making in the context of consumer privacy. While these regulations are limited to AI systems that are trained on or collect individuals’ personal information, both offer consumers the right to opt-out of the use of AI by systems that have significant impacts, such as in hiring or insurance.

Companies will have to start thinking about what it means when customers exercise their rights. If, for example, a large company uses AI to assist with its hiring process and hundreds of potential hires request an opt-out, do humans have to review those resumes? Would that produce a different, or better, process than what the AI was delivering? Understanding these issues will be quite beneficial.

European Union (EU)

In early December, Europe reached a provisional deal on landmark EU rules governing the use of AI, including governments' use of AI in biometric surveillance and how to regulate foundation models and GenAI. The final votes on the AI Act are expected to take place in early 2024, and if passed, the law will then have a gradual period before it becomes fully applicable.

The Act requires a high level of transparency, asking providers to furnish model cards with documentation on the training process and relevant details for downstream development. Most tech companies have put together Responsible AI policies and published them on their websites. It remains to be seen how they will provide the documentation and audit trail that the EU Act requires.


The Need for Auditable Responsible AI

In practice, monitoring of AI models often consists of periodic checks that track changes in key parameters and data distributions, and measure model performance over time. But these checks still do not provide an audit trail of what happened over time, or explanations of the causes of any changes – and there is no real-time monitoring.

Legislators, though, require transparency in downstream development documentation and audits, and AI systems are not set up to provide this information. A need for auditable Responsible AI emerges, and this is where blockchain technology can help companies implement auditable Responsible AI that is safe, secure and trustworthy.

Tokenized Assets

In 2023 we saw some strides in regulated tokenized assets, with traditional financial institutions such as JPMorgan, Citi, Swift and LSEG announcing tokenized assets and services. They have been working with regulators and are waiting for approval. Now that the FTX fiasco is behind us, we can focus on this technology and how it can benefit society. The Fed and other global regulators do recognize and appreciate the advantages that blockchain technology offers relative to traditional approaches.

Given regulators’ positive sentiment, it is likely that at least one of these tokenized assets or services will be approved in the coming year.

Bitcoin exchange-traded fund (ETF)

In the past ten years, dozens of companies have applied for a Bitcoin ETF with different structures and methods, but all failed to receive Securities and Exchange Commission (SEC) approval.

In June, BlackRock, the largest asset management company and ETF provider in the world, filed for a spot Bitcoin ETF. In December, representatives from BlackRock, Nasdaq and the SEC met to discuss the rule changes necessary to list the Bitcoin ETF. Nasdaq Rule 5711(d) establishes specific criteria and regulatory guidelines for the listing and trading of Commodity-Based Trust Shares on the Nasdaq exchange, detailing the requirements for initial and continued listing, along with surveillance and compliance measures to ensure market integrity and protect against fraudulent activity. The inclusion of a surveillance-sharing agreement aims to mitigate the market manipulation risks associated with crypto trading – something the SEC is very concerned about.

It seems that this time it’s different, and BlackRock’s ETF is likely to get approved.

If the BlackRock Bitcoin ETF is approved, it will potentially pave the way not only for approval of the firm’s Ethereum ETF, filed in November, but also for the approval of other digital asset ETFs filed by other companies.

Moreover, this move might clear some misconceptions about blockchain technology and its further adoption by institutions as well as mainstream investors.

Final Thoughts

There is much to look forward to in the years ahead as emerging technologies evolve and advance. No technology is perfect, and its potential to bring immense benefits to society is as great as its potential to cause damage and harm. Legislators require companies to adhere to responsible AI and responsible innovation, and we, as a society, should demand it.

It is imperative that innovators maintain a “responsible innovation” mindset, ensuring we enjoy the benefits of innovation while keeping its risks at bay. Only then can technology transform our lives for the better.

The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.

Merav Ozair, PhD

Dr. Merav Ozair is a leading global expert on Web3 technologies, with a background as a data scientist and a quant strategist. She has in-depth knowledge and experience in global financial markets and their market microstructure.
