
Defining a Responsible AI Standard

Talking Trends

The global gaming industry is at a pivotal moment after years of double-digit growth. Joel Simkins, Founder & CEO of XST Capital Group LLC; Scott Sadin, Co-CEO of IC 360; Desiree Dickerson, Co-Founder & CEO of THNDR; and Carson Hubbard, Founder & CEO of Rebet, joined TradeTalks to discuss CFTC event contracts, innovation in gaming, regulatory frameworks, market impact and potential future developments. The conversation explores the implications for stakeholders, including regulators, operators and consumers, offering a comprehensive analysis of the evolving gaming landscape. Ultimately, regulatory clarity is the cornerstone of sustainable growth, balancing innovation with responsibility and ensuring that all parties in the gaming landscape can thrive within a secure, well-structured environment.

For regulators, clarity reduces the risk of misinterpretation and facilitates effective oversight, ensuring that new gaming mechanisms adhere to ethical standards and public policy objectives. Operators benefit from a well-articulated regulatory landscape by gaining confidence in developing and marketing products without fear of inadvertent legal violations. Moreover, consumers are more likely to engage with gaming platforms when they are assured of their legal standing, security and fairness.

WATCH


Why the Financial Industry Is Undergoing Its Largest Technological Shift

Industry leaders delve into TradFi’s demand for digital asset infrastructure and why the financial industry is undergoing its largest technological shift since electronic trading. 

How Fiscal and Monetary Policies Could Impact the Stock and Bond Markets

Brian Joyce and Maddie Radner from the Nasdaq Market Intelligence Desk and Doug Huber, Deputy Chief Investment Officer of Wealth Enhancement, explore how fiscal and monetary policies could impact the stock and bond markets.

How AI Is Impacting Data Governance and Privacy

Experts discuss how AI is impacting data governance and privacy, as well as cybersecurity litigation trends.

This Week's Guest Spotlight

Scott Zoldi, Chief Analytics Officer at FICO

Over the past couple of years, we’ve seen the rise of new types of AI, including generative and agentic. How do you see AI continuing to evolve in 2025 and beyond?

Technologies such as generative AI and agentic AI/agentic workflows are newly popular but have been applied in various ways for many years. I believe what we are seeing is both broader exposure of, and new developments in, these technologies, along with open-source toolsets that make them more accessible. For example, generative AI has been around for decades, but new transformer technologies and compute capabilities make Gen AI easier and more attractive to experiment with.

AI technology is continuously evolving. In 2025 and beyond, I believe we will continue to see complex algorithms, the kind once reserved for PhDs and expert computational scientists, comfortably held in the hands of everyday practitioners. This will fuel a flywheel of experimentation and proofs of concept, and strong demand for enterprise-level AI capabilities in interpretable AI methods and ethical AI testing. These capabilities are pivotal in allowing algorithms to mature into enterprise-grade solutions. With interpretable, ethical AI, more organizations will be able to enter “the golden age of AI,” where these amazing technologies can be used safely on responsible AI rails.

In your TradeTalks interview, you mentioned that companies need to follow a standard under which to develop AI. How should companies go about defining that standard?

Defining a responsible AI standard requires first surveying the organization’s AI maturity. Questions on that survey should include:

  • Do you have a chief analytics or AI officer responsible for directing and leading AI development?
  • Are you organized as business/product teams with separate AI teams reporting into their respective business units?
  • Does the organization use AI only within a specialized AI research team?
  • Or are you just starting the AI journey?

It is important to understand all stakeholders’ opinions and ensure they are heard. This process should incorporate existing AI expertise and determine where common approaches exist and where algorithms and practices differ. That inventory will facilitate open discussion, but companies still need to converge on a single standard AI approach, which I call the Highlander Principle: there can be only one. For companies that don’t yet have an AI practice to leverage, many organizations are happy to share their approaches to get you jump-started.

How can companies ensure that their standard is able to adapt to evolving regulations?   

The power of having a corporate AI standard is that instead of managing tens, hundreds or thousands of AI models individually to ensure they meet regulatory thresholds, you manage a single standard: one you can discuss openly with regulators, get their input on, and then evolve.

Tools like blockchain can enforce the current standard and help practitioners meet model governance requirements. In doing so, you’ll carve out more time for these experts to focus on innovation, find new and more effective ways to meet regulations, or evolve the standard as regulations change. Again, this is accomplished through the vehicle of the single model standard, versus having data scientists individually assess the organization’s tens, hundreds or thousands of AI projects. Once you determine how to change and update the standard, you can introduce and govern all projects consistently, keeping data scientists aligned on regulatory requirements across a multitude of projects.
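To make the enforcement idea concrete, here is a minimal sketch of how a hash-chained (blockchain-style) ledger could record governance events against a single model standard. The `ModelLedger` class, its fields and the example events are illustrative assumptions, not FICO’s implementation.

```python
import hashlib
import json
import time

class ModelLedger:
    """Append-only, hash-chained log of model governance events.

    Each entry embeds the hash of its predecessor, so altering any past
    record invalidates every hash after it -- the tamper-evidence that
    a blockchain brings to model governance.
    """

    def __init__(self):
        self.entries = []

    def record(self, model_id, event, details):
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "model_id": model_id,
            "event": event,          # e.g., "standard_check", "bias_test"
            "details": details,      # e.g., standard version, test metrics
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)

    def verify(self):
        """Recompute the chain; returns False if any entry was altered."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True

# Hypothetical usage: every project logs its checks against the one standard.
ledger = ModelLedger()
ledger.record("credit-risk-v3", "standard_check", {"standard": "v2.1", "passed": True})
ledger.record("credit-risk-v3", "bias_test", {"metric": "demographic parity", "passed": True})
print("ledger intact:", ledger.verify())
```

Because every project writes to the same chained record under the same standard, an auditor or regulator can verify one artifact rather than re-reviewing each model individually.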

On the regulation front, do you expect regulations around AI to change during this administration and, if so, how?

Some think regulation limits innovation, but I think regulation creates the spark that inspires innovative solutions. Take DeepSeek, for example: its Chinese development team was constrained by fewer, less performant GPUs, so it had to innovate hard to produce a viable, performant LLM competitor at much lower cost. So, although we may see less AI regulation under the current administration, that doesn’t mean proactive, inventive organizations won’t strive to meet their AI objectives with safe, responsible AI, and do the innovation work to get there.

You wrote a blog in February about what ethical AI is and identifying hidden bias. Can you elaborate on how companies can find hidden bias within their datasets?

What makes AI so amazing is that many AI applications utilize machine learning, the science of algorithms finding solutions not prescribed by humans. This capability is fundamentally powerful: these algorithms can explore relationships between inputs that a human would not anticipate as predictive, which is what makes machine learning superhuman. However, machine learning is a double-edged sword. It delivers more predictive power and accuracy, but often in ways a human won’t understand, and in ways that let models find proxies for protected groups. The latter can, in effect, propagate bias at scale.

To find hidden bias, data scientists can do two things. First, use interpretable machine learning algorithms, which expose for human inspection the relationships between variables that the model has learned. Second, use automated bias testing, which builds on those interpretable algorithms: constrain the complexity of learned relationships so humans can still interpret them, automate testing against bias acceptance criteria, and interrogate the datasets themselves for bias. This prevents data scientists from unknowingly folding bias into their models and propagating it at scale.
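As a toy illustration of those two steps (not FICO’s tooling), the sketch below trains an interpretable model on synthetic data, inspects its learned coefficients, and runs an automated acceptance test on approval-rate parity across a protected group that was deliberately excluded from the features. All column names and the 5% threshold are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Synthetic data. `group` is a protected attribute that is deliberately
# EXCLUDED from training -- yet the model can still proxy for it.
rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "income": rng.normal(50, 15, n),
    "zip_density": rng.normal(0, 1, n),  # a potential proxy variable
    "group": rng.integers(0, 2, n),      # protected attribute (0 or 1)
})
# Correlate the proxy with the protected attribute to mimic hidden bias.
df["zip_density"] += 0.8 * df["group"]
df["approved"] = (
    df["income"] / 30 + df["zip_density"] + rng.normal(0, 1, n) > 2
).astype(int)

# Step 1: interpretable model -- coefficients expose what was learned.
features = ["income", "zip_density"]
model = LogisticRegression().fit(df[features], df["approved"])
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")

# Step 2: automated bias acceptance test (demographic parity difference).
preds = model.predict(df[features])
rates = [preds[(df["group"] == g).to_numpy()].mean() for g in (0, 1)]
parity_gap = abs(rates[0] - rates[1])
THRESHOLD = 0.05  # illustrative acceptance criterion
print(f"approval-rate gap between groups: {parity_gap:.3f}")
if parity_gap > THRESHOLD:
    print("bias test FAILED: the model likely proxies for the protected group")
else:
    print("bias test passed")
```

On this synthetic data the test fails by construction, showing how a proxy variable can reintroduce bias even when the protected attribute is never given to the model.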

What can companies do today to prepare for the next wave of AI innovation?

First and foremost, you’ve got to ensure that you’re constantly following new developments in AI. Then consider what business problems you need to solve, and whether you are effectively waiting on a new AI innovation or capability to solve them.

The reality is, AI innovation can be a hammer that treats everything as a nail. If you are satisfactorily solving your business problems today through existing forms of AI or other methods, preparing for the next wave means ensuring you are not caught up in chasing every new AI fad. If there is a large unsolved business need that aligns with the promise of a new AI innovation, be ready to build your AI staff or work with vendors specializing in that innovation. But to me, the best way to prepare is to understand the right time to leap; leaping into every AI innovation that arises can be unproductive and hurt business results in both the short and long term.


 

This article was originally published in our TradeTalks newsletter. Sign up here to access exclusive market analysis by a new industry expert each week. We also spotlight must-see TradeTalks videos from the past week.
