
AI and the Future of Identity Security

By Rohit Ghai, Chief Executive Officer of RSA

When you strip away all the jargon, policies, titles, and standards, cybersecurity has always been a numbers game. Security teams must protect X users, applications, entitlements, and environments. Organizations rely on Y security professionals. They have budget Z to spend on technologies, tools, and training.

Those numbers are getting away from us. The identity universe is expanding far faster than human actors can keep pace: in a 2021 survey, more than 80% of respondents said that the number of identities had more than doubled, and 25% reported a 10X increase.

It’s not just that we’re creating more identities—we’re creating identities that can do far more than they need to. Roughly 98% of permissions go unused. Those risks scale as organizations bolt on more environments.

It’s no wonder that in 58% of cases, security teams found out they had been breached only when the threat actor disclosed the attack. More than half the time, organizations learned they had been beaten when the bad guys told them they’d lost.

Time and again, we’ve seen threat actors exploit these odds by attacking organizations’ identity infrastructures. Colonial Pipeline, SolarWinds, LAPSUS$, and state-sponsored threat actors all demonstrated how large, interconnected, and vulnerable identity infrastructures have become.

Don’t get me wrong: I don’t blame cybersecurity teams for these breaches. It’s not just that their adversaries were clever, or lucky, or both. It’s not just that private organizations can’t be expected to match a nation-state’s resources.

Focusing on those variables misses the point, which is that human actors can no longer be expected to ensure the security, compliance, and convenience of an organization’s IT estate. The speed, scope, and complexity of what we must protect have grown beyond human capacity.

Cybersecurity needs AI to get to zero trust

The good news is that humans don’t have to act alone. Just as the identity universe is expanding beyond human capacity, artificial intelligence—AI—can now help secure the entire identity lifecycle.

We’re creating new tools suited to this moment because AI is great at doing something that humans struggle with: making sense of large quantities of data quickly.

As an example, recall that 98% of entitlements are never used. That’s likely because IT and identity teams over-provision accounts from the moment a new user is onboarded. Because humans tend to see the world in coarse-grained approximations, most users begin with more entitlements than they need. The tail wags—and endangers—the dog.

While coarse-grained approximations are useful constructs, they’re fundamentally at odds with the zero-trust directive to enforce least privilege. Zero trust demands fine-grained, just-in-time analysis and decision-making. Getting to zero trust means knowing who a user is, what they need, why, and for how long, then re-examining that information continuously to assure a request is appropriate.
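
To make that demand concrete, here is a minimal, hypothetical sketch in Python of a just-in-time, least-privilege access check. The class, field names, thresholds, and the externally supplied risk score are assumptions made for illustration, not a description of any particular vendor’s implementation.

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    # Hypothetical illustration only: names and thresholds are invented for this sketch.

    @dataclass
    class AccessRequest:
        user_id: str              # who is asking
        entitlement: str          # what they want to reach
        justification: str        # why they say they need it
        requested_for: timedelta  # how long they want it

    MAX_GRANT = timedelta(hours=8)  # assumption: no standing access, only short-lived grants

    def evaluate(request: AccessRequest, risk_score: float) -> dict:
        """Grant the narrowest, shortest access that satisfies the request.

        risk_score is assumed to come from a separate analytics engine that has
        already scored recent behavior (0.0 = typical, 1.0 = highly anomalous).
        """
        if not request.justification:
            return {"decision": "deny", "reason": "no business justification supplied"}
        if risk_score > 0.7:
            return {"decision": "step_up", "reason": "anomalous behavior, require fresh MFA"}
        # Time-box every grant and mark it for re-evaluation instead of leaving
        # a standing entitlement behind.
        duration = min(request.requested_for, MAX_GRANT)
        return {
            "decision": "grant",
            "expires_at": datetime.now(timezone.utc) + duration,
            "reevaluate": True,
        }

    if __name__ == "__main__":
        request = AccessRequest("jsmith", "prod-db:read", "incident 4821 triage", timedelta(hours=2))
        print(evaluate(request, risk_score=0.2))

The point of the sketch is the shape of the decision: every grant is scoped, time-boxed, and flagged for re-evaluation rather than left standing.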

Humans can’t operate at that level or speed. But AI can. A machine isn’t daunted by thousands of users with millions of entitlements changing every second. In fact, a machine can become more effective by learning from a broader dataset. While humans are overwhelmed by that much data, machines can use it to develop stronger, better, faster cybersecurity.

We have zero chance of getting to zero trust without AI. The good news is that AI-powered cybersecurity isn’t vaporware: more than 60 startups and major vendors, including RSA, have announced AI-powered security innovations. AI can parse authentication data to find out who is trying to gain access, assess entitlements data to learn what someone could access, and study usage data to see what someone really is accessing.
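
As a simple illustration of the entitlement and usage analysis described above, the hypothetical sketch below compares what identities are allowed to access with what they actually accessed in order to surface never-used permissions. The data layout, names, and numbers are invented for this example; a real deployment would draw on an identity governance platform and on authentication and usage logs at far larger scale.

    # Hypothetical sketch: identities and entitlement names are invented for illustration.

    entitlements = {  # what each identity could access (entitlement data)
        "jsmith": {"prod-db:read", "prod-db:write", "billing:export", "hr:view"},
        "akumar": {"repo:push", "ci:deploy"},
    }

    usage_log = [     # what each identity actually did access (usage data)
        ("jsmith", "prod-db:read"),
        ("akumar", "repo:push"),
        ("akumar", "ci:deploy"),
    ]

    def unused_entitlements(granted, log):
        """Compare granted entitlements against observed usage to surface standing
        permissions that were never exercised, i.e. candidates for removal."""
        used = {}
        for user, action in log:
            used.setdefault(user, set()).add(action)
        report = {}
        for user, perms in granted.items():
            idle = perms - used.get(user, set())
            report[user] = {
                "unused": sorted(idle),
                "unused_ratio": round(len(idle) / len(perms), 2) if perms else 0.0,
            }
        return report

    if __name__ == "__main__":
        for user, stats in unused_entitlements(entitlements, usage_log).items():
            print(user, stats)

The sketch only shows the shape of the analysis; at enterprise scale the same comparison runs across millions of entitlements that change constantly, which is exactly the workload that overwhelms humans and suits machines.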

AI can prevent risks, detect threats, and automate responses. And by identifying the highest-priority vulnerabilities, it helps security teams focus on the right thing rather than everything.

Identity must adapt

Identity establishes every organization’s most critical defenses. But if identity is the defender’s shield, then it’s also the attacker’s target. Identity is the most attacked part of the attack surface: 84% of organizations reported an identity-related breach in 2022, per the Identity Defined Security Alliance, and Verizon has found that passwords have been a leading cause of data breaches every year for the last 15 years.

We can’t wait for the Security Operations Center (SOC) to step in: a rapidly growing identity universe means more endpoints, network traffic, and infrastructure for SOC teams to monitor. And those teams lack visibility into brute-force attempts, rainbow table attacks, and other identity threats; they’re not part of the SOC’s remit and not about to be, either.

With the SOC overwhelmed and identity under attack, identity must adapt. It’s not enough that an identity platform is great at defense. In the future, identity also needs to be great at self-defense.

We need to build platforms that do identity threat detection and response (ITDR) intrinsically—not as a feature or an option, but as a fundamental part of their nature.

Our industry is developing those capabilities—but we need to move faster. Cybercriminals are already using AI to write polymorphic malware, improve and execute phishing campaigns, and even hack basic human judgment and reasoning with deepfakes.

Identity is going to get hit by smarter attacks. We can either wait to see their impact or we can work to even the odds.

Humans must evolve

Integrating AI into cybersecurity will be difficult work, but even in the early days we’re already seeing its potential: IBM found that organizations with fully deployed AI security and automation reduced the time it took to identify and contain a breach by 74 days and lowered the cost of a data breach by more than $3 million.

But this work won’t be without its challenges: we humans face a pending identity crisis. As cybersecurity professionals, we will need to reimagine our roles working alongside AI. We’ll have to learn new skills: training, supervising, monitoring, and even protecting AI. We’ll need to prioritize asking AI better questions, setting its policies, and refining its algorithms to stay a step ahead of our adversaries.

Ultimately, it’s not just the technology that must evolve. It’s all of us.

Rohit Ghai is Chief Executive Officer of RSA, a global leader in identity and access management (IAM) solutions for security-first organizations. Around the world, 12,000 organizations rely on RSA to manage 25 million enterprise identities and secure access for millions of users. Previously, Rohit has run software and SaaS organizations focused on cybersecurity and information management in highly regulated markets. He advises global customers on their digital and security transformation and is often cited in broadcast and print media on topics like data privacy, content management, information governance, digital risk, cybersecurity, and how organizations can adapt to new technologies and thrive in the digital era.

The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.
