There is a cybersecurity arms race underway, and it is quickly changing how we fight cybercrime.
While many of us have been figuring out how to implement AI safely in our organizations, criminals have been embracing the technology and learning how to incorporate it into their operations. That caution put us a few steps behind. Today, we need to catch up, regroup, and—where appropriate—arm ourselves with AI-enabled tools capable of adapting quickly to new threats.
In this article, I cover three AI-driven threats that are already showing up in the real world—and practical steps organizations can take now to counter them.
The most publicized way criminals are using AI is through deepfakes that impersonate people employees interact with—colleagues, vendors, even executives. These deepfakes can be used to trick employees into sending money, sharing sensitive information, or even hiring fraudulent candidates for remote roles.
The primary defense today is awareness: employees should not trust their eyes and ears alone in online communications. When something feels off—especially involving payments, credentials, or urgency—they should be taught to verify identity using a second channel (or in person). In large organizations where verification is difficult, I expect a new class of tools will emerge to help individuals reliably prove identity to others.
A second AI-assisted threat is AI-generated phishing and spear-phishing. Traditional phishing often came from unknown senders and was easier to spot thanks to obvious spelling and formatting errors. Today, criminals can use AI to learn who an employee is likely to interact with and generate polished, convincing messages that impersonate trusted contacts.
As these emails become harder for filters to detect, organizations should continue updating email defenses with AI-aware capabilities while reinforcing simple user habits: slow down, validate unexpected requests, and avoid clicking links or opening attachments when anything seems unusual.
The threat that keeps many cybersecurity professionals awake at night is AI-powered botnets that can find and exploit vulnerabilities in websites and systems. In some cases, AI systems can identify a previously unknown vulnerability and generate malware to exploit it within minutes.
That reality changes the old assumptions: patching known vulnerabilities within 30 days and running an annual penetration test is no longer enough. We need our own AI-powered capabilities to continuously test defenses, hunt for new weaknesses, and validate exploit paths—so we can learn about vulnerabilities before criminals do.
The challenge is that many of the defensive tools needed to keep up with these threats are still being developed and aren’t mature yet. The next decade will likely be characterized by continuous investment in defensive mechanisms that feel like science fiction today but will become standard parts of a strong cybersecurity program.
To prepare for AI-enabled threats, focus on a few practical steps you can start this quarter:

- Train employees to recognize deepfake and impersonation attempts, and establish a second-channel verification procedure for any request involving payments, credentials, or hiring.
- Update email defenses with AI-aware detection capabilities, and reinforce user habits: slow down, validate unexpected requests, and avoid clicking links or opening attachments when anything seems unusual.
- Move beyond 30-day patch cycles and annual penetration tests toward continuous, automated testing that hunts for new weaknesses and validates exploit paths.
- Budget for emerging AI-enabled defensive tools, and reassess the market regularly as these capabilities mature.
AI-driven threats aren’t slowing down—and neither should your defenses. If you need help strengthening your cybersecurity program and overall strategy, we’re here to help. Contact us to get the conversation started!