Artificial Intelligence
b3rt0ll0, Aug 14, 2025
The IBM Cost of a Data Breach Report is an annual global benchmark study that analyzes hundreds of real breaches to quantify their impacts.
It’s widely regarded by cybersecurity leaders as a bellwether for emerging trends in cyber risk and defense. This year’s report paints a complex picture: for the first time in five years, the global average cost of a data breach declined, to $4.44 million (down 9% from last year), thanks in part to organizations responding faster with AI-assisted defenses.
Yet, that silver lining comes with new storm clouds. Attackers are rapidly weaponizing AI:
Roughly 1 in 6 breaches now involve malicious use of AI, such as generative AI-crafted phishing and deepfake scams.
Many organizations are scrambling to keep up with AI governance, with 63% lacking formal AI security policies.
A stunning 97% of AI-breach victims had no proper AI access controls in place.
These findings underscore a widening “AI oversight gap” between the rush to adopt AI and the ability to secure it.
For Hack The Box (HTB), whose mission is advancing cybersecurity readiness and skills development, IBM’s report is both a wake-up call and a validation. In the following sections, we break down the top five insights from the 2025 report and discuss how to address each.
We’ll see how continuous training and realistic simulations—the core of HTB platforms—can help organizations translate these insights into concrete action.
In 2025, threat actors increasingly turned to artificial intelligence to amplify their attacks: 16% of data breaches involved attackers using AI tools, marking the first year the report recorded significant AI involvement.
The most common malicious uses were AI-generated phishing campaigns (37% of attacker AI usage) and deepfake impersonation attacks (35%).
Generative AI allows cybercriminals to craft convincing phishing emails or fake voices and videos at machine speed and scale, reducing the time needed to create a phishing lure from 16 hours to just 5 minutes.
For defenders: organizations now face machine-assisted adversaries who can continuously refine their tactics with AI, making attacks more personalized and harder to detect. How can we counter this threat together?
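To make the defensive side concrete, here is a minimal, purely illustrative heuristic for flagging common phishing tells (urgency language, raw-IP links). The function name and keyword list are assumptions for the sake of the example; real detection pipelines combine many more signals.

```python
import re
from urllib.parse import urlparse

# Illustrative only: a toy heuristic, not a production phishing filter.
URGENCY_TERMS = {"urgent", "immediately", "verify", "suspended", "act now"}

def phishing_indicators(subject: str, body: str, links: list[str]) -> list[str]:
    """Return a list of simple red flags found in an email."""
    flags = []
    text = f"{subject} {body}".lower()
    hits = [t for t in URGENCY_TERMS if t in text]
    if hits:
        flags.append(f"urgency language: {hits}")
    for url in links:
        host = urlparse(url).hostname or ""
        # Raw IP addresses in links are a classic phishing tell.
        if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
            flags.append(f"IP-based link: {url}")
    return flags
```

The point of the sketch: each rule is cheap and individually weak, which is exactly why AI-generated lures that avoid the obvious tells demand trained human reviewers on top of automation.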
AI Red Teaming: Test your models like attackers would
Embrace a proactive defense strategy against AI-powered attacks. The courses featured in the AI Red Teamer job-role path (in collaboration with Google) are designed to bridge the skill gap in AI security, ensuring that organizations have, or can source, trained professionals who can safeguard pivotal assets from AI-augmented threats. The program provides practical knowledge to identify and mitigate adversarial AI threats (including data poisoning, model evasion, and jailbreaks), apply red teaming methodologies to evaluate AI agents, understand best practices for AI security aligned with Google’s Secure AI Framework (SAIF), and engage in real-world attack simulations and assessments.
The report shines a light on “shadow AI”—the use of AI tools or systems within an organization without proper approval or security oversight. This has rapidly emerged as a major risk factor.
In 2025, 20% of organizations studied reported a breach that was caused by shadow AI usage, with an average of $670,000 in additional breach costs. Even our exclusive community research revealed that nearly two-thirds of participants (63%) are already using AI tools like ChatGPT or GitHub Copilot during training or day-to-day tasks.
In other words, one in five breaches began with employees or departments using unsanctioned AI apps, APIs, or platforms that introduced vulnerabilities.
Why such a big impact?
Unvetted AI tools often bypass normal security controls and may connect to multiple services, creating wide attack surfaces. The rush to leverage AI for productivity is opening unintended security holes – a risk many organizations didn’t fully anticipate. So how can teams harness AI without losing oversight?
Discover how to use Hack The Box MCP Server self-service tokens and admin controls to turn CTFs into guided, AI-augmented learning experiences for all. MCP embeds HTB seamlessly into these AI-native workflows by standardizing how AI agents and LLMs interact with our platforms, transforming how people learn and compete.
TRY MCP SERVER
Despite all the new threat vectors out there, the IBM report confirms that phishing remains the most common initial attack vector for breaches (16% of total, also confirmed by our customer survey).
Phishing’s enduring success hinges on exploiting human failure – tricking employees or users into divulging credentials, clicking malicious links, or executing malware.
The average cost of a breach caused by phishing is $4.8 million – roughly on par with the global average – indicating that phishing incidents can be just as damaging as more “sophisticated” attacks, as we unfortunately saw unfold in recent Scattered Spider activity.
While technology evolves, adversaries will continue to prey on fundamental human trust and error, fueled by the rise of attacker AI (as noted above), which is turbocharging phishing. So not only is phishing not going away; it’s getting smarter.
HTB defensive labs (Sherlocks) simulate realistic cyber incidents, dropping SOC and DFIR teams into the middle of identification, remediation, and recovery. Teams can double down on phishing prevention and response by mapping the MITRE ATT&CK tactics used by real adversaries to HTB scenarios and implementing predictive defensive measures.
Each reported phish or thwarted click is a small win that, over time, can prevent a multi-million dollar breach – get the basics right and secure your business.
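The ATT&CK-to-scenario mapping described above can be kept as a simple data structure, which makes coverage gaps easy to audit. The technique IDs below are real MITRE ATT&CK identifiers; the lab names are hypothetical placeholders, not actual HTB Sherlock titles.

```python
# Sketch only: ATT&CK technique IDs are real, the lab names are hypothetical.
COVERAGE = {
    "T1566.001": {"name": "Spearphishing Attachment", "labs": ["Phish-Hunt"]},
    "T1566.002": {"name": "Spearphishing Link", "labs": []},
    "T1204.002": {"name": "User Execution: Malicious File", "labs": ["Maldoc-Triage"]},
}

def coverage_gaps(coverage: dict) -> list[str]:
    """Technique IDs seen in adversary playbooks but not yet exercised in a lab."""
    return [tid for tid, entry in coverage.items() if not entry["labs"]]
```

Running `coverage_gaps(COVERAGE)` surfaces the techniques your team has never rehearsed, turning "train against real adversaries" from a slogan into a checklist.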
The average breach lifecycle (the combined time to identify and contain a breach) dropped to 241 days – a nine-year low, and 17 days faster than the previous year.
Security teams have been steadily improving their mean time to identify (MTTI) and mean time to contain (MTTC) since a peak of 287 days in 2021. This acceleration is largely attributed to better monitoring, threat hunting, and the adoption of AI/automation in defense.
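MTTI and MTTC are just averages over incident milestone dates, and tracking them internally is straightforward. The incident dates below are invented for illustration; only the metric definitions follow the report's usage.

```python
from datetime import date

# Toy incident records; the dates are made up for illustration.
incidents = [
    {"occurred": date(2025, 1, 10), "identified": date(2025, 6, 1), "contained": date(2025, 8, 15)},
    {"occurred": date(2025, 2, 1),  "identified": date(2025, 7, 1), "contained": date(2025, 9, 1)},
]

def mean_days(records, start, end):
    """Average number of days between two lifecycle milestones."""
    return sum((r[end] - r[start]).days for r in records) / len(records)

mtti = mean_days(incidents, "occurred", "identified")   # mean time to identify
mttc = mean_days(incidents, "identified", "contained")  # mean time to contain
lifecycle = mtti + mttc  # the breach lifecycle combines both phases
```

Computing these from your own ticketing data gives you a baseline to compare against the report's 241-day benchmark.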
Breaches contained in under 200 days cost significantly less (around $3.87M) than those that dragged on longer than 200 days (around $5.01M).
However, 241 days is still about 8 months, which is an eternity in cyber terms. Third-party vendors or supply chain compromises often extend well beyond the average (often 260+ days to resolve), due to their complexity.
The goal must be to drive detection and containment times down further, ideally to days or weeks, not months. How can we achieve this level of collaborative security?
Security teams can use the Detection & OpSec Cyber Range (or any other purple-oriented course) as a stage for conducting purple team exercises where both red and blue work in sync.
Give the red team objectives and task the blue team with detecting and containing them. Because the range is controlled and repeatable, these exercises can be run multiple times to measure improvement – moving your cyber operations from reactive to predictive.
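"Run multiple times to measure improvement" implies keeping a per-run metric. A minimal sketch, with hypothetical run data: record minutes from each red-team action to blue-team detection, then compare medians across runs.

```python
# Hypothetical per-exercise results: minutes from red-team action to detection.
runs = {
    "run-1": [95, 120, 80],
    "run-2": [60, 70, 55],
    "run-3": [30, 45, 25],
}

def median_detection(minutes):
    """Median detection time for one exercise run."""
    s = sorted(minutes)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

trend = {run: median_detection(times) for run, times in runs.items()}
# A downward trend across repeated runs is the signal that the team is improving.
```

The median (rather than the mean) keeps one missed detection from masking genuine progress on the rest.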
A paradox in the 2025 report is that while attackers rush to use AI, many defenders are still hesitant.
Still, organizations that do harness AI and automation in cybersecurity are reaping significant benefits: those using security AI strategically saved $1.9 million on average in breach costs and shortened their breach lifecycle by 80 days.
Yet adoption of AI and automation in security is less widespread than you might expect, with only about one-third of organizations reporting extensive use across their security operations.
The majority of companies are still relying on traditional, manual methods in many parts of their security workflow or fragmented tooling (and there could be many reasons for this slow uptake: budget constraints, lack of trust in AI tools, or simply a shortage of in-house skills).
Organizations not leveraging these technologies are leaving money (and time) on the table – potentially ceding ground to attackers who face no such hesitance. AI and automation are a force multiplier for defense, just as they are for offense.
But how can enterprises adopt them strategically?
Unlock the best of both worlds: AI for speed and breadth, humans for creativity and intuition on the trickiest parts. Less experienced professionals (like new hires) can learn from AI agents by observing how they approach problems, somewhat like how chess players analyze AI games, while agents can be trained on the correct protocols to follow.
Make sure your AI agents are tested before deploying them to production, with clearly defined capabilities and fine-tuned collaboration with your human cyber workforce.
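One simple pre-deployment guardrail for "clearly defined capabilities" is an explicit action allowlist that an agent's proposed plan is checked against before anything executes. The action names below are hypothetical examples, not a real agent API.

```python
# Hypothetical guardrail: an agent's proposed actions must stay inside
# an explicit allowlist before it is allowed to touch production.
ALLOWED_ACTIONS = {"isolate_host", "open_ticket", "collect_triage"}

def validate_plan(plan: list[str]) -> list[str]:
    """Return any proposed actions outside the agent's approved capabilities."""
    return [action for action in plan if action not in ALLOWED_ACTIONS]
```

Anything `validate_plan` returns gets escalated to a human instead of executed, which is the fine-tuned human-agent collaboration in miniature.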
In an era of AI-fueled threats and fast-moving technology, organizations can no longer afford a passive, reactive approach to cybersecurity.
Breach costs can be lowered, and incidents can be shortened—if we invest in the right areas.
And it’s not only about technology. The missing ingredient for many, as we’ve discussed, is the human factor: a team that’s skilled, practiced, and adaptable enough to leverage new tools and counter new threats.
By training with HTB’s labs, courses, and simulations, your team can build the muscle memory to respond to incidents faster, recognize phishing tactics, manage the risks of uncontrolled technology, and confidently build an AI-augmented cyber workforce.