AI-driven attacks are accelerating, but they still depend on human identity. The recent Anthropic investigation shows that attackers used AI to automate reconnaissance, exploit development, and credential harvesting, yet the breach succeeded because the attackers obtained legitimate usernames and passwords. Human Risk Management strengthens the identity layer by correlating security signals to the people behind them, proactively reducing the opportunities attackers can exploit at machine speed.
When Anthropic released its recent report detailing how a Chinese state-sponsored group used an AI model to automate reconnaissance, exploit development, and credential harvesting across dozens of global organizations, it marked a clear shift in how cyberattacks are now executed. The attackers used AI to scan networks, identify vulnerabilities, generate exploit code, and even document their own activity.
But despite the sophistication of the attack, the turning point was surprisingly familiar.
The AI agent succeeded because it harvested legitimate usernames and passwords. Once it had those credentials, it located the highest-value accounts, created backdoors, and exfiltrated sensitive data with minimal human oversight.
Anthropic explains this plainly:
“The framework was able to use Claude to harvest credentials (usernames and passwords) that allowed it further access and then extract a large amount of private data… The highest-privilege accounts were identified, backdoors were created, and data were exfiltrated with minimal human supervision.”
AI accelerated the attack, but human identity structures made it possible. And that is where Human Risk Management plays a vital role. HRM is not a silver bullet. It cannot prevent every threat or eliminate every vulnerability. But it can reduce the weak points attackers rely on, strengthen workforce habits, and make organizations more resilient. In a world where attackers move at machine speed, those human-centric improvements matter. Below are the key questions raised by this attack and how HRM contributes to the answer.
How are AI-powered attacks different from standard cyberattacks?
The Anthropic investigation showed that AI transforms cyberattacks from slow, manual, sequential campaigns into automated operations that scale quickly. According to Anthropic and coverage from outlets like the Associated Press, the attackers used AI to automate reconnaissance, probe systems, generate malicious code, and harvest credentials across about thirty targets worldwide.
AI does not introduce new attack techniques. It simply performs them faster, more consistently, and without fatigue.
Anthropic reports that the AI performed eighty to ninety percent of the operation and, at its peak, made thousands of requests, often several per second.
Yet even with this speed, AI still must operate within the identity environment created by humans. It needs real accounts. Real permissions. Real passwords. Real pathways. That means defending against AI attacks must begin with strengthening human behavior around identity hygiene, not just relying on traditional security tools.
What role do employees and human access patterns play in AI-powered attacks?
Anthropic’s report does not detail internal configurations or security decisions inside the victim organizations. However, across thousands of industry incidents, we consistently see human-driven patterns shaping the identity landscape that attackers rely on to obtain credentials. These patterns are not signs of negligence. They are the natural outcome of real people working in real environments under real pressure.
A Human Risk Management program brings together signals from identity systems, DLP tools, SIEM platforms, email and phishing defenses, and training programs, then connects that information back to the human identities behind it. Instead of isolated technical alerts, HRM gives CISOs and SAT leaders a clear understanding of where risky patterns are forming across the workforce, which groups or roles may need additional support, and which behaviors can be strengthened. With this clarity, security teams can guide employees more effectively through timely nudges, coaching, and micro-training that help people build resiliency in the course of their everyday work.
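To make that correlation step concrete, here is a minimal sketch of the idea: events from several tool feeds, already normalized to a shared user identity, are joined and rolled up into a per-person view. The event shapes, field names, and scoring weights are invented for illustration; they are not a real Living Security API or schema.

```python
from collections import defaultdict

# Hypothetical events, each already normalized to include the user's identity.
# Real feeds (IdP, DLP, SIEM, phishing simulations, training) vary widely.
signals = [
    {"user": "j.rivera", "source": "idp",      "event": "mfa_not_enrolled"},
    {"user": "j.rivera", "source": "dlp",      "event": "password_in_shared_doc"},
    {"user": "a.chen",   "source": "phishing", "event": "simulation_clicked"},
    {"user": "a.chen",   "source": "training", "event": "module_completed"},
]

# Illustrative weights: which behaviors raise or lower a person's risk score.
WEIGHTS = {
    "mfa_not_enrolled": 3,
    "password_in_shared_doc": 4,
    "simulation_clicked": 2,
    "module_completed": -1,
}

def correlate(events):
    """Group raw tool signals by human identity and compute a simple score."""
    per_user = defaultdict(lambda: {"score": 0, "events": []})
    for e in events:
        view = per_user[e["user"]]
        view["events"].append((e["source"], e["event"]))
        view["score"] += WEIGHTS.get(e["event"], 0)
    return dict(per_user)

for user, view in correlate(signals).items():
    print(user, view["score"], view["events"])
```

The essential design choice is the join key: once every signal is anchored to a person rather than a device or an account, the downstream questions (who needs support, where risk clusters) become simple aggregations.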
Here are common human-driven patterns found in many organizations, and how HRM helps improve them:
Skipping or delaying patches
Updates often feel risky or disruptive, so people postpone them. Endpoint management tools, patch management systems, and vulnerability scanners already capture which devices and users are falling behind on updates.
A Human Risk Management platform brings this information together and ties it to the human identities and roles behind it. Instead of only seeing that “a number of machines are unpatched,” HRM helps security teams understand which segments of the workforce tend to delay updates.
With that clarity, HRM can automatically offer gentle nudges, supportive coaching, or short, timely micro-lessons that help employees feel more confident applying updates on schedule.
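As a rough illustration of that roll-up, the sketch below takes per-device patch-lag records (assumed here to be already mapped to an owner and team) and surfaces which workforce segments fall furthest behind. The data shape and the 30-day threshold are assumptions, not vendor defaults.

```python
from statistics import mean

# Assumed input: endpoint-management records already mapped to owner and team.
devices = [
    {"owner": "j.rivera", "team": "sales",       "days_since_patch": 45},
    {"owner": "a.chen",   "team": "engineering", "days_since_patch": 3},
    {"owner": "m.okafor", "team": "sales",       "days_since_patch": 60},
]

NUDGE_THRESHOLD_DAYS = 30  # illustrative policy line, not a product default

def lagging_segments(records):
    """Average patch lag per team; flag segments that may need coaching."""
    by_team = {}
    for r in records:
        by_team.setdefault(r["team"], []).append(r["days_since_patch"])
    return {
        team: {"avg_lag": mean(lags), "nudge": mean(lags) > NUDGE_THRESHOLD_DAYS}
        for team, lags in by_team.items()
    }

print(lagging_segments(devices))
# e.g. {'sales': {'avg_lag': 52.5, 'nudge': True}, 'engineering': {...}}
```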
Unsafe credential storage
Passwords often end up in chat threads, documents, shared folders, or scripts because people are trying to help teammates, move faster, or avoid blocking a project. DLP and EDR tools usually detect when sensitive information shows up in places it shouldn’t. A Human Risk Management platform brings these signals together and connects them to the individuals and roles involved. Instead of treating it as a technical alert, HRM helps security teams understand which groups tend to rely on these shortcuts and why. With that clarity, SAT teams can offer gentle nudges, timely reminders, or short micro-lessons that build healthier credential habits over time. This human-centered support reduces exposure and strengthens everyday security behavior across the organization.
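As a toy version of that DLP-to-person mapping, the sketch below scans shared text for credential-like strings and attributes each hit to the document's owner. The regexes are deliberately crude and the document shape is assumed; production DLP engines use far more sophisticated detectors.

```python
import re

# Crude credential-like patterns; real DLP rules are much more nuanced.
CREDENTIAL_PATTERNS = [
    re.compile(r"password\s*[:=]\s*\S+", re.IGNORECASE),
    re.compile(r"api[_-]?key\s*[:=]\s*\S+", re.IGNORECASE),
]

# Assumed shape: shared documents already attributed to an owning identity.
documents = [
    {"owner": "j.rivera", "path": "wiki/deploy-notes", "text": "password: hunter2"},
    {"owner": "a.chen",   "path": "readme",            "text": "run make test"},
]

def credential_exposures(docs):
    """Return (owner, path) pairs where credential-like strings were found."""
    hits = []
    for doc in docs:
        if any(p.search(doc["text"]) for p in CREDENTIAL_PATTERNS):
            hits.append((doc["owner"], doc["path"]))
    return hits

print(credential_exposures(documents))  # [('j.rivera', 'wiki/deploy-notes')]
```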
Password reuse across systems
Reusing passwords is a coping mechanism, not laziness. Identity systems and authentication logs can reveal where this pattern is happening, but they tie this information to the technology rather than the human behind it. A Human Risk Management program connects these signals back to specific users, roles, and teams. With this understanding, security teams can provide gentle nudges, helpful prompts, or short micro-trainings during natural workflow moments, making stronger password habits feel more manageable and less overwhelming. Over time, this reduces the likelihood that one stolen credential opens multiple doors for an attacker.
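One hedged way to surface reuse without handling plaintext at analysis time is to compare keyed fingerprints computed at password-change time on each system, as sketched below. The fingerprinting scheme, key handling, and data shapes are all assumptions for illustration; real identity platforms have their own mechanisms.

```python
import hashlib
import hmac

FLEET_KEY = b"example-secret-key"  # illustrative; would live in a secret store

def fingerprint(password: str) -> str:
    """Keyed hash computed at password-change time, so plaintext never leaves
    the system that set it. Same password => same fingerprint everywhere."""
    return hmac.new(FLEET_KEY, password.encode(), hashlib.sha256).hexdigest()

# Assumed input: (user, system, fingerprint) rows collected from each platform.
records = [
    ("j.rivera", "vpn",   fingerprint("Spring2024!")),
    ("j.rivera", "email", fingerprint("Spring2024!")),   # reuse across systems
    ("j.rivera", "crm",   fingerprint("unique-pass-9")),
]

def reused(rows):
    """Flag users whose fingerprint appears on more than one system."""
    seen = {}
    flags = set()
    for user, system, fp in rows:
        if (user, fp) in seen and seen[(user, fp)] != system:
            flags.add(user)
        seen[(user, fp)] = system
    return flags

print(reused(records))  # {'j.rivera'}
```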
Incorrect or incomplete MFA setup
People often skip MFA steps when they’re rushed or unsure how to complete the process. Identity and authentication tools can show which accounts are missing MFA, but the data is usually tied to devices or technical identities rather than the humans behind them. A Human Risk Management program connects these signals back to real users and roles, helping security teams see which groups consistently struggle with MFA setup. With that clarity, teams can offer timely prompts or automated micro-lessons that strengthen MFA adoption across the workforce.
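A minimal sketch of that grouping step, assuming identity-provider export rows already mapped to people and teams (the field names are hypothetical):

```python
from collections import Counter

# Assumed shape of identity-provider export rows.
accounts = [
    {"user": "j.rivera", "team": "sales",       "mfa_enrolled": False},
    {"user": "a.chen",   "team": "engineering", "mfa_enrolled": True},
    {"user": "m.okafor", "team": "sales",       "mfa_enrolled": False},
]

def mfa_gaps_by_team(rows):
    """Count accounts missing MFA per team, to target coaching rather than blame."""
    return dict(Counter(r["team"] for r in rows if not r["mfa_enrolled"]))

print(mfa_gaps_by_team(accounts))  # {'sales': 2}
```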
Over-privileged accounts
As employees take on new responsibilities, they often receive additional access that never gets removed, leaving accounts with more privilege than necessary. Identity governance tools surface these excess permissions, but the data is usually tied to accounts rather than clearly mapped to the individuals behind them. A Human Risk Management platform connects these signals to real users and roles, helping security teams pinpoint who holds unnecessary access and where it accumulates. With that clarity, managers and employees can right-size permissions in a constructive way that supports both productivity and security.
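To illustrate the right-sizing step, the sketch below diffs each person's actual entitlements against an assumed least-privilege baseline for their role and reports the excess. The baselines and entitlement names are invented for the example.

```python
# Hypothetical least-privilege baselines per role.
ROLE_BASELINE = {
    "analyst":  {"crm.read", "reports.read"},
    "engineer": {"repo.write", "ci.run"},
}

# Assumed identity-governance export: current entitlements per person.
grants = [
    {"user": "j.rivera", "role": "analyst",
     "entitlements": {"crm.read", "reports.read", "prod.db.admin"}},
    {"user": "a.chen", "role": "engineer",
     "entitlements": {"repo.write", "ci.run"}},
]

def excess_privilege(rows):
    """Entitlements a person holds beyond their role's baseline."""
    return {
        r["user"]: r["entitlements"] - ROLE_BASELINE.get(r["role"], set())
        for r in rows
        if r["entitlements"] - ROLE_BASELINE.get(r["role"], set())
    }

print(excess_privilege(grants))  # {'j.rivera': {'prod.db.admin'}}
```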
Shadow IT
Employees often turn to unapproved tools when they need to move quickly or fill workflow gaps. CASBs, SaaS discovery tools, network monitoring systems, and DLP platforms can detect when these unsanctioned apps appear, but the signals usually map to devices rather than the people using them. A Human Risk Management platform links this data back to real users and roles, helping security teams pinpoint who is relying on unapproved tools and then work directly with those employees to explain the risks, offer safer alternatives, or help bring the tools into compliance without slowing down their work.

These patterns are not personal failures. They are predictable outcomes of how people work within complex systems. A Human Risk Management program uses data from across the security stack to connect these signals back to real users, giving organizations the clarity to pinpoint where human risk exists and take targeted action. By identifying who needs support and why, security teams can reinforce safer habits, strengthen awareness where it’s needed most, and reduce the number of weak points an AI-driven attacker can exploit.
Which Human Risk Management controls reduce this identity-driven attack surface?
Human Risk Management reduces identity-driven exposure by tying signals from identity, DLP, SIEM, email, phishing, and training tools back to the humans behind them. This identity-level correlation helps security teams see who is creating risk, where it is concentrated, and what needs intervention. As Living Security highlights in its analysis of preparing for autonomous co-workers, organizations can only manage modern workforce risk when activity across the tech stack is anchored to human and/or agentic identity.
With that clarity, HRM enables focused controls such as targeted nudges and micro-training where risky habits cluster, campaigns to close MFA gaps, right-sizing of over-privileged accounts, coaching on safer credential storage, and guidance toward sanctioned alternatives to shadow IT.
HRM doesn’t replace identity or detection tools; it strengthens them by ensuring security teams know which humans need support to close the gaps attackers rely on.
How do credential harvesting and lateral movement change with AI?
Credential harvesting has not changed, but AI makes it dramatically faster. Once the AI agent collected usernames and passwords, it rapidly tested them, escalated access, mapped internal systems, identified privileged accounts, and exfiltrated data. Tasks that once took hours or days now take minutes. Human Risk Management helps organizations reduce the damage attackers can do by strengthening identity hygiene before credentials are stolen. Safer habits, like not reusing passwords, maintaining MFA, and limiting privilege, make lateral movement harder even for AI-driven attackers.
What culture and training shifts help employees defend against AI threats?
While employee behavior did not trigger this breach, organizations still need a workforce that understands the role human identity plays in modern attacks. Employees benefit from clear, practical guidance on topics like why credentials are valuable targets, how access shortcuts create exposure, why MFA matters, and when to report something unusual.
A Human Risk Management program supports these cultural shifts by linking signals from identity, email, DLP, phishing, and training tools back to individual users, making it clear which employees or teams need additional context or support. With that insight, security teams can deliver timely nudges or short, targeted training that aligns with the specific risks those users face.
Living Security’s guidance on incident response reinforces this approach, showing how identity context helps organizations prioritize effectively and act faster during an event.
By focusing on identity-linked insight rather than broad, generic messaging, HRM helps security teams strengthen awareness where it matters most and build a workforce prepared to recognize and reduce the risks AI-driven attackers rely on.
Why is Human Risk Management becoming central to cybersecurity in the age of AI?
AI speeds up every phase of the attack lifecycle, including reconnaissance, exploitation, credential harvesting, privilege escalation, and data exfiltration. What it does not change is the attacker’s core dependency: human identity. Anthropic’s findings reinforce what the broader industry already sees. Once attackers automate the technical work, the most reliable path into an organization is still a legitimate user account. Identity remains the doorway, and stolen credentials remain the most efficient key.
This reality is what makes Human Risk Management a strategic priority. HRM does not replace IAM, EDR, or detection tooling. It strengthens the part of the security stack that AI cannot bypass: the human decisions and access patterns that determine how valuable a compromised credential becomes. Even the strongest technical controls can be undermined when people reuse passwords, delay MFA setup, retain outdated privileges, or rely on unapproved tools. These behaviors create openings that AI-enabled attackers can exploit much faster once credentials are obtained.
By correlating signals from identity, DLP, SIEM, email, phishing, and training tools back to the human behind each account, HRM gives organizations the visibility needed to reduce the number of identity weak points attackers rely on. This allows security teams to intervene earlier, support the individuals who need help building safer habits, and shrink the opportunities available to AI-driven intruders.
In a time when machine-speed attacks target human-accessed systems, HRM is becoming central because it strengthens the one layer of defense no technology can automate: the humans whose identities shape the modern attack surface.