
August 26, 2025

Planning for a Future with Autonomous Co-Workers

How Living Security Thinks About Agentic AI

AI isn’t just powering chatbots anymore. Across industries, we’re seeing the rise of Agentic AI—autonomous systems capable of making decisions, moving data, authenticating, or triggering workflows without waiting for a human click. These tools are quickly becoming “digital co-workers,” operating right alongside employees and influencing how work gets done.

For security leaders, this introduces both opportunity and risk. Agentic AI can unlock enormous productivity gains, but it also expands the attack surface in new ways.

At Living Security, we believe the organizations that succeed will be the ones that bring Agentic AI into their Human Risk Management (HRM) frameworks from the very beginning.

The Expanding Attack Surface

Today’s attack surface isn’t just networks, devices, and human actions; it also includes the systems acting on behalf of humans. Autonomous AI agents can:

  • Access sensitive systems
  • Chain together actions across tools
  • Scale mistakes (or malicious use) at machine speed

This creates a new class of risks: not just behavioral mistakes, but identity misuse and external manipulation of agents through things like prompt injection or compromised APIs.

That’s why our approach to Human Risk Management has always looked beyond awareness training alone. Risk comes from three interconnected pillars (a brief code sketch follows the list):

  • Behaviors – Human or autonomous agent actions that introduce exposure.
  • Identity & Access – Who (or what) has permissions, and whether they’re governed properly.
  • External Threats – Adversaries who exploit those behaviors or access gaps.
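
To make these pillars concrete, here is a minimal Python sketch of how risk signals for humans and autonomous agents might be tagged against them. The class and field names are illustrative assumptions, not Living Security’s actual data model.

```python
from dataclasses import dataclass
from enum import Enum


class Pillar(Enum):
    """The three interconnected pillars described above."""
    BEHAVIORS = "behaviors"
    IDENTITY_AND_ACCESS = "identity_and_access"
    EXTERNAL_THREATS = "external_threats"


@dataclass
class RiskSignal:
    """One observation tied to a human or an autonomous agent (illustrative schema)."""
    actor_id: str      # human user or AI agent identifier
    actor_type: str    # "human" or "agent"
    pillar: Pillar     # which pillar the signal falls under
    description: str   # e.g. "standing token with admin scope"
    severity: int      # 1 (low) through 5 (critical)


def riskiest_actors(signals: list[RiskSignal], top_n: int = 10) -> list[str]:
    """Rank humans and agents together by total severity across all pillars."""
    totals: dict[str, int] = {}
    for s in signals:
        totals[s.actor_id] = totals.get(s.actor_id, 0) + s.severity
    return sorted(totals, key=totals.get, reverse=True)[:top_n]
```

Ranking humans and agents in the same list is the point: one view of who, or what, carries the most risk.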

Why Agentic AI Needs HRM Guardrails

Living Security and Cyentia Institute research already shows that 10% of users drive the majority of risky activity. Now imagine those same users directing autonomous agents with standing access and decision-making authority. The potential for scaled consequences is real.

This is why Agentic AI must be treated with the same rigor as human identities (a brief sketch follows this list):

  • Each agent requires a unique ID, clear ownership, and an auditable trail.
  • Permissions should be role-based, time-bound, and revocable in real time.
  • Guardrails should exist to cut off unsafe workflows, revoke tokens, or disable access automatically when risks are detected.
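
As a minimal sketch of those guardrails, here is what an agent identity record might look like, assuming a generic Python layer rather than any specific IAM product; the names (AgentIdentity, AgentGrant, revoke_all) are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone


@dataclass
class AgentGrant:
    """A role-based, time-bound permission issued to an AI agent."""
    role: str             # e.g. "crm.read-only"
    expires_at: datetime  # time-bound: no standing access
    revoked: bool = False

    def is_active(self, now: datetime) -> bool:
        return not self.revoked and now < self.expires_at


@dataclass
class AgentIdentity:
    """Each agent gets a unique ID, a human owner, and an auditable trail."""
    agent_id: str                               # unique, never shared between agents
    owner: str                                  # accountable human or team
    grants: list[AgentGrant] = field(default_factory=list)
    audit_log: list[str] = field(default_factory=list)

    def grant(self, role: str, ttl: timedelta) -> None:
        now = datetime.now(timezone.utc)
        self.grants.append(AgentGrant(role, now + ttl))
        self.audit_log.append(f"{now.isoformat()} GRANT {role} ttl={ttl}")

    def revoke_all(self, reason: str) -> None:
        """Guardrail hook: cut off access in real time when a risk is detected."""
        now = datetime.now(timezone.utc)
        for g in self.grants:
            g.revoked = True
        self.audit_log.append(f"{now.isoformat()} REVOKE_ALL reason={reason}")
```

An invoice-processing agent, for instance, could be granted a read-only ERP role for a single shift; when the TTL expires or revoke_all fires, access ends without waiting for a human to notice.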

Living Security’s Forward-Looking HRM Vision

We see Human Risk Management evolving into blended workforce governance—where both humans and AI agents are visible, accountable, and controllable within the same framework.

That means:

  • Visibility across humans and agents to understand who or what poses risk, and why.
  • Accountability for both, using scorecards, metrics, and ownership.
  • Automation of mitigations, not just alerts: think workflow suspension, access revocation, or contextual nudges delivered at the moment of risk (sketched below).
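
Here is one way that last point might look in practice: a small rule table that maps detected risk conditions to automatic responses rather than alerts alone. The hook functions (suspend_workflow, revoke_tokens, send_nudge) are placeholders for whatever orchestration, IAM, and coaching tools an organization already runs, not a specific product API.

```python
from dataclasses import dataclass
from typing import Callable


# Placeholder hooks into orchestration, IAM, and coaching layers.
def suspend_workflow(actor_id: str) -> None:
    print(f"suspending active workflows for {actor_id}")


def revoke_tokens(actor_id: str) -> None:
    print(f"revoking tokens issued to {actor_id}")


def send_nudge(actor_id: str, message: str) -> None:
    print(f"nudging {actor_id}: {message}")


@dataclass
class MitigationRule:
    """Maps a detected risk condition to an automatic response, not just an alert."""
    name: str
    condition: Callable[[dict], bool]  # evaluated against a risk event
    action: Callable[[str], None]      # applied to the human or agent involved


RULES = [
    MitigationRule(
        name="agent-prompt-injection-suspected",
        condition=lambda e: e["actor_type"] == "agent" and e["signal"] == "prompt_injection",
        action=suspend_workflow,
    ),
    MitigationRule(
        name="over-scoped-or-stale-token",
        condition=lambda e: e["signal"] == "overscoped_token",
        action=revoke_tokens,
    ),
    MitigationRule(
        name="risky-human-behavior",
        condition=lambda e: e["actor_type"] == "human" and e["severity"] >= 4,
        action=lambda actor_id: send_nudge(actor_id, "This action looks risky. Please confirm before continuing."),
    ),
]


def handle_event(event: dict) -> None:
    """Apply the first matching mitigation at the moment of risk."""
    for rule in RULES:
        if rule.condition(event):
            rule.action(event["actor_id"])
            return
```

The point is the shape, not the specific rules: detection and response live in the same loop, so a risky agent workflow is suspended or a token revoked in seconds rather than after a ticket is triaged.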

And as part of our ongoing work, we’re aligning this vision with MITRE ATT&CK, NIST CSF, and industry input so that organizations can rely on a shared, vendor-neutral framework that evolves as the role of Agentic AI expands.

Why This Matters Now

The adoption curve for Agentic AI is steep. Within the next year, many organizations will see these systems embedded in day-to-day operations. Waiting to apply governance means waiting until after the risks have already scaled.

At Living Security, we’re helping customers prepare now by:

  • Quantifying human and agent risk together
  • Aligning controls to real-world threat models
  • Building mitigation playbooks that extend to autonomous systems
  • Opening our HRM Framework to industry collaboration

Closing Thought

Agentic AI is not a future problem—it’s an emerging reality. Treating these agents as part of your human risk strategy is how organizations will scale innovation safely. With the right guardrails in place, autonomous co-workers can be an asset instead of a liability.

And this work has already begun. Living Security is adding new capabilities through bi-directional integrations with partners, making it easier for security teams to see, understand, and manage risk from both humans and AI agents in real time. These integrations provide the foundation for applying HRM principles to the blended workforce—where visibility, accountability, and control extend across people and machines alike.
