
Redazione RHC: 3 December 2025 11:24
The widespread adoption of artificial intelligence in businesses is profoundly transforming operational processes and, at the same time, introducing new security vulnerabilities. Companies are using these systems to increase productivity and strengthen competitiveness, but their increasing autonomy requires a rethinking of control and governance rules.
AI-powered assistants no longer perform only support tasks, such as composing emails or writing summaries. In many organizations, they can now initiate work orders, analyze technical logs, manage accounts, and respond automatically to anomalies. These functions, typical of the new generation of “agentic” systems, allow them to take direct action in place of human operators.
The most significant step in this evolution is the emergence of agents capable of interpreting objectives, defining a sequence of actions, calling APIs, and engaging other agents, all without prior review by security teams. Across departments, from marketing to DevOps, these systems make decisions and respond to failures faster than humans can oversee them.
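To make the pattern concrete, here is a minimal, hypothetical sketch in Python of such an agentic loop: a planner decides the next step and the agent executes tool calls until the goal is considered achieved, with no human approval in between. The planner, tool names, and types are illustrative assumptions, not a real framework's API.

```python
# Hypothetical sketch of an "agentic" control loop: the agent receives a
# goal, plans a step, executes a tool call, and repeats until done, with
# no human approval step in between. All names (plan_next_step, TOOLS,
# Goal) are illustrative, not a real framework API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Goal:
    description: str

# Tool registry: each entry maps a tool name to a callable the agent may invoke.
TOOLS: dict[str, Callable[[str], str]] = {
    "read_logs": lambda query: f"log lines matching {query!r}",
    "open_ticket": lambda summary: f"ticket created: {summary}",
}

def plan_next_step(goal: Goal, history: list[str]) -> tuple[str, str] | None:
    """Stand-in for an LLM planner: returns (tool_name, argument) or None when done."""
    if not history:
        return ("read_logs", goal.description)
    if len(history) == 1:
        return ("open_ticket", f"anomaly found while handling: {goal.description}")
    return None  # goal considered achieved

def run_agent(goal: Goal) -> list[str]:
    history: list[str] = []
    while (step := plan_next_step(goal, history)) is not None:
        tool, arg = step
        history.append(TOOLS[tool](arg))  # direct action, no human in the loop
    return history

print(run_agent(Goal("disk errors on host-42")))
```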
Intelligent agents are significantly different from traditional non-human identities, such as service accounts or API keys. They don’t follow fixed operational flows: they adapt their methods and access multiple systems based on context.
This flexibility makes them powerful tools, but also potential vulnerabilities, as they can move between databases, CRMs, and internal platforms with very broad privilege levels. Complexity increases further when one agent calls upon others, making it difficult to link the final action to the original human source.
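One way to preserve that link is to propagate an "on-behalf-of" chain with every delegated call, so attribution survives agent-to-agent hops. The sketch below is a simplified illustration; the class and function names are assumptions, not an established standard.

```python
# Hypothetical sketch of an "on-behalf-of" chain: every time one agent
# invokes another, the caller's identity is appended, so the final action
# can still be traced back to the human who started the request.
from dataclasses import dataclass, field

@dataclass
class RequestContext:
    origin_user: str                                # the human who initiated the workflow
    chain: list[str] = field(default_factory=list)  # agents traversed so far

    def delegate_to(self, agent_id: str) -> "RequestContext":
        # Return a new context with the delegation chain extended.
        return RequestContext(self.origin_user, self.chain + [agent_id])

def perform_action(ctx: RequestContext, action: str) -> None:
    # An audit log line that preserves the full attribution path.
    path = " -> ".join([ctx.origin_user] + ctx.chain)
    print(f"AUDIT: {action} (on behalf of: {path})")

ctx = RequestContext("alice@example.com").delegate_to("ops-agent").delegate_to("db-agent")
perform_action(ctx, "rotate credentials")
# AUDIT: rotate credentials (on behalf of: alice@example.com -> ops-agent -> db-agent)
```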
Many companies are witnessing the rise of “shadow AI,” consisting of unofficial tools introduced by teams without a formal review process. New services are being activated by product managers, meeting bots are being connected to internal systems, and developers are experimenting with local assistants capable of querying sensitive data. These initiatives often escape traditional visibility mechanisms , and even security systems struggle to identify agents running on cloud functions or virtual machines.
Faced with identities operating at machine speed, security teams are introducing new forms of governance. Each agent must be associated with an accountable person, have a defined lifecycle, and carry clear information about the intent behind each operation.
Default permissions should be limited to read-only, while write privileges should be granted with specific time limits. However, many enterprises still lack standard procedures for deactivating agents that are no longer needed, leaving neglected systems running with outdated credentials or excessive privileges.
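A minimal sketch of that permission model, assuming a simple in-memory grant store: read scopes are the default capability, and write scopes exist only with an explicit expiry. The names (AgentGrant, can_write, and so on) are illustrative.

```python
# Sketch of the permission model described above: read-only by default,
# with write scopes granted only for an explicit, limited time window.
# All names here are illustrative assumptions, not a real product's API.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentGrant:
    agent_id: str
    read_scopes: set[str] = field(default_factory=set)               # default capability
    write_grants: dict[str, datetime] = field(default_factory=dict)  # scope -> expiry

    def grant_write(self, scope: str, ttl: timedelta) -> None:
        # Write access is always time-boxed; there is no permanent write flag.
        self.write_grants[scope] = datetime.now(timezone.utc) + ttl

    def can_write(self, scope: str) -> bool:
        expiry = self.write_grants.get(scope)
        return expiry is not None and datetime.now(timezone.utc) < expiry

grant = AgentGrant("billing-agent", read_scopes={"crm"})
grant.grant_write("crm", ttl=timedelta(hours=2))
assert grant.can_write("crm")           # within the two-hour window
assert not grant.can_write("database")  # never granted, so denied
```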
For this reason, some organizations are introducing formal registries of active agents, documenting each one's purpose, owner, permissions, and validity period. This is a necessary step toward bringing these new identities into a structured management framework. The goal is not to curb the adoption of AI, but to ensure it operates within clear boundaries, just as human personnel are not granted administrative access on day one.
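Such a registry can be structurally very simple. The sketch below, with illustrative field names, records the purpose, owner, permissions, and validity period of each agent, and sweeps out entries past their expiry, addressing the neglected-agent problem described above.

```python
# Minimal sketch of the kind of agent registry described above: each entry
# records purpose, accountable owner, permissions, and a validity period,
# and expired agents are swept out instead of lingering with stale
# credentials. All field and function names are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class AgentRecord:
    agent_id: str
    purpose: str
    owner: str            # the accountable human for this identity
    permissions: list[str]
    valid_until: date

registry: dict[str, AgentRecord] = {}

def register(rec: AgentRecord) -> None:
    registry[rec.agent_id] = rec

def sweep_expired(today: date) -> list[str]:
    """Deactivate agents past their validity period and report them."""
    expired = [aid for aid, rec in registry.items() if rec.valid_until < today]
    for aid in expired:
        del registry[aid]  # in practice: revoke credentials, then archive
    return expired

register(AgentRecord("meeting-bot", "summarize standups", "pm@example.com",
                     ["calendar:read"], date(2025, 6, 30)))
print(sweep_expired(date(2025, 12, 3)))  # ['meeting-bot']
```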
The growing use of autonomous agents therefore requires automated control mechanisms capable of limiting permitted operations, recording behaviors, and blocking any anomalous processes before they cause damage. Since these systems already interact with customers, financial flows, and critical infrastructure, inadequate management of so-called “shadow AI” risks turning isolated anomalies into structural problems.
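As an illustration of such a control point, the following sketch checks each attempted operation against an allowlist, logs it, and blocks bursts above a crude rate threshold that stands in for real anomaly detection. Names and thresholds are assumptions for the example.

```python
# Sketch of an automated runtime control: every operation an agent attempts
# is checked against an allowlist, logged, and blocked if it looks anomalous
# (here, a simple rate threshold stands in for real anomaly detection).
from collections import defaultdict

ALLOWED_OPS = {"read_logs", "open_ticket"}   # per-agent allowlist (illustrative)
RATE_LIMIT = 5                               # max operations per window (illustrative)

op_counts: dict[str, int] = defaultdict(int)
audit_log: list[str] = []

def authorize(agent_id: str, operation: str) -> bool:
    audit_log.append(f"{agent_id} requested {operation}")  # record every attempt
    if operation not in ALLOWED_OPS:
        audit_log.append(f"BLOCKED {agent_id}: {operation} not allowlisted")
        return False
    op_counts[agent_id] += 1
    if op_counts[agent_id] > RATE_LIMIT:                   # anomalous burst
        audit_log.append(f"BLOCKED {agent_id}: rate limit exceeded")
        return False
    return True

assert authorize("ops-agent", "read_logs")
assert not authorize("ops-agent", "delete_database")  # blocked before damage
```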
In this scenario, it is necessary to recognize a third category of identities, alongside human users and traditional service accounts: autonomous agents with traceable responsibilities and rigorous access rules. Only then can these technologies be integrated as “advanced colleagues” rather than as simple unsupervised scripts.