As enterprises navigate a new era of cyber-resilience threats, the rise of generative AI introduces distinct security challenges. Adopting AI agents in workflows raises the risk profile because of the access those agents require to sensitive data and documents.
According to Nicole Carignan, VP of strategic cyber AI at Darktrace, the use of multi-agent systems opens up new attack vectors and vulnerabilities that need to be secured from the start. The interconnected nature of multi-agent systems amplifies the impact of potential vulnerabilities.
Why AI agents pose such a high security risk
AI agents, which act autonomously on behalf of users, have gained popularity for their ability to streamline workflows and perform a wide range of tasks. But they present a dilemma for security professionals: agents need broad access to data to be useful, while sensitive information still has to be protected. That tension raises questions of accuracy, accountability, and compliance.
Chris Betz, CISO of AWS, highlights the significance of retrieval-augmented generation (RAG) and agentic use cases in security considerations. Organizations need to evaluate their default sharing policies to prevent agents from accessing irrelevant or sensitive data.
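One way to act on Betz's point is to enforce access policies at retrieval time, so a RAG pipeline never hands an agent documents its principal is not entitled to see. The sketch below is a minimal, hypothetical illustration (the `Document` class, role names, and `filter_for_principal` helper are all assumptions, not any vendor's API):

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: set = field(default_factory=set)  # roles permitted to read this doc

def filter_for_principal(docs, principal_roles):
    """Drop any retrieved document the requesting principal may not see."""
    return [d for d in docs if d.allowed_roles & principal_roles]

# Example corpus: one public doc, one restricted to HR.
docs = [
    Document("d1", "public FAQ", {"employee", "agent"}),
    Document("d2", "salary data", {"hr"}),
]

# An agent acting with only the "agent" role sees just the public doc.
visible = filter_for_principal(docs, {"agent"})
```

Filtering after retrieval but before the documents reach the model keeps a permissive default-sharing policy from leaking restricted content into a prompt.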
AI agent vulnerabilities
While gen AI has raised general awareness of these vulnerabilities, agents expand the attack surface. Data poisoning, prompt injection, and social engineering can all exploit weaknesses in multi-agent systems, underscoring the need for robust security measures.
Security issues surrounding human employee access can extend to agents, making it crucial to control and monitor the data they can access throughout workflows.
Give agents an identity
Assigning specific access identities to agents can enhance security measures. Jason Clinton, CISO of Anthropic, advocates for recording the identity of both agents and the human responsible for agent requests to improve accountability.
Implementing employee-like access and identification protocols for agents can help organizations manage data access and rethink information-sharing practices.
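Clinton's suggestion of recording both the agent's identity and the human it acts for can be sketched as a simple structured log entry. This is a minimal illustration, assuming hypothetical field names (`agent_id`, `on_behalf_of`) rather than any specific identity product:

```python
import json
import time

def log_agent_request(agent_id: str, on_behalf_of: str, action: str) -> str:
    """Record the agent identity and the responsible human for every request."""
    entry = {
        "ts": time.time(),          # when the request was made
        "agent_id": agent_id,       # the agent's own identity
        "on_behalf_of": on_behalf_of,  # the accountable human principal
        "action": action,           # what the agent attempted
    }
    return json.dumps(entry)

record = log_agent_request("agent-7", "alice@example.com", "read:contracts")
```

Treating the pair (agent, human principal) as the unit of identity is what makes later accountability questions answerable.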
The old-fashioned audit isn’t enough
Enterprises can leverage agentic platforms like Pega’s AgentX to monitor agent activities and ensure transparency in workflows. Auditing every step an agent takes can provide insights into their actions and improve security measures.
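The idea of auditing every step an agent takes can be sketched as a thin wrapper that records each tool call and its result. This is a hypothetical, minimal example, not Pega's implementation; the class and method names are assumptions:

```python
class AuditedAgent:
    """Minimal sketch: wrap every tool invocation so each step leaves a trail."""

    def __init__(self):
        self.trail = []  # ordered audit log of every step taken

    def run_tool(self, tool_name, fn, *args):
        # Record the step before executing, then attach the outcome.
        entry = {"tool": tool_name, "args": args}
        self.trail.append(entry)
        result = fn(*args)
        entry["result"] = result
        return result

agent = AuditedAgent()
total = agent.run_tool("add", lambda a, b: a + b, 2, 3)
```

Because the trail captures inputs and outputs per step, a reviewer can replay exactly what the agent did and in what order.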
While audits, timelines, and identification are essential, ongoing exploration and experimentation with AI agents will lead to more targeted solutions for addressing security challenges.