Agentic AI ‘personal agents’ are quickly rising in popularity and opening new opportunities for users. At the same time, they are also raising major red flags for CIOs. Read on…
Before you start to use (or allow use of) personal agents in your organization, there are five things you need to address.
What is a personal agent?
As we learn more about how agentic AI works, we find new ways to leverage it in our everyday work. The latest form is the personal agent. Unlike copilots and chatbots, where the user makes requests and receives responses, personal agents leverage the user’s specific access to data and systems and perform work on behalf of the user. While this opens the door to new, exciting possibilities, it also creates major concerns for CIOs by introducing new pathways for malicious activity and data leaks.
Personal agents such as OpenClaw and Anthropic’s Cowork are quickly catching on as fascinating and incredibly powerful tools. They can surface insights and do ‘work’ on our behalf. They understand our work environment and take charge of actions. They function as a personal agent that knows who we are, what we have access to and what we need to do. In all, a personal agent is an extension of our work: a productivity tool we all want by our side.
However, there’s a gotcha. A big gotcha for enterprises.
For personal agents to perform work as the user, they need access to your files, documents, emails and applications. Remember, these agents work on your behalf using your credentials. Whatever you have access to, your personal agent can access as well. Of course, the user does have some control over what they give their personal agent access to.
By handing over your credentials, you allow the agent to reach everything you can already reach. In doing so, you give the personal agent agency. This is what enables it to work. Some may view this as relatively benign, since the agent is limited to what the user can already access. Unfortunately, it is not that simple.
On a personal level, giving an agent access to your personal email and documents may seem like a manageable risk. On a professional level, personal agents are setting off alarm bells for CIOs. A recent discussion in our CIO Think Tank centered almost exclusively on the concerns raised by personal agents in the enterprise. In some organizations, CIOs feel pressured to open the door to personal agents without fully understanding…and mitigating…the risk.
According to a 2025 Accenture study, 66% of organizations recognize AI’s impact on cybersecurity. However, only 37% have the processes in place to assess AI tool security before deployment. That was before personal agents.
Five items to be aware of
There is a huge opportunity for personal agents…even more so in the enterprise. Personal agents can make users more efficient and ultimately increase their value to the organization. At a time when jobs are being eliminated and then created for those who leverage AI, more than just efficiency is on the table. The tradeoff is that the risks need to be addressed first. Otherwise, the consequences could be catastrophic for the business.
Personal agents have access to more than you realize
There are many reasons why the enterprise puts governance models in place around systems and data. Governance around data and systems is not new but has seen a resurgence in the era of AI.
For any given user, credentials grant access to the things they already know about: the documents, systems and emails we touch every day. However, those same credentials often grant access to files, documents and applications the user isn’t even aware of.
Remember that governance models are based on groupings, not always individual access. Everyone in HR has access to a shared file area or system. Does everyone need access to every directory, system and file? No. But it is easier to manage broad groupings than specific user-level access across the board. That means a given user often has access to more than they need or use. So, while the user may not be aware of this…the agent will be!
The agent needs to know everything it has access to in order to provide the best service for the user. That makes sense. It also exposes flaws in governance models, which could inadvertently expose enterprise data and systems more than intended. This is likely to add another factor to discussions on how to reapproach governance in the era of AI.
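The governance gap described above can be made concrete with a small audit. This is a hypothetical sketch, not any real IAM tooling: the group names, resources and access log are invented for illustration. It simply compares what a user’s group memberships grant against what the user has actually touched, surfacing the standing access an agent would inherit without the user ever knowing.

```python
# Hypothetical sketch: flag over-provisioned access by comparing what a user's
# group memberships grant against what the user has actually used recently.
# Group names, resources and the usage list are illustrative only.

GROUP_GRANTS = {
    "hr-all": {"hr-shared-drive", "benefits-db", "payroll-archive"},
    "staff": {"intranet", "email"},
}

def effective_access(user_groups):
    """Union of everything the user's groups grant -- what an agent would see."""
    granted = set()
    for group in user_groups:
        granted |= GROUP_GRANTS.get(group, set())
    return granted

def unused_grants(user_groups, recently_used):
    """Resources granted but never touched: access the user may not know about."""
    return effective_access(user_groups) - set(recently_used)

# A user in HR who only ever opens email and the shared drive still carries
# standing access to payroll-archive, benefits-db and the intranet.
print(sorted(unused_grants(["hr-all", "staff"], ["email", "hr-shared-drive"])))
```

Running an audit like this across group-based grants is one way to see how much of a user’s footprint an agent would inherit before granting it anything.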
Humans are likely to make ethical decisions, which is why we talk about the ‘human in the loop’ when it comes to AI. Of course, there are humans with malicious intent too. Personal agents do work on behalf of the user…potentially at scale and without this filter.
Individuals are in control without oversight
Using personal agents is a personal choice, not an enterprise choice. Individual users can access…and give access to…personal agents on their own. They don’t need to ask permission, nor do they face scrutiny beyond what their credentials already allow. Users can unilaterally hand their credentials to a personal agent and choose what tasks it performs, all without oversight.
The concern is with personal agents from untrusted sources and/or those that have not gone through security audits. Users are focused on getting their work done and improving the process, not on backend security audits and governance models. Typically, those functions are handled outside the purview of the user. For most users, the intent is not malicious, but they are inadvertently opening the enterprise to risk…on their own.
Personal agent extensions can be a conduit for malware
Personal agents have many capabilities out of the box. You can also add extensions or plugins that provide additional functionality. Like the personal agents themselves, those extensions are often untested and unverified. Yet they have access to the user’s credentials. In addition, extensions can come from third parties not affiliated with the personal agent itself.
For the nefarious entity, this opens the door to a new conduit for malware.
Without realizing it, a user could allow malware access to sensitive documents and systems. We have already seen examples in the wild where personal agent extensions were malware. More sophisticated approaches are likely in the wings that will be harder to detect.
Unfortunately, there is no easy way to observe and detect this type of malware short of learning the user’s behavior and flagging when it changes. This is along the same lines as tools that detect a different human at the keyboard based on the stroke and cadence of their typing. Everyone has a unique ‘signature’ in how they use a keyboard…and that can be detected. So you know the credentials are the user’s, but the keyboard signature doesn’t match. We will likely need a similar level of sophistication to understand personal agent and user behaviors. This is something we largely don’t do today in the enterprise.
Even if you wanted to, because personal agents are so new, we don’t yet have a behavior map to leverage even if we could observe them.
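The behavioral-signature idea above can be sketched in miniature. This is a deliberately crude, hypothetical illustration, not a real detection product: the action names, frequencies and the 0.5 threshold are all invented. It builds a baseline of how often a user performs each action, then flags a recent window dominated by actions the user rarely or never performs, the way an agent bulk-exporting files under valid credentials might.

```python
# Hypothetical sketch: a crude behavioral baseline for flagging out-of-profile
# activity performed under a user's credentials. Real detection would need far
# richer signals; action names and thresholds here are illustrative only.

from collections import Counter

def build_baseline(history):
    """Relative frequency of each action type in the user's past activity."""
    counts = Counter(history)
    total = len(history)
    return {action: n / total for action, n in counts.items()}

def anomaly_score(baseline, window):
    """Fraction of recent actions the user has rarely (<5%) or never performed."""
    rare = [a for a in window if baseline.get(a, 0.0) < 0.05]
    return len(rare) / len(window)

# Baseline: a user who mostly reads email and edits documents.
history = ["read_email"] * 60 + ["edit_doc"] * 35 + ["query_db"] * 5
baseline = build_baseline(history)

# An agent suddenly bulk-exporting files looks nothing like that baseline.
window = ["export_file", "export_file", "query_db", "export_file"]
if anomaly_score(baseline, window) > 0.5:
    print("flag: activity deviates from the user's behavioral signature")
```

Even a toy model like this shows the core difficulty: the credentials are valid, so the only signal left is whether the pattern of activity still looks like the user.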
Users need education to raise awareness of opportunities and consequences
The hype and focus have been on the opportunities that personal agents bring. The new capabilities and the chance to work more efficiently are considerably alluring. However, users need to understand the consequences and how they can protect themselves. One could argue they don’t know any better. The other side of the argument is that today’s users should inherently be more careful and less trusting of technology. That’s not to suggest technology is bad. It is to suggest there is a balance to be struck. For every opportunity, there is someone looking to exploit it.
Sidenote: Changing user behavior is a broader issue that we, as CIOs and CISOs, need to revisit in the agentic era.
In response, some organizations are already taking a ‘block access’ approach until they can get their arms around the downsides. However, like when ChatGPT first came out, this will only work to a certain degree and is far from foolproof.
Which personal agents to use?
With all these considerations and competing personal agents cropping up, are some better than others? Unfortunately, there is no definitive answer right now. Generic personal agents catching on like wildfire are probably riskier than those coming from established companies like Anthropic and Google.
One could surmise that an established company is more likely to take additional precautions. Of course, that is a big assumption that should not simply be relied upon.
In addition, AI companies are looking for new sources of data to enhance their products. By giving your personal agent…and by extension the AI company…access to your confidential data via your credentials, you take on their risks as well. So there is a potential downside either way that must be considered.
CIO perspective
Personal agents have enormous potential for positive impact in the enterprise. At the same time, technology is advancing at such an accelerated pace that it is hard to keep up and ensure the right balance of opportunity and risk is properly evaluated.
CIOs need to actively engage in the conversation around personal agents. The potential for inadvertent malicious activity and data breaches is far too great. At the same time, CIOs must consider how to increase the velocity at which their organization operates. Otherwise, they will never keep up with the pace of change.
Personal agents are both exciting and concerning. They open the door to new possibilities to increase employee productivity. They also open the door to malicious use of systems and breach of data. Think about a DDoS attack from the inside.
CIOs must protect the crown jewels…once they’re out, you can’t get them back.