
This is the second post in a series about my use of AI agents, broadly speaking, and Claude Cowork, specifically. However, there are a number of foundational topics we need to explore first, together.
The first post was about going slow and not feeling pressured to jump in before you are ready. This one is about understanding the actual risks, so that when you do decide to use these tools, you are making an informed choice. I can't say this enough:
This is your permission to go slow and resist the temptation to jump in head first.
Let's start with where I stand in all this stuff. I am not a security expert. I am is someone who has spent a lot of time reading and thinking about this, and who wants to help translate some of what I have learned more from a beginner's mind. So let us talk about what can actually go wrong. This isn't intended to be a complete description of all the things. More so, these are issues that I'm not seeing talked anywhere near enough in the discourse about what's possible with these agentic AI tools.
Email as an Example of a Huge Risk Point
One of the most important things to understand about AI tools that integrate with your accounts is that access tends to cascade. Email is the clearest example of this, in terms of how this access could be compromised.
Your email is not just a place where messages live. It is the recovery address for nearly every other account you have. Your bank. Your health portal. Your university systems. Your social media. If someone, or something, gains access to your email, they can use it to trigger password resets on almost everything else. And with two-factor authentication now widely used, that access can extend to getting codes texted to your phone or sent to that same email, which means an attacker can potentially lock you out of your own accounts entirely.
Dave sent me a Daring Fireball post yesterday about a guy who documented a phishing attempt that could easily have resulted in some bad stuff happening: Matt Mullenweg Documents a Dastardly Clever Account Phishing Scam. When I read it, I had the thinking of how easily even the more technical among us could have fallen victim to that, especially if we were rushing and not paying as much attention.
Getting access to people's email accounts is the mechanism behind a large proportion of real-world identity theft and account takeover. Before you grant any AI tool access to your email, it is worth asking what level of access you are granting. Read-only is very different from the ability to send, delete, or manage. And even read-only access means the tool can see, and in some cases store or use, the contents of your messages.
Personal Data Risks
There is a parallel risk that operates more slowly and less dramatically, but is no less real. Your personal data, gathered across apps, websites, AI tools, and services you use every day, is part of a large and largely unregulated commercial ecosystem. Data brokers collect it, package it, and sell it, often without your knowledge and without any direct relationship with you.
This matters in the context of AI tools because many of them, especially free or low-cost ones, have business models that depend on data. When a tool is free, it is worth asking what you are providing in exchange. Sometimes the answer is your usage patterns. Sometimes it is the content of your conversations. Sometimes it is both.
Anthropic, the company behind Claude, updated its privacy policy in 2025 in ways that are worth knowing about. Previously, Claude did not use consumer conversations to train its models. That changed. If you use Claude on a Free, Pro, or Max plan and did not actively opt out, your conversations may now be used for model training and retained for up to five years. The setting is in Claude Settings under Privacy. You are looking for the toggle labeled “Help improve Claude.” Turning it off means your new conversations will not be used for training.
This is not unique to Anthropic. It is an industry-wide pattern worth paying attention to across any AI tool you use. Stanford's Human-Centered Artificial Intelligence (HAI) provides a history of privacy policies and a cautionary note: Be Careful What You Tell Your AI Chatbot.
If you work for a university, before you do anything with AI, familiarize yourself with existing policies around your use. Ohio University calls out the key risks to be aware of, as well as instructions for how members of their community should use AI in response.
Copyright Issues
I want to share something that is personal to me, though not at all unique to me.
My first book was published by Stylus, an independent academic press that was later acquired by Routledge. Routledge's parent company, Informa, subsequently entered into agreements with AI companies to license academic content for model training. Authors were not asked for permission. Many were not notified at all.
The Authors Guild has been working to establish that publishers cannot license authors' works for AI training without seeking permission by separate agreement. Their position is that AI training rights were never contemplated in publishing agreements and cannot simply be assumed. They also maintain guidance on practical steps authors can take to try to protect their work going forward.
If you have published with an academic press, it is worth checking whether your publisher has entered into any AI licensing agreements. Ithika S+T has a Generative AI Licensing Agreement Tracker that shows which publishers have signed deals to allow AI companies to train on scholarly content.
There is also a current legal settlement related to this worth knowing about. Anthropic was sued by authors whose books were acquired from piracy sites and used to train Claude. A settlement has been proposed. If you have published books, you can search the settlement works list to see if your titles are included. The deadline to file a claim is March 30, 2026.
Other Risks
A few additional categories are worth at least brief mention.
Prompt injection is a risk specific to AI agents, tools that can browse the web, read documents, or take actions on your behalf. A malicious actor can embed hidden instructions in a webpage or document that the AI reads, causing it to take actions you did not intend. Some scholars have hidden AI prompts in their article submissions in an attempt to garner better reviews, just one of many examples illustrating the need for more heightened verification methods and protocols.
Data breaches at AI companies are also a real possibility, like this one regarding a Meta AI leak. When you have conversations with an AI tool, those conversations are stored on servers. If those servers are compromised, your conversations could be exposed. Deleting conversations when you are done with them is one practical step you can take.
Surveillance creep is a slower and more diffuse risk. The more you connect AI tools to your accounts, your calendar, your location, your habits, the more detailed a picture exists of how you live and work. That picture may be used by the AI company itself, or it may become accessible to others through data sharing agreements, legal requests, or breaches. The question is not only “is this safe today” but “do I want this data to exist at all.” This is particularly an issue because of how quickly corporations can change their policies and practices, making it that much more difficult to keep up and mitigate risk. This example from Clara Hawking on LinkedIn related to something many of us have done in the past describes the insidious nature of this slow creep well.
I know I've not come close to naming all the risks, but at least wanted to mention a few issues that come to my mind, as I decide my own risk profile for these sorts of endeavors.
Where to Learn More
If you want to go deeper on any of these risks, a few resources to explore further:
The Electronic Frontier Foundation covers digital rights, surveillance, and AI privacy for a general audience. Their guides are practical and regularly updated.
Kashmir Hill's reporting at the New York Times covers privacy and technology in a human, narrative way that is genuinely readable. She has written extensively about data brokers, facial recognition, and the ways AI is reshaping privacy in everyday life.
Leon Furze's Teaching AI Ethics series has a full section on privacy and data that goes into more depth than I have here, with research citations and teaching applications if you want to explore any of this with students.
The next post in this series moves from the general landscape to the specific framework I have used for my own decisions: three categories of considerations that have helped me decide what to give Claude access to and what to keep off limits.

