Permission to Go Slow

By Bonni Stachowiak | March 24, 2026

Featured image: robot statues made of pottery in a garden

I'm beginning a series of posts about my experimentation with Claude Cowork, specifically, but also about the landscape of AI agents, more broadly. However, I want to say something before we get into caveats and considerations, security settings and privacy policies, and all the rest of it. Something I'm not hearing explicitly stated anywhere near enough in conversations about AI.

You do not have to do any of this yet. Slow down.

There is enormous pressure, most of it implicit, to jump in, try the tools, connect the apps, grant the access, and figure it out as you go. The tech industry moves fast and can seem like it rewards people who move fast with it (move fast and break things, anyone?). But curiosity about AI does not require you to immediately hand over access to your files, your calendar, your email, or anything else. It is completely okay (and I would argue, even necessary) to be in a learning phase.

Marc Watkins, Assistant Director of Academic Innovation at the University of Mississippi, describes this as cultivating skepticism and curiosity in the age of AI on Episode 613 of the Teaching in Higher Ed podcast and in his writing about how generative AI is impacting education.

Speed Disguises Itself as Progress

Sam Illingworth, a Full Professor of Creative Pedagogies in Edinburgh, runs a newsletter called Slow AI. His subtitle says it plainly:

…knowing when to use AI and when to leave it the hell alone. Everyone is teaching you how to use AI faster. Nobody is teaching you how to think about what you lose when you do.

Sam tells us that most of the advice about AI is wrong. Not because the tools are bad, but because nobody is asking what we give up when we use them.

If you are feeling behind because you have not connected an AI tool to your calendar or your email yet, I encourage you to follow Sam's advice and slow way down. There are serious security and privacy concerns at play and these issues deserve our careful attention. Sam has also written about what happens when organizations distribute AI tools before anyone knows how to use them safely, and then call the chaos adoption. That is true at the institutional level. It is also true at the personal level.

Connecting things before you understand what you are connecting is not getting ahead and winning some kind of race. It is just moving fast and almost assuredly breaking things in the process.

There Are Real Costs Worth Understanding First

Leon Furze, an international consultant, author, and speaker, has written one of the most thorough and accessible series I have come across on AI ethics. His Teaching AI Ethics project covers bias, environmental impact, copyright, privacy, human labor, and power. It was originally written for educators and students, but it reads clearly for anyone who wants to understand what is actually happening under the hood of these tools. The updated 2026 series is available as a free, open-access ebook: Teaching AI Ethics – A Guide for Educators.

Furze's work is a good place to start if questions like these are on your mind: Who does the labor that makes these systems run? What does it cost the planet to train and operate them at scale? Whose work was used without permission to build them? He also encourages us on Episode 572 not to refuse to learn anything about AI solely because of these ethical concerns, but to remain curious and in a position of learning. He shares:

We can take a personal moral stance, but if we have a responsibility to teach students, then we have a responsibility to engage with the technology on some level. In order to do that, we need to be using it and experimenting with it because otherwise, we're relying on third party information, conjecture, and opinions rather than direct experience.

While Sam and Leon lean more toward the experimental side of things (with curiosity and skepticism at the forefront), there are other voices worth centering.

Critics Worth Listening To

Emily M. Bender and Alex Hanna are the co-hosts of Mystery AI Hype Theater 3000 and the authors of The AI Con: How to Fight Big Tech's Hype and Create the Future We Want. I talked with both of them on Episode 576 of Teaching in Higher Ed, where Emily described the two sides of the same coin:

The boosters say AI is a thing. It's inevitable, it's imminent, it's going to be super powerful, and it's going to solve all of our problems. And the doomers say AI is a thing, it's inevitable, it's imminent, it's going to be super powerful, and it's going to kill us all. And you can see that there's actually not a lot of daylight between those two positions, despite the discourse of saying these are two opposite ends of a spectrum.

Meredith Whittaker, president of Signal and co-founder of the AI Now Institute, has been one of the most consistent and credible voices raising alarms about what happens when AI agents, tools that act on your behalf, get access to large parts of your digital life. She has called it “putting your brain in a jar.” She is worth following if you want someone who speaks plainly about the structural risks, not just the individual ones.

Kate Crawford, co-founder of the AI Now Institute and author of Atlas of AI, takes a more structural and academic approach. Her work examines the economic incentives that make data collection the default, and what is lost when we consent without fully understanding what we are agreeing to.

Kashmir Hill is a technology reporter at The New York Times who covers privacy in a way that is accessible and human-scale. Her book about facial recognition technology and what it means for privacy, Your Face Belongs to Us: A Secretive Startup's Quest to End Privacy as We Know It, is a compelling read. Her ongoing reporting tracks the kinds of policy changes that affect everyone who uses these tools.

Kashmir's TED talk with her collaborator, Surya Mattu, What Your Smart Devices Know (and Share) About You, is well worth a watch as a reminder of what's at stake.

The Electronic Frontier Foundation is the most reliable starting point I know for guidance on privacy and security. They publish regularly and write for non-technical audiences. Well worth a look is the Tools section of the EFF website, which includes tangible ways to defend ourselves against the threats to our privacy and security online.

What This Series Is About

Over the next few posts, I am going to walk through the specific considerations I have worked through as I have decided what to give Claude Cowork access to and what to keep off limits. That will include thinking about your employer, your own personal privacy, and other people's information.

But I wanted to start here, with this: none of this has to happen on anyone else's timeline. You are not only allowed to go slow; it is prudent to do so, particularly given the pace at which these tools' capabilities are changing. You are allowed to decide that some things are not worth the tradeoff, at least not yet. You are allowed (and urged) to keep some parts of your life outside of any of this entirely.

At the same time, I would ask that you heed Maha Bali's advice and not engage in AI-shaming, should you choose to engage further with these posts about my experimentation. Maha is a Professor of Practice at the Center for Learning & Teaching at the American University in Cairo (AUC) and a full-time faculty developer. That translates to her being expected to help people “make thoughtful decisions about how they're going to teach and assess in a time where this thing exists.” Some of us have jobs that require that we remain simultaneously curious and skeptical about AI, and we aren't afforded the opportunity to ignore what's happening across higher education.

On Episode 529 of the Teaching in Higher Ed Podcast, James Lang, Professor of Practice at the Kaneb Center for Teaching Excellence at the University of Notre Dame and author of six books, discussed a beautiful piece he wrote: Voltaire on Working the Gardens of Our Classrooms – Are you a Pangloss, Martin, or Candide?

I'll admit I've long since forgotten the Voltaire specifics, but I walked away reminded of something I already knew: teaching isn't a race. We're not supposed to go fast and break things, because people can get hurt along the way, and we can wind up forgetting why we got into this work in the first place.

Jim shares about his own teaching:

I have skills and experiences that I have developed over a lifetime, and a commitment to supporting teachers and learners. I still see those skills and experiences making a positive difference in the lives of other humans. You might be feeling the same way. You feel storm clouds gathering above you, and are worried about the future of education, but in the meantime you are connecting with students and creating learning in the gardens of your classrooms.

He continues the garden metaphor throughout the piece and ends by encouraging us to go work in our gardens. It is in that spirit that I seek to share what I'm learning about agentic AI, as it relates to the various roles I hold, while encouraging all of us to go slower than we might normally, and to be curious and skeptical as we do our tending.


Featured photo attribution:
Photo by Naoki Suzuki on Unsplash

Filed Under: Resources

Bonni Stachowiak

Bonni Stachowiak is dean of teaching and learning and professor of business and management at Vanguard University. She hosts Teaching in Higher Ed, a weekly podcast on the art and science of teaching with over five million downloads. Bonni holds a doctorate in Organizational Leadership and speaks widely on teaching, curiosity, digital pedagogy, and leadership. She often joins her husband, Dave, on his Coaching for Leaders podcast.

