
Teaching in Higher Ed


AI, Privacy, and the Risks Worth Understanding Before You Dive In

By Bonni Stachowiak | March 30, 2026

security camera adhered to the side of a building

This is the second post in a series about my use of AI agents, broadly speaking, and Claude Cowork, specifically. However, there are a number of foundational topics we need to explore first, together.

The first post was about going slow and not feeling pressured to jump in before you are ready. This one is about understanding the actual risks, so that when you do decide to use these tools, you are making an informed choice. I can't say this enough:

This is your permission to go slow and resist the temptation to jump in head first.

Let's start with where I stand in all of this. I am not a security expert. I am someone who has spent a lot of time reading and thinking about these issues, and who wants to help translate some of what I have learned from a beginner's mind. So let's talk about what can actually go wrong. This isn't intended to be a complete catalog of every risk. Rather, these are issues that I'm not seeing talked about anywhere near enough in the discourse about what's possible with these agentic AI tools.

Email as an Example of a Huge Risk Point

One of the most important things to understand about AI tools that integrate with your accounts is that access tends to cascade. Email is the clearest example of how that access could be compromised.

Your email is not just a place where messages live. It is the recovery address for nearly every other account you have. Your bank. Your health portal. Your university systems. Your social media. If someone, or something, gains access to your email, they can use it to trigger password resets on almost everything else. And with two-factor authentication now widely used, that access can extend to getting codes texted to your phone or sent to that same email, which means an attacker can potentially lock you out of your own accounts entirely.

Dave sent me a Daring Fireball post yesterday about someone who documented a phishing attempt that could easily have succeeded: Matt Mullenweg Documents a Dastardly Clever Account Phishing Scam. When I read it, I thought about how easily even the more technical among us could have fallen victim to that, especially if we were rushing and not paying close attention.

Getting access to people's email accounts is the mechanism behind a large proportion of real-world identity theft and account takeover. Before you grant any AI tool access to your email, it is worth asking what level of access you are granting. Read-only is very different from the ability to send, delete, or manage. And even read-only access means the tool can see, and in some cases store or use, the contents of your messages.

Personal Data Risks

There is a parallel risk that operates more slowly and less dramatically, but is no less real. Your personal data, gathered across apps, websites, AI tools, and services you use every day, is part of a large and largely unregulated commercial ecosystem. Data brokers collect it, package it, and sell it, often without your knowledge and without any direct relationship with you.

This matters in the context of AI tools because many of them, especially free or low-cost ones, have business models that depend on data. When a tool is free, it is worth asking what you are providing in exchange. Sometimes the answer is your usage patterns. Sometimes it is the content of your conversations. Sometimes it is both.

Anthropic, the company behind Claude, updated its privacy policy in 2025 in ways that are worth knowing about. Previously, Claude did not use consumer conversations to train its models. That changed. If you use Claude on a Free, Pro, or Max plan and did not actively opt out, your conversations may now be used for model training and retained for up to five years. The setting is in Claude Settings under Privacy. You are looking for the toggle labeled “Help improve Claude.” Turning it off means your new conversations will not be used for training.

This is not unique to Anthropic. It is an industry-wide pattern worth paying attention to across any AI tool you use. Stanford's Human-Centered Artificial Intelligence (HAI) provides a history of privacy policies and a cautionary note: Be Careful What You Tell Your AI Chatbot.

If you work for a university, before you do anything with AI, familiarize yourself with your institution's existing policies around its use. Ohio University, for example, calls out the key risks to be aware of, along with instructions for how members of their community should use AI in response.

Copyright Issues

I want to share something that is personal to me, though not at all unique to me.

My first book was published by Stylus, an independent academic press that was later acquired by Routledge. Routledge's parent company, Informa, subsequently entered into agreements with AI companies to license academic content for model training. Authors were not asked for permission. Many were not notified at all.

The Authors Guild has been working to establish that publishers cannot license authors' works for AI training without seeking permission by separate agreement. Their position is that AI training rights were never contemplated in publishing agreements and cannot simply be assumed. They also maintain guidance on practical steps authors can take to try to protect their work going forward.

If you have published with an academic press, it is worth checking whether your publisher has entered into any AI licensing agreements. Ithaka S+R has a Generative AI Licensing Agreement Tracker that shows which publishers have signed deals to allow AI companies to train on scholarly content.

There is also a current legal settlement related to this worth knowing about. Anthropic was sued by authors whose books were acquired from piracy sites and used to train Claude. A settlement has been proposed. If you have published books, you can search the settlement works list to see if your titles are included. The deadline to file a claim is March 30, 2026.

Other Risks

A few additional categories are worth at least brief mention.

Prompt injection is a risk specific to AI agents, tools that can browse the web, read documents, or take actions on your behalf. A malicious actor can embed hidden instructions in a webpage or document that the AI reads, causing it to take actions you did not intend. Some scholars have hidden AI prompts in their article submissions in an attempt to garner better reviews, just one of many examples illustrating the need for more heightened verification methods and protocols.

Data breaches at AI companies are also a real possibility, like this one regarding a Meta AI leak. When you have conversations with an AI tool, those conversations are stored on servers. If those servers are compromised, your conversations could be exposed. Deleting conversations when you are done with them is one practical step you can take.

Surveillance creep is a slower and more diffuse risk. The more you connect AI tools to your accounts, your calendar, your location, your habits, the more detailed a picture exists of how you live and work. That picture may be used by the AI company itself, or it may become accessible to others through data sharing agreements, legal requests, or breaches. The question is not only “is this safe today” but “do I want this data to exist at all.” This is particularly an issue because of how quickly corporations can change their policies and practices, making it that much more difficult to keep up and mitigate risk. This example from Clara Hawking on LinkedIn related to something many of us have done in the past describes the insidious nature of this slow creep well.

I know I've not come close to naming all the risks, but I wanted to mention at least a few of the issues that come to mind as I decide my own risk profile for these sorts of endeavors.

Where to Learn More

If you want to go deeper on any of these risks, a few resources to explore further:

The Electronic Frontier Foundation covers digital rights, surveillance, and AI privacy for a general audience. Their guides are practical and regularly updated.

Kashmir Hill's reporting at the New York Times covers privacy and technology in a human, narrative way that is genuinely readable. She has written extensively about data brokers, facial recognition, and the ways AI is reshaping privacy in everyday life.

Leon Furze's Teaching AI Ethics series has a full section on privacy and data that goes into more depth than I have here, with research citations and teaching applications if you want to explore any of this with students.

The next post in this series moves from the general landscape to the specific framework I have used for my own decisions: three categories of considerations that have helped me decide what to give Claude access to and what to keep off limits.


Photo by Joe Gadd on Unsplash

 

Filed Under: Resources

Permission to Go Slow

By Bonni Stachowiak | March 24, 2026

robot statues made out of pottery in a garden

I'm beginning a series of posts about my experimentation with Claude Cowork, specifically, but also about the landscape of AI agents, more broadly. However, I want to say something before we get into caveats and considerations, security settings and privacy policies, and all the rest of it. Something I'm not hearing explicitly stated anywhere near enough in conversations about AI.

You do not have to do any of this yet. Slow down.

There is enormous pressure, most of it implicit, to jump in, try the tools, connect the apps, grant the access, and figure it out as you go. The tech industry moves fast and can seem like it rewards people who move fast with it (move fast and break things, anyone?). But curiosity about AI does not require you to immediately hand over access to your files, your calendar, your email, or anything else. It is completely okay (and I would argue, even necessary) to be in a learning phase.

Marc Watkins, Assistant Director of Academic Innovation at the University of Mississippi, describes this as cultivating skepticism and curiosity in the age of AI on Episode 613 of the Teaching in Higher Ed podcast and in his writing about how generative AI is impacting education.

Speed Disguises Itself as Progress

Sam Illingworth, a Full Professor of Creative Pedagogies in Edinburgh, runs a newsletter called Slow AI. His subtitle says it plainly:

…knowing when to use AI and when to leave it the hell alone. Everyone is teaching you how to use AI faster. Nobody is teaching you how to think about what you lose when you do.

Sam tells us that most of the advice about AI is wrong. Not because the tools are bad, but because nobody is asking what we give up when we use them.

If you are feeling behind because you have not connected an AI tool to your calendar or your email yet, I encourage you to follow Sam's advice and slow way down. There are serious security and privacy concerns at play and these issues deserve our careful attention. Sam has also written about what happens when organizations distribute AI tools before anyone knows how to use them safely, and then call the chaos adoption. That is true at the institutional level. It is also true at the personal level.

Connecting things before you understand what you are connecting is not getting ahead and winning some kind of race. It is just moving fast and almost assuredly breaking things in the process.

There Are Real Costs Worth Understanding First

Leon Furze, an international consultant, author, and speaker, has written one of the most thorough and accessible series I have come across on AI ethics. His Teaching AI Ethics project covers bias, environmental impact, copyright, privacy, human labor, and power. It was originally written for educators and students, but it reads clearly for anyone who wants to understand what is actually happening underneath the hood of these tools. The updated 2026 series is available as a free, open-access ebook: Teaching AI Ethics – A Guide for Educators.

Furze's work is a good place to start if questions like these are on your mind: Who does the labor that makes these systems run? What does it cost the planet to train and operate them at scale? Whose work was used without permission to build them? He also encourages us on Episode 572 not to simply refuse to learn anything about AI because of these ethical concerns, but to remain curious and in a position of learning. He shares:

We can take a personal moral stance, but if we have a responsibility to teach students, then we have a responsibility to engage with the technology on some level. In order to do that, we need to be using it and experimenting with it because otherwise, we're relying on third party information, conjecture, and opinions rather than direct experience.

While Sam and Leon tend more toward the experimental side of things (with curiosity and skepticism at the forefront), there are other voices worth centering.

Critics Worth Listening To

Emily M. Bender and Alex Hanna are the co-hosts of Mystery AI Hype Theater 3000 and the authors of The AI Con: How to Fight Big Tech's Hype and Create the Future We Want. I talked with both of them on Episode 576 of Teaching in Higher Ed, where Emily described the two sides of the same coin:

The boosters say AI is a thing. It's inevitable, it's imminent, it's going to be super powerful, and it's going to solve all of our problems. And the doomers say AI is a thing, it's inevitable, it's imminent, it's going to be super powerful, and it's going to kill us all. And you can see that there's actually not a lot of daylight between those two positions, despite the discourse of saying these are two opposite ends of a spectrum.

Meredith Whittaker, president of Signal and co-founder of the AI Now Institute, has been one of the most consistent and credible voices raising alarms about what happens when AI agents, tools that act on your behalf, get access to large parts of your digital life. She has called it “putting your brain in a jar.” She is worth following if you want someone who speaks plainly about the structural risks, not just the individual ones.

Kate Crawford, co-founder of the AI Now Institute and author of Atlas of AI, takes a more structural and academic approach. Her work examines the economic incentives that make data collection the default, and what is lost when we consent without fully understanding what we are agreeing to.

Kashmir Hill is a technology reporter at the New York Times who covers privacy in a way that is accessible and human-scale. Her book, Your Face Belongs to Us: A Tale of AI, a Secretive Startup, and the End of Privacy, about facial recognition technology and what it means for privacy, is a compelling read. Her ongoing reporting tracks the kinds of policy changes that affect everyone who uses these tools.

Kashmir's TED talk with her collaborator Surya Mattu, What Your Smart Devices Know (and Share) About You, is well worth a watch to remind us of what's at stake.

The Electronic Frontier Foundation is the most reliable starting point I know for guidance on privacy and security. They publish regularly and write for non-technical audiences. Well worth a look is the Tools section of the EFF website, which includes tangible ways to defend ourselves against the threats to our privacy and security online.

What This Series Is About

Over the next few posts, I am going to walk through the specific considerations I have worked through as I have decided what to give Claude Cowork access to and what to keep off limits. That will include thinking about your employer, your own personal privacy, and other people's information.

But I wanted to start here, with this: none of this has to happen on anyone else's timeline. You are not only allowed to go slow, but it is prudent to do so, particularly given the pace of change related to the AI tools' capabilities. You are allowed to decide that some things are not worth the tradeoff, at least not yet. You are allowed (and urged) to keep some parts of your life outside of any of this entirely.

At the same time, I would ask that you heed Maha Bali's advice and not engage in AI-shaming, should you choose to engage further with these posts about my experimentation. Maha is a Professor of Practice at the Center for Learning & Teaching at the American University in Cairo (AUC) and a full-time faculty developer. That translates to her being expected to help people “make thoughtful decisions about how they're going to teach and assess in a time where this thing exists.” Some of us have jobs that require we remain simultaneously curious and skeptical about AI and we aren't afforded the opportunity to ignore what's happening across higher education.

On Episode 529 of the Teaching in Higher Ed Podcast, James Lang, Professor of Practice at the Kaneb Center for Teaching Excellence at the University of Notre Dame and author of six books, discussed a beautiful piece he wrote: Voltaire on Working the Gardens of Our Classrooms – Are you a Pangloss, Martin, or Candide?

I'll admit I've long since forgotten the Voltaire specifics, but I walked away reminded of something I already knew: teaching isn't a race. We're not supposed to go fast and break things, because people can get hurt in the meantime and we can wind up forgetting why we got into this work in the first place.

Jim shares about his own teaching:

I have skills and experiences that I have developed over a lifetime, and a commitment to supporting teachers and learners. I still see those skills and experiences making a positive difference in the lives of other humans. You might be feeling the same way. You feel storm clouds gathering above you, and are worried about the future of education, but in the meantime you are connecting with students and creating learning in the gardens of your classrooms.

He continues the garden metaphor throughout the piece and ends by encouraging us to go work in our gardens. It is in that spirit that I seek to share what I'm learning about agentic AI, as it relates to the various roles I hold, while encouraging all of us to go slower than we might normally, and to be curious and skeptical as we do our tending.


Featured photo attribution:
Photo by Naoki Suzuki on Unsplash


Balancing Structure and Emergence in Teaching

By Bonni Stachowiak | January 5, 2025

A spiral structure that could be part of playground equipment or an outdoor modern art piece with clouds and blue sky in the background

Throughout my teaching career, I’ve often swung between two extremes when it comes to structure and flow. At times, I’ve been highly structured and organized—a good thing, but one that can become limiting when I miss what’s emerging in the moment. On the other end of the spectrum, if I lose track of the overall goals of a session or workshop, I risk not meeting my commitments or aligning with participants’ expectations. It also creates challenges for the broader structure of the course or event—whether it’s a class within a degree program or a workshop designed to support a university’s teaching and learning goals.

Mia Zamora discusses this tension on Episode 475 of Teaching in Higher Ed: Making Space for Emergence. In the interview, she describes how we can create “buckets” to hold topics that we can explore together, which is especially helpful for the kind of class content that will be responding to what's happening in an internal or external context, for example. In my business ethics class, we analyze news stories weekly, and there's a “bucket” where our reflections and analysis can be placed.

Alan Levine has co-taught with Mia previously and they both talk about courses having “spines” to keep the needed structure. You can see an example of their #NetNarratives class spine mid-way through Alan's blog post: My #NetNar Reflection. On Episode 218, Alan discusses the importance of giving people opportunities to explore, as part of their learning. He shares:

You get better by just practicing. Not rote practicing, but stuff where you’re free to explore.

Speaking of exploring… I just went to visit Alan's CogDogBlog – and discovered a recent post with “one more thing about podcasts” where he talks about a cool podcast directory that I wasn't aware of… and ways of sharing one's podcast feed with others. Now it is taking every ounce of discipline not to go down the rabbit trail of discovering more. But I leave for Louisiana in three days, the semester starts tomorrow, and I have a 5:30 AM keynote on Tuesday morning. All this to say, I had better behave myself and share a few more things about facilitation I've been thinking about, as I prepare for those adventures.

Two Additional Approaches for Managing the Tension Between Structure and Flow

Over time, I’ve discovered two other helpful strategies for balancing structure and in-the-moment flexibility. These tools and insights have transformed how I prepare for and facilitate learning experiences.

1. SessionLab: Visualizing and Adjusting the Flow

A while back, I discovered a tool called SessionLab, and it’s become a game-changer, especially when preparing workshops. It helps me create a “run of show” document—something Kevin Kelly has discussed both on Episode 406: How to Create Flexibility for Students and Ourselves, as well as in his book on flexibility in teaching: Making College Courses Flexible: Supporting Student Success Across Multiple Learning Modalities. A run of show outlines the timing, activity titles, descriptions, and any additional information for a session, helping me stay on track while leaving space for flexibility.

SessionLab allows me to break down a workshop or class into blocks of time and activities. Though it includes a library of standard activities, I mostly use it to map out my own. One of my favorite features is the ability to highlight sections in the “additional information” column. This has been a game-changer for virtual facilitation. For example, when sharing resources or instructions during a Zoom session, I pre-highlight key content so I can easily copy and paste it into the chat in real time.

Beyond that, the tool allows you to color-code blocks to visually assess the balance between different types of learning activities—like how much time you’re spending on lecture versus active learning. It even lets you generate a PDF version for offline reference.

This morning, I was preparing for Tuesday morning's keynote and realized (yet again) I’d tried to squeeze too much into my allotted time. SessionLab helped me get realistic about pacing, build in breathing room, and ensure space for the organic moments that make learning in community so powerful. After all, if everything were going to be rigidly planned, why not just record a video and skip live interaction altogether?

If you’re looking for a tool to help you balance structure with flexibility, I highly recommend giving SessionLab a try.

2. Padlet: Unlocking a Hidden Feature for Better Facilitation

The second resource I want to highlight is in an upcoming book by Tolu Noah on facilitation: Designing and Facilitating Workshops with Intentionality: A Guide to Crafting Engaging Professional Learning Experiences in Higher Education. I had the privilege of reading an advance copy, and it felt like every page introduced me to a new tool or a fresh way of thinking.

One of many insights that stood out was a feature I hadn’t realized existed in Padlet, a virtual corkboard I already use often for collaborative activities. Tolu explained that you can create breakout links to share just a single column from a Padlet board rather than the entire board.

This has been incredibly helpful for making my Padlet boards more user-friendly. Before, when I shared an entire board, participants sometimes found it visually overwhelming—unsure where to post their contributions. Now, if I’m running an activity with multiple columns (e.g., ideas related to sustainability in one, corporate social responsibility in another), I can send a direct link to the specific column where I want participants to share. It simplifies the process and improves clarity for everyone.

When Tolu Noah’s book comes out, I can’t recommend it enough—it’s packed with facilitation wisdom and practical strategies for creating more engaging learning environments.

Resources

Here’s a summary of the tools and people mentioned in this post:

  • Episode 475 with Mia Zamora
  • Episode 218 with Alan Levine
  • SessionLab – A tool for creating run-of-show plans, structuring workshops, and balancing structure with flexibility.
  • Kevin Kelly – Educator and author who explores flexibility in teaching and learning; referenced for his insights on “run of show” documents.
  • Making College Courses Flexible: Supporting Student Success Across Multiple Learning Modalities – Kevin Kelly's book: “Addressing students’ increasing demand for flexibility in how they complete college courses, this book prepares practitioners to create equivalent learning experiences for students in the classroom and those learning from home, synchronously or asynchronously.”
  • Padlet – A virtual corkboard tool for collaborative activities, with a feature for sharing breakout links to individual columns.
  • Tolu Noah – Educator and author of a forthcoming book on facilitation, emphasizing practical strategies for inclusive teaching.
  • Designing and Facilitating Workshops with Intentionality: A Guide to Crafting Engaging Professional Learning Experiences in Higher Education – Tolu Noah's forthcoming book: “Workshops are one of the most frequently used forms of professional learning programming in higher education and beyond. However, in order for them to have a meaningful impact, they must be crafted with intentionality. Designing and Facilitating Workshops with Intentionality offers practical guidance, tools, and resources that can help you create more engaging, enriching, and effective workshops for adult learners.”

 


Lessons Learned from Intentional Teaching Podcast Episode About AI Across the Curriculum

By Bonni Stachowiak | December 4, 2024

Podcast inspiration: AI Across the Curriculum. Background is blurry technology with a person standing looking out (we see the back of their head)

I drew much inspiration from this morning's listen to Derek Bruff's interview with Jane Southworth about AI across the curriculum. Derek's podcast, Intentional Teaching, gives us bountiful opportunities to learn from the experiences of educators who are transforming educational experiences for students across a wide variety of disciplines and contexts. While the episode did focus on what is obvious from the title, AI Across the Curriculum, I drew inspiration well beyond that topic. Much of what they discussed applies broadly to leading and teaching within a university context, well beyond the particular initiative they describe.

Jane talks about the difficulty of making such a massive change across a complex institution. She made a few jokes about the challenges, and her lightheartedness made me feel real kindness toward her, given what must have been demanding work. Consider what it takes to make something like this happen: all the committee work, all the different people who need to be consulted, all the perspectives to weigh. The intricacies lie not just in making something work, but in making the fruit of that work visible to students, such that they enroll in the program and pursue the educational aims beyond the requirements for their majors. Jane shares the example of starting an AI certificate program within their curriculum. Making that technically possible from an operations standpoint, so that someone could take the right classes, and shepherding it through all the curriculum committees and their policies and procedures, was a mammoth effort. But another layer I found quite fascinating was how you then make the certificate visible to students, so that they are even aware it exists and find it of interest and worthwhile to pursue.

As Sam Cooke sang years ago, I also “don't know much about geography.” There's no doubt in my mind that I have subscribed to some of the myths that Jane described about her discipline of geography. Jane described how, when she was in college in the United Kingdom, it was the third or fourth most popular degree. Geography graduates found themselves among the highest earners as they left school, and many were surprised to discover just how much more there is to the field than studying rocks, as they had initially believed.

In the show notes for the episode, Derek shares a couple of resources that come both from his conversations with Jane and from his ongoing collaborations with Flower Darby, co-author of Small Teaching Online: Applying Learning Science in Online Classes and The Norton Guide to Equity-Minded Teaching. The first article linked by Derek in the show notes is Developing a Model for AI Across the Curriculum: Transforming the Higher Education Landscape via Innovation in AI Literacy by Southworth et al. The second article was Building an AI University: An Administrator's Guide by Joe Glover. I'm grateful, as always, to Derek and all of the opportunities he makes available to those of us interested in teaching with intention.

Resources

  • Intentional Teaching Episode AI Across the Curriculum with Jane Southworth on Spotify, Overcast, Apple Podcasts, or the web
  • Developing a model for AI Across the curriculum: Transforming the higher education landscape via innovation in AI literacy, by Southworth, et al
  • Building an AI University: An Administrator's Guide, by Joe Glover from the University of Florida


Ethan Mollick Shares Principles for Working with AI on Coaching for Leaders with Dave Stachowiak

By Bonni Stachowiak | April 1, 2024

"Assume this is the worst AI you will ever use." Ethan Mollick on Coaching for Leaders

I enjoyed listening to Coaching for Leaders episode 674: Principles for working with AI with Ethan Mollick this morning. Dave is traveling this week, but it was almost like he was here, keeping me company, as I listened to the interview. 😂

One key point from the conversation that really resonated with me was how quick and easy it is to assess the AI's output if it is doing something that you're already good at. I have found many examples of that truth in experimenting with various AI tools.

We use the CastMagic.io service for the first pass at our podcast transcripts, for example. It can identify key quotes from the interviews and recommend discussion questions. For me (or someone on our team) to carve out the time to listen to the entire episode and try to figure out which quotes might be good to share just isn't practical. Yet we can quickly look and discard what the tool identified as not particularly helpful in illuminating or amplifying the conversation.

In a recent workshop with faculty, they were surprised to learn how easy it is to set up a form for students to make a request for a letter of recommendation or reference for a job or for grad school. Then, an AI can take the first pass at writing a draft, based on your writing style and preferences for length, tone, etc. How much easier is it to correct it for what it got wrong about a particular student's recommendation vs starting from scratch?

I've been using an AI app called Whisper Memos, which is on both my iPhone and on my Apple Watch. When I get an idea or something I want to share with someone, I just tap the complication on my watch face and start talking. The key differentiator for Whisper Memos for me is that it automatically puts in carriage returns, making it that much faster for me to make edits later on.

Another thing I like is that I discovered my favorite “chicken scratch” notes app on my iPhone and Apple Watch, Drafts, has a special email address I can use to send text to it. So now I have Whisper Memos set up to send to my unique Drafts email address and all my thoughts wind up in one place, ready for me to process when I have time.

I encourage you to listen to episode 674 with Ethan Mollick on Coaching for Leaders with Dave Stachowiak. When you're done, check out the AI-related conversations that I've had for Teaching in Higher Ed.

How are you using AI in your work these days?



CC BY-NC-SA 4.0 Teaching in Higher Ed | Designed by Anchored Design