• Skip to primary navigation
  • Skip to main content
  • Skip to footer

Teaching in Higher Ed

  • Podcast
  • Blog
  • SPEAKING
  • Media
  • Recommendations
  • About
  • Contact
EPISODE 613

Skepticism and Curiosity in the Age of AI with Marc Watkins

with Marc Watkins

| March 12, 2026 | XFacebookLinkedInEmail

https://media.blubrry.com/teaching_in_higher_ed_faculty/content.blubrry.com/teaching_in_higher_ed_faculty/TIHE613.mp3

Podcast (tihe_podcast):

Play in new window | Download | Transcript

Subscribe: Apple Podcasts | Spotify | RSS | How do I listen to a podcast?

Marc Watkins shares about cultivating skepticism and curiosity in an age of AI on Episode 613 of the Teaching in Higher Ed podcast.

Quotes from the episode

I do think online education is going to be the focal point for this next year, and how it can survive with an agentic AI. My feeling is, we need to be offering students more embodied experiences and disembodied spaces.

I do think online education is going to be the focal point for this next year, and how it can survive with an agentic AI. My feeling is, we need to be offering students more embodied experiences and disembodied spaces.
-Marc Watkins

Every technology has its affordances and the things that are negative about it too; your cell phone, the computer, the fact we're talking about this right now on the systems that we are using, cloud computing, that all has a cost.
-Marc Watkins

For an incoming freshman student in college to take 4 or 5 classes and have 4 or 5 very different AI policies, 4 or 5 very different understandings of what AI is, it is incredibly confusing.
-Marc Watkins

Resources

  • Sesame Street: One of These Things (Is Not Like the Others)
  • What We Give Up When We Let AI Decide: Automation Is Easy. Judgment Is Not, by Marc Watkins
  • Working with AI is more Mindset than Skill, by Marc Watkins
  • Civics of Technology’s Privacy Week Resources
  • The Opposite of Cheating
  • The Transformers: Imagining the Future of the Teaching of Writing, by Anna Mills, Jon Ippolito, Maha Bali, Jeremy Douglass, Mark C. Marino, Annette Vee, Marc Watkins

ARE YOU ENJOYING THE SHOW?

REVIEW THE SHOW
SEND FEEDBACK

ON THIS EPISODE

Marc Watkins

Assistant Director of Academic Innovation

Marc Watkins directs the AI Institute for Teachers and is an Assistant Director of Academic Innovation at the University of Mississippi, where he is a Lecturer in Writing and Rhetoric. He has led research initiatives, exploring generative AI’s impact on student learning, training workshops for faculty on AI literacy, and multiple institution-wide AI institutes. He advocates approaching the technology’s integration in education with skepticism and curiousity. When training faculty in applied artificial intelligence, he believes educators should be equally supported if they choose to work with AI or include friction to curb AI’s influence on student learning. Watkins’ work with training faculty in AI literacy has been profiled in The Washington Post. He regularly writes about AI and education on his Substack Rhetorica.

Bonni Stachowiak

Bonni Stachowiak is dean of teaching and learning and professor of business and management at Vanguard University. She hosts Teaching in Higher Ed, a weekly podcast on the art and science of teaching with over five million downloads. Bonni holds a doctorate in Organizational Leadership and speaks widely on teaching, curiosity, digital pedagogy, and leadership. She often joins her husband, Dave, on his Coaching for Leaders podcast.

RECOMMENDATIONS

Civics of Technology’s Privacy Week Resources

Civics of Technology’s Privacy Week Resources

RECOMMENDED BY:Bonni Stachowiak
The Transformers: Imagining the Future of the Teaching of Writing

The Transformers: Imagining the Future of the Teaching of Writing

RECOMMENDED BY:Marc Watkins
Woman sits at a desk, holding a sign that reads: "Show up for the work."

GET CONNECTED

JOIN OVER 4,000 EDUCATORS

Subscribe to the weekly email update and receive the most recent episode's show notes, as well as some other bonus resources.

Please enter your name.
Please enter a valid email address.
JOIN
Something went wrong. Please check your entries and try again.

Related Episodes

  • EPISODE 449Teaching Writing in an Age of AI
    John Warner

    with John Warner

  • EPISODE 095Teaching in the Digital Age

    with Mike Truong

  • EPISODE 568Teaching for Integrity in the Age of AI

    with , Tricia Bertram Gallant, David Rettinger

  • EPISODE 437Reviving Our Own Curiosity

    with Lindsey Kealey

  

EPISODE 613

Skepticism and Curiosity in the Age of AI with Marc Watkins

DOWNLOAD TRANSCRIPT

EPISODE 613: Skepticism and Curiosity in the Age of AI, with Marc Watkins

Bonni Stachowiak [00:00:00]:

Today, on episode number 613 of the Teaching in Higher Ed podcast, Skepticism and Curiosity in the Age of AI, with Mark Watkins.

Bonni Stachowiak [00:00:14]:

Production Credit: Produced by Innovate Learning, Maximizing Human Potential.

Bonni Stachowiak [00:00:22]:

Welcome to this episode of Teaching in Higher Ed. I’m Bonni Stachowiak, and this is the space where we explore the art and science of being more effective at facilitating learning. We also share ways to improve our productivity approaches, so we can have more peace in our lives and be even more present for our students. Today on episode 613, I’m joined by Mark Watkins. Mark directs the AI Institute for Teachers, and serves as Assistant Director of Academic Innovation at the University of Mississippi. He’s also a lecturer in writing and rhetoric. Mark’s work sits right at the intersection of generative AI, student learning, and faculty development. He has been a steady voice urging us to hold two things together at once: skepticism and curiosity.

Bonni Stachowiak [00:01:27]:

Mark Watkins, welcome to Teaching in Higher Ed.

Marc Watkins [00:01:31]:

Thank you for having me here, Bonni. I really do appreciate it, and I look forward to this conversation.

Bonni Stachowiak [00:01:35]:

I have been looking forward to it since you said yes. You’re one of those people that’s been on my list for what feels like forever, so I’m grateful you could carve out the time.

Bonni Stachowiak [00:01:43]:

I was having these flashbacks to 1970s Sesame Street, and there was a little jingle. People who may be my age, or around my age, they would go, “One of these kids is doing their own thing”.

Bonni Stachowiak [00:01:58]:

And I wanted to get— because I didn’t really remember the year, so I had to fact-check myself and go back. And so I was watching a TikTok video – I’m not going to link to it in the show notes because TikTok, and then I don’t know the user, you know, it introduces problems I don’t feel like introducing into your lives. But I was laughing so hard, Mark, because I don’t— I remember this as a kid, but I don’t remember that it was this easy. Because the one that I did find, the first link on TikTok, one of these kids is doing their own thing, and 3 kids are right side up, and one kid’s— their camera is turned upside down, and their pigtails are going like this. So I’m like, I didn’t realize it was that easy.

Bonni Stachowiak [00:02:35]:

But in your case, it’s not that easy. You have in the bio that I just shared, you talk about skepticism and curiosity as a way we might be thinking about AI. And I can assure you that’s not something that we hear too terribly often. How do those kids or those things go together in your mind?

Marc Watkins [00:02:59]:

So I work with faculty every week, faculty here, faculty elsewhere too, and it’s not just strict people who are interested in adopting AI or engaging with it. There are people that are very upset by this, that want to, you know, if they could get a bucket of water and dump it on the OpenAI servers to get rid of ChatGPT, they would definitely do that in a heartbeat. So you kind of have to maintain a balance between being skeptical about what these companies are selling, what the technology can actually do for you and for students, but also being really curious about— I mean, I use AI tools, I do. I think many other people do as well. I think it’s important we talk about this, and we talk about it openly and not hide behind it. And we kind of view AI with a warts-and-all approach, you know. And this idea, too, that this is a good technology or bad technology, isn’t a good framing at all.

Marc Watkins [00:03:51]:

I mean, every technology has its affordances and the things that are negative about it too, your cell phone, the computer, the fact we’re talking about this right now on the systems that we are using cloud computing, that all has a cost. I will say though, I have not seen the sort of just volume of responses based on these older technologies that we’ve used. You know, we didn’t see this when Zoom came along, because we had to during the pandemic. We didn’t see this really with MOOCs either when they came out here 15 years ago. So it’s this very interesting kind of situation to be involved in the faculty.

Bonni Stachowiak [00:04:27]:

When you say we didn’t see this, am I to mean we didn’t see the— I’m not sure what word to use— vitriol, or the, the angst, or— I’m not sure I’m using the right words.

Marc Watkins [00:04:38]:

I think you’re using the right words. The spasms of emotions that are very heartfelt, very real. And, you know, one conversation I had this week with faculty is, are you making a policy for your students? Are you trying to articulate a preference? Is there a difference in your mind? And that got a lot of good sort of conversations started. A little bit provocative, to say the least, because we all have certain preferences about what we use, but we don’t really have prior examples for saying, “hey, I don’t like this technology, so I want to exile from my classroom. And that might mean going to no technology whatsoever”. Like, I don’t think that’s going to work for a lot of reasons. So, trying to have some conversations with folks, trying to make sure that they can maintain some level of engagement, even if they’re skeptical.

Marc Watkins [00:05:23]:

I mean, that’s an important thing. We do need skeptics in this. We need them to be active and engaged, and we also need people that are really pro-AI, maybe in some cases to slow down a little bit too and kind of ask some skeptical questions. So I think really it’s a balance, and we can get there, we can do it. It’s just really hard with our polarized atmosphere right now.

Bonni Stachowiak [00:05:43]:

Yeah, you said the word balance, and I’m thinking, not really something we seem to be doing terribly well as human beings at this exact moment. But yeah, it’s not.

Marc Watkins [00:05:52]:

It’s not. It’s really bad.

Bonni Stachowiak [00:05:55]:

Speaking of not balancing things out well, we can be so confusing. And when I’m saying we, I’m speaking higher education. So much of your writing and your thinking and your speaking is looking at that more global view. What kinds of mixed messages are we sending students about artificial intelligence?

Marc Watkins [00:06:14]:

Oh boy, I was just thinking about this the other day. So by this fall, this will be 4 years since ChatGPT was launched. And that means we’re going to have 4 years of students in high school coming through higher education, 4 years of students in higher education graduating, in theory, going out there in the workforce. And we still don’t have really a consensus about where this technology should be useful, what responsible usage looks like, what is ethical, what is even practical sometimes in my field, versus not practical. So, the mixed messages we’re sending students really kind of into this situation where most institutions allowed faculty to come up with their own AI policy. And again, there’s lots of good reasons for that too. We all have our own agency. We all have our own sort of understanding of how we teach.

Marc Watkins [00:07:02]:

We also have our own disciplinary specific reasons for it. But for an incoming freshman student in college to take 4 or 5 classes and have 4 or 5 very different AI policies, 4 or 5 very different understandings of what AI is, it is incredibly confusing. We should not be in a situation where a student might face being taken up on academic misconduct charges for using AI in one class when their professor’s telling them to use AI in their class. So I think we need to have a little bit more of an understanding, too, that we’re not going to be able to apply these really sort of very narrow definitions of what is cheating or what is not cheating into our own classes using AI. We’re going to have to think about that in assessment, of course, but we’re going to have to also really open up the sort of field too to see where these tools are being used by students, and talk to them about this, you know, have conversations. How can you get students that are afraid about using AI because they’ve only associated with cheating for a lot of cases to open up and be honest with you about how they’re actually using it in your class? That’s valuable information to have. That’s valuable to understand how your students are using this too, because you might have some very deep-seated understanding, impressions about what the tools are using in your classroom. Your students might completely turn that around 180 degrees on you, too, and say that actually, no, it is effective the way they’re using.

Bonni Stachowiak [00:08:29]:

Would you share about the student that you came across who was handwriting things in your class, not because of something that you did?

Marc Watkins [00:08:36]:

So I had a student last semester in an advanced writing class I teach here at the University of Mississippi. Very smart, very articulate, will come to class every single class period with a notebook and pen and just jot away. You know, it was a very interesting thing because I was teaching an in-person class, but I allowed technology. I allow students to have laptops. When a student only comes with handwriting, it’s a little bit of a sort of a question of why. And so I talked with her about it. I said, why are you basically taking notes in class too? She’s like, well, I do this because I was accused in high school of using AI, and I couldn’t prove otherwise. So this is my only evidence that I’m using my own brain.

Marc Watkins [00:09:19]:

And that really did kind of set me back too, that a student is changing the way they actually communicate, they take notes, they’re going through this process not because it’s necessarily pedagogically sound, not because it’s helping them, because they’re afraid. And to me, that’s a really strong message that that student received from a faculty member too, that I think can be very dangerous, if this is not something that we think about and talk to them about.

Bonni Stachowiak [00:09:44]:

And as you said too, I want people listening to be thinking about that, that is happening at the same time as other professors are saying, “If you don’t use this, you’re never getting a job”. And that confusion that— by the way, I hope it doesn’t sound like I am blaming people. The conversation that I’ve been so looking forward to having with Mark is because this is things I struggle with myself.

Bonni Stachowiak [00:10:07]:

I mean, I’m, I’m all discombobulated. I will really think sometimes with skepticism from Mark’s bio, drawing inspiration there, sometimes with curiosity, and sometimes a lot of both, that is completely discombobulated.

Bonni Stachowiak [00:10:20]:

So I don’t want to sound like I’m blaming any specific person. What I am hoping that Mark can do for us today is just to recognize just how confusing that might be. Can we empathize with a student?

Bonni Stachowiak [00:10:33]:

And I know that next you’re going to be sharing with us what mixed messages, so we can truly reveal your and my empathy for faculty.

Bonni Stachowiak [00:10:41]:

What mixed messages are you seeing faculty in higher education getting about artificial intelligence?

Marc Watkins [00:10:48]:

Oh, oh boy. This is the other sort of framework. It’s not just students using AI and also getting those mixed messages. It’s faculty too. And we are starting to see faculty experiment not only with instructional design to use AI to streamline their actual setup of their course, or even the running of their course through automatic agents, which could be absolutely really helpful, can support students. But we’re also seeing some evidence of faculty using this to grade… And again, that’s not necessarily a bad thing, but if it’s not done in a way that’s disclosed, if it’s not done in a way too, that really puts students, and the student experience at the heart of it, it could end up really impacting the relationship between the faculty and the student. And the reason why I think this is so difficult is that all the incentives right now, well, I doubt about all, but the majority of incentives are aligning again for both students and faculty to use this heavily.

Marc Watkins [00:11:43]:

For faculty, the time to actually go through this— if you— I teach a writing class, generally when I get a set of essays, it would usually take me a full week to actually grade them and get them back. And if I could do that now with an AI agent and set it up for me to make it all personalized feedback, I could have the agent run, walk out of my door, my office, go across campus, get a cup of coffee, come back, and the task is done. So it’s incredibly attractive to do that from the labor standpoint, from a time standpoint, just like our students are. And so you are seeing some institutions say to faculty, you know, lean in, use this to try to meet the material conditions of your labor, to help you, to make you better at your job. Others are hearing messages not just from their institution, but their colleagues too, basically wondering, hey, are we basically losing the most intimate parts of our jobs, the things that make us important too? Are we automating ourselves out of future employment? You know, there’s a lot of questions here. It’s provocative. I think it’s a good provocative conversation to have. We’ve been starting to have those conversations now here at the university, both across campus and also, individual departments.

Marc Watkins [00:12:53]:

And we’re asking questions of what happens when one faculty member starts really heavily using AI, versus another faculty member that doesn’t. You know, what is that going to do to the dynamic in the department? What’s that going to do to the people’s teaching conditions? What happens when someone gets offered an overload? Is it fair to give the faculty member an overload if they’re using AI and they’re not going to go through this, versus the faculty member who would, wouldn’t, but would also be delaying responses to students because it’s human? You know, these are thorny, sticky questions, but they’re human, and it’s designed to have this situation too, where we can talk about it.

Bonni Stachowiak [00:13:28]:

You used a phrase. I’m actually going to repeat back two phrases to you that you used. You said, “I could,” and I’m guessing— I mean, you said you teach advanced writing, and I’ve also read your writing.

Bonni Stachowiak [00:13:38]:

You’re an exceptionally good writer, so I’m going to assume that was an intentional word choice of “I could” when you could have used a different word.

Bonni Stachowiak [00:13:45]:

And then you also said, “And the task is done.” So first off, am I correct in assuming that the word “I could” might mean that you have not, or you have? Have you automated grading using AI?

Marc Watkins [00:14:01]:

I have not automated grading. In fact, my AI syllabus policy isn’t about my students using AI. I start by actually outlining my own usage of AI, and I tell my students in writing-specific classes, I’m not going to use AI in email, I’m not going to use AI in feedback, or grading, or letters of recommendation. Other faculty might use that, that’s up to them. For me, I value the relationship that we have and that sort of human connection between writer and reader. Again, there’s other ways we’re going to use AI in this class. I’m going to show you. There’s other ways that you as a student are going to use AI, and other feelings you might have too. Put those values down on a piece of paper.

Marc Watkins [00:14:39]:

That’s our first class assignment too, where they come up with their own sort of statement about what they want AI in this course that we’re talking about, and how it’s going to impact them. So no, I don’t. I think that it’s sometimes if it’s brunch time and I have 19 other things to do too, it’s very attractive, the idea to, to just let AI go through this process. There’s also tons of arguments and also now, lots of research coming out here too that’s saying that some level of AI feedback might be very helpful for students too. We talk a lot about how AI might be biased, but you know what? We’re biased as human beings too, and we’re also very tired sometimes. If I grade alphabetically from students and doing 84 students a semester, by the time I get done to the people in the W’s, they’re not going to get the same value of feedback. Of course, there’s pedagogical tricks we do with this.

Marc Watkins [00:15:27]:

We change it around, we randomize when we grade and how, but the reality is, we are only human and, we’re only capable of doing things to a certain degree and a certain extent, and it’s not really fair to compare that sort of task with that of the machine that doesn’t sleep, doesn’t get tired, and doesn’t have to deal with all those issues.

Bonni Stachowiak [00:15:48]:

Sounds like we have some similar alignments with our values and our practices, and particularly with giving feedback. But what I’m drawing inspiration from what you just shared is I don’t use AI to give feedback on student assignments, and I really feel strong— and I also don’t use it to write emails because to me that, I mean, that’s feedback is a conversation. But I know, so many times students don’t have that experience necessarily with other faculty who may view it the same way or have that same approach. And so, I mean, speaking of confusing, that can be really confusing, and I’m realizing that I have to work extra hard to be more transparent about that. And so I’m taking down a note.

Bonni Stachowiak [00:16:30]:

I always take notes while the person’s talking, and generally speaking, it’s notes that I would share, and of course, I will share this with, with the community, but I’m also taking a note down to myself. Okay, this is your— this is something tangible that I could do to better be transparent about that and, and add that to my syllabus for next semester. So thank you for adding something wonderfully edifying to my task list for the coming semester.

Bonni Stachowiak [00:16:54]:

So I want to also then ask you about the second half. So you said, I could, and now we’ve discovered, but you do not. But then you said, and the task is done. So what is the task? So if you choose not to use it and you’re— and there’s a reason behind it, what is that task then? What, what are you not offloading? What are you not delegating to AI?

Marc Watkins [00:17:17]:

So for me, that task might be something more than just going through and grading papers. It might be a situation, too where I’m offloading that sort of core relationship value that I have with my students. And that is something that does concern me, but I, again, I teach writing. Other faculty teach multiple other disciplines sometimes, and they are teaching in different conditions than I am. Sometimes they’re doing online, sometimes hybrid, sometimes a mix of both. And there are situations too, where I think that there is going to be a place for AI in that process. And we have to be open to hearing that. We have to be open to having a conversation with it.

Marc Watkins [00:17:55]:

I would hope we would include student voices to hear their reaction and their response, too. Some of the strongest voices I’ve heard are students who are upset by the idea that they’re paying for college, and from their perspective, having ChatGPT, which they think of as a free tool, give them feedback on assignments that they’ve submitted is really upsetting to them. But I think the sort of thousand-foot view of this, when we talk about the task, I think another way of saying is, where does the assessment begin with our students? You know, is it going to start when they open up a book and start reading? Does it start when they sit down in class with a discussion? Or is it going to start when you give them an actual exam, a test, or an essay? So I think we really need to start defining when we are looking at assessment and how we’re looking at assessment to find out where AI’s place basically belongs for ourselves, and also for our students. And that’s actually helped me a lot too, because I’ve gone through and redesigned my assignments, not because of AI, because again, I can’t really keep up with it, but really designed for transparency on my own sake too, and started asking myself a question too. If I’m 18 looking at this class, what would be confusing to me? What would be upsetting to me? And I’ve actually used AI to help me with that. I’ve actually created a persona of an 18-year-old student that would go through some of the course materials too and say, well, what questions would you have? And that was a powerful tool for me, that was insightful to me, that showed some gaps in my own instruction, where I wasn’t being very clear to students about where different parts of our assignment or assessments began.

Bonni Stachowiak [00:19:30]:

I was sharing a little bit about my own journey here, and how difficult some of this can be, and one of the conversations I’m seeing reflected really broadly are ,for those of us that teach online, and specifically, not always, but specifically those who might teach asynchronous classes. Can you talk some about how we might think about ensuring that online learning remains a valid pathway for students now that, for example, agentic AI could take an entire class for students?

Marc Watkins [00:20:04]:

Yeah, it is really bad right now. Perplexity has their Comet-enabled browser, which they’ve released. Google, OpenAI, many other ones are now following suit with their own browser-based agentic AI that can go into, and open course module, impersonate the student, and take all the assignments. We are Blackboard University here, at the University of Mississippi. We have had notification from Blackboard that there’s no way they can actually differentiate between a human user, versus a student within the course. So that’s very difficult for us. That means that if you’re trying to block agentic AI systems, you’re probably going to use a lockdown browser, maybe some sort of actual monitoring system on top of it too. That can work for some assessment types, but it’s not going to be appropriate for all of them.

Marc Watkins [00:20:51]:

It’s also not going to be something that I think we want students to be, locked down in information when they’re trying to actually, in some cases, use these tools effectively. So I do think online education is going to be the focal point for this next year, and how it can survive with an agentic AI. My feeling is, we need to be offering students more embodied experiences and disembodied spaces. And one thing I am thinking about doing, and have tried a little pilot of, was giving my students the option of writing in a notebook and keeping a notebook for their online class. And again, it’s not something I’m assigning, not something I’m making them do. But one thing that an AI agent can’t do is write with you and write through this process. And how I actually would think about bringing this into assessment, again, we’ve got lots of things to think about right now, too. Title II is on the horizon too, for ADA compliance.

Marc Watkins [00:21:43]:

All of our materials must be accessible, all of our materials must be there. And I’m just basically thinking about, if students could take a picture of their handwritten journals for an online class and load that to Blackboard, if I could actually then use Gemini or ChatGPT, the optical character recognition to actually read that handwriting for me, to make that text legible. If that would be effective. The other thing we’ve done too is a lot of video-based assignments where they are not only talking to their screen, they’re sharing their actual artifact, the actual piece of writing they’re working on, too. I like that they walk through their process, so it’s not them just generating a script through ChatGPT and doing the talking head thing. So I think there are going to be some moments where we can have some connection and some embodiment, but to me, asynchronous is probably not going to last very much longer, at least as we know it, because we need to find more synchronous opportunities to actually meet with our students, to see their humanity, and actually help them work through this process. 

Marc Watkins [00:22:48]:

Because I don’t blame them for using an AI tool for some of these classes, I really don’t. I mean, if you can’t tell your students using an agent in your class versus them, it probably means you don’t have enough actual FaceTime with that student.

Bonni Stachowiak [00:23:00]:

One of the books that has meant so much to me, we’re actually doing a book club on it at my university, and then I’ve had the authors on, is The Opposite of Cheating. And, they really have helped me so much because they distinguish between enrolled students, who are almost exclusively just to get the credential. And for me, I don’t, I don’t, I don’t want to sort students into categories, but at the very least, if I cannot allow myself to have my assessment models be entirely to combat a student who is purely only interested in the credential, I just think I’m better for it. I’m able to serve students better if that’s not my only reason for existing, is to try to combat that.

Bonni Stachowiak [00:23:45]:

Because as you said, I mean, it does feel like a bit of just constantly moving… The tools are changing, and all the things, and the responses from these learning management systems, like you talked about Blackboard, there’s been similar discourse, not identical, but similar discourse from Canvas, from Instructure, the makers of Canvas learning management system. So it feels like it’s constantly changing all the things. So for me, I’m thinking about, can I have most of my time, not exclusively, but most of my time, and attention and talents, you know, beyond students for whom, on a good day, are interested in learning, in addition to, yes, earning a credential. So that’s been important to me. Secondarily, it’s important to me that I am using the— I didn’t come up with this, but so many people out there have used a Swiss cheese analogy, or in different parts of the world, it’s called Roumi cheese. So I mean, because I still have— I’ll assign, for example, Quizlet. Quizlet, I’ll have flashcards on there, and that you can, with a paid account, you can set up assignments, and then you can have them be responsible for doing different ways of assessing the learning. Now, have I tried out to see if, for example, the Comet Browser can do it?

Bonni Stachowiak [00:25:05]:

No, but it’s kind of one of those, if that is one of the opportunities where they get some bot or agent to do that for them. That’s just one layer of the Swiss cheese, where there may be a hole. But then, there are different kinds of assessments that in some ways, I’m not trying to trick students, but you mentioned the example of, I believe you said a video-based assignment sharing the artifact, so I’ll have them screencast as an example. I teach Mike Caulfield’s SIFT fact-checking model. So there are some things that they might be able to get help from an AI bot or agent to do. But by the time they get to that, I’m not going to say it’s impossible, Marc. I’m just going to say, if you get to the point where you have cloned your voice, cloned yourself, your video of yourself, and, if you’ve led up to that, at this exact day that you and I are talking, you are such an outlier that I just don’t want to be spending exclusive-

Bonni Stachowiak [00:26:09]:

So I’m doing the Swiss cheese. I’m not doing the like somehow to imagine there is a world right now where I could 100% avoid this from happening, as somebody who does teach asynchronous and other types of hybrid classes. So anyway, I’ll let you respond to that. It was a little bit of a rant, as you can tell. It’s tough, it’s so, so hard.

Marc Watkins [00:26:31]:

Yeah, I think that we are going to absolutely have to expect students are going to be doing like they were before. Some students would do our assignments and some students wouldn’t. Some people would just blaze through it. Some would cheat, obviously. That was normal student behavior. I think that it is going to remain. I don’t think everyone’s going to be using agent systems. I think that it’s just going to become a conversation of What aspect of your class do you want to make sure is secure? And that doesn’t have to be everything.

Marc Watkins [00:27:01]:

I don’t think it’s, I don’t think it’s possible to secure all forms of assessment. I think we lose our minds. I’d be pulling our hair out, right? So be targeted, be thoughtful about it. And then, if students are using any of the other assessments, let’s work with them. Let’s talk with them about them. Let’s see how this is actually functional for them. And let’s actually have a conversation with that and say, are you using this just to save time or is this actually helpful for you in some ways?

Bonni Stachowiak [00:27:24]:

I know another area that you’re starting to do some thinking and reflecting on has to do with oral assessments. Talk to me a little bit about what you’re seeing, what’s concerning you, where’s your curiosity and skepticism coming into play with regard to AI oral assessment?

Marc Watkins [00:27:42]:

So a lot of faculty that teach in person are switching to either blue books or oral assessment to try to make sure students are actually learning what they did, which has lots of issues, both for labor and everything else. We are starting to see some AI systems being put in place that can proctor an oral assessment for you. So there was a SUNY professor, I’ll send a link to in the actual resources so you have this, but he trained an AI oral assessment bot and deployed it with his students. And then he had a council of AIs to grade it and the whole process only cost $15. All to actually sort of secure student learning against AI. And that bothers me, because the one thing about oral assessment is that it is probably the most human type of assessment we have. Generally speaking, two people are talking either face-to-face in person or synchronously with each other online. The idea that you’re now doing that with an AI system is really sort of discombobulating for me.

Marc Watkins [00:28:43]:

I will note that Caltech has now gone to an AI oral assessment system for students who are enrolling for the first time at the university, and they have a research component to their actual admissions packet. So, to ensure that ChatGPT was not used for the research, they are giving them an AI-proctored oral assessment to get into the university. So it is starting to scale and be up there. To me, the thing that kind of scares me is that a lot of faculty, if they suspect students are using AI, will try to give them an oral assessment. Well, now you might actually then outsource that to a company, a vendor that has had its own AI proctored oral assessment. So the madness of our situation is your students might be using AI, you might be using AI to construct assignments, and then if you suspect it’s not to your class policy, you might be sending it to a third party third-party service that uses an AI voice proctoring agent to assess them, which goes back to the big question of what is the point to all this, right? Like, this is getting to almost complete capture of what we do. And I think that’s where that skepticism hat comes on for me. I say that, but I also say too, the incentives are all there for people to use it.

Marc Watkins [00:29:57]:

It’s so cheap. It’s so easy to scale and push this up there. And it’s also, I think, for faculty who are very upset by AI misuse in their class, very attractive for them, because they can just turn to another system to not have to deal with it.

Bonni Stachowiak [00:30:12]:

Yes. And as we think about this, we recognize that there is no such thing as 100% on any of these fronts. And then when you start to look at who may be left out of this, then they are the historically and/or currently marginalized populations that for many of us, this is the whole deal of why we got into this work, why we remain in this work, is to serve those people well. And that definitely is not serving them well when we— but I mean, as you’ve alluded to, such good intentions behind so much of this, wanting our degrees to mean something, wanting there to be integrity in it. And I, for one, being in higher education more than 2 decades, can say that I certainly have gone awry in the past with my choices.

Bonni Stachowiak [00:30:59]:

And so I’m not certainly blaming anyone listening who’s trying these things. I entirely understand why we might wish to. As you said, there are the incentives there. I just want to— because I didn’t always— I wasn’t always aware at the time when I was making those choices of the kind of people that I might leave out through those choices, just to kind of make sure we’re acknowledging that and considering that in the conversation.

Marc Watkins [00:31:20]:

Oh, definitely, definitely. I think that is a big thing to consider too, because, again, we’re not really thinking about students holistically, right? We’re just thinking about students numerically in this situation too. Every student’s going to have differential needs. They’re going to have different experiences. Many students right now, it’s fascinating, a lot of them are trying and asking to opt out of AI. And we’re noticing a lot more of that. So we have to really be aware of these types of conversations. We have to be aware that our AI policies as faculty are part of our agency, we have to be aware that students also have their own agency within this too.

Marc Watkins [00:31:54]:

And some of them want to use these tools, others do not, and they have their own reasons for that. So we have to navigate this. I, again, I, I can’t tell anyone that’s doing anything bad because I understand the reason why. I just want people to be aware that there are going to be consequences for these actions long term that we can barely even really think about right now.

Bonni Stachowiak [00:32:13]:

Are there any particularly important reasons to bring up, that we haven’t already about why a student might want to opt out of AI that you’re hearing? Is there something important we should share with listeners around that?

Marc Watkins [00:32:25]:

So what I’ve been following too is that some students feel like a lot of their courses are becoming saturated with AI use from instructors, and they’re not feeling like the value of what they’re paying for, or their scholarship that they’re on, or something else in that situation doesn’t align with those values that they have. So, it is really important to have some conversations if you are in a situation where you’re doing some faculty development to, this might be a good time to create a student panel, invite faculty to hear their responses. Here’s some pro ones, here’s some negative ones, and start thinking about ways you could even survey your students to see about their own experiences with AI, if they want these tools. If not, one of the reasons why they don’t want them.

Bonni Stachowiak [00:33:11]:

What knowledge and skills are you starting to believe are really important for students to know about AI?

Marc Watkins [00:33:19]:

You know, the best knowledge and skill about AI is to think critically and be resilient through these changes. Don’t be exhausted by it. Don’t get to the point too where you’re so upset by it, or also don’t get to the point where you’re so lured into using the chatbot interface through ChatGPT or Gemini that you think that’s only what generative AI can do. We really, I really do think this past year that now what they call vibe coding, is going to become a far more greater skill that we’re seeing, like Claude’s Opus 4.5, and now Codex and Cowork come out from OpenAI, and Anthropic as well too. There are things you can produce with that that are absolutely stunning and amazing. That’s very different than having a simple essay or test or quiz. You can do an entire app of a game, of a simulation, of a historical time period for your students or for them to actually create for their peers too, just using natural language through one of these new interfaces. So that’s awesome, that’s amazing.

Marc Watkins [00:34:22]:

I don’t know how I— we could put that together in a curriculum though, because it changes so quickly. So to me, it’s doubling down onto what we’ve taught and done so well, that critical thinking, ethical decision-making, being curious about these things, and balancing all those different types of experiences that we have in our lives with new tools and new possibilities.

Bonni Stachowiak [00:34:44]:

You also said this earlier, but I want to bring it up before we get to the recommendations, just to encourage people that how do you do it when it changes so fast? Well, I don’t have any easy answers, but I can say back to what Marc said earlier, it’s leaving time and space to have conversations with students because as emergence is occurring, that’s the most important time for us to be having left space for conversation and the unexpected, the unknown. And I think it just goes back to what we have known for a long time, but still some of us get tempted to try to pack even more in there, is to do less, so that you have the time and space for what happens in those moments.

Marc Watkins [00:35:27]:

Absolutely. Absolutely. 

Bonni Stachowiak [00:35:28]:

This is the time in the show where we each get to share our recommendations. Today I wanted to share a link to the Civics of Technologies website. They had— it’s happened multiple years, I’m not sure how many years, but they had Privacy Week. So I’m going to link over to their Privacy Week resources, and I’m just going to read a little bit from their description: “What does privacy mean in a human society where technologies and corporations are designed to extract, share, and monetize data without public understanding or will. Who benefits? Who pays the price? And who decides?” And so what this link is, it’ll take you to— they did a couple of sessions and there are— there’s a slide deck and there is a video of one of their two sessions. One of them was not recorded, but the one— I mean, my gosh, I watched the one video and was absolutely- just so engaging and so thought-provoking. Sometimes I think, oh, you know, I’ve read a few books about privacy and I think I have a decent understanding, but I’ll tell you, I haven’t read any books about libraries specifically having to do with privacy.

Bonni Stachowiak [00:36:45]:

So the books that I’ve read have been more broad and they would of course address it, but this was just so specific to libraries. And just as one example, one of the presenters, their name is Jamie Taylor, talking about they recently changed their record-keeping processes in the library where they work to delete records of previously checked out books, but that their public library has a My Reading History that’s off by default. But I just, I thought like, I don’t even know, Mark, what what is my public library? I don’t know what they have and don’t have as far as privacy. I tend to think of, and I think I’m right about this, that libraries are more likely to be concerned about their patrons, and their privacy than I think, for example, I’m not even going to name the company, but let’s just think of a company that used to be known only for selling books and now is known for selling a lot of other things.

Bonni Stachowiak [00:37:45]:

If I were to put my trust for my privacy, I would certainly be with my either university or local library leadership. But anyway, the presenter, again, Jamie Taylor, was mentioning that her threat model is not concerning. And that tends to be where a lot of us might go of like, oh, if they knew what books I read or what podcasts I listen to, it’s not so concerning. But then when Jamie went on to say, for others it might not be. One example was given of “easily inferable health information.” I got to tell you, Mark, my head just started going to some very dark places. So I think it’s one of those where we think that, “Oh, this is not a problem for us,” but we also want to be good citizens. So what would it look like if we were advocating for greater privacy rights for all of us, and how important that is in a free society? But then as soon as I just say stuff like that, Oh my goodness gracious. So I could keep going and going.

Bonni Stachowiak [00:38:44]:

I should probably invite them to come on and share here because I’m sure that there’s so much more that we could talk about. But for now, I’m going to pass it over to you, Marc, for anything that you would like to recommend.

Marc Watkins [00:38:54]:

Oh, that’s a wonderful resource, and I totally agree too. Privacy is something I think everyone’s thinking about because data is everywhere. And now that these systems can hear and see and interact with the world too, that’s another thing to consider. So, um, my recommendation is a writing group that I’m part of, that’s just launched called the Transformers. And so Mark Marino, and Anna Mills, Annette Vee, Maha Bali, and Jeremy Douglass, and Jon Ippolito, all started this sort of group that would meet once a month and talk about provocative issues with AI.

Marc Watkins [00:39:29]:

And so, we launched our first sort of version of this where we all start talking about these AI conversations. And it’s now up on the Transformer website, which I’ll share a link. And sometimes the conversations are hilarious. Sometimes some of the things that people point out and do— Mark Marino vibe codes games. He thinks it’s going to be the next new emoji, and we’ll drop that into chat. And you players all suddenly start laughing about certain things, because some of the games he’s done too, he’s vibe coded an instructor throwing notebooks, composition notebooks at students, to try to stop them from being so overwhelmed with AI, which is a little bit funny. And Jon Ippolito, who has designed quite a few different simulations, including- there’s a great tool that he has called “What Uses More” that can analyze your actual usage of energy and water usage compared with AI systems, with other systems that we use every day for energy and the environmental impact.

Marc Watkins [00:40:27]:

So I think it’s a great group. And Annette and I are co-authoring Norton’s Field Guide to AI Aware Teaching. And so we are just kind of getting together every week and talking about all these fun, exciting, terrifying, challenging processes. And it’s a great resource.

Bonni Stachowiak [00:40:44]:

And when you talk about getting together, is this something that people would be invited to go watch after the fact, or are people listening in live, or both?

Marc Watkins [00:40:53]:

Yeah, it’s after the fact. And then there’s a website so that faculty, anyone, any audience can go and see the recordings. And we usually have a written provocative piece of material too that people can interact with too.

Bonni Stachowiak [00:41:06]:

Oh, wonderful. Mark, it’s such a pleasure to finally get to talk to you. I didn’t mention this earlier, but it’s funny to have read so many of your words. I mean, your words have been so edifying for me, and I know for just countless people around the world.

Bonni Stachowiak [00:41:20]:

And so it was so delightful! But I’m still trying to get used to like seeing you, versus your voice and hearing you, because your voice has always been in my head as I read your words. And now it’s just going to— my brain has to get retrained to like, oh, this is what Mark sounds like, this is what he looks like.

Marc Watkins [00:41:37]:

So, well, Bonni, I really do appreciate you and that it means the world to me. And I love so much that you are keeping these conversations alive and rolling.

Bonni Stachowiak [00:41:45]:

Yeah, I’m so grateful to people like you, it’s your generosity to spend this time here, and, and thanks again for this conversation and for all the others that you’re doing as well. Can’t wait to share the links with the listeners and this episode.

Bonni Stachowiak [00:41:59]:

Thanks once again to Mark Watkins for joining me on today’s episode. Today’s episode was produced by me, Bonni Stachowiak. It was edited by the ever-talented Andrew Kroeger. If you’ve been listening for a while, it would be great if you would sign up for the weekly Teaching in Higher Ed update. You’ll receive the most recent episodes, show notes, as well as some other resources that go above and beyond that. Head over to teachinginhighered.com/subscribe to get those weekly emails coming in your inbox. And we’ll see you next time on Teaching in Higher Ed.

Expand Transcript Text

TOOLS

  • Blog
  • Podcast
  • Community
  • Weekly Update

RESOURCES

  • Recommendations
  • EdTech Essentials Guide
  • The Productive Online Professor
  • How to Listen to Podcasts

Subscribe to Podcast

Apple PodcastsSpotifyAndroidby EmailRSSMore Subscribe Options

ABOUT

  • Bonni Stachowiak
  • Speaking + Workshops
  • Podcast FAQs
  • Media Kit
  • Lilly Conferences Partnership

CONTACT

  • Get in Touch
  • Support the Podcast
  • Sponsorship
  • Privacy Policy

CONNECT

  • LinkedIn
  • Instagram
  • RSS

CC BY-NC-SA 4.0 Teaching in Higher Ed | Designed by Anchored Design