
Teaching in Higher Ed

EPISODE 602

Navigating AI’s Rapid Transformation in Higher Ed with C. Edward Watson

with C. Edward Watson

December 23, 2025

https://media.blubrry.com/teaching_in_higher_ed_faculty/content.blubrry.com/teaching_in_higher_ed_faculty/TIHE602.mp3

C. Edward Watson shares about navigating AI’s rapid transformation in higher ed on episode 602 of the Teaching in Higher Ed podcast.

Quotes from the episode

I never include AI in the beginning of my processes.
-C. Edward Watson

There's a lot of incremental shifts, but the increments are quite large.
-C. Edward Watson

I would argue that maybe this is the first time in the history of higher education that we have learning outcomes that are at war with one another.
-C. Edward Watson

We've never built a curriculum for something that's changing so quickly. We're being asked to keep up with this rate of change in a meaningful way that actually serves our students well.
-C. Edward Watson

Resources

  • Teaching with AI: A Practical Guide to a New Era of Human Learning, by José Antonio Bowen and C. Edward Watson
  • Teaching with AI Website (Including Free Resources)
  • AAC&U Artificial Intelligence Resources
  • AAC&U Teaching with AI Workshops
  • AAC&U Report: The Agility Imperative: How Employers View Preparation for an Uncertain Future
  • Wharton School of Business Survey: How Are Companies Using Gen AI in 2025?
  • Shell Game Season Two
  • Caraway Cookware

ON THIS EPISODE

C. Edward Watson

C. Edward Watson, Ph.D., is the Vice President for Digital Innovation at the American Association of Colleges and Universities (AAC&U). He is also the founding director of AAC&U’s Institute on AI, Pedagogy, and the Curriculum. Prior to joining AAC&U, Dr. Watson was the Director of the Center for Teaching and Learning at the University of Georgia (UGA), where he led university efforts associated with faculty development, TA development, learning technologies, and the Scholarship of Teaching and Learning. He continues to serve as a Fellow in the Louise McBee Institute of Higher Education at UGA and recently stepped down after more than a decade as the Executive Editor of the International Journal of Teaching and Learning in Higher Education. His most recent publications are Teaching with AI: A Practical Guide to a New Era of Human Learning (second edition) (Johns Hopkins University Press, December 2025), Leading Through Disruption: Higher Education Executives Assess AI’s Impacts on Teaching and Learning (AAC&U, 2025), and the Student Guide to AI (Elon University & AAC&U, 2025). Dr. Watson has been quoted in the New York Times, Chronicle of Higher Education, Inside Higher Ed, Campus Technology, EdSurge, Newsweek, Forbes, U.S. News, EdTech, Consumer Reports, UK Financial Times, and University Business Magazine and by the AP, CNN, and NPR regarding current teaching and learning issues and trends in higher education.

Bonni Stachowiak

Bonni Stachowiak is dean of teaching and learning and professor of business and management at Vanguard University. She hosts Teaching in Higher Ed, a weekly podcast on the art and science of teaching with over five million downloads. Bonni holds a doctorate in Organizational Leadership and speaks widely on teaching, curiosity, digital pedagogy, and leadership. She often joins her husband, Dave, on his Coaching for Leaders podcast.

RECOMMENDATIONS

AAC&U Report: The Agility Imperative: How Employers View Preparation for an Uncertain Future
RECOMMENDED BY: Bonni Stachowiak

Shell Game Season Two
RECOMMENDED BY: Bonni Stachowiak

Caraway Cookware
RECOMMENDED BY: C. Edward Watson

Related Episodes

  • EPISODE 279: Applied Creativity for Transformation, with Brian LaDuca
  • EPISODE 137: Teaching Naked Techniques, with C. Edward Watson
  • EPISODE 517: Thinking with and About AI, with C. Edward Watson
  • EPISODE 490: Navigating Insecurity in Teaching, with Dave Stachowiak

TRANSCRIPT

Bonni Stachowiak [00:00:00]:

Today on episode number 602 of the Teaching in Higher Ed podcast, Navigating AI’s Rapid Transformation in Higher Ed with C. Edward Watson. Production Credit: Produced by Innovate Learning, maximizing human potential.

Bonni Stachowiak [00:00:23]:

Welcome to this episode of Teaching in Higher Ed. Hi, I’m Bonni Stachowiak, and this is the space where we explore the art and science of being more effective at facilitating learning. We also share ways to improve our productivity approaches so we can have more peace in our lives and be even more present for our students. In this episode, I’m joined by C. Edward Watson, Ph.D., Vice President for Digital Innovation at the American Association of Colleges and Universities and founding director of AAC&U’s Institute on AI, Pedagogy, and the Curriculum. Eddie has long been at the forefront of helping institutions navigate emerging technologies, and his recent work, including Teaching with AI (second edition), Leading Through Disruption, and The Student Guide to AI, offers a practical and informed perspective on the changes shaping higher education. Our conversation, as you’ll hear, traces the rapid changes we’ve been experiencing with AI on our various campuses, from the concerns that came very quickly about cheating to broader questions around AI literacy, discipline-specific integration, and what it looks like to help prepare students for a workforce being shaped by artificial intelligence. Eddie and I explore how faculty can respond to the disorienting pace of change in AI models, what academic integrity looks like in an era of agentic tools, and why he and his co-author José Bowen say nearly every job will require at least some level of AI competence in the future.

Bonni Stachowiak [00:02:23]:

We dig into his analysis of emerging AI literacy frameworks, the tensions we experience around assignment design, and what institutions must prioritize as they adapt. Eddie Watson, welcome back to Teaching in Higher Ed.

C. Edward Watson [00:02:40]:

Bonni, thanks for having me back.

Bonni Stachowiak [00:02:42]:

It’s so good to get to be in conversation with you again. I’ve been so looking forward to this time, and I need you to orient us a little bit. It’s been a minute since we talked. We last spoke in the first half of 2024, and so much has changed. So I’d love it if you would just start out by giving us a little bit of an arc of kind of where we’ve been and, it’s always risky to do this, but kind of sort of where we stand today. We also both know that where we stand today, there’s still a lot of change, it feels like, but give us a little sense of that arc.

C. Edward Watson [00:03:18]:

Well, I think the arc is almost one of the more things change, the more they stay the same. I mean, if you think back over three years now, since generative AI burst onto the scene, at least for the broader public, the first semester for us was discovery. Wait, this exists? Do students know about this yet? What can it really do? And then once we got a sense of what it really could be used for, then it’s almost like the next academic year, it was really kind of about this conversation around cheating. So for the first 18 months, it was discovery and then concern. But then at the same time that we were having these conversations within sort of our higher ed bubble, the world of work was rapidly adopting generative AI in lots of different disciplines and contexts. So then there was this shift, probably 18 months in, where institutions started thinking about, oh, wait a second, this seems like it’s not the next Second Life. This is really here to stay.

Bonni Stachowiak [00:04:18]:

And.

C. Edward Watson [00:04:18]:

And that the world of work is figuring out how to implement this in a lot of interesting ways. Is this something that we need to indeed teach our students to do and to do well? So then there was this shift, or sort of broadening of the conversation, from not just how do we prevent students from using AI to, wait, do we need to engender AI literacy, AI skills? And what does that look like from a curricular standpoint? And this notion of building that into maybe general education or part of a first year experience, or maybe the capstone components right before students graduate and indeed join the world of work or go on to the next stages of their life. So the conversation broadened to, well, not keep it out of the classroom, but keep it out appropriately. But then there’s places that we do indeed need to be engendering these skills. So a broader conversation about curriculum and pedagogy, and then how we ourselves, faculty, might leverage it to maybe improve the quality of what we’re doing, maybe save us some time here or there. But while that’s sort of been the broadening of the conversation, and indeed, you know, AAC&U has an Institute on AI, Pedagogy, and the Curriculum that has been focusing on helping institutions make these kinds of curricular and pedagogical shifts. However, recently we did a survey of faculty, and the number one concern that they expressed was academic integrity. So while things have changed and the conversation has broadened, those of us that are in the classroom from one week to the next are still struggling with the challenges associated with academic integrity that we first realized when generative AI came onto the landscape.

C. Edward Watson [00:05:54]:

And then, of course, there are new challenges now as agentic AI has become realized, where it’s not just a passive form of AI where you ask it for something and then it would provide you with an answer. Now you can kind of give it a task to perform and it can go and do that. And this indeed includes asking an agentic AI to complete tasks within the learning management system for you. So that’s completely different. Not completely different. It’s different than the initial generative AI push. We have new conversations and new challenges that are emerging. But again, while we might be thinking institutionally and preparing for the workforce and things like that, those of us in the trenches, in the classroom, are still very much struggling with questions of academic integrity.

Bonni Stachowiak [00:06:42]:

I think about there really being, and thank you so much for rooting us in the arc, two changes where the former capabilities still exist as myths. One being that these large language models can now search, whereas previously, I remember so vividly, Eddie, where it would be like, oh, well, it can only search within the model that it’s been trained on. So for me, if I ever was experimenting and saying, you know, what did it know about Eddie Watson coming on Teaching in Higher Ed? At the time, I think it was only trained up through 2021 or 2022. So if it hadn’t happened yet, then it didn’t exist as far as the large language models knew. And a lot of faculty that I talk to believe that that’s still the case and haven’t quite understood yet how these search capabilities change the picture. And then, as you mentioned, Eddie, holy cow. I mean, I’ve felt so grateful for the people willing to share, almost in real time on various social media platforms, the visceral reaction that they have discovering their entire. I was going to say the name of the learning management system, but it kind of doesn’t matter which one I name, so I’ll refrain from naming it, because it’s certainly not a commentary on that specifically.

Bonni Stachowiak [00:08:11]:

But to have watched that, and just for me to see the emotions behind that, there’s a lot there, as you said, a lot there for us to process. Is there anything else that’s coming to mind for you in terms of this arc that might be important for thinking about as we continue our conversation?

C. Edward Watson [00:08:33]:

Well, that was a key nuanced shift, whenever there was a live connection established for the large language models, or the companies that produce those tools, the tools themselves, when there used to be just sort of a closed, defined data set. And then they would publish, like, this is from April of 2023, and it might be September, but, you know, anything that’s happened since then, you don’t really have access to or really can’t ask about. Well, now that you have live access, you can go to a large language model and say, what time does Nuremberg play at the local theater? And it can bring up movie times and things like that. So it was definitely a leap forward, for sure. That occurred about a year ago, but that definitely was one of those. It’s interesting. It’s like there’s a lot of incremental shifts, but the increments are quite large. And so when you have 10 reasonably large incremental shifts, it really feels like, wow, there has been this expansive change within these tools in a fairly short amount of time.

Bonni Stachowiak [00:09:34]:

Thank you for saying that. And one of the things you mentioned, academic integrity. I’ve been so fortunate to be able to talk to people whose career span and expertise extend well beyond November of 2022, when ChatGPT, the chat-based large language model, got released. And in terms of these incremental shifts you’re talking about, I feel so much empathy toward people for whom this is so disorienting. And I certainly have experienced that and will continue to experience my own sense of disorientation. This is not easy for any of us. None of us asked for this. Which, in a future episode, dun dun dun, your co-author José does mention, something about none of us asking for this.

Bonni Stachowiak [00:10:19]:

So I know you and he resonate with that as well. But I think, while I do feel empathy, I also think about the, wrongheadedness is too strong of a word. And yet I don’t like to leave long pauses during podcast interviews, so I’m having trouble coming up with it, because wrongheadedness sounds judgmental in a way that I don’t intend for that word to mean. But if we’re orienting ourselves as educators toward avoiding cheating, and if cheating equals any use of artificial intelligence, could you help break that down, maybe find a better word? Because you seem far more diplomatic than I am in life, Eddie, and so can you help me find a better word than wrongheadedness, but also, like, how might we instead think about what could replace that? I’m going to design my classes around stopping people from cheating, and cheating equals any use of AI. Help me break that formula down to maybe a better word and maybe a better aim.

C. Edward Watson [00:11:23]:

Yes, I’ll even broaden that a little bit further, because I think that it’s like a matrix of decisions rather than this binary of, like, either they can use it or they can’t. I mean, you can kind of start maybe with the larger notion that maybe we would all agree that, regardless of institution type, part of the purpose of higher education is to prepare students for life beyond graduation, which would include the world of work and lifelong learning scenarios and maybe grad school or whatever it might be, but that we have a purpose for higher education that is greater. That’s the aspiration. And indeed, the vast majority of our students that are in college today, they’re there for social mobility. They’re there to have maybe a better life than their parents did. It’s partly about finding the job. For the vast majority of our students, it is about career.

C. Edward Watson [00:12:20]:

So if we accept that notion, and if we see how the world of work has rapidly adopted AI, the question then might become, is this something we should be teaching our students to do well and ethically? And many institutions have concluded that, yes, that is something that we have to incorporate into the curriculum. Not that everyone’s teaching AI, but that at each institution, someone should. So then the question is, okay, we’ve got that corner, or that one line of thinking, right, that we should be teaching AI to our students. But then on this other end of the spectrum, the other end of the matrix, maybe there’s this notion, why would we want to teach our students to be power users of a tool that they could use to cheat with, number one? But then there’s even a second line of thinking, not just that they might use it to cheat with, but that it might actually diminish their learning. Indeed, I would argue that maybe this is the first time in the history of higher education that we have learning outcomes that are at war with one another. For instance, we have never said we can’t teach our students written communication.

C. Edward Watson [00:13:27]:

That’s going to mess with their critical thinking skills. No, they actually amplify one another, right? I mean, even if you were looking at disparate ends of things, you wouldn’t say that quantitative reasoning is something we shouldn’t teach in our creative courses. You know? No, no, these are all working together. But AI as a learning outcome actually has the possibility of diminishing achievement of these other outcomes. And so as we even drill down further to sort of this notion of cheating, the definition of cheating has become really murky. I mean, there’s one use case of AI that, whether you’re a student, administrator, faculty member, or parent, we would all agree is cheating. I go to AI, I ask it to write this five page paper. I then copy and paste it, put my name on it, and turn it in.

C. Edward Watson [00:14:12]:

Everyone says that’s cheating. But what if I get that paper assignment from you and I go to AI and I say, provide me with a detailed outline regarding how I should respond to this prompt. And so I get this detailed outline, and then for me as the student, it’s a bunch of micro writing assignments, right? Okay, I need an introduction. Okay, four sentences of introduction. Okay, statement of purpose.

C. Edward Watson [00:14:34]:

Okay, some sentences about that. Okay, so talk about the variables or whatever. So it’s just a bunch of micro writing assignments. So is that cheating? Yes or no? In fact, we provided scenarios to administrators about a year ago, and there was a real split. About half said that was cheating; half said that either it wasn’t cheating or they weren’t sure whether that was cheating or not. There was this real bimodal distribution about whether this example that I’m sharing is actually cheating.

C. Edward Watson [00:15:03]:

But maybe cheating’s beside the point. Is it likely that me asking for that detailed outline might diminish my learning associated with that assignment? Almost assuredly. I’m not struggling with what goes in the paper and what doesn’t go in the paper. I’m not struggling with what to emphasize and de-emphasize, what the order of ideas should be. And in truth, every assessment we give our students is an act of pedagogy. There’s lots of learning that takes place when students write papers or work as a team or deliver a presentation, or indeed even take a multiple choice test. So is AI usage in this way likely to diminish achievement of the other learning outcomes that were designed into that assignment? I think the answer is yes.

C. Edward Watson [00:15:45]:

So there’s this matrix, and this is really a grand challenge for higher ed, where we need to be engendering these skills to prepare students for life beyond graduation, but doing that could actually have negative impacts on their learning or achievement of other learning outcomes. And indeed, perceptions of what is cheating vary from one faculty member to the next, and AI could very much be used for cheating, even in somewhat subtle ways, such as provide me with an outline, or I write the whole paper and then I ask for feedback. I mean, the AI detection tools wouldn’t catch either of these examples of what might be termed cheating, but they definitely could diminish learning. So it’s a really complicated landscape, and that’s just sort of a first blush, not even talking about agentic AI. But that’s sort of the landscape that we’re on and the tensions that many institutions and many faculty members in their own classrooms are struggling with today.

Bonni Stachowiak [00:16:40]:

I so appreciate your breaking the dichotomous thinking up. It just gets us into so much trouble. And thinking about things on a continuum, but also a multifaceted continuum, can be so helpful for us, and getting curious about where people who think differently than we might have formed their beliefs and perceptions can be such a healthy thing. You and José Bowen, your co-author, make a bold claim in this book, in this second edition: AI will change every job. All right, Eddie, it’s time. Back up your claim.

C. Edward Watson [00:17:21]:

Well, I mean, there have been studies that have shown that this is likely what we’re going to see moving forward. I mean, a couple of years ago, a group of Stanford researchers looked at the Federal Job index, which contains 961 job descriptions, and they concluded that every one of those job descriptions had at least one task that might be done better by AI. That doesn’t mean these jobs are going away, but it means there might be more and more of an expectation that AI would be a skill or component that would be required. And actually, a recent report from AAC&U, we do these employer surveys every two or three years. We’ve done them for almost 20 years now. But in our last iteration, so published late in 2025, we asked employers, we had over a thousand different employers, we asked them this question. How important is it for college graduates to have developed skills related to the use of artificial intelligence while they were in college? 91% of employers say, yes, this is important. This is something that students should develop skills around before they enter the world of work.

C. Edward Watson [00:18:29]:

So that’s a broad swath of the world of employment. I was giving a talk recently and someone asked me, well, which fields are immune? Which fields aren’t going to be touched by AI? And I sort of shrugged and I said, I don’t know, maybe the field of dance. And then a dance professor raised her hand and she said, actually, in my junior level course, we’re using AI to do set design. And she described how she was re-envisioning certain plays a thousand years into the future and how they might be reframed in terms of set design, at least. So I’m hard pressed to find corners where we may not see that. Another study, from the Wharton School of Business late in 2025, surveyed employees, and over 80%, over four out of five employees, are now reporting that they’re using generative AI at least once a week in the work that they do. So that’s just a broad swath of the world of work. And so we make the bold claim.

C. Edward Watson [00:19:33]:

I bet there’s someone somewhere that can say, ah, this isn’t going to touch this field. And that indeed might be the case. But I think the vast majority of positions in the world of work are going to be touched in either small or large ways by generative AI.

Bonni Stachowiak [00:19:51]:

As you and José have talked to so many people across all these different contexts, how are you thinking about, is it skills, is it literacy, is it competencies? How have you kind of considered what it looks like to try to define different sets of skills in these unique contexts, different kinds of disciplines or different kinds of institutions, different types of teaching styles?

C. Edward Watson [00:20:18]:

Well, that’s an interesting question. I mean, José and I sort of collected all of the frameworks, for lack of a better word, that have emerged. And there’s maybe nine that seem to have gained some real traction across higher ed or, more broadly, the world of education. I mean, UNESCO has one, the Open University has one, Barnard College had an earlier one, Stanford has one. I mean, there’s a number of them that are out there. And he and I tried to develop a bit of a meta model that looked at all of this other work and what are all of the commonalities and which pieces seem to be really important. I mean, there are some that had a discrete feature that, you know, didn’t show up anywhere else. And then there’s lots of elements that were across all of the models that we’ve seen.

C. Edward Watson [00:21:03]:

So, I mean, there’s going to be, you know, I guess, a lot of different ways to kind of conceive of AI literacy. And most of these models that are out there kind of talk about it in sort of a generic version of AI literacy. But I think there’s a companion to that sort of vision of AI literacy that is within the discipline, because there are many tools and strategies and even sort of ethical practices that will vary from one discipline to the next, or there are tools that are very specific to that discipline. I mean, even take the field of nursing versus primary care physicians. Note takers for primary care physicians are seen as a positive tool, because there’s more eye contact and you can sort of be more present with someone that’s in your office, whereas I’ve heard feedback about note taking tools for nurses making rounds, that actually it serves as a distraction, that they actually don’t pay as close attention as they did when they had to listen and process. So in other words, they don’t know the people on their floor during their shift maybe as well as they did previously. So, I mean, even what is best practice in two fields that seem really close together, right? You know, nursing and maybe, you know, your family doctor, there’s different views. So there’s a generic version of AI literacy, and there’s probably some commonalities about that; maybe that’s what belongs or should be taught within the general education curriculum. But probably we need to also have a second place, within the major, where we teach our students the tools and the practices and the norms and the ethics of using generative AI within that world of work.

Bonni Stachowiak [00:22:42]:

When we last spoke, for the first edition of the book, I remember us coming across the conversation. It was something like, holy cow, what just happened? I don’t think I quite said holy cow, but it was something like, what just happened? And then I remember the closing third of our conversation was around things that were making you go, oh, wow, just like, what? You know, and I can’t even begin to tell you, Eddie, the way that that just carried with me. And not just me. I mean, so many people would contact me and say some of the things that you were excited about at the time were very intriguing and got a lot of people curious. So are there any things that are coming up, either with your own personal exploration as a human being who uses computers and the teaching that you do in leading people in higher education, anything that’s making you go, wow, this is pretty useful or surprising, or anything like that that comes to mind today?

C. Edward Watson [00:23:43]:

Well, I guess one example is maybe quite personal to me. And then there’s another example that I think is likely to touch all institutions. But for me, in my own use of AI, you know, AI has a voice that has a little bit of an arrogance. Like, you know, anytime you ask it for anything, it’ll say, well, you know, I’m glad you asked. And then it’s got a very confident answer that it provides us. And I found that, you know, anytime you ask AI anything, it’s going to give a reasonable response that sounds good. And I guess I very quickly discovered that if I go to AI as the first step in my process, I kind of get lost, because I hear that response and I go, that’s pretty good.

C. Edward Watson [00:24:28]:

And so I kind of move on. And now, for me, I just never include AI in the beginning of my processes. It’s like I do work first to kind of make sure that I’m present and I have my ideas out there. And then, you know, in the same way that I might go to you or José or, you know, a friend or colleague to ask for feedback on something, then I might go to AI and then get an additional perspective that I then either synthesize or discard. So I guess for me, I just sort of found that, and it’s a bit of a concern, but for me personally, I definitely recognize that it was very real, that it might be easy for us to get lost, our ideas.

C. Edward Watson [00:25:09]:

Our human perspective, the one very specific to us, first person central. Like me losing my perspective, because it’s just so easy to kind of go with the AI perspective. So my ideas, my thoughts, my creativity could get submerged. So as a practice for me, and really what I recommend for students and faculty, is maybe you start first and then get feedback rather than going straight there. Of course, it depends on the task, right? If it’s something where it just doesn’t matter much, you just need to get it done.

C. Edward Watson [00:25:38]:

You know, maybe going straight to AI in that case, where your inventiveness isn’t necessarily valued, where you don’t value your inventiveness very much around that task, like an email response to the dean or whatever, and you just need to have a polite response, then, yeah, okay. But I guess that’s one thing, just this notion of recognizing how easy it might be for us to lose ourselves in our collaboration with AI if we kind of allow it to be the captain of the ship and go first, as opposed to us going first and kind of refining our ideas and then maybe as a later step in the process asking AI to come in. So that’s just sort of a personal practice, though I’ve heard others echo that notion as well. So that’s one thing. But I guess the thing that has made me go. When I first saw AI in November of 2022, my son Carter showed it to me, and the first words out of my mouth were, oh, no. I wasn’t excited about what it could do.

C. Edward Watson [00:26:38]:

I instantly saw the academic integrity challenges it was going to bring forward, and the work that that might cause for me, and the disruption for higher ed that was likely to come along. And I had the same response whenever someone gave me some advice to try the agentic web browser from OpenAI. And it was again another, like, oh, this is really not. I see the value in this for many different settings, but for higher ed specifically, this is going to be a new academic integrity nightmare. In fact, I predict that in 2026, this is what we’re going to be talking about: how do we solve this? Especially those that are offering fully online asynchronous programs of study. Those are the ones that I think are going to have the greatest challenges associated with the emergence of fairly realized agentic AI.

Bonni Stachowiak [00:27:35]:

I love hearing you talk about those, like, just all the juxtaposition of all these different feelings. And I also hear you saying some of them came in arcs and waves. But also we can probably have 17 different feelings in just this conversation about it because it is so murky and muddy and messy as far as how it impacts our intersectional identities. I would like you to close this part of the episode before we get to those recommendations with talking a little bit about the future.

C. Edward Watson [00:28:04]:

Wow. I think anyone that could accurately predict the future, they’re probably no longer on podcasts. They’re on a beach somewhere because they’ve been able to make those predictions. I feel like what was once a murky future now has some clarity, again, this notion that in higher ed, part of our mission is around preparing our students for life beyond graduation. I think most institutions are going to lean into developing curricular structures where their students will gain the skills that are needed once they graduate. So I think that’s something we’re going to see more and more of. I mean, at AAC&U, we have this Institute on AI, Pedagogy, and the Curriculum specifically to help campuses toward those.

C. Edward Watson [00:28:48]:

Those goals that, you know, lie ahead of us. But there’s, you know, there’s chasing something versus doing something well. And I think that that is the position that we’re in, sort of like, how do we do this well? And we have to move with agility, but we also need to do this well. And, you know, one thing that we’re really good at in higher ed is curriculum reform. What we’re not good at is curriculum reform fast.

C. Edward Watson [00:29:18]:

And we’ve never built a curriculum for something that’s changing so quickly. You know, we might spend two years to put together a good curriculum and courses and course descriptions and syllabi and all of those great things, and then now we’re ready to launch it. And then what’s on the other side of agentic AI emerges, or there’s another big technological shift. So, you know, we’re being asked to perform a task that we’ve never been asked to do before: keeping up with this rate of change in a meaningful way that actually serves our students well. I mean, that’s what’s ahead of us in the next couple of years. But then we also have new challenges, you know, working with agentic AI and discerning how to ensure the integrity of our curriculum and our degrees in light of tools such as this that have emerged. So that’s.

C. Edward Watson [00:30:10]:

Those are the challenges. I mean, I think for the next two years, I can see that that’s the work that we’re going to be doing, and the overarching challenge that’s ahead of us. And to be honest, we really haven’t solved the generative AI challenge that emerged three years ago. You know, it’s still problematic and a grand challenge for everyone that teaches a class in higher education.

Bonni Stachowiak [00:30:32]:

This is the time in the show where we each get to share our recommendations. And Eddie, you mentioned mine a little bit ago. I found some hope in a report that came out from AAC&U, the organization that you work with, and I’ll just read the headlines and then I’d love to have you share, because you’re far more familiar with it than I am. I think it’d be fun to hear you share a bit. So the headline goes, New National Survey Finds Strong Employer Confidence in Higher Education. I believe, actually, you didn’t mention this report. You mentioned another report.

Bonni Stachowiak [00:31:05]:

But that came out around a different topic. So, yeah. New National Survey Finds Strong Employer Confidence in Higher Education. Findings show clear alignment between liberal education outcomes and evolving workforce needs. Eddie, we don’t often get happy stories. I mean, maybe happy is not the right word, but we don’t often get to see, like, wow, at least there’s some evidence here of some value, value around things that matter to those of us that care about the common good and about what it means to have a wonderful education that includes, you know, a liberal arts education within it.

Bonni Stachowiak [00:31:47]:

Tell us what these findings were and is it bringing you any hope the way it’s bringing me some.

C. Edward Watson [00:31:52]:

Well, yeah, it definitely does bring me some hope. And this actually is the same study that has some of those questions nested within it. So the survey was pretty broad, structured to cover a lot of different topics. But, I mean, higher ed is fairly embattled, and there’s some challenges that maybe we partially own, such as the student debt crisis, but certainly it’s become very politicized, and there’s been this diminishing of the value of higher education in some circles and how things have been framed. So it’s the. I guess, boy, I don’t even. Oh, I hate this metaphor that just came to my head. But kind of the greatest consumers of our product, if you will.

C. Edward Watson [00:32:30]:

Those that hire our students. That’s much better. I hate that metaphor now that I just said it out loud. But they’re pleased with what we’re doing. I mean, students leave our campuses, they go to work, they’re positioned well. They have been educated in such a way that they can do the work that awaits them beyond graduation. And the employers, who would be the best judge of that.

C. Edward Watson [00:32:56]:

Not politicians, not parents, but those that are having to assign tasks and manage our former students. They’re finding that the students that they’re receiving are well prepared for the work within their businesses and their institutions. So, yeah, that’s a very positive data point on this broader landscape, that there are things that we do quite well and that we’ve done quite well for decades and decades and that we’re obviously continuing to do quite well.

Bonni Stachowiak [00:33:28]:

My second recommendation, we’re recording episodes out of order, but I still just feel so strongly about it. I gotta recommend season two of a podcast; I’ll recommend season one of it when you all get to hear Brian Alexander come back on the podcast in the coming weeks. So the podcast is called Shell Game, and I’ll let you listen to the episode with Brian Alexander to hear a little bit about what season one was all about. Essentially, the brief version is that a guy, a journalist, and also someone who is more entrepreneurial, goes out and starts a company with AI workers. And if that just made you bristle and go, why would anyone ever want to listen? I’m telling you. I have to actually read a text from my friend, my dear friend Jackie, who also listened to season two along with me, because I just have to read her words. So this was a text to a group of us. She says.

Bonni Stachowiak [00:34:28]:

So I’ve now listened to season two of the Shell Game podcast. And here are all of her descriptions, Eddie: fascinating, inspiring, hilarious. It is hilarious. I can back that up. Disturbing, thought-provoking, exhausting, challenging.

Bonni Stachowiak [00:34:45]:

Scary. Intriguing. One of the things I love about this podcast is that all of those feelings all mishmashed together is what my experience has been. I have felt those things in terms of my own teaching and my own work and my own sense of what it means to be human. And it was so delightful that she could pack so many of those adjectives in. I thought, yes, I feel those things while I am listening to these episodes. I also think so many of us would resonate, because we have felt those things as we’ve been, you know, going through this experience of what does this mean, what does this change, what hasn’t changed, you know, not wanting to buy too much into the hype. And we are still humans, humans being very unique and wonderful.

Bonni Stachowiak [00:35:27]:

And so it’s delightful. I will warn you, if you do decide to listen, that you will have visceral feelings. I think that’s important, though, that we recognize what it is about that that just happened. Rob Park, also on this chat, Jackie’s husband, mentioned he had to take the episodes a little slower. Really hard to listen to, you know, so you might need to take it slow. You might be binging like Jackie and I. But I do think this is an important opportunity for us to be able to reflect on, well, why is it, what is it that’s bumping up against in me? And I hope that you’ll let me know if you’re listening.

Bonni Stachowiak [00:36:08]:

Because I can’t wait till we’re all caught up with each other, so all of us friends can get together and have a good conversation about this very unusual circumstance. And every episode, I just want to sit down immediately and talk to people about it. I have not found a podcast this binge-worthy in a very, very long time. It does get you very curious to find out what on earth is going to happen next in the next episode. So I guess I pass it over to you, Eddie, for whatever you’d like to recommend.

C. Edward Watson [00:36:35]:

Well, something sort of way out of the domain of media and higher education. So, really, since the first edition of the book, I’ve been on the road a lot and eating a lot of road food, and whenever I’m home, I want to cook and have fresh food. And a friend of mine recommended, because I’m not necessarily the best cook, right, a friend of mine recommended a brand of cookware called Caraway. And so I bought, like, a skillet, and it was amazing. Number one, it’s, like, non-toxic. There’s no Teflon, there’s no anything like that. And it cleans up.

C. Edward Watson [00:37:09]:

It’s just so easy for cleanup. And so it’s just made, like, cooking fun. So that’s my recommendation, is that, you know, try a piece of Caraway cookware, and I think you’ll be really pleased and surprised. It’s just made the cooking experience better. The parts about cooking that I don’t like are usually the cleanup, which often has some burnt stuff here or there. Caraway just makes it where, you know, you kind of get to the food quick, and the cleanup takes seconds. And, yeah, it’s just really sort of changed my relationship with the kitchen.

Bonni Stachowiak [00:37:42]:

Yeah, you quite literally just described me as a cook, too. You could have just inserted my name into what you just said. And those Caraway cookware things get advertised to me all the time on Instagram, and I think, don’t do it. Don’t do it. They never live up to the promise. So you have just given me the perfect excuse to actually give way to my temptation every time I see them. So if you burn something in the pan, not to say that you would ever do that, but let’s say, hypothetically, I ever did that. It does.

Bonni Stachowiak [00:38:09]:

It can handle that.

C. Edward Watson [00:38:11]:

One of my best friends from high school and college, we ended up going to college together. His name’s Howard Petrozello, and his grandmother from Italy has this fantastic recipe for spaghetti sauce. I mean, it is just fantastic. It takes about six to seven hours to make. I have always burned the bottom of the pan, always. And so the cleanup has made me not want to make it, because the cleanup might literally take me a half hour to kind of, like, get all of that. And so I got a big, giant pot from Caraway and made that spaghetti sauce.

C. Edward Watson [00:38:46]:

And number one, the flavor was a little different, because there was nothing burnt within. But it was, you know, cooking on the exact same stove, just a different pot. The cleanup was 90 seconds. I mean, I was elated, because I’ve had decades of experience of cleaning the burnt pot, and this was just so easy. Literally, get one piece and try it first and, you know, make sure that it lives up to the hype that I’m providing.

C. Edward Watson [00:39:15]:

But, yeah, it’s been a little bit of a life changer for me, and it makes me feel more confident going in and out of the kitchen.

Bonni Stachowiak [00:39:22]:

It probably is a good time for us to mention that today’s episode is sponsored by. No, I’m just kidding, friends. That’s what makes this recommendation even more special, is that it’s uniquely suited, at least to me and, I’m sure, many of our listeners. So thank you for that. Thank you for this second edition of the book. And I know you’ve been doing so much work to get out there and grapple with and wrestle with and lead and model and just help us find some guidance in what’s definitely a very challenging time in higher education. Thank you for the work that you and your colleagues do, and I’m so looking forward to the next time we get to chat.

C. Edward Watson [00:39:59]:

Well, you’re so kind, Bonni. Thanks for having me here. It’s been a great conversation.

Bonni Stachowiak [00:40:05]:

Thanks once again to Eddie Watson for joining me on today’s episode of Teaching in Higher Ed. Today’s episode was produced by me, Bonni Stachowiak. It was edited by the ever-talented Andrew Kroeger. If you have been listening for a while and have yet to sign up for the weekly update from Teaching in Higher Ed, I encourage you to head over to the website teachinginhighered.com/subscribe. There, you’ll be able to sign up for the weekly update, which contains the most recent episode’s show notes, as well as some other resources that go above and beyond those show notes. Thank you deeply for listening, and I’ll see you next time on Teaching in Higher Ed.

Teaching in Higher Ed transcripts are created using a combination of an automated transcription service and human beings. This text likely will not represent the precise, word-for-word conversation that was had. The accuracy of the transcripts will vary. The authoritative record of the Teaching in Higher Ed podcasts is contained in the audio file.
