
Teaching in Higher Ed

EPISODE 569

A Practical Framework for Ethical AI Integration in Assessment

with Mike Perkins & Jasper Roe

May 8, 2025


Mike Perkins and Jasper Roe share a practical framework for ethical AI integration in assessment on episode 569 of the Teaching in Higher Ed podcast.

Quotes from the episode


We wanted to be flexible and have some opportunities for students and faculty to really have open conversations about how AI might be suitably used given the individual circumstances and the cultural context.
-Mike Perkins

One of the things that is happening that we can't deny is that the rate of hallucinations is going down. The capabilities are getting better and better.
-Jasper Roe

Criticality and pessimism aren't the same thing, especially when it comes to GenAI models.
-Jasper Roe

Resources

  • Updating the AI Assessment Scale, by Leon Furze
  • The Artificial Intelligence Assessment Scale (AIAS): A Framework for Ethical Integration of Generative AI in Educational Assessment, by Mike Perkins, Leon Furze, Jasper Roe, & Jason MacVaugh
  • Nick McIntosh
  • Artificial intelligence and illusions of understanding in scientific research, by Lisa Messeri & M. J. Crockett
  • Amelia King
  • Jane Rosenzweig’s Bluesky post: Schitt’s Creek: The Sequel (Bluesky login required to view)
  • Jane Rosenzweig’s Breakfast Club AI-generated photos mixed with real ones (login required)
  • SIFT Toolbox for Claude (and ChatGPT) Released, by Mike Caulfield
  • Strava
  • Garmin
  • AI and the Future of Higher Ed, by Nick McIntosh
  • The Residence


ON THIS EPISODE

Mike Perkins

Head of the Centre for Research & Innovation

Dr Mike Perkins heads the Centre for Research & Innovation at British University Vietnam, Hanoi. He is an Associate Professor, leads GenAI policy integration, and trains Vietnamese educators and policymakers on this topic. Mike is one of the authors of the AI Assessment Scale, which has been adopted across more than 250 schools and universities worldwide and translated into 16 languages. His research on GenAI’s impact on education has explored AI text detectors, attitudes to AI technologies, and the ethical integration of AI in assessments through the AI Assessment Scale. His work bridges technology, education, and academic integrity.

Jasper Roe

Jasper is an Assistant Professor in Digital Literacies and Pedagogies at Durham University, UK. Prior to joining Durham, Jasper held senior leadership positions as the Head of Pre-University at British University Vietnam and Head of the Language School at James Cook University Singapore. Aside from academic leadership, Jasper has taught a wide variety of subjects, including educational research methods, sociology and anthropology, English language education, and training in-service teachers on technology use. Jasper's early studies in corpus linguistics and discourse analysis, along with his experience as an English for Academic Purposes (EAP) teacher, led to a natural interest in language applications such as Automated Paraphrasing Tools (APTs) and Digital Writing Assistants (DWAs). Since 2022, he has focused more fully on the implications of Generative AI (GenAI) in educational settings and new methods of fostering AI literacy in higher education.

Bonni Stachowiak

Bonni Stachowiak is the producer and host of the Teaching in Higher Ed podcast, which has been airing weekly since June of 2014. Bonni is the Dean of Teaching and Learning at Vanguard University of Southern California. She’s also a full Professor of Business and Management. She’s been teaching in-person, blended, and online courses throughout her entire career in higher education. Bonni and her husband, Dave, are parents to two curious kids, who regularly shape their perspectives on teaching and learning.

RECOMMENDATIONS

  • SIFT Toolbox for Claude (and ChatGPT) Released, recommended by Bonni Stachowiak
  • Bend App (iOS & Google Play), recommended by Bonni Stachowiak
  • Strava, recommended by Jasper Roe
  • Garmin, recommended by Jasper Roe
  • AI and the Future of Higher Ed, by Nick McIntosh, recommended by Mike Perkins
  • The Residence, recommended by Mike Perkins


Related Episodes

  • EPISODE 554: Classroom Assessment Techniques, with Todd Zakrajsek
  • EPISODE 209: Antiracist Writing Assessment Ecologies, with Asao Inoue
  • EPISODE 259: Intentional and Transparent Assessment, with Natasha Jankowski
  • EPISODE 370: Toward More Equitable Assessment, with Erin Whitteck and Douglas Fritz

TRANSCRIPT

Bonni Stachowiak [00:00:00]:

Today on episode number 569 of the Teaching in Higher Ed podcast, a practical framework for ethical AI integration in assessment with Mike Perkins and Jasper Roe.

Production Credit:

Produced by Innovate Learning, maximizing human potential.

Bonni Stachowiak [00:00:25]:

Welcome to this episode of Teaching in Higher Ed. I’m Bonni Stachowiak, and this is the space where we explore the art and science of being more effective at facilitating learning. We also share ways to improve our productivity approaches, so we can have more peace in our lives and be even more present for our students. How can educators integrate generative AI into assessment in ethical, transparent, and pedagogically sound ways? In today’s episode, Mike Perkins and Jasper Roe introduce the AI Assessment Scale, a five-level framework that helps us navigate the complex terrain of generative AI in education. Doctor Mike Perkins heads the Centre for Research and Innovation at British University Vietnam, Hanoi. He’s an associate professor and leads GenAI policy integration and trains Vietnamese educators and policymakers on this topic. Mike is one of the authors of the AI Assessment Scale, which has been adopted across more than 250 schools and universities worldwide and translated into 16 languages. His research focuses on GenAI’s impact on education and has explored various areas within this field.

Bonni Stachowiak [00:01:52]:

This has included AI text detectors, attitudes to AI technologies, and the ethical integration of AI in assessments through the AI Assessment Scale. His work bridges technology, education, and academic integrity. Jasper Roe is an assistant professor in Digital Literacies and Pedagogies at Durham University in the United Kingdom. Prior to joining Durham, Jasper held senior leadership positions as the head of pre-university at British University Vietnam and head of the language school at James Cook University Singapore. Aside from academic leadership, Jasper has taught a wide variety of subjects, including educational research methods, sociology and anthropology, English language education, and training in-service teachers on technology use. Jasper’s early studies in corpus linguistics and discourse analysis, along with his experience as an English for Academic Purposes teacher, led to a natural interest in language applications, such as automated paraphrasing tools and digital writing assistants. Since 2022, he has focused more fully on the implications of generative AI in educational settings and new methods of fostering AI literacy in higher education. Mike Perkins and Jasper Roe, welcome to Teaching in Higher Ed.

Mike Perkins [00:03:24]:

Thanks for having us, Bonni. Great to be here.

Jasper Roe [00:03:27]:

Yeah. Thanks for inviting us.

Bonni Stachowiak [00:03:29]:

Mike, I wanna start with you because I have heard about so many people curious about this new and innovative way of thinking. I know it’s not new to you, but new to those of us who learn about it. Tell us a little bit about the origins of the AI Assessment Scale and particularly what led you to think there was a need for something like this.

Mike Perkins [00:03:52]:

Oh, thanks, Bonni. So the AIAS came about in early 2023, once ChatGPT had really entered our sphere of understanding as academics, and we saw students using AI in ways that we weren’t fully comfortable with in terms of considering how assessments were being designed and how students were actually tackling those assessments. So we were seeing some of what you might call misuse of AI tools, but it was hard to really define misuse at such an early stage when we’re really talking about these new technologies. My dean, Jason MacVaugh, one of the original authors of the AIAS, tasked me to say, Mike, we need to do something about this within our institution at British University Vietnam. We don’t wanna be those guys who ban a technology, and then it comes back later on to realize that they really shouldn’t have done this. And, you know, this came from really high up in our institution, with our vice chancellor and deputy vice chancellor remembering the days of schools banning the Internet, banning computers from certain courses, and recognizing we needed some sort of way to actually get this going. So I’d been doing a bit of research in this area, and I’d come across Leon Furze, and he had an original style of scale for using AI, which I took a look at and thought that we could do something with in HE. And I got in touch with him, got in touch with Jasper, and thought there’s maybe some opportunity to develop this in a way that could become a bit more inclusive for broader educational purposes.

Mike Perkins [00:05:37]:

So this is really where we started, recognizing that we wanted to have some way of supporting students and faculty alike to think about how they’re gonna introduce AI in an appropriate way in an assessment, where that doesn’t just mean saying, yes, go ahead and use AI however you want. It doesn’t mean just saying, no, you can’t use AI, but also not trying to break it down into small individual pieces and really break things down too much. We wanted to be flexible and have some opportunities for students and faculty to really have these open conversations about how AI might be suitably used given the individual circumstances that classes were in, the cultural context that the university or school is facing, and that’s really where we came to this idea of a five-point scale.

Bonni Stachowiak [00:06:35]:

Alright. This is the point where we can talk about a shift. So, Jasper, tell us more about rather than just thinking in binary ways, the ways in which the AI assessment scale helps us make a shift in our thinking, a shift in the conversations from one exclusively about academic misconduct to one about some other elements you thought were important.

Jasper Roe [00:07:01]:

Yeah. So I think, you know, there’s been this historical trend of sort of trying to catch out students who are breaking rules and then punish them. And, of course, we need to have guidelines for students, and we need to make sure that if someone does act in a way that is completely unethical, then we have ways to deal with that. Right? But one thing that I often encountered when I was an English teacher was students would lean on technologies as learning tools, and they would use them in ways that, you know, perhaps left something to be desired, but were really to assist them. And so I think that is one of the guiding principles where we need to start thinking about, well, do we want to restrict and punish students if they’re trying to make use of technologies that are available to them, or do we want to have a more balanced approach where we still maintain some trust? Of course, we need some secured assessments, and we need some guidelines for dealing with breaches of ethics. But at the same time, let’s move away from this idea that if you paraphrase slightly incorrectly, then it’s a punishable offense. You know, there’s been a lot of talk in the assessment world about not being a detective and not being a police officer in relation to assessment. And I think that’s one of the goals of the AIAS: to start having these conversations about, well, look.

Jasper Roe [00:08:34]:

Let’s think about what we want to do with this assessment, how we’re going to communicate what we’d like from our students, and, I guess, move away from this discourse of cheating, catching, punishing.

Bonni Stachowiak [00:08:47]:

Another big element: you talked about the one view that many have, exclusively never use it. Let’s go the other direction, because there certainly are some faculty who think always use it, everyone should be using it, and want to really stress the use of those tools. How might we address concerns about equity, particularly in contexts where students may not have equal access to generative AI and other tools?

Mike Perkins [00:09:18]:

Equity is a really challenging one to deal with in these situations, because you are talking about a technology which is evolving super quickly. And despite recent pushes, which have seen, I guess, an increase in equity in terms of model access, with higher levels of models being available at least to some extent to everybody, it is inevitable in the commercial world that AI companies find themselves in that you pay for a technology and you get more usage of the model, or you get that slightly more intelligent model. OpenAI is muddying the waters even further with the sort of phasing out of telling you which exact model you’re going to be using, and saying that, you know, plus users get a more intelligent version of the model and pro users get an even more intelligent version of the same model. So it’s really hard to deal with that equity there. I think one of the ways that we can start to deal with that is from an institutional perspective, in terms of saying, this is our university or this is our school platform for using AI tools, whether that’s something like University of Sydney’s Cogniti, which is a really nice feature, or whether you just say, we’re gonna use Gemini, we’re gonna use ChatGPT, and just being a bit more consistent with how you do that. And, also, therefore, you can start to ask students to say, well, we’d like you to share your chats from this model, and this is how we’d like you to learn.

Mike Perkins [00:10:56]:

This does then give educators the opportunity to bring in things like custom GPTs or custom prompts and get students to practice using refined models or things that have been slightly adjusted to help support with that equity development. I think that’s one way that we can start to support that without just saying it’s a free-for-all, and if you have a paid full version, then just go ahead and use it. I think having a little bit of restriction there is something that you can enforce in most assessments. And even without enforcing it, it can just, again, open up conversations about equity issues within AI overall, and that can start to get students really thinking about the broader societal issues that AI might bring to the table.

Jasper Roe [00:11:45]:

One thing we’re really keen on as well is when we talk about integrating AI into education, we also need to be teaching students about these concerns, about the equity concerns relating to AI, about perhaps the environmental concerns or the economic concerns. So it needs to be more than just allowing use. We need to have a real program for AI literacy, and that’s something that we talk about as well. Mhmm.

Bonni Stachowiak [00:12:11]:

So we’ve looked at two extremes: those who say never use it, those who say always use it under all circumstances. I’m excited that we could shift our conversation now to a more nuanced one, because that is what the AI Assessment Scale does. So before we get into that nuance, why don’t one of you just describe what the scale is? What are the different components of it? And then we can kinda dig in and get some further guidance from you on how we might make use of it as educators.

Mike Perkins [00:12:45]:

Okay. Sure. So the scale is a five-point system for designing assessments. That is the most fundamental way of describing it. It is not a system to trap students. It is not a system for academic integrity, although it can be used to support academic integrity. What it is is a way for an educator to decide, based upon the specific learning outcomes of what they’re teaching, and to have a conversation with students to say, this is the level of AI usage that I expect you to be using in this assessment. Now as you mentioned, that might be no AI.

Mike Perkins [00:13:30]:

So at our level one, we have this no AI usage, and this must be under controlled circumstances. There have been a lot of educators, big names in this field, Phil Dawson, Danny Liu, who really emphasize this fact that you can’t say don’t use AI and then not actually enforce it. And we’re fully in agreement with this. But then once you go outside of allowing AI, we do think that there’s some flexibility here in terms of the assessment design principles. Now when you say to a student, here’s an assessment, go home and do this, but don’t use AI, what do you think is gonna happen? You know, of course, the student’s gonna use whatever tools they have available to get the best marks possible, and we’re never gonna be able to stop that. Even before AI, if a student didn’t want to engage fully in the assessment process, they would copy from their friend.

Mike Perkins [00:14:25]:

They would look up answers from the Internet or from Chegg, and they would go through whichever hoops they wanted to go through. But what we can do as educators is to decide how we can create an assessment from the ground up which supports the use of AI at different levels. So after this no AI level, we can say, well, we can actually develop an assessment that focuses on the planning process. So how can a student have a co-creation process of ideas with an AI tool, then take those as a separate part, and then have another assessment to take those ideas into something different? Then at level three, we might have some co-writing with AI. When we originally decided on the AIAS, we had said, you know what? We can use AI for editing. And then we realized very quickly that just saying AI for editing doesn’t really apply to how AI is actually used in real life. Certainly not for me. When I have engaged in a long-form piece of writing, by the end of it, it’s a struggle for me to say exactly which words I’ve written and which words have come from an AI tool, where we’ve got this editing process. Yeah.

Mike Perkins [00:15:44]:

Sure, my words have been tidied up, but some of those words have been created by an AI large language model based upon my ideas. So this idea of AI collaboration is then a little bit more accepting of that. Look, you’ve gotta be aware of the limitations of AI tools. You’ve gotta be able to understand what the AI is doing, why the AI is doing that, and be able to modify this to create something that has your voice. When we go beyond that point, we’re then really starting to talk about a full AI integration. So at level four, we say you either can use AI tools to whichever degree you would like, or you might have specific requirements for using AI tools from your teacher.

Mike Perkins [00:16:34]:

So they might say, we want you to use this specific AI tool to do this specific thing. Or they might just say, here’s the task, and you can choose how you do it. Now in reality, when we’re looking at assessments that are designed at level four of the AIAS, this full AI level, we’re talking about things that, if the student didn’t use AI to do them, they would really start to struggle. So this is where we are starting to encourage and push students towards using AI, whereas at the previous levels, you could not use it and it wouldn’t really harm you. So at level four, we are expecting more from students, and this is why we do need to design assessments from that perspective. We’re not just saying, here’s an assessment that I’ve already designed previously, now go ahead and do it using AI, because the student can complete that in under an hour, and, really, where’s the learning? So we are asking for more from the students.

Mike Perkins [00:17:30]:

And then the final level of the scale is what we designed as AI exploration. We previously used to stop the scale at full AI, but we saw the rapid developments in the technology, and we did wanna have this as a bit of a future-proofing model. So at this final level of the scale, we say this is where you’re using AI to solve new problems, to create new insights from existing datasets, or even design a solution to a problem that we haven’t even thought of right now. And it is all about an application of generative AI as part of the core learning outcomes. So here we are integrating AI fully into the assessment. It’s not just that you can use AI; the whole thing is actually about AI in some way.

Mike Perkins [00:18:17]:

So that’s it. A five-point scale for educators to redesign their assessments and support these conversations with the students.

Bonni Stachowiak [00:18:25]:

So helpful. Thank you so much, Mike. Jasper, I want you to help us think about some of the nuance in between these things. And one of the ways I’ve seen people start to shift was going from prohibiting any sort of AI to critiquing and reflecting on AI-generated content. Could you share any examples that come to your mind, in any discipline of your choice, where that gets used well, or perhaps even not well: the idea of critiquing and reflecting on AI-generated content?

Jasper Roe [00:19:01]:

Yeah. So I think a good example is that I used to teach this introduction to cultural anthropology course, and this has sort of changed as the technologies progress now. But one of the things that you’d find with early image generation models is that there’d be quite a high degree of cultural bias and racial bias, and that’s something that’s been documented quite widely. So one thing that I really, you know, enjoyed doing was asking students to generate images using these early models and then sort of try and analyze the biased outputs that would come out of that. And, of course, because we say that, you know, AI is a black box, we can’t know exactly how these outputs came about. But we do know that, based on the training data, there seems to be this magnification of existing biases in the Internet data that’s used for these models. So that was one example of, I would say, a sort of assessment where I ask students to generate these images, then I ask them to reflect on these images and use that as a way of fostering a critical approach to AI. And the learning point there is not just about understanding concepts like cultural relativism, but it’s really about seeing where the potential for these tools to impact things like equity and diversity, and to have negative societal implications, can come from.

Jasper Roe [00:20:33]:

So I think that shows that the AIAS is not a silver bullet. It can’t solve all academic integrity and security problems, but it’s quite flexible, and it can be used in terms of creating content that can even be subject matter for an assessment. Or it can be using the AI tool to come to some sort of outcome or produce something that is also part of the assessment and learning outcomes. I think that’s the best example that I can give of something that I’ve personally used it for. Mike, I don’t know if there’s any other examples that you’ve come across in the wild or from colleagues you’ve talked to.

Mike Perkins [00:21:13]:

I think, just to highlight the flexibility of the scale and how this has really evolved away from our original perceptions of it, we’ve seen maybe a dozen different variations on this in terms of using it for English for academic purposes and English as a foreign language learning, which we have done some work on, but also using it with lower-level students. Amelia King’s got some great examples of this, where she’s integrated it in her school at lower levels, for juniors who don’t necessarily understand the terminology. So I think, in terms of those sorts of cultural differences, this is a real benefit: we say this isn’t something that you’ve got to use in the exact way that we have said. If you want to use a mashup of this, or you want to have something that is inspired by the AIAS, go for it. This is why we specifically had our Creative Commons license to encourage people to redesign, to redevelop. And from that, we’ve now got, I think, 25 or 26 translations of this, which really allows people to bring in their own cultural practices into the AIAS without necessarily giving a direct word-for-word translation.

Bonni Stachowiak [00:22:32]:

You talked about cultural relativism, Jasper, and I wanna take us to a similar and dissimilar example, and that is the Overton window. I’ve actually been thinking about whether I wanna share this story publicly, but I don’t think that I’m doing a disservice, because I’ve never publicly disclosed who cuts my hair. I’ve never written that person’s name or the name of the company where I go and get my hair cut. So I’m just gonna tell it. I’m sitting there getting my hair cut a couple of weeks ago. And I don’t often have this feeling anymore of, like, I can’t believe you just said that about how you’re using AI, but I had this just visceral reaction to it. And I didn’t know if I’m eventually just gonna entirely change my mind about this situation or if there is something here. So I’m gonna test it out on both of you.

Bonni Stachowiak [00:23:23]:

I will not be asking you to virtually cut my hair, though. So we’re good there. So the woman who cuts my hair has kids that are a similar age as ours, but then also has a little second grader. And she was talking about how her little second grader was working on a story for school and had come up with an ending that the mom just didn’t feel was a very good ending to a story. So they put the story into ChatGPT, and ChatGPT gave the second grader a better, and I quote, a better ending than she had come up with for her own story. And I mean, I was trying to control the expression on my face, because, speaking of police, I’m not there to police someone else’s use. And I don’t know. I’ve really been struggling with that example: why that bothered me so much, and what do we lose when we introduce a tool that maybe doesn’t fix the problem we think it’s fixing and then tell little second graders that they’re not good enough to come up with creative enough and unique enough stories. So I’m curious, either one of you.

Bonni Stachowiak [00:24:32]:

If you have any thoughts on my… is this an Overton window? Am I gonna look back a year from now and think, what the heck was I thinking, and should I never have told this story? So, Jasper, you’re gonna start on this.

Jasper Roe [00:24:44]:

Yeah. I mean, it’s a really great point that you make, and I think I totally understand where you’re coming from. I can’t think of that many examples, but I’ve definitely encountered times where I thought, wow, I don’t like that, or something about it makes me feel uncomfortable. Right? And that is a big question, because right now, we’re still grappling with so many unknowns. Like, there is some research. There was one paper, I can’t remember the authors now, that used an experimental model, and they showed that students who were using ChatGPT tended to rely on ChatGPT more than students who didn’t. That was a controlled experimental study. And you think, well, what are the long-term effects of that? So there’s this one author, Lisa Messeri, and, actually, she wrote it with someone else, M. J. Crockett, at Yale University. And they’ve been writing about how AI might lead to these scientific monocultures where our stories, our ideas just get recycled and recycled. And, yeah, what does it say about limiting human creativity and, you know, our own incredible biological computer that we have sitting on top of our shoulders? So I think these are really valid questions, but I don’t really think that there are clear answers right now.

Bonni Stachowiak [00:26:07]:

Mhmm. Mike, you’ve got some thoughts.

Mike Perkins [00:26:08]:

So that was a super political answer from Jasper, and he’s not gonna get himself attacked on LinkedIn here. But I’m gonna go for it. I’m gonna say, yeah, you know, the mom putting the second grader’s story into ChatGPT and asking for a better ending, that could be done in a way that destroys the student’s want or desire to ever go into doing more creative writing, because they think, oh my god, well, if this machine can do a better job than I can, why am I ever doing this? That’s certainly one possibility. But let’s look at this from another perspective. If this is done in a thoughtful way, maybe we can say, look, we’re at, like, a level two on the AIAS.

Mike Perkins [00:26:54]:

We’re talking about planning and thinking about alternative endings for a story. You can use that as a learning tool to help develop what is gonna capture the attention of people in a different way. Now maybe when we’re talking about a second grader, that’s gonna be a really tough one to do in a sensitive way, but it does give us an opportunity to develop this critical AI literacy at an early age and say, this is what the probabilistic perceptions of the whole of literature say about your story, but is that right? What do you think is a better story, and where can we go from there? So I don’t think that the mom was necessarily wrong in this situation, but it’s gonna have to be done in a really sensitive way, especially when you’re talking about that younger age of students.

Bonni Stachowiak [00:27:47]:

One element that I’m hearing, well, actually, two elements: what is uniquely human, and, coupled with that, building confidence in what makes us uniquely human. Those of us who have developed some fluencies using artificial intelligence, generally, I see that associated with a confidence in the ability to critique its output. We can look at it and go, that is not good, and it’s harder for some more novice learners to be able to do that. What’s been your experience with maybe not being as reluctant to have students use it, but still cultivating their ability, whether it’s saying what is right or wrong, quite literally, factually speaking, or just growing some confidence in what makes us human and creative, to be able to do things that AI could not presently do?

Mike Perkins [00:28:47]:

Okay. So I think for me, this is an area where it’s hard to step outside, because I have been quite involved in and aware of the technology since just before ChatGPT was coming out. So I feel like I’ve always been quite critical and quite aware of where the current limitations stand on AI. So I know when I can trust it and when I can’t. And this is also a benefit of looking at things from a perspective where you have existing knowledge in an area. So I haven’t really found myself asking AI to do something and not knowing whether or not it’s right or wrong. For me, getting students to think about this has been a challenge specifically. And one of the things that I’ve been asking students to do to deal with this, at least in the early days of ChatGPT, is to say, okay.

Mike Perkins [00:29:47]:

So if you’re gonna be asking an AI tool, and we are talking here ChatGPT 3.5, you know, back in the days of November, December 2022, find another source to back up what it’s saying. Does this actually match up? And students would generally be able to find some sources, but at least it’s getting them to think about what the situation is from another perspective. So they’re starting to at least recognize that not everything generated by AI is real, including completely fabricated reference lists. And we say, okay. So you say you’ve written this yourself. Well, can you show me this? Can you find me this paper here in this meeting? They go and try to find this paper, and they’re like, oh, I can’t find it. It must be on my laptop somewhere else at home.

Mike Perkins [00:30:45]:

And it’s like, this is the need to be critical. And in those early days we were having those conversations where you could actually have a teachable moment. I think the technology has evolved a little bit beyond that point, where for most people and for most questions, you’re gonna have a higher degree of confidence. Now that’s not to say that AI is gonna be right about everything, but I think we can have a little bit more trust in some areas now, at least beyond the dissent that comes from people who are very much AI refuseniks, saying, AI answered a question incorrectly three years ago. Therefore, all AI is wrong, and I’m never gonna use it in my class, and students are never going to use it either.

Bonni Stachowiak [00:31:33]:

Mhmm. Jasper.

Jasper Roe [00:31:34]:

Yeah. I think it’s such a good point, because one of the things that is happening that we can’t deny is that the rate of hallucinations is going down. The capabilities are getting better and better. Yes, of course, there are still things like, you know, there was the whole strawberry thing, and there are all these famous cases of hallucination. Like, what was it, Gemini, where it said that you needed to put rocks on a pizza or you needed to eat a couple of rocks a day? Yeah. Those things are happening, but they are becoming less frequent.

Jasper Roe [00:32:08]:

And so that does make us reevaluate some of the critical views that we had early on. And when we talk about being critical, I think sometimes there’s this tendency to confuse criticality with just pessimism. But criticality should be about not just saying what’s wrong and what the risks are, but also taking a really clear look at what the potential benefits could be, especially for students. And for things like assessment, it’d be wrong to deny that there are a lot of benefits. So, like, recently, I’ve been writing this piece on how to use GenAI in educational research. And when I first began that piece, I was struggling to see many use cases where it would be a really good value proposition to engage with GenAI. But by the time I’d finished writing it, because it took me, you know, maybe close to a year, I had to really reevaluate and change what I was saying. So I guess that’s just one area that I wanted to highlight. You know? Criticality and pessimism aren’t the same thing, especially when it comes to GenAI models.

Bonni Stachowiak [00:33:17]:

This is the time in the show where we each get to share our recommendations, and I just added one based on what the two of you have been talking about. It feels so, so important for us to be looking at this now. I’ve been following the work of Mike Caulfield for almost ten years now, and, speaking of people who are more pessimistic or critical, he’s not someone who is just lightly positive and optimistic. So when he says this changes everything, I’m gonna sit down and say, what changes everything? He’s been experimenting with some custom GPTs. He’s getting a lot of success using Claude, but the tool I’m about to share also works if you copy and paste it into ChatGPT. He’s created a SIFT toolbox. And if you aren’t familiar with Mike Caulfield’s work, SIFT is a fact-checking set of skills and fluencies that work really well. I keep joking, posting on social media every time I teach it to students.

Bonni Stachowiak [00:34:26]:

I always go, this is the magic that, you know, he speaks of. It’s really quite brilliant to see. And I did share a video, because he asked people to experiment with the toolbox, and he invited us to share. He really wants to get the word out there about it, so I did share something on my YouTube channel. I’ll share a link in the show notes for anyone who wants to check that out. I would suggest that you do, and suggest that we keep tabs on his continued modifications to this tool. And now I’m thinking back to an episode I did with Maria Anderson a while back. I will also post a link to that in the show notes.

Bonni Stachowiak [00:35:05]:

But, essentially, she’s been telling us for quite some time now that rather than saying every single one of our learning outcomes, every single thing we ever teach, has to be mastered, we really have to think about, should we just know that this thing exists? A different scale, very similar in structure, by the way, to the one being shared by Mike and Jasper today, all the way to mastery. So not assuming every learning outcome has to be fully mastered, but that some things we should just know exist. So now I’m wrestling with that with the SIFT fluencies that I’ve been teaching and also using practically on a daily basis myself for years now. How much of the SIFT toolbox should entirely replace those skills, outsourcing that to an AI entirely, or how much will I want to attempt? Although, back to Mike’s earliest point in our conversation today, I don’t know that it makes a lot of sense to try to prohibit something when I don’t really have control over whether or not they use these tools. So, anyway, I’m really wrestling with that, but have been very, very impressed with the SIFT toolbox, encouraged by what it might be able to do for our society were more people able to use these technologies for the kinds of common good that Mike Caulfield’s work so brilliantly exemplifies. I have one more recommendation. This is the one I was planning on recommending in the first place, and now I kinda hang my head in shame a little bit.

Bonni Stachowiak [00:36:42]:

But I want everyone just to bear with me here. It is called the Bend app, and the Bend app is a stretching and flexibility app. I have not been successful at regular habits of stretching, even though I’ve talked about it on the show. I’ve recommended playlists in the past and things like that. But as of Mike and Jasper and I sitting here, I am happy to report I have stretched for five days straight, and it really seems to have scratched an itch, or to have been able to help me just create a really simple habit of stretching. And I would think that, you know, if I were you listening, I would think, well, this has nothing to do with AI, but a lot of times the recommendations don’t. Well, I am here to tell you there’s actually an AI tool within the Bend app where I can go in and say I have trouble with my neck and with my shoulders, and it’ll actually use your free-flowing narrative to generate essentially a playlist of different kinds of stretches that are exactly what you’re looking for and exactly the time frame that you would like it to take. So those are my two recommendations: the SIFT toolbox from Mike Caulfield and the Bend app, which is available, by the way, on both iOS as well as on Google Play.

Bonni Stachowiak [00:37:55]:

So, Jasper, I’m gonna pass it over to you for whatever you’d like to recommend.

Jasper Roe [00:37:59]:

Okay. Awesome. Well, it’s a little bit of a mixed recommendation. It’s just a couple of things I’ve been experimenting with. One, I’m really, really late to the game, but it’s Strava. So Strava is the social networking app for exercise, and listeners won’t be able to see this, but Mike is laughing because he’s been using it for ages. But I did notice when I looked at it that it had AI-powered insights, which, you know, are supposed to give you some information on your exercise performance. And then that links into another thing, which is Garmin.

Jasper Roe [00:38:32]:

I don’t know if any of the listeners use a Garmin device, but they’ve recently caught a bit of flak online, because they’ve suddenly released a subscription plan for their app. And one of the things that they’re using to justify charging a fee is AI insights into your training. And it’s been posted online a lot that the AI insights so far, you know, leave quite a lot to be desired. There are things like, you did a lot of exercise this week. That is good for you. So, those are a couple of things that I’ve been playing with this week.

Bonni Stachowiak [00:39:05]:

Oh, that’s great. You know, we could set up a business where we don’t even have to charge a subscription, and we can tell people that exercise is good for them. But you are having good success with Garmin, though. You like the gadgets, just not necessarily paying the subscription for them. Yes.

Jasper Roe [00:39:18]:

Yeah. And I love Garmin, so I hope that doesn’t come off as me, you know, speaking badly about them. Yeah. But I’m not sure about the AI insights right now.

Bonni Stachowiak [00:39:26]:

Yeah. I see people on social media sometimes referencing Strava, but I’ve never used it. And also, speaking of the trite things, you know, everyone seems to be baking AI into their applications whether we’re asking for it or not. But it is fun when some company does something novel with it. You know? So, like, with my example of the Bend app, yeah, that’s fun. Alright, Mike. What do you got for us to close out the episode?

Mike Perkins [00:39:49]:

Okay. So I think, like all of us, I occasionally suffer from AI fatigue. I get to the point where I just can’t open up LinkedIn anymore, and I don’t wanna read about AI, even though this is a big part of my job, my research, my life. And sometimes I just can’t deal. So for who I often rely on to give me a bit of an update when I’ve had a few days off, I wanna give a shout-out to Nick McIntosh, a learning futurist from RMIT Vietnam. He’s based in Hanoi, and he does a weekly newsletter talking about the latest updates in the world of AI. So I know that even if I’ve just had a few days off and don’t wanna engage deeply with everything, I can still kinda have a bit of a catch-up on what’s been going on. For example, it was a recent post of his where I learned about the newest Runway models that have been coming out, and some of the latest image generation models. Despite the news and, you know, everybody posting action figures of themselves or studio portraits, there are actually even newer and more novel models that are continuing to come out from existing providers.

Mike Perkins [00:41:07]:

So that’s where I’ve been going when I haven’t been on LinkedIn. And when I haven’t been on LinkedIn, what I have been on is Netflix. And so a nice easy recommendation from me is The Residence on Netflix. Really huge, huge cast, about a murder in the White House. It’s fantastic. It features Uzo Aduba, of Orange Is the New Black fame, along with a whole host of other characters, including Kylie Minogue, because why not have Kylie Minogue as Kylie Minogue in your Netflix show? So, yeah, that’s a strong recommendation from me.

Bonni Stachowiak [00:41:46]:

I just closed out so many of the season finales of shows that I love. And, Mike, you’re just about to fill a special hole that just got left in my life after this past weekend. Thank you so much for that. I wanted to just share, you were talking about, and we didn’t talk too much about this, the advances in image generation. I’ll put links to this in the show notes for anyone interested, but someone was posting on Bluesky two Breakfast Club images. One was AI generated. One was not.

Bonni Stachowiak [00:42:18]:

And I was able to suss it out. I’m old enough to have watched The Breakfast Club when it came out, as a teenager or whatever age I was at that time. And so it wasn’t that hard of a test, but that same person then later in the day posted an image purportedly from Schitt’s Creek, a comedy. And had I not seen this woman’s earlier comparison, you know, can you tell which one of these is from the real Breakfast Club, and had I only seen that, I would have so, so, so fallen for it, thinking, like, oh my gosh, they’re gonna have a baby together. Like, I would have so fallen for that in terms of AI generation. So, definitely, one constant we can say is change, and, especially, it’s been pretty remarkable to see these image generation tools, which I haven’t really had tons of time to play with yet, but I’m interested in doing some exploration on that in the coming weeks. Well, thank you both so much for your time today and being in conversation, and thanks to Leon Furze for the introduction to both of you as well.

Bonni Stachowiak [00:43:26]:

I’m so inspired by your work, and, Mike, I’m glad for the times when you’re able to be on LinkedIn, because I’ve already learned a lot from you in the process, and I’m just looking forward to following both of your work from here. Thank you so much for your time today.

Mike Perkins [00:43:41]:

Thank you, Bonni. Real pleasure to be on the show.

Jasper Roe [00:43:44]:

Yeah. Thank you, Bonni. It’s been great.

Bonni Stachowiak [00:43:48]:

It was so great getting to have this conversation with Mike Perkins and Jasper Roe. Thanks to each of you for listening. Today’s episode was produced by me, Bonni Stachowiak. It was edited by the ever-talented Andrew Kroger. Podcast production support was provided by the amazing Sierra Priest. If you’ve been listening for a while and haven’t signed up for the weekly update that comes from Teaching in Higher Ed, you’re gonna wanna do so, because you’ll receive the most recent episode’s show notes, and this one’s gonna be a good one. And you’ll also receive some other resources that don’t show up in the show notes. Thanks for listening, and we’ll see you next time on Teaching in Higher Ed.

Teaching in Higher Ed transcripts are created using a combination of an automated transcription service and human beings. This text likely will not represent the precise, word-for-word conversation that was had. The accuracy of the transcripts will vary. The authoritative record of the Teaching in Higher Ed podcasts is contained in the audio file.

