
Teaching in Higher Ed

EPISODE 584

A Different Way to Think About AI and Assessment

with Danny Liu

August 21, 2025

https://media.blubrry.com/teaching_in_higher_ed_faculty/content.blubrry.com/teaching_in_higher_ed_faculty/TIHE584.mp3


Danny Liu shares a different way to think about AI and assessment on episode 584 of the Teaching in Higher Ed podcast.

Quotes from the episode

Our students are presented with this massive array of things they could choose from. They may not know the right things to choose or the best things to choose. And our role as educators is to kind of guide them in trying to find the most healthy options from the menu to choose from.
-Danny Liu

People want to give their students clarity. They want to give their students a bit of guidance on how to approach AI, what is going to be helpful for them for learning and not helpful for learning.
-Danny Liu

There is no way to really know if the rules that you're putting in place are going to be followed by students, and it doesn't mean that we need to detect them or surveil them more when they're doing their assignments.
-Danny Liu

We need to accept the reality that students could be using AI in ways that we don't want them to be using AI if they're not in front of us.
-Danny Liu

Not everyone lies. Most of our students want to do the right thing. They want to learn, but they have the temptation of AI there that is saying, I can do this work for you. Just click, just chat with me.
-Danny Liu

Our role as teachers is not to be cops, it's to teach and therefore to be in a position where we can trust you and help you make the right choice.
-Danny Liu

Resources

  • Menus, not traffic lights: A different way to think about AI and assessments, by Danny Liu
  • Talk is cheap: why structural assessment changes are needed for a time of GenAI, by Thomas Corbin, Phillip Dawson, and Danny Liu
  • What to do about assessments if we can’t out-design or out-run AI? by Danny Liu and Adam Bridgeman
  • Course: Welcome to AI for Educators from the University of Sydney
  • Whitepaper: Generative AI in Higher Education: Current Practices and Ways Forward, by Danny Y.T. Liu and Simon Bates
  • Five myths about interactive oral assessments and how to get started, by Eszter Kalman, Benjamin Miller and Danny Liu
  • Interactive Oral Assessment in practice, by Leanne Stevenson, Benjamin Miller and Clara Sitbon
  • ‘Tell me what you learned’: oral assessments and assurance of learning in the age of generative AI, by Meraiah Foley, Ju Li Ng and Vanessa Loh
  • Interactive Oral Assessments: A New but Old Approach to Assessment Design from the University of South Australia
  • Interactive oral assessments from the University of Melbourne
  • Long live RSS Feeds
  • New AI RSS Feed
  • New AI RSS Page
  • Broken: How Our Social Systems are Failing Us and How We Can Fix Them by Paul LeBlanc


ON THIS EPISODE

Danny Liu

Professor of Educational Technologies

Danny is a molecular biologist by training, programmer by night, researcher and faculty developer by day, and educator at heart. A multiple international and national teaching award winner, he is Professor of Educational Technologies at the University of Sydney where he co-chairs the University's AI in Education working group and leads the Cogniti.ai initiative that puts educators in the driver's seat of AI.

Bonni Stachowiak

Bonni Stachowiak is dean of teaching and learning and professor of business and management at Vanguard University. She hosts Teaching in Higher Ed, a weekly podcast on the art and science of teaching with over five million downloads. Bonni holds a doctorate in Organizational Leadership and speaks widely on teaching, curiosity, digital pedagogy, and leadership. She often joins her husband, Dave, on his Coaching for Leaders podcast.

RECOMMENDATIONS

Long live RSS Feeds

RECOMMENDED BY: Bonni Stachowiak

New AI RSS Page

RECOMMENDED BY: Bonni Stachowiak

Broken by Paul LeBlanc

RECOMMENDED BY: Danny Liu


Related Episodes

  • EPISODE 528: Assessment Reform for the Age of Artificial Intelligence, with Jason Lodge

  • EPISODE 554: Classroom Assessment Techniques, with Todd Zakrajsek

  • EPISODE 489: Teaching with Artificial Intelligence, with Lindsay Doukopoulos

  • EPISODE 209: Antiracist Writing Assessment Ecologies, with Asao Inoue

TRANSCRIPT

Bonni Stachowiak [00:00:00]:

Today, on episode number 584 of the Teaching in Higher Ed podcast, Why Traffic Lights Don’t Work for AI Assessment and What To Do Instead, with Danny Liu. Produced by Innovate Learning, Maximizing Human Potential.

Bonni Stachowiak [00:00:22]:

Welcome to this episode of Teaching in Higher Ed. Hi, I’m Bonni Stachowiak, and this is the space where we explore the art and science of being more effective at facilitating learning. We also share ways to improve our productivity approaches so we can have more peace in our lives and be even more present for our students. I’m thrilled to be welcoming to the show Danny Liu, who was introduced to me by Jason Lodge. Danny Liu is a professor of educational technologies at the University of Sydney, and he joins me today to talk about what it means to trust students in the age of artificial intelligence. You are going to hear about metaphors that we are going to extend the heck out of, but have a lot of fun doing it, including the metaphor of a Disney cruise. So get ready to travel with us today and hear about how we might rethink traffic lights as a means for assessing and setting some parameters around artificial intelligence, and instead use a menu framework, which Danny’s going to share with us today. We’re going to talk about what AI can and can’t do for learning and the real danger of corroded trust between students and professors, and how much we need to think about creating secure and open assessments that prepare students for real-world challenges.

Bonni Stachowiak [00:01:56]:

Whether you’re grappling with ChatGPT in your own teaching or looking to foster student agency in new ways, this episode offers a refreshing, practical, and, as you’ll hear, deeply human perspective. Danny Liu, welcome to Teaching in Higher Ed.

Danny Liu [00:02:14]:

Thanks for having me.

Bonni Stachowiak [00:02:15]:

I was so glad toward the end of my conversation with Jason Lodge when he mentioned your work, and you and I have been working on scheduling ever since. And I’m just so grateful for you setting aside the time to talk to us today. And not only that, but we get to join you, at least in our mind’s eye, on a Disney cruise, just to kick off our episode. So take us on a Disney cruise, Danny, and tell us what Disney cruises may have to tell us about menus.

Danny Liu [00:02:46]:

Oh, gosh, I wasn’t expecting this one. So we, we went with the family on a Disney cruise just from Sydney to New Caledonia and back. I’m a terrible person at sea. I get horribly seasick, and so most of the time was just spent looking outside into the distance, wishing I could get off the boat. But the really interesting thing about Disney cruises is that every night you get sent to a different restaurant. And it’s a bit bewildering because there’s dances and there’s lights and there’s sounds, and there’s a really massive menu of options of food that you could choose from. And on a cruise, all you do is really sleep and eat and do water slides. And halfway through being sick on the cruise, I was thinking about AI obviously, as I do, and also thinking about the restaurants that we visit every night.

Danny Liu [00:03:32]:

And what they have on a Disney cruise is to help you navigate this really complex menu. Every night, they have a serving team which follows you around. The serving team is the same one who gets to know you as a family. You have a head server who introduces you to the menu and says, I remember that you don’t like this or you’d really like this. So I recommend these things from the menu. That level of care and familiarity with us and the menu really got me thinking about what we’re doing as educators when it comes to AI in that our students are presented with this massive array of things they could choose from. They may not know the right things to choose or the best things to choose. And our role as educators is to kind of guide them, as our serving team did, in trying to find the most healthy options from the menu to choose from.

Danny Liu [00:04:21]:

And so, you know, every night, we would go to a different restaurant and we would be introduced to the menu. And the serving team, who got to know us personally as a family over the course of the cruise, their role was to connect our needs with the menu that was available. And so, as educators, our role really is to help students to grasp the diversity of ways they could consume AI, you know, and to help them to understand that there are healthier choices and less healthy choices. And as educators, our role is not to say to them, you can’t eat X from the menu because, you know, there’s nothing stopping them from eating X from the menu. But rather, our role is to help them, saying, for this particular assessment or this activity, these are the healthy choices from the menu which will help you learn, and we will show you what they’re like. And for this activity, these are the less healthy choices.

Danny Liu [00:05:11]:

If you choose those choices, it’s up to you. You could have five desserts today, but if you do, and we did one day on the Disney cruise, if you do choose these five desserts, you’re going to get sick, you’re not going to learn, and therefore, it’s going to detract from your experience and your learning at university. So I think that’s where the menu analogy came from for us.

Bonni Stachowiak [00:05:29]:

So you tried on a menu analogy, and I’m going to try to extend your menu analogy, but I’m going to also warn us both in advance that I could totally fail. Yours is cruise related. I want to take you on a cruise that never happened. So our family, the kids, grandparents, they just love bringing the whole family together. So we actually had a Disney cruise planned. The grandparents were going to take all the grandkids and everyone else on a cruise. I’m going to give you the date of the planned cruise and you’re already going to know why it didn’t happen: June of 2020. And when we say dates like that, people from all over the world that work in higher education can instantly flash back and think about the things that were happening in June 2020.

Bonni Stachowiak [00:06:13]:

And the reason I thought I might use my own menu or Disney cruise gone awry is that a lot of what was happening then was well-intentioned attempts to assess learning. I truly believe the vast majority of people do not mean to be horrendous in their choices for assessment, but we’re not always able to see where our ethics and values get tangled up in our attempts to assess. And so we saw headlines at the time for things like surveillance of female students. Two vivid things come to mind for me, maybe three (never list three things because I’ll never remember the third one): one where a proctoring company asked a young attractive woman to take her webcam and view her lap. And another where a young woman was throwing up and was ill and asked to be able to leave the room so she could throw up not in front of the camera. And they said no, if you throw up, you will fail this exam. So she threw up on camera.

Bonni Stachowiak [00:07:22]:

And a third one, this one maybe is, I mean it’s not an urban legend, but I certainly didn’t see video of this. But professors who required students to have their cameras on and weren’t able to have the, I want to say good sense, but that sounds so value-laden. But as a woman who has had two children, I would know not to require people to have their webcams on because, yeah, people might want to breastfeed who have young babies. But whatever professor this was just didn’t have the good sense to know, gee, maybe 100% of the time we shouldn’t require people to have the webcam on. That there would be lots of reasons why, for me, that may work in my unique life context, but why, for other people, this wouldn’t. So help me extend this, my little cruise gone wrong. I never got to experience the wonderful guide that you describe.

Bonni Stachowiak [00:08:17]:

What is coming to mind for you about our well intentioned efforts to make assessment scales work when they actually just aren’t going to work because the level of surveillance is impractical and even if it was practical, might bump up against some of our values that we hold so dearly. So I realized that was a lengthy thing and you probably have lots you want to share and I can’t wait for us to dig into this meal together on seeing if my analogy here worked in any way, shape or form.

Danny Liu [00:08:50]:

Let’s stretch this metaphor as far as it can go.

Bonni Stachowiak [00:08:52]:

Yes. Yes.

Danny Liu [00:08:53]:

No, it’s good. I think you’re right that faculty across the world have been trying to grapple with what this AI thing, which we didn’t ask for, is going to mean for the value of what we’re doing in higher education. And many people are well intentioned in how they want to approach this. I think the ideas of traffic lights and assessment scales are extremely well intentioned. People want to give their students clarity. They want to give their students a bit of guidance on how to approach AI, what is going to be helpful for them for learning and not helpful for learning. But my take on the assessment scales and traffic lights is that they offer clarity. We think they offer clarity.

Danny Liu [00:09:33]:

But there are some issues around reality. And one of the realities is that students do not need to follow what you say. If it’s a take-home assessment, if it’s something where they’re not there with us in person, then there’s really no stopping them from choosing other items, or rather not following your yellow light or level three guidance. And so with the research coming out around AI, one of the main things that has been coming out is that students are massively confused. They’re massively confused because different courses are telling them different things; the traffic lights or scales are saying particular things. But you know, there’s no way for us to actually tell if they’re actually following this or not. And as Tricia, who’s been on your podcast recently, likes to say in her book, you know, if there’s no way to really know if something is happening or not, then it gives other students more rationalization to say, well, why should I? Why should I bother listening to this thing? And so I think the trouble with these traffic lights and scales is that there is no way to really know if the rules that you’re putting in place are going to be followed by students, and it doesn’t mean that we need to detect them or surveil them more when they’re doing their assignments. Like, we don’t want to go around and do process tracking and those kinds of solutions because that’s just tech on tech.

Danny Liu [00:10:57]:

But instead what we need to do is, I think, accept the reality that students could be using AI in ways that we don’t want them to be using AI if they’re not in front of us. And so the approach that we’re taking at the University of Sydney is really to accept that reality and to say yes, for these assessments where the students are not in front of you, it’s going to be open. They’re going to do things that we don’t necessarily want them to do. And our role as educators is not to police the use of AI in these open assessments, but rather to be able to design open assessments to be motivating, to be encouraging of student thought, to be guiding them through a menu of options that we know they can choose from freely, where we don’t have any say in what they do or don’t do. But we do have a say in helping them to understand what’s healthy and not healthy. And so having these open assessments, but also having other assessments where they are in front of us, we can talk with them, we can sit with them, we can understand if they’ve actually truly learned in our courses. So that’s kind of the approach that we’re taking with these traffic lights and scales, saying it doesn’t really fit with reality and therefore we need something else.

Bonni Stachowiak [00:12:03]:

There are two key things that come up in so many of these conversations. One is that I want us to be so careful that we can really understand what, what you said, you said to accept the reality, but that doesn’t mean accepting a reality. And this is not at all what you implied, by the way. But it’s just so easy for our brains to go there. That doesn’t mean everybody lies, everybody cheats, everybody’s trying to get one up on you. So we really have to caution ourselves against that. That is not what you’re saying. The first time I ever used, to my knowledge, big caveat there.

Bonni Stachowiak [00:12:42]:

To my knowledge, the first time I ever used today’s chatbot-based large language models, it happened accidentally inside of the note taking app that I use. I did not seek out to use it. And now, between Microsoft and, as of our having this conversation today, Instructure, the makers of Canvas, who have just announced all of their AI stuff, and it’s too new for me to have sifted through it all as of this conversation. But so it is kind of ubiquitous. So to just blanket say all students are liars, cheaters, and there’s no way we could ever catch them, so we give up on any sense of ethics or integrity, that is not at all what you said. I just need to reference, in case this is the only time someone’s ever listened to this podcast, that that’s really important.

Bonni Stachowiak [00:13:31]:

But then the second thing that I’m hearing you say is, when I think about food and I think about nourishment, and Danny, of course we’re just going so wild on this analogy, but I’m thinking if I could have been with you on that Disney cruise, I don’t eat red meat except for, by the way, pepperoni. Does that make any logical sense? No, it doesn’t. And then our friend’s son is allergic to peanuts. And so, I mean, just think about all of the myriad variables. So the first half of, I guess, this part of our conversation is like, please don’t assume everyone’s a liar and a cheater. And this second part is, please understand the vast variables that take place in any learning context, and especially as we start to wrestle through artificial intelligence. So I’ll allow you to say anything you’d like to say around, yes, maybe let’s not assume everyone’s a liar and a cheater. And second, anything you’d like to say around the variability in one’s diet, one’s nourishment, one’s learning.

Danny Liu [00:14:44]:

So I think one thing that has been quickly corroded when AI came in is trust. Trust between teachers and students, primarily because we were using AI without telling students and they were using AI without telling us. And basically we just hush-hushed it. And a couple of recent articles out from New York Magazine and the New York Times, I’m sure your listeners have seen those, really just spoke to how professors are using ChatGPT and students don’t like it, and students are using ChatGPT and professors don’t like it. And so the trust element, I think, is really important. And the reason why I bring that up is because what we’re trying to say, I guess, through a menu metaphor and through an open assessment where we just accept the reality, is that we’re going to assume that students will be using AI in ways that we don’t necessarily want them to be using AI, even though we want to tell them, with our best intentions, not to use AI in certain ways when they’re not in front of us. We just don’t know. We’re saying to students in those open assessments that we will be your guide, as your teachers.

Danny Liu [00:15:46]:

We will trust you to make the good decisions. We will also have these assessments which will be secure, and they’ll be secure in a way which is in person. We’ll have a conversation with you, wherever it is, and then you can, through those assessments, trust us as educators, as an institution, that the piece of paper you walk away with at the end of the day matters and means something. So we’re going to have these open assessments, where we’re going to trust you to make the right choices, and we’re going to have these secure assessments, where you’re going to trust us that you’re going to be accurately measured on what you’ve learned. And so that’s kind of why we go for this kind of open, secure approach to assessment at Sydney, and also why we don’t, I guess, prefer the metaphor of scales. Because with scales, it implies that there is a decision that we can make for students about what they do with AI. And the reality is there is no decision we can make for students. In a way, it’s taking away the agency and trust with students because it’s saying to them that, well, for this assessment, it’s going to be an orange light.

Danny Liu [00:16:53]:

You can only use AI for X, Y or Z. And that kind of takes away the agency in a way. So I think not everyone lies. I want to say that, I guess most of our students want to do the right thing. They want to learn, but they have the temptation of AI there that is saying, I can do this work for you. Just click, just chat with me. And what we need to do as educators, we need to shift that conversation towards, yes, we know that this AI is there to do the work for you, but our role as teachers is not to be cops, it’s to teach and therefore to be in a position where we can trust you and help you make the right choice. So that’s the first one in terms of the variability in our learners.

Danny Liu [00:17:33]:

I think it’s really important to remember that people learn in all different ways, and they’re all different kinds of people with different needs. And so I was having a chat with a couple of my writing colleagues recently. I’m a biologist, and so chatting with people from the other end of the spectrum is often refreshing. And I was asking them about how some of their colleagues are putting in place things called anti-AI pledges or getting their students to shun AI in class. And I was asking them what they thought this meant, and they said, you know, if we make those decisions for our students, we’re taking away their agency. We’re taking away the agency to choose what works for them in their learning, what their needs are in their learning. Perhaps they have particular learning needs that we don’t even see, which AI can help them through. And so I think it’s really important to think about that.

Danny Liu [00:18:27]:

Are we building classrooms within the context of AI that are giving increasing agency to our students and increasing that trust relationship? Or are we putting in place policies, guidelines, syllabus statements which are actually eroding that trust? Not only eroding that trust in a way which is saying, we’re going to be putting in these rules that we know you’re not necessarily going to follow and we’re just going to turn a blind eye to that, but also eroding trust in other ways. So I think it’s really important to recognize that AI can help in many different ways and our role is to guide, not to be cops.

Bonni Stachowiak [00:19:04]:

Let’s move ourselves then into some practical examples of what this might look like. So describe for us, since most of us probably have at least some, in my case, a very minimal knowledge of biology. We’re really working at a low bar here, Danny, I gotta warn you. A really simple assignment that may have occurred in an Introduction to Biology course for undergraduates before November 2022 and the release of OpenAI’s chat-based large language model. So give me a contrast. What might that assessment, by the way, those typical assessments back then may not have been very effectively, you know, thought through for integrity reasons, et cetera, but just do a contrast for us. What did this kind of assignment used to look like, and what might it look like today at the University of Sydney, using a menu approach to assessments?

Danny Liu [00:20:02]:

Okay, so a very typical undergraduate biology thing is a lab report, or maybe a mini paper as we would call it. And so pre-ChatGPT, we would have students do an experiment in class and then take that away and do some analysis, look up the literature, and then basically write up a report like a mini paper. Those taught students valuable skills like data analysis, literature searching, putting together different ideas, and putting together a coherent train of thought. Most of the time coherent. And so in today’s, I guess, world with AI, students could use AI for every single one of those things. They could feed their data into an AI that analyzes this and then get that same AI to write up a results section, a discussion section, to look up real literature, non-hallucinated references, and put it all together in the space of five minutes. And it’s very easy for people to do. And so I guess our role as teachers in that context would be to say to students, okay, what are the key learning outcomes we want you to gain? Is it to learn data analysis? Is it to learn long-form writing? Is it to scour the literature? And when we can determine what those key learning outcomes or our values are as teachers, then we can help students to say, okay, well, the key learning outcome here is integration of literature into your results. Okay, it’s not data analysis, at least not yet.

Danny Liu [00:21:27]:

So we’re going to show you some AIs that could help you speed up the data analysis. You don’t have to spend all your time doing that because that’s not the key learning outcome. But one of the key learning outcomes is the integration of these ideas from literature into your writing. And so we’re going to encourage you not to use AI there, knowing full well that you could choose the unhealthy choices and we probably wouldn’t know. Right. And then, so our role then is to be teachers and motivate the students towards these healthy choices of AI and guide them and show them what they look like. Again, just telling them, knowing full well, we just don’t, we won’t, we won’t actually know. But then saying to them, because we think it’s really important for you to have achieved this learning outcome of integrating literature with your data or all those kinds of things.

Danny Liu [00:22:13]:

We’re going to have these secure assessments later on where we will actually be able to measure this. Maybe it’s a conversation with you in class, five minutes where we will say, okay, well you see how you did this thing here in your, in your paper. What happens if the judiciary said this? What would that mean? And they would have to think about this and show us their way of thinking and that they have actually learned how to do that digital integration. So it’s that kind of balance between the openness of assessment and also the security of some other assessments.

Bonni Stachowiak [00:22:40]:

And you mentioned Tricia, and I will certainly link back to that episode as well as to their book. I know one of the things that they’re doing here in the States through the UC system, she mentioned this on that episode, is setting up secure testing centers. And I believe she said collaborating with, it was either community colleges or state colleges, it kind of doesn’t matter for my example here, I’m sure it matters for them. But this idea of, because I was even thinking when you were talking about biology, so many of our STEM students, I don’t know what percentage, it totally varies depending on the type of institution, but a lot of people might be wanting to go to medical school. And so a lot of the responses I get from people are like, well, we’ve got to get them ready to take the MCAT. And so, yes, you will be in some cases today, in this day and age, taking secure assessments, high stakes, for something as crucial as getting into the medical school you want to go to, et cetera.

Bonni Stachowiak [00:23:40]:

So we’re not saying naively that you’re never going to need to prove, you know, something in a very high-stakes, very, very secure way. What I’m hearing you say, Danny, and I also heard echoed in that other episode, is that realistically, while we’re getting you ready to do that, there might be lots of possible menu choices for what that could look like to get ready for that particular context. And just having the respect that that might look different for different people. Am I kind of, am I doing that well?

Danny Liu [00:24:13]:

Yeah. So I think that there’s two sides to that. There’s what you do in preparation for an assessment and what you do for the assessment itself. And I think you’re absolutely right that in preparation for assessment, everyone learns in different ways. And our role again, as teachers, is to help them think about how they can be doing the work of learning, because that’s really important. On the actual assessment, or the secure assessment, side of things, I want to say that assessment security is not just exams. There are many different ways to do secure assessments, where we will have much higher confidence that we actually know whether the student has learned or not, that are not exam based. And so you could have regular conversations with your students in class, for example, or you could have interactive orals, which are taking off in Australia and around the world, where you kind of have a scenario-based activity where you talk the student through a scenario and they demonstrate and synthesize and extend their knowledge for you.

Danny Liu [00:25:05]:

And that’s a really powerful way to kind of know if students have actually learned or not. So there are lots of ways which are not exam based. And I think one of my colleagues used the idea of a varied diet of assessment, just to carry the food analogy a bit further. It’s that we don’t just want exams. Sometimes there will be a necessity for some sit-down exams, invigilated exams, but often we need to be more creative in how we think about how we reliably and validly assess that learning has happened. And many of those do not have to involve exams, as uncomfortable as that reality is.

Bonni Stachowiak [00:25:39]:

And you used a somewhat STEM-centric example of exams, and I’ve been doing the same. So how about for our writing teachers, for those professors who focus on writing? I think we can extend this to say our assessments cannot only be written work. I mean, just like you’re saying, please don’t have a diet where it’s only exams in something more like a STEM field, can we apply that same logic to something like teaching someone how to write?

Danny Liu [00:26:08]:

So what would you say would be the main learning outcomes of a, of a first year writing course?

Bonni Stachowiak [00:26:15]:

Yeah, I mean, so much of this is like it really depends on the professor. Right. So. And on the assignment with that professor. So sometimes they’re wanting to have people. We’ve had lots of conversations on this class around the blank slate. So sometimes writing professors are saying, I want to teach them to wrestle through that blank slate and not instantly go to the AI and tell it to generate all their ideas. But I need a writer to be able to have a blank slate and to know how to move through that.

Bonni Stachowiak [00:26:43]:

So maybe we just use that as an example rather than me listing off a bunch because I have like 20 in my mind right now. So. But creatively wrestling through the tension involved in a blank slate on really any kind of writing.

Danny Liu [00:26:56]:

Okay. And you’re absolutely right. You know, with a blank slate these days, the temptation which many people do take is to just go to AI and say, hey, there’s this idea, help me get started. And that is a key issue; I think as humans we shouldn’t just carve off creativity into AI all the time. So in those situations, what I would say is, don’t get rid of your essay, keep it. But just know that your essay can no longer be a reliable measurement of your student cohort’s learning. It can definitely be used to motivate and encourage long-form thinking and long-form writing and getting creative from a blank slate. But it can’t be used as a reliable measurement.

Danny Liu [00:27:33]:

The question is: okay, don’t get rid of the essay, but you also need to have secure moments of assessment where we can measure that learning outcome of, can you get started from nothing? And so maybe one of the ways that you could do that is through an interactive oral. Maybe the scenario is the examiner is, I don’t know, a marketing client perhaps, and the student is a marketer. And what needs to happen is, during this 15-minute conversation with the student, the marketing client will say, you know, I’ve got this great idea for my product. How do I get started? Or what do you think about this? And so that scenario allows the students to start from a blank slate, think on the spot, and really demonstrate their ability to achieve, or that they have achieved, that learning outcome of being able to go from nothing to some idea. And then that conversation is not an oral exam. It’s not kind of these fixed questions. It’s a very live, exploratory conversation between the teacher and the student, where the student can say something and the teacher can kind of, you know, encourage them to follow particular lines of thought to see whether the student has actually achieved the learning outcome of being able to create from a blank slate. So there are different ways to achieve this.

Bonni Stachowiak [00:28:43]:

I think this is so helpful, Danny. I keep thinking about, you’re really inspiring me so much right now, and I’m sure so many people who are listening. I’m finding myself reflecting that when I get into wanting to control the learner, that’s when I get myself into trouble. So something that I’ve been attempting to do a lot more of (it wasn’t necessarily entirely aligned with the emergence of chat-based AI tools, but, you know, certainly they’ve exacerbated the desire) would be to have students more extemporaneously reflect on something. So I might use, inside of our learning management system, a video tool that we have that allows them to easily just record themselves and/or record their screen.

Bonni Stachowiak [00:29:31]:

And so in my dream world, Danny, in my dream world, it would be, you know, we’re getting you curious about these ideas. I’m thinking, I teach most years a class called Personal Leadership and Productivity. So part of this is theoretical. You know, what are the habits, and what will it mean to live a quote-unquote productive life? What does that even mean? What is the good life in my, you know, unique context? And then some of this is really practical: setting up a digital calendar, setting up a digital task list, and then having projects where you collaborate with other people, you know, all the things. But I find myself, like, I want to say, please don’t read a script to me, because I know that you just went to ChatGPT and copied my prompt. And I just wanted you to think about, like, what are the different things in your life that you may find this in. But it does come back to, I can assume they’re a liar and a cheater.

Bonni Stachowiak [00:30:24]:

That’s not going to be really helpful. Or I could assume there’s a lot of baggage there, that whatever it is they have to say or write, in the case of a writing class, won’t be good enough. And you mentioned trust earlier, Danny. And I think about how the trust is just so broken. And it wasn’t us necessarily that broke it. I’m sure all of us should really admit to ourselves that probably we’ve contributed to this, but this has been going on for these people that we get the privilege of teaching for such a long time, and to ask them to sort of unlearn that. I was going to mention one thing and would love to hear if you have examples of this too. One way I try to build up trust, Danny, is to show them that I’m doing this in my life.

Bonni Stachowiak [00:31:07]:

So if I’m going to ask them to go through the friction of using a digital calendar, I will show them: here’s mine, this exact same thing I’m asking you to do. Here’s my exact same task management. And I’m asking you to think through using a verb in the beginning. Is it write, call, schedule, ask? You know, that thing with the colon, like, what is it I’m specifically trying to do here? Here’s my task list.

Bonni Stachowiak [00:31:31]:

Here are some examples from my life. So I think we can really help build up the trust when we show ourselves choosing not to use AI, using it for this but not for this, and sort of talk about our own diet and our own nourishment, and maybe that helps restore some of the trust. I don’t know if you have any examples that you’ve come across of how us being transparent about our own use or non-use can contribute in some ways to trying to foster some more trust here.

Danny Liu [00:32:03]:

I think the role of a teacher is to model the use of these things and to say to students, you know, there was this time where I used AI, maybe to write a grant application where I just didn’t care about it, and I used it because it was the last minute and I just needed to get it done. And if you share those examples with your students, then they might be able to see, well, my professor is also a human being. They also are tempted by these technologies to take these shortcuts, and how do we then think through this? And then as a professor, you might say things like, well, I know you guys are under pressure. I know you guys have this tool out there, all these tools out there that can do the work for you, but if you use those tools in these particular ways, are you actually going to be learning? Are you actually going to be achieving what you came to university or college for? And, you know, you can share examples of how, when you used AI, what did it take away? What did you lose, but also what did you gain? I think that the balance is really important. And thinking about if we do use AI for certain things, then we.

Danny Liu [00:33:09]:

We can accelerate our work, but also, what do we start to atrophy, I guess, in our own lives? And so talking through this, modeling it for students, I think is really important, because students are also undergoing this paradigm shift, just like we are as teachers. In a way, it affects them even more because they’re worried about their graduate prospects. They’re worried about what it means for their, you know, their assessments later on. And so it’s really important for us to help them move through this paradigm shift as well, by modeling as a teacher and moving away from that kind of thinking that we need to police them.

Bonni Stachowiak [00:33:45]:

I think that is also where care comes in, with the feedback that comes back to students. Anything you want to say around building trust as it comes to feedback on these kinds of assessments that you’re describing?

Danny Liu [00:33:57]:

Yeah, the feedback element is important too. It raised an interesting thing about AI marking for me, actually. It triggers this thing in my head, and it triggers it because a lot of people are grappling with the question of, should we use AI to give feedback? Should we use AI to mark assignments? My personal view is we shouldn’t if those assignments count for marks, because if we do that, then what signal is that sending to our students? It’s signaling that a human being isn’t going to read their work. And that’s the dangerous slippery slope, because it means that the feedback that they get, the mark that they get after they submit this work, a computer is going to generate that for them, not their professor. And again, that erodes that trust relationship that you’ve built or you want to build with your students. And so I think it’s really important to think about how we can promote trust in the classroom and avoid things which will erode that trust.

Bonni Stachowiak [00:34:55]:

I’ve got a dear colleague. I’m thinking back to that timeframe I mentioned to you, 2020. And I just remember us sitting with her, and she was just, I mean, everyone was just so devastated, right? And she was having such a hard time, and we were trying to tell her, you don’t have to write volumes and volumes and volumes of feedback to your students, and just wanting her to be so much more gentle with herself. And the reason I’m thinking back on that is that I tend to also be someone, Danny, who really wants, I want every student to know that I care about reading every word, everything. And that sometimes those of us that are on that end of providing feedback, we might not have an imaginative capacity for the kinds of feedback you didn’t talk about. And that is the ones that don’t count for grades or don’t count for marks.

Bonni Stachowiak [00:35:51]:

And I’ve been really intrigued by people who are sort of challenging my, like, wanting there to be a human involved in every interaction. Because I’m thinking back to students that have shown up in, whether it’s someone’s research or more anecdotal stories from students, where they actually like to have a chance to talk to the AI first to relieve some of their fears before they talk to the professor. And as much as I, I know most of us, not all of us, most of us don’t want to be scary to our students, like we wish that we weren’t. But to some degree, any kind of power dynamic like that, no matter how hard we try to get rid of it, it’s still going to be there. So I’m really getting kind of curious to think about the feedback element when it doesn’t count for marks, and maybe how I might want to expand my own imagination around that. And specifically here, I’ll just mention real quick that I’m thinking about the Sway AI tool or SWAY Guide AI Tool that we’ve talked about in a previous episode. I’ll put the link in the show notes, where it guides students through having difficult conversations.

Bonni Stachowiak [00:37:04]:

And then it’ll interrupt a student to be like, oh, you just made a personal attack on the person that you’re partnered up with. Did you maybe want to revisit that and use some evidence instead of a personal attack? And then what the professor sees is that they did this assessment, they had this chat, difficult conversation with the person they were paired up with, and that they took a test for understanding. Did they listen well enough to understand what the person was saying with the other opposing perspective? But the professor isn’t in there reading every single word about every single interaction. So they’ve essentially allowed that separation from the professor as an evaluator to build up a little bit of trust. Although, I mean, to the extent to which students actually realize, no, the professor isn’t going to see every word that you say to your colleague, I don’t know if that trust gets fostered 100% of the time anyway.

Danny Liu [00:37:58]:

But yeah, I love the idea of care and relationship. And I think it’s really important, especially in this world where students are trying to figure out what AI means for them and what these machines mean for the future. I think that kind of rediscovery of humanity in the classroom is really, really important. And that’s around that care and relationship that you talk about. And your feedback example speaks to me and connects with that idea of control and agency we’ve been talking about before as well, that many things that we do may unintentionally cause the loss of control and agency for students. And so they feel the loss of agency, loss of control. And so the example you gave of the AI and students interacting with AI in order to learn and get feedback, I think is really powerful. Maybe one way to think about AI for feedback, instead of us feeding a student’s work through AI and then getting some feedback that we can then send back to them, which again, like we said before, doesn’t send the right message, is to think about how we can use, or how we can get students to use, AI tools in a thoughtful way, where maybe they can then have the agency to send their own work to the AI, perhaps before turning it in, in order to get that kind of feedback that they otherwise may not be able to or may not be comfortable getting from their professor.

Danny Liu [00:39:16]:

And so kind of turning it around and saying, how do we give more control, more agency to students and use AI in the process as well.

Bonni Stachowiak [00:39:24]:

I could keep going on this conversation for a very long time, but I better discipline ourselves and let it get us to the recommendations segment. I have a recommendation I want to make, but first I just want to say, long live RSS feeds. For those listening who aren’t familiar with what an RSS feed is, it stands for Really Simple Syndication. And it’s just an easy way to get fed all the goodness from lots of different places into either a single feed or many feeds. I have created a new AI RSS feed off of my bookmarking service. So what this looks like is every single time I save anything having to do with AI, including, by the way, the articles that Danny and I are talking about on today’s episode, if you have an RSS reader, you could just easily subscribe so that every single time I put something out there, you would receive that. And even if you don’t have an RSS reader, I’ll also include a link just to a page.

Bonni Stachowiak [00:40:27]:

If RSS is a little bit too heavy of a lift, you could just have the page of all the articles that I’m saving. But I just want to encourage you, if you don’t know about RSS, it’s not too late to get in on this party. It’s a good party to be a part of, and it really allows you to have, speaking of control and autonomy, a lot more control and autonomy over what comes to you, versus succumbing to the algorithm’s decision about what should come to you. And if you’re interested in artificial intelligence, I save a lot of things, and you could have them coming in, either again to that page or to your own RSS feeds, your news reader or your RSS aggregator. So, Danny, I’m going to pass it over to you for whatever you would like to recommend.
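For anyone curious what subscribing to a feed involves under the hood, here is a minimal sketch (not from the episode, and using a placeholder feed URL rather than the show's actual feed address) of how an RSS feed might be polled with Python's feedparser library.

```python
# Minimal sketch of reading an RSS feed (assumes: pip install feedparser).
# The URL below is a placeholder, not a real feed address.
import feedparser

FEED_URL = "https://example.com/ai-bookmarks/rss"

def latest_entries(url: str, limit: int = 5):
    """Fetch the feed and return (title, link) pairs for its newest items."""
    feed = feedparser.parse(url)  # downloads and parses the feed XML
    return [(entry.title, entry.link) for entry in feed.entries[:limit]]

if __name__ == "__main__":
    for title, link in latest_entries(FEED_URL):
        print(f"{title}\n  {link}")
```

A reader app or aggregator essentially does this on a schedule across many feeds and presents the results, which is what gives you control over what comes to you rather than leaving it to an algorithm.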

Danny Liu [00:41:12]:

Yeah, and I didn’t even know RSS was still a thing. I remember having an app on my phone a long time ago for RSS reading, and they died. So that’s good to know. So my recommendation is a book called Broken by Paul LeBlanc. He was the former president of Southern New Hampshire University. I remember reading this on a plane once, and it’s one of the few books that have brought tears to my eyes. My colleagues tell me it’s because the environment in the airplane is very dry and that kind of makes you tear up. But I think, genuinely, I was really taken by this book because in it he talks about the different kinds of social systems that are meant to build people up but end up not doing so.

Danny Liu [00:41:44]:

So the prison system, the health system, and also the education system, which he’s very familiar with. And he talks a lot about the ideas of care, about relationship, about stories from people, about how we need to scale these systems, but in a way which can still emphasize and convey care for people. So I really love the book, and it’s a really great read for teachers, administrators, everyone in between, to think about how the things that we put in place in the classroom, in our curricula, in our strategies really can make or break care for our students, the people that we serve.

Bonni Stachowiak [00:42:23]:

Oh, wow. I’m familiar with his work, but I have never heard of this book. That’s so great. I’m looking forward to looking more into that, because that’s what I need, Danny, more books I want to read. But the nice part is, when you have a good system for capturing them, there’s so much goodness to be found. How fun. Thank you so much for introducing us to this new, for some, way of thinking about how to confuse students less about AI, how to build and rebuild that trust, and how to bring our integrity more into the ways we’re trying to wrestle with all of this and facilitate learning for students.

Danny Liu [00:43:01]:

Thank you for the conversation. It’s been great.

Bonni Stachowiak [00:43:05]:

Thanks once again to Danny Liu for joining me on today’s episode and for all these thought-provoking ideas that are going to be swimming around in my head for some time to come. Today’s episode was produced by me, Bonni Stachowiak. It was edited by the ever-talented Andrew Kroeger. Podcast production support was provided by the amazing Sierra Priest. It’s time for you, if you haven’t done it yet, to sign up for the Teaching in Higher Ed Weekly Update. You can head on over to teachinginhighered.com/subscribe and start receiving the show notes link so you don’t have to remember to go get them, as well as some other resources that are only found in those updates. Thank you so much for listening, and I’ll see you next time on Teaching in Higher Ed.

Teaching in Higher Ed transcripts are created using a combination of an automated transcription service and human beings. This text likely will not represent the precise, word-for-word conversation that was had. The accuracy of the transcripts will vary. The authoritative record of the Teaching in Higher Ed podcasts is contained in the audio file.


CC BY-NC-SA 4.0 Teaching in Higher Ed