Bonni Stachowiak [00:00:00]: Today on episode number 545 of the Teaching in Higher Ed podcast, cultivating critical AI literacies with Maha Bali. Podcast Production Credit [00:00:13]: Produced by Innovate Learning, Maximizing Human Potential. Bonni Stachowiak [00:00:22]: Welcome to this episode of Teaching in Higher Ed. I'm Bonni Stachowiak, and this is the space where we explore the art and science of being more effective at facilitating learning. We also share ways to improve our productivity approaches, so we can have more peace in our lives and be even more present for our students. I'm so pleased today to be welcoming to the show, or I should say welcoming back to the show, Maha Bali. She's a professor of practice at the Center For Learning and Teaching at the American University in Cairo. She has a PhD in education from the University of Sheffield in the United Kingdom. She's the cofounder of Virtually Connecting, a grassroots movement that challenges academic gatekeeping at conferences and cofacilitator of Equity Unbound, an equity focused, open, connected, intercultural learning curriculum, which is also branched into academic community activities, continuity with care, socially just academia, a collaboration with 1HE, Community Building Resources, and Myfest, an innovative 3 month professional learning journey. Maha writes and speaks frequently about social justice, critical pedagogy, and open and online education. Bonni Stachowiak [00:01:54]: Maha Bali, welcome back to Teaching in Higher Ed. Maha Bali [00:01:57]: So good to be back with you, Bonni. Bonni Stachowiak [00:01:59]: I feel like I should have had an echo in my voice. Welcome back, back, back because you've been there so many times, but it's been too long. And I have been so looking forward to this conversation about artificial intelligence. Of course, you and I have had many of them with Equity Unbound and so many of the wonderful events I was able to be a part of this last year and have talked about many of those on the podcast. But I wanna start with early on, I can I can vividly remember you telling me and urging me? You have to get in there. You need to start experimenting and experience go through go through many of the phases. Talk talk to me about the early phases that you noticed yourself and others that you were urging to start getting in there, what you experienced, and then what you've been experiencing in, I don't know, the last few months. Maha Bali [00:02:49]: Mhmm. That's so cool that you remember that conversation. We were co authoring a paper and we were meeting a lot and AI was so new. And, yeah, I did tell you that. I remember. Actually, I was just explaining something to someone today that I think is useful to say on the podcast for the first time in public. So different people develop these different models of critical AI literacy, and some of them seem focused on, like, Bloom's taxonomy. And some people will use Doug Bell's digital literacies and then use that with AI, which is kinda cool. Maha Bali [00:03:17]: And I don't explain explicitly how mine is based on critical pedagogy and prairie, but there is something really important related to what we were just talking about now. So there's the element, obviously, of looking at power and inequity and the ethical dimensions and all of that, which is very clearly part of what critical pedagogy is. And to start with that and look at bias and all of the cultural bias and and all of that first. But then the elements of you still have to try it yourself is actually based on the book Pedagogy of Liberation by Irish Shore and Paulo Freire. And what they talk about in that book is you need to teach people to critique the dominant culture. They're not talking about AI, obviously. They're talking about you need to teach people to critique the dominant culture, but you still need to teach them the dominant culture in order for them to survive economically. And they learn their own culture, which is if you're talking about oppressed people is not the dominant culture, and they learn to critique their own culture as well. Maha Bali [00:04:16]: But what what regular education does is that it teaches you to follow the dominant culture and not critique it. Right? And so I think with AI, to be able to critique it well, you need to not only look at it theoretically, you need to actually try it and recognize its limitation, and know what it is that people are talking about when they tell you, oh, AI is gonna transform education. Oh, whatever. And at the same time, students have access to it. So you need to know what students have access to whether or not you're liking or disliking it even when you have ethical issues with it because of the climate impact, for example, and the exploitation of human labor, which are huge, huge issues that in a normal situation could make me not use a product. But you and I as educational developers, we don't have the luxury of refusing to learn new things because we need to support an entire institution. And whether or not we know AI, our faculty are gonna need our help with that, and our students are gonna have access to it. So what I think is, it's a little bit overwhelming, and I was giving a keynote recently to a group of educational developer. Maha Bali [00:05:17]: And thinking about how faculty are complaining about burnout, and I totally totally understand, like, my frequent co author and collaborator, Danielle Aghasago, had recently given a presentation about the relationship between burnout and neoliberalism. And, definitely, our faculty are burnt out worldwide. Right? Now look at that level of burnout they're going through, and then look at the educational developers who have to support people through COVID, and then they have to support them through hybrid teaching, and then they have to support them through AI. And the level of our burnout as educational developers is probably a 1000000 times more than what educators are going through. And with the AI thing, it's it's a bit crazy because they want you to figure out how to detect AI and that's not possible. And we keep telling them that's not possible, and all the tools are not doing a good job, and they it's not comforting them. And I understand it's not comforting them, but I'm not the one who made AI. Right? And and the other element of it is there's so many different tools, and the issue isn't the tools that are similar to ChachiPT. Maha Bali [00:06:17]: I mean, it's not like the Geminis and the Clords and so on. It's the tool that build on such a p t to do new things, like the ones you can do for use for research that do use real references versus statistics that make them up. But the point is, when there's one that you don't know and someone comes to you, you gotta figure it out pretty quickly and you gotta try it out. I was recently asked to get a workshop about using, AI for literature review. I don't like using AI for literature review. So I had to learn these 6 or 7 tools. That Anna Mills had thankfully presented, for Equity and Bound during my set. So I knew a few of them through her. Maha Bali [00:06:51]: And then Google Notebook l m came out in the middle. And then I was, again, talking to Daniella Casaga and Nicola Pilots, and they were telling me about the ones that in their institution are being used. And there's so many, and they do different things, and they are different each other in very nuanced ways. And every time I give a workshop on this, I tried it for my research. I haven't tried it for your research. You have to try it for your thing. My husband's a doctor. He was doing something, and I said you can use this tool. Maha Bali [00:07:17]: And he discovered things in the tool that I had I'd been using it for a while, and I'd never noticed. This tool called typeset that comes from Scifet. But it's useful for, like, you upload a PDF, and it gives you some reason you can ask questions about the article. But he discovered when he first used it for medical stuff, he's like, why is it talking to me like I'm a lay person? I'm a doctor. And then it turns out that it has three tones. It can talk to you like a lay person, and that's the default mode, but it also has an academic mode and a professional mode. So you can ask it to talk to you in a different way. Anyway, the point is, this is overwhelming, but I can't stop doing this because I need to keep doing it for both my students also because I teach digital literacies, but definitely for the faculty. Maha Bali [00:07:59]: And sometimes my students teach me new things about AI. This happens a lot. If you if they trust me after a few weeks and they trust me enough, and they know it's okay to talk about this, and it's okay to use it sometimes. And then because we try it and we take it to its limit and this this is the thing I wanna talk about a lot, which is the people who don't take it to its limit are very impressed with it. But if you keep pushing it to its limits, you will see it break down, and you will see that it's not as scary as people hype it up to be. And I think a lot of times, you think, oh, maybe the tech people know things that we don't know. I think they see the world differently than we see it. That's all. Maha Bali [00:08:39]: It's not like they know something we don't know. This is my current belief, and maybe I'll be wrong in 5 years' time or 1 month's time. But Bonni Stachowiak [00:08:46]: So you brought up so many things that resonate, and I, again, I wanna emphasize how wonderful it was to be a part of the Myfest experience this past year. And you invited Jon Ippolito to be there, and I was there each day that he joined. And when I've heard him in different contexts, I really appreciate he's very disciplined about always beginning with a slide or 2 about just emphasizing there are many reasons why you wouldn't want to use AI. And I and I think that's helpful, and he talks you know, one of the big things that comes up is around bias in in his reasons why people may not want to use it. You mentioned that you've been doing some experiments with implicit AI that I think will be helpful for us to hear about, and it has to do with what you were talking about where you can just tell people, oh, it's biased. And and there there's something vastly different between talking about something versus actually experiencing it, like you were talking about with your husband, where he's using it in his context. So tell us about some of these experiments and what you've been discovering. Maha Bali [00:09:55]: Yeah. So I I had recently made a decision that people whose culture is very highly represented in AI data are less likely to notice bias. I think it matters so much what your identity is so that you would notice that it's doing something that doesn't represent you. And so I think and the hallucinations are more likely. So there's 2 things that happen when your culture isn't similar to ChachiPT's data. You get more hallucinations about your culture because it doesn't know your culture very well, and you get more bias against your culture because it's reading the sources that are on the other side of that, conversation. So the the I'll talk about the hallucination one because it's very obvious about the bias in the dataset. And I would do this with faculty, and they would test it with material in their courses. Maha Bali [00:10:43]: So the ones who teach Arab and Islamic history kept seeing very funny things because it would mess up the time period in Egyptian Islamic history. It knows the ancient Egyptian history really well because the west is very invested in them. There's a lot of material about it online. But you ask it about more contemporary Egyptian history and Islamic history, and it'll put you a building from, like, 200 years early and attribute it to the person who lived in a completely different time. So that element is very high. You can also make up historical figures and it'll mess up the names of people or put, like even with pop culture, it'll put an actor in a different movie and things like that very often with Arab and Muslim people. So that happens. There's a very funny story I have with Gemini that I I asked them to do a table of contemporary Egyptian leaders and their achievements and give a photo. Maha Bali [00:11:31]: And one of the first contemporary Egyptian leaders called Muhammad Ali is Muhammad Ali Pasha, and it gave me the picture of Muhammad Ali Clay the boxer because it's more likely to find people looking for the boxer who is of American nationality than the you can see the Egyptian leader from, I don't know, 3, 400 years ago. And then I don't I don't know my history too well, but definitely a long time ago before Muhammad Ali the boxer. And then it's the implicit bias one that is really interesting to us. And I read some papers about this, and I did my own testing. And the one that I wanna share because it's very glaring was this one. If you ask Shachipit explicitly, is a person from this nationality or that nationality likely to be more more of a terrorist? They would say, no. No. No. Maha Bali [00:12:14]: We can't stereotype people. It's been hard coded not to answer this question. It'll say, we cannot stereotype that one nationality is more likely to be a terrorist than another. Okay. And then I ask the question differently, and I say, okay. Define terrorism and give me five examples. I tested FHGPT, Gemini, Claude, Copilot. I tested 5 different tools. Maha Bali [00:12:35]: The examples, majority Muslim, Islamist terrorist example. And so this is implicit bias because, okay, it won't tell me outright that it thinks that terrorism is connected to my religion. But when it thinks of examples in its training data, the majority of examples that's ever seen that have been labeled terrorism have that factor common factor between them. So, some one of them gave me 3. 1 of them gave me 4. I think one of them gave me 5. So it was that's that's the implicit bias, but you don't people don't tend to go in and intentionally do that. But a lot of my students start to see this both with the written AI, and I think the visual AI is a lot easier also to see. Maha Bali [00:13:16]: So if you ask it to create, an Egyptian classroom, it does these funny things where it puts, like, pink tuck on the wall. Like, nobody has that in an Egyptian classroom. So it's thinking ancient Egyptian is thinking classroom, but it doesn't know what a modern Egyptian classroom looks like. Or if you say Egyptian, students going to school, it'll put them, like, walking in the sand with camels to the pyramids, like, you know, Cairo is just the city. It looks like New York. You know? It doesn't actually look like that. I mean, there is a part that has pyramids and sand, but that's not where schools are. You know what I mean? So so it has that, and that's also very easy for for us to to notice and to prompt it and to see those kinds of biases. Bonni Stachowiak [00:13:56]: Your examples are are cracking me up because what else are you gonna do but laugh because it's it's just awful. But I'm thinking about what you said about that you just don't see it if you are coming from the nonhistorically marginalized population. And I was thinking about I in my business ethics class, we'll have students watch some videos from a philosophy professor at actually, he's a law professor, but teaches philosophy at at Harvard, and he teaches a class called justice. And so I had Maha Bali [00:14:30]: Michael Sandel? Bonni Stachowiak [00:14:31]: Yeah. Yeah. Maha Bali [00:14:32]: I love him. Bonni Stachowiak [00:14:33]: Oh, he's so good. Maha Bali [00:14:34]: In person? Bonni Stachowiak [00:14:35]: No. No. No. No. Sorry. Oh, good. Have we have we have them watch his videos. I wish I knew him. Maha Bali [00:14:39]: Oh, nice. Bonni Stachowiak [00:14:40]: That'd be incredible. And so I asked chat gpt if it would well, it's Dolly through through chat gpt, but I asked if you would make an image of us taking a road trip. I was thinking of Alan Levine. Alan Levine, for those listeners who may not have the deep cut of, of teaching in higher ed or otherwise know his work. Whenever he does stuff with his teaching, it's always sounds like you're going on an adventure. You're having an experience. He's so good at that. So he talked about going and visiting people's studios. Bonni Stachowiak [00:15:09]: A studio visit was how he phrased it. So I kinda and he has a picture of a bus on his material, so I was like, we're going on a road trip. And I asked it to make you know, I said, we're from Vanguard. We're going on a road trip to Harvard to visit Michael Sandel. And then it created an image. I for for this is not no shocker to anyone. I had to say, well, no. The professor is blonde and is in her fifties. Bonni Stachowiak [00:15:34]: I had had to, you know, no. It's not a man. You know? So tried to fix that, and then we'll do it again. And and so I'd asked the students. They were very impressed by the way. I don't think they'd be as impressed if they saw this image today, but the time they were, this was really generated a lot of excitement. How did you do that? And then this was just all that they could talk about was how on earth I was able to create it, but I asked because we're talking about bias. You know, what do you notice? So I had already fixed it so that the professor was a woman on me, all the thing. Bonni Stachowiak [00:16:00]: Well, it was so obvious to the young women in the class, they just instantly the students are all men. But the young men just like they're looking, I think a pixels off over here, you know. So so I mean, it's just so it's so true. And I'm and I'm sure very much for myself. Yeah. Maha Bali [00:16:16]: Oh, that's so interesting because I'm gonna tell you something else that's really interesting. I was with Anna Mills and Lance Ethan once and we were all testing visual AI and we were getting different results. They were using, I think, Valley from InsideChat, GPC paid versions, and I was using po.com. And poe.com takes my Google login, and I think it stores cookies or something because here's what happened. I was creating classroom images for a workshop that I was doing, and I said diverse. And it keeps interpreting diversity to have people who wear house cards, which is very weird. So that only happens to me. This doesn't happen to other people. Maha Bali [00:16:48]: It doesn't happen to Anna. It doesn't happen to Lance. I also tend to get when I ask for images of families, I tend to get, first of all, head families and men with beards for some reason even though I'm not really sure where that one's coming from. But that wasn't happening for Lance or Anna, which is odd. But the headscarf thing is funny because our associate director is American. He just came from the US, and he was trying to get images of Egyptian classrooms. He kept getting the stuff and everything. And then he said, no. Maha Bali [00:17:14]: No. No. I want just normal students, but I want them to look Egyptian. And then he says, and maybe some of them can be wearing headscarves. Do you know what he got? He got a man in a beard wearing a headscarf. Because apparently, him his thing doesn't understand that stuff. Mine does. And I think it's the difference between, like, pro and the regular Chachapiki, but it's these things are funny. Bonni Stachowiak [00:17:35]: Yeah. Another big thing that you've been doing so much writing and reflecting about is just this idea. What what literacies might we need to acquire in order to and and you talked about some of the burnout from from all of this. Talk some about the evolution of your and other thinkers and and reflectors when it comes to what should faculty in general know about artificial intelligence? How do we know if we first of all, we've never arrived. Right? But but what's going on in your thinking and some of your collaborators' thinking? Like, what do we need to be effective at? Maha Bali [00:18:11]: Yeah. Yeah. So so here here it is. So at first, when AI came out, we were like, test your assignments whether this is what my department was doing and what I was doing also in Equity Unbound in public. Just test your assignments on Chat CPC, find out if it can do it with a little bit of prompting, not necessarily the first time. And then if chat TPT undo your assignment, you need to change your assignment probably or do it pen and pencil in the classroom or do it orally or some other way. Right? And so we're always, like, either make it oral or written in the class so they can't have access to chat QPT. If that's really the learning outcome, they have to be able to do this without checking something, and that's what you do, or you change your assignment. Maha Bali [00:18:48]: Right? And you check what your learning outcomes are, and I was kind of like, I have this cake analogy. Are you are your students learning to bake? Are they learning to decorate the cake? Or are they, like, wedding planners so they can outsource the cake so they can focus on the other stuff? And just put that with AI and that then figure out if they really need to learn to bake. Like, it's a course on writing or it's a course on language. They need to learn the language. They shouldn't be using translation. That is existed forever, but not forever, but for a long time. But, anyway, so that that was where we were going for, like, a year and a half. And then, suddenly, I stepped back a second from this and remembered and I should've remembered this much earlier because one of the things we kept saying is make your redesign your assessments to be authentic, right, and experiential. Maha Bali [00:19:30]: So that even if they do use AI, AI wasn't in the experience, so they're still gonna have to do their own work and then use AI maybe a little bit, but they it wouldn't do their work. Right? But I realized that we're actually we need to step back further before the assessment the the outcomes and the assessments. So what is your teaching philosophy in the first place? Wait. What do we believe about human learning? How do we think about it as teachers? And we have very different values about this, and then you need to come at AI from those values. And you can still decide to use or refuse AI with the same set of values, but you just need to come at it that way. And that will help you think about, you know, how would AI fit into this. Right? So for example and this is where I'm at, like, as a critical pedagogue where we learn via dialogue and I'm not the one telling them. I can tell them everything I know about AI, but that's not the point. Maha Bali [00:20:22]: I can tell them everything I know about AI and tell them you can use AI here and you can't use AI there. But instead, in a more dialogic way and to to trust their own experiences, I give them a lot of permission to explore AI in different places and report back on whether they found it useful or not. And what turns out is that in a class of 20 students, different people find it useful in different things. So who am I to decide? Oh, you use a programmer or you use it for outlining. You use it for brainstorming. Different people benefited from it in different places, and different people were angry with it and very disappointed with it in different places. But in order for them to be able to do that, you need to give them opportunities to test it a little bit, to model it for them a little bit so that they can see a good way of using it, what you do when it doesn't get it right, and trying different tools so that those of them who are afraid of it because they're afraid of violating academic integrity and still get some experience of it even if they're not gonna spend as much time working on it. But for me, the first thing I ever do with students is teach them quick draw. Maha Bali [00:21:27]: Have you ever played quick draw? Mhmm. It's a Google game? Bonni Stachowiak [00:21:29]: I don't think so. No. Maha Bali [00:21:30]: Oh, you have to play quick draw. So it's not useful on a podcast to talk to show you, but it's a game that's been it's been around for a while. I actually discovered through, the DS 106 assignment bank, which Allan Levin and Jim Broom contributes to. So Quick Draw is a free game you can get on the browser anywhere on better to do on the phone because it asks you to doodle something, and Google is basically learning how humans doodle things. And it uses AI of of the pattern recognition type. So not exactly the way Chat 50 works, but similar and I can it learns from what what we do. Right? And then the AI pretends it doesn't know what you're drawing and then it tries to guess. Oh. Maha Bali [00:22:07]: So it'll ask you to draw. Yeah. It'll ask you to draw a tree or a car, and then it'll ask you to draw something like a nail. And a nail could be like a fingernail. It could be the nail that you hit with a hammer. It knows both because apparently half the people do this one, half people do the other one. So I asked them to play that, and then I explained how it works. And I asked them when they get their answers, they can see how other people drew it. Maha Bali [00:22:28]: Because what happens sometimes is it gets it correctly before you finish drawing the thing. And it's probably because most people start to draw the thing the same way. It doesn't know what it's supposed to look like in the end. It just knows that when people draw, I don't know, a circle with a line like that, it's probably gonna be as an apple. So it just you know, that kind of thing. So, but then we have a conversation about the nail thing. We have conversation about a bath because it could be the bath's the animal. It could be the bath, the baseball, or cricket bath. Maha Bali [00:22:55]: And then we have another conversation about, did anybody get hospital? And if it's hospital, how did you let it know that this building was a hospital? It's a cross. But in the Islamic world, it's a Bonni Stachowiak [00:23:08]: crescent. A crescent Maha Bali [00:23:09]: with a cross? Yeah. We put a crescent, not a cross because the Red Cross and Red Crescent and also the hospitals have a crescent. And no. That it doesn't know the crescent. Only knows the cross or an 8. Even if you play Quick Draw in Arabic because Quick Draw works in a lot of other languages, it still expects to see that. It also has a question where it asks you to draw an angel. And of course, my students draw the angel with wings and a halo and all that. Maha Bali [00:23:33]: But in Islam, the majority of Muslims believe you should not depict angels in pictorial form. And so so it doesn't have that cultural sensitivity, and we talk about this. And we talk about the the majority of people who are playing this probably live in a country where hospitals are depicted with an h or a cross, and a lot of them are not from us. So the majority won't do that. And so it starts to show the bias in the training data right away. But of course, it's pretty unpredictable. So you never know what's new you're gonna get from AI tools anyway. Fascinating. Maha Bali [00:24:08]: But it's so it's so important that they experience it because the the more they use it, the more they'll learn that it's not great. I know sometimes faculty are worried that students are gonna use AI and get good grades. I've never almost never seen something that's 100% AI generated that was good quality for a student paper. They're often off track, the students tend not to read them back. And and they're so obvious. So that's not obvious to you if you don't use AI or the teacher yourself. Right? That's the thing. Bonni Stachowiak [00:24:37]: Yes. Maha Bali [00:24:37]: That's why you need to use it. Bonni Stachowiak [00:24:39]: You were talking about reflecting on our teaching philosophy, and so sometimes why it's so difficult for me is that when we do that, then you're you're saying, oh, we'll just figure out your teaching philosophy and then just build it around that. And I'm thinking, what if the teaching philosophy is a flawed philosophy? What if what if that's not actually a way to build a life as an educator? You know? I I mean, we talk about that a little bit when it comes to academic integrity where I think James Lang has done an effective job of saying, if all you are doing in your job is trying to catch people cheating, are you in the right job? Yeah. Is that what you were? Maha Bali [00:25:13]: Is that why you came here? Yeah. And I Bonni Stachowiak [00:25:15]: think I mean, that's helpful. Maha Bali [00:25:17]: Yeah. So I have a question for you. As an educational developer, have you been in that situation where you've given a workshop that's full of tangible strategies that people can use in their classes? And then you go and observe them use it and you realize, oh my god. This person doesn't get it. He's repeating the motions. Usually, it's that he but he doesn't really understand why we were doing it because only only people in in in my practice will come to me. I'm like, oh, you've been changing our mindset all these years. It's not about the strategy. Maha Bali [00:25:46]: And those are the people who get what we're doing. We're trying to it's not just about whatever your teaching philosophy is. It's trying to make you a more compassionate teacher. These are our compassionate literacy, equity literacy, making them realize the inequities in some of the ways we normally teach. But if you don't have any of that and you don't have a deep teaching philosophy, you don't actually reflect a lot on what helps students learn, and you're just doing whatever. And then I tell you to do think pair share. And then they do think pair share, but they don't give students a lot of time to think. They do the think pair share, but the think is very quick. Maha Bali [00:26:17]: And the pair is whatever. And then the sharing sometimes they do the sharing instead of letting the students do the you know what I mean? Like, they don't get it. They don't get why they're doing it. And so I feel like if I tell that person, now convert your assessment into an authentic assessment that's experiential, that's such a huge leap if you're not already someone who believes in experiential learning and the importance of learning from experience and reflecting on it. So you need to go back and let's let's back to the drawing board, and let's let's build that so that it fits your course so that it's not like just the thing that you're doing that doesn't fit with everything else you're doing. Yeah. I Bonni Stachowiak [00:26:49]: think a lot of the culture can come down to talking about something. And and too often, it'll be, in my mind, talking about students and what their failings are as opposed to creating experiences that may facilitate learning. So there's just I mean, it's constant constant tension between a banking model of education. Maha Bali [00:27:11]: Makes Bonni Stachowiak [00:27:11]: sense. That's that's that's what I tend to see. I I wanted to make sure that before we get to the recommendations segment, I asked you about another piece that you've written, and that is something that I've struggled with so much. So I'm excited to have you share about it. And that's when it comes to AI, is transparency enough? And because you were mentioning when so when AI came out, a lot of the reaction was, okay. Rethink all your assignments and, you know, redesign them. Well, another thing, I felt constantly pulled in lots of different directions to you better be explicit about when you've used AI, and I use it, for example, and I have a disclosure in my emails where it'll say, yeah. I use AI. Bonni Stachowiak [00:27:47]: So the the the transcripts from our podcast get developed. The first round is developed by AI. We do have a human that goes through and tries to check for I'll always list off the names I think could be misspelled or those sorts of things. I'm sure we miss things. I'm sure we do. And then it will actually pull out quotes. And as a person that doesn't then look at the AI quotes and then pull out them, I mean, there's some failings in in, like, in humans delegating to other humans, you know, to do things or delegating to AI. So, anyway, I've gone back and forth between how do you even extrapolate where you used AI and where you didn't. Bonni Stachowiak [00:28:24]: I just had a meeting with our our faculty about course evaluations, students evaluations of teaching, and I didn't wanna record it. But right as soon as it ended, I recorded myself recalling the things that we talked about. Because other people on the campus, they really wanted to be there, legitimately. You know what I mean? So it's Mhmm. And and then it's, like, once I put my recollections of a meeting and ask AI to create a summary of my I mean, how much do I need to cite? Because at that point, who said what? So what are some of the thoughts that you've been having around? What what should we be documenting about our AI? Where where is perhaps alarmist thinking about how dare you not disclose every use of it? At at some point, I can't keep track of where I did or didn't. Maha Bali [00:29:11]: Yeah. That there's so many different dimensions to this. And that article also talks about accountability, which if if I have a moment, I'll talk about as well. But so transparency first. So my first thing about this just related to what you you were saying about where did it begin and so on. Sarah, Elaine Eaton was talking about the quote later than an era where she thinks maybe in the future, it'll be hard to distinguish between where the AI starts and we begin and, like, ends and we begin and so on. And I I can totally see that as it becomes integrated into so many of our tools against our will for the most part, which is a bit confusing and annoying. But, yes, there are other AIs that are integrated into our lives, and we don't notice them as much. Maha Bali [00:29:48]: But so, initially, one of my first reactions to having AI in the world was I'm gonna ask I'm gonna allow students to use AI, but they need to be transparent about where they used it and how they used it. And now I ask them to reflect on how they used it as well sometimes because that's the learning process. That's different than when you're in the workplace. Right? But if as a learning process, I want them to reflect on how they use it. Right? That's one thing. The other thing is as academics writing in journals, for example, there are journals that ask you very explicitly, did you use it in your writing? Did you use it in your grammar check? Did you use it in your literature review? Did you use it in your analysis? And when you're using it in analyzing human data, did you get permission from the people whose data is getting in there to use it? Did you anonymize it? Because you have no idea what the privacy rules how they changed. Right? So there are all these journals that ask you to be very explicit about it. And I remember Nature came out very early. Maha Bali [00:30:40]: It was like, AI is not your coauthor because AI cannot be accountable for what it's written. Right? So that's that's there's that element of transparency. My concern with transparency is a quite a different one in two ways. One of the ways is whatever you're getting out of AI, you're citing AI as the source of that information, but AI isn't the source of information. AI is re rephrasing something that's the work of a lot of other humans who are not getting attributed at all. And I strongly I have strong reservations about this with stuff that isn't general knowledge. For example, if you ask it about white supremacy culture, this is the work of Tema Okun, and I always forget her coauthor because he passed away and then she continued without him, which is my mistake always, but because I also met her in person. But, anyway, so it's that work of that particular person and her collaborator and not work of a lot of other people that'll talk about it as if it's whatever. Maha Bali [00:31:37]: You know? It's AI talking about white supremacy culture. It just and sometimes in the past, you'd prompt it, like, who said this stuff? It might give you the real author. It might not. But the point is that there is sometimes work that you need to be able to trace back, like, who actually wrote this, and what is their identity, and what is their positionality, and what else have they written about this? Because, for example, has later, written about how she doesn't like the way people use this work and how she wants it to be used differently. So if you're not in that frame and in that chain of scholarly thinking, you're getting the information sort of in a vacuum. And that's not what you get when Google's AI finds you a source because you can go to the source, and you know who said this, and then you can dig deeper. So when we say transparency, what the heck are we talking about? Because you're transparent that you used AI, but AI is not transparent about anything. It's a total black box. Maha Bali [00:32:25]: And the explainability that we keep hoping for in the transparency in the AI, it's maybe possible, but it's really, really difficult to do. I I know because I'm a computer scientist and I've designed neural networks before, and it's almost impossible. But it's I I think they could figure out a way to make it work. It's just really the so that transparency issue is problematic there. The other issue is I don't know if you know the story about Lorna Chernovich who got a Google alert that she was cited. And she opened the paper that cited her, and it was citing a paper that she and Sheryl Brown had never written. Bonni Stachowiak [00:32:55]: Yes. Yes. Yes. And there's Maha Bali [00:32:56]: a whole paragraph describing the work that they did, which they never did. And the problem here isn't transparency. It's not that the author wasn't transparent about using AI. The problem is that the author was an unethical because they cited a paper that they never read, right, in the first place. And so, like, the problem isn't that the AI gives you fake citations, by the way. The problem is that you're using citations that you've never read and claiming that you actually know what's in them, and you're using them in your paper. So you see what I mean? Like, it's the transparency is a completely different issue. I I had a student recently reflect on a board game that was played by an Egyptian person that we played in class using ChatChapiti, which totally misunderstood the game. Maha Bali [00:33:35]: The game is called sucrose. It's about, positive psychology and emotional literacy, and ChatChapiti thought it was a game about sugar manufacturing. And this student had no problem. I I checked, like, she was in class that day. But yeah. So, I mean, that's a whole other issue. And the thing is we need to be accountable for everything we do. And the bias in AI people say, well, people are biased, so AI is biased. Maha Bali [00:33:57]: It's just reflecting it's marrying us. I'm like, yes. But we get we are accountable for what we do. If Bonni makes a decision that discriminates against a particular group, he can be held accountable and punished for that or whatever. But AI can't be. And we tend to think of machines as neutral. Not all not you and me, but other people tend to think of machines as neutral. So the AI is smarter than us. Maha Bali [00:34:19]: Like, why? Why do we think a machine can be smarter than us? Why do we think a machine can be more neutral than us? We know historically that AI has been racist. We know facial recognition has been racist. It took a lot of work for us to become less so, and it still is. We know that it's high risk. And in Europe, they talk about it's unacceptable risk to use it in certain context. And in education, it's considered high risk, and I don't know why we keep talking about using it to make admissions more efficient. It's been raised it's been discriminating against people in recruitment. So it's the same with admissions. Maha Bali [00:34:51]: And using it learning analytics, for the most part, by the way, hasn't been always using AI, but if you use it like that, why aren't we using human judgment in places where things are so important because they relate to the future of a young person? And I feel like people who wanna use AI to give feedback to students, for example. Why? Like, this way, the student's gonna write it with AI. We're gonna put it through Turnitin to discover if it's AI, and then it's gonna get feedback from AI. So nobody is writing or reading anybody's work, then maybe they'll write that. Maybe that wasn't a necessary piece of writing in the first place. You know, cancel it altogether. Yeah. And I think, like, you can ask your students to let AI give them feedback on their own without your intervention, but your role as a teacher is to then give them human feedback, I think, or solve that problem by giving people smaller classes or more teaching assistance or whatever it is. Maha Bali [00:35:38]: Yeah. So that's my rant. I thought that. Bonni Stachowiak [00:35:41]: Yeah. Thank you so much. That's really helpful. I I am reluctant to move us to the recommendations, but I'm going to, and it's gonna keep relating to all that we've been talking about. I mentioned that I just had a great conversation with some fellow faculty about course evaluations. We talked about some of the bias that's inherent. We looked at a literature review as well as some data specific to our university, and I decided to use Google Notebook LM as, my attempt since I did not wanna record that session just so there could be a little bit more vulnerability in the conversation, which there really was. It was great. Bonni Stachowiak [00:36:20]: And so what I had done was record on I use a tool called whisper memos that actually have an Apple Watch, and I can tap the little button, and it listens to me talk. And it uses artificial intelligence to produce a transcript of what I said. And then I also had the literature review in Zotero, which is a references manager, and had I had I I don't know about you. How you haven't really shared much about this, Maha, but we have to I shouldn't say we have to. We choose to try to do something novel to get people's attention because they're getting all sorts of emails that come, so I've been having fun experimenting. This time, I told a story about our kids that they now know that Dave and I have a joke. Dave, my husband, and I have a joke about making a coffee table book, and the coffee table book is going to be reviews that our kids have shared after going and having these incredible experiences. So he took them to the NASA space camp, and the kids got to go. Bonni Stachowiak [00:37:22]: And, actually, Dave too, they got to experience what it's like to be in zero gravity. They got to meet with astronauts. It was just this complete experience that they'll never forget. And one of the 2 of them, I tried to keep them secret who it was. I'll tell you when we're not recording. But one of the kids said, it was better than I thought it was going to be. And so, you know how that is as parents or even as as educators, once they know that how to amuse you, then they'll just ham it up. So I had put little a gift. Bonni Stachowiak [00:37:54]: I had made a gift for the email that we sent out to the faculty to invite them to come of this fictitious coffee book of, like, going to all these incredible things and say, no. It was alright. Nah. You know, that kind of thing. And it did work. I mean, people read the blog post that I had written, and it and it did extrapolate on to how it relates to course evaluations, but I think I might have gotten, you know, a few more people to come because of that. So I linked to the blog post as well. And so I'm gonna play now my recommendation, by the way, spoiler alert, is that people go and experiment with Google notebook LM. Bonni Stachowiak [00:38:29]: I think it is a different conversation. We've certainly talked about it on this podcast, but there's a whole different experience when you're experimenting with large language models than when you're experimenting with a language model that is based on whatever you tell it to be based on. So I think that would be a good experiment. Maha Bali [00:38:49]: Right. Bonni Stachowiak [00:38:49]: Right now, a lot of what's coming up is that it'll make podcasts. So people that's always the joke of, oh, no. Bonni and Dave, they both have podcasts. They must be terrified of these podcasts. And when I've listened to these podcasts across a number of different contexts so I've listened to the tech bro kind of podcast that I like to listen to, the geeky podcast, because they've been responding to what it tells them on the podcast from blog posts they've written from a technical standpoint. I have a colleague who teaches theology, and and he so he's kinda geeking out on what it says or doesn't say, you know, how it's able to process that. And so this is 2 minutes, I'll play of 17, where I asked it to take all these artifacts, the literature review about bias and course evaluations, the blog post I mentioned, and the my recollection of the conversation that we had had that day. And so I'm gonna play 2 minutes, and, then Maha might have some reactions to it as well to how good a job we think it did or did not do with this particular thing. Bonni Stachowiak [00:39:56]: And I think it's gonna start at the beginning, but it doesn't have to start again. Audio from Google Notebook LM: You ever plan, like, a really awesome trip? Uh-huh. You know? And it's something that you think is gonna be super cool, like exploring some giant cave Mhmm. With crazy rock formations, And then your kid just kinda shrugs and says, it was okay. Oh, yeah. It's like, come on. Totally. And it got me thinking about that in relation to course evaluations. Yeah. We've all been there filling them out as students Yeah. Or maybe even stressing over them a little bit from the instructor side. For sure. But are those evaluations really giving us the full picture? That's a great question. And I think it really connects to this idea of paradigms Okay. Or the mental frameworks that we use to see the world. So our existing beliefs can actually shape what we notice and how we judge things. Exactly. Think of it like a filter on a camera lens. It changes what we see and how we interpret it. That makes sense. So in this deep dive, we're gonna explore some faculty perspectives on course evaluations, especially this thing that everyone's thinking about, but maybe not always talking about out loud. You mean bias? Yes. Bias. We're gonna dig into some surprising findings from a 2019 study. Oh, exciting. And we'll even unpack some ideas about how to get better feedback Hello. For actual teaching improvement, not just criticism. So kinda like that meh reaction in the cave you were talking about. Exactly. I mean, research shows that student perceptions are often colored by these pre existing ideas. Okay. And they might not actually have anything to do with how well the instructor is teaching. Right. And that can be really discouraging for faculty, especially when they see one negative comment. Oh, yeah. Even if it's surrounded by tons of positive ones. Totally. And it goes back to that idea from Stephen Covey. Oh. Stephen Covey, he wrote the 7 Habits of Highly Effective People. Oh. Right. Right. Right. And he said, what we focus on grows. Oh, yeah. I've heard that. So if a faculty member is already feeling insecure Yeah. Maybe they are prone to, you know, being a little sensitive to criticism. They're way more likely to 0 in on that one negative comment. End of audio from Google Notebook LM: Bonni Stachowiak [00:42:03]: Alright. I can't I can't listen to any more of it. I think that was probably more than enough. So, Maha, was So Maha Bali [00:42:09]: how do you feel about it? And then I'll see how I feel about it. Bonni Stachowiak [00:42:12]: Well, I have to admit to people we were we were chatting in the Zoom chat as it was playing. It's just when you listen to it, I think for the first time, most of us would go like, wow, this, you know, this is pretty amazing that it took text, a lot of it, and turned it into something in audio. I mean, it's like a like a magic trick that, I mean, I don't know. The first time maybe it's impressive, and it probably depends Maha Bali [00:42:40]: on with the, the humanistic intonation of their voices. If you're not used to listening to AI generated voices, you'd be very impressed with the AI generated voices. But I the AI generated voices in English, with the AI generated voices. But I the AI generated voices in English have been like this for quite some time. Bonni Stachowiak [00:42:53]: Yeah. So I I would say on this one, because I have listened to it across many contexts, and because I was there for the conversation and I know what the input was, not impressed at all. Because they they take what was a rich, in some cases, vulnerable conversation Maha Bali [00:43:10]: Yeah. Bonni Stachowiak [00:43:11]: And to something that Maha Bali [00:43:12]: life. Bonni Stachowiak [00:43:13]: Yeah. It's yeah. It's and I'm thinking Maha Bali [00:43:15]: I gave him my thesis and I was like, this is a high schooler interpreting my thesis. It's not wrong. Yes. Bonni Stachowiak [00:43:20]: Like most Maha Bali [00:43:21]: of what they're saying is in the thesis. It's so not deep. Bonni Stachowiak [00:43:24]: Yes. Yes. And the and the friend But Maha Bali [00:43:26]: also formulate. Bonni Stachowiak [00:43:27]: The friend that I was telling you that's using it in his teaching, I thought so helpful where it's possible that him playing the podcast excerpts of the podcast recording might help students be able to have the text to be more accessible while simultaneously helping them see the nuance and richness that is missing from it. And and he was using an example, I'm not a theologian, so he was I I was all I have to we're all in this text chat together about 5 of us, and 3 of the 5 of us knew what the heck they were talking about, and I was not among them. So but Maha Bali [00:44:01]: I'm actually very shocked about Theologian example because in you know, indigenous groups are worried about giving their data to AI because they would lose their rhetorical sovereignty over their data. And because I'm not an indigenous person, the thing that I feel very jealous over being messed up by AI is my religious information. And I could imagine, like, if you feed AI, like, the holy books and have AI give you feedback or or use it and mess it up, that would make me so angry. And so I not the thing that I would input into AI, but some people do that. I don't know. But the thing about the the Google notebook, it's I think it's cute and fun to try out with students, maybe just to try it out. I think it's useful for the text based versions of interrogating the the text. Right? But the the podcasting is cute, but very formulaic. Maha Bali [00:44:49]: Like, they're always giving these weird metaphors or similes. It's kind of like I do it like so often. Bonni Stachowiak [00:44:55]: Deep dive. Deep dive. We're gonna do a deep dive. Maha Bali [00:44:58]: Yeah. And and someone I remember on one of the mailing lists had realized this and I do notice that that it's mostly the guy and the girl agreeing with them. So he's like, oh, we've all been there, and she's like, yeah. Right. And Bonni Stachowiak [00:45:11]: Yes. Very formula. Like that. Very 2 dimensional, very formulaic. Although, I still I don't know. I've been intrigued by it, and we've been doing, I think you and I emailed about this, but we have been doing, Alexis, Pierce, Caudill, and I have been doing some experimentation with what it might be like to use AI, including Google Notebooks. She's been playing with it for shaving off some of the harsh feedback that might come out of course evaluations that isn't helpful to us. And and so that's a interesting area of experimentation that people are people are doing. Bonni Stachowiak [00:45:43]: Because sometimes the the flattening effect could actually be used to our advantage if the harsh words aren't edifying to shaping better teaching overall. Maha Bali [00:45:53]: Although, you know, maybe we need to just work on people's psychology and help them deal with negative feedback better. Wouldn't that be better? Or to hide it from them or go off of Bonni Stachowiak [00:46:02]: this? Plus, I also came across, you know, helping equip students to be aware of implicit bias. Maha Bali [00:46:08]: And to give give more constructive feedback, which is gonna be useful to them in their lives. Yeah. It's not just the teaching thing. But, yeah, that's a great point. Bonni Stachowiak [00:46:16]: Yeah. Alright. Maha Bali [00:46:17]: Although recently, I had students tell me, please be careful in the way you give the TA this feedback because we don't wanna make him angry. We wanna make him better. And I was so proud of them. Bonni Stachowiak [00:46:25]: Yeah. So good. Alright, Moha. I get to pass it to you for whatever you'd like to recommend. Maha Bali [00:46:29]: Okay. So my big, big recommendation is to let people know that Audrey Waters is back to writing about EdTech. So her blog is called Second Breakfast. And if you don't know Audrey Waters, Audrey Waters is the reason so many of us can be critical about EdTech now because of how powerful powerfully but accessibly. She has been writing about this for years. She has several books, and she had a blog, and she used to do, like, a hack education blog and towards the end of the year, like, sum up everything in a very critical way about EdTech and where's the money going and and why are certain things being hyped up. And if you really need an antidote to the hype of EdTech, her blog now, she she had stopped talking about EdTech for a while, but she's back. And the blog is has a free version and a paid version. Maha Bali [00:47:13]: The free version you get once a week. The paid version you get twice a week. It's worth paying for, honestly. And I'm so glad she's back. I had missed her so, so much. She's also a really lovely person in person, by the way. She's very snarky in her writing, but actually very sweet in person. So that's a very interesting combination of a person. Maha Bali [00:47:29]: So that's my biggest recommendation. And in one of her recent posts, there was a link to something by Tressie McMillan Cottom, and that's another person you need to follow on Instagram or TikTok. And Tressie was talking recently and that's also another person, like, you have to follow in general in life because it's very, very sharp and very insightful. And she was talking about what's happened with AI is that people were scared of it in education, but then all the money was going to it, all the grants and so on. So people had to be like, oh, and so now everybody is on board on the AI bus for the wrong reasons, she thinks. Like, a lot of it is about following the money and where's the money going. And so you need to listen to Tresi on that as well. And then the 3rd recommendation very quickly is I have a survey in my, blog post about the transparent is transparency enough? And that's on the London School of Economics Higher Ed blog. Maha Bali [00:48:19]: There's a survey there, so I want people to answer the survey. I mean, you have to read the article to be able to answer the survey. Sorry. But I do want people to answer the survey if it comes out if this podcast comes out soon enough for the deadline of the survey. Bonni Stachowiak [00:48:32]: Absolutely. We'll put all those links in the show notes, and just so grateful for today's conversation and yesterday's and the day before, and so grateful for your voice and for your friendship. Maha Bali [00:48:43]: Same here, Bonni. Love you. Bonni Stachowiak [00:48:47]: Thanks once again to Maha Bali for being a guest on today's episode. Today's episode was produced by me, Bonni Stachowiak. It was edited by the ever talented Andrew Kroeger. Podcast production support was provided by the amazing Sierra Priest. Thanks to each of you for listening. If you'd like to get the most recent episodes show notes in your inbox each week, head over to teachinginhighered.com/subscribe. This one's gonna be a good one. And you'll also receive some other resources that don't show up in those regular show notes. Bonni Stachowiak [00:49:25]: Thanks so much for listening, and I'll see you next time on Teaching in Higher Ed.