Bonni Stachowiak [00:00:00]: Today on episode number 524 of the Teaching in Higher Ed podcast, toward a more critical framework for AI use with Jon Ippolito.

Production Credit: Produced by Innovate Learning, Maximizing Human Potential.

Bonni Stachowiak [00:00:23]: Welcome to this episode of Teaching in Higher Ed. I'm Bonni Stachowiak, and this is the space where we explore the art and science of being more effective at facilitating learning. We also share ways to improve our productivity approaches so we can have more peace in our lives and be even more present for our students. I am so glad to be welcoming to the show today Jon Ippolito. He's an artist, writer, and curator who teaches new media and digital curation at the University of Maine. Winner of Tiffany, Lannan, American Foundation, and Thoma awards, Ippolito is cofounder of the Variable Media Network for preserving new media art, the University of Maine's Digital Curation and Just-in-Time Learning programs, and Learning With AI, a toolkit for educators and students that makes it easy to filter AI assignments and resources by discipline or purpose. Ippolito has given over 200 presentations, coauthored the books At the Edge of Art and Re-collection: Art, New Media, and Social Memory, and published 80 chapters and articles in periodicals from Artforum to The Washington Post.

Bonni Stachowiak [00:01:57]: His AI focus is on creators, writers, programmers, and media makers, and how the technical, aesthetic, and legal ramifications of generative AI empower and frustrate them. Jon Ippolito, welcome to Teaching in Higher Ed.

Jon Ippolito [00:02:21]: Thanks, Bonni. Glad to be here.

Bonni Stachowiak [00:02:23]: Would you take us back to the early 1990s and tell us about the mix-up: what job you thought you were applying for at the Guggenheim, and the job you actually got?

Jon Ippolito [00:02:35]: Well, yeah, fresh out of art school, I didn't have a lot of options. I had some background in martial arts, and I knew something about art history. So I applied to the Guggenheim to be a guard, because they had a program where guards with art history backgrounds could also be docents. Saves money: you have one person watching over the Picasso who at the same time can explain what the Picasso is about. I made a mistake, got in the wrong interview, applied to be a curator, and ended up being hired as one.

Bonni Stachowiak [00:02:59]: Oh, it's fascinating. You've probably seen those memes where it's, here's where I started, and here's how people think success works, and it's this linear line of a forward and upward trajectory. And everything that I've learned about you seems like you would draw a very different picture in your here's-how-things-actually-turned-out, including that early job interview.

Jon Ippolito [00:03:20]: It would be a very twisty, twirly mess. The one thing I always remind people: one of my students, Isenberg, who now creates mobile games for the iPhone for the New York Times, always warns me about survivorship bias, a term you might have heard, which is, like, just because someone made it big or achieved something doesn't mean you can follow in their footsteps. And the classic sort of joke about that is someone saying, you know, I was poor my whole life, but then I just bought a lottery ticket every week, and here I am rich, and you can do that too.

Bonni Stachowiak [00:03:52]: Well, I have had such a blast.
This was an unusual experience, having you sign up for an interview, because you signed up for a time that was less than 24 hours from when we're actually talking today. And I love that, because you're so fascinating that I could have practically written a master's thesis and still not been remotely close to the end of my curiosity. So I'm so excited about today's conversation and grateful for your time before you head out of town. Would you first talk to us about intelligence? Before we start talking about artificial intelligence, let's just talk about intelligence in general. How would you casually define intelligence?

Jon Ippolito [00:04:31]: There's a lot of different kinds of intelligence, obviously, you know, emotional, intellectual. Another of my colleagues at the University of Maine, Joline Blais, likes to talk about maker intelligence, which again is someone who can produce creative, really interesting things that require a medium or some kind of activity to realize themselves or to be seen by others. In school, we tend to validate intelligence that comes about from someone being able to summarize a book that they read, or write articulate paragraphs about the French Revolution, or in other ways verbally either elaborate on or possibly extend things that they have already seen and, more commonly, read. So words have been the sort of gold standard of what intelligence has meant in at least the US and many other school systems. Now we're at a point where word simulation, language simulation, is ubiquitous, and it's not coming from people. And if it is being used by people, it's certainly not necessarily a marker of their intelligence. So that sense of the word intelligence is really broken now, and we need to either rethink what intelligence means beyond the scope of simply being articulate and knowledgeable when it comes to putting words together, or we need to think about what other values we want to promote and encourage and teach that go beyond intelligence. So one that I'm particularly interested in, especially as it relates to large language models, is creativity, which is not the same as intelligence, but is another potential, what would I say? It's even hard to describe exactly what it is, because it's unclear.

Jon Ippolito [00:06:14]: Is nature creative? Is a baby, you know, a child who's picked up a crayon for the first time and touches paper, are they creative? Is it a faculty? Is it a skill that can be learned? Is it a talent that's genetic, that somehow emerges as someone gets older? That's one of the many facets of intelligence I'm personally interested in.

Bonni Stachowiak [00:06:37]: Oh, and you talked about giving a child a crayon. One of the areas of research I've seen around that, that I was thinking of as you were sharing, is the idea of, am I good at it inherently, or are these skills that can be built? What I instantly thought of is that we can hinder imagination when we start listing off the things. So if you give a child a toy and you say, oh, let me show you a little bit about how this toy works.
You could pull on this thing, and then this little thing over here plays music. But instead, if you just give them the toy, the research that I saw suggests they'll come up with way more things that the toy can do and be a lot more creative about its possibilities if we don't start with the, let me give you the little tutorial for how this thing works, and instead just kinda get them playing with it. And that might actually be a fun way for us to start to talk about artificial intelligence, because you are someone who I perceive as playing with it and experimenting with it and encouraging others to do so. Is that a fair estimation, that you might be energized by or interested in the playful aspects of artificial intelligence? Perhaps even just to see what it can't do, in addition to what it might be able to do.

Jon Ippolito [00:07:44]: Yeah, I do think that's fair. I think that there's a very critical piece to that that I should come back to later. But one of the things I've noted, as someone who's looked at the history of how artists in particular have used technology, is that people who are perhaps less creative will look at the manual. Right? Artists don't look at the manual. They immediately start messing with something, and often see if they can break it, or misuse the technology in ways it wasn't intended. So there are artists, for example, who created the very first web animations. This is back in the very early days of Netscape 2.

Jon Ippolito [00:08:19]: Right? So we're talking mid-nineties, before Flash, before Director, before JavaScript. There was a way to fake a sort of succession of images and colors on a web page by adding multiple HTML body tags in a row, which is illegal. You're not supposed to be able to put more than one body tag. But in this case, they quickly found that they could hack their way into a primitive kind of animation this way. That didn't last long, because the browsers fixed the bug. But the idea of exploiting what we might think of as a weakness in the technology and misusing it is something that hasn't just been the realm of art. If we look, for example, at why we know there's a big bang, that discovery was made because people thought that pigeons shat all over the antennae on the roof, and they went to try to clean them off. And, lo and behold, they cleaned them off, and they still heard this buzz in the background that turned out to be the cosmic microwave background radiation, the strongest evidence for this idea that the universe is expanding.

Jon Ippolito [00:09:15]: Apollo 13 astronauts had to, you know, put together a filter with a piece of duct tape and a couple of other random parts, all they had in the capsule, and managed to save their lives and give themselves enough oxygen to make it back to Earth. So the idea of working with constraints and, in particular, doing things that are not on the label, sometimes things that are explicitly prescribed not to do on the label, and sometimes things that just are never mentioned or thought of by other people, that's definitely within the realm of creativity. And I would say that whereas a lot of people can be creative with technology, to be creative specifically in a technological sense means almost always to misuse it.

Bonni Stachowiak [00:09:54]: And so talk more, then, about this critical piece you wanna make sure that we hear from you too.
Jon Ippolito [00:09:59]: So this is an interesting sort of realm of metaphor, and we have to fall into metaphors sometimes talking about AI, because it is alien to us. It's a strange neurological process that doesn't resemble too much of what else we've experienced. I think that the tech bros and AI CEOs will tell you, well, it mimics the human brain. And I think it's very different from the human brain. Certainly, deep learning is closer to the human brain than, say, the old models of a bunch of if-then statements that somehow encompass the rules for, I don't know, identifying a chair, or being able to solve two simultaneous equations, or something like that. But in our case, the way that these models work and generate information is quite divorced from other things we've seen. So we fall back on metaphors. And Maha Bali and some of her coauthors have a really great article on the different kinds of metaphors people are looking at when they identify AI and how they describe how it works and where it fails.

Jon Ippolito [00:11:00]: I'm actually thinking right now more of metaphors of how we deal with AI. Not of the AI itself, but how we deal with it. So there's obviously the embrace. Like, oh, let's take this in, and we'll accept it. There's the sort of shunning. Oh, this is, you know, this is a pariah. We shouldn't have anything to do with it. In both cases, those are human metaphors.

Jon Ippolito [00:11:19]: Right? We embrace a cousin we haven't seen in a long time, but we shun someone who is a danger to us. But I'm actually of the mind that an approach that can help, at least for some of us as researchers, is what in Tai Chi they call rolling back and pressing forward. It's a sort of two-step process where, if you're attacked, and this is imitated in a practice called push hands, where you spar in a more gentle way with someone else in Tai Chi Chuan, the martial art, you allow someone's energy to come at you, and you just sort of receive it all. But in doing so, you're sensing where they are, where they're pushing, and where their weight, their center of mass, is. And then when you're ready, when they've overextended, that's when you can push back. And when you push back, you are not just fighting with knee-jerk resistance from the get-go. You're responding to their energy and using the information you gathered in the rolling-back stage to be more strategic about what direction you want to push them. And so I think of this as a good metaphor for how to deal with a lot of things, including AI.

Jon Ippolito [00:12:26]: I don't think it's a great solution to just say, these things are unethical; we shouldn't use them at all, because they're already being used all around us. They're in CV screening software. They're in facial recognition. They're in self-driving cars, to the extent those are a thing. And we're giving up our agency to other factors, other forces in the world, when we let them decide what directions these systems should go. Whereas if we roll back and accept what these things are, as you say, just play with them, right, because this form of Tai Chi is a kind of play, we can learn more about how they work and, most importantly, where they fail.

Bonni Stachowiak [00:13:03]: Mhmm.
Jon Ippolito [00:13:04]: Once we know that, we can start to push back and say, this is the value they can have for us. This is the direction we want to bend them in. These are the ways we don't want them to go.

Bonni Stachowiak [00:13:12]: Oh my gosh. That's such a powerful metaphor. Thank you for that. And I will definitely put that link in the show notes that you referred to, from the authors Maha Bali and others, about those metaphors. But what a powerful one; I have not heard it spoken of before today. So speaking of people's misunderstanding, what are some of the common things that come up from people not really understanding how it works? We've heard so many of these metaphors: it's like a calculator. No, actually, it's not. So where is that breakdown, a couple of things that you think a lot of people, especially in the higher education space, if we can limit it to our group who may be listening to this podcast, tend to misunderstand about how it works?

Jon Ippolito [00:14:01]: Sure. I'm trying to think of whether I want to pick on the people who are the promoters of the technology or the ones who are squeamish about it first. Maybe I would say start with some of the buzzwords that people use when they describe AI that I think are really misleading. Right? So the word hallucination is my biggest bugaboo. Yes, they generate falsehoods. No, they are not connected to a trillion cells of a human sensorium with decades' worth of experience bumping into real objects in the real world and the consequences of that.

Jon Ippolito [00:14:40]: Right? As humans, we feel pain. We have troubles in our relationships. We succeed at doing something and get rewarded for it in ways that are not the same as just changing a bunch of weights in a matrix somewhere. So when I hear, and this is a term that comes not from the tech critics but from the tech champions, when I hear them say, oh, that's just a hallucination, the model was hallucinating, they're referring to, of course, the generation of a falsehood, or potentially a nonsensical output, after you prompt something into ChatGPT or the like. But that implies a kind of bodily sense of the world, as well as the idea that something went wrong.

Jon Ippolito [00:15:23]: If you hallucinate now, you know, you see a pink elephant behind me, and I swear there's no pink elephant behind me, your senses and their connection to the brain would be messed up. Something is going wrong. Right? You've either got the DTs because you drank too much, or you're having a deep psychosis, or some kind of neurological problem. That's not normal functioning of the optic nerve and how it connects to the brain. Well, if a language model hallucinates by saying Obama was the first Muslim president, or that it's okay to eat rocks as long as you eat small ones, both of which were statements generated by Google's latest foray into AI, namely the AI Overviews that they offered as summaries on search result pages, both of those statements are wrong, but they're not wrong because the LLM did something different than it usually does.
The large language model did exactly what it always does, which is to essentially grab the words that are in the prompt, relate them to other words that have been clustered with those words online in billions of web pages that have been scraped from the Internet, and then try its best to predict the next words in the sequence. So we can't call it hallucinating if that's what it does when it's not hallucinating.

Jon Ippolito [00:16:37]: It's simply accidentally wrong, as opposed to something about the process being wrong. So the process is divorced from the real world of right and wrong, correct and incorrect statements. It's its own thing, and there's no difference between an AI that hallucinates and an AI that doesn't, except the vagaries of how it was trained and the specific randomization of responses that goes on in the background. So that's probably my biggest bugaboo: people misunderstanding this as a kind of software that has bugs. No, a large language model isn't a kind of software that can have bugs. I mean, in principle, some parts of it can, but not the meaty part, the engine of it that generates, you know, thoughts and comments and even imagery and media. It's not even the kind of thing that you can consider software in a conventional sense.

Jon Ippolito [00:17:32]: There's a bunch of vectors with numbers in them, and there's just some very basic code that essentially recombines those vectors in different ways. So there's no memory leak to be patched. There's no web URL that's broken. It's not the kind of thing you would fix the way you would fix traditional software. That can lead us to other kinds of claims or words that, again, are used in this industry that I think are very misleading. The idea of guardrails: well, guardrails are something you put up to prevent a car from accidentally driving over the edge, but there's a human in the car who can see the guardrail and steer clear of it. There's no one steering a neural network in the conventional sense. There are more potential vulnerabilities and failures of a large language model than there are guardrails we could possibly put on it.

Jon Ippolito [00:18:29]: And that's why, years after large language models were introduced, we still see embarrassing failures by major companies where their models spit out things that are either ridiculous or harmful. I could talk about more of these buzzwords, but I think those two give you a sense.

Bonni Stachowiak [00:18:43]: Oh, yeah. And I do wanna hear the latter half, where you share a bugaboo perhaps about those who have been more resistant to engaging. But before we do, I would love to take this opportunity to have you talk about this exercise that you've had students do that helps them explore connecting random things together. Could you talk a little bit about what that exercise is and what it's taught you, as well as the students that you've worked with?

Jon Ippolito [00:19:10]: Sure. So one of the research topics I'm most interested in is creativity. And creativity, for AI anyway, is the inverse of trust. Right? A lot of people are worried about, oh, can these things be trusted? Are they reliable? Right? And we see many examples when they're not. Other people, typically, are interested in whether they can be creative. Like, is artwork that's generated by Midjourney or Stable Diffusion really art? Or can it have the same level of aesthetic value or innovation as a human artist?
To me, these ideas are intricately related. In fact, they're inversely related. The more creative you are, the more you can't be trusted, to put it bluntly.

Jon Ippolito [00:19:49]: So when you think about what creativity means in the context of a large language model, well, it's hard for us to predict or imagine that the autopredict that gives us the next suggestion on our phone, when we start typing something to a friend, like pick up some bananas at the grocery, could be creative. Like, how could that be creative? It's just filling in words. Remember that you're dealing with billions of web pages, all of which have word frequencies where, you know, the same word will relate to another word. Right? So an example I often give is, you might see pet and dachshund nearby in many web pages. You're probably not gonna see pet and velociraptor. Right? So the pages that have velociraptor are gonna be far away in semantic space from the pages that have pet or the word dachshund. And so when these models are trained, they start with random parameters, and then, over time, the network is rewarded every time a connection is made that imitates the likelihood that two words appear together in the actual corpus of data that's out there online. So, again, to oversimplify, by the time you're done with the training, you should have a vector, which is a bunch of little numbers, for every word.

Jon Ippolito [00:21:11]: So the word pet is gonna have something like 12,000 little numbers inside this vector. And each of those corresponds to a word that it may or may not be associated with. Dachshund is actually gonna be quite associated with the word pet, so that might be a pretty high percentage for that little number, like, you know, 30% or 10%. The word velociraptor is gonna be incredibly low, like 0.0001%. And, literally, the training that an AI neural network goes through just generates, for every word, the sequence of numbers that corresponds to its likelihood of appearing next to any other word in the training data. So to be creative is to say, well, what if you could have a pet velociraptor? And this is quite strange for us, because we know there's no web page, well, maybe there is a web page out there with a pet velociraptor in it, someone's fanfic for Jurassic Park, I don't know. But let's suppose there isn't.

Jon Ippolito [00:22:08]: We would say, well, how could the large language model discover this page or write a story about this if it has never seen these two things combined before? But this is where the idea of averaging comes in. And, again, to somewhat oversimplify, when you write a bunch of words in your prompt, let's say you're working with an image generator like Stable Diffusion or Midjourney or DALL-E, you say, hey, show me a picture of a pet dachshund. It's going out and finding phrases and websites that include pet and dachshund and associating them with a picture. And this is easy to do on the web, because pictures have captions, and alt tags for accessibility behind the scenes in the HTML. So it's gonna go through and find the pictures that are associated with the words pet and dachshund, and it's going to, in a rather fancy way, average those pictures to give you a result. And you will see a girl walking a pet dachshund.
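[Illustration: the mechanism Ippolito sketches above can be made concrete in a few lines of toy Python. This is a minimal sketch under loose assumptions, not the architecture of any real model: the words, the four dimensions, and every number below are invented for clarity, whereas real systems learn vectors with thousands of dimensions from billions of pages. It shows how co-occurrence puts "pet" near "dachshund" and far from "velociraptor", how averaging two vectors yields a midpoint no training page may ever have occupied, and how output selection is a weighted random draw.]

    # Toy sketch of embedding similarity, midpoint "creativity," and
    # randomized word choice. All values are invented for illustration.
    import math
    import random

    # Hypothetical 4-dimensional embeddings. Imagine each dimension as a
    # learned association, roughly: domestic, animal, prehistoric, scary.
    embeddings = {
        "pet":          [0.9, 0.8, 0.0, 0.1],
        "dachshund":    [0.8, 0.9, 0.0, 0.1],
        "velociraptor": [0.1, 0.9, 0.9, 0.8],
        "fossil":       [0.0, 0.3, 0.9, 0.2],
        "leash":        [0.9, 0.6, 0.0, 0.0],
    }

    def cosine(u, v):
        # Similarity of two vectors: words that co-occur online end up close.
        dot = sum(a * b for a, b in zip(u, v))
        norm_u = math.sqrt(sum(a * a for a in u))
        norm_v = math.sqrt(sum(b * b for b in v))
        return dot / (norm_u * norm_v)

    def midpoint(u, v):
        # The "average" of two words: a point that may match no real page.
        return [(a + b) / 2 for a, b in zip(u, v)]

    # "pet" sits close to "dachshund" and much farther from "velociraptor".
    print(cosine(embeddings["pet"], embeddings["dachshund"]))     # ~0.99
    print(cosine(embeddings["pet"], embeddings["velociraptor"]))  # ~0.49

    # Averaging "pet" and "velociraptor" lands on a brand-new point, the
    # "pet velociraptor" idea, even if no training page ever contained it.
    hybrid = midpoint(embeddings["pet"], embeddings["velociraptor"])

    # Word choice is a weighted random draw toward nearby words. The same
    # process runs whether the output happens to be true or false, which
    # is why "hallucination" is not a malfunction of the engine.
    candidates = ["dachshund", "leash", "fossil"]
    weights = [cosine(hybrid, embeddings[w]) for w in candidates]
    print(random.choices(candidates, weights=weights, k=1)[0])

[Run repeatedly, the last line varies; that variability is the "randomization of responses" mentioned earlier, identical whether the result is sensible or nonsensical.]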
Jon Ippolito [00:23:00]: Well, if you ask the same image generator, show me a pet velociraptor, I want a girl walking a pet velociraptor, it may have never seen any of those pictures, but it still can find the web pages associated with girl, and pet, and velociraptor, and then sort of average those pictures together. And that's how you get a girl walking a pet velociraptor out of Midjourney or Stable Diffusion or DALL-E. This is an extraordinary thing technically, and I could go into how it happens behind the scenes, but I think it's beyond the scope of this podcast. More to the point, it's an example of the kind of creativity that these systems are capable of, and it is both a curse and a blessing. It's a problem because the average of two facts is not necessarily a fact. So you're gonna get a result from a query that isn't true, and we would call that hallucinating. But that's simply because, essentially, the chatbot is looking at all the words in the prompt and averaging all of those words to find their midpoint.

Jon Ippolito [00:24:12]: And it could be that there's a word there, or a sequence of words, that makes perfect sense, like girl with a pet dachshund. Or it could be that the average of those, the sort of midpoint of all of those vectors in this abstract mathematical space, is something for which there has never been a web page. There's nothing in the training data for that, because no one's ever thought of having a pet velociraptor before. And yet the AI is capable of generating a sequence of words, just like it normally would, from the averages of all the words in the prompt. And that to me is quite interesting, because it means that although the average of two facts isn't necessarily a fact, the average of two cliches isn't necessarily a cliche either. That's where novelty and creativity can come from. That's why we can ask it to create a haiku about quantum mechanics, or the script for a romantic comedy about the French Revolution. It's incredibly good at generating these kinds of hybrids.

Jon Ippolito [00:25:11]: And I hope that gives you a sense of why I think the creativity of these systems and their reliability are sort of inversely related. Understanding where the novelty comes from helps us distinguish between times we should use it, where that kind of creativity is valued, versus times when we shouldn't, where reliability is required.

Bonni Stachowiak [00:25:36]: Oh, it's almost like you've talked about this stuff before, because that's exactly where I wanted to go next. The division that I lead at our university, we had it write for us a traditional holiday story that many children read, and it was terrible at it. But it was extremely low stakes, and what I find so joyful as I think back to that experience: we have such a diverse group of people that I'm privileged to work with, and every single one of those people got engaged in one way or another. Some people were highly engaged. Some people were just, hey, let's change this wording. But it was so fun to be a part of that. But, yeah, it didn't matter that it wasn't a great creative endeavor, and we made it better.

Bonni Stachowiak [00:26:22]: Every single one of us made it better.
But I would love to have you talk, because I was surprised, in doing the research to prepare for today, by some of the things you revealed about looking at high stakes versus low stakes, and then another factor that I hadn't thought of before: opportunity versus prescription. So can you walk us through what might be, for many of us, a different way of thinking about when we may or may not wish to use artificial intelligence?

Jon Ippolito [00:26:51]: Sure. I'd love to hear more about the engaging quality of your group work with writing a story. I often think about how, if I'd had this when my kids were young, I could have had ChatGPT write bedtime stories involving their favorite imaginary characters. You know, I think there's immense potential there for education. Anyway.

Bonni Stachowiak [00:27:08]: Yeah.

Jon Ippolito [00:27:10]: So when we think about reliability versus dangers, one of the first determinants that comes to mind is, oh, high stakes versus low stakes. Right? That makes sense. Right? These things are creative but unreliable. So, of course, I'm going to give them tasks that are sort of trivial or unimportant, whereas for something that's really critical, that absolutely needs to be done right, I'm not gonna task an AI to solve that. I think that's wrong. It's kind of tempting, but it's easy to find examples that don't work according to that kind of dichotomy. So one of the big success stories of artificial intelligence is in certain realms of health care. One of the first job categories to be endangered, well before ChatGPT, was radiologists, who look at X-rays for a living and identify potential tumors and the like.

Jon Ippolito [00:28:00]: There's lots of other examples, many of which have to do with medical imaging, because that sort of machine vision was an early research angle before natural language processing got up to speed. Well, that's not low stakes. That's really high stakes. Right? If you have cancer in your lungs, you wanna know. But the difference is, it's not about whether any one of these blobs that the AI identified is definitely cancer. It's whether it can simply identify a bunch of things that should be screened and looked at more closely, perhaps biopsied in the worst-case scenario. So it's an example of something that's really important, but for which AI is, according to numerous studies, in some ways more valuable than a human researcher, at least in the initial stages. Well, then certainly, why would it ever be a problem to use AI for low stakes? I mean, we could at least keep that half of the dichotomy.

Jon Ippolito [00:28:57]: I think that's also wrong. I taught a class online last year where we used Slack as our LMS. I don't like folder-based, clunky, constipated learning management systems. I like very dialogic, conversational courseware, even for an asynchronous online class. And it's been great for a lot of reasons. But this past year, one student was giving pretty predictable responses. And they weren't even bad responses. They were just sort of generic, and they sounded a lot like an AI chatbot.

Jon Ippolito [00:29:31]: And so I called this person on it, and I said, look, I don't mind that you're using it. In fact, there are numerous places in the syllabus where I recommend you use AI and try it out.
But responding to classmates with that kinda human touch that I'm trying to encourage in class dialogue is not a good use of this program. Does it matter? Not that much, in the sense that it didn't impact the person's grade. It also doesn't matter if you write a thank-you note to, you know, your boss, thank you for your contributions this year, by writing something that sounds like it came from ChatGPT. It's not like you're being fired or hired.

Jon Ippolito [00:30:06]: It's not a strategic decision. It's just a thank-you note. Who cares? It's low stakes. Well, I care. I don't want ChatGPT writing thank-you notes to me. Right? I don't want a boss pretending like they're connecting with me on a human level and really using an AI to do it. And, similarly, I want class conversation, even over trivial things like, hey, good idea, or, I'll have to check out that link.

Jon Ippolito [00:30:27]: I want those to be real, even if they're informal, casual, kinda one-off responses. So one way to think about that might be, well, it's low stakes in the short term, but in the longer term, you're basically degrading the human relationships that are essential to this community or connection or class or whatever. But I think these are examples that show you that high stakes versus low stakes isn't really the right dichotomy for thinking about when to use AI and when not.

Bonni Stachowiak [00:30:54]: And I think part of what I'm hearing you say, or maybe it's what you're not saying, is that perhaps even a dichotomy isn't the best way to think about this at all. Maybe because, I mean, you've already introduced another one; instead of high stakes versus low stakes, you've also introduced what the outcomes might be. But I don't know. I wanted to ask you how this relates to your thinking on feedback to students on assignments that they're doing. I have been troubled, to say the least, about the ways in which there are often visceral reactions by faculty: how dare students cheat on an assignment, where they're defining cheating as making use of AI, but the same logic that they're using doesn't apply to anything that would save faculty time. And I wanna be very careful how I talk about this, because I've been reminded, and I'm thoughtful enough, I think, self-reflective enough, to realize I sit in a privileged place. I'm not teaching classes of 200 with no TAs to speak of.

Bonni Stachowiak [00:31:57]: And so our contexts are different, but that same thing that you talked about: I finally confessed to colleagues, and they didn't mind at all, that for the formal letters in support of their applications, their portfolios for promotion or tenure, ChatGPT does a far better job than I ever will. It's not only faster, but better. And so they were like, why are you apologizing to me? But those same colleagues, these are the dearest of dear friends, I wouldn't even think to write them a thank-you note that was at all generated. So, in terms of feedback to students, what's some of the thinking you have around that? Using AI, not using AI, and I wanna be nuanced about it myself.

Jon Ippolito [00:32:39]: Yeah. It's a great question, very much in the news now with the revelations of education-adjacent services and companies like Khan Academy promoting this idea that we can automate all aspects of student assessment and so forth. I personally think that it's a bit of a double-edged sword.
I do recognize there are some contexts in which relying on AI-generated assessment can be fairer.

Bonni Stachowiak [00:33:04]: Mhmm.

Jon Ippolito [00:33:05]: But there are a lot of contexts in which it's not. Studies show, for example, that people who speak nonstandard English, or use dialects that are not sort of standard white-people talk, can be disadvantaged. Even names: I saw a study recently, which I will have to look up, in which somebody fed the exact same assignment to one of these chatbots but simply changed the names to suggest different ethnic groups. Right? So one sounded very clearly like a sort of white American, and others sounded Indian or Chinese or, like, Black American, and they got very different scores on exactly the same assignment that was submitted. But I think there's a difference between grading and feedback. And, in my experience, the chatbots can be incredibly valuable at offering perspectives on writing or other assignments that are not part of the student's original thinking, especially when prompted correctly. Now you might say, well, Bonni, that's totally contradictory, because on one hand, you're saying these bots are biased against certain groups, and on the other, you're saying they're good at encouraging students to look beyond their own personal biases at exactly the kinds of issues facing these same groups.

Jon Ippolito [00:34:15]: And I think both are true. Two computer scientists, Greg Nelson and Troy Schauder, and I conducted a semester-long experiment in the fall with 50 students. We had them do seven tasks without AI and seven tasks with AI. And one of the tasks with AI was to submit a work of theirs, not just for our class but actually for other classes, into, in this case, GPT-4, and then ask what the grade would be and get feedback. The results are still coming in. We're gonna publish this coming year, I hope. But preliminarily, it looks like students felt more or less equally about both kinds of feedback.

Jon Ippolito [00:34:57]: It turns out that ChatGPT was a harder grader. I think that was kind of circumstantial; it had to do with it not really understanding the level of the class. So they tended to get worse grades. The feedback was more all-encompassing, whereas the human TAs tended to focus on specific pieces rather than try to cover all the bases. But the other interesting factor was that the feedback meant more coming from a human. They knew this person was a flesh-and-blood creature reading their essay or looking at their image or whatever, and they appreciated that aspect.

Bonni Stachowiak [00:35:28]: Oh, it's fascinating. Now, I don't wanna forget to ask you to describe the exercise that you ran with students to help them experience these things coming together that are seemingly disconnected.

Jon Ippolito [00:35:42]: Yeah. So I gave the sort of sketch of how large language models work, by averaging the words surrounding a prompt, even if that explores a part of the space for which there are very few data points. And I described this kind of creativity. Not necessarily creativity of the sort that invents a new scientific theory or a new art genre out of nothing, but the kind that mashes things up and creates hybrids. So the assignment is: pick one from column A and one from column B, and ask ChatGPT to find the connection between them. And they are deliberately from the exact opposite realms of the world.
So, you know, what's the relationship between Taylor Swift and the Big Bang? Or what's the relationship between aardvarks and the Industrial Revolution?

Jon Ippolito [00:36:27]: It's just trying to go as far out as you can. And what surprises students is how, instead of balking at this, like, well, there is no relationship, it runs with it. Increasingly, the companies have started to add guardrails that say something like that: well, you know, there isn't any direct connection between Taylor Swift and the Big Bang. But then it will go on and say both of them are disruptive forces that have created entire new eras, and it'll go on and on and explain, in this bullshitting but actually also creative way, connections between them. I think in one case, a student asked, what's the connection between a chupacabra, which is a sort of cryptid, this kind of scary goat-type creature that's a kind of fiction, sort of like Sasquatch or the Loch Ness Monster, and dark energy, you know, dark matter? And it turned out that it came up with this whole theory of quantum entanglement being the way the chupacabra could appear and disappear. So it's definitely a wonderful exercise, because it's so quick to do.

Jon Ippolito [00:37:26]: And I think, rather than just being a parlor trick, it's also a good lesson as to how these things work. Anytime you are generating an answer, it's using that same kind of averaging, trying to find the sort of midpoint between these other words or prompts. And so if you realize that this is what's happening when it makes a connection between Taylor Swift and the Big Bang, you can also realize that that's how it's generating results that sound plausible but may not be true.

Bonni Stachowiak [00:37:54]: I haven't tried out this analogy yet, and I will admit to being rather feeble at them. But I'm kinda thinking back to you talking about guardrails and the rightful criticism of asking these big AI companies to build these in, you know, and self-regulate. That's not necessarily it, but I'm feeling like you and so many others who have been working out in the open, experimenting, sharing what you're coming up with, and working with students have been helping so many of us, rather than choosing between one dichotomy: I'm either gonna put my head in the sand, act like this doesn't exist, and act like I can police my way into never having students use it; or, the other extreme, use it irresponsibly without realizing some of the true ethical issues that have been baked into the systems, because they were built by humans, and they're being trained on language that is filled with bias. So, anyway, I guess, thank you for the guardrails. But maybe guardrails isn't the right word, because I'm not sure you're doing more than helping me, at least in my case, avoid a crash, if that makes any sense. You're helping me go on a wild adventure I might not have been able to go on were it not for you and people like you in this grand adventure we're on.

Jon Ippolito [00:39:05]: Oh, that's very thoughtful. I don't think of them as guardrails. I think you have to choose the vehicle for the right kind of trip.

Bonni Stachowiak [00:39:13]: Mhmm. Right?

Jon Ippolito [00:39:14]: And this is where the prescriptive versus opportunistic task comes in.
You're not gonna use a dune buggy to drive around Maine in the middle of a blizzard. And you shouldn't be using these tools for tasks that aren't amenable to finding these sort of randomized averages. So my framework for making choices here, when to use it and when not, depends on two things. One is that there's a whole battery of rightful criticisms that just kind of come from the outside. They're sort of, I don't know, circumstantial, you might say. They're not specifically about how the engine works. They're about the context in which these things were created.

Jon Ippolito [00:40:00]: So, things like bias in the training data. We know there's bias against people of other cultures and genders and races. Things like exploitation: exploitation of workers in the third world who are going through horrific imagery of, like, you know, sexual violence so that these things get stripped out of what we see in the first world. There are plagiarism accusations from artists whose work is being used so that someone can type in, you know, in the style of Greg Rutkowski, and, lo and behold, there's a Greg Rutkowski painting that he never painted. There's monopolization: we used to worry that there were five main media companies; now there's basically two or three, thanks to the huge compute power of AI. There's carbon impact. So there are all of these sort of external factors that might make you say, look,

Jon Ippolito [00:40:48]: I'm not gonna use this at all. But when we look internally, to say, okay, if we disregard those, what tasks does this engine work well for? That's where I think we can think about a framework that distinguishes between what I call opportunistic versus prescriptive tasks. So going back to the health care example: finding a potential tumor. If you find one and it turns out not to be a tumor, well, that's distressful for the patient. Maybe they even had to go through some surgery to get it biopsied. But it's better than not finding the tumor that was there, you know, the third one down. So to me, that's an opportunistic task. We are trying to look for chances of things that might be true, even if we get some of them wrong.

Jon Ippolito [00:41:29]: We'd rather have a kind of big net and capture what we can. And the opposite is prescriptive tasks. So, again, in the health care analogy, that's like, you know, dosing your grandpa's heart medication. Right? You don't wanna just take a whirl and say, hey, let's just switch around some pills and see what happens to him tomorrow. You know? He might be dead. That's something where you gotta get the dosage right. It's literally a prescriptive task.

Jon Ippolito [00:41:52]: That's why we call them prescriptions. So on the one hand, you have tasks where you just kinda wanna generate a lot of possibilities, and if some of them work, that's great. And on the other hand, you have tasks where you gotta get it right. You gotta have every aspect of it right. I think the second category is a terrible idea for most generative AI, and the first category is actually a really good use of AI. And we can look at that in all kinds of different fields, from being a creative designer or artist, to their impact in politics, which is something we didn't talk about today, to their role in education.
So all of these are fields where we have to decide when we're gonna use this and when we're not. And as much as it's great to be able to choose individual cases case by case, it's also helpful to think about what the probabilistic engine that generates these possibilities is, which tasks call for opportunistic solutions, and which tasks call for a more well-defined, certain, reliable, prescriptive solution.

Bonni Stachowiak [00:42:52]: This is the time in the show where we each get to share our recommendations. And since I totally don't wanna stop this conversation, I'm gonna invite listeners to keep it going, and that is that I invite you to follow Jon on your most preferred social network. Jon, it looked like you have quite a few going. I most connect with Jon and his work on LinkedIn, but I saw that you've got plenty of others listed, so I'll make sure those are in the show notes. I don't know if you have advice for people if they only had to pick one or two spots where you're able to engage the most. Is there any suggestion you have for people?

Jon Ippolito [00:43:26]: Thanks to Elon, it's become a total mess, and I find myself distracted between six different platforms, including Facebook, which I thought I would never go back to. So I tend to post first to Mastodon. It is the most decentralized and the least corporate, and I find there's no algorithm in charge. It's really about people. But I'll answer you wherever you post.

Bonni Stachowiak [00:43:46]: Okay. Great. And then the second thing I wanted to recommend is, ironically, a website that I found out about while I was researching for the conversation with Jon today, and that is a website called Data by Design. And I found out as I started exploring the website that it's actually very open: they're receiving feedback on what will be an MIT Press book in the fall of 2025. So I'll read a little bit from their about section. Data visualization is not a recent innovation. Even in the 18th century, activists and economists, as well as educators and politicians, were fully aware of the power of visualization to produce new knowledge. But who, more precisely, was wielding this power, on whose behalf, and for whose benefit? The answers to these questions are what this project explores.

Bonni Stachowiak [00:44:42]: By retelling the history of data visualization alongside the histories of colonialism and slavery, we show how questions of ethics and justice have always been present and continue to offer lessons to viewers and designers of data visualizations today. And, speaking of social media, I was chatting a little bit today with many-time prior guest Remi Kalir, and he is going to be coming out with a book through MIT Press. I forget when his release date is, but he was talking about how they're just a really good publisher for open access to their publications, with this annotating and feedback process happening very much out in the open too. So all sorts of things for us to explore. And if you go visit Data by Design, it kinda doesn't matter what your interest is. It's gonna drag you in, whether you're interested in data, in web development, in art, so many things. So I really think it's worth a visit.
Bonni Stachowiak [00:45:45]: And now I'm gonna pass it over to Jon for whatever he would like to recommend to us as we close out the episode.

Jon Ippolito [00:45:50]: Sure. Well, I'll put in a word for MIT Press as well, since I'm negotiating with them to open access a book that I published with them about 10 years ago. And the idea of what publication means in the 21st century, and especially after the rise of generative AI, is a really interesting question. And, fortunately, they're open to that dialogue. So for my recommendation, can I have two?

Bonni Stachowiak: Oh, yeah. Absolutely.

Jon Ippolito: So the first one is Bryan Alexander's book, Universities on Fire, which is about how higher education will be buffeted by climate change and how it should respond. Don't read it. Just don't even send it to your house.

Jon Ippolito [00:46:27]: Send it to your administrator. Send it to your college dean. Send it to your president. That's what I did, in my case, because, of course, I read it as well. But I think it's honestly more important that you get the book into as many people's hands as you can, especially people who are in a position to realize that universities need a shake-up if they're gonna confront the timely issues of our day. The second piece of advice, or recommendation, is more of a practice: you may be teaching writing, you may be teaching programming, computer science, history, whatever. Get your students to try generating images with AI, even if it's a class that has nothing to do with media or art or design. The reason is that many of the internal workings of these engines that are kinda hidden by the chatbot interface are made plain by the interfaces of something like Stable Diffusion.

Jon Ippolito [00:47:17]: And I think that we forget, when we only see a single sort of oracular, godlike answer to our prompts, that these are not machines that have knowledge of the world, or even models of the world. I would argue that they don't. You can see this most easily when you go to something like Stable Diffusion, you know, Leonardo.ai, DALL-E, and so forth, especially the ones that have the more complex interfaces, and they can be a little off-putting at first. But there are so many bells and whistles and levers you can pull to get different results. You can choose the model, and you see very different kinds of images depending on which model is under the hood. Right? You can almost always generate multiple images at once. You don't just get one answer; you get four. And you're like, whoa.

Jon Ippolito [00:48:02]: Okay, it gave me these four different things. Why? Because it's probabilistic, and it's always generating random answers. Unlike when you ask for text, it makes it very obvious when things are wrong. Right? With text, you might generate something about, I don't know, the death of Robespierre, and there's a detail that's wrong there, but you don't know, because you're not a French historian. But if you see someone with six fingers, then you know it's probably not an accurate representation that you got back from the model. And I think, also, one big thing that's important is this idea that, yes, there is bias in the training data, but there's also bias in the very act of averaging. So there's this concept called the midpoint hottie, which means if you prompt something like Stable Diffusion, I wanna see a construction worker,

Jon Ippolito [00:48:45]: I wanna see a typical female doctor, they're gonna look hot. They're gonna be beautiful.
You're gonna fall in love with them. You go, wow, that woman is, you know, incredible. Is she average? Well, in a strange way, yes. Not typical, not in the way we would usually say, but average in the sense that studies have suggested that if you take all different kinds of body images and portraits and sort of composite them together, you get something that actually looks attractive.

Jon Ippolito [00:49:09]: And we could talk about why, due to regularity of facial features and expectations. But that problem, technically called the midpoint hottie problem, is something that's very hard for engineers to deal with. Because every time you ask for an average person, you're gonna get someone who looks beautiful. That is not just because there are beautiful people online. Of course, there are probably more portraits online of people who look attractive than of people who are deliberately ugly. But the real problem is the averaging dynamic of these systems. And if you really wanna understand the limits of that process, of that way of producing results, image generators are a great way to see it visually.

Bonni Stachowiak [00:49:49]: Oh, Jon. I could keep going. I'm so grateful for you and for your work, for your agreeing in less than 24 hours to talk with me, because, otherwise, I guess you said we woulda talked in six months. So what a pleasure to get this opportunity to speak with you and to hopefully spread the word about the work that you do, and I'm just so excited about continuing to learn from you as well. What an invigorating conversation, one I wish I could keep going for the next few hours, but you probably have some other things to do in your day. I know you're about to start traveling. Thank you so much for this time and for your generous contributions to so many of us who are trying to wrestle with this stuff and learn.

Jon Ippolito [00:50:25]: Thank you, Bonni.

Bonni Stachowiak [00:50:28]: Thanks once again to Jon Ippolito for being a guest on today's episode. Today's episode of Teaching in Higher Ed was produced by me, Bonni Stachowiak. It was edited by the ever-talented Andrew Kroeger. Podcast production support was provided by the amazing Sierra Priest. Thanks to each one of you for listening. If you have yet to sign up for the weekly Teaching in Higher Ed updates, I encourage you to head over to teachinginhighered.com/subscribe. You'll receive the most recent episode's show notes, as well as other resources that don't show up in those regular show notes. So head on over to teachinginhighered.com/subscribe.

Bonni Stachowiak [00:51:15]: Thanks for listening, and I'll see you next time on Teaching in Higher Ed.