
Teaching in Higher Ed

EPISODE 576

The AI Con

with Alex Hanna & Emily M. Bender

June 26, 2025


Emily M. Bender & Alex Hanna share about their book, The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want on episode 576 of the Teaching in Higher Ed podcast.

Quotes from the episode

What's going on with the phrase artificial intelligence is not that it means something else than what we're using it to mean, it's that it doesn't have a proper referent in the world.
-Emily M. Bender

There's a much broader range of people who can have opinions on AI.
-Alex Hanna

The boosters say AI is a thing. It's inevitable, it's imminent, it's going to be super powerful, and it's going to solve all of our problems. And the doomers say AI is a thing, it's inevitable, it's imminent, it's going to be super powerful, and it's going to kill us all. And you can see that there's actually not a lot of daylight between those two positions, despite the discourse of saying these are two opposite ends of a spectrum.
-Emily M. Bender

Teachers' working conditions are students' learning conditions.
-Alex Hanna

Resources

  • The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, by Emily M. Bender and Alex Hanna
  • Distributed AI Research Institute (DAIR)
  • The Princess Bride
  • Emily Tucker, Executive Director, Center on Privacy & Technology at Georgetown Law
  • On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜, by Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell
  • Emily M. Bender’s website
  • How the right to education is undermined by AI, by Helen Beetham
  • How We are Not Using AI in the Classroom, by Sonja Drimmer & Christopher J. Nygren 
  • Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI, by Karen Hao


ON THIS EPISODE


Alex Hanna

Director of Research at the Distributed AI Research Institute (DAIR); author of THE AI CON

Dr. Alex Hanna is Director of Research at the Distributed AI Research Institute (DAIR) and a Lecturer in the School of Information at the University of California Berkeley. She is an outspoken critic of the tech industry, a proponent of community-based uses of technology, and a highly sought-after speaker and expert who has been featured across the media, including articles in the Washington Post, Financial Times, The Atlantic, and Time.


Emily M. Bender

Professor of Linguistics at the University of Washington; author of THE AI CON

Dr. Emily M. Bender is a Professor of Linguistics at the University of Washington where she is also the Faculty Director of the Computational Linguistics Master of Science program and affiliate faculty in the School of Computer Science and Engineering and the Information School. In 2023, she was included in the inaugural Time 100 list of the most influential people in AI. She is frequently consulted by policymakers, from municipal officials to the federal government to the United Nations, for insight into how to understand so-called AI technologies.

Bonni Stachowiak

Bonni Stachowiak is dean of teaching and learning and professor of business and management at Vanguard University. She hosts Teaching in Higher Ed, a weekly podcast on the art and science of teaching with over five million downloads. Bonni holds a doctorate in Organizational Leadership and speaks widely on teaching, curiosity, digital pedagogy, and leadership. She often joins her husband, Dave, on his Coaching for Leaders podcast.

RECOMMENDATIONS

How the right to education is undermined by AI

RECOMMENDED BY: Bonni Stachowiak

How We are Not Using AI in the Classroom

RECOMMENDED BY: Alex Hanna

Empire of AI

RECOMMENDED BY: Emily M. Bender


Related Episodes

  • EPISODE 489Teaching with Artificial Intelligence

    with Lindsay Doukopoulos

  • EPISODE 518Teaching with AI

    with Jose Bowen

  • EPISODE 528Assessment Reform for the Age of Artificial Intelligence

    with Jason Lodge

  • EPISODE 472Perspectives on Artificial Intelligence: A Student-Professor Dialog

    with Stead Fast and Lance Eaton

  

EPISODE 576

The AI Con


Bonni Stachowiak [00:00:00]:

Today on episode number 576, The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, with Emily M. Bender and Alex Hanna. Production credit: produced by Innovate Learning, maximizing human potential. Welcome to this episode of Teaching in Higher Ed. I’m Bonni Stachowiak, and this is the space where we explore the art and science of being more effective at facilitating learning. We also share ways to improve our productivity approaches so we can have more peace in our lives and be even more present for our students. It’s an absolute honor today to get to speak to the authors of The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want on today’s episode. Dr. Emily M. Bender is Professor of Linguistics at the University of Washington, where she’s also the Faculty Director of the Computational Linguistics Master of Science program and affiliate faculty in the School of Computer Science and Engineering and the Information School.

Bonni Stachowiak [00:01:23]:

In 2023, she was listed in the inaugural Time 100 list of the most influential people in AI. She’s frequently consulted by policymakers, from municipal officials to the federal government to the United Nations, for insight into how to understand so-called AI technologies. Dr. Alex Hanna is Director of Research at the Distributed AI Research Institute (DAIR) and a lecturer in the School of Information at the University of California, Berkeley. She is an outspoken critic of the tech industry, a proponent of community-based uses of technology, and a highly sought-after speaker and expert who’s been featured across the media, including articles in the Washington Post, Financial Times, The Atlantic, and Time. Emily and Alex, welcome to Teaching in Higher Ed.

Emily M. Bender [00:02:20]:

Thank you. I’m excited to be part of this conversation.

Alex Hanna [00:02:23]:

Thanks for having us, Bonni.

Bonni Stachowiak [00:02:25]:

I think many of the listeners are going to know this quote well from The Princess Bride, from the character Inigo Montoya. He says: you keep using that word. I do not think you know what it means. And throughout reading The AI Con and listening to you, I feel like that quote must float around in your heads a lot. When we say the word AI, we often don’t know what it means. Talk a bit, Emily; why don’t we start with you on when AI is not what we say it is.

Emily M. Bender [00:02:57]:

Yeah. Well, first of all, as a total Princess Bride nerd, I have to say the correct quote is: you keep on using that word. I do not think it means what you think it means.

Bonni Stachowiak [00:03:05]:

Oh, and I misquoted it. It was in my head correctly. You know how sometimes something can be in your head correctly, but it doesn’t come out of your mouth correctly? Thank you. We need to be truest here.

Bonni Stachowiak [00:03:14]:

Yes, yes.

Emily M. Bender [00:03:14]:

Yeah. And I actually think that it’s a little bit different to that, because the word in The Princess Bride that is being misused is inconceivable, and the point is that in fact what’s happening is very much conceivable. And what’s going on with the phrase artificial intelligence is not that it means something else than what we’re using it to mean; it’s that it doesn’t have a proper referent in the world. It’s got a referent in science fiction worlds. But in terms of what’s going on in the actual technology, it is used as an advertising term and applied to what’s actually not a coherent set of technologies. It’s everything from image processing, synthetic text extruding machines, and automated decision systems to, sometimes, just spreadsheets that someone wanted to get more money for.

Bonni Stachowiak [00:03:55]:

Yeah. Alex, what’s coming to mind for you in terms of the confusion between the term and it not always quite living up to what we think it means?

Alex Hanna [00:04:05]:

Well, first off, I completely thought you were going to say: hello, my name is Inigo Montoya. You killed my father. Prepare to die.

Bonni Stachowiak [00:04:12]:

Prepare to die.

Alex Hanna [00:04:14]:

But that would not really be very germane to our current conversation.

Bonni Stachowiak [00:04:17]:

It could be, though. It could be, actually, because we’re going to get to doomers. Just you wait.

Alex Hanna [00:04:22]:

Exactly. Just a completely tortured metaphor. Yeah. So, just to add to what Emily said: there’s not a coherent set of technologies that are referred to as AI, and it’s helpful to be very specific about what we mean when we’re talking about these things. In this vein, we follow folks like Emily Tucker at the Center on Privacy & Technology, who says that at their institution they refuse to use the terms artificial intelligence or even machine learning, because those terms aren’t really referring to the particular outputs of the systems and the way that those outputs are consequential. So often when we’re talking about generative AI, we say synthetic media, synthetic text generation machines, or synthetic text generators. And for the diffusion models that are text-to-image generators, we will often say synthetic image generation machines.

Alex Hanna [00:05:22]:

And so those are pretty specific, and often what people are talking about when they’re talking about quote-unquote AI. But there are many different things, including automated decision-making systems, which are much more consequential. These things can be as simple as a logistic regression, something that people in a first-year stats class will learn about. But then a bunch of data gets fed into them, and they make some kind of a determination about somebody’s life chances. And so that is very consequential, but it still gets wrapped up into this notion of AI.
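To make that point concrete, here is a minimal sketch of how an automated decision system can be nothing more than a logistic regression over a handful of inputs. This is illustrative only, not from the episode; the feature names, weights, and 0.5 cutoff are all hypothetical.

```python
import math

# Hypothetical weights for a toy eligibility model; real systems learn these
# from historical data, which is exactly where bias can creep in.
WEIGHTS = {"income": -0.00004, "prior_denials": 0.9, "bias": -0.2}

def approve_benefits(income: float, prior_denials: int) -> bool:
    """Turn a simple linear score into a consequential yes/no decision."""
    score = (WEIGHTS["income"] * income
             + WEIGHTS["prior_denials"] * prior_denials
             + WEIGHTS["bias"])
    predicted_risk = 1 / (1 + math.exp(-score))  # the logistic (sigmoid) curve
    return predicted_risk < 0.5  # approve only when predicted "risk" is low

print(approve_benefits(income=32000, prior_denials=0))  # True
print(approve_benefits(income=32000, prior_denials=3))  # False
```

A few lines of first-year statistics, fed enough data about people, is all it takes to produce a system that decides someone’s access to benefits, housing, or employment.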

Bonni Stachowiak [00:05:57]:

Our son is 13, and, I love it, he takes up all sorts of hobbies and then moves on to other things. So I’m sure by the time this episode airs, he’ll have moved on to something else. But he has been enjoying transitioning his 3D printer knowledge from using models that are developed by other people and then printing them here at home to designing his own. And he created the coolest thing for his Latin class that just blew me away. I kept thinking I was missing part of the story, because that this had originated in his mind was really fun. But we were outside going for a walk, and he said: Mom, I can use real math now in my real life.

Bonni Stachowiak [00:06:41]:

And I truly wish I would have had that. So many times we’re in classes, whether at that age of 13 or when I’m teaching college students, expecting them to learn things that it is difficult, if not impossible, to connect with one’s unique context. That’s why I want to spend a little bit more time talking about what both of you have encountered in terms of why, in this case, it actually is important that we take the clock apart and actually understand how the clock works, as opposed to just knowing what time it is. Talk a bit about the hype and the ways that our better understanding of what artificial intelligence actually is can help us negate or otherwise wrestle with some of the mechanics of hype.

Emily M. Bender [00:07:32]:

Yeah. So when we’re talking about the synthetic text extruding machines, or chatbots, or, as I’ve sometimes come to call them, conversation simulators, I’m picking up on this thread of describing what the functionality of the system is, rather than the anthropomorphized idea of what it might be. So what’s the purpose of a chatbot? Well, the purpose is to simulate a conversation. Why do you need to simulate a conversation? Frequently, not for any good reason; there are some cases where that’s a useful thing. And what’s particularly pernicious with the synthetic text extruding machines is that it ties into how we work with language.

Emily M. Bender [00:08:08]:

This is why I think linguistics is a really important field to be one of the fields informing this conversation. Linguistics is the study of how language works and how we work with language, and there’s one lesson from each of those that I think is particularly germane. On the how-language-works side of things, there’s the point that languages are systems of signs: when you’re talking about linguistic artifacts, there’s always both form and meaning, and the form is the directly observable part. Right? Marks on the page, the articulations that someone makes with their hands if they’re using a sign language, sounds in the air, and so on. And the meaning is on two levels. There’s the conventional meaning that someone who knows that language would be able to put together from those parts, and then there’s what it was being used to convey in that moment. And the thing about large language models, which is another name for something like ChatGPT, is that the data that is used to build them is only the form of language.

Emily M. Bender [00:09:02]:

There’s no access to the meaning part of it. But they are very, very good models of the form of language. And so what comes out looks like something somebody might have said. And what we do with something somebody might have said in the language that we speak is that we interpret it. And we interpret it not by just picking it apart and figuring out what meaning is in those words, as you might think we do, but in fact through this elaborate but rapid and instinctual and reflexive process of putting together everything we know or imagine about that person and what common ground we have with that person, and then asking ourselves: what must they have been trying to convey, to me or to some other person that they were talking to, by picking those words in particular? And when we do that, we are imagining a mind behind the text, and that’s fine. That’s how we do language processing. But when we’re faced with something like ChatGPT, we have to remember that all of that interpretive work and that entire constructed mind is actually on our side and not there in the machine.

Bonni Stachowiak [00:10:02]:

So powerful. So powerful. Alex, what’s coming to mind for you around this and especially the hype?

Alex Hanna [00:10:07]:

Well, I’m just thinking about your metaphor about the clock, which I think is really interesting. The thing about a clock is that it’s pretty deterministic.

Bonni Stachowiak [00:10:16]:

Yeah.

Alex Hanna [00:10:16]:

And, I mean, I don’t really know how the quartz and the second hand work, but it’s good enough. If it’s good enough for everything except atomic clock people, then it’s good enough for me. Good enough for Casio, good enough for me. And so there’s a sort of notion here in terms of deterministic behavior, and so maybe the metaphor doesn’t quite work. I mean, if a clock maybe told you that it was 10 o’clock when it was 2 o’clock, or sometimes said blue rather than two, then that maybe is something more of an apt metaphor. But, you know, the hype mongers are so getting behind something.

Alex Hanna [00:10:57]:

Getting to your original example, I think, is very helpful, because it’s thinking about: okay, how does this work? And in this sense, there’s not a lot of talk about the how-does-it-work. In the book, we talk a lot about the how-does-it-work, insofar as the basic way that language models work, and the way that those are effectively a parlor trick to make it sound like they’re generating a coherent text that looks like it is coming from a thinking mind. But knowing how it works is not really the whole of it. Sorry, I’m trying to extend the metaphor, and I’m really stretching it out, so I don’t think it’s going to work. But the way we talk about it in the book is uncovering the metaphor, or uncovering the way these work, because that is a way of pushing back against the hype. Right? So even calling it a clock and saying this clock is going to do the best clocking that ever clocked, well, in this case, first off, it’s not going to do such a thing, because it’s not deterministic.

Alex Hanna [00:12:05]:

And second off, it’s less a matter of seeing what’s behind the curtain, because in this case it’s more apt to compare it to what a magician does, or somebody that has a sleight of hand, than to something mechanical that you could dissect piece by piece just to.

Emily M. Bender [00:12:23]:

Quickly land the plane of that metaphor.

Alex Hanna [00:12:24]:

Yeah, sure.

Bonni Stachowiak [00:12:25]:

Oh, please.

Emily M. Bender [00:12:26]:

I remember in our podcast we do this thing, we have Fresh AI Hell. And several episodes back, I think one of the things we found was a clock that was displaying text instead of numbers, driven by one of these large language models. So it was frequently wrong, because it was non-deterministic. Yeah, it just sort of really put a nice fine point on it: if you care about accuracy, why would you do this?

Bonni Stachowiak [00:12:48]:

Yeah, it’s hard for me to pick a favorite part of your episodes, because I enjoy them from the very start to the very end. But I do always look forward to the Fresh AI Hell segment; it’s something that I look forward to with fresh excitement every time that I listen. And I’m loving that we’re exploring metaphors. I’d like to explore them just a little bit further. But first I’d like to mention I created a card game that helps people talk about artificial intelligence through metaphors. I have found that the playfulness of a game helps to bring us together with a little bit more childlike curiosity, since it can be a polarizing topic. You could just imagine rooms full of hundreds of faculty who have all sorts of feelings about this. But, Emily, the first time that I ever heard your name, it was associated with a metaphor.

Bonni Stachowiak [00:13:39]:

And since I’ve now talked to probably at least 500 faculty, I know that this is a word that many times they won’t know. So would you tell us first what the word stochastic means, and then what a stochastic parrot is? Because then we can be so much more inclusive to anyone listening and make sure that that term gets defined, and maybe even share the origins of how you first thought of this.

Emily M. Bender [00:14:03]:

So I’ll start with, as you asked, a definition of stochastic. Stochastic means randomly, according to a probability distribution. And in fact, anything that’s random, if you don’t have a specific probability distribution, then it’s probably just equal probability for everything, which is a probability distribution. So that’s stochastic. And in stochastic parrots, parrot is there as a noun, but we are using the sense that comes from the English verb to parrot, which means to repeat back without understanding. And I have to clarify that, because people over the years have gotten upset with me for denigrating actual parrots, which are lovely creatures and I’m sure have internal life. And when parrots can parrot human language in interaction with people, it’s an open question what exactly is going on. What kind of communicative intent is it? Is it purely, if I say Polly wanna cracker, I get a cracker, or is there something more? So the point of the phrase stochastic parrots is to try to make vivid what’s happening when you use a large language model to synthesize text by repeatedly answering the question: what’s a likely next word? So it is parroting back things from the training data, but stochastically, meaning with some randomization in there. And sometimes people will say, well, it can’t be a stochastic parrot because it’s saying new things.

Emily M. Bender [00:15:15]:

It’s like, well, no, actually, that’s what stochastic means. So it’s sort of a random probabilistic remix of what’s in the training data. And that phrase came out of a paper that I co-authored with Dr. Timnit Gebru, Dr. Shmargaret Shmitchell, Dr. Angelina McMillan-Major, who became a doctor after we finished, and some other co-authors, too. There’s a whole long, interesting story, if people are curious.

Emily M. Bender [00:15:38]:

I have a subpage on my website on stochastic parrots that links to the journalism. And we had so much fun with the phrase that we decided to put an emoji in the title of the paper. So the paper ends with a parrot emoji. And I had a lot of fun towards the end of the publication process telling the Association for Computing Machinery that their copyright form was not Unicode compliant when it choked on my emoji. So that’s a little bit about where the phrase comes from. And I think that to some extent it has served the purpose of making this more vivid. But you’re right that the word stochastic, though a lot of fun to say, is a little bit obscure, and maybe made the metaphor somewhat less effective for broad audiences. I mean, the paper’s an academic paper, but for broad audiences.
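As a minimal sketch of that definition, here is a toy bigram model, vastly simpler than a real large language model, but it illustrates the same two ingredients: it learns only the form of a made-up corpus, then stochastically samples a likely next word. The corpus and all names here are hypothetical, not from the paper or the episode.

```python
import random
from collections import Counter, defaultdict

# A made-up "training corpus": the model only ever sees the form of the text.
corpus = ("the parrot repeats the words . the parrot repeats the phrase . "
          "the machine repeats the words .").split()

# Count which word follows which: pure form statistics, no access to meaning.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def extrude(start: str, length: int = 8) -> str:
    """Repeatedly sample 'a likely next word' per the learned distribution."""
    words = [start]
    for _ in range(length):
        options = following[words[-1]]
        if not options:
            break
        choice = random.choices(list(options), weights=list(options.values()))[0]
        words.append(choice)
    return " ".join(words)

# Stochastic output: a random probabilistic remix of the training data,
# which can look "new" without any understanding behind it.
print(extrude("the"))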

Bonni Stachowiak [00:16:24]:

I had not heard the backstory around you potentially offending people with a great affinity for parrots, so that’s really a fun story to hear you share. I’m thinking about reading your book and how you introduced me to a lot of new things, but you also knit together things that I had kept separate, where I realized how much overlap there was. And this is specifically doomers and boosters. And Alex, I’m gonna ask you to help define those terms for the listeners. But before I do, I just have to admit I’ve never, until this moment, been able to say doomers and boosters without saying doomers and boomers.

Bonni Stachowiak [00:17:01]:

Cause they just seem like they should go together. But I think I got it; I should have slowed myself down. Talk about this connection between doomers and boosters, and that they have a lot more in common than I ever realized before reading this book.

Alex Hanna [00:17:16]:

Yeah, sure. So the spectrum that we’re often given in the AI debate is that there are two groups: the doomers and the boosters. And the boosters are like, we need to run headlong into developing AI and nothing should stand in our way. And then the doomers say that we’re going to get AI, but it might kill us all. Right.

Alex Hanna [00:17:44]:

And so they’re posed to be these complete opposites. And Emily has this really nice turn of phrase about how they’re so the same. I’ll let her say it, because I don’t want to steal her thunder.

Emily M. Bender [00:17:56]:

So the boosters say AI is a thing. It’s inevitable, it’s imminent, it’s going to be super powerful, and it’s going to solve all of our problems. And the Doomers say AI is a thing, it’s inevitable, it’s imminent, it’s going to be super powerful, and it’s going to kill us all. And you can see that there’s actually not a lot of daylight between those two positions, despite the discourse of saying these are two opposite ends of a spectrum.

Alex Hanna [00:18:18]:

Yeah. So insofar as that’s how it’s posed, the spectrum of opinions that exist here is actually much larger. Right. And it can be people who are very skeptical of these tools, people who consider them not to be very helpful in our everyday lives, people who would like to slow down or highly regulate their development, and people who are forced to use them as kind of cheap replacements in their everyday work or for social services. So there’s a much broader range of people who can have opinions on AI.

Alex Hanna [00:18:58]:

The doomers specifically take some untangling, because there are different flavors of doomerism. There are people who think that the thing you need to do to have beneficial artificial intelligence is to solve what’s called the alignment problem.

Alex Hanna [00:19:21]:

And so the alignment problem, and this is kind of popularized by a writer named Brian Christian in his book of the same name: they’re effectively saying, well, if we do not align these computers to our values, or human values, then we’re going to be in a heap of trouble. There’s going to be some kind of incident, in intentional or unintentional ways, in which machines will eliminate us. Except there are so many things that are wrong with that. First off, the notion that there is a unified set of human values, that there is a stable category of the human that is even helpful to align to, when we know that that’s a category that has a lot of variation and can be critiqued just in terms of a notion of the human that is uniform. And then there’s the notion that that is the most major problem that has to do with AI harm, when there’s so much harm that gets attributed to AI in the here and now.

Alex Hanna [00:20:30]:

Not least climate change caused by the proliferation of data centers, automated decision-making systems which are denying people things like welfare benefits or housing or employment, quote-unquote AI used in war, biometric systems used at borders. There’s a whole host of things that are harming people in the here and now. So, yes: Boomer and Doomer. Sorry, Booster and Doomer. Now I’m saying Boomer.

Alex Hanna [00:21:00]:

I think the generational divide actually cuts across many of these categories. But Booster and Doomer are not the only options here.

Bonni Stachowiak [00:21:08]:

Yeah, this is one of the biggest takeaways that I had, where I felt you and your work shaping me. I will admit to having read The Alignment Problem, and much of it resonated with me, in the sense that these seem like good things to avoid. I mean, the first time I ever heard the phrase effective altruism, there were some things that piqued my curiosity a little bit. And so I so appreciate how you have equipped me and so many others, through your work, to think more critically: let’s not pay attention to what might happen just because it seems scary. You know, not dying seems like a good thing; not dying sooner than I would have otherwise seems like a good plan. But really to hone in on the ways in which, as you said, Alex, things are being affected today, and that some of that is a mask, an illusion.

Bonni Stachowiak [00:21:59]:

The hype that you describe, whether the hype is toward the good or these supposedly shared values. Of course that should have been my tip-off right there, but I didn’t get there as soon as I might have otherwise liked. But I’m glad to be thinking more critically today. Would you each reflect for us on advice that you would have for resisting the AI con, and, to any extent that you might wish, be specific about someone in a teaching context, where there’s a whole mix of things around our own use or non-use of AI, thinking critically, but also wanting to shape others to think more critically about it as well.

Emily M. Bender [00:22:40]:

Yeah. So, I mean, we both teach, I a little bit more frequently than Alex. And for me, I get really sad when I see people proposing to use synthetic text or synthetic images in the context of the classroom, from any position that you’re participating in the classroom from, because it is, from where I sit, anathema to what we’re doing when we are working with students and learning together and helping them learn. And at the same time, I don’t want to be put in the position of having to police students. And so what I do is I let go of the idea that I could ever possibly make sure that it never, ever happens, which honestly is true also if you’re trying to police it; you can’t get to GPT zero, let’s say. But what I want to do instead is work with students to understand how the tech works. And I think every faculty member can do that from their own disciplinary expertise. For me, I’m in computational linguistics.

Emily M. Bender [00:23:36]:

And so we talk about what a language model is and how it’s not actually a good match for something like information access. And then there are these more general points about how, if you don’t do the writing yourself, then you’re not getting the learning that goes with doing the writing. And one of the things I’ve said in the past is we don’t ask students to write essays in order to keep the world’s supply of student essays topped up. Right? The point is the experience of the thinking that goes into the writing. And on the flip side, when we’re talking about reading something a student wrote and providing feedback on it, that is, you know, a hard part of my job. It’s not the part that I look forward to the most, necessarily, but it is a kind of interaction with another person. And if instead what I’m reacting to is synthetic text, then there’s no value for anyone.

Emily M. Bender [00:24:21]:

Right? I’m not reacting to something the student wrote. The student’s not getting feedback on their thoughts. So that is sort of the direction that I take it in the classroom. I do not think there are good uses for synthetic text in general or in the classroom, but I think that the approach to it is to really talk about, you know, why not? What are we doing here? What’s our purpose? And why would turning to synthetic text basically get in the way of achieving that purpose?

Bonni Stachowiak [00:24:44]:

That feeling of sadness is so resonant and rich. I have felt sadness myself, and that sense of: does it not seem like I would care about what you would have to say? But also, understandably, the number of people. I would say that’s one of the biggest surprises I’ve had since November of ’22, when ChatGPT was first released: the number of faculty who would accuse students of cheating and place that label on it so quickly, but who would be so excited about AI-generated feedback.

Bonni Stachowiak [00:25:20]:

And I put feedback definitely in air quotes in this case, but that was not treated as the same form of cheating, when I feel like you are cheating students out of the possibility of education. And anyway, I’m sure we all could keep going on that. Alex, I would love to hear your reflections on how we might better equip ourselves to be more critical.

Alex Hanna [00:25:43]:

Yeah, absolutely. I mean, I think there really is a lot of movement against AI in the classroom. And I think it’s a little depressing to hear about the ways in which, as you rightly said, Bonni, on one hand the university is saying, well, students don’t need to cheat, and on the other hand they’re saying, okay, we’re going to have a multimillion-dollar contract with OpenAI. And so the people who’ve been doing, I think, some of the best thinking on this have been teachers’ unions and worker organizations, who are thinking about what it means to have these types of things in the classroom and how that is undermining both teaching conditions and students’ learning conditions. It’s a well-known phrase in the academic labor movement that teachers’ working conditions are students’ learning conditions. One of the things that we think about is that the implication of having AI or synthetic text generation in the classroom is that it is part of this larger trajectory of the casualization of academia, and the move to try to find ways to get rid of teachers, either by shunting much of this work to people who have much shorter contracts, who are having to write modules which are much shorter.

Alex Hanna [00:27:16]:

So it turns education from this holistic experience into this thing where you pay a little bit and you get this kind of vending machine type of education. And we know as educators that’s not how education works. We can think of educators who changed our lives really dramatically by taking their jobs very seriously. And I think there are a lot of moves, especially within teaching unions, toward thinking about what this means. How can we push back? What are the institutional ways we can push back? And also, what are the ways that we can really think about orienting our pedagogy around being very engaging and focused on our learning objectives? Some people have developed very creative ways of thinking about that. Some people have thought about, well, what are ways that we can talk to students and make allies of students, thinking about the ways in which large language models are environmentally ruinous and steal from content creators and ignore copyright and do all these things that we know are horrendous. And I think there are a lot of great solidarities that can be forged there. So really talking to students, understanding students, understanding their conditions and their learning conditions and where they’re coming from, and also organizing within our workplaces and really thinking about how to make solidarity with our students, can be a really powerful thing and strategy as well.

Bonni Stachowiak [00:28:40]:

You’ve brought up climate change in the conversation and, of course, talk about it in the book as well. Before we close our conversation, I’d love to have you reflect for a bit. I mean, you can imagine, I read quite a bit myself, and I’m sure others listening do as well, to help equip us to think more critically. So I’ll tell you what I heard in the discourse, and you helped correct me through reading the book. The discourse would be: yes, it uses a lot of power, there’s a lot of power that’s needed, but it’s AI that will help us be able to use less power.

Bonni Stachowiak [00:29:18]:

So you were talking about the magic metaphor: a magician is going to come in and wave their magic wand, and AI will ultimately help us. So we should definitely keep using it, even though it uses a lot of power, because eventually it will help us use less power. I wanted to say one other thing sort of intermixed in there. I can’t remember the word for this, but maybe one of you will know it. It’s where, yes, I’m going to use it more. We had, you know, China come out with this model where so many fewer tokens were necessary to accomplish the same thing. But then people were saying in the news, but then more people are going to be using it.

Bonni Stachowiak [00:29:59]:

So we’re still going to have the same problem, even if the technology becomes such that it requires less. So I guess this is one of those, like, discuss-amongst-yourselves questions. What would you like to leave us with? We’ve done entire episodes around climate change, but specifically for educators who want to equip ourselves better to negate some of these myths around the climate.

Emily M. Bender [00:30:21]:

So I think that if somebody has used the phrase artificial intelligence and didn’t tell you specifically what they’re automating, what they’re building, then that is not a credible claim. And you can at the very least say: how is that going to actually address climate change? Because ultimately we know what needs to be done with respect to the climate crisis. Right? We need to be emitting less greenhouse gas. We need to be working on preparing for climate refugees and making sure that people have a place that they can land and be integrated into communities. And it’s not a question of technology-ing our way out of it anyway. But even if it were, there’s absolutely no evidence that a larger synthetic text extruding machine is somehow going to magically come up with that technology. That’s not how science works; it’s not how any of this works. And so there’s a lot of magical thinking, and it’s always worthwhile to think: okay, first of all, why are you making that claim? Connect the dots for me.

Emily M. Bender [00:31:20]:

But also, behind that, who’s benefiting from singing this song and getting people on board to say: oh, okay, we don’t have to worry about climate change now, because the AI is going to solve it for us. So that’s sort of one part of the answer. Alex?

Alex Hanna [00:31:37]:

Yeah, just again, having specificity on what we mean. I mean, if it’s developing new compounds, or doing some particular sort of pattern matching that might be developing new materials that may aid in some kind of development or mitigation of some climate-related issues, some carbon capture, that’s one thing. But in so much of how this gets talked about, depending on who you ask, there’s a notion that there’s going to be some magic machine, sometimes called artificial general intelligence, which is not a real thing, and that it is going to somehow magically come up with the answer. It’s going to have, you know, the one silver bullet or something that, I don’t know, somehow develops fusion reactors, and we’re going to massively scale those up in such a way that they replace every coal refinery or something of that nature, which is really absurd. I mean, it’s not how science works. It’s also not how politics works, and not how the whole energy system works. But it is what has been said by many of these individuals. Eric Schmidt has said as much.

Alex Hanna [00:32:58]:

He said we might as well just develop these tools and let them emit as much as we want, and once we come up with the big robot brain, then it’s going to solve all these different things. That’s as patently absurd as one would think, and it’s certainly not going to happen. And so these tools are making the climate crisis worse. And even if there are kinds of resolutions bringing training costs down, most of the costs are now, well, actually, I can’t say this authoritatively, but many of the costs are embodied in the development of the data centers. So there’s the embodied cost of semiconductor construction and the construction of data center projects, and then there are the inference costs.

Alex Hanna [00:33:52]:

So the marginal inference costs might be somewhat low, but if you have 200 million people making queries a day, then it goes up quite dramatically. And if data center projects are trying to expand this, then it gets even worse. So yeah, there’s a lot of wishful thinking when it comes to thinking about the climate crisis and generative AI.
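A quick back-of-envelope sketch of that scale point: every figure below other than the 200 million users is an assumption for illustration, not from the episode or the book.

```python
# All numbers here are illustrative assumptions; only the 200 million
# users per day comes from the quote above.
WH_PER_QUERY = 0.3        # assumed marginal energy per query, in watt-hours
USERS_PER_DAY = 200e6     # "200 million people making queries a day"
QUERIES_PER_USER = 5      # assumed queries per person per day

daily_kwh = WH_PER_QUERY * USERS_PER_DAY * QUERIES_PER_USER / 1000
print(f"{daily_kwh:,.0f} kWh per day")  # 300,000 kWh/day at these assumptions
```

Even a tiny per-query cost, multiplied by hundreds of millions of daily users, adds up to hundreds of megawatt-hours per day before counting the embodied costs of chips and data center construction.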

Bonni Stachowiak [00:34:15]:

I’m glad that we didn’t close the conversation, Alex, without you bringing up artificial general intelligence, because that’s really important, too, for the importance of defining and breaking down these terms. And Emily, I’m glad that you raised the question, as we’re closing our time together, that we should continually be asking: who benefits from this? Well, this is the time in the show where we each get to share our recommendations, and we didn’t share our recommendations in advance, of course. I read the book in advance, so I knew that mine somewhat aligned. Sometimes they don’t, but this one does. And boy, Alex, what you just said really aligns with this. It’s a piece by Helen Beetham, and the title of it is How the Right to Education Is Undermined by AI, a response to UNESCO’s call on AI and the future of education. I’m just going to read a short portion of it and then really invite people to go read the full thing.

Bonni Stachowiak [00:35:11]:

There are a lot of powerful things that Helen has shared here. So she says: how might education leaders respond to this UNESCO call? So-called AI is antithetical to the UN goals of free, equitable access to learning and cultural opportunity. Education leaders should take a human-rights-based approach to AI, not only as a class of technologies with known impacts on learners’ data rights, but as a crisis for education systems and their role in global peace and democracy. This crisis has been engineered by a small number of the world’s most powerful corporations in alliance with their state militaries. The global response should be led by those most immediately affected: people of minority languages and cultures, people suffering from epistemic injustice, particularly at the hands of digital and AI industries, teachers and education workers threatened with poorer conditions of work, and young people who aspire to the full development of personalities and intellectual powers. I encourage everyone to go have a read of Helen Beetham’s post. It is powerful and such a beautiful blend with so many of the things that Emily and Alex have shared today and expound upon in their podcast and their wonderful book, The AI Con. Alex, I’m going to pass it over to you for whatever you’d like to recommend.

Alex Hanna [00:36:42]:

Yeah, thank you for that. That was a really great-sounding piece. What I would recommend, specific to this podcast, is a really good article by Sonja Drimmer and Christopher Nygren. It’s called How We Are Not Using AI in the Classroom, and it is in the ICMA, which is, I think, the International Center of Medieval Art. So they’re art historians. It’s a really fantastic piece on the ways in which synthetic text is antithetical to their teaching.

Alex Hanna [00:37:17]:

I think they were also talking about, in the forum, how are they using AI in the classroom? And they’re saying: well, we’re rejecting the question. We are going to refuse the prompt. There are quite a lot of ways that we ought to resist, because it is completely antithetical to the learning objectives of teaching art history. And so it’s a really great piece, I think. I would really love to see pieces like this published by people from all different disciplines, whether art history, sociology, linguistics, our fields in particular, but also fields that are, depending on who you ask, more humanistic, more social-scientific, more STEM.

Alex Hanna [00:38:05]:

There are, I think, a lot of ways in which people could be constructing their classrooms to be anti-AI, but also anti-carceral. So not tracking down, not trying to police students with Turnitin or with all these different things, because that puts the onus on the student, having to deal with these kinds of intrusions of an institution that is pushing them towards this kind of horrendous view of what the university should be.

Bonni Stachowiak [00:38:35]:

I love that phrase you used, just about where the onus is going. And when the onus is going on the students, yeah, we’ve got a problem there for sure. Thank you so much. Emily, what do you have to recommend for us today?

Emily M. Bender [00:38:48]:

So I have to recommend Karen Hao’s new book, Empire of AI, which is absolutely amazing. It’s based on deep reporting that she did across five continents. She’s looking both at the tech companies, and she’s done extensive reporting on the company OpenAI and the sort of dramas that are playing out within the company, but then she also looks outside to the other workers that are involved in producing this technology and the other communities that are impacted. So there’s really interesting reporting on the environmental impact, and also on the activists who are pushing back against it in places like Chile and Uruguay. And then also looking at what’s going on for data workers and how people are impacted when they are asked to do the labeling work that is behind these systems and is the only thing that actually makes them functional. And it is so deeply reported, but also rivetingly written. I think it’s a really wonderful grounding in the real-world version of what’s going on here. It’s never focused on the technology; it’s always focused on the people, in a really wonderful way.

Bonni Stachowiak [00:39:54]:

I’m so grateful to each of you for being generous and willing to spend your time, and I’m really hoping that your book is going to spread near and far and that a lot of people will be equipped to think more critically, because we definitely cannot do this alone. I’m just so grateful for today’s conversation. Thank you so much.

Emily M. Bender [00:40:12]:

Thank you.

Alex Hanna [00:40:12]:

Thank you.

Bonni Stachowiak [00:40:16]:

Thanks once again to Emily M. Bender and Alex Hanna for joining me on Teaching in Higher Ed. Today’s episode was produced by me, Bonni Stachowiak. It was edited by the ever-talented Andrew Kroeger. Podcast production support was provided by the amazing Sierra Priest. Thanks to each of you for listening. And if you have been listening for a while and aren’t signed up for the weekly Teaching in Higher Ed updates, head over to teachinginhighered.com/subscribe. You’ll receive the most recent show notes, and also some other resources that don’t show up on the main episode pages.

Bonni Stachowiak [00:40:57]:

Thank you so much for listening and I’ll see you next time on Teaching in Higher Ed.

Teaching in Higher Ed transcripts are created using a combination of an automated transcription service and human beings. This text likely will not represent the precise, word-for-word conversation that was had. The accuracy of the transcripts will vary. The authoritative record of the Teaching in Higher Ed podcasts is contained in the audio file.

