
Teaching in Higher Ed

EPISODE 572

Myths and Metaphors in the Age of Generative AI

with Leon Furze

May 29, 2025

https://media.blubrry.com/teaching_in_higher_ed_faculty/content.blubrry.com/teaching_in_higher_ed_faculty/TIHE572.mp3


Leon Furze shares about myths and metaphors in the age of generative AI on episode 572 of the Teaching in Higher Ed podcast.

Quotes from the episode

In higher education, there is a need to temper the resistance and refusal of the technology with the understanding that students are using it anyway.
-Leon Furze

We can take a personal moral stance, but if we have a responsibility to teach students, then we have a responsibility to engage with the technology on some level. In order to do that, we need to be using it and experimenting with it because otherwise, we're relying on third-party information, conjecture, and opinions rather than direct experience.
-Leon Furze

My use of the technology has really shifted over the last few years the more I think about it as a technology and not as a vehicle for language.
-Leon Furze

Let the English teachers who love English teach English. Let the mathematics teachers who love math teach math. Let the science teachers teach science. And where appropriate, bring these technologies in.
-Leon Furze

Resources

  • Myths, Magic, and Metaphors: The Language of Generative AI (Leon Furze)
  • Arthur C. Clarke’s Third Law (Wikipedia)
  • Vincent Mosco – The Digital Sublime
  • MagicSchool AI
  • OECD’s Definition of AI Literacy
  • PISA (Programme for International Student Assessment)
  • NAPLAN (Australia’s National Assessment Program – Literacy and Numeracy)
  • Against AI literacy: have we actually found a way to reverse learning? by Miriam Reynoldson
  • ChatGPT (OpenAI)
  • Copilot (Microsoft)
  • Who Cares to Chat, by Audrey Watters (About Clippy)
  • Clippy (Microsoft Office Assistant – Wikipedia)
  • Gemini (Google AI)
  • Be My Eyes Accessibility with GPT-4o
  • Be My Eyes (Assistive Technology)
  • Teaching AI Ethics – Leon Furze
  • Black Box (Artificial Intelligence – Wikipedia)
  • Snagit (TechSmith)
  • Meta Ray-Ban Smart Glasses

ON THIS EPISODE

Leon Furze

PhD Candidate and Consultant

Leon Furze is an international consultant, author, and speaker with over fifteen years of experience in secondary and tertiary education and leadership. Leon is pursuing a PhD on the implications of Generative Artificial Intelligence for writing instruction and education. Leon has held roles at multiple levels of school and board leadership, including Director of Teaching and Learning, Head of English, and eLearning. Leon is a Non-Executive Director on the boards of Young Change Agents and Reframing Autism, and a member of Council for the Victorian Association for the Teaching of English. Leon completed his Master of Education at the University of Melbourne in 2016 with a focus on student wellbeing, leading schools through change, and linking education systems and communities. He has published dozens of books, articles, and courses, with his most recent publications, Practical AI Strategies, Practical Reading Strategies, and Practical Writing Strategies, reaching an international audience. Leon presents at state and national conferences and runs online and face-to-face professional learning for schools, individuals, and businesses. Through consultancy and advisory work, Leon helps educators from K-12 to tertiary understand the implications of Generative Artificial Intelligence in education.

Bonni Stachowiak

Bonni Stachowiak is the producer and host of the Teaching in Higher Ed podcast, which has been airing weekly since June of 2014. Bonni is the Dean of Teaching and Learning at Vanguard University of Southern California. She’s also a full Professor of Business and Management. She’s been teaching in-person, blended, and online courses throughout her entire career in higher education. Bonni and her husband, Dave, are parents to two curious kids, who regularly shape their perspectives on teaching and learning.

RECOMMENDATIONS

Snagit

RECOMMENDED BY: Bonni Stachowiak
Meta Ray-Ban Smart Glasses

RECOMMENDED BY: Leon Furze

GET CONNECTED

JOIN OVER 4,000 EDUCATORS

Subscribe to the weekly email update and receive the most recent episode's show notes, as well as some other bonus resources.


Related Episodes

  • EPISODE 291 Learning Myths and Realities

    with Michelle Miller

  • EPISODE 481 Assignment Makeovers in the AI Age

    with Derek Bruff

  • EPISODE 523 Communication Literacy in the Age of AI

    with Judith Dutill

  • EPISODE 469 Designing Courses in an Age of AI

    with Maria Andersen

  

Transcript

Bonni Stachowiak [00:00:00]:

Today on episode number 572 of the Teaching in Higher Ed podcast, myths and metaphors in the age of generative AI with Leon Furze. Production Credit: Produced by Innovate Learning, maximizing human potential.

Bonni Stachowiak [00:00:22]:

Welcome to this episode of Teaching in Higher Ed. I’m Bonni Stachowiak, and this is the space where we explore the art and science of being more effective at facilitating learning. We also share ways to improve our productivity approaches, so we can have more peace in our lives and be even more present for our students. It’s a joy to be welcoming back to the show today, Leon Furze. He’s an international consultant, author, and speaker with over fifteen years of experience in secondary and tertiary education and leadership. He is pursuing a PhD on the implications of generative artificial intelligence for writing instruction and education. Leon has held roles at multiple levels of school and board leadership, including director of teaching and learning, head of English, and eLearning. Leon is a non-executive director on the boards of Young Change Agents and Reframing Autism, and a member of council for the Victorian Association for the Teaching of English.

Bonni Stachowiak [00:01:36]:

Leon completed his master of education at the University of Melbourne in 2016 with a focus on student well-being, leading schools through change, and linking education systems and communities. He’s published dozens of books, articles, and courses, presents at state and national conferences, and runs online and face-to-face professional learning for schools, individuals, and businesses. Before I welcome Leon to the show, or I should say welcome him back to the show, I do want to mention, without going into a lot of detail on the particulars, that behind the scenes, Leon was having one of those days where getting himself behind a professional microphone in his home was problematic. So he’s actually speaking to me, as you’re about to hear, next to a lake while his car gets repaired and his child is at school. Some of the time his sound is great, and other times it sounds not so great. And on occasion, there are birds who join us. I hope that you will still very much enjoy the opportunity to learn from Leon, and maybe picture yourself sitting right there beside the lake, enjoying this conversation. Leon Furze, it’s such a pleasure to welcome you back to Teaching in Higher Ed.

Leon Furze [00:03:04]:

Thank you very much. It’s great to be invited back.

Bonni Stachowiak [00:03:07]:

Would you take us back to your younger days when you were captivated by myth?

Leon Furze [00:03:13]:

Yes. I grew up in the UK, and we took a lot of family holidays across Greece and Europe, but Greece almost every year for, I think, the first fourteen years of my life, and I think that influenced a lot of my love for mythology and literature. And then when I went to university, I studied quite a lot of Greek mythology, particularly in the first and second years of my literature degrees.

Bonni Stachowiak [00:03:38]:

And now, of course, you’re returning to, once again, some mythologies related to AI. Would you talk about some of the myths that keep showing up in our collective discourse about artificial intelligence?

Leon Furze [00:03:54]:

Yeah. And I just think it’s really interesting that, you know, we will circle through some two millennia and come back around to the same kinds of myths. There’s definitely a lot to be said for the idea that there are some core myths that are part of the human experience. And with artificial intelligence, a lot of those myths are around autonomy, around what it means to be human versus what it means to have, I guess, creative and intellectual agency, and myths around the creation of life. So really broad, essential myths to humanity, now wrapped up in these technologies.

Bonni Stachowiak [00:04:34]:

And how do you see those myths as reinforcing some existing power structures and also interplaying into our political debates?

Leon Furze [00:04:44]:

Yes. Well

Bonni Stachowiak [00:04:45]:

How much time do we have today? Right? Yeah.

Leon Furze [00:04:47]:

Yeah. How how long are we waiting for?

Bonni Stachowiak [00:04:49]:

And this isn’t being recorded. Right?

Leon Furze [00:04:54]:

The conversations around artificial intelligence replacing humans in different aspects, and that kind of, you know, threat and fear, and sometimes the sort of breathless excitement that comes particularly from technology company CEOs. All of that to me is very familiar. You know, these are not new ideas. I’ve written a little bit about myths and AI before, and there are Greek myths that talk about clockwork automata replacing workers, and the narrative with those myths is often around creating a society where there’s more intellectual freedom. And you hear those myths repeated now, where they are, you know, oh, we’ll never have to do the drudge work. We’ll never have to do the stuff that can be automated. We’ll never have to do the boring stuff. We’ll all just, I don’t know, lounge around and eat grapes or whatever it is we do with our time.

Leon Furze [00:05:52]:

And often those myths come from a very privileged position. You know, they are myths created by people who are already on the top rungs of society, and that’s been true for two, three thousand years. So, yeah, we are reinforcing those hegemonies, those power structures, through those myths.

Bonni Stachowiak [00:06:13]:

And tell us as we explore myths broadly about some of the language that you’re seeing about magic and the sublime that also are being used to describe AI.

Leon Furze [00:06:25]:

Yeah. I love this idea of the sublime. You know, another area of interest of mine is Victorian Gothic, and the idea of the sublime is very important there. So we’re talking about this sort of superhuman or beyond-human power of nature, the kind of awe-inspiring power of nature. So the sense that you would get when you’re standing on a mountain in the Alps and a sudden blizzard blows through. And there was a text in, maybe the late nineties, early two thousands perhaps, by an author Vincent Mosco called The Digital Sublime. And he talked about the narratives of myth and the sublime, which go along with technologies.

Leon Furze [00:07:09]:

He was talking at the time about cyberspace, and I’ve just put that in bunny ears because I don’t think anyone’s used that term since 2002. But, yeah, the overwhelming, sort of awe-inspiring narrative of cyberspace being this sort of ephemeral landscape that’s superhuman and that’s going to enable us to do all of these wonderful, magical things. He wrote a lot in that book about Bill Gates and Gates’ narrative about how technology would democratize everything left, right, and center. And we flash forward some twenty-five years, and Bill Gates is still talking about how technology is going to replace teachers.

Bonni Stachowiak [00:07:49]:

When we get swept up in these ideas, and I’m not going to pretend to be immune to this. You mentioned the cycles of every two thousand years. I find myself being more gullible than I might like, or getting swept away by some of the magic. Although, what’s really gonna do it for me is when any of these tools can help us with our laundry. But, you know, the drudgery that you spoke of earlier. But what happens when we allow ourselves to kind of get swept away with these techno dreams and with that magic imagery, and what gets sort of hidden or disguised when we allow ourselves to do that?

Leon Furze [00:08:31]:

I mean, we lose sight of sort of present-day reality, and that can be really harmful, I think, because there’s a lot of things wrong with present-day reality. And I find a lot of those narratives around, you know, the magical qualities of AI, they suggest that there’s a brighter future somewhere on the horizon and that this technology is the only way to get there. So it’s that kind of, you know, utopian, idealistic thinking, which is pretty common to technologies throughout history. But also, there’s an interesting thing that seems to happen, and Vincent Mosco writes about this in his book as well, but you can see it happening already with AI, where there’s this initial big push around the language and the kind of magic. We saw products like MagicSchool AI, which is a third-party education application. You’ve seen Google and Microsoft both promoting AI with little sparkly magic emojis and talking about magic writing and all of this stuff. And then after that initial wave of hype, it tends to recede pretty quickly and the language tones down, and everything just kinda fades back.

Leon Furze [00:09:42]:

And there’s also something to be cautious about in that process. And I wrote about this in 2023, reflecting on some of those similar things. But when the technology begins to kind of recede back into the woodwork, that’s when you’ve gotta be really careful as well, because there’s an imperative from the companies producing these technologies to make it disappear, you know, to make it mundane, everyday, not at all magical. And I think we’re in that transition at the moment with AI, from magical to mundane. And it’s a really tricky place to be, because once stuff’s mundane, it’s very hard to critique or expose its flaws.

Bonni Stachowiak [00:10:24]:

I’ve so enjoyed getting to follow your work now for some time, and then getting to speak with you again. If this was the first time anyone was hearing you and listening to our conversation, they might categorize you as someone who never uses artificial intelligence. And I’d love us to explore a bit around a tension that I’m feeling, speaking of cycles, almost multiple times in the same day: when organizations are asked what literacies they are looking for, what skills they are looking for in potential employees, and then we contemplate how much families are sacrificing to have people in their families go to school. Oftentimes this comes up with first-generation students, but this is, of course, not exclusively unique to that context. Yes, I believe in a liberal arts education, but I just don’t wanna think of it in a dichotomous way. What do you believe we owe to students who are sacrificing so much when it comes to these literacies that we think of too many times in such binary ways?

Leon Furze [00:11:38]:

Yeah. There’s two really big conversations happening right now, I think, which are really important along those lines. One is the split between kind of resisting and refusing AI use and wholesale embracing it. And the other is the conversation around AI literacy. And, you know, I’ve written about both of these. I’ve written a lot about that tension between resisting AI use and using the technology, because I find myself in, I mean, a constant state of hypocrisy, where I am quite critical of the technology and the structures behind it, and I’m also using it every day, extensively. And I think I’ve seen increasingly over the last couple of years, particularly in higher education academics, a resistance and a refusal of the technology, which I think is healthy in some ways. But also, there is a need to kind of temper that with the understanding that students are using it anyway.

Leon Furze [00:12:39]:

We can take a personal moral stance, but if we do have a responsibility to teach students, then we have a responsibility to engage with the technology on some level. And in order to do that, I think we need to be using it and experimenting with it, because otherwise we’re relying on third-party information and conjecture and opinions rather than direct experience. And that tends to then lead into conversations about AI literacy and what students need to be taught. And I actually hate the term AI literacy. I really dislike it. And I think this is because I dislike the way the term literacy more broadly is used. It tends to be a word which is used for standardization, for testing, for benchmarks and measurements, as a stick to beat educators and students with, rather than, you know, the way that we might think of literacy as the skills that students need to get through life. I saw just this morning that the OECD is publishing its definition of AI literacy, and I’m sure that will come with attendant PISA-style tests.

Leon Furze [00:13:50]:

So that’s where AI literacy for me becomes problematic. We have to acknowledge the students are using the technology and that they need as much support and guidance as they would with any new technology. But, also, we don’t need to turn that into a checklist of standards to achieve because that will be very reductive and probably just as harmful as literacy and numeracy testing has been.

Bonni Stachowiak [00:14:13]:

Tell us more about those harms that you see when we reduce it down in that way.

Leon Furze [00:14:18]:

I mean, most of my work history is in K to 12. And so in Australia we have NAPLAN testing, which is literacy and numeracy. Obviously, schools do attend to PISA and some of those international standards, and we have high-stakes school finishing examinations and things like that. And often, what something like NAPLAN, the national literacy and numeracy testing, will do is really narrow the curriculum. So for students who are in year seven or year nine when they’re doing the NAPLAN test, almost their entire English and mathematics curriculum will tilt towards the content of that test. And, you know, this is almost inevitable, because schools are held accountable, teachers are held accountable, by the media, by parents, by employers sometimes, to produce students who do well on the test. And the way to do well on these tests is to teach to the test. With AI literacy, I can see it turning already into the same conversation.

Leon Furze [00:15:20]:

AI literacy becomes, you know, a list of things you need to know about how algorithms work, or things that you need to know about how to prompt ChatGPT or the very products and things that we’re using. That narrative gets driven by industry and a certain amount of political influence, and doesn’t really end up helping students.

Bonni Stachowiak [00:15:43]:

Is there another term that you prefer to use in this endeavor of attempting to quantify any of this body of knowledge or skills or aptitudes or values?

Leon Furze [00:15:54]:

Yeah. And this is terrible. I’ve written about this before as well, and a few people around me are writing around all of this all of the time, and I had a good conversation on LinkedIn recently with another PhD candidate over here, Miriam Reynoldson, who said, well, if you’re not gonna use the term AI literacy, what are you gonna use? And I said, the only thing I can think of is critical AI literacy, which is such a cop-out, just putting the word critical at the front. But I think at least what that does is shift the focus onto the critique and not onto the literacy, and certainly not onto the AI part, the skills. And, I mean, there’s a certain amount of theoretical basis behind that for me. Critical literacy as a field is quite distinct from the idea of functional literacy and illiteracy. So when I say critical AI literacy, I’m talking more about descending from the field of critical literacy, from those more sort of social and political views regarding what literacy is, rather than the functional literacy.

Bonni Stachowiak [00:16:59]:

Yeah. And I am so curious now. I wanna ask you a question, but I’m thinking it’s exactly the opposite of what you just said. But I’m thinking, I mean, because I know that you use it, and I know that you just said it: we don’t wanna ignore, at least you don’t want to ignore, that it is there, and employers are asking for it. Are there things that come to mind for you, even though you don’t want it to be reduced down to a list, but anything that’s really coming to your mind now of, like, wow, this really is gonna be an important skill for them? I mean, because they’re probably not gonna be able to be part of the resistance against it being reduced down if they can’t get a job upon graduation.

Bonni Stachowiak [00:17:40]:

You know?

Leon Furze [00:17:42]:

Yeah. That’s a fair point. One thing I keep coming back to recently: with my consulting work, I work a lot with schools and universities. And at the start of the terms in Australia, I tend to work face to face and sort of travel around a bit. So that’s great, because I get to see hundreds of educators in the first couple of weeks of every term. And we’ve had the same kind of conversation a lot recently, which is, you know, what are those core skills that students need, and what do the educators need? And what I keep coming back to is to understand that this technology is predominantly, you know, it is software. It is a form of technology, first of all. It’s technology designed by software developers for other software developers.

Leon Furze [00:18:27]:

And at some point, it’s been released as a chatbot which can write blog posts or media copy or interact dialogically. And then, at some point from then on, it’s been mythologized into this thing that will somehow become artificial general intelligence or whatever. But at its core, this is an offshoot of machine learning translation technologies, of predictive natural language processing. And if we can start thinking of it as technology again, the way that you use it tends to change, I find. And so I went back through all of my last couple of weeks of use of ChatGPT. And the way that I was using it, nine times out of ten, was more like an aid to sort of low-level computer tasks, automations, stuff working with my websites, very functional, very technological stuff. And then occasionally, little bits working with language here and there. But my use of the technology has really shifted over the last few years, the more I think about it as a technology and not as a vehicle for language.

Bonni Stachowiak [00:19:40]:

Yeah. For me, I find, because it is changing so quickly, and not only is the technology changing, but so are the people who are using the technology. I mean, I just wish that there were a better lens that I would have. Anytime you conduct a survey, you know, it feels like two days later it could be so different in terms of the results. So we have to go by our gut so much of the time, of what we’re experiencing, because sometimes students don’t feel safe to tell us about their experimentations with it. And I also am very cognizant of the fact that sometimes students, people in general, don’t want to use it. So I’m so used to being able to say, let’s go experiment with this. Let’s try this.

Bonni Stachowiak [00:20:19]:

Let’s, you know, playfully experiment. And yet, unless you don’t want to, because there would be really good reasons why. I mean, my brain kind of already goes in so many different directions when I am teaching and attempting to design learning experiences, and this just adds a level of complexity that I’m not always quite sure how to frame. Other than, I mean, I think we can take principles from universal design for learning and, you know, give people different options and stuff. But when you’re going into the unknown and you want people to be able to look at something together, that’s what can make it difficult to do. And add to that the complexity of the fact that I often am teaching either asynchronous classes or flexible classes where they can do the asynchronous or they can come to a synchronous session. It just, like, adds in all these variables that, you know, make it hard. But I do think we should be doing it together, you know, with an inquisitive look at what just happened and what do you think might happen.

Bonni Stachowiak [00:21:23]:

And, you know, I mean, some of the things that have really worked for us along the way.

Leon Furze [00:21:27]:

Yeah. And I think, I mean, that idea of just collaborating through learning what the technology means is really important. When I started, you know, in 2023, when I was running professional learning sessions on AI, they were very much, you know, here’s a few prompts that work for this thing, and here’s an approach that works for this thing. And the way that I’m delivering now is much more around, what’s the process that you’re trying to get through, and then let’s look at some ways that AI might help. So instead of, here are six prompts for how to, you know, plan curriculum design, or write assessment tasks, or write worksheets, or any of that kind of stuff, it’s more like, okay, so what process do you follow when you are designing your curriculum without AI? Like a backwards design process or something like that? Okay.

Leon Furze [00:22:14]:

So let’s have a look at which points the technology might come in and be useful in that process, rather than, here are some ways to use the technology to do x, y, and zed. And that’s worked really well with students as well. You know, getting students to sit down and look at a given assessment task and just say to them, I mean, how might you use AI through this? Like, you show me: if you were doing this assessment task right now, what would you do? You would take it home. How would you use the technology? You tell me rather than me telling you.

Bonni Stachowiak [00:22:47]:

Yeah. It’s such a weird thing for me to also add in the element of trust, or utter lack of trust. I don’t want anyone to trust these tools to the degree that I trust a calculator. I go in and I type a calculation into a calculator, and I don’t go check the calculator’s work. I don’t. But I check the AI tool’s work a hundred percent of the time, and it’s so hard, the juxtaposition: I’m rather adept at doing that. I know that I shouldn’t trust it that much. I know the kinds of things that can occur, and, I mean, I’m sure I don’t know enough, but I understand prediction and, you know, how it’s different than a calculator, speaking of metaphors.

Bonni Stachowiak [00:23:34]:

But for students, it’s hard to help them have the lack of trust that I wish that they would have, while also faculty have zero trust and think, but maybe we shouldn’t pretend that these things don’t exist, you know, that maybe we should be doing this stuff together. So I don’t know if you have any thoughts about trust and the element that comes into play when you’re trying to teach someone some skills, in addition to, this might be helpful in the set of skills or the process, like you said, that you’re attempting to go through.

Leon Furze [00:24:06]:

Yeah. For me, this comes back to those conversations around AI as tutors or sort of chatbot replacements for educators. I mean, these technologies have consumed vast amounts of knowledge, but they don’t know anything. You know, they know syntax. They know the rules of language, and that might be 50 human languages and a hundred programming languages. And they may do a competent job of reproducing that syntax, but they don’t inherently have any knowledge. You know, it’s not a database. It’s not even a search engine.

Leon Furze [00:24:42]:

You know, now you can connect them to the Internet, or you can use retrieval processes to connect them to your own information, but they still have a tendency to wobble off course. And the way that I use the technologies most myself, one thing that hasn’t changed, I think we even spoke about this the last time we spoke, was often the main way that I’m using the technology is to take my own ideas, my own writing, my own voice memos, audio transcripts, and use the AI to kind of manipulate that content. So I’m working with material that I trust, because it’s me saying it. If I didn’t trust what I was saying, I’d be in trouble. But even just showing students those approaches, to give you an example, make it less abstract: with a group of Master of Education students, one of the tasks formerly was to take some literature that had been provided by the tutor, go away, read it, chomp through the readings, you’ve got 10 articles to read before next week, and come back with an opinion. And, you know, inevitably now, the way those students are doing that is to go away, put them all into AI, generate little churned-out summaries, and then come back and read the summary dot points. And, you know, nobody learns anything, and I wouldn’t trust those summaries.

Leon Furze [00:26:01]:

And so we discussed with the students and the tutor that a better approach might be to take one of those articles, the most relevant or the most important or whatever, and during the in-person instructional time, just break that article down in a group and really have a kind of rich discussion with individuals in that group about parts of that article. And then allow the students to take another bunch of articles away and maybe use the AI summaries for comparison or whatever, but to use that really rich and meaningful discussion in the class as the most important point and the starting point. So that, you know, you’re building trust, you’re building that understanding of the materials in person, and then the AI kind of comes in as a secondary activity on the back of that.

Bonni Stachowiak [00:26:51]:

It’s so interesting. Our conversation started by you saying things coming back around again, and it feels so much that, you know, really needing to think about how we’re using that time when we are together learning and and just centering ourselves on that. I have a confession to make. Dun dun dun. I do this next thing all the time, and that is anthropomorphize things. And, he wants to caution me, but I I wanna say that I don’t just do it with AI. I do it with our cars. I do it with our computers.

Bonni Stachowiak [00:27:25]:

I do it with really a lot of things. I, assign genders to them. I definitely I definitely do that. What’s your caution against us doing this when it comes to AI?

Leon Furze [00:27:37]:

I mean, we we all do it, don’t we? We do it with everything. We’ve done it for for forever. It’s just a thing that humans do. I I think it’s unavoidable. I think the only the only caution with AI is that technology companies know that we do this and are very much using it as a a deliberate way to engineer these these products to make them more compelling. If you look at OpenAI’s advanced voice modes, if you look at the the latest voice models from companies from Google to Amazon, Microsoft, everyone’s doing them. They’re all designed to be more and more human sounding. You know, they they have breathing sounds, they pause, they stumble over words. And you’ve gotta constantly question the, I guess, the the UX, the design intent behind that. You know, a chatbot doesn’t just miraculously start breathing by itself.

Leon Furze [00:28:30]:

It’s a design feature. So so why has that been designed in as a feature? I mean, I, you know, it is almost unavoidable, I think. I was driving from home to Melbourne, which is about a four and a half hour drive and chatting away to ChatGPT’s advanced voice mode, just bouncing an idea for a blog post backwards and forwards. All I was trying to do was get it to argue against me, and it was so sycophantic that it could it could barely do that. And I was getting really frustrated with it, and I had to stop and think, like, why am I getting frustrated with this this algorithm that’s just doing what it’s programmed for?

Bonni Stachowiak [00:29:07]:

I don’t know if you came across the writing that Audrey Watters did, taking us back to Clippy, the the Microsoft assistant, which I totally remember when it came out. But when you talked about the anger, it just reminded me of that. Just this irrational anger that so many of us felt towards this paper clip. And I also have felt that anger, and it’s so descriptive of what you just said, when it’s trying to be more human itself. And, oh, that’s a great idea. And I I don’t like when humans do that, but I especially don’t like when chatbots do. Like, you’re wasting my time. I’m not asking you for a compliment.

Bonni Stachowiak [00:29:47]:

I asked you a question. Could we stop that? You know? And, that’s but it is yeah. It’s that’s helpful for us just to be thinking about you know? And you you really stress the importance for precise and transparent language to be used when we’re talking about AI. Would you tell us some more of the reasons why you see that as so vital?

Leon Furze [00:30:09]:

I think, you know, when when we’re talking about AI, the the use of sort of wooly terms or that hyped, mythologizing language can distract us. You know, it’s it’s like a sleight of hand trick. And I can use OpenAI’s technologies and be critical of OpenAI as an organization and and live in that kind of tension. But when you start to use that language, which does humanize and and mythologize, it becomes harder and harder to be critical. And I think that’s the that’s the gap. You know, it’s very easy to be be critical of things when we’re using direct and sort of straight to the point language. The minute we start to get kind of fluffy around how we’re talking, it becomes very difficult to talk about things in a in a way which is maybe more constructive. And, I mean, talking talking about Clippy before. I’ve actually got on the back of my laptop here, which you obviously can’t see, that big Clippy sticker just to just to constantly remind me and anyone that I’m working with in a session that these these technologies are similar in a sense that we we have a technology which exists.

Leon Furze [00:31:24]:

It has been designed to fulfill a particular function. The point of Clippy was to be an assistant that helps with Microsoft’s products. The point of Copilot is to be an assistant which helps with Microsoft products. And it is not a game-changing, education-revolutionizing, amazing, godlike chatbot that’s gonna take over the world. It is a product designed by a company to fulfill a purpose. And like Clippy, sometimes it doesn’t, and we get frustrated with it. But that doesn’t change the fact that it’s just software.

Bonni Stachowiak [00:31:59]:

And tell us a bit about what you see as next steps. So whether they’re next steps for you, what you’re interested in furthering in your research, your writing, and your reflection, or whether it’s next steps for us in higher education in terms of the work we need to do collectively toward continuing to build our critical AI literacy, dare I say?

Leon Furze [00:32:23]:

That that all important critical word at the front. It’s, one thing I keep coming back to, and this has come through some of my PhD studies as well working with English teachers, is reinforcing for me the importance of teachers, educators as subject matter experts. And I think this is something that has been diluted a little, and it almost makes me sound kind of old fashioned or anachronistic to talk about educators being subject matter experts. Because for the last ten, fifteen years that I was working in education, we’ve been encouraged to think of twenty-first century thinking skills and future-focused skills and, you know, the importance of critical and creative thinking and interdisciplinary skills and all of this kind of language. There’s been a lot of rhetoric in education even before AI about the need to revolutionize education and move away from subject silos and and all of this stuff. And I haven’t seen any of that work. And I think part of the frustration there is because educators aren’t prepared to to teach like that. But also, I don’t think it’s very effective.

Leon Furze [00:33:33]:

I think, you know, there’s there’s a really strong rationale for educators to be experts in their subject areas and to be passionate about what they teach, and then to let students kind of gravitate to areas that they’re equally passionate about and learn from experts. So when I hear people say, oh, you know, AI has proven that the education system’s broken or AI has proven that subject silos don’t work and we need to prepare students for the future, my my gut reaction now is to say, well, what what is this abstract future that you’re talking about? Because students can’t go and be interdisciplinary experts if they don’t have at least one discipline to ground that in. And they’re never gonna get any kind of disciplinary expertise if we subject them to these amorphous twenty-first century thinking skill programs. So, you know, let the let the English teachers who love English, teach English. Let the mathematics teachers who love maths, teach maths. Let the science teachers teach science. And where appropriate, bring these technologies in.

Leon Furze [00:34:39]:

And, you know, the the examples I’ve given recently, mathematics is the best place to talk about algorithms and linear regression. Science is the best place perhaps to talk about data analysis and machine learning processes and comparisons to the human brain. English is a great place to talk about bias and misinformation. We can talk about AI without sort of slathering AI literacy over the top of things. But to do that, we need educators who are confident and passionate subject matter experts.

Bonni Stachowiak [00:35:12]:

This is the time in the show where we each get to share our recommendations, and this feels like such a theme of our whole conversation is things coming back around again. Before we got on on the line together today, I went back, and as best as I can see, I’ve never recommended this technology tool, and I use it every single day, multiple times a day. And it is a a simple, but yet also not simple screen capture tool. It’s called Snagit, and I’ve been using it for decades and decades. And you can do a full screen. You can do just a region, a fixed image size, or even a scrolling website, which I do use quite a bit. And then here are just some of the things that I do with it.

Bonni Stachowiak [00:36:02]:

I can easily blur things out. So if I wanna maybe grab something that a student said and I and I want to anonymize it, it’s just a quick and easy thing to blur something out, or to magnify some of the text to really have it stand out. I love the cutout feature so I can do this horizontally or vertically, and you can make it look like you’ve torn a page. Or what I tend to do is have there be no indication that something has been cut out. If, visually, the space between things on the website would look better once I get into the screenshot, I can just take a little bit of space out horizontally or vertically. And something I use all the time are the on screen annotations like the arrows and the highlights and putting a box around something, or you can even do steps. You know? The first thing you’re gonna do is click here, and the second thing you’re gonna do is click there.

Bonni Stachowiak [00:36:51]:

And I just can’t believe, in all this time, I’ve mentioned Snagit, it’s shown up in my blog a bunch of times, but this is the first time, to my knowledge, I’m actually recommending it on the podcast. And I can’t believe it’s been this long, more than ten years, before I’m finally getting to it, for something I use this much. And I’m laughing because I’m recommending this on a show in which Leon is suggesting that maybe we don’t call things magic. So I won’t say it’s magic, but I also will say it’s magic, because it’s a really well used tool that has been around a long time, available both on Mac and on Windows.

Leon Furze [00:37:23]:

I I’m just looking at the little Snagit icon in the top of my window here because I I use it a lot as well. And yeah. I mean, it’s not magic, but it’s just technology that’s been designed for a purpose and it does that thing really well, which is sort of the antithesis of AI, isn’t it? Which is a technology which hasn’t been designed for anything in particular and which does everything sort of okay.

Bonni Stachowiak [00:37:44]:

Yeah. I kept hearing along the way that that people don’t even know how it works, that, like, here’s this thing. We built it, but we’re not entirely sure how it works. And I don’t even know if that yeah. That that that just seems that seems awfully strange. You know? That’s not how we’re used to things coming into the world.

Leon Furze [00:38:00]:

Right. Although I do use Snagit with AI because it’s great for capturing, like, a multimodal scroll of a website and then, you know, pop that into Claude or ChatGPT, and it’s able to look at the images as well as the text. So as an as an alternative to copying and pasting, I I will use Snagit for long website scrolls.

Bonni Stachowiak [00:38:18]:

Yes. Yes. Wonderful. Well, what do you have to recommend today?

Leon Furze [00:38:22]:

I’m gonna recommend something a little bit controversial. I’m gonna recommend that people don’t necessarily buy them, but go into a shop and at least try out the Meta Ray-Bans, which have been growing in popularity for the last couple of years. And the reason for that is because I think that the technology of wearables which have artificial intelligence at some point in their stack is gonna be incredibly important over the next few years. And so I’ve got a pair of the ones that were released about twelve months ago, and they have those bone conduction speakers in the frame, and they have cameras in the front lens and a Bluetooth connection to a phone, which will then connect to Meta AI with with voice mode. And so if you’re wearing them and you say, hey, Meta. Look and tell me what you see. It will take a photo. It will use the image recognition with Meta’s Llama model, Llama Vision, and it will interpret that and it will sort of speak it back to you in your ears.

Leon Furze [00:39:21]:

And the reason I’m recommending it is not because, you know, hey, this is a super fun, cool new thing to try, but Meta overtly, Mark Zuckerberg overtly, wants this kind of technology to replace the iPhone. Tim Cook of Apple thinks that wearables will probably ultimately replace the iPhone. Already, the Meta Ray-Bans are Ray-Ban’s best selling glasses globally. So we know that there is a huge demographic purchasing them and using them. And it’s kind of as you might imagine, it’s like 18 to 25, 18 to 30 year olds. There’s there’s gonna be a huge explosion of this technology, particularly when the the hardware is all just on device, and we don’t rely on a Bluetooth connection to a phone anymore. And that’s probably a lot closer than than you might expect as well because these language models, they’re getting so much smaller and more efficient. You can already run a small language model like Google’s Gemma.

Leon Furze [00:40:17]:

You could run it on an electric toothbrush. It’s it’s so small. And, you know, this this is gonna be one of those things which I think, like ChatGPT, takes everyone by surprise if they’re not paying attention.

Bonni Stachowiak [00:40:30]:

What is it especially that you think is gonna take us by surprise? I mean, I’m trying to picture it. By the way, I got swept up in the magic, here we go again with the word magic, of the Apple Vision Pro, the the very expensive, for those listeners who may not be familiar with this, Apple product. And yet we have held ourselves back from it. I realized, by the way, we’re talking about two different things. But, but bear with me here. I got swept up in it, but yet it’s not really designed to be able to share well in a household. And I have glasses, and my prescription’s obviously gonna be different than my husband’s, and our children don’t currently wear glasses. So it was just like, wait. That expensive, and it and it can’t even be shared well in a family.

Bonni Stachowiak [00:41:14]:

So I’m curious now to hear about the magic even though we’re not supposed to think of these things as magic that for you, like, I’m hearing you come alive, I guess, in in describing what you see as a potential unexpected wave for much of society.

Leon Furze [00:41:28]:

Yeah. I mean, already, when I demonstrate these in schools in particular, like, the the profile of a pair of these things is the same as a normal pair of Ray-Bans. For the audio, I’m wearing a pair of Wayfarers. And the the cameras in in the frames are sort of almost invisible if you’re if you’re far away from them. The speakers are very good quality, and they are already available in prescription lenses and clear lenses and Transitions lenses, and they’re about $300, like normal prescription glasses. And what strikes me is that a few years ago, people wearing Google Glass were called glassholes, and they were widely derided. And you flash forward a few years, and a partnership with a popular sunglasses manufacturer has sort of made them socially acceptable. There are huge privacy concerns, especially with a company like Meta spearheading this, but we know that there will be other companies who aren’t Meta who start to produce these things.

Leon Furze [00:42:34]:

You know, they’re already sort of creeping out now. They’re gonna be really, really ubiquitous. And I guess my question is, you know, we’re we’re just coming to terms with the ubiquity of ChatGPT and, you know, 80 to 90% of students using ChatGPT to do all of their work. What does it look like in twelve to eighteen months when people are walking around carrying an AI model that’s as powerful as GPT-4o on their face? Hands free, voice-to-voice transcription, totally sort of almost invisible for use. You know, the next version of these glasses will apparently have some kind of display that might be a display on the lens or it might be an LED light directly into the retina, which is something they’ve patented.

Leon Furze [00:43:26]:

You know, I just don’t think that we are as prepared as we would like to think for what those technologies mean. And so that’s that’s sort of why I’ve recommended them, not just to go out and try them because they’re kind of fun and jazzy, but to kind of see what the technology looks like now, realize that it’s probably already more competent than you think, and then flash that forward twelve months’ time.

Bonni Stachowiak [00:43:53]:

I have recommended this on the podcast before, but I I’m gonna put it in the show notes too. People absolutely have to go see Leon’s posts about ethics and AI. I mean, I keep talking about them in every single talk that I give. There’s a whole, you know, series of slides. It’s the best stuff I’ve seen as far as ongoing sort of rich resources for this. And, of course, I’m picturing now the ethics around, oh, there you mentioned a brand specifically, the Meta Ray-Bans. These are wearables, but is this augmented reality, or not quite yet?

Leon Furze [00:44:29]:

Yeah. I mean, I guess it does it does count as augmented reality in the sense of we’ve got the audio at the moment, and we’ve probably got the visual coming soon, where we will be able to overlay stuff onto the surroundings that you’re looking at. Mhmm. Wearables, AI wearables, we’re already seeing lots of different kinds of offshoots. And I’m just about to swing into updating that teaching AI ethics series from 2023. I’m sort of procrastinating because it’s such a big job to review all of the ethical areas of concern, the bias, the copyright, the privacy, because none of it’s gotten particularly better in the last couple of years. There are some good things though. So I’ll be talking a little bit more about the good stuff this time around as well.

Bonni Stachowiak [00:45:12]:

I’m so curious now. I’m so curious. Thank you for once again piquing our curiosity, getting us to think critically, and for this wonderful conversation. You’ve also introduced me to some of your collaborators too, and I’m just appreciating those connections and so looking forward to these conversations getting out into the world.

Leon Furze [00:45:31]:

Thank you very much.

Bonni Stachowiak [00:45:34]:

Thanks once again to Leon Furze for joining me on today’s episode. Today’s episode was produced by me, Bonni Stachowiak. It was edited by the ever talented Andrew Kroger. Thanks also to Sierra Priest, who provided the podcast production support, and thanks to each of you for listening. Teaching in Higher Ed is such a special community to me. And if you enjoy these conversations, I would love it if you would rate or review the podcast on whatever platform it is you use to listen so more people can discover the show. Thanks again for listening, and I’ll see you next time on Teaching in Higher Ed.

Teaching in Higher Ed transcripts are created using a combination of an automated transcription service and human beings. This text likely will not represent the precise, word-for-word conversation that was had. The accuracy of the transcripts will vary. The authoritative record of the Teaching in Higher Ed podcasts is contained in the audio file.


CC BY-NC-SA 4.0 Teaching in Higher Ed | Designed by Anchored Design