
Teaching in Higher Ed

EPISODE 604

Peak Higher Ed: AI’s Possible Futures with Bryan Alexander

with Bryan Alexander

January 8, 2026

https://media.blubrry.com/teaching_in_higher_ed_faculty/content.blubrry.com/teaching_in_higher_ed_faculty/TIHE604.mp3


Bryan Alexander shares about Peak Higher Ed on episode 604 of the Teaching in Higher Ed podcast

Quotes from the episode

"The problem of how do we actually figure out what people are doing with AI within post secondary education? That's a really great challenge because if you polled people, they have all kinds of great incentives to not respond accurately." - Bryan Alexander

“It's another form of thinking, it's another form of organizing information and that we have to treat it seriously as such. The computer scientist actually recommends that we think about generative AI as children. These are AIs that have some degree of autonomy and they're also not very wise in the world yet, and we have to train and rear them up.”
– Bryan Alexander

“So if AI is a bubble, if it turns out to be a bubble and it pops, this might be bad news for the entire economy.”
– Bryan Alexander


Resources

  • Peak Higher Ed: How to Survive the Looming Academic Crisis, by Bryan Alexander
  • Bryan Alexander’s Website
  • Maha Bali’s Blog
  • On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜, by Emily M. Bender et al.
  • Helen Beetham’s Newsletter: Imperfect Offerings
  • Pluralistic: Daily Links from Cory Doctorow
  • Faraday Cage
  • Georgetown University: Learning, Design, and Technology
  • John Warner
  • John Warner’s Newsletter
  • GTD – Workflow diagram
  • Todd’s AI Playground
  • Todd’s AI Songs About His Course Evaluations
  • Adam Tooze
  • Chartbook


ON THIS EPISODE

Bryan Alexander

President

Bryan Alexander is an internationally known futurist, researcher, writer, speaker, consultant, and teacher, working in the field of how technology transforms education. He completed his English language and literature PhD at the University of Michigan in 1997, with a dissertation on doppelgangers in Romantic-era fiction and poetry. Then Bryan taught literature, writing, multimedia, and information technology studies at Centenary College of Louisiana. There he also pioneered multi-campus interdisciplinary classes, while organizing an information literacy initiative. From 2002 to 2014 Bryan worked with the National Institute for Technology in Liberal Education (NITLE), a non-profit working to help small colleges and universities best integrate digital technologies. With NITLE he held several roles, including co-director of a regional education and technology center, director of emerging technologies, and senior fellow. Over those years Bryan helped develop and support the nonprofit, grew peer networks, consulted, and conducted a sustained research agenda. In 2013 Bryan launched a business, Bryan Alexander Consulting, LLC. Through BAC he consults throughout higher education in the United States and abroad. Bryan also speaks widely and publishes frequently, with articles appearing in venues including The Atlantic Monthly and Inside Higher Ed. He has been interviewed by and featured in MSNBC, US News and World Report, the Chronicle of Higher Education, the National Association of College and University Business Officers, Pew Research, Campus Technology, and the Connected Learning Alliance. His two most recent books are Gearing Up For Learning Beyond K-12 and The New Digital Storytelling.

Bonni Stachowiak

Bonni Stachowiak is dean of teaching and learning and professor of business and management at Vanguard University. She hosts Teaching in Higher Ed, a weekly podcast on the art and science of teaching with over five million downloads. Bonni holds a doctorate in Organizational Leadership and speaks widely on teaching, curiosity, digital pedagogy, and leadership. She often joins her husband, Dave, on his Coaching for Leaders podcast.

RECOMMENDATIONS

GTD – Workflow diagram
RECOMMENDED BY: Bonni Stachowiak

Todd’s AI Playground
RECOMMENDED BY: Bonni Stachowiak

Todd’s AI Songs About His Course Evaluations
RECOMMENDED BY: Bonni Stachowiak

Adam Tooze
RECOMMENDED BY: Bryan Alexander

Chartbook
RECOMMENDED BY: Bryan Alexander

Maha Bali’s Blog
RECOMMENDED BY: Bryan Alexander

Helen Beetham’s Newsletter: Imperfect Offerings
RECOMMENDED BY: Bryan Alexander


Related Episodes

  • EPISODE 144: Digital Literacy – Then and Now, with Bryan Alexander

  • EPISODE 288: Academia Next, with Bryan Alexander

  • EPISODE 312: Digital Visitors and Residents, with David White

  • EPISODE 602: Navigating AI’s Rapid Transformation in Higher Ed, with C. Edward Watson

  

EPISODE 604

Peak Higher Ed: AI’s Possible Futures with Bryan Alexander


Bonni Stachowiak [00:00:00]:
Today on episode 604 of the Teaching in Higher Ed podcast, Peak Higher Ed: AI’s Possible Futures with Bryan Alexander.

Production Credit: Produced by Innovate Learning, Maximizing Human Potential.

Bonni Stachowiak [00:01:22]:
Welcome to this episode of Teaching in Higher Ed. I’m Bonni Stachowiak, and this is the space where we explore the art and science of being more effective at facilitating learning. We also share ways to improve our productivity approaches so we can have more peace in our lives and be even more present for our students. Today I’m joined by futurist Bryan Alexander, whose work has shaped how colleges and universities around the world think about emerging trends, technological change, and what’s next for higher education. Bryan is the author of Academia Next and, most recently, Peak Higher Ed, and he leads the long-running Future Trends Forum. In this conversation, we look squarely at just one of the many forces that Bryan examines in Peak Higher Ed: artificial intelligence.

Bonni Stachowiak [00:01:22]:
This is a force that’s shaping not just our classrooms, but the entire social and geopolitical landscape in which higher education now operates. Bryan helps us widen our lens, consider the multiple futures that might unfold as AI both inflames and inspires our work, and shares candidly about his own experiences as an educator. Bryan Alexander, welcome back to Teaching in Higher Ed.

Bryan Alexander [00:01:51]:
Thank you so much, Bonni. It’s wonderful to be here.

Bonni Stachowiak [00:01:53]:
I am both so delighted to talk to you again. But I’m also scared because I want to start out with something that feels sensationalist to me, and you know, Bryan, this is not a sensationalist podcast, but you did write it. So I feel like, you know, you have to be safe with it. So I’m going to, I’m going to quote an X exchange from February of 2025, but I want listeners to know if this feels like “What on earth? This is uncharacteristic of Teaching in Higher Ed”. It’s only going to take about 15 seconds and it all makes sense in a second. So hang on to your hats, folks. All right, Bryan, are you ready for me to quote?

Bryan Alexander [00:02:29]:
I’m ready.

Bonni Stachowiak [00:02:29]:
You probably already know what I’m going to quote. So, Palmer Luckey, this is a post on the social network X. Palmer Luckey writes: “What will happen in broader academia when clear scientific consensus is that AI-assisted education delivers better outcomes than 3.8 million teachers currently do?” And Bryan, who is it that responds to Palmer Luckey?

Bryan Alexander [00:03:00]:
And that would be one Elon Musk.

Bonni Stachowiak [00:03:02]:
And he says, “That is already possible.” I ask you, Bryan, how are you analyzing all the different ways that AI is inflaming, and yet, in other cases, inspiring higher education, in all the complexity therein?

Bryan Alexander [00:03:24]:
Well, thank you for picking out that quote. I’m glad it stuck with you. It really hit me when I first read it, and I think it embodied quite a lot. Before three years ago, I was paying a lot of attention to AI. I wasn’t writing a lot about it, although I wrote a chapter about it in a previous book. But when ChatGPT really took off, then I devoted more and more time to it, and I’ve been thinking more and writing more. I have a whole Substack series just about AI. And it cuts across higher education in so many different ways. I mean, you could think about, for example, a strategic question.

Bryan Alexander [00:04:00]:
How should an institution, a complicated organization like a research university or a community college, how should they grapple with this new technology? It’s a technology that in many ways is more slippery, more fast moving, just simply more vast than many other previous technologies, say compared to Wikipedia or the mobile phone. You can think about questions of pedagogy. What are the positive ways that we can interact with it? How can we use generative AI to produce simulation? How can we use it to enhance student experience and student voice, versus the problems of what does this do to the beneficial friction of struggling with writing or thinking through a problem? Much less the great unsolved problem of what do we do with assessment? And we can look at the problem of how do we actually figure out what people are doing with AI within post secondary education? That’s a really great challenge because if you polled people, they have all kinds of great incentives to not respond accurately. You know, if you ask somebody are you using ChatGPT or using any AI, they have the incentive to say no, because it might appear to them to be a kind of, you know, unpleasant or unpopular, awkward technology. Or they want to seem cool and tech savvy, and so they’ll overstate their use of it. It’s like asking people, you know, about their use of alcohol or their sex habits. On top of this is the question of how do you structure a university or college’s response? Does it become something that, say, a chief information officer takes charge of, or a chief online learning officer if they have one? Should this be a provost job? Should you hire someone to be an AI manager or AI evangelist or AI vice president for a campus? Is this something which you need a special advisory group for? I mean, is this as urgent and as challenging and as complicated as COVID-19, which we confronted five years ago? Does it need that kind of whole-of-enterprise response?
Or is this something that is overblown and overhyped, and we should expect it to fade after a big market correction and bubble? So we should be ready to pick up the pieces in a few months, and then this should just be something for the chief of IT to worry about? I mean, the issues go even further and further still.

Bryan Alexander [00:06:11]:
The proponents of AI in the world outside higher education are sometimes very political. They might be very libertarian, or very Republican. The Republican Party right now is largely, not entirely, but largely very pro AI. Trump is; Project 2025 was. I mean, there are Republican opponents to AI like Josh Hawley, but on campuses, a lot of the opposition to AI casts itself in a very progressive, political way. People viewing AI as having bad labor practices, as reinscribing social problems, social injustices, and so on. So it really hits higher education on so many different angles and domains. I mean, in this book, I focused everything in one chapter because I really wanted to get as much of the current set of issues into it as possible.

Bryan Alexander [00:07:00]:
But it’s not the only issue impacting higher ed, so it had to take a seat alongside all the other ones too.

Bonni Stachowiak [00:07:06]:
You write, “Some argue we’re on the verge of something like utopia, while others warn of disaster or utter human extinction.” How do we even start to think about these issues? And particularly using a futurist’s lens or set of lenses?

Bryan Alexander [00:07:27]:
Well, that’s a great question. And if the audience hasn’t seen some of these, I’d recommend, for the utopian side, my go-to text is Vinod Khosla’s piece called AI: Dystopia or Utopia?, which argues for AI as utopia. And he’s interested and biased, he’s an investor in AI. But it’s a very thoughtful piece in terms of trying to create a maximum AI-is-beneficial-technology view. And you can go further still to science fiction. I’d strongly recommend the Iain M. Banks series called The Culture, which has a future humanity lorded over and sustained by attentive and kind AIs.

Bryan Alexander [00:08:07]:
For the dystopian side, I mean, it’s easy to find pop culture versions. Everyone likes to go to The Terminator. I’d say instead go back to Colossus: The Forbin Project. But I think you can find many other views that consider this to be dystopian. I would say Maha Bali’s views, through her blogs and her talks, you could find that. But also taking a look at, for example, the great paper which gave us the “stochastic parrots” model. And there are other views. Brian Merchant, for example, in his Blood in the Machine Substack, gives a very great argument about AI as this nightmarish machine on many levels.

Bryan Alexander [00:08:43]:
As a futurist, I bring to bear a few different tools that we have in the futurist toolkit. One is just simply practice, that futurists have been thinking about AI since, really, the 1960s. And you can find references to it in writing from then on to the present. So we’ve got some ideas. We can rely on some of our practices, like environmental scanning, which is a pretty self-explanatory term: just going through the present day looking for evidence of certain stories and uses, and then compiling them and trying to determine what kind of trends come out of that. I do a bunch of that work, but you can use other tools as well. One, for example, is to try to produce scenarios of where this could take us. And I give four different scenarios along those lines in Peak Higher Ed.

Bryan Alexander [00:09:30]:
And another is to do workshops and interactions with people, which is in many ways the best. So to give an audience, as a group of participants, a challenge. For example, on the Future Trends Forum, we had a challenge of redesigning the college and university of today, assuming AI as it now stands, which is a fascinating prompt. Think about, well, do we need sports? What happens to tenure? What kind of governance structures do we need? How do we fund all this? Should there even be a college? All these kinds of things. And again, another two bits of this: one is that we try to be as open to as many possibilities as we can. I think of this as the range from utopia to dystopia, so that we don’t end up just leaning towards one or the other. And beyond that, to try to get as much different input as possible. So to hear from proponents and critics, to hear from people in the global north, the global south, to hear from people across the disciplines, the humanities and the sciences.

Bryan Alexander [00:10:30]:
And so that often churns up a lot of the way I work.

Bonni Stachowiak [00:10:34]:
You do offer us a caution. I love both what you just said, and also, I may be ahead of some listeners in having read the book and knowing that you live up to the spirit of that through and through, not just in this book, but through your work. That is what informs your sense of purpose and mission, and that’s very consistent. But you still do offer us a caution against believing all the hype, not just going down and buying into all of those utopia beliefs. What informs your caution on that particular front? What should we be aware of?

Bryan Alexander [00:11:08]:
Well, I have a whole series of cautions, and I guess I could add one more part to the futurist toolbox. And this is a bit more controversial, which is to think about science fiction. Because, you know, imaginative fiction has been thinking about AI for quite some time, and arguably the first great AI story is Frankenstein, if you will. And we have many examples that give us ways that AI could go wrong. I pay a great deal of attention to critics like Helen Beetham and Cory Doctorow, trying to think about some of the different ways that AI could cause human harm or misfire. I mean, there’s the macroeconomic possibility of it being in a bubble. Ed Zitron, for example, writes these incredibly long newsletters about that, and that has a couple of problems.

Bryan Alexander [00:11:51]:
One is just taking down the amount of investment, both financial, but also the investment in time that people are putting into AI. But also, the problem that a lot of the US economy is now riding on AI in many ways, depending on how you measure it, a big chunk of our economic growth. So if AI is a bubble, if it turns out to be a bubble and it pops, this might be bad news for the entire economy. But also thinking about what this means within higher education. Thinking about the possibility of the classic industrial-era move, where an enterprise swaps workers for capital: so we’re going to take out humans who do various things from staff functions and faculty functions and replace them with AI. That’s one thing that I’m very leery about, especially if it’s done in a cruel way and doesn’t lead to a net increase in employment. I’m worried about the pedagogical problems that may result. Do we have, for example, cheating that just occurs at scale because we have no real good way to stop it or dissuade it? And in which case, do we get a lot of students who go through the higher education experience and get very little out of it? And then does that degrade the reputation of higher education outside of the academy? And also, do we just have students who lose what some call the beneficial friction, the beneficial struggle of wrestling with complicated stuff in higher education? That for us is part of how we do learning.

Bryan Alexander [00:13:18]:
So if someone is trying to fight through the many thickets of organic chemistry, for example, or they’re working through an experimental novel for the first time, they can turn to AI to smooth the way. But does that smooth it so much they actually don’t learn anything? I’m cautious of this. I’m cautious of the way that AI can reproduce problems in its data set, which is the Internet. Ian Bogost once said that we should think of AI as an instrument that plays the Internet, which I’ve always liked. My friend, the late leader of CNI, Cliff Lynch, said that for him, large language models were basically a remix machine, remixing its data set. And I’m very persuaded by those accounts. But in doing that, in having that kind of remix happen, then we can remix and reinscribe and reinforce all kinds of biases on multiple axes: biases by language, by geography, by religion, by race, by gender, by ideologies of all kinds, and so on. I’m also very worried about the labor aspect of this.

Bryan Alexander [00:14:28]:
I mean, as with a lot of businesses, I worry that they have bad labor practices. I mean, are they outsourcing moderation? Are they outsourcing work to people that they pay very badly and support very badly? And in a sense, for me at least, looming above all of this is the question of what is the impact of the large language model on climate change? To what extent do large language models drive up electricity demand and cause us to use more electrical power, which means that in responding to that, we slow down the decarbonization process and give new life to, say, coal or natural gas? And to what extent do large language models use too much fresh water? And fresh water is a problem in much of the world. So I think about all these issues as they impact and shape the world around higher education, but also as they hit higher education itself. I don’t think we can cleanly separate the academy from the world. I think these global issues are part and parcel of what we do.

Bonni Stachowiak [00:15:30]:
I want to share two quick anecdotes, and my risk in sharing them quickly is that they’ll lose their nuance. Both of these people, I appreciated the interaction we had. The first was someone who I did not know. She came up to me after I gave a keynote and did a workshop at a university more than a year ago. And she said she really enjoyed the game that we played. I’ve created this card game based on Maha Bali and her collaborators’ research that looks at AI metaphors: what metaphors do people use to describe it? And so she had heard the keynote, which was about a different topic, but then played the game. And she came up to me because I asked them.

Bonni Stachowiak [00:16:10]:
The game is called Go Somewhere. How you, quote, “win” the game: everyone gets to win if everyone commits to one small action that they will take to move forward in whatever path they’re on. So I don’t ask that they have to follow my own set of, you know, where I think we ought to go or, you know, that kind of thing. So I just say it’s a small commitment. It’s a little yellow card, and everybody fills out: what’s that one small action that you’ll take? And she was so kind: “Oh, it’s been so great having you here. I loved hearing you talk.

Bonni Stachowiak [00:16:42]:
And this has just been so fun.” But then this was her very gentle criticism: “I just thought you would tell me what to do, and I’m disappointed.” And I think she said “exactly what to do,” which, yeah, a little nuance there.

Bonni Stachowiak [00:16:57]:
And then the second interaction is with a dear friend. So actually, it wasn’t an interaction that I had. It was an interaction that other colleagues had, but I know this person very well. I’ve considered him to be a friend for many decades now, someone whom I admire immensely for their teaching ability. So I don’t want this secondhand, paraphrased sentiment to at all reduce the power of this, the transformative power of this person’s teaching over decades and decades. But the way that this has been shared with me is that the colleague said he’s just frustrated.

Bonni Stachowiak [00:17:33]:
So many of us are frustrated, right, Bryan? He said, “I don’t want to sit in conversations about how to get students not to cheat with AI. I also don’t want to be told that I have to change how I’ve been teaching successfully for decades.” And I’m sure I didn’t get it right because I wasn’t there for this. But this is the spirit of the sentiment. So, in terms of those two reactions, I’d like you to assume the best, because I do. I assume the best of both of these people.

Bonni Stachowiak [00:18:00]:
So if we assume the best, and we can tap into, like, the struggles, the challenges, just how it’s like knocking around our sense of identity and purpose. How might you respond to either both of them collectively, or if you’d like to take them one by one? I’m game either way.

Bryan Alexander [00:18:16]:
Well, I’d like to take them one by one. The first one is one I’ve been hearing, I want to say, since the 1990s, when some academics confront digital technology and think that either it does not require a lot of deep thinking, practice, and reflection on their part, or that they can’t justify the time for it. I remember hearing one college provost tell an audience at Educause that campus IT should be like toilet paper: it was necessary, but it shouldn’t be seen in public and people shouldn’t make a big deal about it. I’m not sure she really impressed the audience very much there at that institution, but there are good reasons for the second part, of not being able to devote the amount of time to dive into this. We think about, right now, depending on the institution, the multiple pressures that are on faculty, that they’re concerned about their profession. It might be a field like, well, it might be a field of computer science.

Bryan Alexander [00:19:17]:
It might be a field like women’s studies or Chicano studies. They might be at an institution which is suffering financially. And until this year, that would be institutions that tended to be marginal or lower-ranked. But now this year, this includes the elites as well. Just today, I was reading that Yale’s provost announced that they are contemplating layoffs in the next year. This is one of the richest universities in the world, and they’re contemplating this. And then it depends again on the person. They may have lost loved ones, they may have suffered physical damage.

Bryan Alexander [00:19:48]:
They may have suffered other damages during COVID from which they have yet to recover; they may have long COVID, for example. They may be of a gender or racial or religious minority that, again, depending on where you are, is suffering from persecution. They may also be at an institution which is responding to its crises with a shift away from faculty governance. It may be one that has increased the amount of work that faculty have to do. It may be one that has decreased the support they have, and so on. I mean, there are quite a few reasons to not be able to have the time to think about this. But unfortunately, AI is this complex, and also kind of orthogonal to what we’ve been doing.

Bryan Alexander [00:20:31]:
I mean, some earlier technologies we could fit into previous situations. We could say, okay, well, Wikipedia is an encyclopedia, and we’ve had encyclopedias for 200-plus years; we know how to handle that. So that could fit in a little bit. You know, you could take a look at the learning management system, which didn’t do anything new in itself; what it did was put a bunch of tools like discussion boards all in one box. AI is in many ways something which we haven’t really done before in higher education. The AIs before LLMs were very thin on the ground and often at research or experimental stages. A lot of people that used AI outside of education either weren’t aware of it or didn’t connect it.

Bryan Alexander [00:21:14]:
So, people playing computer games, for example, or using Google before Gemini or Bard. And I think we really have to be in that stage, where we have to be able to think carefully about this. One of the ways I view AI is that it’s an alien, in the literal sense, from the Latin for “the other.” It’s another form of thinking. It’s another form of organizing information, and we have to treat it seriously as such. The computer scientist actually recommends that we think about generative AI as children. These are AIs that have some degree of autonomy, and they’re also not very wise in the world yet, and we have to train and rear them up. I mean, these are all very demanding ways of going ahead. But I would also say that nobody says that every faculty member at every institution has to be thinking and planning through all the strategic ways that the world intersects with their institution. One of the reasons we join institutions is to outsource some of that thinking.

Bryan Alexander [00:22:13]:
So, you know, an individual professor of history or biology doesn’t have to do HR. They hire someone for that. They don’t have to do marketing or outreach; we have a registrar, provost, and communications team for that, and so on. So I think in many ways it’s up to a college or university to facilitate conversations about this, to bring in professional development, to bring in good speakers, to set up good resources and practices. Morgan State University, for example, with very few resources, is actually doing a tremendously ambitious job of grappling with AI. They set up a repository of campus AI policies, not their own, but other campuses’, which is a great tool for anybody to use. To have a teaching and learning center, to have the provost arrange for workshops or tea times so that people can have these conversations. And do it in an environment that is welcoming and supportive of multiple views, so people who are critical don’t feel they’re marginalized, people who are curious don’t feel they’re marginalized, and so on.

Bryan Alexander [00:23:09]:
I think that’s really important to do. The second one, your second anecdote: to an extent, we’ve been down that road before. This is the faculty member who says, why should I change the successful practices I’ve been using for decades? You could think about people who have had to change for non-technological reasons. If, for example, they changed institutions, or if they had to teach a seminar or a lecture when they were used to doing one and not the other. But also we’ve had technological changes that shifted this, you know, from web publishing of all kinds, from WordPress to HTML, to the LMS. I mentioned Wikipedia, which I still hear controversies about, which I find charming in a way. Thinking about mobile devices: do you permit these in the classroom? And if so, what do you?

Bryan Alexander [00:23:57]:
What do you do with them? How do you handle them so they yield the best possible pedagogical, you know, outcomes? I mean, during COVID I was talking to a professor at one university who told me she was happy to be online and teaching via video, because she believed that the state government required her to lecture for five or six hours a week. And so she would do that in her classes, normally face-to-face. So she would just lecture that way on Zoom. And I said, well, you don’t actually have to lecture five to six hours a week on Zoom. And she said, oh, yes, I do, the government says I have to.

Bryan Alexander [00:24:34]:
No government on earth says that, and she’s not… Sometimes these practices need to change. And in a sense, if a professor doesn’t want to change, the world has already changed for them. What do you do when your students come to you already using AI in various ways and in various capacities? If you turn your classroom into a Faraday cage, you just blot out all electronic devices, or you just prohibit them verbally? No laptops, no phones, only blue books, God help you. Even if you do all of that, the minute they leave the classroom, they whip out their phones, you’ve got access to 3G, 4G, 5G networks, and off you go to AI. So it’s time to adjust to that.

Bryan Alexander [00:25:14]:
I mean, I know a lot of professors who aren’t used to having a classroom that is not mostly white students. Well, you’ve got to change. As the times have shifted, the demographics have changed. So, I hope that we can fall back on our institutions. I hope that our colleges and universities can support faculty in doing this through professional development, like through the POD Network and through teaching and learning centers, and through giving faculty grace in order to change as the times do.

Bonni Stachowiak [00:25:48]:
You said sometimes practices need to change. What are a few ways that your practices have changed or are changing with this continued emergence of artificial intelligence?

Bryan Alexander [00:26:00]:
Well, I can tell you, but my practices are a little unusual in that I teach in an unusual program. So just to back up: I teach some classes in Georgetown University’s master’s degree program in Learning, Design, and Technology. So these are graduate students who are going to go on to work in, say, instructional design or educational technology, or they might just be applying those skills to some other purpose. For example, we’ve had a couple of students who work at think tanks. This is the Washington, D.C. area, there are a lot of think tanks, and they want to basically not only teach online, but use the skills of teaching online in order to better do their web presence, their digital work. You know, we’ve had people who are professors, we’ve had students who are also staff, and they basically want to take a deep dive into learning design. So.

Bryan Alexander [00:26:47]:
And they’re all ages, you know, as young as 22 and as old as retirees, mostly international. And the whole program is actually very small. Last year was the biggest class, a whopping 25. So in my own classes, 25 is the biggest, and the smallest would be, say, six. So it’s unusual, it’s a marginal case in this respect.

Bryan Alexander [00:27:10]:
And it’s also very meta, because the program is about learning design technology. So we talk about the stuff we’re using. You know, our classroom has wonderful mobility, we can move all the furniture and everything, great. So I’ll rearrange it and ask the students, all right, what does this arrangement mean? What does this encourage for you as a student? And then, if you’re an instructor, how would you arrange the class? So when I show them policies about technology, including AI, I ask them, well, what do you think? What kind of AI policies have worked for you, or do you think would work?

Bryan Alexander [00:27:40]:
So we get meta about this. So, given that caveat, one of the ways that I’ve changed, I think, is that I do a lot more small group work. Now, that might sound like overkill if I’m talking about classes that are small, you know, a dozen students or so. But it’s really interesting and productive to have students in groups of, say, two or three when they’re working with AI, because it gives them a way to reflect on AI that is kind of offline, if you will. So two people are sitting down in front of Claude and they get to talk about their experience. John Warner, the writer, a good friend and wonderful writer, I think you may have had him on your program, he recommended that nobody use AI alone.

Bryan Alexander [00:28:20]:
This is where I got the idea from. He said, and I’m going to paraphrase here, that it’s “too alluring and too insidious” to use without a friend. He’s very, very critical of AI. And I thought this was actually interesting, to have students do that. And then, of course, there are all the other benefits of group work, you know: they each get to teach each other, they get to learn from their experiences, and so on.

Bryan Alexander [00:28:39]:
So that’s one thing. I guess a second thing is I do even more with simulations and games. I’ve been teaching with games, and with simulations, since the 1990s. I teach a class that’s entirely about simulations and games for learning. It’s just one of the great tools that we can use to really enhance learning. That’s not necessarily digital games; we had teaching and learning before the digital world, going back arguably a couple of millennia, to classical Rome, actually. We’ve found that not only did they use board games for teaching people statecraft, but they also used simulated organs made of terracotta or porcelain to teach medicine. So we have a lot of ways of doing this, and I enjoy doing that. So now with AI, it’s much easier to, say, ask a student to write a prompt which will take them through a simulation exercise.

Bryan Alexander [00:29:30]:
So, Bonni, did we talk about this when we were in San Diego? I can’t remember if we did.

Bonni Stachowiak [00:29:34]:
I don’t think so, I’m not recalling it. But I was so stimulated by the conversation, I kind of wish I had been taking notes voraciously, like I am right now.

Bryan Alexander [00:29:44]:
I had the same experience; it was a very, very pleasant time. Well, one of the things I did that was a lot of fun: I went to Mexico City and was working with a few hundred academics in that country. And I was asking what they were interested in. At the time, a lot of them were interested in space, space exploration. So, all right, I whip out ChatGPT and I give it a single prompt, just a paragraph, not a very complicated one.

Bryan Alexander [00:30:07]:
And I ask it, basically, “guide me through a simulated human expedition to the lunar surface.” And I structured it so that it would ask me questions. It would give me choices that I had to make, and it would respond to my choices and alter the world accordingly. So how many astronauts did I want in my spaceship? What kind of fuel mixture? What kind of escape velocity? What kind of altitude? And I worked this crowd as we took them all the way to lunar orbit. And I thought that the simulation was actually too easy and too kind, so I asked it to become a little more challenging. It immediately killed one of my astronauts, which is actually very frustrating. If you want, older listeners, I’m afraid, will think of Choose Your Own Adventure books, which is a good example of that.

Bryan Alexander [00:30:50]:
You could think as well about interactive fiction, or about Dungeons and Dragons, or any kind of tabletop simulation. And digital tools do this just that quickly. I mean, really more quickly than it took for me to explain. And so I have students do that now, which I think is very powerful and effective. I guess the other thing I do is have students, outside of class, look for examples of AI, and news about AI, and bring that to class. Now, with my wonderful colleague Eddie Maloney, I teach a class on AI and education, but I do this elsewhere, in my other seminars, too. So they can look carefully in the world for any experiences and bring them to class: if they learn about a new bit of software, a new controversy, a new use, they can bring it to class.

Bryan Alexander [00:31:39]:
Those are a couple of the changes I’ve made. But again, my teaching might just be so far off the beam that it’s not useful for anybody else.

Bonni Stachowiak [00:31:48]:
Well, it’s what’s underpinning it that feels so universal: ultimately, wanting to find ways to make things relevant. And the only way we can make things relevant is when we give up some of our desire to control, and then invite what happens in the unexpected and in those liminal spaces as well. And this actually brings me perfectly to the recommendations segment, because of what you were just talking about, and I’m so delighted that you shared that story. This can all feel so big, so my recommendation is: you gotta make it small. I invite you to think about one thing that students get confused about in your class. And you don’t even have to plan this out in advance. You could actually just be like me this last semester, realizing one of the things that students get confused about in my class. I teach every year, and have for many years, a class called Personal Leadership and Productivity.

Bonni Stachowiak [00:32:44]:
And one of the authors we draw from is David Allen. He wrote a book called Getting Things Done. And one of the things students get confused about is a diagram, and I’ll link to it in the show notes, called the Getting Things Done workflow diagram. But since you maybe don’t want to go to your show notes right now, I’m going to describe it to you, because all of you can relate to this. So, we’ve got things coming at us: we’ve got text messages, we’ve got social media, we’ve got email.

Bonni Stachowiak [00:33:12]:
We’ve got all kinds of things that are coming at us, trying to grab our attention. We also, hopefully, have ideas and dreams and hopes, big and small. And so what the GTD workflow diagram helps you do is, for everything that comes in, ask: what is this thing? Is it actionable? And then, broadly speaking, divide it up. If it’s not actionable, do I need it someday or not? Put it in the trash, or, like, when might I need this to surface, if ever? You know, a reference system kind of thing. If it is actionable, maybe it’s a calendar invite that gets added, or maybe it’s a project, or maybe we delegate it, et cetera. So students get confused about this all the time. And by the way, I shouldn’t even narrow it down to students. Human beings get confused about this all the time, because a lot of people never learned systems like this, so they don’t even know what I’m talking about.
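[For readers who think in code, the routing step Bonni describes can be sketched as a small decision function. This is a hypothetical illustration only; the attribute names and bucket labels are paraphrased from the diagram’s ideas, not David Allen’s exact terminology.]

```python
def gtd_clarify(item):
    """Route an incoming item the way the GTD workflow diagram suggests.

    `item` is a dict of yes/no attributes; returns the bucket the item
    belongs in. A paraphrased sketch, not Allen's exact wording.
    """
    if not item.get("actionable"):
        if item.get("needed_someday"):
            return "someday/maybe or reference"  # park it where it can resurface
        return "trash"                           # not actionable, not needed
    if item.get("multi_step"):
        return "project"      # too big for one action; plan it over time
    if item.get("delegable"):
        return "delegate"     # hand it off, then track it
    if item.get("time_specific"):
        return "calendar"     # belongs on a specific date or time
    return "next action"      # otherwise, do it yourself

# A text from a friend about dinner Thursday:
print(gtd_clarify({"actionable": True, "time_specific": True}))  # calendar
# A big idea that will take months of steps:
print(gtd_clarify({"actionable": True, "multi_step": True}))     # project
```

The game Bonni describes next is essentially a playful drill on exactly this branching logic.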

Bonni Stachowiak [00:34:04]:
What do you mean? Something comes at me and it’s grabbing for my attention, so I pay attention to it right then. And by the way, on my less, quote unquote, productive days, I do this too. I don’t always carve out the discipline to stay focused on, you know, what’s really important in that moment. So I went to Canva Code. There are so many systems now that will do this, you know, for you, but the one that I happened to be tinkering with that day was within the graphic design website, Canva. My friend would really bristle at me calling it a graphic design website.

Bonni Stachowiak [00:34:37]:
But for us novices, yes, sometimes we go to Canva. And so with Canva Code, I asked it to make me a game to test out, to help people just experience it. Because once you start doing it, I mean, it makes perfect sense. Of course you’d put that in your calendar. Oh, of course it would make more sense to not try to do that thing right now; that’s a really big thing, really a project that will have many steps taken over many months. And so I asked it to make the game. And at first, Bryan, it wasn’t very good, because if I tried to play it on a mobile phone, or even in a browser, and I tried to drag something off the screen, it would just

Bonni Stachowiak [00:35:12]:
the game just stops, or whatever. So I did have to iterate with it, et cetera. But I would just recommend: think of one thing. Don’t think of 70 things, think of one thing and just try it. Don’t spend more than even 20 minutes trying it. Maybe it’s a failed experiment, but that would be a nice little thing to just try.

Bonni Stachowiak [00:35:32]:
Can I make this concept easier to understand through a game, through a simulation? And it’s not as hard as it once was, but it’s also not going to be perfect, so kind of be ready for that. This may be a failed experiment, but guess what? You’ve experimented, and that can never be a failure, because failure is data. So that’s my first recommendation, and I will put a link to this game that I made in case any of you want to go try it out, just so you can kind of see what that experiment resulted in. The second thing I want to recommend: I recently took a six-week workshop with Harold Jarche on Personal Knowledge Mastery. One of the things he encouraged us to do was to get on Mastodon. I was there, but I wasn’t really there there.

Bonni Stachowiak [00:36:18]:
And he really wanted us to be there there and actually engaging in community. And I met someone named Todd, and when I say met, I mean through my use of the Personal Knowledge Mastery hashtag, who has a wonderful AI playground that I’m going to link to. You want to talk about having fun experimenting? This man is so funny. He is so playful, I find myself just cracking up. But he’s also not totally bought in. Like what Bryan was saying earlier, he’s not a zealot.

Bonni Stachowiak [00:36:57]:
Like we should all just, you know, sell all our goods and go work for the AI gods. In fact, I think he describes himself as… Oh, look, I’m literally looking at the page right now. Let me read his words instead of mine: “I am very skeptical and concerned about how AI will be used. It’s interesting what it can do, but like the television, I have doubts that we will use it well. More likely, like we usually do with techno mumbo jumbo, we will relentlessly abuse it.”

Bonni Stachowiak [00:37:29]:
And Bryan, over on the right-hand side, he has a screen grab from the movie WALL-E, where the characters are sitting in their comfy, cozy chairs, not moving around and just, you know, experiencing life in their bubble. So anyway, I want people to go to Todd’s AI playground and just go have some fun. There are probably, I don’t know, 25 or 30 little links you can click on on the left-hand side. I have not gotten through all of them, but boy, about halfway through: student surveys in song, and other stuff. He took his anonymized student survey data, put it into suno.com, and asked it to write songs, reggae songs, for him. And I mean, Bryan, how many of us, with the stress that we have and the difficulty sometimes of processing that kind of feedback… I’m over here laughing, I’m over here inspired, and also appreciative of the fact that he’s not buying all of this hook, line, and sinker.

Bonni Stachowiak [00:38:39]:
He does have a critical thought process, you know, as he’s going through all these things. But he’s actually the opposite of what we were talking about earlier. He’s skeptical, but he also sees that this is emerging, and so: I’m gonna make my little playground, I’m gonna do some playing.

Bonni Stachowiak [00:38:57]:
I’m also just delighted. I’ll put a link in case you’d like to follow him on Mastodon as well, because he’s been absolutely great to get to know a little bit, and I’m looking forward to continuing to get to know him. By the way, his background is in instructional design, so if you are also interested in instructional design, he’s a great person there too. I feel like I went on with my recommendation so long, Bryan. I’m so sorry about that.

Bonni Stachowiak [00:39:19]:
But I’m so energized by all that you’ve shared, so I get to pass it over to you now for whatever you’d like to recommend.

Bryan Alexander [00:39:25]:
Well, thank you, those all sound really exciting. You know, I’m always, always interested in a game for learning. I was going to recommend a person, an incredibly productive person named Adam Tooze, T-O-O-Z-E. He’s a professor at Columbia, a British professor who specializes in economic history, and he has now just become an extraordinary creator. To begin with, he’s written a whole series of great books, including a wonderful book on the great financial crisis of 2008. The reason I bring him up here is that he creates two different Substacks, two different newsletters, that are just incredibly detailed, where he will nerd out about political news and economic news, always with an incredibly sharp eye and an incredible depth of data, but also reflecting through lots and lots of social theory in ways that are incredibly approachable and understandable.

Bryan Alexander [00:40:20]:
And he will riff on anything from modern painting, to banking problems in Japan, to issues of railway gauges in Germany vis-à-vis German rearmament against Russia. And he’ll do this with incredible speed, infectious enthusiasm, and always a good sense of humor and visual design. And a lot of that, though not the visual design, shows up in his podcast, which is called Ones and Tooze, where he’s interviewed by a colleague and they simply discuss the news of the week, but in just remarkable detail. So the interlocutor will ask him to talk about, say, the Ukraine war, or Trump’s tariff policies, or issues with government stats, and he will just cut loose with incredible energy, lots of enthusiasm, and great clarity, really making complex issues much more understandable, and with breathtaking views on everything. He’s very progressive, but not in ways that you might expect.

Bryan Alexander [00:41:23]:
And he’s always very, very genial, and often very funny. He had a story about the Canadian prime minister, whom he’d met, which was really, really entertaining. But I find his newsletters always make me smarter; they always encourage me to be more curious about the world and to learn a great deal more. So I just unreservedly recommend Adam Tooze.

Bonni Stachowiak [00:41:44]:
Thank you so much for sharing his work; you’re getting me curious just talking about him, so I can’t wait to go check it out. That is one of my favorite, favorite, favorite things: people who can make us more curious about the world, like you and like him, like this book. I really want to recommend that people head over to the show notes, or whatever way you’d like, to get yourself Peak Higher Ed. As I told Bryan before we started recording, it’s one of those books that is just so nourishing. It’s challenging in the best ways.

Bonni Stachowiak [00:42:16]:
It’s hopeful. I mean, there’s just so much. I told him there were like 20 books in there, in the sense of 20 conversations; we would just be getting started. So thank you for this rich contribution to our discourse on really important issues.

Bryan Alexander [00:42:29]:
Well, thank you. I especially appreciate the chance to talk to you, because today is Friday, December 5th, and late yesterday, December 4th, I finally got to hold in my hands the first print copy of the book. So I’m still very excited to see years of research turn out in that beautiful format. Johns Hopkins University Press did just a fantastic job in everything from editing and copy editing to design and outreach. It’s a really wonderful press, and it’s a delight to work with them.

Bonni Stachowiak [00:42:58]:
I was so privileged to get to read a digital copy in advance, but I gotta tell you, I’m looking forward to getting my hands on the print copy as well. There’s kind of nothing like that. I mean, as an author, of course, Bryan, but even as a reader, there’s just something that will always be different about that. So I’m looking forward to that as well. Thank you for sharing that, that’s wonderful. And congratulations on such a magnificent achievement.

Bryan Alexander [00:43:21]:
Well, thank you for the very kind words, and also thank you for the opportunity to speak with you once more. Your podcast is always a nourishment to my soul, and I’m really glad and honored to have a chance to return to it.

Bonni Stachowiak [00:43:31]:
Thanks, Bryan. Thanks once again to Bryan Alexander for being a guest on today’s podcast and for this rich conversation. Thanks to each of you for listening to today’s episode, which was produced by me, Bonni Stachowiak. It was edited by the ever-talented Andrew Kroeger. If you’ve been listening for a while and haven’t signed up for the weekly update, head over to teachinginhighered.com/subscribe. You’ll receive the most recent episode’s show notes, as well as some other goodies that go well beyond those notes. Thanks so much for listening, and I’ll see you next time on Teaching in Higher Ed.

CC BY-NC-SA 4.0 Teaching in Higher Ed | Designed by Anchored Design