
Teaching in Higher Ed

EPISODE 607

An E-Bike for the Mind: AI, Augmentation, and Moral Hazards with Josh Brake

with Josh Brake

January 29, 2026

https://media.blubrry.com/teaching_in_higher_ed_faculty/content.blubrry.com/teaching_in_higher_ed_faculty/TIHE607.mp3


Subscribe: Apple Podcasts | Spotify | RSS | How do I listen to a podcast?

Josh Brake shares metaphors and other ethical considerations regarding AI on Episode 607 of the Teaching in Higher Ed podcast.

Quotes from the episode

“When you're moving fast, it's really easy to do things unreflectively and to make a poor decision without even realizing it.”
-Josh Brake

“The special thing about bicycles, at least in their non-electronic versions, is that they're totally human-powered. So it's all based on the energy that you put in, and it's just transforming that energy, to make you more efficient and be able to move faster.”
-Josh Brake

“When you have something like an e-bike, that augmentation can be used in a variety of different ways, so it can be used to actually extend your capacity.”
-Josh Brake

“It's really this question about what's the intention that you're bringing to the technology when you come to the tool, what are the questions that you're asking? And fundamentally, it's a question of purpose and intention. Why are you using this?”
-Josh Brake

Resources

  • An E-Bike for the Mind: E-Bikes and What They Can Teach Us About AI, by Josh Brake
  • I Grew Up Oblivious About Grades. It Ruined Me. Now I’m on a Mission to Ruin You too, by Josh Brake
  • The Moral Hazards of AI Are Closer Than You Realize, by Josh Brake
  • We Are Teaching Humans: A 50,000-Foot View As We Enter a New Academic Year, by Josh Brake
  • On Bandwidth and Bottlenecks: AI Tools Help Us Go Faster, But Speed is Not All You Need, by Josh Brake
  • Technique’s Deception: How Jacques Ellul Helps Us Understand the Difference Between Education and Schooling, by Josh Brake
  • Clip – Final Advice from Suborno Isaac Bari
  • The Real World of Technology, by Ursula Franklin
  • Player Piano, by Kurt Vonnegut
  • College Matters Podcast


ON THIS EPISODE

Josh Brake

Associate Professor of Engineering

Josh Brake loves teaching, big imaginations, and the next generation of engineers and entrepreneurs. He is an Associate Professor of Engineering at Harvey Mudd College, where he works to cultivate the next generation of engineering leaders, helping them couple technical excellence with an awareness of the impact of their work on society. He is also a Venture Partner with Praxis, joining their mission to advance redemptive quests with a specific focus on how artificial intelligence plays a role. At night he writes, and on Monday evenings you can find him finishing up the next post for his Substack, The Absent-Minded Professor, where he writes weekly manifestos about technology, education, and human flourishing. You can find out more about Josh and his work at www.joshbrake.com.

Bonni Stachowiak

Bonni Stachowiak is dean of teaching and learning and professor of business and management at Vanguard University. She hosts Teaching in Higher Ed, a weekly podcast on the art and science of teaching with over five million downloads. Bonni holds a doctorate in Organizational Leadership and speaks widely on teaching, curiosity, digital pedagogy, and leadership. She often joins her husband, Dave, on his Coaching for Leaders podcast.

RECOMMENDATIONS

Clip - Final Advice from Suborno Isaac Bari

RECOMMENDED BY: Bonni Stachowiak
The Real World of Technology, by Ursula Franklin

RECOMMENDED BY: Josh Brake
Player Piano, by Kurt Vonnegut

RECOMMENDED BY: Josh Brake

GET CONNECTED

JOIN OVER 4,000 EDUCATORS

Subscribe to the weekly email update and receive the most recent episode's show notes, as well as some other bonus resources.


Related Episodes

  • EPISODE 465Mind Over Monsters

    with Sarah Rose Cavanagh

  • EPISODE 114Engage the Heart and Mind Through the Connected Classroom

    with Ken Bauer

  • EPISODE 401The Problem with Grades

    with Josh Eyler

  • EPISODE 533Even More Problems with Grades

    with Josh Eyler

  


Episode 607: An E-Bike for the Mind: AI, Augmentation, and Moral Hazards with Josh Brake

Bonni Stachowiak [00:00:01]:

Today, on episode number 607 of the Teaching in Higher Ed podcast, An E-bike for the Mind: AI, Augmentation, and Moral Hazards with Josh Brake 

Bonni Stachowiak [00:00:15]:

Produced by Innovate Learning, Maximizing Human Potential. 

Bonni Stachowiak [00:00:22]:

Welcome to this episode of Teaching in Higher Ed. I’m Bonni Stachowiak, and this is the space where we explore the art and science of being more effective at facilitating learning. We also share ways to improve our productivity approaches, so we can have more peace in our lives and be even more present for our students. In today’s episode, I welcome Josh Brake to Teaching in Higher Ed. Josh is an associate professor of engineering at Harvey Mudd College, where he teaches digital electronics, embedded systems, and optics. Beyond the classroom, he writes The Absent-Minded Professor, a weekly newsletter exploring the intersection of technology, education, and human flourishing. We begin with a memorable bike ride into the San Gabriel Mountains and use it as a metaphor for our relationship with AI.

Bonni Stachowiak [00:01:27]:

Together, we explore what artificial intelligence amplifies, what it may quietly amputate, and how asking better questions, often with the help of one another, can guide us through the moral hazards, closer than we might realize. Josh Brake, welcome to Teaching in Higher Ed.

Josh Brake [00:01:46]:

Thanks so much, Bonni. It’s really a pleasure to be here.

Bonni Stachowiak [00:01:49]:

In early January, I got a chance to keynote for the Lilly Conference in San Diego, and it was so fun seeing people, and they always tell me where they listen to the podcast. I get to go for hikes with people, I get to wash dishes with people. There’s a lot of laundry being done. But you’re gonna take us on a little bit of an adventure of our own. So, anyone listening who’s maybe not exercising right now, I don’t know if it works this way, Josh, but can you take us on a bike ride, and we actually get to burn calories with you, or does the science not work that way?

Josh Brake [00:02:20]:

Yeah, I don’t know that I’ll be able to take you on a bike ride, but I’ll tell you a story of a bike ride.

Bonni Stachowiak [00:02:24]:

All right.

Josh Brake [00:02:25]:

So when I was doing my PhD at Caltech in Pasadena, one of the things that I got into during my time there was riding my bike. There's a lot of great spots to ride in Southern California, generally speaking, but around Pasadena, I'd do laps around the Rose Bowl, which was super fun. And then if you go just west of the Rose Bowl, so the Rose Bowl is in this kind of beautiful spot, right where you're just to the south of the San Gabriel Mountains, and so you have this beautiful view up of the San Gabriels. And just to the west of the Rose Bowl, there's a nice spot where you get to do a little bit of climbing. So cyclists are all slightly masochistic at heart, I think, and so you always want to put in a hard effort and climb hills. And so, whenever I'd want to get out of the bowl, which is relatively flat, I would go up to the west into La Cañada, and there's these beautiful kind of side streets and neighborhoods with unbelievable mansions up there.

Josh Brake [00:03:21]:

And as you go, you can kind of wind your way up. There's two different ways to go. You can go up the back way, which has a little bit of a shallower incline, so it's not quite as steep, and then you get to come down Lida Avenue, where you just come flying down because it's really pretty steep and fun. Or you can go up the steep part instead. And so every once in a while, you just got to mix it up, go one direction or the other.

Josh Brake [00:03:53]:

But I just remember this day, where I was climbing up the steep way. I decided, for whatever reason, to go up the steep way, and just really cranking, like, you get your heart rate up, and you're pushing as hard as you can. You're getting out of the saddle, trying to get to the top, and there's this really great payoff at the top where it just opens up to this beautiful view of the San Gabriel Mountains. And it's somehow made better by the fact that you're really cranking to get up there. And I just remember cranking away hard, heart rate probably at like 160 or something like that, like, really getting into it. And I just had, like, these two older women just pass me,

Josh Brake [00:04:33]:

kind of like cruising by, up the mountain, and I just had this sense, like, e-bikes, they must be on e-bikes, that must be what's going on. And, you know, I don't know if that's me just trying to rationalize, and, like, I'm just getting cooked here by some really fit older ladies, which could have been the case. But that was one of the first instances and engagements I had with e-bikes. And I think that it provides this really interesting metaphor as we think about AI, and the kind of electronic assistance there, which, there's all kinds of different takes that people have on that.

Bonni Stachowiak [00:05:06]:

Yeah. By the way, you’re sharing little glimpses of some of the stories that you’ve shared in your newsletter, and I’ve so enjoyed your writing and such a delight to get to talk to you today. Where I thought this story was heading when you shared it in your writing was just an absolute hatred of E-bikes, and in a very binary, sort of dichotomous way of thinking. But it’s a lot more nuanced than that for you. So tell us, what’s the twist in your story? What happened about your feelings and thoughts about E-bikes?

Josh Brake [00:05:41]:

Yeah, well, I mean, I think that the E-bike question is really interesting, and I think the reason that I found it such a helpful and challenging, and really generative metaphor to consider as we think about AI, is that E-bikes in many ways are a way of augmenting an existing mechanism. And Steve Jobs famously called the computer, his vision was the computer as the bicycle for the mind. And it’s really a beautiful metaphor because, so he has this plot, and if you go and you look, there’s this paper where they plotted essentially the efficiency with which locomotion happens on the vertical axis, and they put the mass of the transportation method on the horizontal axis. And if you plot these log-log, the interesting thing that you see is like the most efficient use of energy, locomotion per unit mass, turns out to be man on a bicycle. So if you look, you can see everything they plot from birds to jet planes on this plot.

Josh Brake [00:06:44]:

And then there’s like this one little outlier, which is all the way down, which is where you want to be on the vertical axis. Man on Bicycle. And Jobs was inspired by this in thinking about the way that a computer can be used in the same way as a bicycle actually allows you to more efficiently be able to move from place to place, as opposed to driving a car or walking. It’s obviously faster than walking. To use the bicycle as a way to extend our human capacity. And I think the special thing about bicycles, at least in their non-electronic versions, is that they’re totally human-powered. So it’s all based on the energy that you put in, and it’s just transforming that energy to make you more efficient and be able to move, move faster. The kind of interesting thing with E-bikes then is you add this additional piece in here, which is you have electronic energy, you got a battery on there that’s going to help you to be able to move. And there’s, there’s this kind of question I think we have to wrestle with, which is, 

Josh Brake [00:07:52]:

in what ways is the e-bike actually helping to extend or augment our capacity? And in what ways is it removing the need for us to actually put effort in? And so, kind of the turning point for me in all of this is, I had that first experience with e-bikes, and then I was reminded of it, because then I crossed over to the dark side or the light side, depending on how you look at it, and got an e-bike earlier this year. I got a cargo e-bike, which is great, so as I was tooling around town on it, I could put my two older kids on the back, and then if I'm really looking to get everybody going out, I have a bike trailer, which I'll put the baby in, so all three of us can get around town on the bike. But I think one of the things I recognized, which really connected back to that original experience of riding and seeing what I think were these older women on e-bikes coming past me, was that when you have something like an e-bike, that augmentation can be used in a variety of different ways. So it can be used to actually extend your capacity. So you could think about, let's say, for example, those women: that is a really amazing bike ride, to be out in nature and be able to get up to the top. And like I said, it's this beautiful view from the top, to see the San Gabriels from that spot.

Josh Brake [00:09:13]:

And if they didn’t have E-bikes, it’s very possible that they wouldn’t be able to actually experience that on a bike, right? To be out and going up there, maybe the only way you could experience is by driving up there. But the E-bike actually, in some way, opened them up to be able to experience that and have that experience on the bike. And I think in the same way, my own E bike experience this year, there’s so many places now that I can get around town with my kids, that I just wouldn’t be able to do on an E bike, or I’d have to take the car, have to put them in the car. But if you have an e-bike now, this makes it actually feasible for you to be able to get around town and not have to drive as much. So I think as we think about AI and we think about technology and computers, there’s this way of thinking about the augmentation aspect that I think is, we often just jump to this conclusion that it’s just allowing us to do new things. But I think there’s also a way where it can actually offer us a way to replace the way we are currently doing things, and be able to reclaim and repair something.

Josh Brake [00:10:22]:

So I think it’s complicated. The other thing I’ll say about the E bikes that I think is an interesting thing is, this gets a little bit into the nitty-gritty, but if you think about it, there’s multiple different ways to ride an E bike. You have the Class 1, Class 2, Class 3, there’s a variety of classes. And they’re classified based on, among other things, whether or not they have a throttle or if they’re just pedal assist. And I think there’s a real lesson there as we think about how we design and use our AI tools. 

Josh Brake [00:10:51]:

And one kind of framing question that I've been thinking about is, am I using this as a pedal assist? Like, in other words, is it responding to the amount of effort that I put in, or am I just pushing down a button and having it go? Does it require effort? Does it require me to actually be engaged? And I think that's the place where that sort of line can help us to distinguish between the potentially beneficial uses that can be for our flourishing and the uses that I think can take us in directions where we might not want to go.

Bonni Stachowiak [00:11:22]:

I'm so glad that you brought up that distinction at the end there. One of our children has been advocating to get an e-bike for quite some time. I guess I just gave away which one it is, darn it! I was trying to protect their identity. He is such a great influencer, and he has brought up this distinction with me: well, what is it that you're concerned about, Mom? Tell me a little bit more about why. And for me, oftentimes it is this.

Bonni Stachowiak [00:11:50]:

And one thing that you didn't bring up, I mean, you sort of started to allude to it with these two women, is that I just feel like, my gosh, when we're going to get outside and we're going to move our bodies, let's, like, not get assistance where assistance is not going to be particularly helpful for building strong muscles and burning calories, etcetera. So for me, that's part of it. But he has brought up this distinction that you just brought up, and first of all, it just goes to his wonderful influence skills, and he's really thought about this. But to me it also enters into the conversation,

Bonni Stachowiak [00:12:20]:

I mean, he's so embodying what Stephen Covey wrote about all those years ago in his work, about seeking first to understand before being understood. And the fact that he already knows how to do that at his age, well, he obviously still has an opinion about whether or not he should be able to get one. But his opinion can also then be communicated in such a way as to have first understood what my concerns are, and I just find that lovely. And I didn't think about this in preparing for the conversation, but when you were talking about the Steve Jobs quote, the computer as a bike for the mind, another quote kept coming to me, because of course we hear from so many people with so many understandable concerns about artificial intelligence, and the ways in which, you know, there are environmental concerns. There are concerns when the computer isn't just a bicycle for our mind, but it's taking over our mind, and not allowing us to exercise it and build up those cognitive skills. But anyway, the quote that kept coming to my mind, which I suspect many listeners will have heard, is, “A woman needs a man like a fish needs a bicycle.” And so sometimes, what it was reminding me of as I kept chuckling, is sometimes we don't even know what we're talking about.

Bonni Stachowiak [00:13:45]:

So we're arguing whether or not we should or shouldn't use it, whether our students should or shouldn't use it, and it's such a vast topic! I mean, so, how do we begin having conversations with more nuance? And one of the questions that you encourage us to ask is, to what image is our technology conforming us, as one lens that we might put on, broadly speaking. So what comes to mind for you today? If I asked you on a different day, I might get a different answer, but today, what's coming to mind for you as you're asking that question?

Josh Brake [00:14:22]:

Yeah, you know, it's so interesting, because I think one of the best metaphors that has really helped me to think about artificial intelligence and the AI tools that we're increasingly bombarded with today is: they're amplifiers. So they take a certain thing that you're doing, and then they amplify it, right? It allows you to move faster, it allows you to do more, it allows you to get farther with the same amount of input energy. And you can think about the e-bike as doing that too, right? It's amplifying your input, especially in the pedal-assist version, which, like, I think is the purest version of the e-bike, right?

Josh Brake [00:15:04]:

It still requires you to actually put in some effort, but it can be used for different purposes, right? So, you could use an e-bike to replace your use of a car, right? I think that's a great use of an e-bike.

Josh Brake [00:15:19]:

Right? That’s what it does for me in many ways, like, there are a lot of places now where I’m like, oh, I have to drive there, it’s like, oh, no, wait, actually I can take the E-bike. I think it’s better, better for the environment, better for me, better for everybody. Then, there’s other places where I think our intention is super, super important, right?

Josh Brake [00:15:37]:

Because when you have this amplifier now, I kind of think of it like, I've been trying to kick around different ways of thinking about this, but I think one thing that's kind of interesting is, if you're moving faster, the direction that you're moving gets more and more important, because you can more easily get off track. Right? So if you're, like, a degree off in terms of your heading, and you go 10 miles instead of 1 mile, now you're going to overshoot, or you're going to be way further away if you didn't have that heading exactly correct. And I think that that is the key challenge. As we think about what image your technology is conforming you to, it's really this question about what's the intention that you're bringing to the technology. When you come to the tool, what are the questions that you're asking? And fundamentally, it's a question of purpose and intention. Why are you using this? And I think for many people, it's just born out of this externally imposed desire to move quickly, and to get more done. But I think that most of us kind of stop there.

Josh Brake [00:16:40]:

Okay, fine, get my stuff done faster, without recognizing that there's these external things. It's like it's not actually about getting that done faster, because something else is just going to fill in the space where that was. What are we doing? Why are we doing it? And I think as we think about what image it's conforming us to, one of the things that I think a lot about is, what is the direction that I'm heading? Why am I doing this work in the first place? And I think about this a lot in the classroom. One of my kind of tenets with AI in the classroom, with my students, is just go for the jugular, essentially. I tell them on the first day of my embedded systems class, I kind of try to keep pace and keep updated with what the latest tools can do, and so I just take a screenshot, basically, of the first assignment and throw it into whatever the latest model is, and say, like, hey, do this thing for me, right?

Josh Brake [00:17:33]:

And it got to a point, this was last year, actually, when it was maybe fall of 24. It solved the whole lab flawlessly, wrote the code perfectly, done. It’s like, okay, well, what are you going to do in this moment, right? And I think for a lot of educators, their decision is, well, I got to change the assignment somehow. I got to get rid of this. And, like, yeah, sometimes that’s what we should do. But other times, actually, the assignment is good.

Josh Brake [00:17:57]:

The things that it is helping you to do are good as well, and just because AI can do it actually doesn't mean that it's not worth doing. My friend Andy Crouch always has this kind of line, like, you don't bring a forklift to the gym, right? There's no sense in having a machine raise the weights. The whole point is that you get in there, and you struggle, and you push, and you learn. It's the reason that you use pedal assist instead of the throttle to get up the hills. The point is actually you engaging and growing from it. And so that, I think, is the question that we have to be asking ourselves today about artificial intelligence and our use of it: why are we using it? What's the point of using it? And if it's going to allow us to generate a product that's truly useful, independent of the effort, it's not about saying, well, we have to do it this particular way, because that's the way we've always done it.

Josh Brake [00:18:49]:

And, you know, we have to be careful not to just get caught up in, well, we do it this way because we’ve always done it this way. But I think that if we know why we’re doing it, that is a really revealing question for us to really wrestle with.

Bonni Stachowiak [00:19:01]:

Another really big area that you express concern about has to do with the moral hazards of artificial intelligence, and you've written that those moral hazards are closer than we realize. I'm gonna ask you a totally unfair question, 'cause I'm gonna ask you to pick one, and I know you could list tens of them and just be getting started. But what's one that's coming to mind for you today? Even though I know tons are hitting the headlines, for those who may not be familiar with moral hazards, give us an example of one and how you're thinking about it today.

Josh Brake [00:19:36]:

Yeah, it's really connected to what I was saying before about the dangers of speed. When you're moving fast, it's really easy to do things unreflectively, and to make a poor decision without even realizing it. And I think for me, one example that I've kind of experienced myself, honestly, is you have all these new capabilities that these tools enable, and in many ways you're just like, well, I can do this, and you don't even think deeply about whether you should be doing that, right? So it's this, like, can-and-ought sort of thing. And so, for instance, it's never been easier to rip somebody else off, to rip off their content, right? And it's complicated ethically; all of these things are really nuanced. I think if you hear people coming away with a very black-and-white picture of it, you probably should ask a few deeper questions, right? Even on the environmental concerns, the question is not, for me, is this good or bad for the environment? It's, what are we using it for? What's the value of it? And how does this compare in context, right? Like, when somebody says, oh, ChatGPT takes X amount of water to do this, it's like, well, how much water does it take for us to watch Netflix? Or be on a Zoom call? We have to think about all these things in context.

Josh Brake [00:20:51]:

But because AI allows us to move so quickly, it makes it really, really important, and this gets back to the intention thing, that we really be in tune with what we're doing and take some space to reflect. As we think about software products, AI coding, I think, is kind of the vanguard in many ways of where the tools are being applied. And a lot of times, when you're trying to build out an app or a new idea, you'll try out a bunch of them, and you'll be like, ah, this doesn't quite fit what I want. You've tried a bunch of different ones, and nowadays the move is like, okay, well, you can try them out, you could try to find one that fits all your needs, but maybe you just build your own, because you can just do that, it's faster. And one of the things you could imagine doing now is just like, well, actually, I like this and that and the other thing, like, I want inspiration from this.

Josh Brake [00:21:39]:

But you don't have to go and actually look at the apps, and try to get a sense of, well, what do I like about this? What are the features that I want? And try to, like, build it yourself. You could just take a screen recording of the app, click through a bunch of screens, look at a bunch of stuff, do that across a bunch of different apps, and then you can just throw that into your favorite AI tool, Claude, or whatever else, and say, hey, here's a video of this thing, build me a prompt to create a copy, basically a version of this that adds this additional feature and removes this and whatever. And it's really easy to just do that without even thinking about it and being like, oh, wait, wait a minute, that's somebody's intellectual effort that went into that. And what is the right amount of attribution? What's the line? And, you know, the artistic community has thought about this for a long time in terms of inspiration versus plagiarism, essentially.

Josh Brake [00:22:28]:

But, and there are so many moral hazards, I'm going to have to try to rein it in, the other thing I think we just have to be asking ourselves at some level, too, is that every time we use these systems, we're somehow implicated in the moral decisions that have been made in building them. And that's true for everything that we use, right?

Josh Brake [00:22:50]:

That's the thing that's tricky: you start to see, well, there's actually no free lunch. We're kind of implicated in a broken world. But we have to answer these kinds of questions, like, where is the line? What's too much? How do we think about dealing with those questions?

Bonni Stachowiak [00:23:05]:

I so appreciate that. You're reminding me so much, maybe it's all the talk earlier about bicycles, of conversations with our kids. I have decided in the last, I guess, year and a half or so, not to shop at a particular store. And I'm always trying to explain to the kids why I have decided to do that. It is one small thing. And we shop at other stores, and particularly online stores, and I know that there are some ethical concerns I have there too.

Bonni Stachowiak [00:23:35]:

So it’s, you know, the world is big, there are so many problems that are out there. So I think as human beings, the goal would be that we are living out our values, while also recognizing that there are more systemic things at play, but not wanting to do nothing. So it’s kind of that wrestling that we do, and you offer for us a glimmer of hope, and some sort of an antidote against these issues that go to the core of our integrity and who we are. What have you found to be the greatest resource for you for working against these?

Josh Brake [00:24:07]:

Yeah, right. I think the best thing is other people to whom you are willing to give visibility into the decisions that you're making and the things that you're doing. And you have to actually give them permission, and you may actually have to explicitly ask them, to be able to critique and speak into what you're doing, because a blind spot is a blind spot for a reason. You can't see it, right? And so the only way you can really fix that is to get somebody else who can see into your blind spot. And then you have to give them the permission, and have the relationship with them, so that they are willing to speak up and say something. And so I think that's what's been helpful for me.

Josh Brake [00:24:49]:

And with some of these, like, I'll be sharing these different things and have somebody say, well, hey, did you think about this or that? And that's where I think the real conversations and decisions have to get kind of ironed out, because there is no black and white in so many of these instances. It's shades of gray, and it's trying to decide, what are my values in this space? How can I live those out? Am I balancing all these different things? And I think the other thing I'd say is, just be aware that the moral hazards are there, and yet you can't be frozen, I don't think, by that. Because otherwise you can't do anything, you couldn't use anything, you couldn't…

Josh Brake [00:25:33]:

It's like everything has a cost associated with it. And so you have to come up with, really, fundamentally, almost a philosophical stance on what you believe, and why, and how you walk that out in complex situations.

Bonni Stachowiak [00:25:50]:

Before we get to the recommendations segment, I should say the official ones, I wanted to take just a moment to share about another podcast that I think listeners to Teaching in Higher Ed might enjoy, and that is College Matters from the Chronicle. College Matters is a weekly show from The Chronicle of Higher Education, and it's a great resource for news and analysis about what's happening in higher education. You'll hear discussions with some of the Chronicle journalists that offer perspectives on what's going on with the current administration in the United States, insights about how faculty and students are adapting to technological changes, and a few episodes that might be particularly interesting to our listeners would be ones about students and how they're trying to use artificial intelligence in ethical ways. There was an episode you might want to check out on grade inflation, and also on some of the reading struggles of today's students. So I just want to encourage you to check out College Matters wherever it is you get your podcasts. And this is the time in the show where we each get to share our recommendations, and I'd like to share two today. The first I'd like to share is actually a second clip from the same person.

Bonni Stachowiak [00:27:03]:

I'm getting so much out of this guy. So Hasan Minhaj interviewed someone named Suborno Isaac Bari, and you would have seen this if you're listening to every episode; I shared a clip earlier. But this young man is 13 years old, and in this particular clip, he really senses this urgency more than ever. He says, “Don't let AI live your life for you.” And it's such a jolting thing to see a 13-year-old with all the wisdom that he has. He's already graduated from high school, he's already written books, and he has earned recognition for his extraordinary talent in math and physics. And in his talk, in this clip of a broader interview that Hasan does with him on artificial intelligence, he cautions people against over-reliance.

Bonni Stachowiak [00:27:55]:

And he says relying on it too much will make our lives feel meaningless, which is pretty amazing to me, coming from one of the youngest prodigies we have in the world today. His warning serves as a good reminder that while AI can help us, it should never replace our own thinking or efforts. And at the end of the clip, Hasan says, is there any advice you have for me specifically? And I thought this was so fun; he ends the talk by saying, don't be afraid of failing. And he says, you know, just keep going, and it really was so encouraging. It feels like he's talking directly to you, even though in this case he's answering Hasan's question. It's so fun, talking about staying curious too.

Bonni Stachowiak [00:28:37]:

So it's just a lovely clip, a couple minutes; I think it would brighten your day and be enjoyable for you to watch. The second thing I want to mention is, I've made mention of things like this before, but every time it happens, it just brings me a little bit of joy. I can be too hard on myself sometimes, especially as a term or a semester is starting, and I'm trying to do all the things before it starts. And then at some point, the clock and the calendar just get us back into a sense of reality. So I'm always telling people, you know, just pick one thing. Don't pick 17 things.

Bonni Stachowiak [00:29:08]:

Pick one thing, so I was taking my own medicine recently. I have this open textbook, which I’ve both written and also curated, for business ethics students. And I enjoy putting clips in there from popular media. So the students in this particular case, this is very early in the book, it’s very early in the class, and it’s interesting, Josh, because it connects to something you said earlier, just about,

Bonni Stachowiak [00:29:32]:

I'm trying to get them, first of all, to realize that in this particular case, there are ethics, you know, the study of what the right thing to do is. So before we can even get into the grays, they watch a clip from this television show called The Good Place. And if you're not familiar with The Good Place, I have recommended it on the show before. The characters earn a certain amount of points in order to get into The Good Place, so it looks at kind of one's fate and what happens to us, based on a very transactional way of looking at things.

Bonni Stachowiak [00:30:03]:

And so anyway, the students would have just, in this textbook, watched a clip from The Good Place. And then the next part of the book is sort of unpacking some other definitions, too. So definitions of ethics, definitions of business ethics. Here's my advice to you today: if there's something that students get confused on, that's a great opportunity to do something interactive, rather than them having to read a bunch of paragraphs that are pretty dense about something they already didn't understand, and that, from the past times when you've taught it, has been hard for them to understand. And in this case, the problem was that they often don't understand the definitions: something could be ethical or unethical, and it could be legal or illegal.

Bonni Stachowiak [00:30:49]:

And they often don't see how these pairings work. How could something be ethical but illegal? Or how could something be unethical but legal? And so rather than, again, having them go through a dense set of paragraphs, since I already knew that they had had some confusion here, I decided to make a game. And one of the things that I'm learning about artificial intelligence is that we don't just want to go to whatever tool of choice it is we're using. In this particular instance, I was using Google Gemini. So I don't want to just go to whatever tool it is and say, make me a game.

Bonni Stachowiak [00:31:23]:

Make the game based on the television show, whatever, and come up with some different scenarios, because that's too many steps at one time. So you want to ask it to give you advice on how you can then ask it to make a game for you. So picture two broad phases of this project, and within each phase, you know, me going back and forth a number of times with it. So the thing that was so fun for me, first of all, is it really didn't take that long, because, tick tock, I can't burn the whole class to the ground like I'm always tempted to do; it was just getting better at this interactive aspect of it. When I went to play the game, the first thing I was presented with was an emoji of a blonde woman, and I don't know, did it make the emoji because I happen to be a blonde white woman, or did that just happen to be the one it picked? I would have no way of knowing, but I thought, oh, it would be fun for it actually to be some of the characters from the show; that would be a fun thing.

Bonni Stachowiak [00:32:23]:

It didn't actually use the names of the characters from the show, but if you watched it, you would know that Eleanor, one of the characters, loves shrimp. And if you didn't know that Eleanor loves shrimp, it's still funny, because the first screen in the iteration I asked it to do was to let the person pick the emoji that they want to play as. And then I kind of twisted it up a little bit, where I was like, well, winning doesn't have to be getting the most points; maybe winning for them is getting the least amount of points. So I just had fun with how to make it a little bit more playful, while at the same time helping people understand something that's normally confusing early in the class. So my second recommendation is: pick one topic, pick a concept, pick something that your students frequently get confused on or have trouble with, and try this two-phased approach to making some kind of a game. And in this particular instance, by the way, what it produced for me was a file that I just saved as an HTML file.

Bonni Stachowiak [00:33:24]:

It's basically a file of code; I can sort of read HTML, in that I know it has starting tags and ending tags, but I'm not by any means an expert at it. But I was able to just upload that to my account, a free account on GitHub, which is a place where you can store things, and store different versions of things. And before I knew it, I was up and running and had a fun game to play. So, by the way, I'll put a link to my game if you want to play it, and if you do end up coming up with a game, I would love to see yours.

Bonni Stachowiak [00:33:59]:

It'd be fun to explore some topics, maybe ones that students get confused on in your class as well. So, Josh, I'm going to pass it over to you for whatever you'd like to recommend, and if you have any advice for people who want to play with this, feel free to share that before your recommendations too.

Josh Brake [00:34:13]:

Great. Yeah, I have a couple of recommendations. I'll start with a book and an author, a woman named Ursula Franklin, who is one of my heroes. She was a Canadian physicist, and she gave a set of the CBC Massey Lectures back in 1989 that are truly fabulous. There's many cool things about these, but she was a physicist, a metallurgist, and she was a critic of technology, too. And really, more a critiquer of technology than a critic of technology. She used technology all the time in her scientific work.

Josh Brake [00:34:49]:

But I think what has been perhaps the most illuminating to me over the last number of years, as I've been trying to really understand what generative AI is and what it means for us, is that many of the questions that we're asking today about generative AI were anticipated. If you go back and read the technological critics of the mid 20th century, the Marshall McLuhans, even Wendell Berry, Neil Postman, C.S. Lewis, Ursula Franklin, Jacques Ellul, all of these folks who in many ways were seen to be very negative about technology, a lot of the concerns and critiques that they offered almost seem like they're written about generative AI. And Ursula Franklin, in particular, I think is a wonderful gateway to these thinkers and their general frames of thought, in many ways because the CBC Massey Lectures, because they were delivered orally, have a very concise form factor to them. So you can find these for free on the Internet Archive, you can buy a copy wherever you buy books, you can get a printed copy of the lectures. But I think what she just highlights time and time again are these very helpful distinctions between prescriptive and holistic technologies: are these tools actually splitting our work up into little chunks and pieces that then get distributed, kind of assembly-line style? That's a prescriptive way of building something, versus a holistic style, which is more craftsmanlike.

Josh Brake [00:36:22]:

And what's, I think, particularly fun to think about is that AI in many ways could be used either way. And so there are ways that AI is being used prescriptively, and there are ways, often to create the same thing, where you could make the argument it's actually being used holistically. So I just think that she's a really rich person to wrestle with. The other thing that I think is very sweet about the lectures is, you can actually go and find the recordings of her giving them. And so you can not only read her and see her voice in the written word, where you can still tell that the orality and brevity of an orally delivered lecture come through, but you can also actually hear her voice, which I think is pretty neat. Another recommendation I would give is another book. I'm in the middle of it now,

Josh Brake [00:37:14]:

I haven't quite finished it, but it's a Kurt Vonnegut book called Player Piano. And it's very interesting; it's kind of set in this space where there's the Homestead, and then there's Ilium. And it tells the story of this plant manager who basically is in charge of the machines. And as the story evolves (it's a novel), you start to get more and more of a sense of what I would describe as a deep spiritual struggle that the main protagonist has with this increasing machinery, right? Just taking over,

Josh Brake [00:37:53]:

and you see, through the vignettes and the stories that Vonnegut tells, the way that, increasingly, the work that humans have been doing actually gets taken over and a machine does it. And then the human who used to do that work has now essentially lost it. And I think it's interesting; he wrote it in the mid 20th century, so before AI, but certainly in this era where we're starting to think about the mechanization of things. And I just find so much value in what sci-fi enables us to do. It seems like many of the big tech founders don't really understand that many sci-fi books are actually dystopian in nature. They seem to think that they're utopian, and they somehow miss that point. But you start to see, actually, in these dystopian fictions, a real window into the moral and ethical and really deeply philosophical and spiritual hazards that I think we are starting to experience now. But it allows us to kind of pick ourselves up from where we are now and put ourselves 10 years in the future, perhaps, and hopefully then that could give us some wisdom as we think about the choices that we make now.

Josh Brake [00:39:02]:

From a tools perspective, this is sort of a yes-and on top of your suggestion before about building these games. I think that there's a real opportunity for educators especially to build bespoke games, apps, web apps. I think people think of a web app as this thing that's hidden behind an LMS, where you've got all this complexity, and it's measuring all these things, and you've got to use this. But the beauty of some of these AI tools, and my current favorite, which I think is generally recognized to be kind of the frontier model right now, is Anthropic's Claude model, although Google's Gemini is quite close too, is that a lot of them now allow us to very quickly prototype little apps, games, widgets, that I think can be really useful interactive tools for students. And like you were saying before, these don't need to be hooked up to anything.

Josh Brake [00:39:53]:

Essentially, it's like an HTML page, so you can just think about it as a very dynamic static webpage. And the beauty of it is that you have the ability, by just writing text about what you're doing in class, or even putting in some of the content that you're dealing with or thinking about, to actually tune it to the types of things that you want your students to engage with or wrestle with, the things they might be struggling with. So, Claude Code has become an indispensable tool for me in my own work, and I think that many in academia have likely not played around with it. But I think it's important for a couple reasons. One is that it can do things for you that I think can be very helpful, putting you in a position where you can really curate relationships with your students.

Josh Brake [00:40:40]:

And it also gives you, I think, a taste of where the current capability of these tools is. And my sense, in talking to many colleagues, is that, in general, academics' understanding of what the current capability of AI tools is, is very out of date. And I think it's a dangerous thing to be in that space and not have a sense of what the tools can do, both for good and ill. Maybe the last recommendation I'll give, just to circle back, is: try out an e-bike, they're quite fun. And as for cargo e-bikes, it's pretty safe to say a cargo e-bike has really changed my life in this last year since getting one. It's super fun; I think I've probably reached its capacity now.

Josh Brake [00:41:20]:

This last weekend, I had my son's soccer stuff on there, and his baseball tryout stuff, since his tryouts were right after the soccer, and I had the two kids on the back. I didn't have the wagon this time, but you could hitch that up too. And they're just a great deal of fun, I think, in kind of the mode of allowing you to do something that you might not otherwise be able to do. E-bikes are a great way, if you have a friend who's an avid cyclist that you can't quite keep up with but you want to hang out with, to get a little electronic assist to augment your ability, and then you can go on a ride. So they're great.

Bonni Stachowiak [00:41:50]:

Well, I want to mention two things to people: one is, go to the show notes, or subscribe to the updates, so you can get what I'm about to share automatically, without having to remember. In the show notes, I have one, two, three, four, five, six different articles that Josh has written, and I could have kept going. I mean, I'm a pretty voracious bookmarker, digital bookmarker, and let's just say there are many, many, many articles tagged. So, at some point, I just had to kind of pick some of my favorites, but I suggest that you subscribe, so that you can get all the goodness when he releases new…

Bonni Stachowiak [00:42:31]:

By the way, you, I just discovered this, you have quite a streak going for yourself as well, for writing. It seems like that’s something that motivates you. How often do you post and how long has it been?

Josh Brake [00:42:42]:

I started it in 2022, having really no idea of where it would go, and have been trying to keep a weekly cadence since then. I actually just recently had the longest break that I had had, which was like a couple weeks where I didn't post anything, at the end of the year. But other than that, it's been once a week since 2022, pretty regularly.

Bonni Stachowiak [00:43:01]:

It’s so delightful, every time I see one come out, I always know that there’ll be that nuance. You’ll give me things to think about, and you’ll challenge me, and encourage me at the same time, and bringing us together like you described, you know, bringing us together as a community to wrestle through this together. It makes me feel less alone, so thank you for the work that you put into that.

Josh Brake [00:43:20]:

Thanks, Bonni.

Bonni Stachowiak [00:43:22]:

All right, everyone. Thanks for listening, and thanks again, Josh, for being a guest.

Josh Brake [00:43:26]:

My pleasure.

Bonni Stachowiak [00:43:29]:

Thanks once again to Josh Brake for being a guest on today’s episode of Teaching in Higher Ed. Today’s episode was produced by me, Bonni Stachowiak. It was edited by the ever-talented Andrew Kroeger. If you have yet to sign up for the weekly update from Teaching in Higher Ed, now is your chance. Head over to teachinginhighered.com/subscribe. You will receive the goodness that is all those articles I talked about that Josh has written for his newsletter. Such good things for us to reflect on, as well as some other resources that don’t show up on the Show Notes.

Bonni Stachowiak [00:44:08]:

Thank you so much for listening, and I’ll see you next time on Teaching in Higher Ed.

