Bonni Stachowiak [00:00:00]: Today, on episode number 605 of the Teaching in Higher Ed podcast, Teaching with AI: The Good, the Bad, the Ugly and the Future with José Bowen. Bonni Stachowiak [00:00:15]: Production Credit: Produced by Innovate Learning, Maximizing Human Potential. Bonni Stachowiak [00:00:23]: Welcome to this episode of Teaching in Higher Ed. I'm Bonni Stachowiak, and this is the space where we explore the art and science of being more effective at facilitating learning. We also share ways to improve our productivity approaches so we can have more peace in our lives, and be even more present for our students. Today on the episode, José Bowen is back. He joins me to talk about the newly updated second edition of Teaching with AI, which he co-authored with Eddie Watson. José has won teaching awards at Stanford and Georgetown, served as president of Goucher College, and is the author of influential books, including Teaching Naked and Teaching Change. He's also a musician who's performed with artists like Stan Getz and Bobby McFerrin, and a longtime leader in higher education who has spoken at hundreds of campuses worldwide. In today's conversation, we explore the good, the bad, the ugly, and the future of AI in teaching. Bonni Stachowiak [00:01:34]: From the genuinely helpful ways AI can customize learning, to the very real frustrations and burnout faculty are feeling, to the wicked problems of privacy and bias that resist easy answers. We end by looking ahead to a rapidly evolving future that invites us all to rethink our roles as teachers, and to model humility as we navigate what's new, uncertain, and full of possibility. José Bowen, welcome back to Teaching in Higher Ed. Jose Bowen [00:02:10]: Hi Bonni, so glad to be here. Bonni Stachowiak [00:02:12]: Today, you and I are tackling the very easy topic of when it comes to AI in higher education, the good, the bad, the ugly, and the future.
And we're all going to do that in just the normal time of an episode. So let's start, José, with the good. What are you discovering, experiencing, hearing about that is practical, helpful, functional, or even ease-making? Jose Bowen [00:02:39]: So I think there are a lot of those things. I think the good is that we are going to be able to customize things for students. It was somebody on your show who said that there's a difference between customize and personalize, which I do like. I think the personal has still got to be human. But the ability to say, okay, I'm going to do an individual problem set, or I'm going to put this in multiple languages, or for a student who has a different set of interests. In fact, one of the things I do is I now customize every assignment for every student. So, the students start by, I say, "give me an alias so I don't have to put your name into the AI, and then tell me what motivates you, what do you want to be, what's your major?" All those sorts of things, and then you take the generic assignment, and you say, okay, so a three-page paper on Hamlet, no? Or a problem set, right? Jose Bowen [00:03:27]: How do you then customize that for every student? Another version of that is to say, take the assignment and before you work on it, have a conversation with the AI about your interests. And then ask the AI for 10 ways you could write about Hamlet and baseball, right? You know, how could you design? And so I think there's customization, I think there's the potential for that to give us some time, right? The jury is still out obviously on grading, and I think it depends a lot on your class size, but, right, if I have it do certain things, right? Accreditation reports, the department schedule, there's gotta be something! In fact, my advice to people is, I know we're overwhelmed, so don't ask AI to do something that you love. Jose Bowen [00:04:11]: You love that, you're good at it. Ask AI to do something that you hate.
If I, "oh, I have to do this thing, I have to do my expense report". Right? If you can find a way for AI to do one of those things that you find tedious, or it's just there's too much data, or maybe I could do this better if I had access to all sorts of things. So I think the customization, the ability to do that, the ability to do something that you don't like, I think there's real potential there. Bonni Stachowiak [00:04:41]: When it comes to the bad, what are you hearing from people? What are you experiencing yourself? The frustrations, the challenges, the death by a thousand cuts, perhaps? Jose Bowen [00:04:51]: Yeah, well, it really is a thousand cuts, only they are machete cuts at the moment. I mean, none of us asked for this. I think at this moment, post-COVID politics, the birth dearth, you add AI on top of that. So nobody has any headspace, any bandwidth, so I think morale is terrible, and I think people are feeling burnt out, right? The public doesn't trust us, I mean, I think there's a lot of things that make it hard to say, I need to bring joy to my students. Jose Bowen [00:05:21]: And so the bad is that AI is rapidly changing. It wasn't just a one-and-done. I mean, it continues to get better at a steady pace. I mean, every week there is a new video tool, a new image tool, a new slide tool, a new way to do this. And so, it's impossible to keep up, and so it's anxiety-provoking. And, we do have plenty of other sources of anxiety in higher education right now. So, I think figuring out where to start, figuring out how do I manage in this world, is bad. Jose Bowen [00:05:57]: And I also think most of what's written about it in the media is terrible. It's easy to get published if it's like, oh, here's my unnuanced, one-sided thing. You know, AI is all good, AI is all bad. It's like the Internet, right? I mean, we've had a long history of saying, oh, erasers are bad, they're going to destroy writing. The typewriter is bad, it's going to destroy...
Well, and AI probably will change writing, like the spell checker did. So we've banned a lot of things in the past, and so there's a tendency to have a kind of us-and-them. Jose Bowen [00:06:28]: And I think, in fact, one of the worst things that's happened is that, right, faculty are now in camps. It's the I'm all in, I'm never, over my dead body. You know, I'm banning it from my classroom. And I think we will soon figure out that you can't completely ban this technology. But I do think that we are going to have to figure out how to focus on student learning in an era where students have this new technology that will short-circuit the learning we want. And so that's existential, more anxiety! That's a massive challenge, because it's not just a little bit of a, can I go back to blue books? Oh, I'll do this. Jose Bowen [00:07:11]: I can't make a little fix, I'm going to have to totally rethink two things: What do I want my students to learn? And when do I introduce this new technology? So, I like the analogy of the spell checker, or the spreadsheet, or even the calculator. I know it's not exactly like AI, but it's like it in this sense: The calculator didn't eliminate the need to learn to add, right? But it did make us, we also have to teach the calculator, right? No one's going to graduate a finance or accounting major who doesn't also know how to use a spreadsheet or a calculator. So I have to figure out where does it go in the curriculum? When do I add it? Jose Bowen [00:07:48]: But it also means, well, maybe I don't have to teach long division the same way, maybe I don't have to teach as much of that. And so again, that's very existential, that's a major challenge to everything we do. Bonni Stachowiak [00:08:01]: I love this example because you already recognize, and for listeners who may or may not be familiar, the calculator is one of those metaphors for AI that people either really embrace and think is totally analogous, or say, oh my gosh, what a ridiculous comparison.
Your using that example is actually a perfect illustration of what you're talking about in terms of thinking in those dichotomous ways. I recently subscribed to this service called Brilliant, and for those not familiar with Brilliant, it's more geared toward K-12. But I do have some catching up to do, let's just say, in some parts of my own knowledge. And their website has things like computer science and math and algebra and all kinds of things. And so I'm going through this in real time, José. I mean, I'm doing it because I'd like to learn or refresh, go back to some math I've long since forgotten. Bonni Stachowiak [00:08:53]: And so I'm sitting there and I'm thinking, like, would it make sense for me to pull out a calculator right now? Because I don't, I don't really need to be a calculator in this world. Or is this a point of friction for me, that if I don't actually experience it without a calculator, I'll be missing something? And then I have this in-between, José, which I think is also kind of interesting, because I don't do math in my head very well. And I'm looking at this app on my iPad, and so what I've been doing, when I don't feel like I should grab the calculator, is I grab a screenshot of the problem, I pull it up on this app on my iPad, and I take my Apple Pencil, and I start writing all over it. Because I'm like, that's sort of the in-between, because I just, I think in math better if I can write and sketch. So anyway, I love this metaphor of the calculator as you're also explaining how we get in these camps. And when we start thinking about these issues that are so hard, without the curiosity of where that other camp lives, and why they see it so differently from us, we just get stuck. Jose Bowen [00:09:56]: Yeah, no, and I think we are very stuck. And so we need to realize that AI is not going to be the cure-all, and it's probably not going to be the end of higher education.
But there is another existential threat, which is that when people are worried about their jobs, it's different. And so, I think AI does have the potential to create job loss, especially in higher ed, and that makes it very hard to think about it rationally and reasonably because it is another giant threat. So I will say, on the good side, I think we don't need 600-seat classrooms anymore, right? Because I mean, we didn't need them before, the Stand and Deliver lecture, right? But the Internet didn't really, watching videos is not that much fun. But having an AI tutor deliver content could eliminate the need for me to go to campus, and pay for parking, and drive three times a week for these lectures. Jose Bowen [00:10:57]: Which means that instead, I want to be on campus in small groups, right? Instead of having a 600-seat classroom, maybe I only go once every two weeks, but I only meet in a group of 12. Maybe I meet with the TA sometimes, with the professor some other times, but I have my study group. The example that I use here is an odd one, but in online education, when Brigham Young wanted to enter the field and do something a little different, they realized they had an advantage; they had Mormon churches all around the world. So they set up this program where, on Wednesday nights, you go to study with the church. So in other words, you and I might be there in Durban, South Africa, and we're studying two different subjects. But we're both at the church, hanging out in our study group, because these are online courses. And so, even people in online courses, or maybe, especially people in online courses, want community. They want to get together once a month, once a quarter, Jose Bowen [00:11:55]: and so lots of online programs have these social aspects to them, where people get together and study and talk, etcetera. 
And so, I can see that model being replicated in an AI future, where you do have some AI degrees, or courses, or tutors, or whatever it is, but you also don't want to give up community; that's the last thing you want to give up. But that means my job changes, right? I'm not just a content professor, I really am a cognitive coach. I really am, and I'm a relationship supporter. I'm a mentor. Jose Bowen [00:12:32]: And I often think we get the administrative or the content confused with the actual relationship piece. And not every faculty member is going to want to do that, or be good at it. It's a different skill set. Bonni Stachowiak [00:12:45]: I mentioned my example of Brilliant, and I want to just mention it one more time, because the other thing that's really nice, as a lifelong learner, as I'm doing that, I'm doing it while I'm tired. I like to keep my streak up, so I'm, I don't want to go to bed until I do my little lesson, and yes, believe it or not, José, I make mistakes. And when I make mistakes, no one ever has to know that; of course, now I've just admitted it, so people now know. But I want to get us to the ugly now, because I know one of the wicked problems, the problem for which there really aren't easy answers, no quick fixes, has to do with privacy. Bonni Stachowiak [00:13:22]: Talk to us about what you're hearing, what you're thinking about, what you're researching around privacy. Jose Bowen [00:13:29]: Yeah, I think that the two big ugly categories are bias and privacy. So we'll start with privacy. So, on the one hand, privacy is not just an AI problem, right? The problem is that everything you say in your car then gets sold to Amazon. Things when you use a cloud, like Apple iCloud, all of your pictures, right? You shop on Amazon, right? I mean, people, right? Amazon knows more about you than you think they do. There's that old claim, I don't know whether the research held up, but Amazon knows you're pregnant before you do.
And so, on the one hand, there are lots of, you know, Microsoft has all of my documents, all of my emails, you know, things are in the cloud. So now we have this, this tool. Jose Bowen [00:14:13]: The real problem with AI privacy is that now we have a tool that can mine all that, right? I mean, you know, before, your car was listening to all of this garbage you were talking about, and it's like, well, what do I do? Oh, I hear a product name! But now, AI can listen to all those conversations and say, "aha! This is the person who needs cat litter, and that's the person who needs a rest". And so, my actual concern about privacy is less that, oh, people are going to input their dissertation, because if you wrote the dissertation in Google Docs, it's already there, right? I'm more worried about AI as a tool for analysis and observation, and that that's going to change the world in which we live. So, I mean, I'm not, not worried about faculty and students entering input, I think we still need to be very careful about all of that, but I don't know of a big FERPA leak or a HIPAA leak from AI. Jose Bowen [00:15:04]: And most of the tools that schools are using are HIPAA compliant, and they have a real vested interest in making sure nobody hacks into that. In the same way that Google has been really good at not letting anybody hack into my dissertation on Google Docs. We've been trusting Microsoft and Google with our intimate thoughts for a long time. And they've, I don't want to say they've done a pretty good job, but they kind of have, right? It's not been leaked. I don't know what other governments have, but there you go. But I am worried a lot about the ability of AI to now find us all, to know our thoughts, to put us into categories. Jose Bowen [00:15:44]: And that relates to the bias problem. So AI is also a tool that could take racial and other profiling to a whole other level.
So I'm obviously worried about bias in the answers, although I do maintain that AI does amplify bias, that's clear. AI learned from the Internet, so it learned all the bias there. And so it is mostly white, male, Western sources that are in English that are on the Internet, and so other cultures are less represented. But if you're aware of AI bias, it's easier to fix. Jose Bowen [00:16:20]: Right? So, if I say to you, "I want you to evaluate these candidates for the professor of history, or these candidates for graduate school, or these new students, but ignore the prestige of where they got their degree"... That's really hard for a human; it's like, don't think about the cookies, right? We all think about cookies. For an AI, it's just: rank the candidates based upon the quality of their teaching and research. And so, with the proper prompting and usage, so, for example, in my research, I now get information about people in other countries doing work in other languages that I would never have read. Right? I'm not normally reading the Indonesian Journal of Technology. But now AI can read this for me and find stuff. And this relates to this other new capability that we've just gotten in the last year, which is that AI can search for ideas. What are the trends in my field? What are the latest thoughts? What percentage of people are doing this? Is there anything similar? If I'm looking for a patent, AI is great, because are there any similar patents? Not just, what are these words? So that means I can now have a system prompt that says "Avoid Western bias. Jose Bowen [00:17:32]: Make every search a global search, consider people from other cultures". And so, I have that built into my prompting at the system level. And so I think that helps me uncover my bias. And I say to people, if you know what your bias is, right? Well, I don't like continental philosophers.
Well, then tell the AI to make sure it includes them. You can have a thinking partner, Jose Bowen [00:17:56]: that is the complement to you. But I think that that's pretty specialized usage. I mean, I have to think a lot. I have to care as a human, I have to care about bias, in order to think of how do I set up my system so it works, and I think most people won't. So I think the potential is, you're probably going to get more bias, because people are going to use AI poorly. And so bias and privacy are two categories of ugly that are pretty big. Bonni Stachowiak [00:18:29]: In terms of the privacy element, I wanted to share that another concern I have is just how essential failure is to learning. And so when I think about, gosh, if I had gone through school, and every single time I got something wrong, and then you mentioned the data, and just the way in which we can take learning analytics and take away all of the value of failure. And I mean, we already have an educational system that rewards perfectionism, and rewards timed test-taking. If you've got the perfect score, that means, you know, you did it in the, quote unquote, right amount of time, that kind of a thing. And so I'm thinking about the research that we know about things like retrieval practice, otherwise known as the testing effect, and that I'll actually achieve deeper learning, and be able to apply that learning in such different contexts, if I fail, get things wrong, as long as I am then presented with what the correct answer was, etcetera. And I just, it seems like we're building systems, because of these privacy aspects, where that fear of failure will be even worse than it is now. Bonni Stachowiak [00:19:41]: And José, I gotta tell you, it feels awful right now. I mean, I'm sitting here telling you about getting things wrong on an app I'm doing for fun and my own learning, and you know what I mean?
But like, I'm not in school right now, and I'm not having that pressure of trying to get into grad school, or trying to get that job, and having my entire life and my mistakes so quantified, and then bought and sold and all the things. Jose Bowen [00:20:06]: That's a huge problem because, right? So now look, humans have been given a couple of capabilities that we aren't ready for, right? And that's absolutely one of them, right? The ability to analyze every student mistake, right? That has great potential. Oh my God, I can actually figure, oh, this student is making this mistake, I can go help them. But it also has the potential for enormous surveillance, and other sorts of things. And so I do think that, again, one of the reasons we should all be in this, and not be banning this, is because we need to be in the game. Because that's the sort of thing that you just mentioned, that's the sort of thing that somebody at OpenAI is not thinking about. Jose Bowen [00:20:45]: They're not worried about it, they just want to sell us more stuff. And so they want chatbots that are funny, and that are relational, right? They want us to use it more, in the same way that Facebook wanted to keep us scrolling all night. And so their motives are very different, so we do need to think about, what do we do with this power? Do we want teachers to be able to have the power to monitor everything that students do? SchoolAI lets me monitor all of the students using my chatbot. And it also will analyze that, which can be good. It's like, oh, Bonni is having trouble with long division, but she hasn't asked a question in class. Jose Bowen [00:21:26]: But I can see she keeps asking the bot, and the bot has told me, yeah, so that could be useful. But of course, that's also more responsibility.
And so, I think that the technology has once again jumped way ahead of human capabilities to think about this responsibly, and we need to make sure that we're building systems that don't say, oh, we're not going to use that tool because it could be used poorly, but rather, we're going to use this tool in a way that helps student learning. Because there is amazing potential to be able to help learners of very, very different types. I got an email just a couple days ago from someone in Iran who is doing work with rural populations, where there isn't a teacher, and so they want to know how AI could be used. It's like, okay, so I want to learn a subject, I don't have a teacher in that subject, but that's going to be amazing. But of course, now somebody's going to know that somebody in that little village is learning that subject. Bonni Stachowiak [00:22:28]: Yeah, and you talk about us being in the game, and the importance of that, and I would even want to add a caveat to that. Not a game in which you are the expert, and you've spent your career studying this, and you know it inside and out, so you don't experience those failures. And the potential for you getting to feel what it would be like to have lots of eyeballs and robots knowing and analyzing your every failure. And I just think that's also another important element of all of us, for us to be in this game, as you describe it: to be intellectually humble and curious. And in order to do that, we have to find ways to, so you talked about fighting against our biases, Bonni Stachowiak [00:23:10]: we also need to fight against that thing that, I mean, has been baked into so many of us because of the context in which we came up. You have to drive and be perfect, and move forward and all, you know, be the expert in the room.
Where it's like, well, that's not often the people we're serving, so how do we put ourselves in situations where we're challenged and we fail, and we get to re-experience that and have more beginner's mind around it? Jose Bowen [00:23:36]: Well, and that's back to one of the really good things about AI, which is that AI allows us to get out of our own head, right? AI is a great complement. So one of my standard prompts is: take your academic integrity policy, and give it to an AI and say, how might a first-generation 18-year-old misunderstand this? How can I make this clearer to a student who, you know, has not taken a course in physics before? In fact, I'm now doing this when I go to universities, this is fun. Because I'll take the university academic integrity policy, and I'll do it live, I'll do it while people are standing around. Jose Bowen [00:24:13]: So look, here you go. I was in Denmark, and the Danish version of their academic integrity policy got a good grade from ChatGPT. But when they translated it into English, they had translated it as "Rules for cheating". Jose Bowen [00:24:33]: Nobody had noticed; it was probably somebody who's not a native English speaker. And so AI immediately said, that makes it sound like these are the rules to cheat, not the rules against cheating. And they said, well, in the Danish word, I don't remember it, but they said, "it's perfectly clear in this Danish word". "It's not perfectly clear here". And so, being able to have somebody who thinks differently than you be able to read your syllabus, look at your readings, right? Jose Bowen [00:24:57]: What might somebody from a different background, somebody who's not comfortable in school, somebody who wasn't as good in school as you were, see? And so I think that that humility that you talked about has never been more important. And so for faculty who are worried about that, I would say, it's the most important thing you can model for your students.
Because at the moment your students may be ahead of you on AI, but I guarantee you they are going to come up against some new technology in the future which is going to intimidate them. And your model of, wow, this is new, and scary, and I grew up with ditto machines, and I don't know how to deal with this. That is the model that you want to present to students: that you do need to be humble in the face of new things, and that there is nuance. And so I think we all need to approach this with that attitude, and with our students, who, mostly, I think, want to have that conversation with us. They don't want to hear the policy, they want to learn about how we are learning new things. Bonni Stachowiak [00:25:59]: All right, this is the easiest question before we get to the recommendations segment. So just tell us the future. What exactly is going to happen and when? Ready, set, go. Jose Bowen [00:26:08]: Ay ay ay! So I think we are going to continue to, this is the worst AI will ever be, it's going to continue to get better. So I don't know how fast this is going to happen, but we've already seen the launch of really, really cheap AI degrees that are going to compete with us. So we had this with online education, we had online learning at Southern New Hampshire and Arizona State. There was a new competitor, University of Phoenix, etcetera. Jose Bowen [00:26:36]: That didn't bother some campuses, but it did bother others. But now we've got $2,000 nursing degrees, and there's a $5,000, $6,000 EMBA program coming in a few weeks. Those are all AI, totally AI-driven, so they're scalable, so that's a whole new thing. In the old days, you could either be the low-cost leader, Walmart, you drive down cost all the time, but when you do that, you sacrifice unusual customers. Jose Bowen [00:27:05]: You have to have conformity, you have to have similar lines of products. Or you can have a customized boutique, and you can charge more for that.
So I think we're going to see the same thing with AI. AI is going to allow us to have customized and cheap degrees, right? Your own AI tutor, so that's a whole new class of thing. So I now have my own personal nursing coach, and I'm in simulations, I don't have to go to any lectures, everything is simulations. Jose Bowen [00:27:35]: So I think the new axis is going to be, on the one hand, totally AI-driven, cheap but customized courses, but no people, because people are expensive. And on the other end, boutique, all people, no AI. And so the question for most places is, where do you sit on the spectrum? Right? And the other axis is really, what do you do with the extra people time? If I'm a faculty member, and I now have AI tutors, I don't have to deliver content, I don't need the big 600-seat classroom. But I also have to have better conversations with students about how they're feeling. Well, I wasn't trained for that. Jose Bowen [00:28:15]: But I think we're going to see those two extremes. We're going to see really, really cheap degrees that are AI only, and we're going to see really expensive degrees, boutique, just hands-on, small liberal arts college, Harvard, etcetera. And the question for most of us is, where do I now fit in that new environment? Where do I set up shop? How do I distinguish myself? And then the second axis, the second question, is, if in fact AI can save me some time, whatever you think that is, whether it's accreditation reports, or grading, or lecturing, or being a tutor, what do I do with that time? And what do I do with that time that actually helps students? Or is it just I get to do more research? So those are big questions, and I think we are going to see a reshaping of the landscape. And all of this is happening at a time when the number of students is falling and, you know, politics and everything else is what they are.
And fewer students want to be on a physical campus, unless there's Greek life and a football game, so, Jose Bowen [00:29:14]: which is ancillary to what we do, for most of them. Bonni Stachowiak [00:29:17]: Thank you so much. This is the time in the show where we each get to share our recommendations. And when Jeff Young was recently on the podcast, in his recommendations, he said he was going to recommend two podcasts. And I started thinking, I didn't say these words, but I started thinking, "Oh, come on, Jeff, I don't need any more podcasts, do you understand how big my podcast queue is?" And I listened to him, and I mean, it did sound super interesting, but so does, like, so much of the podcasting world. Well, I have to tell you, I haven't even gotten to his second recommendation, I'm sure it is also excellent. But I started listening, and it has been, Bonni Stachowiak [00:29:54]: it's like, it's just been so long since I've just wanted to binge, like, when will be the next time I can get some time to listen to this podcast? Let me share what it is, and then let me recommend it. So it's the Shell Game podcast. And first, I have to ask, José, have you heard of the Shell Game podcast? Jose Bowen [00:30:13]: It sounds familiar, but no, I don't know it. Bonni Stachowiak [00:30:17]: A podcast about things that are not what they seem, hosted by journalist Evan Ratliff. And I've only gotten through season one, and you might have seen little clips about this, because, by the way, this was out in 2024; I'm surprised that it has such staying power, given how much you said things are changing. But in the first season, he's going through, and he's doing, you know, how do these bots go and converse with humans? So he starts with customer service agents, you know, the frustration of going through the phone tree and pressing number four to get in contact.
And I mean, it's a funny podcast, and I keep finding myself having all these feelings where I'm laughing, and then I'm going, oh, this is horrible! Bonni Stachowiak [00:30:57]: This is just horrible, horrible, horrible, to, like, okay, I'm learning. So, I mean, I'm having all the feelings all the time. He's a delightful audio storyteller. And so in season one, by the way, he then starts to feel a little guilty, because, I'm wasting these people's time. These are good, hardworking people. So then he sics his robots on the scammers, the people trying to cheat people out of things by taking advantage of them and everything. And then, I mean, of course, José, we're gonna love laughing, you know, at that, and I mean, it's funny, but it's also horrifying. Bonni Stachowiak [00:31:30]: And then it's also, I get so curious and everything, it is just exquisite. So thank you, Jeff, for recommending this! And I am on episode two of season two, which is, he starts a business. And his business is composed predominantly of AI bots, and as you can imagine, all those same things I just said, the hilarity, the, oh my gosh, what on earth am I listening to? It's just everything, everything, everything, it's so wonderful! Bonni Stachowiak [00:31:56]: I can't wait till friends get to listen so I can have conversations, it's that good. And the second season is really turning out to be just as good as the first, well, so, so, so, so good. Sorry, José, to add another podcast to what I'm sure for you is a very long queue, but you spend a lot of time on planes. You go and do all these speaking engagements, you just came out with the second edition of your book, Bonni Stachowiak [00:32:17]: you've got places in airports where you can listen to podcasts. Jose Bowen [00:32:21]: Yeah, no, I do. I do listen to podcasts, but as you say, there's a lot. Bonni Stachowiak [00:32:25]: There's a lot. Jose Bowen [00:32:26]: But I also, I have the two kinds of podcasts.
One is the ones that will help me learn something, like yours, that I always listen to, although sometimes I have to choose between you and yours, which is tough. Bonni Stachowiak [00:32:37]: It's very hard. Jose Bowen [00:32:37]: And then there's the calm, I just want, like, relief. I want something that's just going to entertain me and be funny. Bonni Stachowiak [00:32:44]: And so this will entertain you. I know you, I've driven you from an airport to one of your speaking engagements, and I can tell you it will entertain you. I mean, I'm finding myself just, like, busting out laughing. Oh, yes, it's kind of a nice mix, and it will inform your work. And I think it'll get you curious, and then you'll want to call me, and we'll have a conversation about it. Jose Bowen [00:33:04]: Cool. Bonni Stachowiak [00:33:06]: What do you want to recommend today? Because I know listeners are curious about what you would like to share. Jose Bowen [00:33:11]: Sure. So, I'm not going to recommend the new book, although there is a second edition of Teaching with AI. We also started a new website, weteachwithai.com, which is loaded with tools and prompts. And I update it almost every day, because someone will give me something, and it's like, "Oh, I've got to add that. Jose Bowen [00:33:29]: Hey, can I add that? I'll give you attribution, and I'll link to that." And so, there's a growing list of stuff. But my recommendations are two things. The first is, I like BoodleBox. It's FERPA compliant, it's a tool for educators, it's easy to use. Jose Bowen [00:33:47]: It's agnostic, in terms of, it allows me to go from Gemini to ChatGPT, etcetera, and do different kinds of things. But it also allows me to set up custom bots for students, and it's cheap. An individual subscription is 16 bucks a month or something. Institutions are doing it. So I do find that, and I think, Jose Bowen [00:34:08]: for people who are struggling, they always ask, which AI do I start with?
And it's like, well, start with one of the big three or four. But you also want to try other things, and you want to go back and forth and try the same prompt in different ones. And so that's number one. Bonni Stachowiak [00:34:23]: Wait, wait, wait. Before you go, though, I have not used it, but I've heard such good things about it. Can I ask a clarifying question? Jose Bowen [00:34:30]: Sure. Bonni Stachowiak [00:34:31]: So, if I understand it correctly, I predominantly will use ChatGPT, occasionally Claude, and then also Microsoft Copilot. But I think there's this distinction between, yes, if I'm going to go build a custom GPT, it's kind of like it's a... What I'm understanding about BoodleBox is that you're able to really build experiences for students and then, more, give them easier access to it without a login? Or anything else you want to clarify as far as, as a teacher using BoodleBox to design learning experiences? Jose Bowen [00:35:06]: Right, so there are a couple things. So, the first is that BoodleBox works as my AI hub, right? So I can talk to ChatGPT, or Gemini, or Claude through it, in the same way that you can use Poe or one of those other consolidators. But it also allows me to create assignments, or custom bots of various kinds, or courses. So it does some of the things that an LMS does, and then I just share with the students a QR code, or a link, and they can immediately go, and they don't have to pay. The same is true for ChatGPT, I can make GPTs, but I have to pay for ChatGPT. Jose Bowen [00:35:40]: But then students can use them for free. There's also a word limit in the GPTs that you make. So this allows me to do really long prompts, among other things. But mostly, it's student-friendly in a way that I think is useful, and again, with privacy. Jose Bowen [00:35:59]: And it's a small company, so I trust them in a way that I don't, I don't trust, I can't, I shouldn't say any names.
Bonni Stachowiak [00:36:08]: I love it. All right, sorry, I didn't mean to interrupt, but I was excited to learn more. Jose Bowen [00:36:12]: So the other one, and actually I find, so I do listen to podcasts, but I have to say, I like LinkedIn. Because there's a community of us, there are a few hundred of us who are educators, who are in the AI space. Ethan Mollick, obviously, but there's a lot of people there. And so, A, every time I post an idea, I get people saying, "No, that won't work." Jose Bowen [00:36:34]: Or, "Oh, that's interesting, have you tried this?" Or, "Here's how I'm doing it." So, it is the place, it's replaced a lot of other platforms. But also, after you curate your feed, so that it knows that I really want only AI in education, I don't want the tech bros, I want just that, and so I see the same people every day. And I see what they're doing, or "I gave this presentation, here's the slide deck." Jose Bowen [00:37:01]: Sarah Eaton, there are all sorts of people, and so, that's actually where I get most of my information about what's new. Oh, this new AI dropped, and it's good, or it's no good, because I don't have time to try it all. I don't have time, I mean, I tried two yesterday and one new one today, I don't have time for that. So I'll often see somebody else post a review of how they used it, or here are some tips. Somebody posted a great set of tips for how to get infographics, how to write better prompts for infographics in Nano Banana. It's like, oh, that's brilliant. Jose Bowen [00:37:36]: So LinkedIn is my best source of quick little assignment ideas, efficiency ideas. And so I find that I do check that almost every day and get stuff, and of course, I'm there too. And there's Substacks, and other kinds of things, but LinkedIn summaries of, "Here's the Substack I wrote, and here are the three bullet points," does save me some time.
Bonni Stachowiak [00:37:59]: It is so delightful to get to talk to you again today, after all the times we've talked before, and all the times to come. I'm so thankful for this new edition of the book. I'm glad that it got the requisite attention the first round, to justify you and Eddie going back and adding such richness to it. Thank you for this engaging conversation, and these recommendations. I'm curious to try out BoodleBox, and I'm on LinkedIn, really enjoying many of the things that you shared as well. Jose Bowen [00:38:29]: Thanks, Bonni. It's always a pleasure to talk to you. Bonni Stachowiak [00:38:33]: Thanks once again to José Bowen for joining me on today's episode of Teaching in Higher Ed. Today's episode was produced by me, Bonni Stachowiak. It was edited by the ever-talented Andrew Kroeger. If you've been listening for a while, and you want an easy way to get the show notes from the most recent episodes, as well as some other resources that extend beyond the regular episode links, I would invite you to head over to teachinginhighered.com/subscribe. Thank you so much for listening, and I'll see you next time on Teaching in Higher Ed.