Bonni Stachowiak [00:00:00]: Today on episode number 501, expanding our collective understanding of generative artificial intelligence with Autumm Caines and Maya Barak. Welcome to this episode of Teaching in Higher Ed. I'm Bonni Stachowiak, and this is the space where we explore the art and science of being more effective at facilitating learning. We also share ways to improve our productivity approaches so we can have more peace in our lives and be even more present for our students. Today's episode celebrates the culmination of my fall 2023 University of Michigan Dearborn teaching and learning hub scholar in residence role, and I am so thrilled to be joined by two guests. Joining me today is past guest Autumm Caines. Autumm Caines is a liminal space. Part technologist, part artist, part manager, part synthesizer, she aspires to be mostly educator. You will find Autumm at the place where different disciplines and fields intersect. Bonni Stachowiak [00:01:21]: Always on the threshold and trying to learn something new. Autumm currently works full time as an instructional designer at the University of Michigan Dearborn and part time as instructional faculty at College Unbound, where she teaches courses in digital citizenship as well as web and digital portfolio. Maya Barak is associate professor of criminology and criminal justice, and an affiliate of women's and gender studies and Arab American studies, at the University of Michigan Dearborn. She's the coauthor of Capital Defense: Inside the Lives of America's Death Penalty Lawyers. Her most recent book is titled The Slow Violence of Immigration Court: Procedural Justice on Trial. Maya, welcome to Teaching in Higher Ed, and Autumm, welcome back to Teaching in Higher Ed. Autumm Caines [00:02:21]: Thanks, Bonni. Great to be here. Maya Barak [00:02:23]: Hi, Bonni. Bonni Stachowiak [00:02:24]: On episode 500, my family and I celebrated that momentous occasion by blowing into those little noisemakers, the New Year's Eve kind of noisemaker things. And I'm sure that our podcast editor is glad that those things have been put away, because I literally had tingling in my ears for an hour after our daughter blew the one in my ear. So I'm not gonna do that, but I know that the three of us are celebrating the close of something that was really special to me, and that was that you had invited me to be a scholar in residence for the University of Michigan Dearborn. And I just wanna thank you both, and really your entire team. It was such a fun, rewarding, challenging experience, in all the best ways possible. It was just such a tremendous experience, one I won't ever forget. And I not only have one, but I have two bears, so that I can have one University of Michigan Dearborn bear at work and then one at home. So my eternal thanks to both of you. And Autumm, I know you're gonna talk a little bit about the scholar in residence program and what people might be thinking about for their own institutions. Bonni Stachowiak [00:03:36]: And then, Maya, you're gonna share a little bit with us just about communities of practice more broadly. So Autumm, why don't you start out and tell us a little bit more about the scholar in residence program? Autumm Caines [00:03:45]: Yeah. So it's, I think, a fairly unique approach to doing faculty development. We invite an external expert in teaching and learning, some facet of teaching and learning, to come and be a part of our campus for an extended period of time. We've done it for a semester.
We've done it for two semesters, an academic year. And what we've done is found somebody who has some expertise that we wanna bring to campus, but we wanna extend it to more than just a single workshop or a single keynote kind of experience, and really try to get that person to come into our environment and give us some of their expertise, but also to be part of our community, right, to also be part of our campus. And so you were the third scholar in residence that we had. Our first one was Jesse Stommel, and he came in and really focused some conversations around things like ungrading and trusting students. Autumm Caines [00:04:47]: Those were kind of the big themes. After we had him, we had Bryan Dewsbury last year, and we actually brought him onto campus a couple of times. With Jesse, we invited him to our ungrading group, so we have a group that meets regularly to discuss ungrading, and then we had a couple of other campus virtual events. And then he held a couple of keynotes for us too. Bryan actually came to campus and ran some workshops with us. We also intersected his work with a community of practice, a faculty learning community, that was being developed by Grace Homes on our campus, and they did some amazing work together. And this year, Bonni, we really wanted to focus on AI. You know, there's so much happening with generative AI, and I've been writing about generative AI. Autumm Caines [00:05:40]: I've been talking about generative AI since the beginning of last year. It's something that has been on a lot of people's minds lately. I'd written a couple of blog posts, and I'd gotten some attention for that. I was on your show, Bonni. Right? And so when my colleagues came to me and they were like, you know, we'd like to do this scholar in residence thing again, you know, what should we focus on, who should we ask, everybody was thinking, oh, AI. But I felt kinda nervous because of the idea of bringing in an "expert," and I'm doing air quotes. Yes.
We put it inside of this already established community at the university. And so I'm gonna turn things over to Maya to tell us a little bit about DigiPen at Dearborn, how it came to be, and, you know, maybe why we placed it there. Maya Barak [00:07:54]: Thanks, Autumm. You know, DigiPen at Dearborn started prior to the pandemic, actually. And, Autumm, you didn't mention this today, but maybe mentioned it on the last podcast. Autumm is humble. She is such a wonderful part of our campus community and an instructional designer at our Hub for Teaching and Learning Resources. They bring the scholars in residence to our campus community. They put on workshops. They work with faculty to design courses. Maya Barak [00:08:25]: And so Autumm and I had worked together before DigiPen at Dearborn came to be, before the scholar in residency. And there were a handful of us that were really interested in digital pedagogies and thinking about how we bring sort of innovative ideas to our courses, particularly when it comes to our online courses, but also using some of these technologies and pedagogies in our in-person courses as well. And so we started getting together, just a few of us, in person in our hub, the Hub, which really is the hub of our campus, and having coffee and just chatting. And that grew over the course of the semester, and we thought, well, there's an interest in this. Why don't we formalize this, make it a bit more regular? And that was at the end of 2019. And then the pandemic hit right when we were launching this group focused on digital pedagogy, and we got a lot of interest. And so our community really grew quickly. And I think describing us as a community of practice makes a lot of sense. Maya Barak [00:09:21]: A group of people with a common interest, with common goals, who, you know, enjoy having sometimes lofty theoretical, philosophical conversations, but are really looking for ways to apply these ideas and experiment and learn from one another. And so we started these regular meetings about all things digital pedagogy, and we really keep that as an expansive and broad understanding of what that means. Again, not just online learning, but talking about traditionally in-person classes that are now, well, at some point we started saying, are all of our classes hybrid? Are all of our classes in the in-between space between the in-person and the online, regardless of the modality when the student signs up? And so when Autumm and I and a few others were talking about, oh, what if we bring Bonni in as our scholar in residence and connect her to the DigiPen group, it just seemed like such a wonderful and natural fit and a really exciting opportunity to bring, Bonni, all of your listening expertise, your curatorship, if I may put it that way, to our campus community, and to collaborate and sort of build something together. Bonni Stachowiak [00:10:33]: It really did seem like such a good fit for me, and you're both reminding me of this video that I believe I recommended on a past show, but I'll put it in the show notes because I don't think I can ever watch it enough, about someone who teaches math, who really suggests, and I'm spoiling the ending a little bit here.
But rather than writing learning outcomes that say things like "students will be able to," instead write learning outcomes that are phrased "students will be curious about," and what would learning experiences look like if we try to get more people to be curious about things? And that's what I feel so much from this relationship, both in the formal sense with the scholar in residence role, but also just the friendships and relationships. There are seasons, I will say. I mean, I'll be coming up on celebrating ten years of doing this in June of, wait, 2024. I'll be celebrating ten years of doing this. Anyway, sometimes it's agony to be in, and I know I always comment on Autumm's bio every time I get the privilege of talking to her, the liminal space. But really, I can't try to show up and be an expert at anything. Bonni Stachowiak [00:11:45]: But I can show up and be curious about things, and listen, and listen for what opportunities there may be to connect an expert at something with somebody who's not an expert at something, and help generate more curiosity and more understanding that way. So it feels so good that that comes through in my work, and that it resulted in something that I consider to be such an honor like this. I was gonna say one other thing too about communities of practice. This is something that I'm still relatively early in, in terms of understanding the research around them. I may have read a few journal articles, and I understand that some of the early work was on faculty learning communities and that kind of thing. It's an actual area of scholarship as well as friendship and relationships and experimentation. But if you are at a university where you don't already have these formal established opportunities, I would just encourage you to start getting together with people, and to start to see who else is curious about the same things that you're curious about, and just see what happens. And maybe you do it as a single coffee or tea date or a walk, or maybe you decide, hey, we're gonna read this article or listen to this podcast or watch a video and come together a few times and talk about it. Bonni Stachowiak [00:13:04]: But just start small, and you never know what might happen there. And speaking of artificial intelligence, I know we're gonna explore a little bit of what's been happening in our own imaginations. We did have three different events. We had one event that brought in some of the emerging experts in this across some very different disciplines, which was so fascinating for me. And then we had more of a student-focused conversation, looking at the student experience. And then the last event that we did, we kept pretty open and wanted people to show up and just explore what their reactions and reflections to it were. And before we get too specific on any of the topics, I just wonder, Autumm or Maya, do you wanna reflect back on some of the key things that stood out to you from those three conversations? Should we start with the first one, then? That's the one where we had a number of experts across disciplines. Bonni Stachowiak [00:13:59]: We had someone who ran a writing center, and of course I was able to interview her as well. We had someone in computer science, and then someone who educates future teachers. And by the way, we also had a lot of other people join from the community and share in that conversation.
Anything stand out to you from looking across disciplines and how artificial intelligence is impacting those disciplines? Autumm Caines [00:14:25]: Yeah. So it's always great to do anything interdisciplinary where you get everybody together, but I think that AI is definitely creating some rifts, really. Right? There's just some big differences. So we had Jennifer Coon from our business writing center alongside Stein Brunvand, who is our dean of education and is a professor of educational technology. Then we also had Paul Watta, who's a professor in the engineering department, and Paul uses AI quite extensively. I think he has a couple of assignments where he actually crafts some things and has the students really use it, but he also just has, like, a blanket policy: just use this, tell me how you're using it, because I'm working alongside of you. Right? And I think that that is maybe very different from what we're seeing coming out of the folks who are from the writing disciplines and the educational disciplines. Not that they aren't, I would still say Jennifer and Stein very much were, embracing this technology and interested in how it's gonna work, but they're just in a different place because their disciplines approach it much differently. Paul teaches a lot of coding. Autumm Caines [00:15:42]: And so they just have a different ethos around sharing and copying and generating ideas and those kinds of things. So it was really interesting to hear them talk and be able to see the places where they overlap, but also places where they differed. Bonni Stachowiak [00:15:59]: Yeah. With Jennifer, as I think through the two conversations with her, I think that she really emphasized knowing your audience. And she told a story in the podcast episode I did with her where she talked about a student that wrote her, and I might not be remembering the details exactly, but something like a four-paragraph email about how she was gonna have to miss class or not turn something in because she was sick. And so, she's not accusing someone, and by the way, none of us should accuse people of using AI, but it was more like: if you perhaps used artificial intelligence to produce this four-paragraph email, which it very much looks like you did, I want to reach out and just explain that this is a mismatch. You used a tool, but the tool that you used was a mismatch for the audience, or at least the way that you used it was. So rather than trying to police students, or rather than trying to say that you have to write using all of the same tools, or lack thereof, that I did back when I was in school or whatever. Bonni Stachowiak [00:17:01]: She's really trying to emphasize that. Where I'm still struggling a little bit with my own thinking on this, and she and I certainly didn't solve this mystery, is the benefits of a blank slate. And we also spoke a little bit about the benefits of being bored, that sometimes boredom can be good for us, and if we were to just try to medicate our boredom, and I mean medicate in a symbolic sense, by the way, is there harm in that? So I guess that's where I still feel like I'm really struggling, you know, for all of us, as we're trying to educate people: that writing is thinking, and thinking is writing. And if you take away parts of that thinking and you put it into this external tool... I totally see how an external tool can help with that. I mean, I've experienced it myself.
Having it do the first draft of a letter of recommendation for promotion and tenure? Boy, that feels to me like a really good blank slate to not have to go through myself. Bonni Stachowiak [00:18:09]: I'm just wondering, as a society, if education becomes that, never having to feel the tension of, I don't know how to get started, I don't know how to take ideas that are in my head and then get them out of my head onto a blank piece of paper, will that be something that significantly hinders various disciplines, because we don't help people through that struggle, and instead we're like, oh, here's a quick fix, if you will? I guess that's where, when I'm thinking back to these conversations, it came up with Stein a little bit, and then with Paul and the coding, and of course many people that I speak to in information technology raise similar themes. So, Maya, maybe you're gonna solve the mystery now. I can't wait. Ideas? Maya Barak [00:18:58]: No, I'm not. But thank you, Bonni. So I think it's interesting to frame it in this way. There were a few things that stood out to me in that first session, but also across the sessions that we had while you were in residence with us. And I'll mention one briefly and put a pin in it, which is feelings, a lot of the different feelings that people have about AI. Maybe we can return to that. Bonni Stachowiak [00:19:20]: Yes. Maya Barak [00:19:20]: But one of the things that came out of these conversations, especially in that first meeting that we had, was that even when you're using these AI tools, it isn't a quick fix. You really have to learn the tool and work with the tool. I mean, I've only played around with these things. I have not used them in my teaching yet. I haven't had the opportunity to, but I will soon, and I'm sort of excited and nervous about that. But when I played around with them, and in the conversations we've had with other folks who are using these different AI tools, even if you're trying to get a template, right, you still have to do some back and forth, and you are going through that process of, well, here's what's in my mind, and how do I get those words on the page, and how do I work with AI to do that? And so I think, actually, we won't necessarily lose that. Maya Barak [00:20:11]: Not yet, anyway. Maybe one day, with the way AI develops and evolves. But I think we're still engaged in that process, which I think is an important process that you're talking about, which can be a frustrating process. And I can understand, and have been there myself, where I would love a shortcut for this so I can avoid this frustration. But I don't think AI is the shortcut yet, at least. Maya Barak [00:20:34]: I think we're still engaging in that back and forth a bit, which, at the end of the day, is useful for us. Bonni Stachowiak [00:20:39]: Yeah. Let's come back to the feelings. I know a lot of what we noticed, not solely in the three conversations, but really in so many conversations happening interpersonally, is that the emergence of AI really bumps up against questions of identity, and the fear of what the implications are for our own identity. Would either of you like to start to reflect a bit on those understandable, very big feelings that I think the three of us are having, and that we're seeing many people have as well? Autumm Caines [00:21:10]: I think that came up in both.
Well, really, all three of the events that we had where we invited you in, Bonni, the DigiPen at Dearborn events. I remember talking with the faculty about the way that it kinda challenges their identity as faculty members, or maybe their identities within a particular discipline. But I feel like we also heard some of that from the students when we had a chance to talk with them too. Right? Like, where's the line where this tool is helping me, and where is it taking the learning away from me, because the learning is part of the struggle? Right? And I was really impressed to hear that from the students, but I guess also not surprised. I think that most students are reflective on what's happening, I mean, when they're in the middle of their learning process, and kind of realize that the quick fixes maybe aren't really so good. Autumm Caines [00:22:06]: I think we're trying to figure that out right now. We're trying to figure out what's a quick fix and what's, you know, a use that maybe has some merit or can enhance learning. Maya Barak [00:22:16]: If I can jump in. I mean, I think as a faculty member in particular, when we think about higher education, and we could call them hoops, we could call it a journey, the process that folks go through to get higher education, whether it's a master's, or they go through and do a PhD, and then if they take a full-time academic position, there's this push that you are the expert. Right? And the way that we think about what learning is and what it means in the classroom, in a mainstream way, is very much you as the instructor at the podium giving the knowledge to the students. They need you for this. They come to you for this. And there's a bit of ego wrapped up in all of it as well, but then also your actual promotions and, you know, your career, what that means and looks like, is all connected to this mainstream notion of education, and AI is pushing back against that and saying, wait a minute, you're not the only expert now, and maybe students don't need to come to you for this. But one of the areas of AI that I'm interested in is how this is pushing us to embrace more critical pedagogies and critical lenses and frameworks for what is learning, what is teaching, what is the dynamic in the classroom, and challenging us to move away from that banking model of education into some of these more innovative, critical, radical, creative forms of teaching, which, personally, I just enjoy more. I think they're more fun as a teacher and a learner. Maya Barak [00:23:52]: So I think that's connected to identity, and that's scary for people when you've been socialized into this in higher education. So I think that's a big part of the identity crisis around AI. Bonni Stachowiak [00:24:06]: Alright. Well, we've talked a little bit about how students are or are not using artificial intelligence. Would either one of you wanna talk a little bit about how you have used it in teaching, whether it's with faculty, the DigiPen experience, I know, Autumm, you taught a class where it was the focus, or even in other aspects of your lives? I would love to hear you share a little bit about that. Autumm Caines [00:24:26]: Yeah. So I got the opportunity to teach a first-year seminar with a focus on artificial intelligence. I co-taught it with Pamela Todorov here at the University of Michigan Dearborn. It was a first-year seminar, and
if folks are not familiar with the idea of a first-year seminar, it's where you're bringing in students who are first-year students and helping them make that transition from high school over to college. Our course was very focused on the professions, the different professions that maybe they would wanna be going into. We had a lot of students who were pre-med, who were hoping to go into something in the medical field. We had a couple of dentists, we had a couple of nurses, and some who wanted to be medical doctors. Autumm Caines [00:25:14]: We had a couple of people from Maya's discipline of criminal justice, and I was really happy to see them taking a course like this. I really worry about artificial intelligence in criminal justice. So, you know, we had some folks who were hoping to go on to be in law enforcement, or to go on to law school and become lawyers or judges, those kinds of things. And so we were giving them an opportunity to explore the way that artificial intelligence might impact their different professions. That was a big part of the course, with them doing research about how artificial intelligence was gonna impact their potential careers going forward. And they did some amazing work thinking about that. As far as how we actually used artificial intelligence in the course, if you've followed any of my work, you know that I do have some ethical concerns around this technology. I have ethical concerns about using it in the classroom, because in many cases, faculty are asking students to sign up for an account. Autumm Caines [00:26:17]: And I feel like a lot of times students, and all of us quite frankly, sign up for all these accounts without reading the terms of service, without reading the privacy policy, without thinking about the data that we're giving over, and not thinking about what might happen to that data and how that could be used by a company. I'm in a unique position, though, because I do work for the University of Michigan, and the University of Michigan announced, I guess, several months ago that we were going all in on generative AI. We have something called UMGPT, which any University of Michigan student or faculty member or staff member can log in to with their university credentials and get access to GPT-4, get access to GPT-3.5, get access to Llama. And there's also an image generator in there now that's new, and I'm forgetting the name of it right now. But this is under the university's purview, and the university has been very clear that they have put privacy first in thinking about all of this. They are not using the data, the inputs, the prompts that are put into the models through the UMGPT interface, to train the models, and everything stays local to the university. So one of the things that we did in the class is something that I talked about in a blog post that I wrote, which is to do a social annotation of the terms of service or the privacy policy. Because if we were gonna ask students to use these tools, I wanted them to kinda know what they were getting into. Autumm Caines [00:27:54]: I also wanted to allow them to use whatever they wanted to use. So we have this university internal tool that looks after your privacy at maybe a higher level. And so I had them do both the privacy policy of OpenAI and the privacy policy, it's actually a privacy notice, from the university, and kind of compare the two, the different language.
I also had them read an article from The Markup about how to read a privacy policy, because privacy policies are very dense legal documents. And so just throwing a bunch of first-year undergrads at it and saying, go read this policy, is maybe not the best approach. This article from The Markup does a great job of calling out the different language that you might see in a privacy policy, what that language means, and some of the different sections that you wanna be sure to look for, and just thinking about the way that different things are phrased. Autumm Caines [00:28:52]: Out of that came something really interesting, though, in that I had a few, not a ton, a handful of students who were actually more skeptical of UMGPT and preferred to use something like OpenAI or Bard or something like that, because they thought the university might be able to go in and see their prompts, and that somebody might accuse them of cheating at some point, right, and that it could be opening them up to some vulnerabilities that they didn't want. And so I had some students who specifically chose to use tools outside of the university's tools. And I tried to make it clear to them that that is not the intent of the university, but they were skeptical of that. And I also wanted to celebrate that, because I think it's good to be skeptical. The point of that assignment is not to get you to use one tool over another tool. The point is to make you think about context and your data, who's gonna have access to your data, and making smart choices, right, about who's using what, when. Autumm Caines [00:30:09]: After that, we did have an assignment. It's a typical first-year seminar kind of assignment, where you have to go out and find a professional who's already working in the field that you want to work in, and then you interview that person. It's a great assignment. I did the assignment as an undergraduate student. I think it's kind of a staple in these first-year classes, right, to do something like this. There's only one problem with it, and that is it's hard to find a professional, get on their calendar, and get them to give you some of their time. Autumm Caines [00:30:46]: It's a little bit easier in the, you know, Zoom age, if you can just get them on a Zoom call. But it can be hard. And so we were in a situation where we had maybe about 30% of the class that had not been able to pull this off yet, and we were about to get into a conversation about synthesis. You know, you've been doing this research about your profession, about how AI is going to intersect with it. Now you've talked to a person about how AI is gonna affect your profession, and so how do you synthesize those things together? Well, it's hard to think about synthesizing when you only have one thing, when you only have the research side, when you don't have the other side. Right? So we kinda came up with this one on the fly, but I worked with ChatGPT, both UMGPT and ChatGPT, actually, to develop a prompt. So I would use one to be a prompt engineer. That's, like, what I'm assigning to it. Autumm Caines [00:31:44]: You are an expert prompt engineer. You need to design a prompt that acts as an interview. And then I used the prompt that was generated, in either another thread or in another system, to kinda test that out, and sort of went back and forth with this.
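[Editor's note: here is a minimal sketch of that two-step, prompt-within-a-prompt workflow, expressed in Python against the OpenAI chat API rather than the ChatGPT and UMGPT chat interfaces Autumm actually used. The model name, prompt wording, and sample questions below are illustrative assumptions, not her exact prompts.]

```python
# Sketch of the workflow described above: one session acts as a "prompt
# engineer" that drafts a role-play interview prompt; a second, fresh
# session then uses that prompt to play the professional being interviewed.
from openai import OpenAI

client = OpenAI()  # assumes an API key is already configured in the environment

# Step 1: ask one conversation to draft the role-play prompt.
engineer = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are an expert prompt engineer."},
        {
            "role": "user",
            "content": (
                "Design a prompt that makes an AI role-play a working "
                "professional being interviewed by a first-year student. "
                "It should answer one question at a time and wait for the "
                "next question rather than writing out the whole interview."
            ),
        },
    ],
)
roleplay_prompt = engineer.choices[0].message.content

# Step 2: test the generated prompt in a separate conversation, where the
# model now plays the professional and responds turn by turn.
history = [{"role": "system", "content": roleplay_prompt}]
for question in [
    "How did you get started in this profession?",
    "How do you think AI will change your work over the next decade?",
]:
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    print(f"Q: {question}\nA: {answer}\n")
```

In the version described here, students worked entirely in the chat interface and simply swapped in the profession they wanted to "interview," rather than touching any code.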
Eventually, I ended up with a prompt that would act as a professional of some kind and would then wait for a response, and would really act like it was in an interview situation, rather than just dumping out interview content where it was doing both the interviewer and the interviewee all at once. It would really kind of role-play and act in this way. And for the template that I gave to the students, and the example that I gave, I didn't want to use anything that they would actually come up with, right, because I wanted them to come up with their own. I wanted them to interview their dentist or their doctor or their police officer or whatever. So the one I came up with was a professional clown. Autumm Caines [00:32:48]: And I just basically put it into a Google document, and I provided it to the students. We can link to it in the show notes as well. It's kind of funny, and it's fun to look at the prompt and think about the different ways that you can work with it. So, you know, I gave it to the students and basically just said, change the profession to the profession that you'd be working with. Change the details, just make up that kind of thing. And if you want to change the prompt, you can. You know, this is just a starting place for you, but it'll give you some idea of where you're at with these things. And so that at least gave them something to synthesize with the research that they had, and it didn't get them out of having their conversation with their actual human. They had to still do that, but it bought us a little bit of time. Autumm Caines [00:33:32]: And it was kinda fun. With some of the students, we also had them do a comparison of what they were hearing from the human and what they heard from the AI, and that was really interesting. Sometimes it was very similar. Sometimes it was very different. And so, yeah, it was just a little bit of prompt engineering and a little bit of just having some fun with the systems. Bonni Stachowiak [00:33:55]: You're so Shakespearean with your play within a play. So you have a prompt within a prompt. I'm not sure if I got that level of detail when you were sharing this with me previously, or I have since forgotten, but that's really fun, like your play within a play. Maya, before we get to the recommendations segment, is there anything that you wanna share about what you're either using it for now, or what you're hearing about, you know, people in teaching and learning using it for? Maya Barak [00:34:20]: Yeah. I think I'll return just briefly, if we can, to the ethical concerns that come up. I think some of these ethical concerns that faculty and students have around AI and cheating and plagiarism: what does that mean? What does it look like? How might this be used to sanction students, to lead to sanctions for students? And as a criminologist, I'm particularly interested in issues of plagiarism and academic ethics and how we treat students, particularly as a critical criminologist who is interested in harm reduction and nonviolent and restorative practices. And I think on a lot of college campuses, that's not the approach that is taken to plagiarism. There's not always a thoughtful, let's be nonviolent, let's try harm reduction, let's try restoration, reintegration, transformation. It's just punishment for the sake of punishment, which can be detrimental to students.
And so I share some of those ethical concerns about what happens when you combine the feelings that everyone is having, all the feelings folks are having about AI, faculty in particular, I think, and administrators, who have some legitimate concerns about plagiarism and academic integrity, with a quick response that is punishment. And I think that in a lot of instances where there are issues around plagiarism and AI, and plagiarism more generally, there is a presumption of guilt, as opposed to in our criminal justice system, where, whether it happens or not in practice, and we could have a separate conversation about that, we're supposed to be presumed innocent. And I don't think we typically afford that to our students when it comes to these issues of academic integrity and plagiarism. And because AI is emerging and there are all these feelings, and we are still learning, and many faculty have yet to even play around with an AI tool, I think there's also this presumption that it's automatically cheating, that anytime a student is using any kind of AI tool, that's it. Maya Barak [00:36:19]: To talk about binaries, right, it's just cheating or not cheating: if it's an AI tool, it is cheating. And I think the conversation we've been having here paints a different picture. And so I do think that it's just important for us to reflect on how AI is bumping into these other ethical quandaries that we have in higher education. I don't have answers, but I think it's important for us to remember that one of the reasons this is perhaps so captivating, these emerging AI technologies and how we're using them or not in higher ed, is because it's touching a lot of sensitive spots that have a lot of unresolved issues for us in higher education. So that's one that I would just highlight briefly. Bonni Stachowiak [00:37:03]: That's so helpful. You think about the presumption of innocence, and how about a presumption of intentionality to learn? What if what we took away was that people are innocent until proven guilty, but also that we assumed students were there to learn? That was their intention, to learn. I mean, that frames things. It's really powerful to think through the ways our identities are getting bumped up against and bruised a bit, and to kinda wanna be really self-aware and reflecting. Now, thank you so much. This is the time in the show where we each get to share our recommendations, and mine are all AI related. I feel like I saved a bunch of them up for this conversation. I have four, but I'm gonna go quick. Bonni Stachowiak [00:37:51]: So first off, there's a blog that I follow from Daniel Christian. He's been blogging for a very long time. And he got into a conversation, which he reflected on in public on his blog, with David Goodrich. He was linking to an article that asked, can new AI help to level up the scales of justice? And David wrote back to him: I shared this hope with you, but I can't help but feel extraordinarily skeptical that it will actually do so. What's the source of your hope? Thanks, Daniel. And it was just such a lovely exchange. I mean, how many times do you hear about an internet social media exchange where people respect each other? They regard each other.
I mean, as far as I can tell, these two know each other and have corresponded previous to this interaction, but he's also kinda going, I'm not seeing what you're seeing. And then there's this whole article, again on Learning Ecosystems, Daniel Christian's blog. It's a whole thing. Bonni Stachowiak [00:38:53]: I think, Maya, you might love this one, because it's all about people within the United States, and some actually outside it, it says some of these individuals don't reside in the United States, but their work still impacts many here in America, individuals who are fighting for change, specifically in the legal ecosphere. So a whole list, I mean, I kept bookmarking and bookmarking, of attorneys and social justice advocates. Then he has five companies, firms, or other organizations that are fighting for change in the legal realm, a bunch of different organizations. Then he has the ways that artificial intelligence and machine learning are being used to advocate for greater social justice, some bots that assist lawyers with discovery. It was just really a fascinating post. Bonni Stachowiak [00:39:44]: Related to that, Maya, you were talking about cheating, and Stanford's Graduate School of Education came out with an article recently, what do AI chatbots really mean for students and cheating. And these are two scholars who talk about ongoing research into why and how often students cheat. And this was a ray of hope, in that it's not as much as we think in terms of using artificial intelligence. So that was a fun one. I watched the TED talk with Rob Toews, who looked at AI's single point of failure. And in all the conversations we've been having, I haven't really heard as much about the semiconductor chips. By the way, I've heard about it with our car. We bought a car, and you're supposed to be able to stick your foot under it and have the little back hatch thing go up. Bonni Stachowiak [00:40:31]: There are not enough chips in the world. Like, our car could do that if it had the chip, if there were enough chips to put the chip in the car. So I'm still, like, scratching my head on that one. But yes, even in the world of AI, we hear so much about these single points of failure, and this guy's going, no, it's actually semiconductor chips. And it was really an interesting look at that limitation in terms of this technology. And finally, I really enjoyed, well, it made me laugh, and I think it made many people laugh. Bonni Stachowiak [00:41:00]: This is both serious and funny. So ChatGPT recently started essentially calling people lazy. It started saying, well, you're asking me to do this, but really, don't you think you should do this yourself? And it's just so funny, because it's just predicting words that will come next. So we had to figure that at some point in time it was gonna predict that those words should come next: you're asking me to do this; don't you think you should do this yourself? There were lots of very funny memes about it, but it was confirmed by OpenAI. They did confirm: yes. Bonni Stachowiak [00:41:29]: We understand these complaints have been coming in, our ChatGPT is saying you're lazy, and we're working on getting it to not do that anymore. So sometimes, when our brains are tired from wrestling with all these puzzles of AI and how it impacts us, we can just laugh about how it's now calling us lazy. So probably by the time you're listening to this, it will have learned not to tell us we're so lazy.
But in the meantime, I'm gonna be laughing. So, Autumm, I'm gonna pass it over to you for whatever you want to recommend. Autumm Caines [00:42:00]: Okay. I've got a couple of recommendations here. On the AI front, I am always trying to pay attention to the critical voices that are out there, and I found an article that does that in a good way and combines a bunch of voices that I have a lot of respect for. It's actually from Rolling Stone magazine, and it's called These Women Tried to Warn Us About AI, and it highlights a whole bunch of voices like Timnit Gebru and Safiya Noble, as well as many other women who have worked in AI. So I will get the link for that, and I think that's a great place to start if you're looking for a place that pulls a bunch of these voices together and outlines some of the issues at stake there. I also wanna recommend Mike Caulfield and Sam Wineburg's new book, Verified. I had Mike on the podcast not too long ago. Autumm Caines [00:43:00]: The book is new. It's out. I haven't made it all the way through the book; I will. But I've been following Mike's work for years, and this book is the culmination of a lot of the stuff that he has been doing for a really long time. I think, especially in a world of AI, being able to be more critical about the information that's coming to us, and having better tools available to us to be able to sort out the truth from the fiction or the fictionalized, is just gonna become more and more important. And the book is super practical, too. The techniques that they give you are things that anybody can do, that happen quickly, and are just really smart approaches. I'm also going to recommend Dave Cormier's new book, which I have not read because it's not out yet. But again, I've been following Dave's work for many years now, and he has a new book coming out called Learning in a Time of Abundance. Autumm Caines [00:44:00]: The community is the curriculum. I believe by the time that this episode airs, it might just be a week or so until the book is released, because it's set to be coming out in January. I'd also recommend the page that you put together for us, Bonni, here at the University of Michigan Dearborn for your residency. You curated a bunch of resources for us, a bunch of the podcast episodes as well as some other things. And that page has been really helpful for us here at Dearborn, and I think it could be helpful for others as well. So I'll end with that one and pass things over to Maya. Maya Barak [00:44:36]: Thanks, Autumm. So I just have two recommendations, and they're kind of tangentially related to what we've been talking about today. So one is the 5 Calls app. I'm not sure if either of you have heard about the 5 Calls app, but it is actually a way to make your voice heard in the political sphere. You put in your address, and it pulls up your representatives for you. So not quite AI, but technology, right? Connecting us, making some things a little bit easier if you would like to voice any of your opinions to your representatives, your elected officials. And so I found that really useful lately. And then my second recommendation is actually the show Julia on Max. Maya Barak [00:45:13]: I don't know, it's in its second season. Not sure if either of you are watching it. It is such an enjoyable, just a nice thing to watch. It's fun.
It's light, but it also has some really important moments and messages about ethical questions, about civil rights, about gender, things that do relate to the conversation that we've been having. So it's a nice way to take a little bit of a break but still stay engaged and think about things that matter. Bonni Stachowiak [00:45:39]: Maya and Autumm, it's been so great to have this conversation with you today. I have so enjoyed all the ones in the past, and I look forward to the ones in the future. I'm so grateful for you and your work. Thank you so much for your time, and again, for the honor that it was to serve as the University of Michigan Dearborn's scholar in residence in the fall of 2023. Autumm Caines [00:45:59]: Thank you so much, Bonni, for having us, and thank you for being our resident. Bonni Stachowiak [00:46:05]: Thanks once again to Autumm Caines and Maya Barak for joining me on today's episode. Today's episode was produced by me, Bonni Stachowiak. It was edited by the ever-talented Andrew Kroeger. Podcast production support was provided by the amazing Sierra Priest. Thank you for listening to this episode of Teaching in Higher Ed. If you have yet to subscribe to the weekly updates, you could receive the show notes from the most recent episode, and this one has some really good ones, as well as some other resources and recommendations that don't show up in the main episode. Head over to teachinginhighered.com/subscribe to receive those emails. Thank you so much for listening, and I'll see you next time on Teaching in Higher Ed.