Classroom Caffeine

A Conversation with Brad Robinson

Lindsay Persohn Season 4 Episode 8


Dr. Bradley Robinson talks to us about artificial intelligence technologies, including how we can critically approach possibilities for teaching and learning with AI, and the deeply human nature of the ways AI tools were built. Brad is known for his work focusing on the creative and critical capacities of digital technologies in literacy education. Specifically, he has examined topics like novice video game design, digital platforms in and out of education, and artificial intelligence, all with a commitment to mindful, authentic, and just implementations of digital technologies. Dr. Bradley Robinson is an Assistant Professor of Educational Technology and Secondary Education in the Department of Curriculum and Instruction at Texas State University. You can connect with Brad via email (bradrobinson@txstate.edu) or on Twitter (@Prof_Brad_TxSt).

Resources mentioned in this episode: https://tech.ed.gov/ai-future-of-teaching-and-learning/

To cite this episode:
Persohn, L. (Host). (2024, Feb 13). A conversation with Brad Robinson (Season 4, No. 8) [Audio podcast episode]. In Classroom Caffeine Podcast series. https://www.classroomcaffeine.com/guests. DOI: 10.5240/1974-7A05-2E9B-7B45-C029-7

Connect with Classroom Caffeine at www.classroomcaffeine.com or on Instagram, Facebook, Twitter, and LinkedIn.

Speaker 1:

Education research has a problem: the work of brilliant education researchers often doesn't reach the practice of brilliant teachers. Classroom Caffeine is here to help. In each episode, I talk with a top education researcher or expert educator about what they have learned from their research and experiences. In this episode, Dr. Bradley Robinson talks to us about artificial intelligence technologies, including how we can critically approach possibilities for teaching and learning with AI and the deeply human nature of the ways AI tools were built.

Speaker 1:

Brad is known for his work focusing on the creative and critical capacities of digital technologies in literacy education. Specifically, he has examined topics like novice video game design, digital platforms in and out of education, and artificial intelligence, all with a commitment to mindful, authentic, and just implementations of digital technologies. Dr. Bradley Robinson is an assistant professor of educational technology and secondary education in the Department of Curriculum and Instruction at Texas State University. For more information about our guest, stay tuned to the end of this episode. So pour a cup of your favorite drink and join me, your host, Lindsay Persohn, for Classroom Caffeine: research to energize your teaching practice. Brad, thank you for joining me. Welcome to the show.

Speaker 2:

It's a pleasure to be here. Thanks so much for inviting me. I'm excited to think with you for a few minutes.

Speaker 1:

Thank you. From your own experiences in education, will you share with us one or two moments that inform your thinking now?

Speaker 2:

Yeah, sure. So over the past couple of years, a lot of my thinking has been around the influence of emerging artificial intelligence technologies on education in general, and literacy education in particular in my case. There are kind of two stories that come to mind when I think about that. One of them was when I was a PhD student at the University of Georgia. I taught a class to pre-service English teachers, secondary English teachers, and it was called Digital Tools in English Education. The basic purpose of the course was to explore, in kind of an open-ended, creative way, lots of different technologies for supporting literacy learning in English classrooms. So we would look at podcasting, for example, or digital storytelling. The very last unit of the class was called N plus one, and the point was to kind of say, well, what's next? What emerging technology should we be thinking about?

Speaker 2:

So in the fall of 2019, I don't even remember how I came across it, but I had read something about GPT-2. Someone had linked to a website called talktotransformer.com, and I started looking into it, and I was like, okay, so this is a website that allows you to interact with this thing called a language model, which I had never heard of before, and it uses some sort of algorithmic processes to generate text, and it seems pretty natural. The article that I read was very hype-driven, and I was interested in it. So I went to talktotransformer.com and started playing around with it, and immediately I was kind of struck by it. It was nowhere near as sophisticated as GPT-3 or GPT-4 or the other language models that people are using now, but it was still pretty impressive at the time, and I immediately started thinking, this is probably going to be a big deal here in a few years. And so in the N plus one unit, I took it into the classroom and just kind of started talking about it and sharing it with the students.

Speaker 2:

And we did this activity where we all picked the first sentence of a beloved novel. So I picked the first sentence of Ralph Ellison's Invisible Man. People picked the first sentences of other novels, and the objective was to put the sentence into talktotransformer.com, click enter, see what was produced, and then do some line breaks to kind of make a poem out of it. So the idea was to create what we were calling automated poetry, and it was just such bizarre stuff. And, as a quick footnote here, by the way, talktotransformer.com used GPT-2, I believe, but it had been kind of dumbed down a bit.
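The automated-poetry activity described here can be sketched in a few lines of Python. This is only a minimal illustration: the `generate` function below is a stand-in for a real language model like the GPT-2 behind talktotransformer.com, and its continuation text is invented for the example.

```python
import textwrap

def generate(seed: str) -> str:
    """Stand-in for a language model; a real model would continue
    the seed with statistically likely text. The continuation here
    is invented for illustration."""
    return seed + " and the silence that followed was its own kind of answer"

def to_poem(text: str, width: int = 24) -> str:
    """Turn a flat run of generated text into an 'automated poem'
    by inserting line breaks at word boundaries."""
    return "\n".join(textwrap.wrap(text, width=width))

# Seed with the opening line of Ellison's Invisible Man,
# then break the continuation into short poetic lines.
opening = "I am an invisible man."
poem = to_poem(generate(opening))
print(poem)
```

The only real mechanics here are the seed-then-continue pattern and the arbitrary line breaks that the students used to turn prose output into "poetry."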

Speaker 2:

Even at that time, OpenAI (this was when they were still kind of a nonprofit research outfit) were genuinely concerned about the influence that this technology could have, and so they deliberately constrained it. It was good and interesting at the time, but in retrospect it was nowhere near as sophisticated as it could have been, had they unleashed the whole technology, or as it is now with GPT-3, GPT-3.5, and GPT-4, et cetera. Anyway, the students started doing the assignment, and they immediately, just completely started freaking out, and this was in the fall of 2019. They started saying all the things that everyone has been saying since ChatGPT was deployed. What was that, November of 2021? 2022?

Speaker 1:

I think it became public in 2022, I do believe.

Speaker 2:

I should say widely used, right?

Speaker 1:

That's when they hit their million users in just a little bit of time, yeah, yeah.

Speaker 2:

And so this was well before that. But they were like, kids are going to cheat on their essays, English teachers aren't going to be necessary anymore, what's the point of writing anymore? So we just talked about it and we explored it. But again, this was an N plus one unit, so there was a lot of speculation, because I had just discovered this technology through meandering on the internet in the fall of 2019. We kind of left it there, and it was a very interesting thing. Then, slowly but surely, I started hearing more about it, so I started to take it more seriously and started to research a lot about it and learn a lot about language models and OpenAI. In early 2023 I published an article called "Speculative Propositions for the New Autonomous Model of Literacy," where I tried to take the prior thinking about the autonomous model of literacy, as Brian Street referred to it in the 20th century, and how that was challenged by ideological models and sociocultural perspectives on literacy, and think about how we might conceive of a new autonomous model that's reoriented around machine cognition as a focal way of thinking about literacy. So, yeah, I've just been doing a lot of thinking about it since then. That's story one. Story two happened at Texas State: I'm now a professor of educational technology and secondary education at Texas State University.

Speaker 2:

I teach an undergrad course every semester called Introduction to Educational Technology. It's similar to the class I taught at UGA, but rather than being for pre-service secondary English teachers, it's for any students in the curriculum and instruction department. I knew that at some point I needed to integrate a unit on generative AI, but I hadn't found the time or the energy to produce the content. In the spring of this year, the course was structured around a project-based learning unit idea that the students come up with on their own, one that's relevant to their discipline and age level. They keep that unit idea with them throughout the course, and when we explore different technologies, they ask, okay, how could this technology help us do something cool with this project-based learning idea? One of the units is just about different apps that people use. The assignment was very simple: pick two apps that are relevant to your project-based learning unit and explain how they might support your students' learning and creativity with the unit.

Speaker 2:

That semester I was teaching several sections of the class, and I had a total of 70 students. At least 10 of them, maybe a bit more, included ChatGPT among their apps. I should add here that most of my students right now are going to be elementary teachers; these were teachers who were going to be teaching elementary school, primarily. They said they had heard about ChatGPT, as we all had, so this was the app they wanted to include, and they wanted to use it to teach their elementary school students how to do research in relation to their project-based learning units.

Speaker 2:

I completely freaked out.

Speaker 2:

I freaked out in the way that my English students freaked out at UGA, because I was like, oh my gosh. Their write-ups, their assignments, made it clear that they understood ChatGPT to be this very reliable research tool that made the research process way easier and more natural than Google, and that they could bring that app to their elementary school kids and say, when you're researching photosynthesis, or whatever it is you're researching, hop onto ChatGPT and it'll help you find your information.

Speaker 2:

I realized at that moment it was a potential problem, not in the sense that teachers should never use ChatGPT or anything like that, but it was clear to me that my students didn't understand the implications of how they were using it. That was when my colleague in the program and I decided we really needed to redouble our efforts and create a generative AI unit, which we now have, where we walk them through learning about it. So those are two stories about AI in relation to teacher education that have really informed a lot of how I think about it, how I think about the ways it might influence education, and the ways that teachers might respond to it in their practice.

Speaker 1:

Those are two really great stories that I think help us not only follow your path, but also trace a bit of the history of how these AI tools have been introduced in education. As we were talking before we started recording, I've begun to play with some of these tools myself, not only as an instructor, but also thinking about how I can help my pre-service teachers potentially use a tool like ChatGPT to make their work a little bit easier on them. I don't mean easy in the freak-out kind of way; we're not looking to cheat here. We're looking to make our work clearer, stronger, more robust. I've been thinking a lot about how that can happen, because as soon as you said using ChatGPT to help elementary-age students in their research, there are so many potential pitfalls there. You're talking to an audience of elementary-age students who likely have cursory knowledge of the research process at best, or are still working to build their knowledge of the concepts that they're learning.

Speaker 1:

So it can be difficult to know when your chatbot has it right or when it's leading you astray, and really difficult to trace sourcing. There are just so many potential implications there that I think are really important to unpack. But the other thing there is your pre-service teachers recognizing the power of this tool. I think there always has to be this caveat that it's not a human, and so that element of teaching is different with humans than it is when we're being taught by robots.

Speaker 2:

Absolutely, and a couple of things to say about that. On the point you made about the stories loosely narrating the trajectory of OpenAI releasing ChatGPT: not a lot of people are aware that many of the developers at OpenAI were really uncertain about deploying ChatGPT. They were concerned about all the things that everybody else was. It's not as if they just weren't aware of it. They knew there was potential for the proliferation of mis- and disinformation. Language models can tend to reproduce algorithmic bias, to re-represent the biases, all the human ugliness that is on the internet, getting reproduced statistically and probabilistically through the algorithms and the language models. They knew all that, and so there was a lot of reluctance to release it. But they heard that other large organizations, like Google, had language models that were about to be deployed, and when they heard that, they freaked out, because they didn't want someone to beat them to market. From what I read, it only took them about two weeks or so to build the front end that we now call ChatGPT, which is just the user interface that allows you to interact with their language model, and so they really rushed it out. In some ways, when I saw my students talking about ChatGPT in the spring of 2023, what I saw was like a triumph of marketing. It's almost like the Xerox effect: we don't talk about photocopying, we talk about Xeroxing. We don't talk about search, we talk about Googling.
We don't talk about language models, we talk about ChatGPT. In some ways, I think it was a very shrewd, maybe cynical, kind of business move to throw that user interface together and put it out there to beat everyone to market. And they ended up being wrong, right? They learned that there wasn't another organization that was about to release its language model. But because they released theirs, that then prompted Google and Meta to try and get their language models out there too. The point being, corporate motivations are all entangled with the ways these technologies are created and deployed, and I saw that surfacing in some ways when my students were talking about it.

Speaker 2:

The other thing that I would say, to your point about interacting with robots: that's true, sometimes. I'm concerned, though, that when we talk so much about the machine dimension of these technologies, we forget their deeply human quality. Again, all the language used to train language models was derived from people's interactions on the internet, and, as I said before, all the beauty and ugliness that humanity is capable of expressing on the internet is then hoovered up into the training data and used to complete or answer whatever prompts you put in there.

Speaker 2:

At the same time, you have people in countries like Kenya who are playing the part of what they call the human in the loop, where they're shown certain outputs, and those workers in Kenya will say, well, this one's better than that one, this one's better than that one, and that's one of the layers of training that the model goes through.
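That human-in-the-loop ranking step can be illustrated in miniature. This is only a sketch: the outputs and judgements below are invented, and in real systems the collected preferences train a reward model rather than just being tallied.

```python
from collections import defaultdict

# Hypothetical candidate outputs the model produced for one prompt.
outputs = {
    "a": "Photosynthesis lets plants make food from sunlight.",
    "b": "Plants eat dirt.",
    "c": "Photosynthesis converts light energy into chemical energy.",
}

# Each human rater compares a pair and names the better output:
# (winner, loser). This stands in for the labeling work described above.
judgements = [("a", "b"), ("c", "b"), ("c", "a"), ("a", "b")]

def preference_scores(judgements):
    """Count pairwise wins per output. A production pipeline would
    fit a reward model to reproduce this ordering, not just tally it."""
    wins = defaultdict(int)
    for winner, _loser in judgements:
        wins[winner] += 1
    return dict(wins)

scores = preference_scores(judgements)
ranked = sorted(outputs, key=lambda k: scores.get(k, 0), reverse=True)
print(ranked)  # best-to-worst by human preference
```

The point of the sketch is simply that human judgements, not the machine alone, determine which outputs the model learns to prefer.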

Speaker 2:

And so there are humans in the process there. So when you're interacting with ChatGPT and it spits out its output, what you see there seems very robotic and machine-like. By the way, too, some people aren't aware of this, but the little dot-dot-dot that happens, that makes it seem like it's thinking, is just there as an effect, to make it seem like you're interacting with someone, like in a text messaging chain where the three dots appear on iMessage or whatever. It's there to create the effect of interacting with a person. But it's super important for anyone interacting with these technologies to always keep in mind that they are not artificial. They are deeply and irrevocably human, and that's something we should always keep in mind, even if we're looking at a screen and it's not always easy to see the human there.

Speaker 1:

In my mind, that is less a part of the conversation at the forefront right now. There are doom-and-gloom kinds of perspectives: artificial intelligence is going to take over the world, and it's tied to movies. It just makes me think of the tangled webs that we weave, and in particular not only the humanistic ways in which we can interact with artificial intelligence, but also the capitalistic ways in which this is weaving its way right into our lives.

Speaker 2:

This may not really fit, but you were talking about the doom and gloom. One of the things about ChatGPT is that it's like what Langdon Winner, the science and technology scholar from the 20th century, often talked about: technology as "tools without handles," which is a confusing formulation, but I always thought about it as when technologies are deployed without a clear use case. When ChatGPT was deployed and all these language models came online, there was all this speculation, people imagining ways you could use them, but these technologies were not developed by, for, or with educators at all. Yet what was the first domain of human life that people understood very clearly would be most immediately impacted by these technologies? It was education. And so I think that just because the technologies exist, I'm not sure that's necessarily reason for us to use them; they're like tools without handles. I think that as we, as educators, think about how to respond to these technologies, how we can use them, it's important that we have a clear sense of the use case, and that we're not using them just to fulfill some sort of hype-fueled expectation, or these inevitability arguments about AI taking over the world, not in that AI-apocalypse way, but just the claim that they're going to be deeply integrated into everything we do, and that if we don't teach our students how to use them, they're going to be left behind. That could be true, but again, it could also be marketing, and so it's worth asking whose interests are served when we accept those arguments uncritically and then begin to train our students to use these technologies. There's a certain self-fulfilling nature to those kinds of prophecies if we respond that way.

Speaker 2:

One other story that came to mind, I don't know if you'll ask for two, but I have been collaborating with a researcher in Germany on some game studies work, on the gaming strand of my research, and English is his second language, which is relevant to the whole ChatGPT use case. He and I and another author have written several manuscripts together. The German author's name is Andrei Salderna, I hope I'm pronouncing that correctly, and the other author is someone you might be familiar with, Sam von Gillern, who's a literacy researcher at the University of Missouri. On the first manuscript we wrote together, when Andrei, the German colleague, sent me his writing, I was super impressed with his written English, English being his second language. I'm always just blown away by people who do academic writing in a language that's not their native language, because academic writing can be so specific; it's really impressive. Nevertheless, I still had to spend time, as the lead author, kind of overwriting, you know, making some tweaks, some places where not quite the right preposition would be there. If you're a native speaker, you might kind of get the nuance, but it's really difficult to explain why you would use "of" rather than "in" in a specific place. So I'd go through and fix all those little things, but otherwise it was great.

Speaker 2:

We were working on our second manuscript a bit later and he sent me his writing and I started reading through it, thinking I would have to do the same thing, and I was like, wow, his writing has gotten so much better. This is amazing. Like this is I mean, this is really amazing. I didn't have to change anything.

Speaker 2:

And so we had a research meeting, and I complimented him. I was like, Andrei, look, your academic English is super impressive. I just wanted to let you know that I'm really impressed by your ability to write academic English, it being a second language. And he kind of, you know, smiled a little bit, and he was like, well, I should tell you that the first time, I didn't do this.

Speaker 2:

But the second time, I took my writing, kind of chunk by chunk, and put it into ChatGPT and said, can you clean this up a little bit? So, unbeknownst to me, he had been using it to clarify his writing a little bit before submitting it. And to me that was a moment, because, if people have read any of my work on this, I'm pretty skeptical of it, and I think it's important to be critical; for me, that's just my disposition. But at that moment I was like, okay, that's actually doing really meaningful work for him as a writer, and it really made sense to me as a use case. And so I think that for educators, when they're thinking about using these technologies, keep in mind: what's the use case, what are you using it for, how is it really helping you support your students and their literacy development in meaningful, authentic ways?

Speaker 1:

Right, because certainly there are really powerful use cases like you just described, as well as some pretty nefarious uses for those kinds of tools too. So, yeah, I think that, in my mind, that's one of my next big steps is determining, you know, how can we put these tools to good use? They are here and they're here to stay, so how do we in fact leverage them for good instead of evil?

Speaker 2:

Yeah.

Speaker 1:

So, brad, what else do you want listeners to know about your work?

Speaker 2:

So, about my work in general: something that I've been thinking about a lot lately with these technologies, and I've had these conversations in professional development sessions I've done at the university, with other faculty, and also with my teacher candidates, is this very simple framework, and it's just "about before with." Something I think a lot about with educational technology in general, but focusing specifically on generative AI, is that before teachers at any level decide to use this in their teaching, whether it be developing lesson plans or whatever it might be, it's important that they learn about the technology before they start teaching with it. There are just so many resources out there now for doing that. OpenAI provides pretty good documentation, and pretty readable documentation too, about how ChatGPT works. They also recently released some information about using it in education in particular. Of course, that stuff should be read critically. They're not educators. It's important, I think, for us to own our expertise as educators and not just be like, let's just listen to what the techno gods have to say to us about this stuff.

Speaker 2:

In May of 2023, the Office of Educational Technology at the Department of Education released a report called Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. It's a fairly long, roughly 70-page document that does a really great job introducing the basic concepts around AI and giving very workable, easy-to-understand definitions. It has a whole chapter on ethical considerations related to issues around algorithmic bias, data extraction, misinformation and disinformation, intellectual property, all the kinds of things that are relevant to debates around language models. And it's all from an educational perspective: it has a "What is AI?" section, ways AI might influence learning, ways it might influence teaching; it talks about assessment; it talks about research and development; and then it has a whole section of recommendations. I think it's a pretty good document for teacher educators and pre-service teachers who are interested in this area, an excellent resource to consult as you're thinking about how to respond to these technologies.

Speaker 2:

Back to "about before with": the Office of Educational Technology's report on AI and education is a great way to learn about these technologies before you start teaching with them. Because, see, that's what I realized my students were doing: they were starting to teach with them, and therefore having their students start learning with them, before either of them knew really anything about them. So, again, the basic heuristic is "about before with." Before you teach with these technologies, learn about them. And the corollary is also true: before your students start learning with them, make sure they have learned about them, and this needs to be developmentally appropriate for the age level. If the students are not able to understand the basic ideas about these technologies, then maybe it's better to wait until they're of an age where they can. The same goes when teachers start to think about integrating these technologies or having their students use them, just as they need to learn about them before they teach with them.

Speaker 2:

The students need to learn about them before they start learning with them. And you're right that these technologies are here to stay; they're going to be with us, and they're going to start getting layered into all kinds of different programs, even now. When I open Google Docs, I now have a little Bard thing I can click on the left side of the page, basically similar to a ChatGPT prompt window, where you can ask a question or insert a prompt or whatever, and it'll spit the text right into your Google Doc. I'm sure many school systems that subscribe to Google and have it integrated into their school system might disable that function, but it just goes to show how these technologies are definitely going to start coming into schools, in ways we may not even know. So again, learning about them before learning with them, I think, is really important.

Speaker 1:

That framework that you've given us, "about before with," I find incredibly helpful, because I think this is a field that many people are interested in and that we are obviously still learning so much about. But it is challenging in so many ways if someone jumps straight to integrating these tools, or learning with them, without really understanding how they're built, what they're for, what they're not so good for. Right? I think that without that contextualizing knowledge, it puts us a little further into the danger zone, by not understanding anything about them.

Speaker 2:

Yeah, I mean, if you have a teacher who uses it to chart out a pacing guide, or notes on some sort of historical event, or something to kind of help them speed up their content production for some lesson they're doing, maybe that's okay. But they need to know, when they do that, that it's prone to providing incorrect information, and that the information only goes up to a certain date unless the developers deliberately go in and update it. If you know that as a teacher, then you know that if you consult it for something, you should read it critically. Hopefully, if you're a history teacher, you'll have your historical expertise that you can use as a filter through which to engage with those platforms, and that will prevent you from falling into that trap.

Speaker 1:

I'm absolutely with you on that. I think we've got to have that human vetting of the information, that knowledge and expertise that we bring contextually and in an expert way that the technology just really can't achieve. That really makes me think about a use case that I was considering this morning. I did some work in the AI course that I'm enrolled in right now, that I mentioned earlier, and someone had mentioned in a chat using AI to generate potential accommodations for a lesson plan.

Speaker 1:

I thought, hmm, I know that's something that historically my students have had a little bit of trouble with, particularly coming up with accommodations in their planning for diverse learners in their classrooms. If you ask ChatGPT to give you accommodations for a lesson plan on a particular book, because my students are working on read-aloud plans right now, it'll give you 10 or 12 different recommendations. As you said, that doesn't mean those are 10 or 12 great recommendations. They're ideas, and they're something that we can then work with. When it comes to that sort of thinking, I think of it as a partner in thinking. It doesn't mean that you just take everything as it is, that you don't view it critically, that you don't vet that information. It can be a good way, I think, to generate ideas, so that we don't feel like we're working in isolation, or if there's some area where we don't feel particularly strong and we're still working to build our expertise or our knowledge of the different options. I think it can be really useful in those cases.

Speaker 2:

Absolutely. One of my colleagues at Texas State is in the computer science department, and he does work in the area of natural language processing. He says he tells his students to think of it as a smart fellow student whom they don't entirely trust. You can ask that student, can you help me with this thing? You know they're pretty smart, but you also know not to completely trust everything they say. That might be a helpful metaphor to think with.

Speaker 1:

I think that's super helpful. We can think of it as peer suggestions, but it's not the peer that you've worked with over and over again and really trust. It's the peer that you were partnered with, perhaps, incidentally. Approaching it with that in mind, I think, is a really smart way to frame it, and it translates the scenario in a way that many people, particularly college students, or even high school and middle school students, would pretty readily be able to understand.

Speaker 2:

Yeah, I think so too.

Speaker 1:

That's great. What else do you want us to know about your work, Brad?

Speaker 2:

One thing, if people are interested: Ty Hollett, another literacy scholar, at Penn State University, and I are currently guest co-editing a special issue of Reading Research Quarterly called Literacy and the Age of AI. We're bringing together an interdisciplinary cohort of scholars, both in and out of literacy studies, to really reckon with a lot of the questions about the implications of these technologies for reading, writing, speaking, listening, and creating in the 21st century and moving forward. I would encourage listeners, if they're interested, to look out for that. It should be out next year.

Speaker 2:

There are a few other really cool special issues as well. English Teaching: Practice and Critique currently has a special issue in process on this, and Learning, Media and Technology also does. There's a bit of a lag with academic publishing responding to technological innovation, but I think a lot of that is going to be coming out at some point in 2024, when a lot of leading voices across diverse fields interested in these questions around AI and education will become public. That should be a great resource for people to keep their eyes out for.

Speaker 1:

It's super helpful to have a bit of a compass in the world of academic publications because, as we know, it can be a little bit tricky to navigate, even for those who work inside that field, but particularly for those who may be working just outside of academia. It's super helpful to know that those things are in the works and coming our way in the next year or so. Is there anything else you want to share, Brad?

Speaker 2:

Yeah, one other thing that I'll say is that, as a culture, as a society, and as educators, we are understanding artificial intelligence largely as a novel phenomenon. We have known through science fiction and speculative fiction that artificial intelligence is an idea in the world, but it's only recently that we've really started to understand the ways it can influence our lives and teaching and learning. But I think it's really important to understand that artificial intelligence is effectively a set of computational processes that have material dimensions to them, server farms, energy extraction, labor performed by people in different countries, all these kinds of things. It has a material impact on the world. That's one thing. But also, these same computational practices, like machine learning, are not new in education.

Speaker 2:

So take a reading platform like Epic. Epic, if you're not familiar with it, is a digital reading platform for education. Basically, it provides a collection of e-books online, some of the books have quizzes, and it conforms to your basic structure of reading and answering questions. But one of the interesting things about Epic is that when you open it up as a user, it actually looks a lot like Netflix. When you open your Netflix account, it has tiles with the different shows and movies you can watch, like "For You" or "Trending Right Now." The Epic landing page, when you log in, evokes that. It's very similar, and it has a "Recommended for You" pane. The way that pane exists is through machine learning, through algorithmic prediction. On a fundamental level, absorbing lots of data, processing it, and then making predictions from it

Speaker 2:

is at the heart of what's happening in language model technologies like ChatGPT. So it's just important, I think, to understand that these technologies have been playing on the lower frequencies of literacy, learning, and living for years now, and we've only become aware of them when the wow factor of engaging with an AI chatbot blows you away because it sounds so natural.

Speaker 2:

But if you have questions or concerns about algorithmic processes and their influence on literacy education, or education in general, then some of those concerns also hold for other platforms like Epic that are driven by algorithmic prediction technology. So that's just another thing I think it's important to remember about these technologies. And that, by the way, is at the center of my current research: understanding how we got here with AI, and the ways that AI-powered platforms have been shaping literacy technologies that we may not have been paying attention to, even before the deployment of language model technologies like ChatGPT.

Speaker 1:

What an interesting thing to think about, right? Those things that have been happening all along that aren't so in-your-face, that don't have such a wow factor, and how they're already impacting literacies in many hidden ways, I would say. So, yeah, I look forward to reading more about that work as you move along with it. So, Brad, given the challenges of today's educational climate, what message do you want teachers to hear?

Speaker 2:

Well, I will say first that the challenges of today's educational climate are legion and intense. At the same time, the first thing that I'll say is something that I always say at the end of my courses: in my opinion, teaching is literally the most important job. I know that's kind of a cliche, but I think it's important to remember, when we as a field are facing so many headwinds, whether they be economic, political, technological, whatever they might be, that the work you are doing is, in my opinion, perhaps some of the most important work that people can do in their lives. So that's one thing to hold on to, just the importance of it. Another thing that I would say, when it comes to AI technologies in particular, is that for me, as a literacy researcher, as a writer and a reader, writing has been, and continues to be, a way that I come to understand myself, the world, and other people, and that's a message that I deliver to students. That's something that literacy scholarship has taught us: literacy is deeply relational, in the sense of getting to know ourselves and understanding each other and the world. However teachers decide to respond to AI-powered platform technologies, I think it's really important to keep in mind, given the nature of writing and the way it connects us with ourselves, each other, and the world, to continue asking yourself: what does it mean when massive, globally scaled algorithmic processes start to interface with that process of coming to understand ourselves, each other, and the world?

Speaker 2:

Some people may say, "Yes, that's scary, I'm just not going to use it," and other people may arrive at a different conclusion, which is fair. But I do think it's important to remember, what was it? Kranzberg's first law: technology is neither good nor bad, nor is it neutral. A lot of the talk around ChatGPT describes it as a tool, you know, "it's just a tool," and I use that language too, but tools don't just pop into existence. They come shaped by certain values and certain ideas about use, and that's the same with all of these technologies. So I think just being super mindful about the emergence of these algorithmic processes as they interface with literacy, learning, and living is an important consideration.

Speaker 1:

I couldn't agree more, and I know that a lot of your work has to do with this mindful integration of technology and AI tools, so I really appreciate all of those reminders, because it is easy to get caught up in the excitement of either "it's good" or "it's bad." I love that quote, too, as a reminder that it's neither, but it's also not neutral. Perhaps that is where human end users can really find themselves in this work, because it is all of those things and it's none of those things. I think it takes us bringing our own critical lens and our own identity as literacy learners, hopefully lifelong literacy learners, to really understand what it means, how we best use it, and how we can leverage it for a better world.

Speaker 2:

Absolutely.

Speaker 1:

Well, Brad, I thank you so much for your time today and I thank you for your contributions to the world of education.

Speaker 2:

My pleasure. Thank you so much for the invitation. It's been super fun to chat with you for a little while. Thank you, you too.

Speaker 1:

Dr. Bradley Robinson is known for his work on the creative and critical capacities of digital technologies in literacy education, specifically examining such topics as novice video game design, digital platforms in and out of education, and artificial intelligence. His commitment to mindful, authentic, and just implementations of digital technologies runs deep, and it informs his work in support of ethical and equitable literacy education across ages and contexts. His work has appeared in Written Communication; Learning, Media and Technology; International Journal of Qualitative Studies in Education; Qualitative Inquiry; Literacy Research: Theory, Method, and Practice; Postdigital Science and Education; and English Journal. Formerly a secondary English teacher in North Carolina, Brad holds a PhD in Language and Literacy Education from the University of Georgia. He also holds a Master of Arts in English from Middlebury College's Bread Loaf School of English in Middlebury, Vermont. Dr. Robinson is an Assistant Professor of Educational Technology and Secondary Education in the Department of Curriculum and Instruction at Texas State University. You can connect with Brad via email at bradrobinson@txstate.edu, that's b-r-a-d-r-o-b-i-n-s-o-n at t-x-s-t-a-t-e dot edu, or on Twitter at prof underscore Brad underscore TxSt. For the good of all students, Classroom Caffeine aims to energize education research and practice.

Speaker 1:

If this show gives you things to think about, help us spread the word. Talk to your colleagues and educator friends about what you hear. You can support the show by subscribing, liking, and reviewing this podcast through your podcast provider. Visit classroomcaffeine.com, where you can subscribe to receive our short monthly newsletter, the Espresso Shot. On our website, you can also learn more about each guest, find transcripts for our episodes, explore more topics using our drop-down menu of tags, request an episode topic or potential guest, support our research through our listener survey, or learn more about the research we're doing on our publications page. Connect with us on social media through Instagram, Facebook, and Twitter. We would love to hear from you. Special thanks to the Classroom Caffeine team: Leah Berger, Abaya the LuRu, Stephanie Branson, and Shaba Hojfath. As always, I raise my mug to you, teachers. Thanks for joining me.