The AI Superhero Podcast

AI Superhero 101: Navigating the World of Artificial Intelligence with Star Wars, I, Robot and Kanye West!

Chris & Matt Season 1 Episode 1

Are you interested in how artificial intelligence is affecting our world? Look no further! Introducing Chris and Matt, hosts of the AI Superhero Podcast. 

On this episode, they take you on a tour of the fascinating world of AI, demystifying complicated ideas and techniques in a familiar and lighthearted manner. 

In the first episode, they examine the fundamentals of AI and consider how it can affect sectors like education and healthcare. 

They also look at AI through the prism of popular culture icons like Star Wars, Kanye West, I, Robot, and The Terminator. 

Don't miss this podcast, which is both educational and amusing. 

Listen in, share the episode, and leave a comment to let Chris and Matt know your thoughts.

Check out AISuperhero.org for more information and sign up to the regular newsletter at https://bit.ly/3CP2qL5!

AI Superhero Podcast: Season 1 Episode 1

Chris: [00:00:00] Hello and welcome to a brand new podcast. This is the AI Superhero Podcast. Thank you for tuning in. My name is Chris Visser, and I hope to take everyone on a journey with me looking at artificial intelligence, what it means for everyone, all the tools that are out there, all the exciting innovation, and I'll be joined by my good friend Matt Bolton on this quest. Hoping to try and test these tools, give hints, tricks, and tips for everyone along the way. I hand over to you, Matt. What do you know about artificial intelligence? 

Matt: What do I know? Very little, apart from the Terminator and some of the software you can use at work and at school. But other than that, no, I don't know. 

Chris: There's a lot of people that still don't really know what AI is, or what the initials stand for. I searched for AI and I got some interesting results earlier. [00:01:00] Artificial Insemination was one.

Matt: That's a different podcast.

Chris: That is a different podcast, but we are talking about artificial intelligence. So that's the theory and development of computer systems to be able to perform tasks normally requiring human intelligence. So that can include visual perception, speech recognition, decision making, and translation between languages, just to name a few areas.

Chris: Matt, have you heard of ChatGPT?

Matt: I have heard of ChatGPT. You mentioned it to me the other day. 

Chris: I did, yes. Everyone's very excited about it, and I think this is where a lot of the enthusiasm for AI has come in at the moment. It's called generative AI, and it's a tool which OpenAI has developed where the AI is essentially a chatbot.

Chris: And it acts in a way where you can have a conversation with it, and it gives you colloquial chat friendly [00:02:00] responses, which anyone can understand. So you just, you load it up and you can ask it pretty much anything. So you can ask it for a recipe based on various ingredients you might have. 

Chris: You can ask it to write a poem. You can ask it to write a piece of code even, or to set up an Excel document, anything you want, and it's causing a lot of interest and also a lot of concern. It could be used in education, and there's a risk that kids could be using it for homework.
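A minimal sketch of what one of those requests looks like through OpenAI's API rather than the chat window, assuming the official `openai` Python package (v1 or later) is installed and an `OPENAI_API_KEY` environment variable is set; the model name and prompt are only examples:

```python
# Minimal sketch: asking a ChatGPT-style model for a recipe and a poem.
# Assumes the official `openai` package (v1+) and an OPENAI_API_KEY
# environment variable. The model name is illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user",
         "content": "Suggest a recipe using eggs, spinach and feta, "
                    "then write a four-line poem about it."},
    ],
)

print(response.choices[0].message.content)
```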

Chris: And now just for our listeners, you're a teacher Matt, aren't you? So does this concern you about kids possibly using this tool? 

Matt: They could do, but I think a good teacher would be able to tell if, all of a sudden, 15-year-old students are starting to write like 20-year-old professors. I think they'll be able to tell, especially with some of the language that ChatGPT would use.

Matt: It's human, but it's not kid. So I don't think it's at the stage yet where, with a decent [00:03:00] teacher who knows the class, it would pass muster, so to speak. 

Chris: Yeah. The New York City education department has banned it from their devices. Elon Musk tweeted "goodbye homework" in relation to the tool.

Chris: And so would you be happy to get rid of homework for your students? 

Matt: I would, yeah. I don't see it. They say the logic behind it is that it's meant to add an extra hundred hours or so of learning in an academic year, but if the students are rushing it on a bus, they're not really doing it. Or if they're copying off each other, how much learning is actually going on?

Matt: There are also loads of socioeconomic issues around homework. If you've got access to a computer, or decent computers and decent software, you might be able to do better at the homework than if you haven't. And it's a pain in the balls to mark, which is [00:04:00] extra marking for teachers when you've got other marking and planning and admin stuff too.

Matt: So I don't personally see the benefit of it, but if I'm in a department that tells me I need to do homework, then I shall do. 

Chris: Interesting. So actually it's another reason not to do it, if kids are just going to cheat and bypass the system. So AI, yeah, it's definitely there. I think it's still there as an aid, but it gets things wrong.

Chris: There's a lot of, we're gonna talk about a lot of different topics and areas linked to it. It's a fascinating topic. It's often linked to machine learning. Do you know what machine learning is? 

Matt: Is that the Turing test? 

Chris: Turing test. Oh, that's, that's a good one. That is when a computer can trick someone into believing that it's a human.

Chris: So if a computer passes the Turing test, it has pretty much finally tricked someone into believing it's a human. But machine learning is more about building methods that learn. It's a branch of AI and computer science that focuses on the use of data and [00:05:00] algorithms, basically trying to use data to create outcomes.
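As a toy illustration of "using data and algorithms to create outcomes", the sketch below fits a model to a handful of made-up numbers and then predicts an outcome for new input; it assumes NumPy and scikit-learn are installed, and the data is invented purely for illustration:

```python
# Toy machine-learning sketch: learn a pattern from data, then predict outcomes.
# Assumes NumPy and scikit-learn are installed; the data is made up for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

hours_studied = np.array([[1], [2], [3], [4], [5]])   # input data
test_scores   = np.array([52, 58, 65, 71, 77])        # observed outcomes

model = LinearRegression().fit(hours_studied, test_scores)  # "learn" from the data
print(model.predict(np.array([[6]])))  # predicted outcome for unseen input
```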

Chris: So it's very much linked to AI, and that's definitely something we can look into at some point: deep learning, neural networks, and so on. But let's try and keep it a bit more simple for now. I asked ChatGPT for some advice on making a podcast about AI and its implications for mankind.

Chris: Because that seems fair enough for the time being. And to start with, it was saying the main thing should be to explain the goal of the podcast. Do you think I've managed to explain that so far? 

Matt: Yeah, I think you have, I think you've done sterling work, sir. 

Chris: Yeah, it's trying to assess the impact on society, not just for teaching and education; it's gonna be vast, isn't it? [00:06:00] It's going to have a lot of implications for transport, healthcare, the creative industries, marketing, sales, and there are gonna be a lot of ethical questions linked to it. As we go forward there's gonna be general AI and more narrow AI linked to specific industries.

Chris: And where we are now is it's starting to explode and become more a part of our day-to-day than ever before. It's gonna impact these industries and sectors more than ever before. And as I said, it's linked to machine learning, language, and potentially robotics. What do you think? You talk about the Terminator.

Chris: Does all this scare you? What are the ethical implications? 

Matt: I didn't sit through those James Cameron movies for nothing. So I think we need to remember the issue, as you said, with the Turing test: a computer can eventually trick people. Yeah. Deep Blue?

Matt: Was it IBM? That beat Kasparov at chess. Yeah. So that's already there. But ethically, if [00:07:00] you do push computers to the nth degree and they do become self-aware, then you can't really ask them to stop; they'll take over. Cuz I'm always nice to Alexa just in case. But you've got to worry.

Matt: What do you think? 

Chris: I think you need to have a framework. There's a phrase often used with AI, "garbage in, garbage out". If you put in a bunch of rubbish, you will end up with a bunch of rubbish. I've done some tests with AI as part of my day-to-day job, and that is true: if you've got a certain data set and there's bias in that data set and it's not particularly accurate, the AI will be skewed and inaccurate and have issues.

Chris: And if it's not analyzed efficiently or effectively, you're gonna have problems. So I'm concerned about that. If AI's in the wrong hands, you could have a disaster. There are various [00:08:00] tools out there. We talked about ChatGPT, and there are various others; I mean, it's absolutely everywhere.

Chris: I would encourage you to have a go with it. But another tool is DALL-E 2. Have you seen that one? 

Matt: Darley? Who? No. 

Chris: DALL-E 2. So that's a tool where you type in what you want, like an image. So you can say, I want a picture of a unicorn flying in the air while eating cheese or something.

Chris: And it will then present options: it'll present images of a unicorn flying in the air while eating cheese. It'll probably present four different images, and then you choose which one's the most accurate or which one you want, and then you can download it. And there you go, that exists.
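For the curious, roughly the same request can be made through OpenAI's image API rather than the website; a hedged sketch assuming the `openai` Python package (v1 or later) and an `OPENAI_API_KEY` environment variable, with the prompt and image size chosen only as examples:

```python
# Sketch of generating four candidate images from a text prompt with DALL-E 2.
# Assumes the `openai` package (v1+) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

result = client.images.generate(
    model="dall-e-2",
    prompt="a unicorn flying in the air while eating cheese",
    n=4,                # four options to choose from, as described above
    size="512x512",
)

for image in result.data:
    print(image.url)    # each URL can be opened or downloaded
```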

Matt: Yeah. Can't see any use. When you talk about the ethics, there's the film I, Robot, starring Will Smith. Chris Rock slapper - Will Smith. 

Chris: I've actually seen that. I'm aware of I, Robot. Yeah, I'm aware of it. 

Matt: The writer of that, Isaac Asimov, [00:09:00] who's a very famous writer, came up with the Three Laws of Robotics, and that's what a large part of the film and the book I, Robot is about.

Matt: And this might answer the ethical stuff. I shall read them out if I have permission. 

Chris: You go ahead, please. 

Matt: Number one: a robot may not injure a human being or, through inaction, allow a human being to come to harm. So that's the first one. Number two: a robot must obey orders given to it by human beings, except where such orders would conflict with the first law.

Matt: And number three, a robot must protect its own existence as long as such protection does not conflict with the first or second law. So you could think that even famous sci-fi writers (I think he's American, could be Russian) have conjectured, if that's the right word, what AI might do or what it might be like, and they've come up with these ethical frameworks. [00:10:00] So in the future you could maybe have AI-powered weaponry fighting each other instead of people, like AI-driven drones fighting each other instead of people. But I just wondered if that's already been thought of and people have actually thought about that. Do we have to worry about a Terminator? 

Chris: I think we do. Yeah. I think we could do a whole series on ethics and AI and data, because it's a vast, rapidly growing area, and I think there's much to be done in it. I spent some time last year working with The Open Data Institute, looking at the Data Ethics Canvas they put together, which is built for machine learning and AI and ensuring that basic points are put in place so that the Terminator situation, or the bias we were talking about, doesn't happen.

Chris: [00:11:00] Making sure your sources are robust, that you're thinking about rights, about data sources, you're thinking about limitations, you're thinking about legislative context. We can talk, we can go down a certain path here, and I do think that there should be better and more robust legislation, which probably doesn't exist.

Chris: And I think governments need to start getting on top of AI and machine learning pretty quickly. When these tools are being built, when ChatGPT and the like are being built, and they are being built rapidly now in Silicon Valley, are these tools actually ethical? Are they fit for purpose?

Chris: Are they gonna cause harm or damage? Are they really being robustly checked and thoroughly tested? Because the issue is they're being built at such a rapid speed. Now, Elon Musk has said that the biggest threat to humankind is not nuclear weaponry but AI. Are we really robustly testing and checking these tools and the negative effects they could have on people?

Chris: I'm not sure we are.

Matt: Thinking about that though, is Elon Musk right about that? Is AI [00:12:00] the biggest threat to humanity compared to other things like global warming, climate change, Love Island? Isn't all of that more of a threat than AI, or is he just catastrophizing?

Chris: I think he's just trying to make a point that maybe AI and its implications are not being discussed and thought through in the detail they should be, and I think he's correct. Personally, I think a lot of people will sleepwalk into AI. AI is very exciting and we're gonna talk about some of the fantastic things it can do and some of its capabilities, but I think you've also gotta be aware of the negative implications, and of the harm this information and data could do if it gets in the wrong hands. Hopefully it can assist people with their jobs and make lives better, but there is also the threat that it could kill people's jobs as well. So [00:13:00] there's a whole wealth of ethical and philosophical issues to be aware of, and the potential impact on jobs in society.

Chris: Fairness. Algorithms and the decisions they make; these are decisions algorithms are making even now, on the welfare system and things like that. It's just... but what... 

Matt: You mean instead of employing, like, nurses to check PIP claims, for example, the personal independence payments, you could have an AI.

Matt: An impartial AI could do the interview. The claimant types loads of stuff into a form, and then the AI decides whether they're worth giving the money to. So you think that could be an avenue?

Chris: I think, yeah, AI can have a lot of implications. And in the healthcare sector there is a risk that it could replace jobs, and it could be a triage service. Absolutely. I saw [00:14:00] someone create a triage tool using ChatGPT the other day: they created a form where they put in their ailments and it pumped out some potential illnesses. And likewise, you could put your potential illnesses into a form and it comes up with potential medication. That could bypass doctors and nurses if it's used in a certain way. And I also saw somebody creating a diet for themselves, given their body weight and body type.

Chris: Just using this tool, the other day. So I think the implications for AI are absolutely insane. We'll talk about some of these tools as we go on, but it's just fascinating. And I was talking before about the images, and you were asking me how much use image-making software has.

Chris: And the answer to that is: vast. Because if you think of the creative industries and graphic designers, their jobs are gonna be at risk if [00:15:00] they're not commissioned to create graphic designs or logos or images, if AI tools can do it inside five minutes to a high level. There are some issues around copyright and who owns the rights, but some of the guidance seems to suggest that if you put in the prompts, then you own a lot of the rights to the output.

Chris: Particularly with how it's developing. Some of these tools like DALL-E and Midjourney, some of what they're producing is just epic imagery, fantastic logos. And they're gonna leave creatives without jobs. Imagery by AI was one of the top things MIT put forward in the US among the biggest innovations for 2023. But I think AI in general, as you say, and healthcare in particular, is gonna have massive implications. Hopefully we can talk through various themes as we go. 

Chris: And you could see, with the NHS in the UK, it [00:16:00] could be used more and more as a triage service in particular, to help patients and free up doctors and nurses to do more urgent acute care. But yeah, there's a risk that AI's gonna miss issues if patients can't use the tooling, don't engage with it properly, and the AI doesn't have the correct data to give the right solutions. 

Matt: Plus it might make 111 faster.

Chris: Yeah, it could be used for that, absolutely. You could see it in three or four years' time being even faster. So is it called the 101 or the 111 hotline? What is it called, Matt? 

Matt: 111. 

Chris: But I could probably create a tool tonight and put it on a website that people could use, and I could use ChatGPT to write some code.

Chris: I've seen someone do it: you just put your ailments in and it gives you some potential illnesses. Whether it's accurate or not, though, we talk about ethics and all that, and I wouldn't feel [00:17:00] confident doing that, because I could be giving people misleading information.
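Purely as an illustration of the kind of tool being described, and emphatically not something to rely on for medical advice, a sketch like the one below could wire a symptoms form to a chat model; it assumes the `openai` package (v1 or later), an `OPENAI_API_KEY` environment variable, and an example model name:

```python
# Illustrative sketch only: NOT medical advice and not fit for real-world triage.
# Shows how a simple "ailments in, possible causes out" form could call a chat model.
# Assumes the `openai` package (v1+) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def possible_causes(symptoms: str) -> str:
    """Return a speculative, non-authoritative list of possible causes."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "List possible causes for the symptoms described, "
                        "and always advise the user to see a real clinician."},
            {"role": "user", "content": symptoms},
        ],
    )
    return response.choices[0].message.content

print(possible_causes("headache, sore throat and a mild fever for two days"))
```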

Matt: Yeah, you've also got Dr. Google, which is quite deadly, isn't it? People going online, checking their symptoms and deciding that they've got, like, hysterical teenage pregnancy or some virus that they've not got. So it could lead the way to enabling people to abuse the system by just saying 'give me a day off'.

Matt: 'I've got leprosy.'

Chris: Yeah, absolutely. And you mentioned Google; this tool could be a Google replacement, because with the chat interface and the way it pulls responses together, people may turn to this tool instead of using Google. So there's a risk here for search engines like Google, that it could be a Google killer.

Chris: What was your view on that? 

Matt: That, I don't know whether that would happen though, cuz Google's used quite a lot. But then Alexa, things like that and [00:18:00] the, what's the Google one called? It's not Alexa. 

Chris: Oh, the Google version of Alexa? I'm not sure. I've actually got one somewhere. I never use it cause I've got the Alexa one.

Chris: The Google Hub or something like that? 

Matt: Google Hub. Google dot or something like that. That's a version of AI. 

Chris: Google Nest Hub, actually. There you go. 

Matt: Do Google what? 

Chris: Yeah, Google Nest or Google Nest Hub. Oh, Google Home as well. 

Matt: So yeah, that's like a version of AI, isn't it?

Matt: So that's something you have in the house: you ask it questions, it searches the web for you, it sets timers, and it probably listens in to your private conversations and records them all. That's a worry. How much of the AI technology is able to track what you're doing, listen to what you're talking about, and then suggest products to you?

Matt: But that might be a worry. 

Chris: Have you heard of Descript? 

Matt: The Irish band? 

Chris: No, not The Script. 

Chris: Descript, descript. 

Matt: No, it's fair [00:19:00] to assume that I've not heard of something. If you are going to say, have you heard of it?

Matt: The answer is generally, no. No. I have not. 

Chris: You've heard of The Script? They had some good tunes, didn't they? Or maybe they didn't, I don't know, I get confused. Yeah, Irish rock band. Danny O'Donoghue. Anyway, Descript: I was playing with that website earlier, and it takes your voice. You read a script for 10 to 30 minutes, the AI learns your voice, and then you can put any text into the tool and it reads it out in your voice.

Matt: I don't like that. That's creepy. 

Chris: Yeah. My wife didn't like that when I mentioned it to her. She was like, that sounds like possible fraud; she's worried about people calling the bank using your voice. 

Matt: Kanye West, because he's Kanye West, had a hologram made of Robert Kardashian, Kim Kardashian's dad, who, unfortunately for him, is dead. [00:20:00] He had a hologram of him made; at least, that was one of the birthday presents he got her.

Matt: Yeah, if you can do that. I remember this, and it was going, you've married a very stable genius, you are beautiful, and all that. But if you could have that, a voice recording of somebody who's dead, that could really mess someone up. Or you could potentially commit fraud by, I don't know, doing whatever people who commit fraud do with all that kind of information. 

Chris: So it's being used under the guise of what we're doing now, podcasting. If I'm doing too many ums or ahs, which I probably am in this podcast, and likewise with you, it could use your voice. Or if I want to change a word or two, it will do the transcription of the podcast and you can easily change the odd word. But likewise, you could possibly introduce whole fresh passages, a whole passage which you've never said before, in your own voice.

Chris: We can [00:21:00] have a look at this and test it out maybe as we look into these tools, but I was just having a look earlier and stumbled upon it. There's just so much out there at the moment. There's an explosion of AI products, and every day there just seems to be more and more coming out which is revolutionizing things.

Matt: The hologram? Was it a hologram or was it AI?

Chris: The Kanye West thing? Don't put me on the spot here - but I know what you're talking about. 

Matt: Not before he got divorced. 

Chris: I know what you're talking about. But I dunno how he did that. Yeah. 

Matt: Money, yes, is how he did it. But is that not...

Chris: I think we should ask the audience.

Chris: If they know, they can tell us.

Matt: Why would you do such a thing? It's weird. And it must have cost a fortune.

Chris: Robert Kardashian was one of O.J. Simpson's lawyers. 

Chris: Oh, it's a deepfake, basically. A deepfake is AI. 

Matt: I was thinking about deep fakes and one of your favorite topics: pornography. 

Chris: Don't be cheeky!

Matt: So [00:22:00] with deepfakes, that's what people are doing: they're putting images of celebrities onto other models and passing it off as them, or doing computer-generated versions, and that can damage careers, reputations, and other things.

Chris: Deepfakes, yeah. You can take people's faces and voices and do whatever you want with them, and this is where the ethics comes in. Is this stuff being policed and legislated properly? I think the answer at the moment is not really. Because yeah, if you want to bring back your dead relative, you do have an opportunity here, don't you?

Chris: And it's just, this is where it's all heading. So...

Matt: Although there are anti-revenge-porn laws, anti-revenge-porn laws, quite hard to say, and they're quite recent on the statute book. It's illegal in this country, for example, so I imagine that would extend to deepfakes. But that's something we could definitely have a look into.

Matt: Plus why not?

Chris: There was, at the time, a good article on The [00:23:00] Face about reanimating the dead, and whether that's something... cuz I think technology is moving that way; we can bring people back to life. And you see it in films; we've seen it in Star Wars.

Matt: I was just gonna say, yes. Haven't they brought back... Vincent Price? Was it Vincent Price? 

Chris: Oh, they definitely brought...

Matt: It wasn't Vincent Price? 

Chris: Carrie Fisher back, didn't they? 

Matt: Yeah. And the guy who played the leader, the manager, for want of a better term, of the Death Star, was like a famous Hammer horror actor.

Matt: They brought him back. 

Chris: Oh, Peter Cushing?

Matt: Yes, that's it. Not Vincent Price; they brought him back, didn't they? For a while. Yeah. So that's the thing. And you could get Brando; I think it was in Superman Returns, the terrible Brandon Routh Superman, which had Marlon Brando's face and voice reconstructed from recordings of the original Superman.

Matt: So you can do it. They've been doing that for a while. 

Chris: There's a company, Kaleida, that does digital resurrections. They [00:24:00] did Robert Kardashian's hologram. Yes. Yeah, this is just where it's heading.

Chris: And this is an introductory episode; it's just showing where it can go and the implications of all this stuff. But on a more basic level, going back to the more basic implications: I wrote a blog post, which is on the AI Superhero blog, using AI. Very simple to do. I asked ChatGPT to write a blog post. Google can detect AI, so I put it through an AI detector, and it came out a hundred percent AI. Then I used another tool called QuillBot to paraphrase it, to make it sound a bit more human. Then I put it back in the detector and it got a much lower percentage, so it could legitimately go out as a blog post, pass the Google detection, and rank high with SEO.

Chris: With any luck, you can use SEO tools, which are now powered by AI as [00:25:00] well. So you can literally spew out hundreds or thousands of blog posts generated by AI, and thousands of images. There's just gonna be thousands, if not millions, of pieces of AI-created content out there. We're gonna get to the stage where we don't know whether anything has been created by a human, possibly within five years.
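A rough sketch of the draft-detect-paraphrase loop described above: the generation step uses OpenAI's Python client (assuming the `openai` package and an API key), while `detect_ai_probability` and `paraphrase` are hypothetical placeholders standing in for an AI-content detector and a tool like QuillBot, whose real interfaces aren't shown here:

```python
# Sketch of the draft -> detect -> paraphrase -> re-detect workflow described above.
# The OpenAI call assumes the `openai` package (v1+) and an OPENAI_API_KEY.
# detect_ai_probability() and paraphrase() are HYPOTHETICAL placeholders standing
# in for a third-party AI-content detector and a paraphrasing tool such as QuillBot.
from openai import OpenAI

client = OpenAI()

def draft_blog_post(topic: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Write a short blog post about {topic}."}],
    )
    return response.choices[0].message.content

def detect_ai_probability(text: str) -> float:
    # HYPOTHETICAL placeholder: a real detector would score the text here.
    return 1.0  # pretend the draft is flagged as fully AI-written

def paraphrase(text: str) -> str:
    # HYPOTHETICAL placeholder: a real tool would reword this to sound more human.
    return text

post = draft_blog_post("how AI is changing education")
if detect_ai_probability(post) > 0.5:   # flagged as AI-written
    post = paraphrase(post)              # rewrite before publishing
print(post)
```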

Chris: This is just the level we're at, and I think in the next year we're gonna see the same with video. There's a company called Synthesia. I dunno if you've heard of that one? 

Matt: That's where you can see...

Chris: They can create avatars, they can create videos with avatars, which are AI presenters.

Chris: Basically, you can create whole videos with a presenter with a synthetic voice and a synthetic face. So you could have thousands of videos which are just AI-generated, with AI-generated scripts and AI-generated presenters. This podcast could just be... me and you could just be robots.

Chris: We could [00:26:00] just be AI - quite easy... 

Matt: Very Max Headroom, isn't it? 

Chris: We could be the last human podcast. We could have a future, in ten years' time, where we are struggling to know what is human and what is AI, as we're seeing on Twitter at the moment with comments and comment apps.

Chris: Which are AI comments? Which are bots and which are humans? We're struggling to tackle that, and it's gonna get harder and harder.

Matt: Very Max Headroom. So you've got to be careful. But then I still think there's an element where, as a species, not to get too general, but in every single person who's ever lived there's a creative element, so we still have people being creative. You still have people writing bad teenage poetry and books, so I don't think that can ever be fully replicated or replaced.

Matt: So not too panicky over that just yet. 

Chris: Have you seen, do you watch Black Mirror? 

Matt: [00:27:00] Black Mirror? No. 

Chris: Yeah, it's the Charlie Brooker drama. It had people's avatars, like digital versions of themselves, and that is starting to happen. So I was talking about Synthesia.

Chris: There are sites now coming out where you can create avatar versions of yourself, and there's a tool that trains on how you write. You might end up soon with an avatar of yourself that knows how to write like you.

Chris: It knows how to speak like you; it's very much a digital version of you. We're not that far away. We're talking about months.

Matt: Why would you want that though? Why would you want that? 

Chris: I think a lot of it is... so on a very basic level, there's a site called, I think it's Compose.AI.

Chris: Let me just get the right information. Yeah, Compose.AI. It can essentially write emails in your style. [00:28:00] So if you don't have the time or the energy to write emails and you just want to reply to people in your style, it can do it. So you say, 'say no to Matt', and it'll then write an email back in a polite way, in your style, which is more than one line.
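As a rough idea of how a tool like that might work under the hood, here is a generic sketch built on a chat model; it is not Compose.AI's actual API, it assumes the `openai` package (v1 or later) and an `OPENAI_API_KEY` environment variable, and the style sample, model name, and emails are invented for illustration:

```python
# Generic sketch of drafting a polite reply "in your style" with a chat model.
# This is NOT Compose.AI's API; it assumes the `openai` package (v1+) and an
# OPENAI_API_KEY environment variable, and the style sample below is invented.
from openai import OpenAI

client = OpenAI()

STYLE_SAMPLE = "Hi both, thanks for this - sounds great, let's get it booked in. Cheers, Chris"

def draft_reply(incoming_email: str, instruction: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Write email replies that match the tone of this sample: "
                        + STYLE_SAMPLE},
            {"role": "user",
             "content": f"Reply to this email. Instruction: {instruction}\n\n{incoming_email}"},
        ],
    )
    return response.choices[0].message.content

print(draft_reply("Fancy recording another episode on Friday?", "politely say no to Matt"))
```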

Matt: No, I still don't think that. But then, if there's an error or anything that goes wrong, you'd be held liable for it because it's come from your address, and you can't say, oh, the bot did it for me. So you'd still be liable for that. It's like if you leave your computer unlocked and someone calls your manager a bellend: you're still stuck with that 'cause it came from your address. 

Chris: Yeah, the liability is interesting, isn't it? You may just leave the bots running, but you are ultimately responsible for them. But are you responsible for what your digital avatar does? Or is it the [00:29:00] company that created the avatar, which you are paying X pounds a month for,

Chris: that's responsible? It's interesting, isn't it? 

Matt: I'd say you are liable, you personally. But I still don't get why anyone would want a digital avatar. Why would they want a copy of themselves? What is it, Second World or something like that, where they wanted to have everybody online?

Matt: Kind of what I think Facebook wanted: everyone's online, like having a digital version of the world. But why would you want a simulacrum of yourself online, being awkward and unpopular a lot? 

Chris: There are uses, let's say you're a celebrity with a Twitter account and you can't be bothered or you don't have the time to tweet all the time from that account.

Chris: But you want someone to tweet in your style as if they are you, with your thoughts, knowing generally all about you...

Matt: That's what you use interns for. 

Chris: Yeah, but the interns aren't them, are they? And they get it wrong. So that could be one use. [00:30:00] Or if you want to create lots of videos which you want to present and front, but you don't have the time or the energy or the inclination, you can just have your avatar do it.

Matt: I'm not convinced. Personally, I don't think it'd be something that I would use. But then I am a Luddite. So...

Chris: Yeah, your own digital version of yourself. I think it's coming. And there are implications with VR as well, and whether a second version of yourself will exist in virtual reality, which is very much Mark Zuckerberg's area, Meta and all that.

Chris: But that's a different area from what we're talking about. For now, as I said, I asked ChatGPT to come up with a plan, and I think we've basically gone through a lot of those areas. We can definitely deep dive in different episodes and look at different tools. So, in conclusion, it says: summarize the main [00:31:00] points of the episode.

Chris: Encourage listeners to continue the conversation by providing feedback and suggesting future topics, such as ChatGPT. Okay. So the main points, from what I can see, are that we've talked briefly about AI: what it is, a bit about the history, where it's at, and current developments. And we've gone on a bit of a meandering journey.

Chris: We've gone on a bit of a meandering journey through the different tools out there, with quite a lot of points on ethics and philosophy and how it affects different industries. But I guess we're trying to think of future topics and where we head next. For you, you talk about education and where you are in your field.

Chris: What for you do you think is important to get to grips with in this technology? 

Matt: At the moment, I know that you can use AI as a revision tool, so I'd be interested in seeing how that can help students across the ability range, for want of a better phrase, with their revision or [00:32:00] with core skills.

Matt: So that'd be interesting to see. Or AI in plagiarism detection would be interesting at university level. So I think there's a lot in education, even using ChatGPT or something; I've seen that you can write lesson plans with that.

Matt: So a teacher could fundamentally get a whole lesson out of that. It'd be interesting to see whether we think that's ethical, or whether it includes things such as differentiation, which is a large part of a teacher's job. We'll probably look at a different topic each week, and then you tell me about it and I'll tell you why I don't like it.

Matt: And I'll worry about how likely it is that Terminators are gonna come. 

Chris: Yeah. And affect our kids and our future generations. Yeah. I think.

Matt: If we have them. 

Chris: Exactly. Yeah, I think there's a lot there to go on. I hope this was of interest for people to listen to as a bit of an introduction.

Chris: We're gonna look at this in more depth. We're also gonna try and look at the news every week and what's been happening in [00:33:00] the world of AI in every episode, to try and keep on top of things, and be across the blog. But please keep in touch with us. If you've got any comments to make, comment on the blog or on the social channels: TikTok, Twitter, Instagram, Facebook, and anywhere else we may happen to be...

Matt: How would they find us on TikTok, Instagram, Twitter, and Facebook, or wherever we may be? What is our handle? How would they find us?

Chris: It is AI Superhero, 'cause we are superheroes, Matt. AISuperhero.org is the website. And sign up to the newsletter and we'll keep you updated with all the news...

Matt: Which isn't written by AI. 

Chris: Yeah, it is not. We may try a bit of experimentation now and then, but it will be written with a lot of sweat and endeavor.

Chris: So we'll bring you the very best news and information on AI, and we'll look forward to hopefully doing this again [00:34:00] and hearing your views. But the main thing here is we're trying to help and encourage people to use these tools and get involved, but also be wary of all the implications.

Chris: I do think we should be diving in and using these tools, because it's gonna explode, it is exploding, and there's a lot to make of these tools if we use them properly and make sense of them. So we'll look out for that. So Matt, thank you for joining me on this introductory podcast.

Chris: And I look forward to speaking later, and hope you join us. 

Matt: I'm looking forward to learning. Knowledge is power. 

Chris: Libraries gave us power.