BIG QUESTIONS FROM SMALL MINDS

Ep 1 - Artificial Intelligence: Machines Behaving Badly


Download on Apple | Spotify | Amazon Music

About The Guest: Professor Toby Walsh

Professor Toby Walsh is an expert in artificial intelligence and a professor at the University of New South Wales. He holds a master’s degree in theoretical physics and mathematics, as well as a PhD in artificial intelligence.

With research positions held in various countries, including Australia, England, Ireland, Italy, France, Germany, Scotland, and Sweden, Professor Walsh is a highly respected figure in the field of AI.

He has published three books on AI and is set to release his fourth book on Generative AI later this year.

Here’s a link to his books

Summary of Episode:

In this episode, we discuss the definition of AI, the difference between artificial intelligence and machine learning, the potential for AI to revolutionize space travel, the Turing test, the possibility of AI developing consciousness, the risks associated with AI in warfare, and the importance of privacy policies in the digital age.

Professor Walsh emphasizes the need for regulation and ethical considerations in the development and use of AI.

SHOW NOTES

Exploring the Future of Artificial Intelligence

Artificial intelligence (AI) is a topic that has captivated the minds of both young and old alike. It is a field that holds immense potential and promises to revolutionize various aspects of our lives. But what exactly is AI? How does it work? And what does the future hold for this rapidly advancing technology? In this thought-provoking episode of “Big Questions from Small Minds,” we delve into these questions and more with Professor Toby Walsh, an esteemed expert in the field of AI.

Unraveling the Mystery of Artificial Intelligence

To begin our exploration, we must first understand what artificial intelligence truly means. According to Professor Walsh, AI is about getting a computer to perform tasks that require intelligence, such as perceiving and understanding the world, reasoning, taking actions, and learning. It aims to simulate the cognitive abilities of humans and replicate their problem-solving skills. As Professor Walsh explains, “We’re trying to get machines to be smarter by teaching them, and that’s what we call machine learning.”

The Turing Test: A Measure of Intelligence

One of the most famous concepts in the field of AI is the Turing Test, proposed by the brilliant mathematician and computer scientist Alan Turing. The Turing Test is a way to determine whether a machine can exhibit intelligent behavior indistinguishable from that of a human. Professor Walsh explains, “If you couldn’t tell [the machine] apart from a human in a conversation, then you might as well say it’s thinking.”

The Quest for Consciousness in Machines

While AI has made significant strides in narrow domains, such as playing chess or translating languages, the question of whether machines can develop consciousness remains a mystery. Professor Walsh highlights the complexity of consciousness, stating, “We think it happens in our brains, but we have almost no scientific understanding of what it is.” The emergence of consciousness in machines is a profound scientific question that has yet to be answered definitively.

The Ethical Dilemmas of AI

As AI continues to advance, it raises ethical concerns that must be addressed. Professor Walsh warns of the unintended consequences of AI, stating, “The sort of unintended consequences of somewhat stupid artificial intelligence that’s stubbornly doing the same thing” can pose risks. He cites the example of an AI programmed to build paperclips, which could potentially lead to the entire planet being transformed into a giant paperclip factory. It is crucial to consider the potential negative outcomes and ensure that AI is developed responsibly.

The Impact of AI on Space Exploration

AI has the potential to revolutionize space travel by enabling the use of robots instead of humans. Professor Walsh explains, “It’s much simpler if we can send robots into space.” Robots have already explored other planets in our solar system, and their capabilities will continue to expand. However, this also raises questions about the future of human space exploration and the role of AI in shaping our understanding of the universe.

The Future of AI and Human Intelligence

When contemplating the future of AI, Professor Walsh emphasizes that we still have a long way to go to match the complexity of the human brain. While AI can excel in narrow domains, our adaptability, creativity, and emotional intelligence set us apart. He states, “Our superpower is not our intelligence; it’s our ability to empathize with each other.” The true potential of AI lies in complementing human intelligence rather than replacing it.

The Risks of Autonomous Vehicles

Autonomous vehicles, such as self-driving cars, hold great promise for the future of transportation. However, they also raise questions about responsibility in the event of accidents. Professor Walsh highlights the need to distinguish between the responsibility of the AI system and the human operator. He states, “The car is not conscious, it’s not aware, it can’t be punished.” Determining liability in accidents involving autonomous vehicles is a complex issue that requires careful consideration.

The Dark Side of AI: Weaponization and Warfare

One of the most concerning aspects of AI is its potential for weaponization. Professor Walsh warns of the dangers of using AI in warfare, stating, “We will wake up, and it will look like one of those Hollywood movies. It will look like the Terminator.” The ability to automate and scale warfare through AI poses significant risks, and it is crucial to regulate and control the use of AI in military applications to prevent catastrophic consequences.

Privacy and the Role of AI

The proliferation of AI has raised concerns about privacy and data security. Professor Walsh acknowledges the challenges posed by the collection and use of personal data by AI systems. He advocates for greater regulation and transparency, stating, “We need more regulation where every website gives you levels of privacy, and it’s one click to change them.” Protecting privacy in the digital age requires a balance between the benefits of AI and the rights of individuals.

Embracing Technology for a Better Future

Despite the challenges and risks associated with AI, Professor Walsh remains optimistic about the future. He believes that embracing technology, including AI, is our best hope for addressing the pressing issues facing humanity. He encourages young minds to pursue careers in science and technology, emphasizing the need for diverse perspectives to shape the future of AI. Professor Walsh concludes, “The benefits that technology brings, better sanitation, better agriculture, better medicine, that’s our only hope for dealing with the wicked problems that now face us.”

Conclusion: Navigating the Path Ahead

As we navigate the uncharted territory of AI, it is essential to approach its development and implementation with caution and foresight. The potential of AI is vast, but so are the ethical, social, and economic implications. By fostering a multidisciplinary approach and engaging in thoughtful dialogue, we can shape the future of AI in a way that benefits humanity. The journey ahead may be challenging, but with the right mindset and collective effort, we can harness the power of AI to create a brighter and more inclusive future for all.

TRANSCRIPT

Hello and welcome to Big Questions from Small Minds, the podcast where we ask professors questions that seem too massive, complicated or even stupid. We also have lots of intelligent questions. No, they’re not ours. They’re questions from actual small minds, kids. Today’s episode is about artificial intelligence. We’re talking to Professor Toby Walsh from the University of New South Wales. He’s a giant of knowledge about AI.

00:29

He has a master’s in both theoretical physics and mathematics and a PhD in artificial intelligence. Oh, you know what? While I was preparing for this show, I used an, um, AI joke generator. ’Cause I thought, it’s making everyone’s life easier, why can’t I make my life easier? Absolutely. How does it work? Do you type in what you want to joke about? You type whatever words in and it spits out a joke. So I put in AI and it came out like, ah, oh man. I mean, I’m laughing.

00:59

If this takes over the world, it will be no laughing matter. No, I see what you did there. Professor Toby Walsh has held research positions in Australia, England, Ireland, Italy, France, Germany, Scotland and Sweden. In 2020, he was the recipient of the Australian Laureate Fellowship. Toby has published three books on AI, all super accessible. And his fourth book, on generative AI, is due out later this year.

01:27

You know it’s commonly said, artificial intelligence is no match for natural stupidity. People, Phil and I are going to put that to the test tonight. Absolutely. How does your brain work? What will the world be like in a hundred years if we don’t fix climate change? Why do I have to sleep? Can robots have emotions? Big questions from Small Minds. Toby, welcome to Big Questions from Small Minds. It’s a pleasure to be here. Yeah, Toby.

01:55

I’m often pretending to be smart. I’m quoting books. I’m trying to impress my friends with my little tidbits of knowledge I get off the internet. Yeah. So you’re pretending to be clever. Always. But does that make me artificially intelligent? Or does it make him an idiot? It makes you naturally intelligent. That’s what we’re trying to achieve with machines, with computers. We’re trying to.

02:16

simulate what you do with computers. Can you tell us what exactly is artificial intelligence? It’s trying to get a computer to do the things that when you do them, we say they require intelligence. So that’s perceiving the world, being able to understand the things you see in the world, understand the things you hear in the world, and then reasoning about those things, and then taking actions and learning.

02:41

Repeat after me. Le rayon vert. So much of your intelligence is things that you learn. When you were born, you couldn’t read, you couldn’t write. Most of the things that you can do now that require intelligence, play a game of chess or do your multiplication tables, those were things that you learned. That’s one way we’re trying to get machines to be smarter: by them learning. And that’s what we call machine learning. So is there a functional difference between what people think is artificial intelligence

03:11

and what the phrase machine learning means. Machine learning is a subfield of artificial intelligence. So not everything you do is about learning, but a big component is. A lot of the successes that you hear about today are about machine learning, but it’s not the only thing in artificial intelligence. There’s also getting computers to understand spoken language. Hello. Fuyu, das unhoch. So desu ne. Du musst jetzt machen, was ich dir sage. Getting computers to reason, getting computers to do mathematics.

03:40

Those are things that sometimes we literally program the computer to do rather than try and teach the computer to do them. I’ve heard there are experiments using AI to write books, make films, create artworks, do all these creative things. So my thing is, what’s next for AI? Does it get a job in a cafe? OK, I’m going to jump to a kid’s question. In books and movies, why is AI always evil?

04:10

It is always evil, it’s somewhat scary, it’s got a little red glint in its eye. Also, it’s sentient. So it’s typically trying to take over the planet.

04:27

Certainly the AI that we build today is nothing like that at all. It has no desires of its own. It does only what we ask it to. That’s typically the problem. We ask it to do something and it stubbornly, single-mindedly just does that. Anyone who’s frustratingly tried to debug a program or get a computer to do something knows how literally minded a computer is.

04:57

And not that it’s going to be a Terminator robot that’s got a desire to take over the planet, but the sort of unintended consequences of somewhat stupid artificial intelligence that’s stubbornly doing the same thing. Ah, like the classic analogy of the paperclip, turning everything into paperclips. Yes, the paperclip factory. You know, one of the problems if we gave an artificial intelligence the goal of building paperclips. And it was really good at building paperclips. Eventually…

05:26

It would build these mega factories on every continent, and eventually it would turn the whole planet into one ginormous paperclip factory.

05:37

So stupid. Good luck with that, people. Okay, kids’ question now. Will AI revolutionize space travel? Yeah, it will revolutionize space travel. Lift off. The bad news there, because I suspect you want to do space travel, is that it’s going to mean we’ll need fewer humans doing space travel. It’s really expensive, difficult and dangerous to get humans into space.

06:06

So it’s much simpler if we can send robots into space. So yes, it will revolutionize it. Indeed, if you think about it, we haven’t set foot on any of the other planets in the solar system, but robots have. Yeah, Mars is the only planet that’s entirely populated by robots. We build an AI, and we put it in a rocket, and we shoot it into space. It’s searching for other intelligent life. What if another intelligent planet somewhere has shot AI into space, and they meet each other?

06:34

and conclude that everybody else apart from them are idiots, and start a whole new colony of only AI-based robots. I suspect that’s how we will explore beyond our own solar system, by sending robots out. That’s an argument perhaps for why we might be alone in the universe, because if there was other intelligent life out there, they would almost surely have done that. We haven’t found them yet, but the universe is still very young. They could still be on the way. Well, as always, watch the skies.

07:02

Okay, I’m going to jump to a kid’s question. Could you tell us a bit about the Turing test and how it can determine whether AI is sentient or whether AI can think like a human being? That’s a fantastic question. Of course, Alan Turing is probably the greatest mind of the last century. He’s the founder of computing, and he also wrote the very first scientific paper about artificial intelligence. Wow. In which he proposed this idea.

07:32

of what he called the imitation game. But it’s now become known after him as the Turing test. And he asked this question, a really fundamental question if you’re trying to build artificial intelligence: how will we know when we’ve succeeded? Cool. It’s always good to have an end point, yeah. It’s a bit of a tough question to say, is this computer thinking? We don’t really know what thinking is. So he said, well, let me propose a simpler test that a computer could pass. And the test was really…

08:00

Could it pretend to be us? Could you sit down with it and have a conversation through a screen or whatever? And if you couldn’t tell it apart from a human, then you might as well say it’s thinking. Wow, that’s huge, that’s amazing. And people will also talk about the Turing test for a particular task. Oh, so there’s like different levels of it. Yeah, so if you’re gonna build a self-driving car, you’ve gotta be able to pick things out, you’ve gotta be able to pick out bicycles and buses.

08:29

and pedestrians. And so that’s a task that requires some intelligence. You’ve got to be able to recognise objects. Yeah, really. So do we have to worry that people are going to use that technology in internet bots and we’ll no longer get asked those questions, are you a robot? Funnily, those CAPTCHAs, those are in some sense reverse Turing tests. We’re actually asking a human, not a robot, to check that they are a human. What would be the Toby Walsh test?

08:57

Ah, well, funnily enough, I do have a Toby Walsh test. But the problem is that it’s actually quite easy to spoof the Turing test. And indeed, a computer has already spoofed the Turing test. Hello? By pretending to be, strangely enough, a Ukrainian 13-year-old boy. Could I borrow your phone, please? That’s very specific, yes. Yeah, it was. So it’s quite easy to pretend to be who you’re not. Ah.

09:26

So I actually proposed what I call the meta-Turing test, which is: it takes intelligence to spot intelligence. All right. Is that a little bit like Dr. Seuss’s watcher-watcher-watcher? Yes. So the idea is you get all the AIs and the humans together in a big pool and they all have to speak with each other and work out which are the humans and which aren’t. Hello, my friend. Hello. How are you? The computer doesn’t pass the meta-Turing test unless it does both: people think it’s human,

09:55

and it can spot the humans from the other computers. It’s a fantastic idea. That’s like a game of hit the ball. Takes a lot of intelligence to spot fake intelligence. I know, I’ve been getting away with it for years. I haven’t. Oh well. If a self-driving car crashes and kills someone, who is responsible? Well, that is an interesting question.

10:24

We know one thing for sure, it’s not the robot. It’s not the car, because the car is not conscious, it’s not aware, it can’t be punished. You can turn it off and it won’t care. There have been a number of accidents where people in Teslas have been killed, and it’s not clear whether the self-driving computer was responsible or not. There was the accident that happened in Florida, where Joshua Brown was in a Tesla that drove into a truck. It continued to drive at full speed into the bottom of the truck.

10:55

Not a great way to go. But equally there, Joshua Brown was supposedly watching a movie. Harry Potter, it was claimed. It’s a terrible thing to die in front of, isn’t it? Arguably, it’s a good movie to go out on. He was supposed to be paying attention, ready to take back control. That’s part of the problem. We’re very trusting, and if it works for a few minutes, then we think, oh, it’s going to work always. At some point, the technology will be better than human drivers, because most accidents, they’re not caused by mechanical failure, they’re not caused by…

11:24

trees falling onto the road; they’re caused by human stupidity. It used to be that up till the age of 30, your major cause of death was that you were caught up in a car accident. Yeah, which is quite remarkable when you think of all the ways that a human being can die. Yes. Three people are stung by octopuses, five people are crushed by Coke machines, one person sipped by a comic. We don’t have to go through them all. How long until AI takes over the world? The previous book I wrote was titled

11:53

2062. The year 2062: I surveyed 300 of my colleagues, experts in AI, as to when they thought AI would be as capable as humans, and the average answer they gave was 2062. Right, okay. There was huge variability in their answers. It’s not going to happen on August the 3rd at 2pm in 2062. We don’t know. You heard it here first. August the 3rd, 2pm, lock your doors. Or open them.

12:22

Depending on how you feel. I mean, if you’ve got AI doors, it doesn’t matter. But what was interesting about the survey was that no one said it was going to take five or ten years. We’ve still got a lot of things we need to do. But equally, no one was saying it was going to take thousands of years. I hope it’s not next Tuesday. So that’s something that is going to happen probably in your lifetime. And if I’m lucky, maybe in my lifetime, but almost certainly in your lifetime. So how old

12:49

would AI currently be in terms of a human level? Well, that’s the good news. We’ve got a long way to go to match human intelligence. There is nothing that matches the complexity of the human brain. It is the most complex system in the universe. The billions of neurons, the trillions of connections, the synapses between those neurons. It is an amazing thing. And so we’ve still got a big mountain to climb to match that. But on narrow domains.

13:17

We can already exceed human performance: playing chess, reading x-rays, translating. Konnichiwa. Hello. But those are only narrow activities. If you look at our true strengths, our adaptability, our creativity, then we’re not really even at the level of a two- or three-year-old. That being the case, how come every time I buy something online, I always get ads for the thing that I’ve just bought, after I bought it and not before I need it?

13:47

Whilst it can be pretty good at making those predictions, it lacks common sense. It doesn’t realise that when you’ve bought one toilet seat, you actually probably don’t want to buy a second toilet seat. Well, I mean, I do have five on standby. What are you doing in your toilet? Since we had the COVID pandemic and the toilet paper shortages, I thought, what is the extension of toilet paper? Toilet seats. So now I’m doubly prepared for any toilet seat shortages. Am I allowed to say when the s*** hits the fan?

14:17

Okay, I’m going to jump to a kid’s question. Can robots have emotions? Well, in a technical sense, they don’t have emotions. Your emotions are biochemical. They’re determined in part by your chemistry, and robots don’t have any of that chemistry.

14:47

So we could give them fake emotions, but would that be the same as real emotions? And would we care as much? I don’t think we would. It could be argued that they’re learning, as you say, learning on humans. So they’re learning to push our buttons. They are. And that is somewhat worrying if they learn how to manipulate us and get us to do things that aren’t necessarily in our own best interest.

15:16

At the end of the day, our superpower is not our intelligence. It’s our ability to empathize with each other. It’s not our cognitive intelligence, it’s our emotional intelligence. Our ability to come together and work in groups. Our ability to understand how other people think about things. Giving the keys to that kingdom away sounds dangerous. Yes, it is dangerous. I think we have to be somewhat concerned that it’s going to be attractive to do that, but equally there are…

15:44

There are lots of risks associated. You’re so sweet. You’re giving me a toothache. Can an artificial intelligence develop consciousness? Well, famously, a Google engineer thought the latest super-duper chatbot, LaMDA, that Google had developed was conscious, was sentient. Is that a case of Dr. Google Frankenstein falling in love with his chatbot?

16:13

It was a little bit, yeah. The consensus amongst all of my colleagues was that this is not in any way conscious at all. There’s no sentience there at all. Yeah, that’s almost the same level as when you see people out there with billboards and silver hats on telling you aliens are coming and abducting them. Or is he just before his time, and there’ll be more people like that coming forward in the future? We are going to have to face these difficult, tricky questions. The problem is we don’t actually have a way of measuring consciousness. There’s no

16:43

There’s no physical device we have that we can put on a person’s head. Are you ready? No! And say, oh, he’s very conscious. Put it on your dog.

16:58

Well yes, your dog’s slightly conscious. I love it! Yeah, put it on your goldfish.

17:17

Yeah, well, apparently they’ve done tests on goldfish, and goldfish don’t remember things for more than a minute. Right, somewhere in the seconds. Well, there are benefits to that. Yes: if your tank takes more than a minute to swim around, your tank is essentially infinite, so you’re not caged in. Excellent. What’s round the corner? I’ll get round the corner. What’s round the next corner? What an adventure. Yeah, a never-ending adventure. But we are going to have to worry at some point, will machines become conscious? And

17:46

We have absolutely no idea. It’s remarkable. This is one of the deepest, most profound scientific questions you could ask today. We know how the universe began. We know from the very first millisecond of the universe, 13.8 billion years ago, and we know how the universe is gonna end. We know how the solar system came into existence. We know how the Earth came into existence. We know how life evolved. We know about DNA. We know all of this stuff about us.

18:16

And yet the one thing about being alive, the thing that you experienced when you opened your eyes this morning. You woke up, and you didn’t say “I’m intelligent”; you said, “I’m conscious again. I’m awake.” That, we have almost no scientific understanding of what it is. We think it happens in our brains. If any kids are out there thinking about which science to go into, there’s a broad field out there for consciousness. All we know is it’s in the brain. So it’s an interesting question to say, well,

18:45

If we’re going to make intelligent machines, and humans are intelligent, are they also going to at some point become conscious as well? The big question is, if we make an intelligent machine that can learn from its own experiences, and we put that in a big vat of protons and neutrons and plug some electricity into it, do we think that it will develop consciousness at some point down the line? That we don’t know. That’s going to be the interesting experiment.

19:12

At some point, are the machines going to spontaneously become conscious? Is it some sort of emergent phenomenon that comes out of enough complexity? It seems to be that the more intelligent you are, the more conscious you are. But we should always be careful of confusing correlation with causation; those two things happen together. It’s entirely plausible that consciousness is something that’s restricted to biology. It’s not something that you get in silicon, and the intelligence we build in machines never becomes conscious. It’s what the Australian philosopher David Chalmers called

19:41

zombie intelligence. So, you know, really smart, but lacking consciousness. The apocalypse has now been upgraded to robot zombies. Ha ha ha ha. Yeah.

19:56

What happens if AI becomes so advanced that we can’t think freely for ourselves? That’s a great question, because we are lazy. We do tend to outsource things to our machines. To be fair, I tried to outsource writing some jokes for this show tonight. I went to an AI joke generator and, whew. Dismal. Well, did you hear about the computer that walked into the bar? No.

20:25

Because it doesn’t work! But so if you’re not learning, then obviously your hippocampus is not getting bigger. In fact, they’ve done studies; they’ve taken London taxi drivers. So if you become a London cabbie, you have to do a test. Go to the left. It’s called the Knowledge: you spend a year learning how to get from A to B anywhere in London. Go to the right. You have to know London like the back of your hand.

20:53

You having a laugh? And they’ve done brain studies, fMRI studies, where they look at the inside of people’s brains. And the hippocampus, the part of the brain which is involved in spatial reasoning and spatial memory, gets 15% bigger in London cabbies once you’ve done the Knowledge. It physically increases the size of the brain. That’s why they all wear those little flat caps, to hide that bump in the back of their head.

21:22

What should a safe privacy policy look like? And what are some things that I should look out for when signing up for new websites? That’s a fantastic question. The problem today is that you really only have one choice, which is if you don’t like the T’s and C’s, then you can’t use the service at all. In fact, that’s why I think we need more regulation, where, for example, every website gives you levels of privacy, and it’s one click to change them. You may have to pay for the service now, but…

21:51

We won’t keep any information other than perhaps your login so you can access the service. But it seems like a lot of people I know would just go straight for: I want to have it for free, and I don’t really care about what happens to my data. Yeah, the fundamental problem here is that humans can be hacked. We can hack humans to do things that aren’t in their interests. Yeah, I go to work all the time. We can get you to buy products that you don’t want. We can get you to vote for politicians who aren’t going to improve your life. And we can do that, unfortunately, with the data that we learn

22:21

from your web browsing, and artificial intelligence to then personalize that and push your buttons. Ooh. The prices of almost all products now are directed towards what the suppliers think you can afford to pay. Oh, really? Now, time for a word from our sponsor. Thinking about a new rug? I know you are. We strongly regulate conventional media, old-fashioned media, right? TV, radio, in terms of political advertising.

22:49

For good reason. We don’t want the people with the most money or the media barons to have the most influence on our democracy. And yet there’s this new social media, in some ways more powerful, more persuasive than the old-fashioned media. It used to be that if a politician wanted to speak to the electorate, they had to broadcast what they wanted to say. Everyone would see the untruths that they were saying, or the truths that they were saying. But now you can narrowcast.

23:15

You can pick out individual voters and tell them what will appeal to them. And that may not even be true. And no one else gets to see it. So we don’t know how you’re being lied to. That is something where I think we probably need, ultimately, more regulation. I can’t understand why more people haven’t logged out of Facebook. I logged out of Facebook five years ago and I’m never logging back in. For anyone that follows us on Facebook, please ignore that. You know, there’s the paradox, right? Because Facebook is useful to discover things like your podcast.

23:45

But equally, the way that Facebook abuses all the information it gains on you and sells it to the lowest common denominator is not good. It’s not helpful for our democracy. Well, while we’re talking about evil: AI is being used for far more evil things than Facebook. It’s being developed for war. That is quite scary. That is very scary. You can see it happening today in Ukraine. You can see it in Syria, Libya, and various other places where…

24:15

conflicts are taking place. The problem, and the beauty, of computers is that if you can get them to do something once, you can get them to do it a thousand times. You can just put a loop around it and it can do it repeatedly. So if you can get a computer to kill one person, you can get a computer to get a thousand drones to kill a thousand people. 10,000 drones to kill 10,000 people. Ultimately, they will let us industrialize and scale warfare in a way that’s matched only by the other

24:45

weapons of mass destruction. The first atomic bomb was dropped on Hiroshima. And that is a really dangerous phenomenon, and something that I’m very passionate about. I’ve surprisingly found myself at the United Nations; I never thought I’d end up being invited to speak at the United Nations and warn of the risks. We’ve regulated chemical weapons, we’ve regulated biological weapons; even nuclear weapons we’ve regulated to a certain extent.

25:13

And this is another technology where I think we have to be very careful. Machines are not something that can be held accountable. They don’t have emotions, they don’t have empathy, and they can’t be punished. And so we should be very careful not to hand over killing to machines. We will wake up and it will look like one of those Hollywood movies. It will look like The Terminator. And that’s a world we can avoid if we choose. And it’s entirely up to us.

25:42

And so it’s been an interesting journey to go and speak to the diplomats at the United Nations, and there are ongoing discussions about it. I’m hopeful that within a few years we will have decided, as with these other technologies, to regulate it. What do you decide to wear when you step into the UN? I assume, to make it look like you’re from the future, you wear an entirely silver jumpsuit. It is a very strange place. I can tell a funny story.

26:12

There was an AI documentary being made and they said, oh, we want to film you in the UN. And I said, I don’t think you can just turn up at the UN. I think you need to get special permission. You can’t just turn up with a film crew. At any rate, somehow they blagged their way in and got there before I arrived. So I turned up and someone said, your film crew’s here. I was the only person who had a film crew.

26:33

So everyone was giving me lots of attention because I was this person with a film crew, so I must be really important. Oh wow, you turned up with an entourage. An entourage, you know. There was a bit where all the flags were, and there was this grass with big signs saying don’t walk on the grass. And they said, we want to film you walking purposefully up between these flags. But there’s a sign saying don’t walk on the grass. They said, ah, but you’re with a film crew. You can do anything. But the reassuring part of the story is that the world is reassuringly flat:

27:02

a random scientist like myself can get invited into the United Nations. I remember I had a meeting with the number two at the United Nations, who said, tell me what your concerns are. I thought, well, this is fantastic. The world is relatively flat if a random scientist like myself can end up speaking to the person who’s overall in charge of the world’s disarmament. Yeah, that’s quite powerful. Yeah. So, looking forward to the future.

27:29

Are you hopeful about the future of AI? I’m optimistic in the long term, but I’m pessimistic in the short term. It’s a very bumpy road. The world is back at war in Europe. We’re going through a period of economic disruption. And of course, I haven’t even mentioned the greatest existential risk, which is the climate emergency, right? But the only hope we have is by embracing technology. And artificial intelligence is part of that technological solution. And so I’m confident.

27:59

we will deal with these massive great challenges. And what would you say to those kids willing to take up the challenge? Well, first of all, I’d apologize, because every generation up to ours has inherited a better world from its parents. But then I’d encourage them to embrace technology and to embrace science, because the benefits that technology brings, better sanitation, better agriculture, better medicine, are our only hope for dealing with the wicked problems that now face us.

28:27

And specifically for kids wanting to do AI, what should they look towards doing? If you’re technically minded, then there’s nothing better than studying more maths. No, that’s the worst answer ever. But if you’re not technically minded, that’s fine, because the dirty secret is that computers are programming themselves. That’s what machine learning is. So there’s plentiful opportunity for people of all different backgrounds. You need people with critical thought. You need philosophers. You need sociologists.

28:56

political scientists to help us think through how we allow this technology into our lives. There’s even hope for us, a couple of dorky dads. Sooner or later, someone’s going to need some AI advice on how to make a great joke that everyone groans at, and we’ll be there to help out. Dad jokes unite. Dad-eye, that’s what that is. Dad intelligence. If anyone listening is interested in finding out more, Toby’s book is called Machines Behaving Badly. It is written in a very accessible way.

29:25

It is extremely informative. All ages can read it. My son read some of it and loved it. He’s 10. He’s a very clever kid. He’s much better than me. I can’t recommend it enough. Thank you. The reason I wrote the book was to try and get everyone to start thinking about this future, because it’s going to be in all of our lives. Links to everything we’ve talked about, and to Toby’s book, can be found at smorminds.au. Keep curious, people, and keep asking the big questions. Big Questions from Small Minds.

29:55

That was awesome!
