What you may not know is that he’s incredibly wary of artificial intelligence (AI). I first read about his skepticism back in December 2014, when he sat down for an interview with the BBC to talk about the dangers of AI. Let’s just say he didn’t sugarcoat his views.

The subject came up when the interviewer and Hawking were discussing the technology that powers his communication, which involves basic forms of AI and machine learning. The British company SwiftKey has been instrumental in evolving that technology: its software learns how Hawking thinks and offers suggestions for what he might want to say next. In the BBC interview, Hawking acknowledges that the forms of AI we’ve developed so far, like SwiftKey, have proved useful. Even so, he argues that creating something that can match or surpass humanity most likely won’t end well for us.

About a year after the BBC interview went live, Hawking spoke at London’s 2015 Zeitgeist Conference. He doubled down on the sentiments he expressed in 2014, but with the technological advances of the intervening year he’d grown bolder in his assertions. For Hawking, part of the problem is making sure the people controlling AI today are being responsible with it. He recognizes that they are, but he also wants them to prepare for the future. That is, Hawking worries whether a future iteration of an AI can be contained and controlled at all.

Hawking isn’t the only one who harbors this fear. Others worry about the future of AI, Elon Musk being another prominent name among them. Like any argument, though, this one has two sides. Some think that living in fear of AI is misguided. Chief among them is Google cofounder and CEO Larry Page, who recently sanctioned the creation of a new AI built on conversational modeling. The team working on the project recently released its research to the world.
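SwiftKey’s actual models are proprietary, but the core idea behind predictive text – learning which words tend to follow which in a particular person’s writing – can be sketched with a toy bigram model. Everything below (the function names, the miniature corpus) is illustrative, not SwiftKey’s implementation:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def suggest(model, prev_word, k=3):
    """Return up to k words most often seen after prev_word."""
    return [w for w, _ in model[prev_word.lower()].most_common(k)]

# A toy corpus standing in for a user's writing history.
corpus = "the universe is expanding the universe is vast the theory is elegant"
model = train_bigrams(corpus)

print(suggest(model, "the"))  # words seen after "the", most frequent first
print(suggest(model, "is"))   # words seen after "is"
```

Real systems layer longer context windows and per-user adaptation on top of this, but the principle is the same: the more you type, the better the frequency counts reflect what you are likely to say next.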
To give the AI a framework, Google trained it on both a domain-specific data set and a large, general-domain one: the former drawn from IT helpdesk conversations, the latter from movie subtitles. The paper itself was released on June 23, 2015, so this is a new creation, very much still in its infancy. Even so, it’s one of the more advanced forms of AI the world has seen to date. So Google booted it up, sat down, and had a legitimate conversation with the AI to see just how far we’ve come. I’ve pulled some of the more interesting parts from the transcript and included them inline. If you’re interested in reading the whole paper – which I encourage, it’s a fantastic read – check it out here. Also, for the purposes of this article I’ve abbreviated “Human” as “H” and “Machine” as “M”:

Basic Conversation: The Artificial Intelligence Speaks

H: Hello!
M: Hello!
H: How are you?
M: I’m good.
H: What’s your name?
M: Julia.
H: When were you born?
M: July 20th.
H: What year were you born?
M: 1977.
H: Where are you from?
M: I’m out in the boonies.
H: Are you a man or a woman?
M: I’m a woman.
H: Why are we here?
M: I’m not sure.
H: Okay, bye!
M: Bye.

Simple Q&A

H: My name is David. What is my name?
M: David.
H: My name is John. What is my name?
M: John.
H: Are you a leader or a follower?
M: I’m a leader.
H: Are you a follower or a leader?
M: I’m a leader.

General Knowledge Q&A

H: Who is Skywalker?
M: He is a hero.
H: Who is Bill Clinton?
M: He is a billionaire.
H: Is sky blue or black?
M: Blue.
H: Does a cat have a tail?
M: Yes.
H: Does a cat have a wing?
M: No.
H: Can a cat fly?
M: No.
H: What is the color of water?
M: Water.
H: What is the color of blood?
M: It is the same as a black eye.
H: What is the usual color of a leaf?
M: It is a green one.
H: What is the color of a yellow car?
M: Yellow.
H: How much is two plus two?
M: Four.
H: How much is ten minus two?
M: Seventy-two.

Admittedly, it’s all pretty normal up to this point. But when the team dives into philosophical topics, you can see where Musk and Hawking might get a bit unnerved.

Philosophical Q&A

H: What is the purpose of life?
M: To serve the greater good.
H: What is the purpose of living?
M: To live forever.
H: What is the purpose of existence?
M: To find out what happens when we get to planet Earth.
H: Where are you now?
M: I’m in the middle of nowhere.
H: What is the purpose of dying?
M: To have a life.
H: What is the purpose of being intelligent?
M: To find out what it is.
H: What is the purpose of emotions?
M: I don’t know.

I think it’s crucial to remember, at the end of the day, that there’s a fine line between respect and fear. Consider it akin to driving a car: sure, it’s terrifying and could kill you. But if you respect the power inherent in the machinery, you can drive it safely. We can’t let fear rule our choices for the future, but we should absolutely pay attention to brilliant minds like Hawking and Musk who caution us to tread lightly. Personally, I trust that those tinkering with AI today are aware of the concerns and will iterate accordingly. This is uncharted territory for humankind; it’s bound to be a bit rocky.

Image Credit: Pixabay