Google’s James Manyika: ‘Artificial intelligence will change the world, like computers and electricity’
The expert on technology and society notes the need for regulation but disagrees with calls for a moratorium on development
James Manyika says that artificial intelligence (AI) has been all around us for decades — we just didn’t realize it. Long before becoming Senior Vice President of Technology and Society at Google last year, Manyika had been researching the intersection of technology and the economy, including artificial intelligence.
In that time, AI has gone from being the villain of dystopian science fiction movies to making regular appearances in worldwide headlines. “I did my PhD in robotics about 25 years ago and, you know, the field then was not where it is today. People don’t realize that long before the arrival of chatbots, they were already benefiting from artificial intelligence,” he told EL PAÍS during a Google event held at Madrid’s Lázaro Galdiano Museum.
Manyika says the current AI revolution has been brewing for the last 15 years, but acknowledges that developments have come very quickly, especially since last year’s launch of ChatGPT, OpenAI’s generative AI chatbot. In February, Google announced its own entry into the field — an AI chatbot called Bard. “It will be available in Spanish soon,” promises Manyika. “There is a lot of work to be done because Spanish is a complex language with many variations. We want to do it right.”
Question. Are we all overstating the significance of artificial intelligence?
Answer. No, I don’t believe so. The reason to give it this importance is that it’s such a profound technology. It’ll affect most things we do and how the economy works. You know — jobs, small companies, large companies, economic growth, productivity and learning. For me, the question is how to ensure that it is useful to society while also addressing the challenges that arise.
Q. Will it change the world?
A. I think so. What I find so profound about AI is that it’s a bit like computers themselves, or even electricity. It’s such a foundational technology that I can’t think of any activity, any part of society or anything we do where it’s not going to be useful. In that sense, I think it will change the world. But I would also say that, as powerful and as useful as I think it is, there are some very important consequences, risks and challenges that we have to grapple with.
“More than two thirds of jobs will be different. Not gone, just different. They’ll evolve and change.”
Q. What do you mean when you talk about risks?
A. On one hand, there are risks that arise when the technology itself doesn’t function as desired, resulting in inaccuracies or errors. Other types of risks are related to privacy and information management. But even when these two aspects function well, this technology can be misused. It can be used for criminal purposes, disinformation or creating threats to national security. There’s also a fourth complication related to secondary effects, such as the impact that AI can have on jobs, mental health and other socioeconomic factors that deserve attention.
Q. In fact, people are already losing their jobs due to AI…
A. There are jobs in which machines can perform some tasks currently done by humans. There will be losses, that’s true. But jobs will also be created, due to increased productivity and the creation of new job categories. But I believe that the biggest impact is that jobs are destined to change. Think about bank tellers who spent 90% of their time counting money in the 1970s. Now they spend less than 10% of their time on this task. Our research data suggests more than two thirds of jobs will be different. Not gone, just different. They’ll evolve and change.
Q. Should we be afraid of AI?
A. No, but we should be thoughtful about how we use it. Artificial intelligence is not something that just happened recently — we have been living with it for years. Looking back at its history, you realize that as soon as one of its applications becomes useful, we stop calling it AI. But we use the label for things that are still in the future or things that scare us. I’m not saying we shouldn’t be concerned about these things. We should. But we should also keep in context all the ways it’s already very useful to us.
Q. Geoffrey Hinton left Google precisely to raise awareness about the risks of this technology.
A. I know Geoff well. I think what he was trying to do, and what many of us have been trying to do, is to highlight that we should take a precautionary approach. Yes, the benefits are incredibly valuable, but we should also keep an eye on the concerns. I think he wanted to remind all of us about the risks that come with this technology, especially as it becomes more advanced.
Q. Why are there so many apocalyptic manifestos about AI?
A. I signed one of these letters because it’s essential that proper attention is paid. Every time we have a powerful technology, we need to think about both its benefits and risks. At Google, we want to be bold and responsible. I know those two things may sound contradictory, but they both matter.
Q. Is regulating AI a way to be responsible?
A. Yes. I think it’s important to regulate these technologies — they are too significant not to be regulated. We have been saying this publicly for some time now. Any technology this powerful, disruptive and complex needs regulation, no matter how useful it is. If it’s impacting people’s lives and society, there has to be some form of regulation.
Q. Some people are calling for a moratorium on AI development until it’s regulated.
A. That would mean pausing the benefits of this technology. Do we really want to stop sending flood alerts to millions of people? Stop medical research and development? I don’t think so. Before implementing a pause, it’s crucial to have a detailed plan of what will happen in that time. Additionally, effective coordination with all AI development stakeholders is essential. It’s important to communicate with governments so we can figure out what we want to do and how to do it.
Q. Is there any sector where applying AI is dangerous?
A. I don’t think so much about specific sectors, but rather about the technology’s application. The application of technology in medicine varies greatly from its application in the transportation sector, resulting in different risks. It’s crucial to consider how this technology is implemented in each context. For instance, while I appreciate what we’re doing with Bard, I think it’s a terrible idea to seek legal advice from it. However, if you were to ask me whether Bard should be used to write an essay and explore ideas, my answer is: of course.
Q. Is it a good idea to ask AI for help when we’re sick?
A. I would not get a medical diagnosis from a chatbot. Generally speaking, if I want factual information, I would go to the Google search engine. If I want to know what happened in Madrid this morning, I wouldn’t use Bard for that either.
Q. Do you think AI chatbots like Bard and ChatGPT will replace search engines?
A. I can’t speak for other companies, but let me tell you what we’re doing. Bard and Google Search are totally different. We are integrating AI and large language models into Search, but that’s a separate effort. Bard started as an experiment, and we’re still figuring out what it’s useful for. And AI has been improving our internet searches for much longer than people realize. Six years ago, you had to be quite specific with your queries. Now, writing something close enough will get you what you need.
Q. What do you think this will all look like in 10 years?
A. I think it will be amazing. I get excited when I think about all the things that could benefit society, like the ability to understand thousands of languages. Right now, our goal at Google is to be able to translate 2,000 languages, but in 10 years I think we can reach all 7,000 languages spoken in the world, even languages that are disappearing. It would be extraordinary. But at the same time, I hope we have made amazing progress in addressing all the risks we have discussed.
“Some of our fear of AI comes from an inability to accept that machines can also do creative things.”
Q. What would need to happen for us to lose control over AI?
A. We would have to somehow develop systems that can design themselves and set their own objectives. That would be problematic, but we’re light-years away from that. That’s the science fiction scenario. A more likely and troublesome scenario has less to do with out-of-control AI and more to do with people — humans using these technologies for evil purposes. We’re aware that the very system used to decode protein structures for drug development could potentially be misused to design harmful toxins or viruses. This is an immediate concern that weighs heavily on my mind.
Q. Where do you think people’s fear of AI comes from?
A. [Laughs] Movies. I say it jokingly, but I think it’s true. It goes back to what I said before: when this technology starts being useful, we simply stop calling it AI. It seems like we reserve this label for things we see in movies or things that we still don’t understand. However, I believe this fear stems from a question that has haunted us for ages: what is the essence of our humanity when machines can effortlessly perform tasks that traditionally set us apart from other living beings? Until recently, we thought we were the only ones capable of creating art, the only ones with creativity and empathy. I think some of our fear of AI comes from an inability to accept that machines can also do creative things that were previously considered exclusive to humans.
Q. Perhaps what scares us is that machines can do something better than we can?
A. We have to face that fear. We have to adjust our way of thinking and ask ourselves who we are and what we are good at. There was a time when we believed that intelligence was solely measured by the ability to perform mental calculations effortlessly. If you couldn’t recall textbook knowledge verbatim during exams, you were considered less intelligent. We used to hold onto such notions, but progress has shown us otherwise. I believe the trajectory of AI will follow suit. Perhaps it’s advancing at a rate that surpasses our capacity for assimilation. But I believe that humanity has always adapted and will continue to do so.