Thinking about AI

Interview with Yoshua Bengio, Pioneer of AI – 02

On June 7th, 2019, at the MILA (Montreal Institute for Learning Algorithms) in Montreal, Canada, I interviewed Professor Yoshua Bengio, one of the pioneers of AI. He is well known as one of the “fathers of AI” for his great contributions to the development of so-called deep learning, and he received the 2018 ACM A.M. Turing Award with Geoffrey Hinton and Yann LeCun for major breakthroughs in AI.

In my interview, I asked him about the possibilities of AGI, biased data, people’s concerns about GAFA and China, the opportunities and risks of AI, and the future of AI. All of these questions are based on my previous experiences at the University of Cambridge as well as at the many international summits and conferences on AI to which I have recently been invited.

Bengio is also noteworthy because he has chosen to remain an academic, staying at the University of Montreal as head of the MILA, while other AI leaders such as Geoffrey Hinton have left academia and now work for Google. Bengio continues to teach students and to engage with local communities. He believes that the education of future generations and people’s engagement with AI are crucial for the creation of a better society that includes AI, because he is aware not only of the opportunities but also of the risks of AI. Through Element AI, the startup he co-founded, he is also instrumental in building a bridge between academia and the business world.

This is my interview with Yoshua.

People’s Concerns about GAFA and China

TT : People are concerned about big companies such as Google or Apple, or a country like China, because they have a huge amount of data and so can do whatever they want.

YB : Yes, but they probably also want to be considered positive, responsible agents in society. So if we communicate well, explain the issues, and engage in social discussion about these things, I’m optimistic that those social norms can be improved. And of course, that means changing how things are done.

TT : I met Dan Klein, chief data officer at Valtech, in Cambridge, UK. He is also concerned about China, because China has a huge amount of data that computer scientists can use to develop AI, while the UK and the EU have limited access to data because of the Data Protection Act. Also, Chinese companies pay high salaries to computer engineers outside of China, so great European engineers are going to leave for China.

YB : I don’t think our European engineers are going to China.

TT : Oh really? Maybe American?  Oh, I don’t know.

YB : Not much. No but they don’t need that. They have plenty of good scientists and engineers. The issue with China is that it’s difficult for many of us to have confidence that the current political system of China will behave responsibly, but it’s true of many countries that governments are not very responsible. If you consider, for example, climate change, the US has been behaving very badly.

TT : Yes.

YB : So even if it’s a democracy it doesn’t mean that governments will be doing the right things. So I think every country has an interest in being part of the global consensus for obvious economic reasons, but also, to feel good about themselves. So I think we should not pit countries against each other and peoples against each other.

TT : Yes.  I totally agree with you.

TT : If you develop an algorithm in Canada, would it also work in Japan or should we adapt it to our society?

YB : No, the algorithms are very generic. It’s like math: addition is the same in Japan as in Canada.

YB : No, but that’s data. That’s not the algorithm.

TT : Okay. I see.

YB : So the learning procedures are going to be the same, but the data will be different, and the systems that are trained using the learning procedures and the data, of course, will be different in different countries.

TT : So once we have the algorithm, we can use it with our own data.

YB : That’s right.

TT : Then it will work very well.

YB : It works better.
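
To make Yoshua’s point concrete, here is a minimal sketch in Python (my own illustration, not code from Mila or Element AI): the same generic training procedure is applied, unchanged, to two synthetic datasets that stand in for data collected in different countries, and the two trained systems end up with different parameters.

```python
# A minimal sketch of the point above: the learning algorithm is generic,
# but the trained systems differ because the data differs.
# Both datasets are synthetic, hypothetical stand-ins for country-specific data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical "Canadian" data: one distribution, one labelling rule.
X_canada = rng.normal(loc=0.0, scale=1.0, size=(500, 4))
y_canada = (X_canada[:, 0] + X_canada[:, 1] > 0).astype(int)

# Hypothetical "Japanese" data: same format, different distribution and rule.
X_japan = rng.normal(loc=1.0, scale=2.0, size=(500, 4))
y_japan = (X_japan[:, 2] - X_japan[:, 3] > 1).astype(int)

def train(X, y):
    """The same generic learning procedure, reused unchanged for any country."""
    return LogisticRegression(max_iter=1000).fit(X, y)

model_canada = train(X_canada, y_canada)
model_japan = train(X_japan, y_japan)

# The procedure is identical; the learned parameters (the trained "system") are not.
print("Canada coefficients:", model_canada.coef_)
print("Japan  coefficients:", model_japan.coef_)
```

The learning procedure, like addition, does not change from one country to another; only the data, and therefore the system trained on it, does.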

The Opportunities and Risks of AI

TT : You might be getting tired of talking about the opportunities and risks of AI, because people always ask you about them as one of the leading experts on AI. But I also think it’s very important for us to understand both; then we can maximize the opportunities and minimize the risks in order to get social benefit from AI. So what are the biggest opportunities and the biggest risks? I know there are many, many risks, but from your point of view, what are you most concerned about?

YB : So with AI, in terms of opportunities, I think there is huge potential for social good: in healthcare, in the environment, in fighting climate change, which is a very important question for the planet, and maybe a little further down the road in education as well. And on the risk side, I think the biggest risk really is a threat to democracy, a threat to the stability of our social fabric: because of things like killer drones, because of things like political advertising and the influence that one can buy on social networks, because of things like the concentration of power in a few hands, a few people, a few companies, a few countries, and because of the potential social unrest that could come from rapid automation. So all of these could be disruptive to society, and we have to be careful about where we draw the red line between what is acceptable and what is not acceptable in the applications of AI.

TT : Who determines the red line?

YB : It’s a very good question. Humans define their social norms through a global discussion, and in different countries it might be different types of people. Scholars usually have more impact on the result, and scientists, I think, should be part of the discussion, but regular citizens should be part of the discussion as well.

TT : Yes, I agree.

A scene from the two-day hackathon held with more than 100 local participants at the “AI for Making People Happy (AI4Good)” event.

The need for global collaboration

YB : At the end of the day, in decent democratic countries it’s going to be democratic decisions that determine where we put those lines. Where I think it’s trickier is that many of these decisions cannot be taken in isolation in each country. There has to be global, international coordination.

TT : Yes, definitely. I met Mr. Irakli Beridze, the head of the Centre for Artificial Intelligence and Robotics at the United Nations Interregional Crime and Justice Research Institute (UNICRI), and he said that he goes to Russia, Syria and other countries because the governments of those countries also have to cooperate.

YB : Right. Yes. It’s very important.

TT : But it’d be very difficult.

YB : Yes, unfortunately, we don’t have a good international coordination framework. The UN is very weak.

TT : Oh really?

YB : Oh yes. It doesn’t have any power.

TT : Really? I thought it had power. No?

YB : No, the UN doesn’t have nearly enough power. One issue I’m a little more familiar with is killer robots and lethal autonomous weapons. The Secretary-General has been saying for a while now that these are both morally repugnant and dangerous for global security, but the problem is that a lot of the decisions in UN decision-making committees and treaties happen by complete consensus. So if just one country in the committee says no, there’s no treaty.

TT : Yes.

YB : That cannot work. The problem is that individual countries have been too scared of losing some sovereignty, some power, to a higher level, which would be, for example, international government, but we have to do that otherwise we will not solve the climate change problem. We will not solve fiscal issues across the planet. We will not prevent dangers from misuse of AI. So there are lots of issues for which we have to have global coordination.

TT : Yes. I also believe we need more discussions about living with AI in the future, in terms of both opportunities and risks, locally and globally. Thank you very much.

YB : You’re welcome.

Acknowledgement

I would like to express my gratitude to Myriam Côté, director of AI for Humanity at Mila, for her kind invitation and her great support of my cross-cultural research on AI for good.

The 1st part is here
