“It is not only AI that needs to be responsible. It is, above all, the humans deploying it” – Yoshua Bengio

Yoshua Bengio is one of the world’s leading experts in the field of artificial intelligence and a pioneer in deep learning. 

Since 1993, he has been a professor in the Department of Computer Science and Operations Research at the Université de Montréal. Co-director of CIFAR's programme Apprentissage automatique, apprentissage biologique, he is also the founder and scientific director of Mila, the Institut québécois d'intelligence artificielle, the largest university research group in deep learning in the world.

You explain that one of the next objectives is to achieve human-level AI. What does that mean? And what are the obstacles keeping us from doing so?

Achieving human-level AI is an approximate goal, as people can be very smart on some tasks and relatively dumb on others. This is true of humans, but even more so of today's intelligent systems. AlphaGo, for example, only plays Go; it can't recognise faces, drive a car or produce machine translations. For each of these tasks we have systems that have been trained separately on specialised data. For the time being, we have systems that, in general, do not even reach a human being's level of skill, with exceptions such as the game of Go. We would therefore like these systems to reach a human's level of skill on each of their tasks, but also overall, so that the same system has the capacity to understand the world as comprehensively as possible. And that is still a long way off.

What is preventing us from doing so? These are questions that researchers are asking themselves. We obviously don't have the answers, even though each researcher has a hunch. What is clear is that there are both material barriers (our brain is much more efficient in terms of computing power and neuronal connections than the artificial networks we are building today) and algorithmic barriers (in terms of the mathematical methods behind these systems).

How could AI help us face the climate emergency?

The algorithms already at our disposal can lend us a hand in the fight against global warming. For example, we have a project on modelling new materials. Learning algorithms can be used creatively, to invent new molecules. This is particularly valuable in dealing with climate change because new materials will be needed if we are to create more efficient batteries or to capture CO2 emissions. We have another project where, working in cooperation with climatologists, we are trying to better model climate change itself. There are all sorts of things which the current physical models are not capable of grasping, such as clouds or water flow (to predict floods). We are also working to use these climate models to predict whether a home will be flooded in 50 years and, if so, what that will look like. The idea here is to raise awareness of global warming and help people understand how they will be affected by climate change. There are many other areas of application, for example optimising an infrastructure's energy consumption, better spreading the demand for electricity, or increasing the share of renewable energies used by the electricity system.

One of the things you advocate for is responsible AI. What is responsible AI? And how do you, at Mila, make sure you are creating responsible AI? 

It is not only AI that needs to be responsible. It is, above all, the humans deploying it. We are very far from creating AI that needs to be responsible, quite simply because we make highly specialised applications that don't (yet) need to understand human psychology. When we do get to that point, yes, we will need to build responsible AI that understands human morals and the values that humans can have. For the time being, what my colleagues and I in Montreal are most worried about is how these technologies will be used. AI can be used to do good, to save lives and to improve people's well-being, but it can also be used to take advantage of people and manipulate them, to make money at the expense of people's health and the environment.