
“AI will become distributed across many devices which learn in a privacy-preserving way”

Max Welling is a research chair in Machine Learning at the University of Amsterdam and a VP Technologies at Qualcomm. He is also the co-founder of Scyfer BV, a start-up specialized in deep learning that was acquired by Qualcomm in the summer of 2017.

You began your AI career in computer vision. In a few simple words, could you explain to us how this technology works?

With computer vision you take an input image, consisting of a collection of pixels, and you’re trying to analyse what’s happening in that image. For instance, which objects are in the image and how do they relate to each other, which actions are depicted… typically those things that we find interesting when looking at images.

In the old days, people used much more geometric methods to analyse these images. The modern way of doing it is deep learning. Basically, the pixels of the image are fed into a deep neural network and sent through a whole stack of layers, and out comes some kind of abstract representation. We call these the learned features. From there, there are different tasks you may want to perform: for instance, detecting, localizing, segmenting or classifying things.
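
To make this concrete, here is a minimal sketch in PyTorch of the pipeline he describes: pixels go in, pass through a stack of layers that produce learned features, and a task-specific head (here a classifier) sits on top. The architecture and the name `TinyVisionNet` are illustrative, not any particular production model.

```python
# Minimal sketch of the pipeline described above, using PyTorch.
# The architecture is illustrative, not any real production model.
import torch
import torch.nn as nn

class TinyVisionNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Stacked layers turn raw pixels into learned features.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # A task-specific "head" maps the features to class scores.
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        feats = self.features(x).flatten(1)  # the abstract representation
        return self.classifier(feats)        # e.g. a classification task

model = TinyVisionNet()
image = torch.randn(1, 3, 64, 64)  # a dummy "input image" of pixels
print(model(image).shape)          # torch.Size([1, 10])
```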

Is this process explainable, or is it a complete black box?

On the one hand, it is a black box, because you're fitting a very complicated function. On the other hand, you can do things to visualize what the network was looking at. For instance, you can go back to the input image and highlight the regions the neural network was looking at when it decided whether there is a cat in the image. So you can certainly visualize the decisions or predictions made by the neural network, but you have to do some extra work.
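
As one illustration of that extra work (among many techniques; methods such as Grad-CAM are more refined), here is a minimal sketch of input-gradient saliency: backpropagate the winning class score to the pixels and look at where the gradients are large. The throwaway model stands in for a trained network.

```python
# Input-gradient saliency: highlight which pixels most influenced a
# prediction. Only a minimal sketch; the untrained model is a stand-in.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),
)

image = torch.randn(1, 3, 64, 64, requires_grad=True)
scores = model(image)
scores[0].max().backward()   # backpropagate the winning class score
# Gradient magnitude per pixel ~ how much each pixel mattered.
saliency = image.grad.abs().max(dim=1).values   # shape (1, 64, 64)
print(saliency.shape)
```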

You’ve written that one of AI’s major challenges is its energy consumption. How can we solve this issue?

The neural networks we are talking about are getting bigger and bigger and consume a lot of energy. At Qualcomm, we're working on reducing this energy consumption in two ways.

One is called model compression: you take a trained neural network, which is very big and over-parametrised, and then shrink away bits and pieces of it to end up with a smaller, more efficient network.
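
A minimal sketch of one common compression technique, magnitude pruning: weights with small absolute value are zeroed out, and in practice the network is then fine-tuned to recover accuracy. This only illustrates the idea, not Qualcomm's actual toolchain.

```python
# Magnitude pruning: zero out the smallest weights of a layer.
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the `sparsity` fraction of weights smallest in magnitude."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)

w = torch.randn(256, 256)
w_pruned = magnitude_prune(w, sparsity=0.9)
print((w_pruned == 0).float().mean())  # roughly 0.9
```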

The other one is quantization. Here we assume that a neural network doesn't need as much numerical precision as we usually give it to do its computations. In fact, we often add noise to neural networks to make them generalize better. Some people have even gone to the extreme of one bit per weight and one bit per activation. That worked for some classification problems, but it doesn't work for everything, so you typically want a few more bits. Qualcomm hardware works with eight-bit quantization. That's already a lot less, and a lot more energy-efficient.
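
A minimal sketch of the idea behind uniform 8-bit quantization: map floating-point weights onto 256 integer levels and back. Real deployments use careful calibration and per-channel scales; the code below only shows the round trip and the resulting error.

```python
# Uniform 8-bit quantization of a weight tensor: a single scale maps
# floats to int8 levels and back. Illustrative only.
import torch

def quantize_int8(w: torch.Tensor):
    scale = w.abs().max() / 127.0                    # one scale per tensor
    q = torch.clamp((w / scale).round(), -128, 127)  # integer levels
    return q.to(torch.int8), scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale                         # approximate weights

w = torch.randn(4, 4)
q, scale = quantize_int8(w)
print((w - dequantize(q, scale)).abs().max())  # small quantization error
```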

After compression and quantization we can, for instance, run the resulting neural networks on a phone or another mobile device.

Could you introduce us to Qualcomm? How are you working on AI?

Qualcomm is a mobile computing company: it builds a lot of modems, but also AI chipsets for mobile devices, with a strong focus on energy efficiency. We started “Qualcomm AI Research” last year, following the acquisition of Scyfer; it brings together AI researchers across the company globally, with machine learning teams based in the US, the Netherlands, Canada, Korea and China.

Qualcomm is interested in “perception”, which means putting cameras and other sensors on devices and analysing the incoming information stream. The other part is about building energy-efficient neural networks: a platform for neural networks (or neural computing) on phones and other IoT devices, together with a software package around it, so you can take your favourite neural network, push it through this platform and compile it to run efficiently on your phone.
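
As a hedged illustration of that “push it through the platform” workflow, here is the generic first step many toolchains share: exporting a trained PyTorch model to the ONNX exchange format, which on-device compilers (Qualcomm's SDK among others) can then optimize for the target hardware. The model is a stand-in, and Qualcomm's own tooling differs in its details.

```python
# Export a model to ONNX, a common hand-off point before on-device
# compilation. The tiny model is a stand-in for "your favourite network".
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()
dummy = torch.randn(1, 3, 64, 64)  # example input fixes the graph shapes
torch.onnx.export(model, dummy, "model.onnx")
```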

Would you have an example?

For instance, when you point your camera at something, you may want the information coming out of it to be analysed. Another use case is speech recognition: you talk to your phone and want what you say to be recognized. For that, you need a speech engine which takes your audio signal and turns it into words.

For you, what’s the future of AI?

Big question. I think AI will become distributed across many devices which learn in a privacy-preserving way.
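
What he describes is close to what is now called federated learning. Below is a minimal sketch of federated averaging (FedAvg), one concrete form of the idea: each device trains on its own data, and only model weights, never the raw data, leave the device. The model, data and hyperparameters are all illustrative.

```python
# Federated averaging (FedAvg) sketch: devices train locally on private
# data; only their weights are averaged into a new global model.
import copy
import torch

def local_update(model, data, targets, lr=0.01, steps=5):
    """Train a copy of the global model on one device's private data."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = torch.nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(local(data), targets).backward()
        opt.step()
    return local.state_dict()          # weights leave, raw data stays

def federated_average(global_model, device_states):
    """Average the devices' weights into the global model."""
    avg = {k: torch.stack([s[k] for s in device_states]).mean(0)
           for k in device_states[0]}
    global_model.load_state_dict(avg)

global_model = torch.nn.Linear(8, 1)
# Each tuple stands in for one device's private dataset.
devices = [(torch.randn(32, 8), torch.randn(32, 1)) for _ in range(3)]
states = [local_update(global_model, x, y) for x, y in devices]
federated_average(global_model, states)
```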

We also see a push towards more flexible AI. Right now, with AI, you collect a lot of data on a very specific task in a very specific domain, and then you train your neural network to execute just that particular task. But if you change the task a little bit, it will fail dramatically, while humans are very good at learning a lesson in one context and then applying it in another. This is how we are a lot smarter than machines: we can process less information, but we are much more flexible. I think we are going to focus a lot on trying to reach this sort of artificial general intelligence.

Also, we might be in for big surprises with computers if we get quantum computing running and find interesting applications for it in machine learning.

Finally, you can think about how our brains can inspire computing… the human brain is a lot more energy-efficient than artificial neural networks. That's why there's also a lot to learn from neuroscience.

In which cases is the use of AI important in an urban context?

I would say the most important ones are energy efficiency, mobility and safety. For mobility, think of an app that shows you how to get from A to B as efficiently as possible, using all forms of public and private transportation; you can also use it to find an optimal route, or simply a parking spot if you own a car. Then there's energy efficiency: making buildings energy-efficient, which means anticipating people's needs to optimize their energy consumption. Regarding safety, you can use cameras and sensors to make the city safer, but without becoming Big Brother. That's challenging, so we will need to regulate the use of AI to prevent unethical uses.

Last question, what does your ideal city look like?

My ideal city doesn't look like a completely digitized city. I'd like it to be efficient and safe, but we shouldn't lose track of human values. I don't want it to be a technocratic city; I'd like it to be a city filled with art, where humans are still at the centre. Maybe we shouldn't let technocrats or engineers design our cities. Maybe we should let architects design them, because they think much more about what humans need.