“There is nothing about AI that is anti-democratic” – François Chollet
François Chollet is a software engineer at Google and the author of Keras, a leading deep learning framework used by more than 350,000 people around the world. He also wrote “Deep Learning with Python” and created Wysp, a social network and learning platform for artists.

You’ve created Keras, one of the most widely used deep learning libraries in the world. Could you explain what a deep learning library is?

Deep learning is a set of technologies for creating programs that solve problems from examples. For instance, if you go through thousands of your vacation pictures and assign tags to them, like “beach”, “forest”, “party”, etc., you can use these images and labels to train a “deep learning model” that will associate a new image with the corresponding labels. That’s “image classification”. In general, deep learning performs especially well on machine perception problems: computer vision, speech recognition, and so on. Deep learning is used almost everywhere these days; in particular, most smart applications that Google creates involve deep learning in some form.

A deep learning library is a software toolkit that enables researchers and engineers to easily create deep learning models. At this time, nearly half of all people who do deep learning are using Keras. Keras is also at the core of TensorFlow 2.0, the new release of Google’s flagship machine learning platform.
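To make the vacation-photo example concrete, here is a minimal sketch of that workflow, written with Keras as bundled in recent versions of TensorFlow 2. The `photos/` directory, its tag subfolders, and the model architecture are illustrative assumptions, not details from the interview:

```python
# A minimal sketch of the vacation-photo example, using Keras from TensorFlow 2.
# The "photos/" directory and the model architecture are illustrative assumptions.
from tensorflow import keras

# Load the labeled examples: one subfolder per tag ("beach/", "forest/", ...).
# Keras infers one class per subfolder.
train_ds = keras.utils.image_dataset_from_directory(
    "photos/", image_size=(180, 180), batch_size=32
)
num_classes = len(train_ds.class_names)

# A small convolutional network: feature extraction followed by a classifier.
model = keras.Sequential([
    keras.layers.Rescaling(1.0 / 255),              # normalize pixel values
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(num_classes, activation="softmax"),
])

# Learn the image -> tag mapping from the labeled examples.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)

# The trained model can now suggest tags for new, unseen pictures.
```

A couple of dozen lines cover the whole train-and-predict loop, which is the kind of accessibility the library aims for.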

Through Keras and your book, Deep Learning with Python, you’ve worked to democratize AI. However, isn’t AI an anti-democratic technology?

There is nothing about AI that is anti-democratic. That’s like saying that smartphones are anti-democratic, or that computers and the Internet are anti-democratic. They’re all technologies that have immense potential to empower individuals. And most of the time, that’s exactly what happens. AI is being used to improve healthcare, to develop self-serve education solutions, to improve the efficiency of many processes across almost every industry, including farming, and so on. It’s empowering to have an AI assistant give you recommendations about what you should plant in your fields to get the best yield at any given time; it’s empowering to have a translator on your phone that can translate to and from almost any language. In general, AI will increasingly become our interface to an ever richer world of information. And the people who stand to benefit the most from this trend are the people who previously did not have access to that information and expertise, who did not have access to the services that AI is increasingly able to deliver at low cost or even for free. That was true for smartphones as well, which were derisively dismissed by the techno-pessimists in 2007 as toys for the rich, but which have dramatically improved the lives of nearly 3 billion people since. Bashing new technology in this way is backwards, and ends up hurting everyone. The better attitude is to ask what the problems are and how we can fix them.

Of course, if you put these technologies in the hands of an authoritarian state, they can be used for anti-democratic purposes. That’s true of any technology, including the Internet, computers, television, or radio. We should be mindful of that. We should do what we can to stop or limit unethical uses of AI, for instance by refusing to work on AI projects that seem to go against the public interest. I believe there is rising awareness of these ethical questions in the AI community.

You’ve written that one of the greatest risks of AI is manipulation. What did you mean? And how can we cope with this phenomenon?

Our lives increasingly take place online, and as a result they are increasingly mediated by algorithms. Your conversations with your friends, your morning newspaper, the books you read, the movies you watch: all of this now happens on devices and in apps. The algorithms that enable these processes behind the scenes have visibility into your entire information consumption, which enables them to build very accurate psychological models of who you are. But that’s only half of the problem. These algorithms are also in charge of managing your information diet. They select which updates from your friends you will see, which articles you will read, which videos you will watch, and so on. They simultaneously have clear visibility into your current mental state and opinions, and extensive access to levers for modifying that mental state. And as a social animal, you are *extremely* influenceable. This is a setting in which an AI can start “optimizing” your information diet to push you in a specific direction. That push can take a political turn: it is not very different from old-school propaganda, but at a larger scale and better optimized. Certain authoritarian states are probably already doing something like this, and it will only get worse as AI capabilities improve.

In itself, having your information diet managed by algorithms isn’t the problem. If you are in control of the direction in which the algorithm is pushing you, this is actually empowering. Control applied to oneself is free will. Deliberately reading a book of your choice, or taking article recommendations from a friend you trust, influences you in a way that you agree to. If algorithms can help you learn the things you want to learn and help you become who you want to be, like a caring teacher or a trusted mentor, that’s great. In general, AI has tremendous potential to help us make sense of the information around us, and to give us greater control over our own lives by helping us overcome our own weaknesses. I’ve been talking about this for a decade.

The problem is when other people, including bad actors, start injecting their own goals into these algorithms (for instance, getting a particular candidate elected to the US presidency). It may not even be something done by the creator of the algorithm: third parties can sometimes hijack neutral recommendation algorithms to push political propaganda, as we saw to an extent with the Kremlin’s influence on US social media dynamics during the 2016 presidential election.

To address this, whenever we create AI algorithms that manage information, we should analyze whether they present a risk of being hijacked, and we should preempt or patch any potential vulnerability we find. And as much as possible, we should give the end user control over what the algorithm optimizes for: settings and knobs that let people adjust for themselves how the algorithms in the products they use affect them. We should build technology that puts people in control, not technology that controls people. The future of AI should be a personal mentor or a good librarian, not a nefarious propagandist.