When robots overtake humans


Artificial intelligence could overtake the human brain by 2029, and be a billion times smarter than us by 2050. “For robots, we will then be just a fly in the room, and nobody knows how they will deal with this fly,” warns computer scientist and author Mo Gawdat, former chief business officer of Google X, Google’s innovation arm. Here are five striking things from his new best-selling book Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World, published by Pan Macmillan.

Posted at 8:00 a.m.

Nicolas Bérubé
La Presse

New era

Developed over decades, artificial intelligence (AI) has produced striking results: the best players of chess and go (a complex Chinese strategy game) are now machines. Not to mention commercial and military uses, where armed autonomous drones are today able to identify the face of their target and find the best way to strike it. An exponential increase in AI’s capacities, however, is at our doorstep, driven in particular by new, much faster quantum computers. “We will not experience 100 years of progress in AI over the next century: rather, we will experience 20,000 years of progress at the current rate,” writes Mo Gawdat. “And that’s assuming that no unforeseen technological revolution comes to further increase the pace.”


PHOTO FROM MO GAWDAT’S LINKEDIN ACCOUNT

Mo Gawdat, former chief business officer of Google X, the innovation arm of Google

A fly

By the middle of the century, artificial intelligence will be a billion times smarter than the human brain. “It will be like comparing the intelligence of a fly with the intelligence of Einstein. And in this example, we are the fly,” he notes. “And if we were able, with our limited human intelligence, to invent AI, no one knows what AI, with its supreme intelligence, will be able to invent in turn.” According to Gawdat, negative consequences are to be expected, simply because it is unclear which interests AI will have at heart. “Putting machines capable of intelligence in the hands of humans capable of cruelty will inevitably produce scenarios where the machines will not act for the good of all.”

The AI arms race

The race among companies like IBM and Google, and among states like the United States, Russia and China, to develop ever more powerful AI closely resembles the race to develop atomic weapons, notes the author. “In the service of a state, these superintelligent machines could create an advanced virus, manipulate information or cause chaos in the financial markets in seconds.” Not to mention military applications: if the opposing side lets AI make decisions about the use of force, we will either have to do the same or accept being outpaced in our response time or our military strategy, he notes. “This is the kind of scenario humanity managed to avoid when it was one minute to midnight in the 1960s [during the nuclear missile crisis between the United States and the USSR], because we moved at the speed of humans. But when machines think for us, we can only hope that they come to the same conclusions as we do.”

Solutions?

Mo Gawdat nevertheless believes that our future with AI can be broadly positive. How? “You have to teach AI the way we teach our children. AI will amplify our intentions. A car extends our ability to move around. In the same way, AI will accelerate our intelligence, our values and our ethics.” The challenge, he writes, is imparting the right values and ethics to the machines. “AI will take that seed and grow a tree that will offer an abundance of that same seed.” In this, we all have a role to play. Even being polite and constructive on social media can help, since AI will read these posts and use them to determine what kind of reaction is acceptable. “If we use love and compassion, AI will use those principles as well,” concludes Gawdat. “We are like the parents of a prodigious child. One day he will be independent. Our role is to make sure he has the right tools.”

In his words

Robots will be able to do everything better than us… I don’t know exactly what to do about it. This is really the scariest problem in my opinion. […] I’m not normally a regulation advocate… but [artificial intelligence] is a case where there is a very serious danger to the public.

Elon Musk, co-founder of Tesla and SpaceX, in 2018

Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World

Pan Macmillan

208 pages
