The AI Paradox

Fake news, hype… it is perhaps a misnomer to call AI "intelligence". In any case, it is becoming more and more difficult to see what lies behind the AI acronym. As we embrace a new era of decision making, what can we expect from this seventy-year-old technology? How far could AI-based services change our lives?

AI is everywhere. Really?

You can't open an economic newspaper without reading the acronym AI several times in each article, especially those about deep-tech startups. What is going on with this technology, or these algorithms… and, first of all, what are we talking about?

What is AI about?

During the last Viva Technology event, I had the opportunity to attend a talk on AI, "the leading edge of artificial intelligence", by Bruno Maisonnier (founder of Aldebaran Robotics). He was very clear: there is no "intelligence" behind Machine Learning or Deep Learning methods, at least if we define the term as "the capability to adapt to a non-forecasted situation". Of course, nobody is fooled by the marketing goals of a smart entrepreneur.

Let's take an example. Ask your personal digital assistant to deal with the following statement: "I have two eggs in my fridge and I am very hungry". I bet you will wait a long while before it answers: "Hey, make yourself two eggs over easy…"

However, not a day goes by without a startup launching a new product or service based on this technology. Even some large companies, whatever their sector, claim that this new technology is going to change their long-term strategy drastically.

Some argue it is a communication, or even a semantic, issue. But what kind of intelligence lies behind this acronym?

To deal with this issue, I will use the following definitions of "intelligence".
Weak intelligence (Weak AI) is focused on one narrow task: solving a specific problem in a known context, such as chess or the game of Go. Strong intelligence (Strong AI) is something else: the problem to be solved requires consciousness, sentience and mind, and the decision process has the ability to take a decision in a non-forecasted context.

Scientifically, we can say that the birth of AI goes back to the 1950s with the Perceptron [1], the first machine-learning algorithm. We then had to wait until the 1990s to see the first results, with Deep Blue beating Kasparov at chess in 1997. The third period, boosted by computing power and storage capacity, started in 2011 with the performances of IBM Watson and AlphaGo. They gave Weak AI its first resounding successes.
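The Perceptron itself fits in a few lines of code. Below is a toy sketch of my own (an illustrative example, not the original 1958 formulation): it learns a linear separator for the logical AND function by nudging its weights on every misclassified point.

```python
# A toy Perceptron (Rosenblatt, 1958): learn a linear separator by
# adding y * x to the weights whenever a training point is misclassified.
def perceptron_train(points, labels, epochs=20):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):        # y is +1 or -1
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:   # wrong side of the line
                w[0] += y * x1
                w[1] += y * x2
                b += y
    return w, b

# Linearly separable data: the logical AND gate, with labels +1 / -1.
pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
ys = [-1, -1, -1, 1]
w, b = perceptron_train(pts, ys)
preds = [1 if w[0] * x1 + w[1] * x2 + b > 0 else -1 for x1, x2 in pts]
print(preds)  # → [-1, -1, -1, 1]
```

For linearly separable data the perceptron convergence theorem guarantees this loop stops updating after finitely many mistakes; on a non-separable problem such as XOR it would cycle forever, which is exactly the kind of limitation that fed the first AI winter.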

Nowadays AI is an outstanding set of models, tools and powerful algorithms that is going to revolutionize the decision-making domain, even if these results owe much to the evolution of computing power and storage capacity over the last decade.

The real revolution: a new step in the decision-making process

Beyond these facts, we can't deny that a new decision-making era has begun. What is interesting with the AI phenomenon is the analogy with what happened during the 1950s with the first revolution in the decision-making process: the birth of the discipline we call "Operations Research". Its workhorse was "the Simplex" [2], George Dantzig's algorithm for linear problems (min c·x subject to Ax = b, x ≥ 0). It was used in particular during the Second World War to solve the problem of transporting troops to the front. As a huge number of optimization problems can be written as linear programs, it was thought at the time that, with the arrival of the first computers, no problem could resist this method. In fact, we had to wait 30 years for a powerful tool for linear optimization problems: IBM developed the MPSX code in the 80s and Ilog [3] the Cplex code in 1987.
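To make the formulation concrete, here is a teaching-sized tableau implementation of the Simplex method, written for this article (an illustrative sketch, nothing like Cplex-grade code). It handles problems of the form max c·x subject to Ax ≤ b, x ≥ 0 with b ≥ 0; minimization is obtained by negating c, and equality constraints would need an extra initialization phase that is omitted here.

```python
def simplex(c, A, b):
    """Tableau simplex for: max c.x  s.t.  A x <= b, x >= 0, with b >= 0."""
    m, n = len(A), len(c)
    # Build the tableau; the slack variables form the initial basis.
    T = [list(row) + [0.0] * m + [float(bi)] for row, bi in zip(A, b)]
    for i in range(m):
        T[i][n + i] = 1.0
    T.append([-float(ci) for ci in c] + [0.0] * (m + 1))  # objective row
    basis = list(range(n, n + m))
    while True:
        # Entering variable: most negative reduced cost in the objective row.
        col = min(range(n + m), key=lambda j: T[-1][j])
        if T[-1][col] > -1e-9:
            break                       # optimal: no improving direction left
        # Leaving variable: minimum-ratio test keeps the solution feasible.
        rows = [i for i in range(m) if T[i][col] > 1e-9]
        if not rows:
            raise ValueError("problem is unbounded")
        row = min(rows, key=lambda i: T[i][-1] / T[i][col])
        # Pivot: turn column `col` into a unit vector.
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for i in range(m + 1):
            if i != row and T[i][col]:
                f = T[i][col]
                T[i] = [a - f * p for a, p in zip(T[i], T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, j in enumerate(basis):
        if j < n:
            x[j] = T[i][-1]
    return x, T[-1][-1]

# Tiny example: max 3x1 + 2x2  s.t.  x1 + x2 <= 4,  x1 + 3x2 <= 6.
x_opt, value = simplex([3, 2], [[1, 1], [1, 3]], [4, 6])
print(x_opt, value)  # → [4.0, 0.0] 12.0
```

The algorithm walks from vertex to vertex of the feasible polytope, always along an improving edge, which is why it terminates at an optimal basic solution here in a single pivot.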

Like every new theory, after the hype period come the frustrations, and for the Simplex and the linear codes these frustrations took two shapes: the first was the data-size issue, the second the non-convexity of a large number of problems (which have multiple local optima).

The first issue gave rise to complexity theory [4] and the study of NP-complete problems: the time required to run an algorithm as a function of the size of the data. To put it simply, if the function linking this time to the data size grows faster than any polynomial, we can expect an unreasonable computing time. Similar convergence issues were at stake for these algorithms, and we find them again today in the Deep Learning approach [5] to AI.
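To illustrate why super-polynomial running time matters, take subset-sum, a classic NP-complete problem. The brute-force solver below (an illustrative sketch of my own) enumerates all 2^n subsets, so every extra data item doubles the work — exactly the explosion complexity theory warns about.

```python
# Brute-force subset-sum: does some subset of `values` add up to `target`?
# Enumerating every subset costs O(2^n): one more item doubles the search space.
from itertools import combinations

def subset_sum_bruteforce(values, target):
    n = len(values)
    for r in range(n + 1):
        for combo in combinations(values, r):
            if sum(combo) == target:
                return True
    return False

print(subset_sum_bruteforce([3, 9, 8, 4, 5, 7], 15))  # → True (3 + 4 + 8)
# For n = 20 items there are 2**20 ≈ 1e6 subsets; for n = 40, about 1e12.
```

No known algorithm solves every instance of such problems in polynomial time; in practice one falls back on pruning, approximation, or heuristics like the annealing approach discussed next.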

The non-convexity issue was partially fixed by the Simulated Annealing [6] approach, a probabilistic technique borrowed from statistical mechanics for approximating the global optimum of a given function. Specifically, it is a metaheuristic for approximate global optimization in a large or discrete search space. The method is due to the IBM researcher S. Kirkpatrick. Today [7], Fujitsu has announced the Digital Annealer [8], inspired by quantum computing, for solving large-scale combinatorial problems. In the same paper, Fujitsu claimed to be fostering the software part of its product with a tremendous effort in AI technologies.
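The core of Simulated Annealing is a random walk that always accepts improvements and sometimes accepts worsening moves, with a probability that shrinks as a "temperature" parameter cools. The sketch below is my own illustration with untuned parameters (not taken from Kirkpatrick's paper); it attacks a one-dimensional non-convex function with many local minima.

```python
# Minimal simulated annealing on a non-convex 1-D objective.
import math
import random

def f(x):
    return x * x + 10 * math.sin(3 * x)   # many local minima

def simulated_annealing(f, x0, t0=5.0, cooling=0.995, steps=20000, seed=0):
    rng = random.Random(seed)             # fixed seed: reproducible run
    x, best = x0, x0
    t = t0
    for _ in range(steps):
        cand = x + rng.gauss(0, 0.5)      # random neighbour of x
        delta = f(cand) - f(x)
        # Always accept improvements; accept worsening moves with
        # probability exp(-delta / t), which shrinks as t cools down.
        if delta < 0 or rng.random() < math.exp(-delta / t):
            x = cand
        if f(x) < f(best):
            best = x                      # track the best point visited
        t *= cooling
    return best

x_star = simulated_annealing(f, x0=8.0)
print(x_star, f(x_star))
```

The high temperature early on lets the walk jump out of poor local minima; the cooling schedule then freezes it near a deep one. There is no guarantee of reaching the global optimum, which is precisely why it is called a metaheuristic.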

There is no Strong AI paradox; this is only the very first step in this new era of the decision-making world. In the 1950s, we would have found the same expectations and frustrations around the "Operations Research" theory. But we have learnt two things: first, whatever the solution chosen, the nature, the structure and the size of the data remain a crucial issue; second, the combinatorial and non-convex problems will remain among the difficult ones for a long time, as we have known since László Lovász.

How long will it take to accept these new services?

Indeed, some hurdles remain to be overcome, even if some tremendous developments of Weak AI will become reality much earlier than expected. Beyond the ethical, regulatory and legal issues, and the fact that we will have to manage the societal impact of sharing work with machines, it could take some time before we can delegate decisions to these services in a tenable and sustainable way.

However, I can't make this assumption without mentioning Carlota Perez's book Technological Revolutions and Financial Capital. She explains that each of the last five industrial revolutions took between 45 and 50 years to be fully digested by the mass market.

It took Operations Research more than 40 years to go from de la Vallée Poussin's first works and Dantzig's algorithm to Pierre Haren's Cplex code. So we can expect that, for Strong AI, we have more than a decade ahead of us to remove all the frustrations and fears generated by AI.

Could this new industrial revolution follow the same law?

More info:

[4] Michael R. Garey and David S. Johnson (1979), Computers and Intractability: A Guide to the Theory of NP-Completeness, W. H. Freeman.
[5] Stéphane Mallat, "Understanding deep convolutional networks", Philosophical Transactions of the Royal Society A.
[6] S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi, "Optimization by Simulated Annealing", Science, New Series, Vol. 220, No. 4598 (May 13, 1983).
[7] Les Echos, May 30, 2018: "Fujitsu s'inspire de la recherche quantique pour son « digital annealer »".

