Can Artificial Intelligence (AI) Replace Human Intelligence One Day?

The success of Machine Learning algorithms in recent years has started a very natural deductive process, one that guides our minds through a series of spontaneous steps toward believing that artificial intelligence (AI) can eventually replace human intelligence. In truth, it is not exactly like that. According to Yoroi’s founder, Marco Ramilli, experts in this field are asking themselves many questions about AI, and they have created a new discipline called “Explainable Artificial Intelligence” (XAI).

DARPA’s XAI aims to understand how an AI reaches a decision. An artificial “mind” finds it difficult to correct a human error, while a machine error is easier for a human to correct

XAI is a DARPA project aimed at understanding how the “system” (AI) has reached a decision. Every decisional algorithm, whether artificial or human, carries a margin of error, and that error plays out very differently depending on whether the decision was human or artificial. An artificial “mind” finds a human error difficult to correct, while a machine error is easier for a human mind to correct. From this evidence comes the need to comprehend how the “machine” (meant as an artificial intelligence algorithm) has made its decision. If man can understand how the automatic system has chosen one solution over another, then, when an error occurs, he can correct it or, even better, teach the machine not to make “that error” again in its future decisions. We are moving the problem from the ability to decide to the ability to trust the decision.
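
In the spirit of that question, here is a minimal sketch (synthetic data and hypothetical feature names such as paw_detail, not DARPA’s actual tooling): a small interpretable model is trained and the rules it actually learned are printed, which is the kind of answer XAI wants to obtain from far larger and more opaque systems.

```python
# Minimal sketch: inspect how a small model reached its decisions.
# Data and feature names are synthetic/hypothetical, for illustration only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)

# Hypothetical features of an animal photo; labels: 0 = dog, 1 = wolf.
X = rng.random((200, 3))
y = (X[:, 0] > 0.6).astype(int)  # the label leaks almost entirely from the first feature

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# "Why did you make this choice?" -- print the rules the model actually learned.
print(export_text(clf, feature_names=["paw_detail", "snout_length", "background_snow"]))
```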

When Artificial Intelligence is called upon to make real strategic decisions, the concept of trust must be well established

It hardly needs pointing out that, as long as AI-based systems are used to decide when it is “time to do the laundry”, “how often to water the garden”, or “when to raise the temperature in the house”, the concept of trust related to AI remains somewhat marginal. However, when such systems are employed to drive our cars autonomously, to perform surgical operations, or to wage wars on enemy territory, the concept of trust in AI must be well established. We should have no doubts about our safety when our car decides, completely on its own, to drive at 130 km/h on the highway; we should not fear a software error just before undergoing open-heart surgery; nor should we feel remorse if an artificial intelligence decides that it is time to start a missile conflict.

All the questions that the DARPA XAI project asks of Artificial Intelligence

Behind the DARPA XAI project lies the need to understand how a system based on artificial intelligence algorithms has made a decision, in order to correct it if necessary and/or prevent wrong decisions. This discipline asks the Artificial Intelligence the following main questions: Why did you make this choice? Why did you not choose another solution? When can I deem that decision to be the right one? When can I trust your decision? How can I detect your errors so that I can correct them?

The example of the “Wolf” and the “Dog”

One of the clearest examples is the case of the “Wolf” and the “Dog”. A decision system based on supervised machine learning was designed specifically to distinguish a dog from a wolf, and explainability techniques were applied to understand how the system “observes” and “reasons” at every step of the decision-making process. This allowed the human observers to see that the artificial system based its decisions on the “micro details” of the paws: in other words, the system was able to tell wolves and dogs apart in 97% of cases through a hyper-detailed study of the paws.
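
A toy sketch of how such a finding can surface (synthetic data and hypothetical cues, not the original experiment): permutation importance shows that a classifier with a very high success rate can rest almost entirely on a single feature, the analogue of the “micro details” of the paws.

```python
# Sketch: a high-accuracy classifier whose decisions hinge on one cue.
# Features and data are hypothetical/synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
features = ["paw_detail", "fur_colour", "ear_shape"]  # hypothetical cues

# The "wolf" label is fully predictable from paw_detail alone.
X = rng.random((1000, 3))
y = (X[:, 0] > 0.5).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

clf = RandomForestClassifier(random_state=1).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))  # high, in the spirit of the 97%

# Which inputs does that accuracy actually rest on?
result = permutation_importance(clf, X_test, y_test, n_repeats=10, random_state=1)
for name, importance in zip(features, result.importances_mean):
    print(f"{name}: {importance:.3f}")
```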

All the doubts about a system with a 97% success rate

The question was pertinent: how can I trust a system that is right 97% of the time, but that distinguishes a wolf from a dog simply by analyzing their paws? Can I, a human being, say that such a system has understood what a wolf actually is, and so delegate to it a decision regarding the life of that animal? It is evident that a wolf is not identifiable simply by its paws: it has a distinctly aggressive behavior, a way of moving and of smelling its prey that a dog does not have. A wolf lives in different ways and places, and reacts to human contact differently than a dog. A wolf howls and growls differently than a dog, and a wolf looks at its prey in a different way than a dog does.

The first step toward a good decision is to be aware of the issue

When a human being has to make important decisions, he needs to be adequately sure that they are grounded in awareness. No wine producer would delegate the decision on the fermentation of his or her product to a Formula One driver, because the latter is typically not an expert on the subject, just as a driver would never delegate to a winemaker the decision on which tires to mount on his car during a race. The first step toward a good decision is to be aware of the issue, to fully understand the domain in which the decision is being made.

The example of the dog and the wolf shows that, even if the artificial system has a high success rate, it is not possible to confirm that it has really understood the subject. Ergo, we cannot delegate our trust to it

The example of the dog and the wolf has actually shown that, even if the artificial system has a high success rate, it is not possible to confirm that it has truly understood the subject (a wolf is not defined by its paws). Being aware that the success of the system rests on a limited set of decisions, we cannot delegate our trust to it, because we are certain that: (a) the system is not aware of what a dog or a wolf actually is; and (b) however large and complex it may be, the set of tests does not represent the totality of the real cases in which the artificial system could be called into question.
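
Point (b) can be made concrete with a small sketch (synthetic data and hypothetical features, not a real benchmark): a shortcut that scores well on the distribution it was tested on can collapse as soon as the deployed cases stop matching that distribution.

```python
# Sketch: high test accuracy does not survive a change in the real-world cases.
# Data and features are synthetic/hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Training/test world: paw_detail happens to correlate perfectly with the label.
X_train = rng.random((1000, 3))
y_train = (X_train[:, 0] > 0.5).astype(int)
X_test = rng.random((500, 3))
y_test = (X_test[:, 0] > 0.5).astype(int)

clf = RandomForestClassifier(random_state=2).fit(X_train, y_train)
print("accuracy on the cases it was tested on:", clf.score(X_test, y_test))

# Deployment world: the real concept (stood in here by another feature) drives
# the label, and the paw shortcut no longer tracks it.
X_new = rng.random((500, 3))
y_new = (X_new[:, 1] > 0.5).astype(int)
print("accuracy on cases it was never tested on:", clf.score(X_new, y_new))
```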

Before it can be autonomous in making decisions, Artificial Intelligence will have to win our trust

Artificial intelligence can be of enormous help to the whole of humanity, amplifying its capacity for action and its speed of reaction to predetermined events. However, Ramilli concluded, before it can be autonomous in making decisions, it will have to earn our trust.
