
Explainable AI: Bridge Between Humans & Machines Making Way For New Tech Evolution

The tech giants are developing a new class of tools called ‘Explainable AI’ that not only provide accurate results but also explain the factors considered in reaching them.

New Delhi: As technology evolves alongside the human race, the prospective vision of human-machine collaboration becomes clearer and more concrete. In this evolving relationship between humans and machines, Artificial Intelligence (AI) plays a significant role.

What seems to be the next step in this evolution is AI that explains itself. Artificial intelligence is already known as a tool that processes big data, analysing it in place of a human being in far less time and producing more accurate results.

However, more often than not, the human operator must take these results at face value, as no explanation accompanies them beyond a plausible guess at the reason.

Now the tech giants are coming up with a new class of tools, called ‘Explainable AI’ or XAI, that not only provide accurate results but also explain the factors considered in reaching them.

What Is Explainable AI or XAI?

According to a report by the news agency Reuters, Microsoft’s LinkedIn was the first to apply XAI on its platform, and it now claims beneficial outcomes.

Explainable AI can be understood this way: AI already produces accurate results on a data set by running multiple functions far faster than humans can. However, those results are often incomprehensible to humans.

That often leaves them with the choice of accepting the results or discarding them entirely. XAI, by contrast, accompanies each result with a short description of how it reached the conclusion.
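As an illustration of what such an explanation might look like, here is a minimal sketch of a renewal-prediction model whose per-feature contributions double as the explanation. The feature names and weights are invented for illustration; they are not LinkedIn's actual model.

```python
# Hypothetical linear model for "will this subscriber renew?".
# Weights are made up for illustration; a real system would learn them.
WEIGHTS = {
    "logins_per_week": 0.6,
    "messages_sent": 0.3,
    "support_tickets": -0.4,
}
BIAS = -1.0

def predict_with_explanation(features):
    """Return (likely_to_renew, ranked_factors).

    Each factor's contribution is weight * value, so the list of
    contributions IS the explanation of the prediction.
    """
    contributions = {name: WEIGHTS[name] * features.get(name, 0.0)
                     for name in WEIGHTS}
    score = BIAS + sum(contributions.values())
    # Rank factors by how strongly they pushed the score up or down.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score > 0, ranked

renew, why = predict_with_explanation(
    {"logins_per_week": 5, "messages_sent": 2, "support_tickets": 1}
)
print("likely to renew:", renew)
for factor, contribution in why:
    print(f"  {factor}: {contribution:+.1f}")
```

A salesperson reading the ranked factors can see, for example, that frequent logins drove the positive prediction, rather than having to accept a bare yes/no verdict.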

In the case of LinkedIn, the software was introduced in July 2021, and after months of trial and error, the system can now accurately predict which subscribers are likely to renew their membership and which are likely to cancel.

Earlier, these results were presented without explanation, which gave salespeople little guidance on which approach might persuade a subscriber to renew.

Now that each result comes with a description of the factors influencing a subscriber's choice, salespeople can work on renewals far more effectively.

LinkedIn told Reuters that its revenue went up by 8% as it secured more renewals than expected. However, it declined to put a dollar value on the gains, describing them only as sizeable.

Need For XAI

As per Reuters, U.S. consumer protection regulators, including the Federal Trade Commission, have warned repeatedly over the last two years that AI which cannot be explained could be investigated.

The European Union may also pass the Artificial Intelligence Act next year, which would lay down a set of rules for AI, including a requirement that AI-generated results be interpretable to users.

The reason behind these laws is the bias AI can absorb while learning. Put simply, the machine learning process runs on a set of data fed to the machine. The machine analyses that data over and over and accepts the patterns it finds as fact. The problem with this model is that a machine can also learn human bias as fact.
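The mechanism is easy to demonstrate: a model that simply memorises frequencies in its training data will reproduce any bias baked into that data. A toy sketch, with invented data in which a human reviewer favoured one group over another:

```python
from collections import Counter

# Invented review data in which group "A" was approved far more often
# than group "B" -- a human bias, not a real difference between groups.
training_data = (
    [("A", "approve")] * 8
    + [("A", "reject")] * 2
    + [("B", "approve")] * 2
    + [("B", "reject")] * 8
)

def train(data):
    """Learn the majority label per group; the bias becomes a 'fact'."""
    counts = {}
    for group, label in data:
        counts.setdefault(group, Counter())[label] += 1
    return {group: c.most_common(1)[0][0] for group, c in counts.items()}

model = train(training_data)
print(model)  # the learned rule simply mirrors the biased history
```

Without an explanation attached to each decision, there is no way to see that the model's rule rests entirely on group membership, which is exactly the kind of blind spot explainability is meant to expose.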

Multiple studies have demonstrated such AI bias. To reduce it, or at least be aware of it, AI needs to be explainable.

What Do Critics Have To Say?

While proponents of explainable AI argue that it can increase AI's application and effectiveness in many areas like health and sales, critics find the technology too unreliable.

As per Reuters, critics claim that the technology used to interpret AI is not yet good enough. Some AI experts also fear it may cause harm by engendering a false sense of security in AI, or by prompting design sacrifices that make predictions less accurate.

LinkedIn, however, asserts that an algorithm's integrity cannot be evaluated without understanding its thinking.

Been Kim, an AI researcher at Google, told Reuters that she sees interpretability as ultimately enabling a conversation between machines and humans. “If we truly want to enable human-machine collaboration, we need that,” she added.
