Why AI Got the Answer: Explainable AI That Shows Its Grounds

Artificial intelligence (AI) is now in its third boom. Like the human right brain, it produces answers intuitively; as a black box, however, it cannot explain the basis for its judgments. For AI to be adopted more widely in society, explainable AI is essential.

Can AI Judgments Be Explained Logically?
Deep Learning, the Trigger to AI’s Third Boom

AI evolved through its first boom (1950s to 1970s) and second boom (1980s), but teaching human experts' knowledge to machines (computers) proved extremely difficult, and AI was never put to practical use as envisioned; this "winter" continued for some time.

Then came the breakthrough of Deep Learning, which implements machine learning modeled on the neurons of the human brain and enables computers to gain knowledge from large volumes of data.
This technology triggered the present third AI boom. Modern AI based on Deep Learning defeated the world champion of Go and famous Shogi players, shocking the world by demonstrating performance far surpassing that of humans.

Critical Tasks Cannot Be Left to Black-box AI

In reality, however, such AI technology cannot be put to use in companies' operations right away.

In Go or Shogi, what matters is whether you win or lose. Even without knowing why the AI chose a particular move during a match, you can assess its value by whether it beats its opponent.

However, can we allow such black-box AI to take charge of critical tasks? Unless we can be convinced of why an AI makes a given decision, applying it to such tasks is difficult.

For example, suppose your doctor tells you, based on a blood test, MRI results, and a medical interview, that your brain has a 7 mm aneurysm at high risk of causing a subarachnoid hemorrhage if left untreated, and that an operation is urgently needed. You may accept the diagnosis, but if you are not convinced, you will ask the doctor to explain it in terms of experience, research, and recovery records.

When sophisticated AI comes to support doctors in the future, you will likewise not be convinced if a doctor simply says, without giving any basis, that an AI decided you need an immediate operation.

AI must earn society’s trust to be accepted in business and daily life.

Due partly to such factors, AI's application areas are presently limited to fields such as image and speech recognition and conversational translation, where correctness can be judged relatively easily from results alone (as with winning or losing in Go and Shogi). Large obstacles remain before applications spread throughout society.

Expectations for future AI

Explainable AI to Earn Human Trust

To begin with, AI's findings are not always correct. Many of today's AIs are like babies: they become smart only if raised well. For them to grow into useful adults, we must recognize the importance of how they are raised and of the meals (data) they are fed, not just the technology behind them. This is the reality of AI.

For example, suppose you built an AI system for credit card screening. Biased training data may bias the AI's results, disadvantaging people with certain attributes.

In one case, an IT vendor's AI chatbot that learned from user conversations was forcibly shut down after only half a day online. Many malicious conversations about racism, sexism, and violence led to unintended learning and extremely inappropriate remarks.

To build an AI application that can be used in society, its inference (thinking) process must satisfy ethical and compliance requirements. Humans must be able to understand and manage the logic or circumstances that led to the results, and to correct them if they are wrong.

Trust in AI is premised on accountability, and explainable AI provides it. Only such AI allows humans to rely on the new discoveries and insights AI generates, which in turn leads to improvements and developments, such as expanding AI's scope and putting it to more advanced use.

What explainable AI can do

Two Approaches to Explainable AI

What specific approaches achieve explainable AI? Today, two approaches are being studied worldwide.

The first is a method that adds explanatory capability to black-box AI (Deep Learning, which has high learning capability but is weak at explaining its reasons). For example, when recognizing or classifying image data, the areas given higher weight in the learning process are extracted from the neural network and presented as the reasons.
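
As a concrete illustration, here is a minimal sketch of one common technique of this kind: backpropagate the winning class score to the input pixels and treat the areas with the largest gradients as the presented reasons. The tiny model and random image are placeholder assumptions, not Fujitsu's actual method.

```python
import torch
import torch.nn as nn

# Stand-in image classifier; any differentiable model works the same way.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

image = torch.randn(1, 3, 64, 64, requires_grad=True)  # dummy input image
scores = model(image)
top_class = scores.argmax().item()

# Backpropagate the top class score; pixels with large gradient magnitude
# contributed most to the decision and can be highlighted as "reasons".
scores[0, top_class].backward()
saliency = image.grad.abs().max(dim=1)[0]  # one heat map of shape (1, 64, 64)
print(saliency.shape)
```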

The other is white-box AI, whose learning model exposes the decision-making mechanism. By tracing the process that leads to the conclusion (the calculation process), the reasons can be explained in a relatively easy-to-understand way.
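
A decision tree is a classic example of such a white-box model: its entire decision-making mechanism can be printed as rules, and every prediction can be explained by the path it followed. A minimal sketch, with illustrative data and feature names:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy training data; columns are illustrative: [age, holds_license].
X = np.array([[25, 0], [40, 1], [35, 1], [22, 0]])
y = np.array([0, 1, 1, 0])  # e.g., purchased or not

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# export_text prints the full decision mechanism as human-readable rules,
# so the path to any conclusion can be traced step by step.
print(export_text(tree, feature_names=["age", "holds_license"]))
```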

Elements of explainable AI

In addition, promising research has just begun on another attempt to achieve explainable AI: a function that explains an identification in words (e.g., "It's a cat, based on the fur, whiskers, claws, and characteristic ears").

Explainable AI lets you understand why it reached its conclusion

Deep Tensor & Knowledge Graph Give Bases for Inferences

Fujitsu has achieved many results in the world's most advanced research on Deep Learning-based AI. We have expanded Deep Learning beyond the image, speech, and handwriting recognition systems already in practical use to graph data, enabling highly accurate learning focused on time-series data and on the connections and relationships between events.

These efforts have significantly advanced Fujitsu’s R&D on unique technologies for practical use of explainable AI.

One such result is AI that humans can trust, understand, and manage, built by combining two new technologies: Deep Tensor and Knowledge Graph.

Deep Tensor evolves conventional Deep Learning by adding a tensor representation (a multi-dimensional array that generalizes the concepts of vector and matrix). It elicits new insights from graph data expressing ties between people and things; specifically, it dramatically improves Deep Learning's efficiency and reports why it produced a given inference.
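
To make the idea concrete, here is a minimal sketch of encoding graph data as a tensor; the toy network graph and its encoding are illustrative assumptions, not Fujitsu's proprietary format.

```python
import numpy as np

nodes = ["host_a", "host_b", "host_c"]   # e.g., machines in a network
edge_types = ["ssh", "http"]             # e.g., communication protocols

# A 3-way tensor indexed by (source node, destination node, edge type)
# generalizes the adjacency matrix of an ordinary graph.
graph = np.zeros((len(nodes), len(nodes), len(edge_types)))
graph[0, 1, 0] = 1.0  # host_a -> host_b over ssh
graph[1, 2, 1] = 1.0  # host_b -> host_c over http

print(graph.shape)    # (3, 3, 2): one adjacency matrix per edge type
```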

Explainable AI: Deep Tensor and Knowledge Graph

For example, if it infers from daily communication logs that a cyberattack has occurred, Deep Tensor shows the factors behind the inference, i.e., which parts of the logs (e.g., IP addresses and port numbers) imply an attack; these factors essentially serve as the reasons.

Knowledge Graph is a graph-structured knowledge base created by adding semantics to information collected from academic papers and other sources.
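
In its simplest form, such a knowledge base can be pictured as subject-predicate-object triples that can be chained into supporting evidence; the facts below are illustrative assumptions, not entries from an actual knowledge graph.

```python
# Tiny knowledge base of (subject, predicate, object) triples.
triples = [
    ("port_445", "is_used_by", "SMB"),
    ("SMB", "is_targeted_by", "WannaCry"),
    ("WannaCry", "is_a", "ransomware"),
]

def facts_about(entity, kb):
    """Collect outgoing edges for an entity; chaining such lookups
    yields the kind of supporting evidence attached to an inference."""
    return [t for t in kb if t[0] == entity]

print(facts_about("port_445", triples))
```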

Combining Deep Tensor's results with the knowledge in Knowledge Graph provides both reasons for inferences and their supporting grounds. This allows human experts to validate the truth of AI-produced results and gain new insights (awareness), creating a new style of AI use in which experts solve problems alongside AI.

Wide Learning Verifies Hypothesis Combinations

AI based on Deep Tensor and Knowledge Graph is basically an extension of Deep Learning; one issue is that it cannot make highly accurate judgments unless it is trained on large volumes of data.

Fujitsu has therefore developed a new machine learning technology for making highly accurate judgments even when training data is insufficient. This white-box technology, Wide Learning, improves judgment accuracy on scarce data by comprehensively combining multiple data items, instead of training a single data model as conventional Deep Learning does.

Unique Wide Learning technology

For example, when Wide Learning is applied to purchase-trend analysis of individual products in digital marketing, the system forms a huge number of hypotheses from combinations of data items (e.g., gender, whether the person holds a driver's license, relationship status, and age) and verifies how many hits each hypothesis scores among those who actually purchased. Hypotheses that achieve high hit rates are deemed important (knowledge chunks), and the system builds a purchase classification model from these knowledge chunks. The knowledge chunks used to judge whether a customer will purchase are listed, and by choosing a hypothesis you can confirm its details (explanation), e.g., "the person has an annual income of 5+ million yen and made an inquiry within the past month." A minimal sketch of this hypothesis enumeration follows.
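
The sketch below works under simplifying assumptions: every combination of a few illustrative data items is tried as a hypothesis, and those with a high hit rate among actual purchasers are kept as knowledge chunks.

```python
from itertools import combinations

# Illustrative customer records; fields and values are assumptions.
customers = [
    {"gender": "F", "license": True,  "inquired": True,  "purchased": True},
    {"gender": "M", "license": False, "inquired": True,  "purchased": True},
    {"gender": "F", "license": True,  "inquired": False, "purchased": False},
    {"gender": "M", "license": True,  "inquired": True,  "purchased": True},
]
conditions = [("gender", "F"), ("license", True), ("inquired", True)]

# Form hypotheses from every combination of conditions and measure the
# hit rate (share of purchasers) among the customers each one matches.
for r in range(1, len(conditions) + 1):
    for combo in combinations(conditions, r):
        hits = [c for c in customers if all(c[k] == v for k, v in combo)]
        if hits:
            rate = sum(c["purchased"] for c in hits) / len(hits)
            if rate >= 0.75:  # keep high-hit-rate hypotheses as knowledge chunks
                print(combo, f"-> hit rate {rate:.2f} over {len(hits)} customers")
```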

Example applications of Wide Learning

Far-sighted R&D for Human-AI Collaboration

Explainable AI will greatly expand AI’s use in society.

Deep Tensor combined with Knowledge Graph is already in use in several mission-critical businesses, such as health management (detecting changes in employee health from activity data to support workstyle transformation) and investment decisions in finance (predicting corporate growth from KPIs).

For example, in genomic medicine we built an AI system by training Deep Tensor on 180,000 pieces of disease-related genetic mutation data and associating, via Knowledge Graph, more than 10 billion pieces of knowledge drawn from 17 million medical articles and other sources. Medical specialists only need to review the flow of the inference logic, which has greatly cut the time from analysis to report submission (from two weeks to one day).

Wide Learning has already demonstrated benefits over conventional Deep Learning: in digital marketing, it reduced the rate at which potential customers are overlooked by 10 to 50%; in medical judgment support, it reduced the rate at which patients with a given condition are overlooked by 20 to 30%.

Going forward, we will accelerate the practical use of Wide Learning as a new machine learning approach for operations that require judgments on rare events or high transparency (e.g., preventive maintenance that detects signs of serious equipment failure, detection of fraudulent credit card transactions, detection of signs of product faults, discovery of potential customers in sales promotions, and evaluation of product concepts).

Fields for practical use of Wide Learning

With an eye on the unlimited possibilities of explainable AI, Fujitsu will advance its internationally leading AI R&D to better support customers in making judgments and to realize ideal next-generation systems, including those for human-AI collaboration.


Initiatives at Fujitsu Intelligence Technology,
the Global HQ Leading Our AI Business

Fujitsu has newly established Fujitsu Intelligence Technology (FIT) to spread its AI business worldwide. The company launched in Vancouver, Canada on November 1, 2018. FIT is Fujitsu's first overseas HQ to lead the AI business developed in Japan and elsewhere, consolidating AI-related products and services together with data and know-how, and it will formulate and execute global expansion strategies.
Specifically, FIT serves as the core of Fujitsu's AI business in the U.S., EMEIA, Asia, and Oceania regions under a global matrix system.

We chose Canada instead of the U.S. (Silicon Valley) because:

(1) The national government and the government of British Columbia promote state-of-the-art IT and actively invite IT companies.
(2) The region has many universities and labs strong in the AI and quantum fields (e.g., the University of British Columbia and University of Toronto) and many talented people.
(3) The region hosts many leading-edge IT startups, including 1QB Information Technologies Inc., a company in which Fujitsu has invested.
(4) Leading companies (e.g., Microsoft, Boeing, and Amazon) operate there, giving co-creation opportunities.
(5) Rent, taxes, and labor costs are reasonable compared with other major North American cities, offering business advantages.
(6) North America is the world's largest AI market, and companies in the region are extremely active in AI investment (17 times that of Japan).

To match this global pace, Fujitsu is abandoning the self-sufficiency principle and promoting world-class strategic planning for product development and ecosystem creation. In terms of business scale, we aim for 400 billion yen in five years (FY2018 to FY2022).
Fujitsu will tackle AI as both business and research, creating de facto solutions through co-creation and leveraging human resources and technologies.