Challenges Hindering the Full-Scale Use of AI: Talk Session on AI with a NewsPicks Professional Picker


The Fourth Industrial Revolution has advanced, and the full-scale use of artificial intelligence (AI) has also begun. When AI becomes involved in more important decision-making in business and society in the future, can we trust the decisions it makes? Based on its Human Centric concept, Fujitsu has announced the "Fujitsu Group AI Commitment," which outlines its AI ethics in recognition of their importance. This article features an interview with journalist Taro Matsumura and Fujitsu’s Naoki Kazagoshi about the current state of AI, the black box problem, and explainability.
(Interviewer: Freelance announcer Haruka Mori)

(Photo, from the left)
Mr. Taro Matsumura, Journalist, NewsPicks Professional Picker
Naoki Kazagoshi, Manager, Promotion Department, AI Business Division, Digital Business Development Unit, Fujitsu Limited

The Use of AI in Apps, 4G, and Smartphones Dramatically Changed the U.S.

-- Specifically, in what fields has the use of AI advanced?

Kazagoshi: AI has started to be used in fields such as self-driving cars, which drive themselves to a chosen destination without the involvement of a driver; medical image diagnosis, which detects diseases based on medical imaging; and speech recognition and translation, which support communication between people from different countries, transcending the barriers of language.

(Reference)
AI Translation Changes Work Style to Remove Language Barriers, Boost Productivity and Bring Out Skills
AI-Based CT Image Retrieval—Retrieving Similar Cases with an 85% Accuracy Rate in One-Sixth Diagnostic Time

Matsumura: The use of AI is also advancing in services for cellular phones and smartphones. AI performs processing behind the scenes to optimize results, provide information faster, and combine multiple pieces of information to create new information.

-- I hear you lived in Silicon Valley. Could you share some examples of the use of AI in the United States?

Matsumura: Around 2011, when I moved to the United States, its weather forecast infrastructure was roughly at the level Japan had reached decades earlier. From 2012, however, as the mobile Internet became widespread in the form of smartphones, functions and services using apps connected to AI became more familiar and easily accessible. Warnings of approaching rain clouds were delivered with nearly the same accuracy as in Japan, and through the use of AI, the U.S. quickly caught up with the excellent infrastructure that Japan had spent years building up. I witnessed dramatic changes in social infrastructure and people's lives.

AI Is Still in Its Development Phase: The World of Astro Boy and Doraemon Is Still a Future Scenario

-- What level is AI presently at?

Kazagoshi: When it comes to AI, some people in Japan may think AI can do anything, like the popular anime characters Astro Boy and Doraemon. However, that scenario is still some way off in the future; some say such a world will come true around 2045. At present, AI is still specialized in specific skills or used to make specific decisions, but significant progress is being made in each field with regard to its application.

-- What is the progress of AI development internationally?

Matsumura: In Japan, people have positive images of AI and robots, like Astro Boy and Doraemon. In the United States, however, robots are often portrayed as enemies in action films. Currently, people hold both attitudes toward these technologies: while being cautious about them, they are incorporating more and more of them because they can significantly change their lives. According to Google, it will take 20 to 30 years for AI to be able to make decisions like humans. In the future, I think we will see AI specialized in specific fields developed one after another and connected with each other to work out new decisions.

AI as an Outstanding Partner for Humanity

-- What challenges do you think AI will face in the process of expanding into more and more areas?

Kazagoshi: There is still a lot of misunderstanding about AI, and at the same time, there are also excessive expectations. I think the current challenge is to communicate the present reality of AI as clearly as possible, without disappointing people's expectations. We will not see a succession of AI systems that deprive people of jobs anytime soon. If anything, I would like people to recognize AI as an outstanding partner for humans.

For example, no matter how excellent doctors are, they cannot retain millions of academic papers in their memories. But AI can instantly give advice, like "Combine this and that to achieve a solution," by conducting an around-the-clock search. I think it would be good for humans to make decisions by enhancing their knowledge with the advice of AI.

"Black Box Problem": A New Issue to Address

-- What kind of issue is the so-called "black box problem"?

Kazagoshi: The "black box problem" refers to the difficulty in understanding how and why AI reached a certain result because AI makes judgments intuitively like the right side of the human brain.
For example, when a problem occurs during self-driving, the manufacturer will be held responsible for the problem if the situation is not clarified. I think that it is the mission of AI developers to solve the problem by properly explaining how and why AI is making certain decisions.
When making the next management decision based on a variety of numerical data, we feel unsure whether we can blindly entrust a "black box" machine to make that decision. In the clinical field, if an AI diagnoses us with a very serious illness and recommends that we take a certain medicine, we still want evidence for that diagnosis.
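
As a concrete illustration of the problem, the short sketch below is a generic Python example using the open-source scikit-learn library and its bundled breast-cancer dataset, not any of the systems discussed in this article. The trained model returns a label and a probability for a case, but nothing that explains which inputs drove that decision.

# Minimal sketch of the "black box problem": the model answers, but gives no rationale.
# Generic illustration with scikit-learn's bundled breast-cancer dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

case = X[:1]                        # one hypothetical case
print(model.predict(case))          # e.g. [1] -- a bare label
print(model.predict_proba(case))    # e.g. [[0.02 0.98]] -- a confidence, but still no "why"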

Matsumura: I think black-boxed AI processing will increase further in the future. If the accuracy of the results improves to a certain degree, suspicion will fade and the results will be accepted naturally, but if any bias or intention is embedded in the results, such doubts cannot be eliminated.

To avoid blackboxing, users and providers need to communicate with each other and share information about the mechanisms and data being used. Because users are also required to have a certain level of knowledge of what AI is, I expect quite a difficult task awaits them.

Achieving Both Accuracy and Explainability in AI Decision-Making

-- Please tell us how Fujitsu is working on the black box problem.

Kazagoshi: Fujitsu is working on the development of "Explainable AI," which lets humans understand how and why AI provided a specific answer. Fujitsu's proprietary machine learning technology achieves both judgment accuracy and explainability by using explainable algorithms as a basis and further improving them. With this technology, humans can trust the new findings and insights derived from AI, which will expand the scope of application and promote the advanced use of AI.
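
Fujitsu's specific algorithms are not described here, but the general idea of pairing a prediction with its reasons can be sketched with a simple interpretable model. The example below is a generic logistic-regression illustration, not Fujitsu's proprietary technology: each prediction is reported together with the features that pushed it toward that answer.

# Minimal sketch of "explainable" output: report, for one case, which features
# contributed most to the decision. Generic example, not Fujitsu's technology.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X, y, names = data.data, data.target, data.feature_names

pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)
scaler = pipe.named_steps["standardscaler"]
clf = pipe.named_steps["logisticregression"]

case = X[:1]
print("prediction:", pipe.predict(case)[0])

# Per-feature contribution to this one decision: coefficient x standardized value.
contrib = clf.coef_[0] * scaler.transform(case)[0]
for i in np.argsort(np.abs(contrib))[::-1][:5]:
    print(f"{names[i]:>25s}: {contrib[i]:+.2f}")

An interpretable baseline like this typically trades some accuracy for transparency; the aim of work such as Fujitsu's Explainable AI, as described above, is to narrow that trade-off so that high-accuracy models can also report the grounds for their answers.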

Why Is It Important to Avoid Blackboxing?

-- Some people say blackboxing is not a problem. Why are there two opposing ideas about blackboxing?

Matsumura: I think the point of discussion is: "Should we not use AI (for decision making) unless all of its processes are made transparent?" For example, even if you do not understand the mechanism of an automatic car, you can drive it as long as you know safe driving procedures. Some may say that as long as mature AI makes accurate predictions and we use it properly, we do not need to know what is happening inside it. But I think it is very important to make the decision-making mechanism transparent when it comes to important matters, such as accidents around us and people's futures.

From now on, the transparency of all AI systems will of course be important, but above all, it is vital for providers to clearly understand who the AI technology is for. I think Fujitsu's initiative is about clarifying the users for whom each AI service is intended. In that context, much is expected from it.

-- I see, thank you very much.