This second part of the article looks further into the adoption of principles and guidelines governing the development and use of AI.
In part 1, the article introduced principles and guidelines adopted by the OECD, the Group of 20, and the Japanese government. Interestingly, a common theme running through these recommendations is the idea that "AI systems must be human-centric". This advances the notion that AI should be developed and implemented with human ethics in mind.
Human-Centric AI: Emphasizing an Ethical Approach
One thing that nearly all the AI principles being adopted by various governments and international organizations have in common is the need for AI systems to be human-centric. This calls for AI to be developed with strong ethics in mind.
When AI systems make decisions on behalf of humans, those decisions can be ethically undesirable and produce unpleasant outcomes, including psychological harm, even when they comply with a society's laws. In such cases, the AI systems cannot be called human-centric.
Take chatbots, for example. This automated customer-interaction tool is now widely used but needs to become more human-centric: the answers the bots provide are sometimes not ethically appropriate. For an AI system to win the trust of society, it must be as ethical as a human being could be.
Based on this idea, there is a movement to emphasize ethics as a key part of the principles for AI. For example, in April 2019, the European Commission announced “Ethics Guidelines for Trustworthy AI,” which included seven recommendations for ensuring trustworthy AI. Under these guidelines, development, deployment and use of AI must have:
- Human agency and oversight
- Technical robustness and safety
- Privacy and data governance
- Transparency
- Diversity, non-discrimination and fairness
- Societal and environmental well-being
- Accountability
AI systems that adhere to high ethical standards will increase trust in the technology. Without this, society will not accept the technology. The European Commission ethical guidelines define trustworthy AI as development and deployments that are lawful, ethical and robust.
However, unlike laws, ethical principles often are not written down, and they change over time and vary with geographic and cultural factors; ethics differ from one country or region to another. Therefore, for an AI system to be trusted, it is not enough to ensure the trustworthiness of the data the system uses; communities must also closely monitor whether ethical guidelines are being followed.
Supporting Human Ethics in Development of AI
When implementing AI in essential industries, it is necessary to meet the ethical requirements of the communities that use the technology. An example is the strategic joint research project that Fujitsu Laboratories of Europe and Fujitsu EMEIA undertook with the San Carlos Clinic Hospital Biomedical Research Foundation (FIBA CSC) in Spain in June 2015. The purpose of this project was to use an AI-based system to extract and visualize insights from medical data to help care for patients with mental health disorders.

The proof-of-concept project, carried out with psychiatric specialists at San Carlos Hospital, lasted six months. It collected information on more than 36,000 anonymized patients, which an AI system then analyzed, taking into account the patients' past diagnoses and their potential risks of suicide, drug abuse and alcohol addiction. The project identified existing problems in the clinic and enabled a highly accurate risk assessment: doctors could identify and assess life-threatening risks with over 85% greater accuracy than before. This success was possible because the developer built the ethical values of the medical community into the design of the AI system.
AI helps enable faster and more precise clinical decision-making (IdISSC)
Universities and other organizations in Europe and the U.S. are conducting research on the ethical aspects of AI, and in some cases AI technology providers sponsor such research. Fujitsu, for example, held panel discussions on this theme with experts in Silicon Valley in October 2019 and in London in July 2018.
Fujitsu innovation gathering in July 2019 that focused on AI and ethics, including a panel discussion titled, "Towards Trustworthy AI: Transparency and Auditability"
Fujitsu encourages interested parties to join the discussion on AI and ethics. The topic touches on fundamental questions about what a society should strive for, how it can use AI to achieve it, and what are the consequences if AI is misused.
People are discussing not only how to minimize the negative effects of AI but also how to maximize its benefits. For example, AI has the potential to address social ills such as discrimination. Thinking deeply about AI and ethics can also help us broaden the discussion to who we should be as human beings.
Both Developers and Users Must be Committed to AI Ethics
The evolution of AI technology is remarkable, and more applications for the technology are emerging. Label Gear, which Fujitsu Laboratories of Europe released in July 2019, is an example. The technology helps companies in manufacturing, infrastructure maintenance and health care build applications that automate manual inspections. Combining a new GUI with AI-assisted technology enables efficient detection of defects and abnormal conditions during inspections, even when limited training data is available.

Automation of manual inspections and monitoring is expected to help companies streamline operations. Traditionally, creating and managing large volumes of data has meant huge labor costs; if this becomes easier, it will open the way to applying AI in more business fields. However, the use of new AI technologies such as Label Gear requires ethical consideration, because the solutions the technology produces are applied to jobs that humans now perform.
There is also hope that the focus on guidelines promoting responsible development and deployment of AI by governments and international organizations could prompt companies to announce their intention to follow these principles, just as they did to ease concerns about privacy and data security. When these became high-profile areas of concern for the general public, many companies drafted their own privacy and security policies and opened them to the public. Similarly, if AI principles become widely accepted, companies will be pressed to disclose their AI policies and demonstrate that they are following them.
For example, Fujitsu announced the "Fujitsu Group AI Commitment" in March 2019 and then followed in September by establishing the "Fujitsu Group AI Ethics External Advisory Committee." The committee will offer the company objective opinions and insights on its AI commitment and will encourage dialogue about the technology with stakeholders in society. Fujitsu hopes that its AI commitment will serve as a model for other companies to follow. According to Fujitsu’s policies, the company pledges to:
1. Provide value to customers and society with AI:
Fujitsu and its group companies promote co-creation with customers by using emerging technologies. We work together with customers to help them create a prosperous tomorrow. At the same time, we consider the impact that end-users and society face from constantly evolving AI.
2. Strive for a human-centric AI:
Fujitsu advocates “human-centric AI,” treating people not as tools but supporting their desire to seek prosperity and contribute to society. As part of this effort, Fujitsu will develop trustworthy AI by emphasizing fairness and safety to prevent discrimination and harm from use of the technology.
3. Strive for a sustainable society with AI:
Fujitsu is strongly committed to the UN’s Sustainable Development Goals (SDGs) and to helping solve a range of social and environmental issues, and thus seeks to contribute to building a better society and fostering the long-term business success of our customers.
4. Strive for an AI that respects and supports human decision-making:
Fujitsu believes it is crucial to protect the right of humans to make their own, informed decisions, including when they base decisions on results generated by AI. To this end, Fujitsu will strive for transparency as we design and develop AI technology that can explain why it makes specific recommendations.
5. Take corporate responsibility and emphasize transparency and accountability for AI:
As an information and communication technology provider responsible for the reliability of infrastructure systems, Fujitsu understands it is crucial to avoid negative consequences caused by AI. To this end, Fujitsu commits to leverage its accumulated experience and know-how to develop and constantly improve the reliability and trustworthiness of its AI technology. Moreover, in the unlikely event of negative consequences occurring from use of AI, Fujitsu will seek to take appropriate measures to track the causes and effects of such an occurrence.
Fujitsu Group AI Commitment, March 2019.
Dr. Adel Rouz
CEO, Fujitsu Laboratories of Europe
Dr. Adel Rouz is the Chief Executive Officer and a Board Member of Fujitsu Laboratories of Europe Ltd., a leading research organization based in EMEIA.
His career within Fujitsu spans more than twenty-eight years, during which he has made many contributions to evolving the company’s global and regional research and development strategies. As part of his role, he has worked closely with external research institutions to encourage close collaboration and mutual innovation.
He joined Fujitsu in 1991 and moved into the research field within the organisation in 1996. He maintains a hands-on approach to R&D, taking the lead on a number of high-profile projects, focusing on initiatives in support of Fujitsu’s human-centric intelligent society vision.
Today, Fujitsu Laboratories of Europe is considered one of the leading research and development centers in EMEIA, with a wide range of R&D and co-creation activities. These span Artificial Intelligence, Trusted Technologies, AI Ethics, Blockchain, Cybersecurity, Approximate Computing and Digital Annealer applications, focusing on cutting-edge innovations that address real-world challenges, underpinned by ethical concepts.
Dr. Rouz is leading key initiatives within Fujitsu Laboratories of Europe across activities including Trusted Technologies and AI research, spearheading the evolution of future technologies to solve social challenges.
In addition, he is a member of the Industrial Advisory Board at the University of Surrey, as well as a Member of the Strategy Advisory Board of the 5G Innovation Centre. He is also a Board Member and Chair of Operations of The Park Federation Academy Trust.
Nikkei BP Intelligence Group Clean Tech Laboratory, Chief Research Officer
Mr. Hayashi joined Nikkei BP after graduating from Tohoku University's School of Engineering in 1985. As a reporter and editor for outlets such as Nikkei Datapro, Nikkei Communications, and Nikkei Network, he has covered and written articles on topics such as cutting-edge communications and data processing technologies as well as standardization and productization trends.

He successively served as chief editor of Nikkei BYTE from 2002, Nikkei Network from 2005, and Nikkei Communications from 2007. In January 2014, he became Chief Director of Overseas Operations after acting as publisher for magazines including ITpro, Nikkei Systems, Tech-On!, Nikkei Electronics, Nikkei Monozukuri, and Nikkei Automotive. He has served in his present post since September 2015.

Since August 2016, Mr. Hayashi has been writing a regular column, "Creating the Future with Automated Driving," in the Nikkei Digital Edition. He also published the "Overview of International Automated Driving Development Projects" in December 2016 and the "Overview of International Automated Driving/Connected Cars Development" in December 2017, and has served as a CEATEC Award judge since 2011.