Without trust, we will never see the full potential of AI

Artificial intelligence isn’t new. In fact, the term was coined in 1956, and scientists and technology experts have been exploring the field ever since.

While AI is already used in many business functions, its adoption is beginning to slow. Our recent study found that 52% of business leaders do not trust AI because the data behind it may be incorrect or biased. Business leaders want AI to help them make better decisions, but the figures show they are holding back due to a lack of trust in the technology.

This piece will look at how businesses and vendors can work together to create trustworthy AI that takes into account the best interests of customers and the wider public – rather than aimlessly investing in the technology.

How AI has transformed society

Popular culture often suggests that AI will replace people, but that is not necessarily the case. The World Economic Forum found that robots will create more jobs than they displace: the 75 million jobs that automation is expected to displace will be offset by 133 million new ones – showing how jobs will ultimately evolve in the era of AI.

There has been a noticeable trend towards business transformation and expansion by using AI to leverage data and create new opportunities. However, many businesses struggle to adopt AI into their practices, largely due to mistrust of the technology. As a result, many analytics projects fail to deliver a good return on investment.

We are in a phase of transition, where AI adoption in business is a real possibility. Yet, it will not be possible until organisations, governments and research experts can trust the technology to benefit all of society.

It’s not too late to build trust

There’s an old saying that trust takes years to build, seconds to break and forever to repair. There have long been doubts about the impact AI will have on society – simply because people do not trust a robot to control aspects of life.

To develop safe and secure AI at Fujitsu, we take an approach with human-centric innovation at its core: the AI that we develop aims to empower people with advanced technology, not to displace them. AI should be there to assist humans, to make their jobs easier and ultimately to make the workforce more productive.

For example, we used Fujitsu’s advanced AI technologies to help doctors at San Carlos Clinical Hospital in Madrid analyse sensitive patient data in seconds to make faster diagnostic decisions on issues such as mental health and alcohol abuse. Ultimately, it enables quicker patient care and can also uncover trends that spot medical problems, often invisible to the naked eye.

By developing a code of ethical guidelines, organisations can ensure that trust and AI are synonymous. We created our ‘Fujitsu Group AI Commitment’ to build responsibility into our work through five important pillars: providing value to customers; human-centric AI; striving for a sustainable society through AI; AI that respects and supports decision making; emphasising transparency and accountability for the technology.

But it’s not something that just one organisation should be doing – it must be a joint effort to ensure best practices are put in place. We recently helped to drive these collaborations by becoming a founder partner of the AI4People global forum, creating a platform for different stakeholders involved in shaping AI to discuss its social and ethical implications.

Not only this, but we’re also closely engaged with the recently established EU AI Alliance, formed directly to contribute to the EU debate on AI and ensure the creation of good policies. It’s these types of initiatives that will be vital in bringing together governments, the media and everyone working in the industry to develop AI for good.

Will the true potential of AI ever be unleashed?

There is a worry that the adoption of AI may slow down as business leaders continue to research, analyse and try to understand the technology; a fall in trust will inevitably be followed by a decline in what AI can achieve.

Organisations are responsible for adopting safe and ethical AI, and they should not wait for external forces to prompt them. Governments, businesses, and developers must work together to address concerns from workers and citizens and show that AI is being created with their interests and needs in mind, and as a force for good.

AI has the capacity to dramatically improve the quality of everyday life, potentially revolutionising business, driving economic growth and empowering people globally. And, provided it is created with these interests in mind, its many different applications should ultimately be celebrated.