How AI can live up to expectations: part 2


We’ve all felt underwhelmed after setting our expectations too high. Certainly, the opposite is also true: some of the most enjoyable films, concerts and parties I can recall were those I approached with realistic expectations.

In my previous blog we looked at how to get AI projects underway so they stand a realistic chance of meeting expectations. As a starting point, we looked at how to ensure the project is focused on something everyone agrees is valuable.

Time to value: how can you make AI pay back faster?

A justifiable expectation of value is just the starting point. The time it takes to unlock that value is equally important in your decision-making. We hear that would-be AI adopters don’t want to get trapped in a seemingly endless series of pilots, eating up time, money, and management attention. Fortunately, to accelerate change, there are five powerful gear changes available:

  1. First gear is a solutions approach that de-emphasizes novelty and foregrounds reliable results. Rather than generic, DIY AI platforms, focus on ready-made solutions for vertical markets that use pretrained models.
  2. Shifting up into second, there are proven process-centric approaches you can apply to ideation, as well as to designing, building, deploying, and managing innovative solutions. Adopting the best examples will save you considerable time, as will co-creation methodologies that recognize customers’ domain expertise, place their business at the center, and ensure market relevance with the ability to scale rapidly into additional markets.
  3. Into third now, and partner evaluation should include whether AI solutions, tools, knowledge, and techniques are being shared globally in a process of ‘continuous enhancement’, enabling faster adoption and integration of technology and technical know-how to address business challenges and deliver positive outcomes.
  4. Fourth gear brings more reliable, faster and deeper integration of the AI solution into what already exists. Without integration into wider business operations, AI will never deliver its full potential. A study commissioned by Fujitsu and conducted by Forrester Consulting1 uncovered integration as the most pressing AI data-related issue facing business leaders in the next 12 months.
  5. Reaching for fifth gear involves making AI more powerful and intelligent through non-experimental improvements to the technology stack that deliver more speed and handle more complexity. This can mean special-purpose hardware. It can also mean reliable access to converging technologies that are now ready for prime time and can boost AI performance to even higher levels. Examples include distributed ledger technology to automate complex ecosystem transactions, and algorithms that work with AI to optimize entire business processes.

Trust: how do we make AI explainable?

When Forrester Consulting asked global business decision-makers, it found the majority agreed there was a pressing need for ethical AI – you could also call this ‘trustworthy’ AI. More than four-in-five executives interviewed (83%) believe it’s important or critically important to create systems that are ethical, understandable, and legally compliant. A similar number (81%) feel the same way about transparent, explainable, and provable AI models, with just under three-quarters (73%) recognizing the need to test for bias in data, models, and human use of algorithms.

The lack of clarity around AI is worsened by the black-box model. Data goes in, and a decision comes out, but the system can’t explain how it reached its answer. How, then, is it possible to trust AI when you can’t explain the basis on which it made its decisions?

In areas like healthcare or financial services, it’s vital to be able to put AI decisions under scrutiny. For instance, it’s now possible to use AI to detect cancer from a scan. But without understanding the machine’s thought process, and without being able to place full faith in the way the AI was trained, it’s difficult for doctors to trust the diagnosis, let alone be able to deal with the consequences if the diagnosis is actually wrong.

There’s a lot of work going on to address these concerns, with an increasing number of studies that aim to make it possible to explain the judgments made by AI. There’s also been an increase in the number of presentations at well-known conferences such as NeurIPS and ICML regarding studies on this technology, which has become known as Explainable AI (or XAI).
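To make the idea of XAI concrete, here is a minimal sketch of one widely used technique: permutation feature importance. The dataset and model are purely illustrative (scikit-learn’s built-in breast cancer dataset and a random forest), but the principle is general: shuffle one input feature at a time and measure how much the model’s accuracy drops, revealing which inputs the black box actually relies on.

```python
# Minimal sketch of permutation feature importance, one basic XAI technique.
# Dataset and model are illustrative, not a real clinical system.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
baseline = model.score(X_te, y_te)  # accuracy with all features intact

rng = np.random.default_rng(0)
importances = []
for j in range(X_te.shape[1]):
    X_perm = X_te.copy()
    rng.shuffle(X_perm[:, j])  # break the link between feature j and the label
    # the larger the accuracy drop, the more the model depends on feature j
    importances.append(baseline - model.score(X_perm, y_te))

top = int(np.argmax(importances))
print(f"baseline accuracy: {baseline:.3f}, most influential feature index: {top}")
```

In a setting like the cancer-screening example above, a report like this gives clinicians at least a partial answer to “what was the model looking at?”, which is a prerequisite for trusting its output.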

Technological explanations are one side of the coin. But they won’t generate genuine trust in AI across the general population without an underlying set of ethical principles. And there’s a lot of progress going on here too, with most serious vendors now aligned to high-level ethical frameworks such as the AI4People initiative, Europe’s first global forum on the social impacts of Artificial Intelligence (AI). AI4People aims to bring together the key players involved in shaping the new applications of AI, including the European Commission, the European Parliament, civil society organizations, industry, and the media.

How to deliver AI projects when specialists are so hard to find

Skills availability, cost, and retention are perennial top concerns of management, and in the world of AI this is even more pronounced. The danger is that companies will choose a response which fails to address the real needs of the business – one that only serves to heighten the sense of anti-climax. The decision to use an “off-the-shelf” AI will lead to almost inevitable disappointment when it becomes apparent that the AI isn’t suited to, or can’t be adequately adapted to, the actual task it was required for. Given the pressure on skills, the opposite response – starting from scratch – is obviously not a sensible option either.

As the new Forrester study shows, unlocking the true potential of AI requires fundamental changes, long-term thinking, commitment, and a high degree of digital sophistication. It also shows that most customer organizations now recognize they don’t currently have the necessary skills – nor do they expect to be able to recruit them – so they are looking externally to fill the gaps. Some 60% say they want to work with a mix of established firms and startups, tapping into experience as well as innovation. The study concludes that leaders are seeking comprehensive help from best-of-breed solution providers with experience in implementing AI. It advocates choosing a partner or partners who can help drive business outcomes, in addition to technology implementation.

In summary, there are two fundamental issues that have the potential to derail AI projects: the IT skills scarcity, and trust. Ultimately, I believe that unless the question of trust is properly addressed, an AI project is more likely to end in failure. The skills challenge, on the other hand, can be overcome by drawing on suitably skilled AI resources from external suppliers.

What’s your view on this? Have you encountered trust issues with AI implementations? Perhaps you’re skeptical AI will ever deliver something meaningful for the business. Maybe you’ve already accepted AI, or you’re holding back due to regulatory reticence. Or could it be that you’ve had the opposite experience? We’d love to hear about projects you’ve been involved with where the results were beyond expectations, and to find out the reasons why. Either way, contact me to discuss where AI is heading.