If 2018 was the year the general public sat up and took notice of AI and its possible impacts on society, then 2019 will be the year when attention turns to focus on the layers beneath the surface. What’s actually going on in a “decision” made by an AI system, how does it connect to the wider environment – IT and people – around it, and how are we actually going to get AI systems built at a time when we are facing acute skills shortages?
How did you reach that decision?
Decision-making by AI is a potentially thorny question. It turns out that the technology is often significantly better at it than we humans are, but often in ways we don’t fully understand. AlphaGo Zero, an application built by AI developer DeepMind to play the strategy game Go, beat the previous version – which had already defeated 18-time world champion Lee Se-dol – by an overwhelming 100 games to nil. The new application had no database of previous games and had taught itself from first principles in just three days of self-play. Observers said that AlphaGo Zero made moves that were inexplicable even to the subtlest human players.
Even if the outcomes with AI are good, the underlying ethics need careful scrutiny. For example, AI algorithms sometimes learn from data based on human decisions that might not always be fair, but are perhaps based on emotion: something incredibly difficult to teach a machine. Being able to demonstrate the fairness of automated decisions taken by AI systems will become increasingly important in 2019 and beyond, and is already being discussed with regard to self-driving cars choosing between two inevitable crash scenarios. “Algorithm accountability” is especially significant where legislation like GDPR and other data regulations are involved, and the ability to understand why an algorithm is taking certain decisions will be key for highly regulated sectors like banking or health. And don’t even get me started on the legal side of things!
The good news is that Deep Learning and knowledge graphs are now gaining momentum, and we are seeing Machine Learning algorithms such as linear regression, which have mostly been in use until now, being replaced by neural networks. I say this is good news because knowledge graphs show us a way to explain how AI systems behave and make decisions, taking us away from the “black box” and giving businesses the tools and understanding with which to respond to new regulatory pressures.
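To make the explainability point concrete, here is a minimal, stdlib-only sketch of tracing a decision through a tiny knowledge graph. The entities, relations and the loan scenario are invented purely for illustration; a real system would use a proper graph store and ontology.

```python
# Hypothetical mini knowledge graph: each node has outgoing (relation, target)
# edges. All names here are illustrative assumptions, not a real ontology.
edges = {
    "applicant_42": [("has_feature", "high_debt_ratio")],
    "high_debt_ratio": [("indicates", "elevated_risk")],
    "elevated_risk": [("leads_to", "loan_denied")],
}

def explain(start, goal):
    """Walk the graph from start to goal, recording each reasoning hop."""
    node, hops = start, []
    while node != goal:
        relation, nxt = edges[node][0]  # one outgoing edge per node here
        hops.append(f"{node} --{relation}--> {nxt}")
        node = nxt
    return hops

for hop in explain("applicant_42", "loan_denied"):
    print(hop)
```

Unlike a black-box score, the output is a human-readable chain of hops linking the input to the decision – exactly the kind of audit trail regulators are starting to ask for.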
Getting AI in the mix
In 2019 we will move decisively away from “hey, look at me” AI applications towards a more humdrum world in which AI just happens to be a part of the solution. The results will still be amazing, but we won’t be so blinded by the novelty.
One of the ways this will happen is by embedding algorithms and intelligence in existing business processes. When combined with Robotic Process Automation, analytics and AI will play an important role in improving daily operations through simplicity, automation, speed, lower cost and better service – a long list of major potential benefits.
Taking this down a level, we’ll see the spread of event-driven AI architectures, where streaming pipelines that ingest and process data, evaluate and score predictions, make decisions and activate processes will be the new normal.
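The stages of such a pipeline can be sketched with plain Python generators. The sensor readings, the scoring rule and the alert threshold below are assumptions for illustration only; in production the stages would sit on a streaming platform and the scoring step would call a trained model.

```python
def ingest(raw_events):
    """Parse raw 'device,reading' strings into (device_id, reading) tuples."""
    for line in raw_events:
        device_id, reading = line.split(",")
        yield device_id, float(reading)

def score(events):
    """Attach a toy anomaly score; a real system would invoke a model here."""
    for device_id, reading in events:
        yield device_id, reading, reading / 100.0

def decide(scored, threshold=0.8):
    """Activate a downstream process whenever the score crosses the threshold."""
    for device_id, reading, s in scored:
        if s > threshold:
            yield f"alert:{device_id}"

# Events flow through the stages one at a time, as they arrive.
stream = ["pump-1,42.0", "pump-2,95.0", "pump-3,10.0"]
actions = list(decide(score(ingest(stream))))
print(actions)  # ['alert:pump-2']
```

Because each stage consumes and yields one event at a time, nothing waits for a batch to fill up – the “decision” fires the moment the triggering event arrives, which is the essence of the event-driven pattern.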
You might expect this all to be taking place in the cloud, in high-performance on-premises environments or, even more plausibly, via a hybrid blend of both. But thanks to the communication speeds of mobile 5G networks, which will start to roll out in 2019, we can also expect to see analytics systems running at the network edge on IoT devices, especially in manufacturing and utilities.
Jumping the analytics skills gap
For these predictions to come true, organizations will need access to the skills required to implement what are clearly advanced technologies.
Companies will increase investment in tools that help business analysts and data scientists make use of advanced analytics techniques, without being technical experts. For example, we will start seeing Business Intelligence (BI) departments using chatbots to open the door to analytics for non-technical users. This will mean interacting with and receiving answers from BI systems via conversation rather than predetermined report formats, which we predict will start to fall out of use.
Co-creation is another pathway to overcoming skills gaps, as applying relevant domain experience is a key differentiator in the performance of a data scientist. Anyone can build a recommendation engine. The question is, who can build the best one for a specific use case – for a retailer, for example? Thanks to the sharing economy, finding the right skills and experience to solve a specific problem through co-creation will become easier.
I think you’ll agree that providing advanced analytics for customers means taking into account a wide range of factors, some potentially contradictory, such as data management policies, scalability, environment heterogeneity and many others. For that to be in any way possible – and certainly for it to deliver commercial value – requires data analytics service providers who can deliver end-to-end solutions.
If you are considering the use of analytics and/or AI in 2019 and can see how the ideas I’ve outlined here could be relevant, I’d be very happy to explore this with you in more detail. You can contact me: