
Will your products be swept away in the AI reliability revolution?


If you thought that modern product performance was already pretty good – then hold on to your hat – there’s a reliability revolution coming. And if you are not one of the prime movers here, then you risk being swept away, along with the rest of the old order.

Artificial Intelligence is the force driving this change. It is unleashing predictive maintenance capabilities that will transform the performance of manufactured goods.

In fact, this revolution is already happening.

Among the recent AI projects we’ve worked on at Fujitsu is one for the Spanish bank BBVA, designed to predict and prevent outages in ATMs.

And to show how quickly and widely this is going to spread, we’ve been able to apply the same AI model to look after gas turbine maintenance at the power plants of a major Russian energy company.

If one AI model can reduce downtime for both cash machines and gas turbines, then there is little to stop this predictive maintenance juggernaut from sweeping through your industry or vertical too.
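To make the idea concrete, here is a minimal sketch of the kind of predictive maintenance logic involved – not the model we built for BBVA or the energy company, and with purely hypothetical sensor features and thresholds: train an anomaly detector on telemetry from healthy equipment, then flag readings that drift away from that baseline so maintenance can be scheduled before a failure.

```python
# A minimal, hypothetical sketch of predictive maintenance via anomaly
# detection, not an actual Fujitsu model: learn what "healthy" telemetry
# looks like, then flag readings that drift away from that baseline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in for historical readings from healthy machines
# (e.g. temperature, vibration, cycle time per sample).
healthy = rng.normal(loc=[70.0, 0.2, 1.5], scale=[2.0, 0.05, 0.1], size=(5000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(healthy)

# New readings streaming in from the field: the last one drifts out of range.
incoming = np.array([
    [70.5, 0.21, 1.48],
    [69.8, 0.19, 1.52],
    [83.0, 0.55, 2.10],
])

# predict() returns -1 for anomalous readings and 1 for normal ones.
for reading, label in zip(incoming, detector.predict(incoming)):
    if label == -1:
        print("Schedule inspection, reading looks anomalous:", reading)
```

The same pattern transfers across domains because only the input telemetry changes, which is precisely why one model family can serve both ATMs and turbines.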

This means major disruption for incumbent players: no matter how good your current reputation for reliability, if you aren’t part of the revolution you will lose market share as lower-cost competitors eat into your markets.

Collaboration is the only way forward

However, knowing where to start your own revolution might not be straightforward. When the challenge of change is this profound, perhaps the most difficult thing to do is imagine what the future could look like.

That’s why Fujitsu has just completed an AI demonstration capability (it works remotely too, so you can participate wherever you are) designed to get our customers’ creative juices flowing.

It’s also a means of stoking collaboration. One of the bottlenecks with AI project development is that skills are in short supply.

It’s hardly unique to AI – in the wider realm of IT, it has been estimated there is a shortage of 500,000 specialists in the EU alone – but this is a particularly intense issue for AI projects.

One potential remedy, that of calling in a single supplier to solve the issue, doesn’t work either – because the key AI technologies are not all concentrated in one vendor. And when it comes to the job that really needs doing, only the user organization ever really knows the actual story.

The only way forward is effective collaboration across organizational boundaries. Not all vendors are comfortable with that, but Fujitsu is a leading exponent of co-creation.

It’s a fundamental value that drives the company, and this demonstrator shows that ethos in action: its flexibility allowed us to run, for example, a new containerized service for industrial customers, developed collaboratively with Altran, the engineering research and development services arm recently acquired by Capgemini.

AI made easy – running edge and core AI workloads as containerized cloud services

Let me give you an example of the demonstrator, the DDTS platform, running an augmented reality application.

The core of this application is an AI-powered digital twin – in this case, focused on the ability to robotically control a device in one location from another using Augmented Reality. It exemplifies our Data-Driven Transformation Strategy (DDTS).

This positions data at the heart of digital transformation within a hybrid cloud, edge-to-core-to-edge model. In the demonstrator, data-driven AI models are handled on the platform at the core.

Two-way communication with devices at the edge is managed via Fujitsu INTELLIEDGE, and the whole thing is kept humming by a software stack managed by container orchestration built on Red Hat OpenShift, the de facto standard.
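To illustrate the pattern at the core, here is a minimal sketch – not the DDTS codebase – of how an AI model is typically wrapped as a small HTTP service so it can be packaged into a container image and run under an orchestrator such as OpenShift. The endpoint names, model file, and feature layout below are all hypothetical.

```python
# A hypothetical sketch of serving an AI model as a containerized service.
# Endpoint names, the model artifact, and the input format are assumptions.
from flask import Flask, jsonify, request
import joblib

app = Flask(__name__)
model = joblib.load("model.joblib")  # hypothetical pre-trained model artifact

@app.route("/healthz")
def healthz():
    # Liveness/readiness probe target for the container orchestrator.
    return "ok"

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [[70.5, 0.21, 1.48], ...]}
    features = request.get_json()["features"]
    return jsonify({"predictions": model.predict(features).tolist()})

if __name__ == "__main__":
    # Inside a container this would normally sit behind a production WSGI server.
    app.run(host="0.0.0.0", port=8080)
```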

You’d perhaps expect all that, so let me focus on the aspect of containerization. With any AI model, data is the bottleneck. Containers make it easy to set up an environment for developers very quickly as well as allowing access to the data without transferring huge amounts of information across the globe.

That becomes super relevant when, for example, the answer to skills shortages is offshoring or when legal requirements prevent data from leaving a company, location, or country.

The demonstrator also highlights how customers can leverage containers to scale up and down with an optimum blend of infrastructure and cloud capabilities, according to their needs at any moment in time.

This allows them to quickly and dynamically handle unexpectedly high workloads on-demand using cloud services. By combining a containerized platform with cloud-like billing such as Fujitsu uSCALE, customers no longer need to own their hardware. Instead, they pay per use, usually at a much lower cost than comparable cloud services.
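As a rough illustration of that elasticity – assuming a Kubernetes/OpenShift cluster and the official kubernetes Python client, with a hypothetical deployment name and namespace – scaling an inference service up for a spike and back down again can be as simple as patching its replica count. In practice an autoscaler would usually handle this automatically.

```python
# A minimal sketch of scaling a containerized AI service up and down.
# "ai-inference" and "ddts-demo" are hypothetical names, not real resources.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
apps = client.AppsV1Api()

def set_replicas(name: str, namespace: str, replicas: int) -> None:
    # Patch only the scale subresource of the Deployment.
    apps.patch_namespaced_deployment_scale(
        name=name,
        namespace=namespace,
        body={"spec": {"replicas": replicas}},
    )

set_replicas("ai-inference", "ddts-demo", replicas=10)  # burst capacity
# ... workload handled ...
set_replicas("ai-inference", "ddts-demo", replicas=2)   # back to baseline
```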

Build complex capabilities – quickly

The platform logic allows users to perform highly complex operations very easily. For example, they could replace the movements of a human operator with avatar kinematics, choosing between different avatars based on gender, age, or other characteristics.

They can also bring a hologram of the virtual operator into the space where the AR user is working, operating the robotic arm as if in the same room. And, because the robot recognizes and follows objects using AI services deployed on the DDTL Cluster, the VR and AR operators can instruct the robotic arm to follow a specific object.
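For readers who want a feel for the object-following behaviour, here is a minimal sketch of that control loop. The vision and robot-arm calls are hypothetical stand-ins, not a real Fujitsu or DDTS API: each cycle, the AI service reports where the target object is, and the arm moves a small proportional step toward it.

```python
# A hypothetical sketch of an object-following loop; every helper below is a
# stand-in to be replaced by the deployed AI services and robot controller.
import time

def detect_object(frame, label):
    """Hypothetical AI vision call: return (x, y, z) of the object, or None."""
    raise NotImplementedError("replace with the deployed detection service")

def current_arm_position():
    """Hypothetical robot-arm query: return the gripper's (x, y, z)."""
    raise NotImplementedError("replace with the robot controller interface")

def move_arm_to(position):
    """Hypothetical robot-arm command: move the gripper to (x, y, z)."""
    raise NotImplementedError("replace with the robot controller interface")

def follow_object(camera, label="workpiece", gain=0.2, hz=10):
    # Simple proportional controller: close a fraction of the gap each cycle.
    while True:
        target = detect_object(camera.read(), label)
        if target is not None:
            arm = current_arm_position()
            step = [a + gain * (t - a) for a, t in zip(arm, target)]
            move_arm_to(step)
        time.sleep(1.0 / hz)
```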

Other capabilities include voice over IP for remote communication; a pick-and-place system to control the robot arm with six degrees of freedom, including moves and syncs from the VR or AR operators; and robot intelligence and VR-AR-ROBOT algorithms that create the digital twin deployed on the edge cluster.

How to onboard AI quickly and with low risk

This is a cool tech demo, but in some ways, the application is not the point. What we are demonstrating is the ability to run AI workloads at the edge, transfer them to the core and make them available as cloud services, payable per-use if that’s a requirement.

This is what will allow companies to onboard AI into their products with minimal time and risk. In fact, we are currently working with a customer who only needs this sort of capability in two-week blocks and cannot justify the capital expenditure of incorporating AI capabilities into the data center. In the past, this would have been a major stalling point. Today, no problem.

Only you will know how these ideas might apply in your business. As I hope to have demonstrated here, we have the know-how and the partners when it comes to taking your ideas and turning them into business value.

It's not important that you have those ideas ready to go right now. The point about co-creation is that the right solution evolves through discussion. All I have described here is portable and can be brought to you, or delivered remotely. The crucial thing is to start talking.

Ready to start your data transformation journey? Visit our website to find out more.