There’s a constant arms race in the data center between new technologies and a seemingly unquenchable thirst for ever-greater data performance. Rupert Lehner recently blogged about some of the advances we can expect in 2019, and here I will focus on two drivers that significantly merge IT and business – Artificial Intelligence (AI) and IoT.
How Artificial Intelligence will inhabit the data center
AI is in the process of going mainstream and it – like so much else in the digital world – thrives on data. It is the availability of giant oceans of data that allows Machine Learning to isolate and experiment with patterns (so-called “training”) until the machine develops an understanding of the data. With that learned “knowledge”, previously unseen data can very often be classified with a remarkably high rate of success. An additional advantage of this approach is that what has been learned is stored in a so-called model – imagine a kind of file that can be flexibly transferred anywhere. Using such a model requires little computing power, in contrast to training it. It is therefore the speed at which current and upcoming AI infrastructures can progress through iterations of the training process that leads to massive jumps in AI capabilities.
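The split between compute-heavy training and lightweight model use can be sketched in a few lines of Python. This is a deliberately toy nearest-centroid classifier, not any specific Fujitsu technology – the point is only that the model ends up as a plain file that can be shipped to another machine and used cheaply there:

```python
import json
from statistics import mean

# "Training": scan the full data set to build the model
# (in real Machine Learning this is the compute-intensive phase).
def train(samples):
    # samples: list of (feature_vector, label) pairs
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    # the model here is just the centroid of each class
    return {label: [mean(dim) for dim in zip(*vecs)]
            for label, vecs in by_label.items()}

# The trained model is a plain data structure, so it can be saved
# as a file and transferred anywhere -- e.g. to an edge device.
def save(model, path):
    with open(path, "w") as f:
        json.dump(model, f)

def load(path):
    with open(path) as f:
        return json.load(f)

# "Inference": classifying new data needs almost no computing power.
def classify(model, features):
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

training_data = [([1.0, 1.0], "cat"), ([1.2, 0.9], "cat"),
                 ([8.0, 8.5], "dog"), ([7.9, 8.1], "dog")]
model = train(training_data)
save(model, "model.json")                        # ship this file to the edge
print(classify(load("model.json"), [1.1, 1.0]))  # -> cat
```

Real deep-learning models are vastly larger, but the economics are the same: training is the expensive, iterative step, while applying the resulting model is cheap.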
Fujitsu and AI
In 2019, we will see significant growth rates in the use of AI in medicine, commerce, industry, services, and other sectors. Fujitsu’s long experience in the field of supercomputing, where we are repeatedly at the top of international comparisons, puts us in a very healthy position to deliver the benefits of AI to our customers.
In fact, if you were at the last Fujitsu Forum Munich in November, you would have seen some of the key portfolio elements in AI we are developing. There is Sholark, which uses AI to accelerate the digital transformation of business processes. It also analyses potential security risks, supports medical diagnoses and legal evaluations, and much more.
Then there is our Zinrai Deep Learning System (ZDLS), an AI appliance which was demonstrated live with customers at Forum as part of our “Experttalks” showcases. For those Experttalks, Zinrai was, for example, trained for image recognition: it “learns” what is on the images, creates a trained model and transfers the resulting model to edge devices where it can be leveraged – all without writing a single line of source code. The edge software still has to be adapted to the specific use case, of course, but no adaptations are needed for the data analysis, transfer to and from the edge, classification of unknown images taken by the edge device, or re-analysis and optimization of data models. And, of course, Zinrai is not dedicated only to images but also to text and speech recognition. It goes without saying that all this significantly accelerates the implementation of AI projects.
IoT and edge computing
Back in 2017, Fujitsu made a prediction about IoT when the likely direction of travel was far from clear. We saw a looming gap that would require organizations to bulk up their computing muscle at the edge of networks. That prediction now looks prescient and points towards where we will see enterprises reap enormous benefits in 2019.
Our logic was that reliable bandwidth is often lacking at the edge. Think of the aggregate cost of relying on public mobile networks for data backhaul, spread over thousands of industrial sensors. Even if challenges like these aren’t insuperable, there is still the latency involved in carrying that information back to the data center for analysis and decision-making, and the return loop for whatever actionable decisions ensue.
IoT-based systems are also increasingly becoming “artificially intelligent” at the edge. The purpose of edge computing is to capture, analyze and process data directly at the point where it is generated. The more intelligently this is done, the less data has to be transmitted back to central IT systems. This will become really important in 2019. For example, autonomous driving generates gigantic amounts of data, the transfer of which to a central data center is at the limit of what is possible, even with 5G technology. Here the intelligent edge will play a central role in solving future challenges.
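The data-reduction idea can be illustrated with a minimal sketch (hypothetical function name and thresholds, not the INTELLIEDGE API): an edge node processes every raw sensor reading locally and forwards only a compact summary plus any anomalous values to the central system.

```python
from statistics import mean

# Hypothetical edge-side filter: process raw sensor readings locally,
# forward only a summary plus any out-of-range values to central IT.
def edge_filter(readings, low=10.0, high=90.0):
    anomalies = [r for r in readings if not (low <= r <= high)]
    summary = {"count": len(readings),
               "mean": round(mean(readings), 2),
               "min": min(readings),
               "max": max(readings)}
    return {"summary": summary, "anomalies": anomalies}

# 1,001 raw readings are captured at the edge ...
raw = [50.0 + (i % 7) for i in range(1000)] + [120.0]  # one outlier
payload = edge_filter(raw)
# ... but only a handful of values ever leave the device.
print(payload["summary"]["count"], len(payload["anomalies"]))  # -> 1001 1
```

However simple, this is the essence of intelligent edge computing: the heavy stream of raw data stays local, and only decision-relevant information travels over the constrained network link.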
Just over a year ago, we launched the Fujitsu INTELLIEDGE Edge Computing Appliance, which enables the optimization of industrial processes for Industry 4.0, securely connecting data and resources to people and devices, and bridging the gap between Operational Technology (OT) in the field and on-premises and cloud-based Information Technology (IT). This gives enterprises an unprecedented end-to-end view of entire production processes. With that transparency they can achieve breakthroughs in monitoring and managing optimized, fully digitalized production lines, in automated supply chain optimization, and in the operation of ‘digital twins’ – a real-time digital representation of all processes.
In some ways, INTELLIEDGE was ahead of its time, because it’s only now that we are starting to see more widespread interest in computing at the network edge. You’ll be hearing more about that in 2019, partly driven by the arrival of 5G network technology, which will allow for greater bandwidth and lower latency on mobile networks.
More power for 2019
If an even more powerful infrastructure is required at the edge, our hyper-converged PRIMEFLEX systems are the alternative. And – picking up one of Rupert’s themes – to handle ever-more parallel data streams, Fujitsu has already announced plans to adopt native NVMe in future storage product lines alongside our ETERNUS AF All-Flash Arrays, providing massively parallel, previously unattainable data transfer speeds.
Perhaps you are already pushing towards these futures in your own data center and have some perspectives you’d like to share. Perhaps you have seen in these ideas the potential to build new capabilities and achieve new business benefits.