
How AI is driving transformative change

We interviewed Glenn Fitzgerald, one of Fujitsu’s leading technology strategists, for an update on the technologies that everyone seems to be talking about today. Not only are deployments becoming increasingly sophisticated, but the underlying technologies are also evolving to make more possible than ever. We asked Glenn about the current state of Artificial Intelligence, and where the technology is heading.

AI seems to be all-pervasive today – is it the tech industry’s latest shiny thing, or are companies extracting real value from it?

If there’s one thing the technology industry is often guilty of, it’s fixating on the latest ‘hot’ products and services when we should really be focused on whether they can deliver the desired business outcomes. That said, many new and emerging technologies really do contribute to transformative change when implemented properly, and are worthy of the hype.

A very strong example is Artificial Intelligence, and in particular, advanced machine learning. But to get the best out of this technology, you need to start by identifying the business challenges that it can address.

AI isn’t a magical computer brain that can tell customers what strategy to follow. What it is, when applied effectively, is a powerful way of enhancing existing processes and on occasion opening up new business opportunities.

What sort of projects are you working on with customers?

Our customers have typically already identified the business problems they want to address by the time they engage with us, but they generally need help defining how to tackle them, as well as assistance in designing and building the systems to do so.

For example, image recognition based on deep learning is of great interest to many companies because of its very broad potential applications. It has a major role to play in the retail industry, for instance, by adding an additional layer of machine intelligence at the checkout. This can prevent fraud by recognizing that a customer has scanned a low-value item – let’s say a bunch of bananas – but placed a high-value item, such as a bottle of whisky, in their bag.
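
To make the checkout example concrete, here is a minimal illustrative sketch – the product labels, prices, confidence threshold and the classifier stub are all invented assumptions, not Fujitsu’s implementation – of how a camera-fed classifier’s view of the bagged item could be reconciled with the barcode that was actually scanned:

```python
# Illustrative sketch only: the labels, prices, threshold and the
# classifier stub are invented for demonstration purposes.

PRICES = {"bananas": 0.80, "whisky": 25.00}  # hypothetical price list

def classify_bagged_item(image):
    """Stand-in for a trained deep-learning image classifier.

    A real system would run a neural network on the camera frame;
    this stub simply returns a (label, confidence) pair.
    """
    return "whisky", 0.97

def check_scan(scanned_label, image, min_confidence=0.9):
    """Flag the transaction if the camera disagrees with the barcode."""
    seen_label, confidence = classify_bagged_item(image)
    if confidence < min_confidence or seen_label == scanned_label:
        return None  # no reliable mismatch: let the sale proceed
    if PRICES.get(seen_label, 0.0) > PRICES.get(scanned_label, 0.0):
        return f"alert: scanned '{scanned_label}' but bagged '{seen_label}'"
    return None

print(check_scan("bananas", image=None))  # -> alert: scanned 'bananas' ...
```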

Manufacturers are also finding image analysis extremely useful – not just to enhance processes, but also, alongside the analysis of logs and video streams, to establish preventative maintenance systems.

In fact, deep learning-based image analysis can help almost any organization solve problems, including many they probably didn’t know they had. This has been an incredibly powerful asset in the medical world, where it can be deployed to identify anomalies in scans of everything from entire MRIs to retinas or even individual cells.

The financial industry is also embracing the possibilities of AI: from the retail banks that are developing knowledge-based systems that will replace insurance underwriters, to the companies that are creating neural networks that can audit a distributed ledger.

How do you go about establishing this kind of AI system?

Ultimately, these exciting applications all come down to using neural networks for machine learning – which essentially means applying statistics to produce a consistent output from stimuli, where those stimuli could be any kind of data, from images to speech to data logs.

The challenging and highly processor-intensive part of the process is training the network so that it can recognize the elements that it needs to. Once the trained network is running, it needs very little infrastructure to support it.
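
As a rough illustration of both points – training as the compute-heavy, iterative phase, and inference as a single cheap forward pass – here is a minimal toy network in plain NumPy (the XOR task, layer sizes and learning rate are arbitrary assumptions, not any production configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: learn XOR with a tiny two-layer network.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer weights
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training: thousands of forward/backward passes - the expensive part.
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)              # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)            # forward pass, output
    g_out = (out - y) * out * (1 - out)   # output-layer error signal
    g_h = g_out @ W2.T * h * (1 - h)      # backpropagated hidden error
    W2 -= 0.5 * h.T @ g_out               # gradient-descent updates
    b2 -= 0.5 * g_out.sum(axis=0)
    W1 -= 0.5 * X.T @ g_h
    b1 -= 0.5 * g_h.sum(axis=0)

# Inference: a single cheap forward pass through the trained weights;
# the output should approach the target pattern [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel(), 2))
```

Virtually all of the processing happens inside the loop; once the weights have settled, answering a new stimulus costs just two small matrix multiplications.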

We are helping our customers to construct neural networks based on IT infrastructure platforms, which is a foundational step in the development of AI systems.

What has Fujitsu been doing to drive the development of AI?

Over in Japan, Fujitsu has been making great strides with RAIDEN (RIKEN AI Deep learning Environment), a dedicated computer system for artificial intelligence research. This was first deployed in 2017 at the RIKEN Center for Advanced Intelligence Project (AIP Center), the AI research arm of RIKEN, Japan’s largest comprehensive research institution.

Founded in 1917, RIKEN is renowned for high-quality research in a diverse range of scientific disciplines. A recent system upgrade for RAIDEN increased its performance by a considerable margin, moving from an initial total theoretical computational performance of four petaflops (PFLOPS) to 54 PFLOPS, which places it in the top tier of Japan’s AI systems.

Since it began operations following system delivery in April 2017, the RIKEN AIP Center has used RAIDEN for research and development on next-generation AI technology. Such cutting-edge research relies on enormous neural networks, the machine learning method at the heart of deep learning. Increasing the scale of these networks promises to improve, among other things, the accuracy with which they handle more complex characteristics.

This has led to a drastic increase in computational volume, and computation times are also growing due to the rising complexity of the algorithms and the volumes of data involved.

There’s also a link to high-performance computing (HPC) that will affect AI. While this is an entirely different discipline, with a different application of mathematics, the fundamental platform is the same.

HPC works by taking large, complex tasks and splitting them up into many smaller tasks that are solved separately. Technology is emerging in this space that will make it possible to apply the HPC approach to neural networks – with the potential to significantly change how they work.
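
The divide-and-combine pattern itself is easy to sketch. In this hedged example – standard-library Python only, with a deliberately trivial summation standing in for a real numerical workload – the task is split into independent chunks that worker processes solve separately before the partial results are merged:

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Solve one small, independent piece of the overall task."""
    start, stop = bounds
    return sum(i * i for i in range(start, stop))

if __name__ == "__main__":
    n, workers = 10_000_000, 8
    step = n // workers
    # Split the large task into independent chunks...
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(processes=workers) as pool:
        # ...solve them separately, then combine the partial results.
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as the single, monolithic computation
```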

One of the issues that has restricted the performance of many different applications is I/O bottlenecks, which slow the flow of data to and from processors. Intel is changing this with technology that will transform processing capabilities and therefore lead to much faster neural networks.

Intel is introducing non-volatile memory in large quantities on the internal system buses. This will significantly improve system performance by putting far more data much closer to the processor than traditional storage media, such as flash drives, can. It eliminates the need for processors to communicate with external devices – everything is handled within the server itself – and enables synchronous rather than asynchronous processing.
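
The programming model this points towards can be previewed today with memory mapping. In the rough sketch below, NumPy’s memmap over an ordinary file stands in for byte-addressable non-volatile memory on the system bus (an assumption for illustration only): the data is addressed directly, like RAM, rather than fetched through explicit storage I/O calls.

```python
import numpy as np

# Illustration only: an ordinary file stands in for byte-addressable
# non-volatile memory. The point is the access pattern - the array is
# mapped into the process address space and indexed like RAM, with no
# explicit read/write calls against an external storage device.
data = np.memmap("weights.dat", dtype=np.float32, mode="w+",
                 shape=(1024, 1024))
data[:] = 1.0   # populate the mapped region with load/store operations
data.flush()    # make the contents persistent

# Later, or in another process: map the same region back in and use it.
weights = np.memmap("weights.dat", dtype=np.float32, mode="r",
                    shape=(1024, 1024))
print(weights[512, 512])  # direct, memory-style access to stored data
```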

There’s also something new emerging in terms of the interconnect – technology that can deliver high bandwidth and low latency directly from a processor core. This further improves communication between processors and opens the way to things such as distributed-memory systems that will address both HPC and neural networks.

This will essentially make it possible to distribute neural networks across multiple, distributed platforms, leading to the creation of large networks of inexpensive servers.
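
One simple way such a network of servers could be used is data parallelism. The hedged sketch below simulates this in a single process – the model, data and learning rate are invented, and each ‘worker’ stands in for a separate server computing a gradient on its own shard of data; averaging those gradients is the step a fast, low-latency interconnect would carry in practice:

```python
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0, 0.5])   # invented "ground truth" model
w = np.zeros(3)                       # shared linear model being trained

def worker_gradient(w, n=256):
    """One simulated server's gradient on its own local shard of data."""
    X = rng.normal(size=(n, 3))
    y = X @ true_w
    return 2.0 * X.T @ (X @ w - y) / n   # mean-squared-error gradient

for step in range(200):
    grads = [worker_gradient(w) for _ in range(4)]  # four "servers"
    w -= 0.05 * np.mean(grads, axis=0)  # the all-reduce averaging step
                                        # the interconnect would carry

print(np.round(w, 2))  # should converge towards true_w
```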

In addition, the trend for software-defined everything is changing the platforms that are available. Where we once bought functionality such as firewalls, intrusion detection or switches as physical devices, we can now acquire these capabilities as software that runs conveniently on spare server capacity. This has already led to the creation of hyper-converged infrastructures, and over time will also contribute to changes in the software stack itself.

The increasing availability of open source software will play a role here too, as it is the logical platform for further development.

What advice do you have for customers thinking about using AI to help solve their business problems?

One thing that is certain is that change will keep coming. There will never be a ‘right time’ that is better than right now.

The one thing to bear in mind is that, during development, businesses must make sure that all the elements they select are compatible with what’s around the corner. In other words, if you invest in a proprietary solution that doesn’t follow industry standards, you are in danger of becoming locked in.


About the interviewee

Glenn joined ICL in 1979 as an apprentice and has worked for the company, now Fujitsu, throughout a varied career, gaining expertise in a wide range of IT fields, including manufacturing and production test; hardware, software and firmware design; infrastructure implementation; project management; and business and ICT architecture.

In his current role, Glenn is responsible for ensuring that the technologies utilized within Fujitsu strike an optimum balance of capability and function, delivering industry-leading, coherent solutions that balance a supporting vision for customers’ varied businesses with the “art of the possible” at any point in time: technical feasibility, cost, timescales, risk and flexibility.
