How Pictures and an Open-Source Approach Are Unlocking the Power of AI

The cloud is opening up many opportunities from a development perspective. Just a few years ago, new application development meant working on a single machine in a lab. Some of the greatest challenges started once the app was completed, because you then had to set up the infrastructure and try to maintain a consistent environment to get the app up and running. It was often a frustrating process. Today, we have some great tools to work with, including OpenStack, which lets you develop in the cloud from the outset and then easily deploy those new applications.

One of the exciting areas that benefits from OpenStack and the cloud is artificial intelligence (AI). AI essentially turns data into insight. It's something we have only recently been able to leverage, thanks to a combination of finally having enough processing capability and machine learning coming of age. We are currently creating a host of AI apps, all driven through OpenStack APIs, which will ultimately enable us to combine many of these functions and roll them out into a much broader cloud ecosystem.
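To give a feel for what "driven through OpenStack APIs" can look like in practice, here is a minimal sketch using the openstacksdk Python library to boot a worker VM for a training job. The cloud profile and the image, flavor, and network names are placeholders of mine, not our actual deployment code:

```python
# Sketch: provisioning a worker VM through OpenStack's APIs with openstacksdk.
# The cloud profile and resource names below are illustrative placeholders.
import openstack

conn = openstack.connect(cloud="my-cloud")        # credentials from clouds.yaml

image = conn.compute.find_image("ubuntu-16.04")   # hypothetical image name
flavor = conn.compute.find_flavor("gpu.large")    # hypothetical flavor
network = conn.network.find_network("private")    # hypothetical network

server = conn.compute.create_server(
    name="ai-training-worker",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)     # block until the VM is ACTIVE
print(server.status)
```

Because the whole lifecycle is API-driven, the same few calls can be scripted to scale workers up for training and tear them down afterwards.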

They say a picture paints a thousand words, but images can also convey a great deal of complex information.

We’ve harnessed this concept with our approach to AI, which we call Imagification. Simply put, it turns any given data problem into a puzzle involving images. Before I explain further, it’s important to note the usual approach: when you train a neural network, you must train it specifically for every new task and repeat the whole training process for every new problem.

Alternatively, and this is what we at Fujitsu do in some cases, one can approach the problem by mimicking the way human beings learn: taking what has been trained for other purposes and applying it to a new task. This means we don’t need to retrain the neural network for every specific task, or can at least cut the training time dramatically. We use a mechanism in our neural networks that is similar to the way the human brain sees pictures. We effectively take data and draw a picture with it. Then we can use a general-purpose neural network we’ve trained to work with images and ask it to examine a whole range of different things or activities. Each new question we ask relates to a new feature vector, and each new application just needs a new Imagification input before the neural network can examine the features and patterns in that image. We are currently applying this approach to a broad cross-section of real-world problems.
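Fujitsu has not published this pipeline in detail, so the following is only a minimal sketch of the general pattern under assumptions of mine: the data is "imagified" as a recurrence-plot-style image, and a pretrained torchvision backbone (here ResNet-18) serves as the general-purpose feature extractor:

```python
# Minimal sketch of the Imagification idea (assumed details, not Fujitsu's
# actual method): render data as an image, embed it with a frozen image net.
import numpy as np
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

def to_image(series: np.ndarray, size: int = 224) -> torch.Tensor:
    """Render a 1-D series as a recurrence-plot-style 3-channel image tensor."""
    r = np.abs(series[:, None] - series[None, :])       # pairwise distances
    r = (r - r.min()) / (r.max() - r.min() + 1e-9)      # scale to [0, 1]
    img = torch.from_numpy(r).float()[None, None]       # shape 1x1xNxN
    img = F.interpolate(img, size=(size, size), mode="bilinear")
    return img.repeat(1, 3, 1, 1)                       # treat it as RGB

# One general-purpose image network, trained once and reused for every task.
backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
backbone = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

series = np.sin(np.linspace(0, 20, 300)) + 0.1 * np.random.randn(300)
with torch.no_grad():
    feature_vector = backbone(to_image(series)).flatten()  # 512-d embedding
```

Each new task then needs only its own imagification mapping and a small task-specific head; the backbone itself is never retrained.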

For example, the neural network can identify someone’s signature. Even in 2017, we still use an ink scribble to prove our identity, and machine learning can make verifying it a far easier problem. The system can now answer questions such as whether a new signature is a good likeness of previous signatures, and from that we can conclude that it is genuine. AI can do more, though: we could even ask who is responsible for a signature, and find out how consistent that person’s signatures are across a number of samples.
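The post doesn’t spell out how such a check works internally, but one plausible sketch is to compare the feature vector of a new signature against stored reference vectors (produced by the backbone sketched above) using cosine similarity; the threshold value is a made-up placeholder:

```python
# Hypothetical verifier: cosine similarity between a new signature's feature
# vector and stored references; the 0.85 threshold is a made-up placeholder.
import torch
import torch.nn.functional as F

def is_genuine(new_vec: torch.Tensor, refs: list[torch.Tensor],
               threshold: float = 0.85) -> bool:
    """Accept if the new signature is close enough to any known-good one."""
    sims = torch.stack([F.cosine_similarity(new_vec, r, dim=0) for r in refs])
    return sims.max().item() >= threshold

def consistency(refs: list[torch.Tensor]) -> float:
    """How tightly one person's signatures cluster around their centroid."""
    stacked = torch.stack(refs)
    centroid = stacked.mean(dim=0, keepdim=True)
    return F.cosine_similarity(stacked, centroid, dim=1).mean().item()
```

The same embeddings answer both questions: nearest-reference similarity for "is it genuine?", and spread around the centroid for "how consistent is this signer?".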

Encouragingly, we have already determined that this works not only with static images; we can apply it to anything, as long as we can create a visual representation. One of the situations we have looked into is driving. A possible scenario might be for insurance companies to provide customers with wearable wristbands that detect the micro-gestures suggesting poor driving habits. Accelerometer data captured from the wristband is then analyzed by Fujitsu’s deep learning time-series technology, which classifies the driver’s activities in near real time. It can, for example, detect activities such as talking on the phone while driving, eating behind the wheel, or steering with one hand. Ultimately, there could be an opportunity for the insurance provider to reward safe drivers who exhibit few of these habits.
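As a hedged illustration of that wristband scenario (the class labels, window size, and image encoding are all assumptions of mine, not Fujitsu’s published design), each accelerometer axis can become one color channel of an imagified window, which the frozen backbone then embeds for a small per-task classifier:

```python
# Sketch only: imagify a 3-axis accelerometer window (one axis per channel)
# and classify it with a tiny head on top of the frozen image backbone.
import numpy as np
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

LABELS = ["attentive", "phone", "eating", "one_handed"]  # hypothetical classes

def accel_to_image(window: np.ndarray, size: int = 224) -> torch.Tensor:
    """window: (N, 3) samples -> 1 x 3 x size x size recurrence-style image."""
    chans = []
    for axis in range(3):
        x = window[:, axis]
        r = np.abs(x[:, None] - x[None, :])
        chans.append(torch.from_numpy(
            (r - r.min()) / (r.max() - r.min() + 1e-9)).float())
    img = torch.stack(chans)[None]                       # shape 1x3xNxN
    return F.interpolate(img, size=(size, size), mode="bilinear")

backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
backbone = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()
head = torch.nn.Linear(512, len(LABELS))                 # the only part trained

window = np.random.randn(300, 3)                         # stand-in sensor data
with torch.no_grad():
    vec = backbone(accel_to_image(window)).flatten()
    activity = LABELS[head(vec).argmax().item()]
```

Only the small linear head needs training data for the driving task; the image backbone is reused exactly as described above.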

We apply the same technique to 3D images captured from different mechanical components, for example car engine parts. We can show the application (we call it ‘3D shape analysis as a service’) a component and ask it what shapes are similar, or tell it to find potential matching shapes. We can even ask it what the approximate manufacturing cost is likely to be.
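The service’s internals aren’t described here, but a plausible sketch is to store each part as a feature vector (for example, averaged embeddings of multi-view renders from the same frozen backbone) and answer similarity queries by nearest-neighbour search; the part names and random 512-dimensional embeddings below are placeholders:

```python
# Hypothetical sketch of '3D shape analysis as a service': parts are stored
# as feature vectors and similarity queries reduce to nearest-neighbour
# search over those vectors by cosine similarity.
import numpy as np

def most_similar(query: np.ndarray, catalog: dict, k: int = 3):
    """Return the k catalog parts closest to the query embedding."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    ranked = sorted(catalog.items(), key=lambda kv: cos(query, kv[1]),
                    reverse=True)
    return ranked[:k]

# Placeholder catalog; real entries would come from rendered engine parts.
catalog = {name: np.random.randn(512) for name in ("piston", "valve", "camshaft")}
matches = most_similar(np.random.randn(512), catalog, k=2)
```

Cost estimation could then be a small regression head trained on the same embeddings, though that, too, is an assumption rather than a documented detail.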

Each of these example applications is powered by OpenStack, which provides the agility, performance, API ecosystem, and globally distributed platform that next-generation AI requires. OpenStack is also at the heart of Fujitsu Cloud Service K5, and our MetaArc portfolio, designed to help businesses implement digital transformation, runs on it.

In today’s hyper-connected world, every device, from thermostats, cars, and machinery through to sensors and components, is linked. Large enterprises, often slow-moving by nature, frequently find it difficult to leverage the wealth of data being produced and so struggle to compete effectively in this digital era. With Fujitsu Cloud Service K5 and MetaArc, however, we are helping enterprises compete in this new world by enabling them to leverage technologies such as IoT, big data, and artificial intelligence. By empowering our customers in this way, we are enabling them to achieve their own transformative breakthroughs. As these examples show, people are at the heart of everything we do, and Fujitsu Cloud Service K5 and MetaArc are bringing that to life.
