If you ever read science fiction or watch sci-fi movies, you’ll already be familiar with the idea that we’re only at the beginning of what computers can achieve.
Step back from the advances we observe in our daily lives and think in terms of centuries or even millennia, and it's obvious that we haven't actually been at it very long – and, therefore, the potential to get better must be huge. The first electro-mechanical computers were developed just under 80 years ago, transistor-based devices appeared about 65 years ago, and integrated circuits (aka microchips) began to appear about 10 years after that, in the mid-1960s.
If you imagine, for a while anyway, that you are a science fiction writer, then it’s not unreasonable that your next book will include the premise that new technologies will come along with such profound effects on computing that today’s best-performing hardware will look as underpowered and archaic as those early electro-mechanical devices appear to us today.
And in case you don’t believe me, go and see the Cray-1 supercomputer in the Deutsches Museum in Munich.
What was once an enormously expensive, powerful machine – built in 1983, and one I once had the pleasure of using – is now used mainly by museum visitors as a place to rest their weary feet. Let's think big, see the longer-term picture, and together peer into the future to see where we are headed.
The next computer revolution
The most fascinating thing about looking into the future is that we can already see the underlying technology of the next computer revolution.
In the 1970s, physicists familiar with quantum theory began to speculate whether it might provide the basis for encoding information, and in 1980 this led to the proposition by Yuri Manin of a quantum computer, an idea made more widely known by Richard Feynman during a lecture at the Massachusetts Institute of Technology (MIT) the following year.
In a previous blog post, I described the underlying physics of quantum computing and what exactly makes it so exciting. In quantum superposition, objects can be in two (or more) states at once – as popularized by the thought experiment known as Schrödinger’s cat.
If this superposition of states could be harnessed in a computer, you would have the ability to calculate all possible combinations of possible results simultaneously.
If you're wondering "so what", I gave the example of public-key encryption for data security, where finding the prime factors of a number with 617 decimal digits (in other words, a 2,048-bit key) would take today's fastest supercomputer about 21 billion years – one and a half times the age of the universe. For a quantum computer equipped with a relatively small number of quantum bits (qubits), the same task could, in principle, be completed in a very short time.
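To get an intuition for why factoring underpins this security, here is a minimal sketch (not a real attack) of classical trial division. Its running time grows roughly with the square root of the number being factored – which is exponential in the number's digit count, and this is what makes large keys safe against classical machines:

```python
def trial_division(n):
    """Factor n by trying every divisor up to sqrt(n).
    Work grows ~sqrt(n): exponential in the number of digits of n."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # remaining n is prime
    return factors

# A tiny RSA-style semiprime factors in microseconds...
print(trial_division(3233))  # -> [53, 61]
# ...but a 617-digit semiprime is hopeless for any classical approach like this.
```

Even the best known classical algorithms improve on trial division only partially; the scaling with digit count remains super-polynomial, which is exactly the gap a quantum computer running Shor's algorithm would close.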
We don't yet know, however, exactly how many qubits this task would require. The reason is quantum error correction, which needs a considerable number of additional physical qubits for every logical qubit to enable practical computation.
Back in the 1980s, the idea of a quantum computer was wildly speculative. Not many people outside of theoretical physics departments really believed that superposition and entanglement were real, let alone something you could use to create new technologies.
Let’s not forget that Schrödinger’s famous thought experiment was originally intended as a reductio ad absurdum – a means of disproving the Copenhagen interpretation of quantum mechanics, which included superposition. Yet despite that skepticism, we now see various kinds of quantum computers used for experimental lab testing.
How long do you want to wait?
Considering we started with an idea considered wacky in the extreme, it’s not altogether surprising that you can’t yet jump in your car and head to the nearest retail park to buy a quantum computer – or order one online and have it delivered to your door by a drone.
Let's be under no illusions: quantum computing technology remains very expensive, extremely difficult to run, and dependent on very specific operating conditions in order to compute and provide output – including power and cooling requirements that are simply beyond the reach of even high-end data centers.
In order to get the correct output for a problem, qubits need to stay in superposition at near absolute-zero temperatures, shielded from any outside interference, including cosmic rays and stray magnetic fields. Get this wrong and the qubits collapse out of their delicate entangled state, losing all quantum acceleration and, of course, rendering any calculation impossible.
In addition, in most machines, quantum bits can only be connected to a small number of adjacent ones because of the need for physical connections between the bits. You can see that the challenges associated with maintaining these precise conditions ensure that for the time being, at least, quantum computing based on superconductivity remains complex and impractical for many sectors and industries.
Which got us thinking. To solve these challenges, Fujitsu used conventional semiconductor technology and developed the Digital Annealer. For the technically-minded, this is a new, non-conventional, non-von-Neumann computing architecture.
It can quickly solve combinatorial optimization problems without the added costs and complications typically associated with quantum computing. The Digital Annealer is not a quantum computer, but it draws on the advances of quantum computing through a novel architecture built on classical hardware technology.
Our insight was that quantum computing is not just about hardware – it needs a new generation of software to be able to operate, and here the advances have been equally profound. Fujitsu’s research scientists quickly realized that there was an interesting and fertile possibility to harness the power of the new quantum software within silicon architectures.
We formed strategic alliances with the University of Toronto, which has a leading research position in the field, and with 1QB Information Technologies (1QBit), based in Vancouver, Canada. 1QBit is the leading commercial player and has co-developed software for the Fujitsu Digital Annealer.
Among the various quantum computing methods in the market today, the Digital Annealer falls into the category of the annealing method. This means it focuses on solving combinatorial optimization problems and on delivering results rapidly.
Unlike classical computers, the digital annealing method does not require conventional programming; instead, you simply parametrize the cost function to be minimized and feed it to the Digital Annealer, which performs the calculation. Providing this cost function is, however, far from trivial at this stage.
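As an illustration of what "parametrizing a cost function" can mean in practice, here is a hedged sketch in QUBO form (quadratic unconstrained binary optimization), the formulation typically used by annealing hardware. The toy problem, penalty weight, and brute-force evaluator below are illustrative stand-ins, not Fujitsu's actual interface:

```python
import itertools

# Toy problem: choose exactly 2 of 4 items while minimizing total item cost.
# Encoded as a QUBO: minimize E(x) = sum_{i<=j} Q[i][j] * x_i * x_j over binary x.
n = 4
costs = [3, 1, 4, 2]
P = 10  # penalty weight; assumed large enough to enforce the "pick 2" constraint

# The constraint (sum(x) - 2)^2 expands (using x_i^2 = x_i for binary x) into
# linear terms -3*x_i and quadratic terms 2*x_i*x_j, plus an ignorable constant.
Q = [[0.0] * n for _ in range(n)]
for i in range(n):
    Q[i][i] = costs[i] + P * (1 - 2 * 2)  # cost plus linear penalty part
    for j in range(i + 1, n):
        Q[i][j] = 2 * P                   # quadratic penalty part

def energy(x):
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(i, n))

# Brute force stands in for the annealer at this toy size (2^4 = 16 states).
best = min(itertools.product([0, 1], repeat=n), key=energy)
print(best)  # -> (0, 1, 0, 1): the two cheapest items, costs 1 and 2
```

The point is that the "program" is entirely contained in the matrix `Q`; the hard intellectual work is translating a real business constraint into those penalty terms, which is why the step is far from trivial.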
The Fujitsu Digital Annealer uses a circuit design inspired by quantum phenomena. It has a fully connected architecture, enabling the free exchange of signals between any two bits, and can therefore solve large-scale combinatorial optimization problems very quickly. The Digital Annealer offers 1,024 bits with full interconnection at 16-bit precision – 65,536 distinct coupling values – which is a higher precision than any superconductivity-based quantum annealing technology available today.
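A quick back-of-the-envelope check of what full connectivity at this scale implies, using only the figures quoted above:

```python
bits = 1024

# Full interconnection means every bit can couple to every other bit:
couplings = bits * (bits - 1) // 2
print(couplings)  # -> 523776 pairwise connections

# 16-bit precision per coupling gives the quoted number of distinct values:
print(2 ** 16)    # -> 65536
```

Over half a million independently weighted connections is precisely what sparsely connected annealers cannot offer, since in those machines a logical problem variable must be spread across chains of physical bits.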
From a practical perspective, though, the Digital Annealer operates with digital circuits at ordinary data center temperatures – it needs no special cooling – and fits into a standard rack without requiring any specific expertise or complex infrastructure. This is convenient, as the Digital Annealer functions as a kind of special accelerator that speeds up combinatorial optimizations, and will therefore always be used alongside conventional hardware in a hybrid environment.
Test cases show that when applied to the classic travelling salesman problem, which requires you to identify the shortest possible path connecting a number of defined stops, the Fujitsu Digital Annealer finds the solution up to 17,000 times faster than classical simulated annealing. If we waited for Moore's Law to give us the same improvement, that would take 14 chip generations (each roughly doubling performance, and 2¹⁴ ≈ 16,000), or around 25 years of development using silicon technology alone.
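For reference, the classical simulated-annealing baseline mentioned above can be sketched in a few lines. The random city coordinates, cooling schedule, and step count here are arbitrary illustrations, not a tuned benchmark:

```python
import math
import random

random.seed(42)

# Illustrative instance: 20 random cities in the unit square.
cities = [(random.random(), random.random()) for _ in range(20)]

def tour_length(tour):
    """Total length of the closed tour visiting cities in the given order."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def simulated_annealing(tour, t0=1.0, cooling=0.999, steps=20000):
    """Classical simulated annealing with 2-opt moves and geometric cooling."""
    best = list(tour)
    t = t0
    for _ in range(steps):
        i, j = sorted(random.sample(range(len(tour)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # reverse a segment
        delta = tour_length(cand) - tour_length(tour)
        # Always accept improvements; accept worsenings with probability e^(-delta/t).
        if delta < 0 or random.random() < math.exp(-delta / t):
            tour = cand
            if tour_length(tour) < tour_length(best):
                best = list(tour)
        t *= cooling  # lower the temperature, reducing tolerance for bad moves
    return best

tour = simulated_annealing(list(range(len(cities))))
print(f"tour length after annealing: {tour_length(tour):.3f}")
```

The occasional acceptance of worse tours is what lets the method escape local optima; the Digital Annealer applies the same annealing idea, but evaluates candidate moves massively in parallel in hardware rather than one at a time in software.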
Ready to go – real-world applications
You could be forgiven for thinking that the computer industry sometimes overstates its latest achievements. But this is an advance on a truly significant scale – and the applications to which it can be applied are real too.
Far from being a theoretical experiment, the Digital Annealer is already working to help customers rapidly solve combinatorial optimization problems. Available immediately as a cloud service, and as an on-premise product, it puts the power of the Digital Annealer at the disposal of businesses that require more computational volume than can be delivered by traditional computers to solve their business data problems.
For example, in factories and distribution warehouses, solving the traveling salesman problem can minimize the time it takes workers to walk around picking parts from stock bins. We know this because Fujitsu applied it to our own factories: the Digital Annealer reduced workers' monthly traveling distances by 45%, with a consequent reduction in non-productive time.
The same approach can, of course, be applied across all organizations that need to optimize routes, including postal and logistics services and, more widely, to challenges such as factory worker shift planning and even scheduling rounds in sports tournaments.
Digital Annealer – opening up the future
Finally, we have all heard a lot in the last 12 months about the possible impact of AI on business, society, and even humanity!
Amidst the welter of speculation, one practical issue has been overlooked. As big data analysis and AI become increasingly important themes in business innovation, some companies have started to invest in AI and data science. The difficulty in optimizing business processes, however, is that many problems require such an immense amount of computation that any attempt to solve them is discouraged from the outset.
Such problems can now be solved thanks to the development of quantum computing and quantum inspired technologies. The Digital Annealer makes it possible to easily verify hypotheses that may have been abandoned due to immense computational costs and time, thereby steadily advancing business objectives.
But as with all emerging technologies, businesses can't be expected to have the relevant expertise to simply run their problems through the Digital Annealer. They need help to ensure they derive the maximum possible benefit. Consequently, Fujitsu is offering a companion service to its new cloud-based offering – the Fujitsu Digital Annealer Technical Service – to help customers develop applications, define issues, and build and run the mathematical models that will contribute to their digital transformation.
To conclude, if you are still thinking about the plot of your new sci-fi bestseller, you could do worse than base it on 2018, the year in which complex business – and societal – challenges became computationally resolvable for the first time, with the introduction of Fujitsu Digital Annealer.