Few would dispute that organizations today have more data at their disposal than ever before, as more devices and resources become interconnected through the Internet. While access to such immense amounts of information creates great opportunities, research centers, higher-education institutions and scientists are now dealing with terabytes and petabytes of data, making it a challenge to derive meaningful insights. Competition has become fierce; the advantage will accrue to those who can extract insights from this plethora of data through advanced computing.
Research and educational organizations, as well as commercial entities, have realized the value of high performance computing (HPC) in running complex simulations over vast volumes of data. But this alone is not enough.
HPC was traditionally the domain of well-funded, high-ranking universities and government institutions. Nowadays, however, it has become mainstream, accessible and affordable enough for tier-2 and tier-3 universities and commercial organizations; what I like to call “thriving” businesses. While the key to innovation is deriving insights, success depends on achieving a faster time to market. The speed at which you can process data is therefore just as important.
Institutions now need to identify their workloads intelligently and allocate them to appropriate systems, based on the level of parallelism and the corresponding processing power required, in order to be both cost-effective and competitive. Tasks such as financial simulations and life-sciences computing now rely on much higher levels of performance and throughput, particularly as these applications scale to hundreds, thousands or even more parallel processes.
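How much a given workload actually gains from parallel hardware, the trade-off described above, is commonly estimated with Amdahl’s law. Here is a minimal sketch; the parallel fractions and process counts are illustrative values, not figures from this article:

```python
# A minimal sketch of Amdahl's law: the achievable speedup on parallel
# hardware is limited by the fraction of a workload that must run serially.

def amdahl_speedup(parallel_fraction: float, n_processes: int) -> float:
    """Theoretical speedup on n_processes for a workload whose
    parallel_fraction (0..1) of runtime can be parallelized."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_processes)

# A 95%-parallel workload still benefits at a thousand processes...
print(round(amdahl_speedup(0.95, 1000), 1))   # ~19.6x

# ...while a 50%-parallel one gains almost nothing beyond 2x, so it is
# better placed on a system sized for throughput, not massive parallelism.
print(round(amdahl_speedup(0.50, 1000), 1))   # ~2.0x
```

This is why matching each workload to the right system matters: adding processors to a mostly serial task wastes both capital and energy.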
So, what do we need? High density, highly parallel and throughput-oriented solutions at the right price per performance!
The University of Regensburg faced a similar problem when it wanted to run a joint program for the numerical simulation of quantum chromodynamics. It chose a cluster built with FUJITSU Server PRIMERGY CX600 M1, powered by Intel® Xeon Phi™ processors, which achieved rank 5 in the current GREEN500 list. The server provides the highest density and throughput, and nine times the performance of a standard general-purpose server. Best suited for highly vectorized and parallel applications, and apt for the workloads mentioned above, the PRIMERGY CX600 empowers institutes to optimize their compute infrastructure and take advantage of very high degrees of parallelization. When you run such highly parallel applications, energy consumption and the associated costs become a major concern, as the three-year electricity cost can exceed the cost of the solution itself. With the CX600, the smallest-footprint HPC solution available on the market, energy consumption is curbed, ensuring a cost-effective solution while providing the potential to solve larger problems much faster.
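The electricity-cost claim is easy to sanity-check with a back-of-envelope calculation. All figures below (cluster power draw, electricity price, PUE overhead) are hypothetical assumptions for illustration, not numbers from this article:

```python
# Back-of-envelope check of the claim that three years of electricity can
# rival the hardware cost. All input values are hypothetical illustrations.

def three_year_energy_cost(power_kw: float, price_per_kwh: float,
                           pue: float = 1.5) -> float:
    """Cost of running a system 24x7 for three years (non-leap years).
    pue (power usage effectiveness) accounts for cooling and facility
    overhead on top of the IT equipment's own draw."""
    hours = 3 * 365 * 24
    return power_kw * pue * hours * price_per_kwh

# e.g. a hypothetical 40 kW cluster at EUR 0.30/kWh:
cost = three_year_energy_cost(40.0, 0.30)
print(f"~EUR {cost:,.0f} over three years")
```

At these assumed rates the three-year bill lands in the hundreds of thousands of euros, comparable to the purchase price of a mid-sized cluster, which is why a dense, energy-efficient design pays off.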
Intel and Fujitsu have teamed up to offer customers first-hand experience of the power of an HPC environment built on Intel® Xeon Phi™ processors, free of charge. Take advantage of Fujitsu’s vast experience in HPC deployment across a variety of industries and workloads with a validated, integrated solution: FUJITSU Integrated System PRIMEFLEX for HPC.
Users who register for access will have the opportunity to try a system preloaded with a set of ready-to-use applications, or alternatively to bring their own codes and try them out on the platform for free. Fujitsu’s HPC team will provide technical support and assistance to ensure users get the highest return from their experience.