It’s tough being a software developer today. Software development used to be a slow, methodical process, not least because of the lengthy testing cycles required for every change to an application. Today, however, developers and IT departments are under pressure to deliver high-quality applications rapidly, helping their businesses respond to market opportunities in real time.
Agility is the order of the day. As a result, we are seeing a shift in how applications are built, deployed and managed. Developers are turning to microservices to help them keep pace with the speed of business. The idea is straightforward: you decompose an application into multiple small, independent services, each with a discrete functional scope. These services can be developed independently of one another, and they communicate with each other via APIs, which we see as the new standard for inter-application communication.
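The idea can be made concrete with a small sketch: two hypothetical services, a "pricing" service and a "checkout" service, where checkout consumes pricing's HTTP API. Only Python's standard library is used here; the service names, the /price endpoint and the data are illustrative assumptions, not part of any real system.

```python
# Minimal sketch of two cooperating microservices (illustrative names and
# endpoint), using only Python's standard library.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class PricingHandler(BaseHTTPRequestHandler):
    """The 'pricing' service: one small, discrete functional scope."""
    def do_GET(self):
        if self.path == "/price":
            body = json.dumps({"product": "widget", "price": 9.99}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the example quiet
        pass

def start_pricing_service():
    # Port 0 lets the OS pick any free port.
    server = HTTPServer(("127.0.0.1", 0), PricingHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def checkout_total(pricing_port, quantity):
    """The 'checkout' service: consumes the pricing service's API."""
    with urlopen(f"http://127.0.0.1:{pricing_port}/price") as resp:
        price = json.load(resp)["price"]
    return round(price * quantity, 2)

server = start_pricing_service()
print(checkout_total(server.server_address[1], 3))  # 29.97
server.shutdown()
```

Each service could now be developed, tested and released on its own schedule; the API contract is the only shared surface between the two teams.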
The increased use of microservices is driving another trend: DevOps, a term coined for the close cooperation between a company’s application development and systems operations teams. By working collaboratively from the start of a project, teams can build, test and release software much more rapidly than before, resulting in faster feedback and more innovation. This approach lends itself well to deploying microservices.
When it comes to deploying these microservices, it turns out that containers, while not designed exclusively for them, represent an excellent platform. Containers are executed as independent processes in the user space of the operating system and usually house a single application. They share the host operating system kernel, while everything else the application needs at runtime, including its binaries and libraries, is packaged inside the container.
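What "packaged inside the container" means in practice can be sketched with a minimal Dockerfile for a hypothetical Python service; the base image tag and file names are illustrative assumptions, not a definitive build.

```dockerfile
# Minimal sketch of packaging one (hypothetical) Python microservice.
FROM python:3.12-slim

WORKDIR /app

# Everything the service needs at runtime travels inside the image:
# first its library dependencies...
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# ...then the application code itself.
COPY service.py .

# One container, one application process.
CMD ["python", "service.py"]
```

The same image a developer builds and runs on a laptop can later be pulled and run unchanged in the data center or at a cloud provider; only the host kernel is shared with whatever machine it lands on.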
At first glance, containers might sound similar to the more familiar virtual machines (VMs). Containers, however, offer significant advantages for microservices. They are smaller, consume fewer resources, and allow a higher workload density per server. Startup is very fast: a container launches in less than a second, whereas a typical VM boot may take minutes. Furthermore, actions such as migrating containers between servers or downloading them from their repositories can be executed much faster.
While a virtual machine packages an entire application stack as a single unit, regardless of how it was created, a containerized application can be distributed across the available server cluster nodes, or even across different clouds.
Another compelling advantage of containers is portability. Programmers can write and test code locally on their laptops; when they then execute that code in a different environment, for example in their data center or at a cloud provider, they can be sure it works in exactly the same way as it did on their own machines. Using containers, developers can build an application once and run it anywhere. Scalability is also a breeze: multiple instances of the same container image can run on one server or across many. All of these benefits translate into attractive savings, both in capital expenditure and in controlling operational expenses.
As we have seen, the combination of microservices and containers can deliver real benefits, but implementation is not without challenges. First, the dependencies and prerequisites of each microservice need to be managed across the numerous individual services that constitute an application. Reliable orchestration is also crucial: an intelligent control function, the container orchestrator, is required to manage these dependencies and to distribute the microservices for maximum efficiency across the server cluster. The orchestrator should also be able to address problems before they affect performance; for example, in the event of a server failure or a performance bottleneck, it should reschedule containers on other servers so that the workload remains optimally balanced.
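As one concrete illustration, a Kubernetes Deployment (Kubernetes being one widely used container orchestrator, not the only option) declares how many copies of a container should run and what resources each needs; the scheduler then places them across the cluster and restarts them elsewhere if a node fails. The names and image below are illustrative assumptions.

```yaml
# Sketch of a Kubernetes Deployment; service name and image are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pricing-service
spec:
  replicas: 3                 # the orchestrator keeps three copies running
  selector:
    matchLabels:
      app: pricing
  template:
    metadata:
      labels:
        app: pricing
    spec:
      containers:
      - name: pricing
        image: registry.example.com/pricing:1.0   # hypothetical image
        resources:
          requests:           # helps the scheduler balance the cluster
            cpu: "100m"
            memory: 64Mi
```

If a node hosting one of the three replicas fails, the orchestrator notices the shortfall and schedules a replacement on a healthy node, which is exactly the self-healing behavior described above.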
While not a prerequisite, a (private) cloud infrastructure can provide significant advantages. Its self-service features and rapid provisioning capabilities allow software developers to request and receive what they need on demand – which perfectly embodies the principle of DevOps.
While adopting a container-based approach dramatically simplifies software deployment, the complexity of the underlying infrastructure remains. Deploying data center hardware involves the careful coordination of elements including servers, storage, networks, virtualization layers, databases and middleware. The DIY approach can be challenging: identifying the right requirements for current and future needs, and configuring the individual components before integrating them. It will most likely also involve coordinating components from multiple vendors, which makes both integration and ongoing maintenance more complex and requires an extremely detailed understanding of all the separate elements.
That’s where Fujitsu’s PRIMEFLEX Integrated Systems come into their own. These systems comprise pre-integrated and pre-tested data center components that can be assembled in the way that best supports a customer’s specific requirements. As a result, PRIMEFLEX represents an easy way to introduce new infrastructure to the data center. Customers can implement Robust IT solutions, based on servers, storage and network components, or take a Fast IT approach, with server-based solutions for scale-out scenarios. PRIMEFLEX systems are designed so that their components work together optimally, as they are based on best practice and significant field experience. By taking the integrated-systems route, complexity is reduced and introducing new infrastructure becomes much simpler; risk is minimized, and the skill set required of your IT department is less exacting.
While the pressure on businesses to respond rapidly to changing market opportunities is unrelenting, IT teams now have more powerful tools at their disposal to develop and roll out new applications extremely quickly, by deploying the powerful combination of containers and Fujitsu PRIMEFLEX integrated systems. This combination represents an attractive path to Fast IT, enabling IT departments to deliver the disruptive innovation that is key to success in today’s rapidly shifting marketplace.
If you’re interested in learning more about Fujitsu’s approach to the new world of containers, you may be interested in downloading our latest whitepaper.