How IT containerisation is speeding up application development

For several years now, containers have been revolutionising IT by changing the way applications are designed, enabling developers to be more productive. How exactly? Alex Palesandro*, cloud engineer at D2SI, explains. D2SI is a consulting firm specialising in cloud and DevOps that works with CIOs and digital teams to support them through their digital transformation.

Containers are transforming application development and helping to deliver “infrastructure as a service”.

What are containers?

Just as in the transport sector, IT containers store objects so that they can be moved around. They make it easy to ship applications and their dependencies to any operating system, whatever it may be. They guarantee that the content is identical on departure and on arrival, and that it is secure, thanks to their isolation.
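To make this concrete, here is a minimal sketch (assuming a local Docker Engine and the `docker` Python SDK, installed with `pip install docker`; the image name is an illustrative choice): the image bundles the application with its dependencies, so it runs identically on any host.

```python
# Minimal sketch: run a packaged application with the Docker Python SDK.
# Assumes a running Docker Engine and `pip install docker`; "alpine:3.19"
# is just an illustrative image.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# The image bundles the application and all of its dependencies, so the
# output is the same on any host able to run the container.
output = client.containers.run("alpine:3.19", ["echo", "hello from a container"])
print(output.decode())
```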

What are they used for?

They are used to reduce the complexity involved in setting up and administering applications, and to speed up application development and production cycles. Thanks to their flexibility and portability, they are also one of the building blocks of “infrastructure as a service”, i.e. the automation of IT infrastructures.

How does containerisation work?

Containerisation is a method that virtualises, within a container, the hardware resources (file system, network, processor, RAM, etc.) needed to run an application. All of the application's dependencies (files, libraries, etc.) are stored in this space as well. Rather than shipping an operating system of its own, the container relies on the kernel of the host operating system, which lets the various hardware components and software programs communicate with one another.
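One way to see this kernel sharing in practice (a sketch under the same Docker SDK assumption, on a Linux host): the kernel release reported from inside the container is the host's own, because the container has no kernel of its own.

```python
# Sketch: show that a container uses the host's kernel rather than its own.
# Assumes a Linux host with a running Docker Engine and the `docker` Python SDK.
import platform
import docker

client = docker.from_env()

# `uname -r` inside the container reports the *host* kernel release,
# since the container only virtualises resources on top of the shared kernel.
in_container = client.containers.run("alpine:3.19", ["uname", "-r"]).decode().strip()
print("host kernel:     ", platform.release())
print("container kernel:", in_container)  # identical on a Linux host
```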

What is its added value?

Containerisation offers a lightweight way of virtualising resources, with isolation being guaranteed by the operating system. These resources are thus more easily portable from one system to another. It is a powerful accelerator of application development.
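That operating-system-level isolation can be seen through resource limits (a sketch, again assuming the Docker Python SDK; the limit values are arbitrary): the kernel's control groups cap what the container may consume, with none of the overhead of a full virtual machine.

```python
# Sketch: OS-enforced isolation via resource limits (Linux control groups).
# Assumes a running Docker Engine and the `docker` Python SDK; the limits
# below are purely illustrative.
import docker

client = docker.from_env()

output = client.containers.run(
    "alpine:3.19",
    ["echo", "constrained, yet starts in milliseconds"],
    mem_limit="64m",        # the kernel caps this container at 64 MiB of RAM
    nano_cpus=500_000_000,  # and at half a CPU (0.5 * 10^9 nano-CPUs)
)
print(output.decode())
```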

What role does Docker play in containerisation?

In 2013, Docker was the first market player to launch the concept of application containers with a lifecycle of their own. This changed the way containers were perceived, as until then they had been regarded as lightweight virtual machines. Docker developed the open-source software for managing containers, creating an image format and container runtimes that were then put through a standardisation process by the Open Container Initiative (OCI), a consortium founded by a group of companies to develop open reference standards for containers.
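To give a sense of what that image format contains (a sketch with the Docker Python SDK; the image choice is illustrative), an image is essentially metadata plus a stack of content-addressed filesystem layers:

```python
# Sketch: inspect an image's content-addressed ID and filesystem layers.
# Assumes a running Docker Engine and the `docker` Python SDK.
import docker

client = docker.from_env()

image = client.images.pull("alpine", tag="3.19")
print(image.id)  # content-addressed image identifier (sha256:...)
for layer in image.attrs["RootFS"]["Layers"]:
    print("layer:", layer)  # sha256 digest of each filesystem layer
```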

The only other market alternative is rkt (originally Rocket), created by CoreOS, which has since been purchased by Red Hat; but since standardisation, Docker has remained the market's most widely used solution.

So, there is only one major player in containerisation?

Yes, although new ones are emerging in the area of orchestration. Orchestrators are tools that manage container lifecycles by providing an overall view of them, so that applications can be configured on demand. In other words, they orchestrate the lifecycles of applications built on containers. This is what the Kubernetes project does: originally launched by Google, it became open source when it was donated to the Cloud Native Computing Foundation in 2015. It is the largest open source project after the Linux kernel, and the first to be considered mature amongst those hosted by the Cloud Native Computing Foundation. Kubernetes aims to work with any container runtime conforming to the OCI standard. It enables programmers to concentrate on how they want their applications to behave rather than on the details of their deployment: thanks to an abstraction layer for managing groups of containers, functionalities are decoupled from the components that provide them. It competes with Docker Swarm, the native clustering solution for Docker containers.
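For instance, here is a hedged sketch of that declarative approach, using the official `kubernetes` Python client (assuming a configured kubeconfig; the names, image, and replica count are illustrative): the programmer declares the desired state, three replicas of a web container, and the orchestrator converges the cluster towards it.

```python
# Sketch: declare a desired state and let Kubernetes converge towards it.
# Assumes `pip install kubernetes` and a working ~/.kube/config; all names
# and values below are illustrative.
from kubernetes import client, config

config.load_kube_config()  # read cluster credentials from ~/.kube/config

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # desired state: keep three identical pods running
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)

# The orchestrator, not the programmer, decides where and how the
# containers are deployed and restarts them if they fail.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```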

In what way is it a technological breakthrough?

Containerisation is a true technological breakthrough because it fits perfectly into the chain of continuous development and delivery of applications. It reduces the notorious “time to market”, shortening the gap between the forming of an idea and its materialisation as an application feature. Containerisation enables much faster delivery of new functionalities: companies such as Facebook and Instagram release around a hundred new features per day without users even noticing, thanks to containerisation and the whole machinery of continuous integration and continuous deployment.

What are the limits of containerisation?

Where containerisation itself is concerned, the tools are maturing little by little. In other words, containerisation and the kernel mechanisms it relies on (within the Linux kernel) are becoming thoroughly reliable. We no longer see the flaws that, a few years ago, allowed attackers to break out of containers and fraudulently access resources. Today this has become very difficult, because the code base is well proven and because ITOps teams are much better trained at limiting the opportunities for intrusion, for example by restricting the rights granted to applications.
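As an illustration of restricting those rights (a sketch, assuming the Docker Python SDK; the user and capability settings are just example hardening choices), a container can be run as an unprivileged user, with every Linux capability dropped and a read-only root filesystem.

```python
# Sketch: reduce a container's attack surface by limiting its rights.
# Assumes a running Docker Engine and the `docker` Python SDK; the
# settings below are illustrative hardening choices.
import docker

client = docker.from_env()

output = client.containers.run(
    "alpine:3.19",
    ["id"],              # prints the (unprivileged) identity it runs under
    user="nobody",       # run as an unprivileged user instead of root
    cap_drop=["ALL"],    # drop every Linux capability
    read_only=True,      # mount the container's root filesystem read-only
)
print(output.decode())
```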

However, for container orchestrators, limits remain. For example, when it comes to managing large amounts of data, progress is still needed on integration with cloud providers. Furthermore, many of the applications due to be migrated to the cloud cannot benefit from the advantages of container orchestration in terms of scaling and portability: they would need to be rewritten to fit this new approach, and that would take too much time.

How is containerisation going to evolve?

Once all programmers have fully mastered container orchestration and orchestration security has improved, we can imagine clusters of machines hosting containers belonging to completely heterogeneous systems. This will enable any application to securely use resources that do not belong to it. Containerisation and its orchestration will thus enable unified management of heterogeneous resources, which is highly useful for the IoT, with its wide range of processor architectures.

*He joined D2SI a year ago, after spending three years in Orange's cloud security department while working on his thesis on multicloud (at University Lyon 3). He has studied all forms of container-based virtualisation, and his specialist fields are multicloud, DevOps, system virtualisation, Software-Defined Networking, and Network Function Virtualisation.
