The IT market is constantly changing. Sometimes I feel it changes too fast. I live in constant fear that if I went on a year-long vacation, I would return to a completely different world, one I would have to adapt to in order to mean anything in this market. Well, I guess that's the reality of the programming profession. We are constantly chasing the news so as not to sleep through the moment that could be decisive for our careers.
Containerization, in my mind, was a moment I slept through. I remember hearing from all sides, a few years ago, what a promising technology Docker was. At the time I figured, as usual, that it was just another fad of the IT world that I could safely ignore, one that would die as quickly as it had appeared. If only I had known how wrong I was...
A few years passed. I was still slapping together code in Java, thinking I already knew everything, or at least enough to consider myself a senior developer; after all, I was then passing the six-year mark of professional Java programming. I remember it very well: a colleague announced to me that he had set up his development environment on Docker to make building his software easier. At the time I considered it a curiosity worth at least checking whether it was actually worth something. It coincided with a new project I had the opportunity to work on, which required containerization (it was one of the implementation requirements). This was one of the turning points of my IT life, something that opened my eyes and showed me that the boundaries of the IT world lie further out than I thought.
However, let's start at the beginning. Some time ago, smart people started building their software in a microservices architecture, and it quickly became apparent that the trend had firmly established itself in software development standards. Functional decomposition into many smaller services brings a whole set of benefits. Microservices are conceptually coherent: they are small projects that are easier to maintain. Each service communicates with the others to form, as a whole, a reasonably efficient system for the end user. Such bricks are easy to swap out for other bricks if it turns out that the customer suddenly changes his mind (and how well we know that!). Failure of one component does not necessarily mean failure of the whole system, only of one of its many functionalities. In addition, microservices scale very well: if you have performance issues, it is sometimes enough to duplicate instances of the key components and handle the excess traffic in parallel. Sounds great, and it actually is.
The Docker I mentioned is a technology that offers a lot of possibilities, also in the areas I just described. What is it, actually? It is a virtualization technology: it runs a service in a container, but without emulating the entire hardware layer and operating system. If you are familiar with the old and venerable VirtualBox, the effect is broadly similar. We get processes running in a separate, autonomous, isolated image. The difference is that a Docker container does not carry its own operating system; it shares the kernel of the host. As a result, a container weighs very little and uses very few resources. It does this cleverly enough that the running container remains isolated from our system and is unable to harm it. We can, of course, modify, start, stop, and remove containers whenever we like. The result is that there is no problem at all running 20 containers on a single mid-range computer (a database, several backend services, a frontend, monitoring, etc.). Cool? Sure! And would you be able to run 5 VirtualBox machines on your own computer? You would quickly run out of RAM, not to mention how much disk space those 5 images would take up. With Docker, a container weighs little more than the application itself, and the same goes for RAM consumption. It is all so elegant and opens up so many possibilities that it simply had to revolutionize the world. And it did!
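To make this concrete, here is a minimal sketch of how a containerized Java service might be described. The base image and jar name are illustrative assumptions, not something from a real project:

```dockerfile
# Hypothetical Dockerfile for a small Java service.
# The base image and jar name are only examples.
FROM eclipse-temurin:17-jre
COPY target/my-service.jar /app/my-service.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/app/my-service.jar"]
```

With such a file in place, `docker build -t my-service .` produces the image and `docker run -p 8080:8080 my-service` starts the container, sharing the host kernel rather than booting its own operating system.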
The era of containerization has arrived, and Docker has become a standard. Many companies order software delivered as containers, so that the brick can easily be dropped onto their environment and just work. It is hard to find a serious company these days that is not using such a solution, and if one still survives, it is stuck in the IT Middle Ages.
Want to delve deeper into this topic? Search for detailed information on Docker, Docker Compose and Docker Swarm.
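As a taste of Docker Compose: several containers can be described in one file and started together. The service and image names below are hypothetical, a sketch rather than a recipe:

```yaml
# Hypothetical docker-compose.yml: a backend plus its database,
# started together with `docker-compose up`.
services:
  backend:
    image: my-service:latest   # illustrative image name
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example
```

One `docker-compose up` then brings up both containers with networking between them already wired.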
Time moved on, and systems built on Docker alone became quite heavy to maintain. Large systems can have hundreds or even thousands of running containers, and a problem arose: how to manage, operate, automate, restart, and scale all of this. Then along came Google, riding in on a white horse with its Kubernetes. In retrospect, I must conclude they did a great job. Applause is also due to the Polish employees of Google, who still have their fingers in this project.
Kubernetes offers something that Docker and docker-compose did not: the ease of scaling virtually any container, support for failover, and automatic configuration across any number of computers (a cluster). Kubernetes is a name so long that the whole world calls it k8s (k, then 8 letters, then s). In the simplest terms, k8s standardized the way Docker containers are managed (in fact, k8s supports more than Docker alone, but that is not what this article is about). A k8s cluster spanning multiple machines decides by itself where to put a given container (in k8s nomenclature it runs inside a pod), how many replicas of it the system should have, what to do if a pod stops working, how to ensure communication between pods, which addresses and ports of our application should be exposed to the world, and much more. All of this is configured via YAML files in a compact and accessible form. Accessible enough that most systems such as Jenkins and GitLab have built-in support for it! Many times I have seen large companies (telecoms, banks) put entire environments on Kubernetes. It is a convenient, standardized, fail-safe solution. It is also a nod to developers, who do not need administrator knowledge to deploy their software. The developer writes a description of the deployment as a k8s YAML file and that's it: the application runs on the cluster. k8s itself starts the pod, gives it an IP address, exposes the appropriate ports to the world, and provides connectivity to other pods. Our program is ready to run in a Kubernetes environment.
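Such a deployment description might look like the sketch below: a Deployment keeping three replicas of a pod alive, plus a Service giving them one stable address inside the cluster. All names and the image are hypothetical:

```yaml
# Hypothetical Deployment: three replicas of an illustrative image,
# plus a Service exposing them inside the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
        - name: my-service
          image: my-service:1.0   # illustrative image name
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service
  ports:
    - port: 80
      targetPort: 8080
```

If one of the three pods dies, the cluster notices and starts a replacement on its own; that is the failover the paragraph above describes.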
How do we monitor our applications? k8s provides a CLI, kubectl, which gives access to all the capabilities of the cluster. For people who prefer to click, you can expose a web dashboard that provides the basic functionality through a regular browser.
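A few typical kubectl invocations illustrate the idea; the pod name here is a made-up example, and the commands of course require access to a running cluster:

```shell
kubectl get pods                                  # list running pods
kubectl logs my-service-abc123                    # logs of a (hypothetical) pod
kubectl describe pod my-service-abc123            # detailed state and events
kubectl scale deployment my-service --replicas=5  # scale up on demand
```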
Want to explore this topic in more depth? Search for detailed information on Kubernetes and Ingress.
Time went on, and the world went crazy about cloud technologies. If you slept through this moment, this is the last call to catch the departing train! It is estimated that in a few years more than 50% of IT jobs will be cloud-related. We will be able to cobble whole systems together in a moment instead of manually installing them on physical machines in company server rooms. We are witnessing a great revolution, the likes of which no one has ever seen! The clouds are adapting to customer needs: Amazon Web Services, Microsoft Azure, Google Cloud Platform, Oracle Cloud, Alibaba Cloud (and many others) all offer their own solutions for containerization.
We entrust the cloud providers' infrastructure with adapting resources so that our distributed system can run successfully. We no longer care about maintenance, purchases, hardware failures, electricity, or licenses. We pay a single bill for the resources we actually use and do not worry about anything else. It is up to the cloud to handle all the problems so that we never feel them. I have seen a deployment where a company needed to process a large amount of data in a short time. The solution turned out to be spinning up a few thousand containers and running the calculations in parallel in production. This was simply cobbled together at one of the cloud providers. The machines did their job, the money was made, and afterwards all the pods were removed so as not to pay for the resources any longer. Beautiful! Flexible! Cheap! That is why the clouds are worth learning. Many of them offer free trial periods, so give it a try yourself!
Do you want to explore this topic in more depth? Search for detailed information on:
Google GKE -- Google Kubernetes Engine
AWS EKS -- Amazon Elastic Kubernetes Service
Azure AKS -- Azure Kubernetes Service
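Each of these providers ships a CLI that can stand up a managed Kubernetes cluster in a single command. The cluster names and sizes below are illustrative, and each command assumes an authenticated account with that provider:

```shell
# Hypothetical cluster names; each line targets a different provider's CLI.
gcloud container clusters create demo-cluster --num-nodes=3                # GKE
eksctl create cluster --name demo-cluster --nodes 3                        # EKS
az aks create --resource-group demo-rg --name demo-cluster --node-count 3  # AKS
```

Once the cluster exists, the same kubectl commands and YAML files work against it; that portability is a large part of why managed Kubernetes took off.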
Programmer: it has a proud ring to it. We make so much money that we sometimes forget where it comes from. And yet we are able to create veritable miracles that process huge amounts of information many times faster than any human could by hand. We can automate many processes and improve the quality of work and life for the average person. It is important to remember, however, that our work requires constantly learning new technologies and constantly chasing new trends and solutions. Knowledge of a programming language itself is of course important, but our software, after all, has to run on something. Don't forget that!
Want to learn more in training? Check out the links below: