When it comes to a Microservices architecture, deployment plays a critical role and has the following key requirements.
Ability to deploy/undeploy independently of other Microservices. Developers never need to coordinate the deployment of changes that are local to their service; such changes can be deployed as soon as they have been tested. The UI team can, for example, perform A/B testing and rapidly iterate on UI changes. The Microservices architecture pattern makes continuous deployment possible.
Ability to scale at the level of each Microservice (a given service may get more traffic than other services).
Monolithic applications make it difficult to scale individual portions of the application. If one service is memory-intensive and another CPU-intensive, the server must be provisioned with enough memory and CPU to handle the baseline load of both. This can get expensive if each server needs a high amount of CPU and RAM, and it is exacerbated if load balancing is used to scale the application horizontally. Finally, and more subtly, the engineering team structure will often start to mirror the application architecture over time.
We can overcome this by using Microservices. Any service can be individually scaled based on its resource requirements. Rather than having to run large servers with lots of CPU and RAM, Microservices can be deployed on smaller hosts containing only those resources required by that service.
For example, you can deploy a CPU-intensive image processing service on EC2 Compute Optimized instances and deploy an in-memory database service on EC2 Memory-optimized instances.
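As a minimal sketch of matching instance types to service needs, the following Python snippet uses boto3 to launch the two services on different EC2 instance families. The AMI IDs are placeholders, the instance types are just examples of the compute- and memory-optimized families, and networking, IAM, and error handling are omitted.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# CPU-bound image processing service on a compute-optimized instance family.
ec2.run_instances(
    ImageId="ami-11111111111111111",  # placeholder AMI with the image-processing service
    InstanceType="c5.xlarge",         # compute optimized
    MinCount=1, MaxCount=1,
)

# In-memory database service on a memory-optimized instance family.
ec2.run_instances(
    ImageId="ami-22222222222222222",  # placeholder AMI with the in-memory database
    InstanceType="r5.xlarge",         # memory optimized
    MinCount=1, MaxCount=1,
)
```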
Ability to build and deploy Microservices quickly.
One of the key drawbacks of a monolithic application is that it is difficult to scale: as explained in the section above, the whole application has to be replicated in order to scale any part of it. With a Microservices architecture we can scale specific services, since each service is deployed in its own isolated environment. Dynamic scaling is now commonplace and most IaaS platforms offer it (e.g., Elastic Load Balancing with Auto Scaling on AWS); to take advantage of it, we need to be able to launch a service in its isolated environment quickly.
The following are the basic deployment patterns commonly seen in the industry.
- Multiple service instances per host - deploy multiple service instances on a host
- Service instance per host - deploy a single service instance on each host
- Service instance per VM - a specialization of the Service Instance per Host pattern where the host is a VM
- Service instance per Container - a specialization of the Service Instance per Host pattern where the host is a container
Container or VM?
Today there is a significant trend in the industry to move from VMs to containers for deploying software applications. The main reasons are the flexibility and lower cost that containers provide compared to VMs. Google has used container technology for many years, running its applications at scale on the Borg and Omega container cluster management platforms. More importantly, Google has contributed to the container space by implementing cgroups and participating in the libcontainer project. Google has likely gained enormously in performance, resource utilization, and overall efficiency by using containers over the years. More recently, Microsoft, which did not have operating-system-level virtualization on the Windows platform, moved quickly to implement native support for containers on Windows Server.
I found a nice comparison on the internet between VMs and containers that likens them to houses and apartments.
Houses (the VMs) are fully self-contained and offer protection from unwanted guests. They also each possess their own infrastructure – plumbing, heating, electrical, etc. Furthermore, in the vast majority of cases houses are all going to have at a minimum a bedroom, living area, bathroom, and kitchen. I’ve yet to ever find a “studio house” – even if I buy the smallest house I may end up buying more than I need because that’s just how houses are built.
Apartments (the containers) also offer protection from unwanted guests, but they are built around shared infrastructure. The apartment building (Docker Host) shares plumbing, heating, electrical, etc. Additionally, apartments are offered in all kinds of different sizes, from studio to multi-bedroom penthouse. You're only renting exactly what you need. Finally, just like houses, apartments have front doors.
There are design-level differences between these two concepts. Containers share the underlying resources of the host while providing an isolated environment, and they provide only the resources needed to run the application. VMs are different: a VM first boots an operating system and then starts your application, and, like it or not, that OS brings a default set of services you may not need, all of which consume resources.
Before moving into the actual comparison, let's see how we can deploy a microservice instance in a given environment. The environment can take different shapes: multiple services on a single VM, multiple containers on a single VM, a single container per VM, or a dedicated host per service. Deployment is not just starting an application on a VM or dropping it into a web container; we need an automated way to manage it. For example, AWS provides good VM management capabilities for such deployments. If we use VMs for deployment, we normally build a VM image containing the required application components and use that image to spawn any number of instances.
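As a rough sketch of that VM-image workflow, the following boto3 snippet bakes an AMI from an instance that already has the service installed and then launches new instances from it. The instance ID, image name, and instance type are placeholders, and waiting for the AMI to become available, plus error handling, is omitted.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Bake a reusable image from an instance that already runs the service.
# "i-0123456789abcdef0" is a placeholder instance ID.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="order-service-image-v1",
)

# Once the AMI is available, spawn any number of service instances from it.
ec2.run_instances(
    ImageId=image["ImageId"],
    InstanceType="t3.medium",
    MinCount=3, MaxCount=3,
)
```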
Similar to AWS VM management, we need a container management platform for containers as well, because when a specific service needs to scale, we cannot manually monitor the environment and start new instances; it should be automated. Kubernetes is one example. It extends Docker's capabilities by letting us manage a cluster of Linux containers as a single system: managing and running Docker containers across multiple hosts, offering co-location of containers, service discovery, and replication control.
Both VMs and containers are designed to provide an isolated environment. Additionally, in both cases that environment is represented as a binary artifact that can be moved between hosts. There may be other similarities, but as I see it, those are the major ones.
In a VM-centered world, the unit of abstraction is a monolithic VM that stores not only application code but often its stateful data. A VM takes everything that used to sit on a physical server and packs it into a single binary so it can be moved around, but it is still the same thing. With containers, the abstraction is the application, or more accurately, a service that helps make up the application.
This matters when we scale up instances: with VMs, scaling means spawning another VM instance, which takes some time to start (OS boot time plus application boot time), whereas with a Docker-style container deployment we can start a new container instance within a few milliseconds (only the application boot time).
Another important factor is patching existing services. Since we cannot write code that is entirely free of issues, we will definitely need to patch it, and patching in a microservices environment is a little tricky because there may be more than a hundred instances to patch. With a VM deployment, we need to build a new VM image that includes the patches and use it for deployment; that is not an easy task, because there can be more than a hundred microservices and we would need to maintain many different types of VM images. With a Docker-style container deployment this is much less of an issue: we can configure the Docker image to pull these patches from a configured location. A similar result can be achieved with Puppet scripts in a VM environment, but Docker has that capability out of the box. Therefore, the total configuration and software-update propagation time is much faster with the container approach.
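As a small, hedged sketch of that kind of automation, the snippet below uses the official Kubernetes Python client to scale an existing Deployment. The deployment name and namespace are assumptions for illustration, and in practice a Horizontal Pod Autoscaler would usually drive this rather than a manual call.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (in-cluster config is also possible).
config.load_kube_config()
apps = client.AppsV1Api()

# Scale the (hypothetical) "image-processor" Deployment to five replicas;
# Kubernetes then starts or stops containers across the cluster to match.
apps.patch_namespaced_deployment_scale(
    name="image-processor",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```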
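As a hedged illustration of how lightweight that is, the following uses the Docker SDK for Python to start one more instance of a service on the same host; the image name and port mapping are assumptions made for the example.

```python
import docker

client = docker.from_env()

# Start another instance of the service; no OS boot is involved, only the
# application's own startup, so the container is available almost immediately.
container = client.containers.run(
    "order-service:1.0.0",     # hypothetical service image
    detach=True,
    ports={"8080/tcp": None},  # let Docker pick a free host port
)
print(container.short_id, container.status)
```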
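As a rough sketch of that patch-and-roll-out flow, again with the Docker SDK for Python: the build path, tags, and service name below are assumptions, and a real rollout would be staged by an orchestrator rather than stopping everything at once.

```python
import docker

client = docker.from_env()

# Rebuild the service image so the patched code is baked into a new tag.
# "./order-service" and both tags are hypothetical.
image, build_logs = client.images.build(
    path="./order-service",
    tag="order-service:1.0.1",
)

# Replace running containers of the old version with the patched image.
for old in client.containers.list(filters={"ancestor": "order-service:1.0.0"}):
    old.stop()

client.containers.run("order-service:1.0.1", detach=True)
```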
A heavier car may need more fuel to reach higher speeds than a lighter car of the same spec. Sports car manufacturers adhere to this principle and use lightweight materials such as aluminum and carbon fiber to improve fuel efficiency. The same theory applies to software systems: the heavier the software components, the more computation power they need. Traditional virtual machines use a dedicated operating system instance to provide an isolated environment for software applications, and that operating system instance needs additional memory, disk, and processing power on top of what the applications themselves require. Linux containers solve this problem by reducing the weight of the isolated unit of execution: hundreds of containers share the host operating system kernel. The following diagram illustrates a sample scenario of how many resources containers can save compared to virtual machines.
We cannot say that container-based deployment is the best choice for microservices in every case; each deployment comes with its own constraints. So we need to carefully select containers, VMs, or a hybrid of both, based on our requirements.