
Why is container-based deployment preferred for microservices?

In a microservices architecture, the deployment of the services plays a critical role and has the following key requirements.

Ability to deploy/un-deploy each service independently of other microservices.
Developers never need to coordinate the deployment of changes that are local to their service; such changes can be deployed as soon as they have been tested. The UI team can, for example, perform A/B testing and rapidly iterate on UI changes. The microservices architecture pattern makes continuous deployment possible.

Ability to scale at the individual service level (a given service may get more traffic than other services).
With a monolithic application, it is difficult to scale individual portions of the application. If one service is memory-intensive and another CPU-intensive, the server must be provisioned with enough memory and CPU to handle the baseline load for each service. This can get expensive if each server needs a large amount of CPU and RAM, and it is exacerbated if load balancing is used to scale the application horizontally. Finally, and more subtly, the engineering team structure will often start to mirror the application architecture over time.

[Figure: main.png]

We can overcome this by using microservices: any service can be scaled individually based on its resource requirements. Rather than having to run large servers with lots of CPU and RAM, microservices can be deployed on smaller hosts containing only the resources required by that service.
For example, you can deploy a CPU-intensive image processing service on EC2 Compute Optimized instances and deploy an in-memory database service on EC2 Memory-optimized instances.
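As a minimal sketch (the image names, tags, and limits below are hypothetical), the same idea can be expressed at the container level, giving each service only the resources its profile requires:

    # CPU-heavy service: generous CPU quota, modest memory limit
    docker run -d --name image-processor --cpus="4" --memory="512m" example/image-processor:1.0
    # Memory-heavy service: small CPU quota, large memory limit
    docker run -d --name session-cache --cpus="1" --memory="8g" example/in-memory-db:1.0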

Ability to build and deploy microservices quickly.
One of the key drawbacks of a monolithic application is that it is difficult to scale: as explained in the section above, the whole application must be replicated in order to scale any part of it. With the microservices architecture we can scale specific services, because each service is deployed in its own isolated environment. Dynamic scaling is now very popular, and every IaaS provider offers that capability (e.g., Elastic Load Balancing). To take advantage of it, we must be able to launch an application in an isolated environment very quickly.


The following are the basic deployment patterns commonly seen in the industry.
  • Multiple service instances per host - deploy multiple service instances on a host
          [Figure: multipleservice.png]
  • Service instance per host - deploy a single service instance on each host
          [Figure: multiHost.png]
  • Service instance per VM - a specialization of the Service Instance per Host pattern where the host is a VM
          [Figure: VM.png]
  • Service instance per container - a specialization of the Service Instance per Host pattern where the host is a container (see the sketch after this list)
          [Figure: container.png]
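As a minimal sketch of the Service Instance per Container pattern (the service names and images below are hypothetical), each service gets exactly one container, and several such containers can share one Docker host:

    # One service instance per container, co-located on the same host
    docker run -d --name orders example/orders-service:1.0
    docker run -d --name payments example/payments-service:1.0
    # List the running instances and the image each one came from
    docker ps --format '{{.Names}}\t{{.Image}}'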

Container or VM?

As of today there is a significant trend in the industry to move from VMs to containers for deploying software applications. The main reasons for this are the flexibility and lower cost that containers provide compared to VMs. Google has used container technology for many years, running Google applications at scale on its Borg and Omega container cluster management platforms. More importantly, Google has contributed to the container space by implementing cgroups and participating in the libcontainer project. Google is said to have gained enormously in performance, resource utilization, and overall efficiency from containers over the past years. Very recently Microsoft, which did not have operating-system-level virtualization on the Windows platform, took quick action to implement native support for containers on Windows Server.

[Figure: VM_vs_Docker.png]


I found a nice comparison on the internet between VMs and containers, which likens them to houses and apartments.
Houses (the VMs) are fully self-contained and offer protection from unwanted guests. They also each possess their own infrastructure – plumbing, heating, electrical, etc. Furthermore, in the vast majority of cases houses are all going to have at a minimum a bedroom, living area, bathroom, and kitchen. I’ve yet to ever find a “studio house” – even if I buy the smallest house I may end up buying more than I need because that’s just how houses are built.
Apartments (the containers) also offer protection from unwanted guests, but they are built around shared infrastructure. The apartment building (the Docker host) shares plumbing, heating, electrical, etc. Additionally, apartments are offered in all kinds of different sizes, from studio to multi-bedroom penthouse. You're only renting exactly what you need. Finally, just like houses, apartments have front doors.
There are design-level differences between these two concepts. Containers share the underlying resources of the host while providing an isolated environment, and they provide only the resources the application needs to run. VMs are different: each one first boots an operating system and then starts your application, and, like it or not, that OS brings a default set of unwanted services that consume resources.
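This sharing is easy to observe on a Linux Docker host: a container reports the host's kernel version, because there is no guest OS in between.

    # Both commands print the same kernel release on a Linux Docker host
    uname -r
    docker run --rm alpine uname -r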
Before moving into the actual comparison, let's look at how we can deploy a microservice instance in any environment. The environment can be a single host or multiple hosts: multiple containers in a single VM, a single container per VM, or a dedicated physical environment. It is not just a matter of starting the application on a VM or deploying it in a web container; we need an automated way to manage it. As an example, AWS provides good VM management capabilities for any deployment: if we use VMs for deployment, we normally build a VM image containing the required application components, and from that image we can spawn any number of instances.
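For example (the instance and image IDs below are placeholders), with the AWS CLI we can bake an image from a configured instance and then spawn as many copies as we need:

    # Bake a VM image from an instance that already has the service installed
    aws ec2 create-image --instance-id i-0123456789abcdef0 --name "orders-service-v1"
    # Later, launch any number of identical instances from the resulting AMI
    aws ec2 run-instances --image-id ami-0123456789abcdef0 --count 3 --instance-type m4.large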
Similar to AWS's VM management, we need a container management platform for containers as well, because when we need to scale a specific service we cannot manually monitor the environment and start new instances; it should be automated. As an example we can use Kubernetes. It extends Docker's capabilities by allowing a cluster of Linux containers to be managed as a single system, running Docker containers across multiple hosts and offering co-location of containers, service discovery, and replication control.
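As a minimal sketch (the deployment and image names are hypothetical), with Kubernetes scaling a service across the cluster becomes a one-line operation instead of a manual, per-host task:

    # Run a service on the cluster, then scale it without touching individual hosts
    kubectl create deployment image-service --image=example/image-service:1.0
    kubectl scale deployment image-service --replicas=5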
Both VMs and containers are designed to provide an isolated environment. Additionally, in both cases that environment is represented as a binary artifact that can be moved between hosts. There may be other similarities, but those are the major ones as I see it.
In a VM-centered world, the unit of abstraction is a monolithic VM that stores not only the application code, but often its stateful data. A VM takes everything that used to sit on a physical server and packs it into a single binary so it can be moved around, but it is still the same thing. With containers, the abstraction is the application, or more accurately a service that helps to make up the application.
This matters most when we scale up: with VMs, scaling means spawning another VM instance, which takes some time to start (OS boot time plus application boot time), whereas with a Docker-like container deployment we can start a new container instance within milliseconds (application boot time only).
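This difference is easy to measure on any Docker host (timings vary with hardware and whether the image is cached):

    # Cold-starting a minimal container typically completes in well under a second
    time docker run --rm alpine /bin/true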

Another important factor is patching existing services. Since we cannot write code that is entirely free of defects, we will certainly need to patch it, and patching in a microservices environment is a little tricky because we may have more than a hundred instances to patch. With a VM-based deployment we need to build a new VM image that includes the patches and use it for the deployment. That is not an easy task: there can be more than a hundred microservices, and we would need to maintain a different VM image for each of them. With a Docker-like container-based deployment this is not an issue, since we can configure the Docker image to pick up the patches from a configured location. We can achieve a similar result with a Puppet script in the VM environment, but Docker has that capability out of the box. Therefore the total configuration and software update propagation time is much faster with the container approach.
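A minimal sketch of that workflow (the image names, tags, and patch file below are hypothetical): a patched image is just a thin new layer on top of the existing release, so only that layer has to propagate to each host:

    # Build the patched image as a new layer over the current release
    cat > Dockerfile <<'EOF'
    FROM example/orders-service:1.4.2
    COPY patches/fix-1234.jar /opt/service/lib/
    EOF
    docker build -t example/orders-service:1.4.3 .
    docker push example/orders-service:1.4.3
    # Each host then pulls only the new layer, not a whole new VM image
    docker pull example/orders-service:1.4.3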
A heavier car needs more fuel to reach higher speeds than a car of the same spec with less weight. Sports car manufacturers adhere to this concept and use lightweight materials such as aluminum and carbon fiber to improve fuel efficiency. The same theory applies to software systems: the heavier the software components, the more computation power they need. Traditional virtual machines use a dedicated operating system instance to provide an isolated environment for software applications. This operating system instance needs additional memory, disk, and processing power on top of the computation power needed by the applications. Linux containers solve this problem by reducing the weight of the isolated unit of execution: hundreds of containers share the host operating system kernel. The following diagram illustrates a sample scenario of how many resources containers can save compared to virtual machines.
[Figure: RESOURCE.png]

We cannot say that container-based deployment is the best option for microservices in every case; each deployment has its own constraints. We therefore need to carefully select one approach, or a hybrid of both, based on our requirements.

Source: http://blog.docker.com
