Monday, January 23, 2017

Why is container-based deployment preferred for microservices?

When it comes to a microservices architecture, the deployment of microservices plays a critical role and has the following key requirements.
Ability to deploy/un-deploy independently of other microservices.
Developers never need to coordinate the deployment of changes that are local to their service. These kinds of changes can be deployed as soon as they have been tested. The UI team can, for example, perform A/B testing and rapidly iterate on UI changes. The microservices architecture pattern makes continuous deployment possible.

Must be able to scale each microservice independently (a given service may get more traffic than other services).
In a monolithic application it is difficult to scale individual portions of the application. If one service is memory intensive and another CPU intensive, the server must be provisioned with enough memory and CPU to handle the baseline load for each service. This can get expensive if each server needs a large amount of CPU and RAM, and it is exacerbated if load balancing is used to scale the application horizontally. Finally, and more subtly, the engineering team structure will often start to mirror the application architecture over time.


We can overcome this by using microservices. Any service can be individually scaled based on its resource requirements. Rather than having to run large servers with lots of CPU and RAM, microservices can be deployed on smaller hosts containing only the resources required by that service.
For example, you can deploy a CPU-intensive image processing service on EC2 Compute Optimized instances and deploy an in-memory database service on EC2 Memory-optimized instances.

Building and deploying Microservices quickly.
One of the key drawbacks of a monolithic application is that it is difficult to scale. As explained in the section above, the whole application must be replicated in order to scale. With a microservices architecture we can scale specific services, since each service is deployed in an isolated environment. Nowadays dynamically scaling an application is very common, and every IaaS provider has that capability (e.g. Elastic Load Balancing). With that approach we need to be able to launch the application in an isolated environment quickly.

The following are the basic deployment patterns commonly seen in the industry.
  • Multiple service instances per host - deploy multiple service instances on a host
  • Service instance per host - deploy a single service instance on each host
  • Service instance per VM - a specialization of the Service Instance per Host pattern where the host is a VM
  • Service instance per Container - a specialization of the Service Instance per Host pattern where the host is a container

Container or VM?

As of today there is a significant trend in the industry to move from VMs to containers for deploying software applications. The main reasons for this are the flexibility and low cost that containers provide compared to VMs. Google has used container technology for many years, with the Borg and Omega container cluster management platforms running Google applications at scale. More importantly, Google has contributed to the container space by implementing cgroups and participating in the libcontainer project. Google has reportedly gained enormously in performance, resource utilization and overall efficiency by using containers over the past years. Very recently Microsoft, which did not have operating-system-level virtualization on the Windows platform, moved quickly to implement native support for containers on Windows Server.


I found a nice comparison on the internet between VMs and containers, which likens them to houses and apartments.
Houses (the VMs) are fully self-contained and offer protection from unwanted guests. They also each possess their own infrastructure – plumbing, heating, electrical, etc. Furthermore, in the vast majority of cases houses are all going to have at a minimum a bedroom, living area, bathroom, and kitchen. I’ve yet to ever find a “studio house” – even if I buy the smallest house I may end up buying more than I need because that’s just how houses are built.
Apartments (the containers) also offer protection from unwanted guests, but they are built around shared infrastructure. The apartment building (the Docker host) shares plumbing, heating, electrical, etc. Additionally, apartments are offered in all kinds of different sizes - studio to multi-bedroom penthouse. You're only renting exactly what you need. Finally, just like houses, apartments have front doors.
There are design-level differences between these two concepts. Containers share the underlying resources while providing an isolated environment, and they provide only the resources the application needs to run. VMs are different: they first boot an OS and then start your application. Like it or not, that OS brings a default set of unwanted services that consume resources.
Before moving to the actual comparison, let's see how we can deploy a microservice instance in any environment. The environment could be multiple containers in a single VM, a single container in a single VM, or a dedicated host. It is not just a matter of starting the application on a VM or deploying it in a web container; we should have an automated way to manage it. As an example, AWS provides good VM management capabilities for any deployment. If we use VMs for deployment, we normally build a VM image with the required application components and use it to spawn any number of instances.
Similar to AWS VM management, we need a container management platform for containers as well, because when we need to scale a specific service we cannot manually monitor the environment and start new instances; it should be automated. As an example we can use Kubernetes. It extends Docker's capabilities by allowing a cluster of Linux containers to be managed as a single system, managing and running Docker containers across multiple hosts, and offering co-location of containers, service discovery, and replication control.
Both VMs and containers are designed to provide an isolated environment. Additionally, in both cases that environment is represented as a binary artifact that can be moved between hosts. There may be other similarities, but those are the major ones as I see them.
In a VM-centered world, the unit of abstraction is a monolithic VM that stores not only application code, but often its stateful data. A VM takes everything that used to sit on a physical server and packs it into a single binary so it can be moved around, but it is still the same thing. With containers, the abstraction is the application, or more accurately a service that helps to make up the application.
This is very useful when scaling up instances: with VMs we need to spawn another VM instance, which takes some time to start (OS boot time plus application boot time), but with a Docker-like container deployment we can start a new container instance within a few milliseconds (application boot time only).

Another important factor is patching existing services. Since we cannot develop code without any issues, we will definitely need to patch the code. Patching code in a microservices environment is a little bit tricky because we may have more than 100 instances to patch. With a VM deployment, we need to build a new VM image containing the patches and use it for the deployment. That is not an easy task, because there can be more than 100 microservices and we would need to maintain many different VM images; but with a Docker-like container-based deployment this is not an issue, since we can configure the Docker image to fetch these patches from a configured location. We can achieve a similar result with a Puppet script in the VM environment, but Docker has that capability out of the box. Therefore the total configuration and software update propagation time is much faster with the container approach.
A heavier car may need more fuel to reach higher speeds than a car of the same spec with less weight. Sports car manufacturers always adhere to this concept and use lightweight materials such as aluminum and carbon fiber to improve fuel efficiency. The same theory applies to software systems: the heavier the software components, the more computation power they need. Traditional virtual machines use a dedicated operating system instance to provide an isolated environment for software applications. This operating system instance needs additional memory, disk and processing power on top of the computation power needed by the applications. Linux containers solve this problem by reducing the weight of the isolated unit of execution, sharing the host operating system kernel among hundreds of containers. The following diagram illustrates a sample scenario of how many resources containers can save compared to virtual machines.

We cannot say that container-based deployment is the best for microservices in every case; it depends on different constraints. So we need to carefully select one, or a hybrid of both, based on our requirements.


Wednesday, January 18, 2017

How can the Disruptor be used to improve the performance of interdependent filters/handlers?

In the typical filter or handler pattern we have a set of data and a set of filters/handlers, and we filter the available data set using the available filters.
These filters may have dependencies (in a business case this could be a sequence dependency or a data dependency), e.g. filter 2 depends on filter 1, while some filters have no dependencies on others. With the existing approach, some time-consuming filters are designed to use several threads to process the received records in parallel to improve performance.

However, we execute each filter one after another. Even though we use multiple threads for the most time-consuming filters, we need to wait until all the records are finished before executing the next filter. Sometimes we need to populate data from the database for certain filters, but with the existing architecture we need to wait until the relevant filter is executed.
We can improve this by using non blocking approach as much as possible. Following diagram shows the proposed architecture.

[Diagram: proposed Disruptor-based architecture]

According to the diagram, we publish routes to the Disruptor (the Disruptor is a simple ring buffer, but with many performance optimizations such as cache padding) and we have multiple handlers running on different threads. Each handler belongs to a different filter, and we can add more handlers to the same filter as required. The major advantage is that we can process all the routes simultaneously. Cases like dependencies between handlers can be handled at the implementation level. With this approach we don't need to wait until all the routes are filtered by a single filter. Another advantage is that we can add separate handlers to populate data for future use.
Disruptors normally consume more resources, and how much depends on the waiting strategies used for the handlers. So we need to decide what kind of Disruptor configuration pattern to use for the application: a single Disruptor, a single Disruptor per user, multiple Disruptors based on configuration, or a dedicated Disruptor for selected filters (handlers) and a different one for the other handlers.
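The dependency handling described above is what the Disruptor's sequence barriers provide. Below is a minimal, single-threaded sketch of that sequence-gating idea in plain Java; the class and field names are my own, and the real LMAX Disruptor adds wait strategies, cache-line padding and multi-threaded coordination on top of this:

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.function.Consumer;

// Minimal sketch of sequence gating: filter 2 advances only behind
// filter 1's sequence, encoding "filter 2 depends on filter 1".
public class MiniPipeline {
    final Object[] buffer;
    final int mask;
    final AtomicLong published = new AtomicLong(-1);   // last slot written
    final AtomicLong filter1Done = new AtomicLong(-1); // filter 1's progress
    long filter2Done = -1;                             // filter 2's progress

    MiniPipeline(int sizePow2) {
        buffer = new Object[sizePow2];
        mask = sizePow2 - 1;
    }

    void publish(Object route) {
        long seq = published.get() + 1;     // single-producer assumption
        buffer[(int) (seq & mask)] = route;
        published.set(seq);
    }

    // Filter 1 may run as soon as the producer has published.
    void runFilter1(Consumer<Object> f) {
        long next = filter1Done.get() + 1;
        while (next <= published.get()) {
            f.accept(buffer[(int) (next & mask)]);
            filter1Done.set(next++);
        }
    }

    // Filter 2 gates on filter 1's sequence, not on the producer's.
    void runFilter2(Consumer<Object> f) {
        long next = filter2Done + 1;
        while (next <= filter1Done.get()) {
            f.accept(buffer[(int) (next & mask)]);
            filter2Done = next++;
        }
    }
}
```

With the real library, the same wiring is expressed declaratively, e.g. chaining handlers so one group runs behind another; independent filters attach directly to the ring buffer and run fully in parallel.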

Monday, November 21, 2016

Application Monitoring

In the software world, application monitoring is critical for administrators as well as for maintenance (application support) teams. Monitoring is obviously very useful for administrators, who need to monitor the real-time behavior of the application to give an uninterrupted service to end users, but it is just as important for support teams to track down application issues.

Support is one of the most important phases of the software development life cycle after delivering the product. End users report different kinds of issues, and support engineers need information about the application's behaviour to solve them. Some issues are domain related and can simply be recreated in a local environment; fixing an issue is not a big deal if we can reproduce the same behavior in a local setup. But some issues are not easy to replicate locally because they do not happen continuously in the production setup, so identifying the exact root cause is the challenge. Concurrency issues, thread-spinning issues and memory issues are at the top of the list. Software developers should have a proper plan for reporting the status of the application, with the required details, when the application has issues. Putting log messages with the proper details in the proper places is most important, but in some cases, such as high CPU usage, the developer needs more information, like a thread dump, to track the issue.

Support engineers or developers may identify an issue by looking at the logs, thread dumps or heap dumps, but some cases need application-specific information, and a proper monitoring mechanism can fulfil that requirement. There are different types of monitoring applications available in the industry for different purposes, but they are all built as general-purpose tools; the application developer needs to implement an application-specific monitoring mechanism to achieve this.

Note:- A proper monitoring mechanism can also be a marketing factor, because clients can incorporate the JMX APIs into their existing monitoring dashboards seamlessly, or we can provide our own monitoring dashboard to customers.

JMX (Java Management Extensions)

The JMX technology provides the tools for building distributed, web-based, modular and dynamic solutions for managing and monitoring devices, applications, and service-driven networks. Starting with the J2SE platform 5.0, JMX technology is included in the Java SE platform. JMX is the recommended way to monitor and manage Java applications; for example, an administrator can stop or start the application or change its configuration dynamically. Monitoring and management are the basic uses of JMX. JMX can even be used to design fully modularized applications whose modules can be enabled and disabled at any time, but the main intention of this article is to discuss the management and monitoring capabilities of JMX.

JMX architecture.

Three main layers can be identified in the JMX architecture.

  1. Probe Level
The level closest to the application is called the instrumentation layer, or probe layer. This level consists of four approaches for instrumenting application and system resources to make them manageable (i.e., making them managed beans, or MBeans), as well as a model for sending and receiving notifications. This is the most important level for developers, because it is where resources are prepared to be manageable. Two main categories can be identified at the instrumentation level.

  • Application resources (e.g. connection pool, thread pool, etc.)
An application resource that needs to be manageable through JMX must provide metadata about its features; this is known as its management interface. Management applications may interact with the resource via the management interface.

  • Instrumentation strategy.
There are four instrumentation approaches defined by JMX that we can use to describe the management interface of a resource: standard, dynamic, model, and open.

  2. Agent Level
The agent level of the JMX architecture is made up of the MBean server and the JMX agent services. The MBean server has two purposes: it serves as a registry of MBeans and as a communications broker between MBeans and management applications (and other JMX agents). The JMX agent services provide additional functionality that is mandated by the JMX specification, such as scheduling and dynamic loading.
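The probe and agent levels can be sketched together in a few lines: a standard MBean describes the management interface, and the platform MBean server acts as the registry and broker. This is a minimal sketch; the ConnectionPool names and the object-name domain are illustrative, not taken from any particular product.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard MBean instrumentation: the management interface must be
// named <ImplementationClass>MBean for the standard approach to apply.
interface ConnectionPoolMBean {
    int getActiveConnections(); // read-only attribute
    int getMaxSize();           // read/write attribute
    void setMaxSize(int size);
}

class ConnectionPool implements ConnectionPoolMBean {
    private volatile int maxSize = 10;
    private volatile int active = 0;
    public int getActiveConnections() { return active; }
    public int getMaxSize() { return maxSize; }
    public void setMaxSize(int size) { maxSize = size; }
}

public class JmxDemo {
    // Agent level: register the MBean so management applications can
    // read and write its attributes through the MBean server.
    public static ObjectName register(ConnectionPool pool) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("com.example:type=ConnectionPool");
        server.registerMBean(pool, name);
        return name;
    }
}
```

Once registered, a tool such as JConsole sees the MaxSize and ActiveConnections attributes under com.example without any further code.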

  3. Remote Management Level
Top level of the JMX architecture is called the distributed services level. This level contains the middleware that connects JMX agents to applications that manage them (management applications). This middleware is broken into two categories: protocol adaptors and connectors.
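As a connector example at this level, the JDK ships an RMI connector that exposes an MBean server to remote management applications. The sketch below (the port is illustrative) publishes the platform MBean server so a tool such as JConsole could attach to it:

```java
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import javax.management.MBeanServer;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class RemoteAgent {
    // Distributed services level: wrap the platform MBean server in an
    // RMI connector so remote management applications can reach it.
    public static JMXConnectorServer start(int port) throws Exception {
        LocateRegistry.createRegistry(port); // RMI registry for the JNDI lookup
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:" + port + "/jmxrmi");
        JMXConnectorServer cs =
                JMXConnectorServerFactory.newJMXConnectorServer(url, null, server);
        cs.start();
        return cs;
    }
}
```

In production you would add authentication and SSL through the environment map passed as the second argument to newJMXConnectorServer.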

Wednesday, November 2, 2016

Point A to Point B by Air

As a company or an individual, you may need to deliver goods from one place to another. The most important thing is selecting the shipping mode best suited to your logistics. The following are the major modes of shipment.

1. Ground
Land or "ground" shipping can be done by train or truck. For air and sea shipments, ground transportation is required to take the cargo from its place of origin to the airport or seaport; it is not always possible to establish a production facility near a port due to countries' limited coastlines.

2. Air
Cargo is transported by air in specialized cargo aircraft or in the luggage compartments of passenger aircraft.

3. Ship
Shipping is done by commercial ships. Merchant shipping is the lifeblood of the world economy, carrying 90% of international trade with 102,194 commercial ships worldwide.

4. Intermodal
Intermodal freight transport refers to shipments that involve more than one mode. 

Each of these modes has different pros and cons. We need to select the best-suited mode based on our requirements, because shipping cost directly affects the business. However, I will only discuss the air transport mode in this post.
Cost-wise, air transport is definitely not the cheapest mode, but it has some advantages that most modern businesses need. I have listed a few of them below.

1. The fastest shipping method
When your goods need to be moved quickly, air freight is the best solution compared to sea freight or ground transport.

2. Highly reliable arrival and departure times
The arrival and departure times of flights are highly reliable, as airlines tend to stay very much on top of their schedules. Even missing a flight wouldn't cause much delay, as there are usually flights departing every hour.

3. Send your cargo almost anywhere
Many airlines have a large network of destinations that covers almost the entire world. This means that you can send a shipment to nearly every destination.

4. Low insurance premium means large savings
As the transportation time for air cargo is comparatively short, the insurance premium is much lower. Even if air freight can be expensive, this brings about savings in terms of lower insurance costs.

5. High level of security and reduced risk of theft and damage
Airports enforce tight control over cargo, and the short transit time means the cargo spends less time exposed, reducing the risk of theft and damage.

6. Less need of warehousing and fewer items in stock
With the quicker transit times of air freight, local warehousing requirements are much smaller because stock doesn't need to be kept for long. Customs clearance, cargo inspection and cargo handling are also more efficient, as most cargo is cleared within a matter of hours.

7. Less packaging required
Normally, air shipments require less heavy packing than ocean shipments, so they save both the time and money spent on additional packaging.

8. Follow the status of cargo
Many companies offer the opportunity to track cargo using web applications, which means the arrival and departure of cargo can be closely monitored from anywhere in the world.

Let's say a company or individual has selected air cargo as the transportation mode for its shipments; there are several parties involved in the process. Airlines are the major party, as they own the carriers. We can find three major types of carriers in the air cargo arena.

  • Passenger aircraft use spare volume in the airplane's baggage hold (the "belly") that is not being used for passenger luggage - a common practice among passenger airlines, which transport cargo on scheduled passenger flights; this is known as belly cargo. Cargo can also be transported in the passenger cabin as hand-carry by an “on-board courier”.
  • Cargo aircraft are dedicated for the job - they carry freight on the main deck and in the belly by means of nose-loading or side loading.
  • Combi aircraft carries cargo on the main deck behind the passengers’ area with side loading and in the belly.

Passenger aircraft are the most common transport medium for air cargo, because dedicated cargo aircraft do not fly to each and every destination. However, cargo handling is not the main business of passenger airlines; their main focus is passenger handling, so cargo is considered a secondary business. One of the main reasons for this is that marketing is easier with passenger transportation: passenger seats can be marketed on facilities like business class, economy class, free Wi-Fi and meals, but this cannot be done for cargo.
Since cargo is a secondary business for them, passenger airlines do not put their full effort into it; hence most of the services are outsourced to several parties.
The following diagram briefly explains a few of them.

According to the diagram above, merchants do not deal directly with the airlines. There are two main parties between the merchant and the airline, and each party has its own responsibilities.

Merchants are the clients of the cargo industry; examples are Seagate (a hard disk manufacturer) and 3M (which provides cutting-edge health and safety products). They produce goods in their Singapore factories and need to ship products to different destinations (e.g. Sri Lanka) via air cargo. As I explained earlier, since cargo is a secondary business for the airlines, they do not allocate enough staff for services such as documentation, which is handled by third parties instead. So merchants deal directly with logistics providers (forwarders) for these purposes.

Normally a 3PL's job in this scenario is to deliver the goods to the required destinations, but beyond that they provide various other services in the logistics context, such as warehousing and local distribution.
Most of the largest 3PLs are multinational companies based in many countries; those that are not have partner 3PLs. The common practice is that the merchant sends a request to the 3PL to ship their goods to one or more destinations. Most large-scale merchants use multiple 3PLs to deliver their goods. The 3PLs are usually responsible for preparing the required documents for the goods, because different parties often need different documents to process the freight. They also do repackaging if required. The next step is to hand the goods over to the ground handling agents.
According to the diagram, forwarders play two different roles on the two sides (origin and destination). On the destination side, their responsibility is delivering the goods to the consumers. A consumer could be a company, a shopping mall or an individual.

Ground handling agents
First of all, let's see why we need ground handling agents.
Passenger airlines have many destinations (Singapore Airlines has 63 international destinations), so maintaining a separate cargo office at each and every destination is not feasible. Since this is a common issue for every airline, every airport has ground handling agents, most often owned by the national carrier of the country. Their responsibility is handling cargo on behalf of the airlines. SATS is the main ground handling agent at Changi Airport and is owned by Singapore Airlines; it provides cargo handling for Singapore Airlines as well as other airlines. There can be multiple ground handling agents at big airports.

Friday, May 20, 2016

How to monitor thread CPU usage in WSO2 products?

1. Download JConsole topthreads Plugin.

2. Add the following entries to the PRODUCT_HOME/bin/ startup script:
    -Djava.rmi.server.hostname=IP_ADDRESS \
Define your IP_ADDRESS and PORT (the port should not be used by anything else on that instance).

3. Run the JConsole using following command.
   jconsole -pluginpath PATH_TO_JAR/topthreads-1.1.jar

4. Copy the "JMXServerManager JMX Service URL" from the wso2carbon log after restarting the WSO2 server (e.g. service:jmx:rmi://localhost:11111/jndi/rmi://localhost:9999/jmxrmi) into the Remote Process field, along with the username and password.

5. Under Top Threads tab you can monitor the thread CPU usage.
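If you prefer a programmatic check over the JConsole plugin, the JDK's ThreadMXBean exposes per-thread CPU time. The sketch below (class name and output format are my own choices) lists the busiest threads of the current JVM:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class TopThreads {
    // Returns "threadName=cpuNanos" entries, busiest thread first.
    public static List<String> topByCpu(int limit) {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        List<String> out = new ArrayList<>();
        if (!mx.isThreadCpuTimeSupported()) return out;
        List<long[]> rows = new ArrayList<>(); // {threadId, cpuNanos}
        for (long id : mx.getAllThreadIds()) {
            long cpu = mx.getThreadCpuTime(id); // -1 if the thread has died
            if (cpu >= 0) rows.add(new long[] {id, cpu});
        }
        rows.sort(Comparator.comparingLong((long[] r) -> r[1]).reversed());
        for (long[] r : rows.subList(0, Math.min(limit, rows.size()))) {
            ThreadInfo info = mx.getThreadInfo(r[0]);
            if (info != null) out.add(info.getThreadName() + "=" + r[1]);
        }
        return out;
    }
}
```

Sampling this list twice and diffing the CPU times gives the per-interval usage that the topthreads plugin shows graphically.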

Tuesday, April 5, 2016

Enable SecureVault Support for - WSO2 ESB - MB 3.0

We cannot use ciphertool to automate the encryption process for selected elements in the file, because we can only specify XPath notation there, but we can still use the manual process.

Sample [ESB_home]/repository/conf/ file
# register some connection factories
# connectionfactory.[jndiname] = [ConnectionURL]
connectionfactory.QueueConnectionFactory = amqp://admin:admin@clientID/carbon?brokerlist

# register some queues in JNDI using the form
# queue.[jndiName] = [physicalName]
queue.MyQueue = example.MyQueue

# register some topics in JNDI using the form
# topic.[jndiName] = [physicalName]
topic.MyTopic = example.MyTopic
1. Enable secure vault in the ESB
sh -Dconfigure

2. Go to the [ESB_home]/bin directory and execute the following command to generate the encrypted value for the clear-text password.
3. It will prompt the following console input. Answer: wso2carbon
[Please Enter Primary KeyStore Password of Carbon Server : ]
4. Then a second prompt will appear for the following input value.
     (Answer: according to our property file, the plain text is "amqp://admin:admin@clientID/carbon?brokerlist='tcp://localhost:5672'".)

Encryption is done Successfully
Encrypted value is :cpw74SGeBNgAVpryqj5/xshSyW5BDW9d1UW0xMZDxVeoa6xS6CFtU

5. Open the file under [ESB_home]/repository/conf/security and add the following entry.
6. Open the [ESB_home]/repository/conf/ file and update the key/value of the connectionfactory field.

Thursday, November 5, 2015

How SSL tunneling works in the WSO2 ESB

This blog post assumes the reader has a basic understanding of SSL tunneling and of the basic message flow of the ESB. If you are not familiar with SSL tunneling concepts, you can refer to my previous blog post about SSL tunneling, and you can get a detailed idea of the message flow from this article.
I will give a brief introduction to the TargetHandler to make the concepts easier to understand. As you may already know, the TargetHandler (TH) is responsible for handling requests and responses on the backend side. It maintains a status (REQUEST_READY, RESPONSE_READY, etc.) based on events fired by the IOReactor, and executes the relevant methods. For example, when a response coming from the backend side hits the ESB, the IOReactor fires the responseReceived method on the TargetHandler side. The following are the basic methods of the target handler and their responsibilities.

  • connect: executed when a new outgoing connection is needed.
  • requestReady: executed by the IOReactor when new outgoing HTTP request headers are ready. Inside this method we write the HTTP headers to the relevant backend.
  • outputReady: responsible for writing the request body to the specified backend service.
  • responseReceived: executed when backend response headers come to the ESB.
  • inputReady: responsible for reading the backend response body.

Let me explain tunneling process step by step.

  1. If a proxy server is configured in the ESB, once a request comes to the target handler side, it creates a connection with the proxy server, because all requests need to go via the proxy server.
  2. Once the request headers come to the TH side (inside the requestReady method), those headers need to be sent to the actual backend, but we still don't have connectivity between the actual backend and the ESB.
    At this point we need to start initializing the tunnel between the ESB and the actual backend service, as I explained in the previous article. To initialize the tunnel, we first need to send a CONNECT request to the proxy server. If you check the TH requestReady code you can see how the CONNECT request is sent to the proxy server.
  3. The proxy server establishes the connection with the backend service once the CONNECT request hits it.
  4. Based on the CONNECT request sent from the TH requestReady method, the proxy server responds with a 200-range status code.
    This initially hits the TH responseReceived method, which upgrades the existing connection to a secure one by initiating a TLS handshake on that channel. Internally, the existing IOSession is upgraded into an SSLIOSession using the IOSession information.
    Since everything is now relayed to the backend server, it is as if the TLS exchange was done directly with the backend secure service.
  5. Now that the tunnel between the ESB and the backend secure service has been initialized, we need to send the actual request to the backend secure service. For that we need to trigger the target handler execution again. Under the responseReceived method you can see conn.resetInput(); conn.requestOutput(); being executed, which invokes the requestReady and outputReady methods to send the actual request to the backend.
How to set up SSL tunneling in the ESB?

  1. First you need to setup proxy server.
    Install Squid as described here.
  2. Configure for the ESB side.
    In /repository/conf/axis2/axis2.xml, add the following parameters to the transportSender configuration for PassThroughHttpSender,PassThroughHttpSSLSender, HttpCoreNIOSender, and HttpCoreNIOSSLSender:
    <parameter name="http.proxyHost" locked="false">hostIP</parameter>
    <parameter name="http.proxyPort" locked="false">portNumber</parameter>

    where hostIP and portNumber specify the IP address and port number of the proxy server. Uncomment the following parameter in the PassThroughHttpSSLSender and HttpCoreNIOSSLSender configurations and change the value to "AllowAll".
    <parameter name="HostnameVerifier">AllowAll</parameter>

    For example, if the host and port of the proxy server are localhost:8080, your transportSender configurations for PassThroughHttpSender and PassThroughHttpSSLSender would look like this:

What is SSL Tunneling?

You want to be able to access some restricted destinations and/or ports with some applications from your computer, but you are on a restricted (corporate) network - even using a torrent client.

How do we overcome this limitation?
What if the backend service is a secure one?
We can use SSL tunneling to overcome the above issues.

What is the SSL tunneling?

SSL tunneling is when an internal client application requests a web object using HTTPS through a proxy server (listening on, for example, port 8080).

An example of this is online shopping. Your connection to the target e-commerce website is tunneled through the proxy server. The key word here is through. After the initial connection has been established by the proxy server, the client communicates with the target web server directly, by means of communication within the SSL tunnel that is created after the SSL negotiation has taken place.

How does it work?

  1. The client makes a tunneling request: CONNECT server-host-name:port HTTP/1.1 (or HTTP/1.0). The port number is optional and is usually 443. The client application will automatically send the CONNECT request to the proxy server first for every HTTPS request if the forward proxy is configured in the browser.  

    RFC 2616 treats CONNECT as a way to establish a simple tunnel. There is more about it in RFC 2817, although the rest of RFC 2817 (upgrades to TLS within a non-proxy HTTP connection) is rarely used.
  2. The proxy accepts the connection on its port 8080, receives the request, and connects to the destination server on the port requested by the client.
  3. The proxy replies to the client that a connection is established with the 200 OK response.
  4. After this, the connection between the client and the proxy server is kept open. The proxy server relays everything on the client-proxy connection to and from the proxy-backend connection. The client upgrades its active connection to an SSL/TLS connection by initiating a TLS handshake on that channel.
    Since everything is now relayed to the backend server, it's as if the TLS exchange was done directly with it.
    The proxy server doesn't play any role in the handshake; the TLS handshake effectively happens directly between the client and the backend server.
  5. After the secure handshake is completed, the proxy sends and receives encrypted data to be decrypted at the client or at the destination server.
  6. If the client or the destination server requests a closure on either port, the proxy server closes both connections (ports 443 and 8080) and resumes its normal activity.
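To make step 1 concrete, here is a minimal sketch that builds the tunneling request a client writes to the proxy socket before starting the TLS handshake. The Host and Proxy-Connection headers are common additions rather than something mandated by the CONNECT specification, and the class name is my own:

```java
public class ConnectRequest {
    // Builds a CONNECT request in the RFC 2817 style of tunneling.
    // CRLF line endings and the blank line terminating the headers
    // are required by HTTP framing rules.
    public static String build(String host, int port) {
        return "CONNECT " + host + ":" + port + " HTTP/1.1\r\n"
             + "Host: " + host + ":" + port + "\r\n"
             + "Proxy-Connection: keep-alive\r\n"
             + "\r\n";
    }
}
```

After writing this to the proxy and reading a 200 response, the client would wrap the same socket in an SSL/TLS layer and speak HTTPS through it.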