Monday, November 21, 2016

Application Monitoring

In the software world, application monitoring is critical for administrators as well as for maintenance (application support) teams. Administrators need to monitor the real-time behavior of an application in order to give uninterrupted service to end users, while monitoring is equally important to support teams for tracking down application issues.

Support is the most important phase of the software development life cycle after delivering the product. End users report different kinds of issues, and support engineers need information about the application's behavior to solve them. Some issues are domain related, and we can simply recreate them in our local environment; fixing an issue is not a big deal if we can reproduce the same behavior in a local setup. Other issues are not easy to replicate locally because they do not happen continuously even in the production setup, so identifying the exact root cause is the challenge. Concurrency issues, thread spinning issues, and memory issues are at the top of that list. Software developers should have a proper plan for reporting the status of the application, with the required details, when the application has problems. Putting log messages with the proper details in the proper places is the most important part, but in some cases, such as high CPU usage, the developer needs more information, like a thread dump, to track the issue. Support engineers or developers may identify an issue by looking at the logs, thread dumps, or heap dumps, but some cases require application-specific information, and a proper monitoring mechanism can fulfil that requirement. There are different types of monitoring applications available in the industry for different purposes, but they are all developed as general-purpose tools; application developers need to implement an application-specific monitoring mechanism to achieve this.

Note: A proper monitoring mechanism can even be a marketing factor, because clients can incorporate JMX APIs into their existing monitoring dashboards seamlessly, or we can provide our own monitoring dashboard to customers.

JMX (Java Management Extensions)

The JMX technology provides the tools for building distributed, web-based, modular, and dynamic solutions for managing and monitoring devices, applications, and service-driven networks. Starting with J2SE 5.0, JMX technology is included in the Java SE platform, and it is the recommended way to monitor and manage Java applications. For example, an administrator can stop or start the application or change its configuration dynamically. Monitoring and management are the basic uses of JMX. JMX can even be used to design fully modularized applications whose modules can be enabled and disabled at any time, but the main intention of this article is to discuss the management and monitoring capabilities of JMX.

JMX architecture.

Three main layers can be identified in the JMX architecture.

  1. Probe Level
The level closest to the application is called the instrumentation level or probe level. This level consists of four approaches for instrumenting application and system resources to make them manageable (i.e., making them managed beans, or MBeans), as well as a model for sending and receiving notifications. This is the most important level for developers because it prepares resources to be manageable. Two main concerns can be identified when we consider the instrumentation level.

  • Application resources (e.g., connection pools, thread pools, etc.)
An application resource that needs to be manageable through JMX must provide metadata about its features; this metadata is known as its management interface. Management applications interact with the resource via its management interface.

  • Instrumentation strategy.
JMX defines four instrumentation approaches that can be used to describe the management interface of a resource: standard, dynamic, model, and open MBeans.
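To make the standard approach concrete, here is a minimal sketch of a standard MBean for a hypothetical connection pool resource. The ConnectionPool class, its attribute names, and the com.example domain are all illustrative, not taken from any real product:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MBeanDemo {

    // Standard-MBean naming convention: for a class ConnectionPool, JMX looks
    // for an implemented public interface named ConnectionPoolMBean.
    public interface ConnectionPoolMBean {
        int getActiveConnections();   // read-only attribute "ActiveConnections"
        int getMaxSize();             // read/write attribute "MaxSize"
        void setMaxSize(int maxSize);
    }

    // The managed resource implements its own management interface.
    public static class ConnectionPool implements ConnectionPoolMBean {
        private volatile int maxSize = 20;
        private volatile int active = 0;

        public int getActiveConnections() { return active; }
        public int getMaxSize()           { return maxSize; }
        public void setMaxSize(int max)   { this.maxSize = max; }
    }

    public static void main(String[] args) throws Exception {
        // Register the resource with the platform MBean server (agent level).
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("com.example:type=ConnectionPool");
        server.registerMBean(new ConnectionPool(), name);

        // A management application reads the attribute through the server.
        System.out.println("MaxSize = " + server.getAttribute(name, "MaxSize"));
    }
}
```

JConsole or any other JMX client attached to this JVM would list the MBean under com.example and could read ActiveConnections or change MaxSize at runtime.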

  2. Agent Level
The agent level of the JMX architecture is made up of the MBean server and the JMX agent services. The MBean server has two purposes: it serves as a registry of MBeans and as a communications broker between MBeans and management applications (and other JMX agents). The JMX agent services provide additional functionality that is mandated by the JMX specification, such as scheduling and dynamic loading.

  3. Remote Management Level
The top level of the JMX architecture is called the distributed services level. This level contains the middleware that connects JMX agents to the applications that manage them (management applications). This middleware is broken into two categories: protocol adaptors and connectors.
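A connector is what lets a remote management application reach the MBean server. As a minimal sketch using only Java SE (the port 10990 and the service URL are arbitrary choices for this example), an RMI connector server can be started and connected to like this:

```java
import java.lang.management.ManagementFactory;
import java.rmi.registry.LocateRegistry;
import javax.management.MBeanServer;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXConnectorServer;
import javax.management.remote.JMXConnectorServerFactory;
import javax.management.remote.JMXServiceURL;

public class ConnectorDemo {
    public static void main(String[] args) throws Exception {
        // The RMI connector publishes its stub in an RMI registry.
        LocateRegistry.createRegistry(10990);

        // Expose the platform MBean server over RMI (distributed services level).
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        JMXServiceURL url =
                new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:10990/jmxrmi");
        JMXConnectorServer server =
                JMXConnectorServerFactory.newJMXConnectorServer(url, null, mbs);
        server.start();

        // A management application (JConsole does the same) connects remotely.
        try (JMXConnector client = JMXConnectorFactory.connect(server.getAddress())) {
            MBeanServerConnection remote = client.getMBeanServerConnection();
            System.out.println("MBeans visible remotely: " + remote.getMBeanCount());
        }
        server.stop();
    }
}
```

In a real deployment the connector would be secured with authentication and SSL rather than the null environment map used here.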

Wednesday, November 2, 2016

Point A to Point B by Air

As a company or an individual, you may need to deliver your goods from one place to another. The most important decision is selecting the shipment mode best suited to your logistics. The following are the major modes of shipment.

1. Ground
Land or "ground" shipping can be done by train or truck. For air and sea shipments, ground transportation is still required to take the cargo from its place of origin to the airport or seaport, since it is not always possible to establish a production facility near a port due to countries' limited coastlines.

2. Air
Cargo is transported by air in specialized cargo aircraft or in the luggage compartments of passenger aircraft.

3. Ship
Shipping is done by commercial ships. Merchant shipping is the lifeblood of the world economy, carrying 90% of international trade with 102,194 commercial ships worldwide.

4. Intermodal
Intermodal freight transport refers to shipments that involve more than one mode. 

Each of these modes has different pros and cons. We need to select the best-suited mode based on our requirements, because shipping cost directly affects the business. However, I will only discuss the air transport mode in this post.
In terms of cost, air transport is definitely not the cheapest mode, but it has some advantages that most modern businesses need. I have listed a few of them below.

1. The fastest shipping method
When your goods need to be moved quickly, air freight is the best solution compared to sea freight or ground transport.

2. Highly reliable arrival and departure times
The arrival and departure times of flights are highly reliable, as airlines tend to stay very much on top of their schedules. Even missing a flight doesn't cause much delay, as there are usually flights departing every hour.

3. Send your cargo almost anywhere
Many airlines have a large network of destinations that covers almost the entire world. This means that you can send a shipment to nearly every destination.

4. Low insurance premium means large savings
As the transportation time for air cargo is comparatively short, the insurance premium is much lower. Even though air freight can be expensive, this brings savings in terms of lower insurance costs.

5. High level of security and reduced risk of theft and damage
Airports apply tight security controls over air cargo, which reduces the risk of theft and damage. The short transit and storage times also leave cargo with less exposure to theft or damage than slower modes.

6. Less need of warehousing and fewer items in stock
With the quicker transit times of air freight, less local warehousing is required, because stock doesn't need to be kept for long. Customs clearance, cargo inspection, and cargo handling are also more efficient, as most cargo is cleared within a matter of hours.

7. Less packaging required
Normally, air shipments require less heavy packing than ocean shipments, which saves both the time and the money spent on additional packing.

8. Follow the status of cargo
Many companies offer the ability to track cargo using web applications, which means the arrival and departure of cargo can be closely monitored from anywhere in the world.

Let's say a company or individual has selected air cargo as the transportation mode for its shipments; there are several parties involved in the process. The airlines are the major party, as they own the carriers. We can find three major types of carriers in the air cargo arena.

  • Passenger aircraft use spare volume in the airplane's baggage hold (the "belly") that is not being used for passenger luggage. Carrying freight on scheduled passenger flights is a common practice among passenger airlines and is known as belly cargo. Cargo can also be transported in the passenger cabin as hand-carry by an "on-board courier".
  • Cargo aircraft are dedicated for the job - they carry freight on the main deck and in the belly by means of nose-loading or side loading.
  • Combi aircraft carries cargo on the main deck behind the passengers’ area with side loading and in the belly.

Passenger aircraft are the most common transport medium for air cargo, because dedicated cargo aircraft do not commonly serve each and every destination. However, cargo handling is not the main business of passenger airlines; their main focus is passenger handling, so cargo is considered a secondary business. One of the main reasons for this is that marketing is easier with passenger transportation. For example, passenger seats can be marketed based on facilities like business class, economy class, free wifi, and meals, but this cannot be done for cargo.
Since cargo is a secondary business for them, passenger airlines do not put their full effort into it, and most of the related services are outsourced to several parties.
Following diagram explains few of them briefly.

According to the above diagram, you can see that merchants do not deal directly with the airlines. There are two main parties between the merchant and the airline, and each party has separate responsibilities.

Merchants are the clients of the cargo industry. As examples, take Seagate (a hard disk manufacturer) and 3M (a provider of cutting-edge health and safety products). They produce products in their Singapore factories and need to ship them to different destinations (e.g., Sri Lanka) via air cargo based on demand. As I explained earlier, since cargo is a secondary business for the airlines, they do not assign enough staff to this service, so most of the required work, such as documentation, is done by third parties. Merchants therefore deal directly with logistics providers (forwarders) for these purposes.

Normally, a 3PL's job in this scenario is to deliver the goods to the required destinations, but beyond that they provide various other services in the logistics context, such as warehousing and local distribution.
Most of the largest 3PLs are multinational companies based in many countries; if not, they have partner 3PLs. The common practice is that a merchant sends a request to a 3PL to ship its goods to one or more destinations. Most large-scale merchants use multiple 3PLs to deliver their goods. The 3PLs are usually responsible for preparing the required documents for the goods, because different parties often need different documents to process the freight. They also repackage the goods if required. The next step is handing the goods over to the ground handling agents.
According to the diagram, forwarders play two different roles on the two sides (origin and destination). On the destination side, their responsibility is delivering the goods to the consumers. A consumer could be a company, a shopping mall, or an individual.

Ground handling agents
First of all, let's see why we need ground handling agents.
Passenger airlines have many destinations (Singapore Airlines has 63 international destinations), so maintaining a separate cargo office at each and every destination is not feasible. Since this is a common issue for every airline, every airport has ground handling agents, most often owned by the national carrier of the country. Their responsibility is handling cargo on behalf of the airlines. SATS is the main ground handling agent at Changi Airport; it is owned by Singapore Airlines and provides cargo handling for Singapore Airlines as well as other airlines. There can be multiple ground handling agents in big airports.

Friday, May 20, 2016

How to monitor the Thread CPU usage in the WSO2 Products?

1. Download JConsole topthreads Plugin.

2. Add the following entries to the server startup script under PRODUCT_HOME/bin/:
    -Dcom.sun.management.jmxremote \
    -Dcom.sun.management.jmxremote.port=PORT \
    -Djava.rmi.server.hostname=IP_ADDRESS \
Define your IP_ADDRESS and PORT (the port should not be used by anything else on that instance).

3. Run the JConsole using following command.
   jconsole -pluginpath PATH_TO_JAR/topthreads-1.1.jar

4. After restarting the WSO2 server, copy the "JMXServerManager JMX Service URL" from the wso2carbon logs (e.g., service:jmx:rmi://localhost:11111/jndi/rmi://localhost:9999/jmxrmi) into the Remote Process field, along with the username and password.

5. Under the Top Threads tab you can monitor the CPU usage of each thread.

Tuesday, April 5, 2016

Enable SecureVault Support for WSO2 ESB - MB 3.0

We cannot use the ciphertool to automate the encryption process for selected elements in this file, because we can only specify XPath notation there, but we can still use the manual process.

Sample [ESB_home]/repository/conf/ file
# register some connection factories
# connectionfactory.[jndiname] = [ConnectionURL]
connectionfactory.QueueConnectionFactory = amqp://admin:admin@clientID/carbon?brokerlist

# register some queues in JNDI using the form
# queue.[jndiName] = [physicalName]
queue.MyQueue = example.MyQueue

# register some topics in JNDI using the form
# topic.[jndiName] = [physicalName]
topic.MyTopic = example.MyTopic
1. Enable secure vault in the ESB:
sh ciphertool.sh -Dconfigure

2. Go to [ESB_home]/bin and run the ciphertool again to generate the encrypted value for the clear-text password.
3. It will prompt for the following input value. (Answer: wso2carbon)
[Please Enter Primary KeyStore Password of Carbon Server : ]
4. It will then prompt for the plain-text value to encrypt.
     (Answer: according to our property file, the plain text is "amqp://admin:admin@clientID/carbon?brokerlist='tcp://localhost:5672'".)

Encryption is done Successfully
Encrypted value is :cpw74SGeBNgAVpryqj5/xshSyW5BDW9d1UW0xMZDxVeoa6xS6CFtU

5. Open the file under [ESB_home]/repository/conf/security and add the corresponding alias entry with the encrypted value.
6. Open the [ESB_home]/repository/conf/ file and update the key/value of the connectionfactory field.

Thursday, November 5, 2015

How SSL Tunneling Works in the WSO2 ESB

This blog post assumes that the reader has a basic understanding of SSL tunneling and of the basic message flow of the ESB. If you are not familiar with the concepts of SSL tunneling, you can refer to my previous blog post about SSL tunneling, and you can get a detailed idea about the message flow from this article.
I will give a brief introduction to the TargetHandler to make the concepts easier to understand. As you may already know, the TargetHandler (TH) is responsible for handling requests and responses on the backend side. It maintains a status (REQUEST_READY, RESPONSE_READY, etc.) based on the events fired by the IOReactor and executes the relevant methods. For example, if a response coming from the backend side hits the ESB, the IOReactor fires the responseReceived method on the TargetHandler. The following are the basic methods of the target handler and their responsibilities.

  • connect:- This is executed when a new outgoing connection is needed.
  • requestReady:- This is executed by the IOReactor when new outgoing HTTP request headers are ready. Inside this method we write the HTTP headers to the relevant backend.
  • outputReady:- Responsible for writing the request body to the specified backend service.
  • responseReceived:- This is executed when backend response headers arrive at the ESB.
  • inputReady:- Responsible for reading the backend response body.

Let me explain tunneling process step by step.

  1. If a proxy server is configured in the ESB, once a request comes to the target handler side, it creates a connection with the proxy server, because all requests need to go via the proxy server.
  2. Once the request headers come to the TH side (inside the requestReady method), those headers need to be sent to the actual backend, but at this point there is still no connectivity between the actual backend and the ESB.
    At this point we need to start initializing the tunnel between the ESB and the actual backend service, as I explained in the previous article. To initialize the tunnel, we first need to send a CONNECT request to the proxy server. If you check the TH requestReady code, you can see how the CONNECT request is sent to the proxy server.
  3. The proxy server establishes the connection with the backend service once the CONNECT request hits it.
  4. Based on the CONNECT request sent from the TH requestReady method, the proxy server responds with a 200-range status code.
    This response initially hits the TH responseReceived method, which upgrades the existing connection to a secure one by initiating a TLS handshake on that channel. Internally, the existing IOSession is upgraded into an SSLIOSession using the IOSession's information.
    Since everything is now relayed to the backend server, it is as if the TLS exchange were done directly with the backend secure service.
  5. Now that the tunnel between the ESB and the backend secure service has been initialized, we need to send the actual request to the backend secure service. For that we need to trigger the target handler execution again. Under the responseReceived method you can see conn.resetInput(); conn.requestOutput(); being executed to run the requestReady and outputReady methods, which send the actual request to the backend.
How to setup SSL handling in the ESB?

  1. First you need to setup proxy server.
    Install Squid as described here.
  2. Configure the ESB side.
    In /repository/conf/axis2/axis2.xml, add the following parameters to the transportSender configuration for PassThroughHttpSender,PassThroughHttpSSLSender, HttpCoreNIOSender, and HttpCoreNIOSSLSender:
    <parameter name="http.proxyHost" locked="false">hostIP</parameter>
    <parameter name="http.proxyPort" locked="false">portNumber</parameter>

    where hostIP and portNumber specify the IP address and port number of the proxy server. Uncomment the following parameter in the PassThroughHttpSSLSender and HttpCoreNIOSSLSender configurations and change the value to "AllowAll".
    <parameter name="HostnameVerifier">AllowAll</parameter>

    For example, if the host and port of the proxy server are localhost:8080, your transportSender configurations for PassThroughHttpSender and PassThroughHttpSSLSender would look like this:

What is SSL Tunneling?

You may want to access some restricted destinations and/or ports with certain applications from your computer, but you are on a restricted (corporate) network - perhaps even with a torrent client.

How do we overcome this limitation?
And what if the backend service is a secure one?
We can use SSL tunneling to overcome both issues.

What is the SSL tunneling?

SSL tunneling is when an internal client application requests a web object using HTTPS through a proxy server (listening on, say, port 8080).

An example of this is online shopping: the connection to the target e-commerce website is tunneled through the proxy server. The key word here is through. After the initial connection has been established via the proxy server, the client communicates with the target web server directly, by means of communication within the SSL tunnel created once the SSL negotiation has taken place.

How does it work?

  1. The client makes a tunneling request: CONNECT server-host-name:port HTTP/1.1 (or HTTP/1.0). The port number is optional and is usually 443. If a forward proxy is configured in the browser, the client application automatically sends the CONNECT request to the proxy server first for every HTTPS request.

    RFC 2616 treats CONNECT as a way to establish a simple tunnel. There is more about it in RFC 2817, although the rest of RFC 2817 (upgrades to TLS within a non-proxy HTTP connection) is rarely used.
  2. The proxy accepts the connection on its port 8080, receives the request, and connects to the destination server on the port requested by the client.
  3. The proxy replies to the client that a connection is established with the 200 OK response.
  4. After this, the connection between the client and the proxy server is kept open. The proxy server relays everything on the client-proxy connection to and from the proxy-backend connection. The client then upgrades its active connection to an SSL/TLS connection by initiating a TLS handshake on that channel.
    Since everything is now relayed to the backend server, it's as if the TLS exchange were done directly with the backend server.
    The proxy server doesn't play any role in the handshake; the TLS handshake effectively happens directly between the client and the backend server.
  5. After the secure handshake is completed, the proxy sends and receives encrypted data to be decrypted at the client or at the destination server.
  6. If the client or the destination server requests a closure on either port, the proxy server closes both connections (ports 443 and 8080) and resumes its normal activity.
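The CONNECT exchange in steps 1-3 can be sketched in plain Java. The two helper names below (buildConnectRequest and tunnelEstablished) are hypothetical, made up for illustration; after a successful reply, a real client would wrap the same socket with SSLSocketFactory so the TLS handshake runs through the tunnel:

```java
public class ConnectTunnel {

    // Step 1: the tunneling request a client sends to the forward proxy.
    static String buildConnectRequest(String host, int port) {
        return "CONNECT " + host + ":" + port + " HTTP/1.1\r\n"
             + "Host: " + host + ":" + port + "\r\n"
             + "\r\n";
    }

    // Step 3: any 2xx status line from the proxy means the tunnel is open.
    static boolean tunnelEstablished(String statusLine) {
        String[] parts = statusLine.split(" ", 3);
        return parts.length >= 2 && parts[1].startsWith("2");
    }

    public static void main(String[] args) {
        System.out.print(buildConnectRequest("secure.example.com", 443));
        System.out.println(tunnelEstablished("HTTP/1.1 200 Connection established"));
        System.out.println(tunnelEstablished("HTTP/1.1 407 Proxy Authentication Required"));
    }
}
```

Note that only the CONNECT request and the proxy's status line travel in plain text; everything after a 2xx reply is opaque, relayed bytes as far as the proxy is concerned.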

Sunday, October 4, 2015

How to write a Synapse Handler for the WSO2 ESB?

The Synapse handler is a new feature that comes with ESB 4.9.0. It provides an abstract handler implementation, and users can create their own concrete handlers that execute in the Synapse layer. The main intention of this blog post is to explain how to write a Synapse handler and to cover the basic theoretical background.

1. What is a handler?

Handlers are basically an application of the chain of responsibility pattern. Chain of responsibility allows a number of classes to attempt to handle a request independently of any other object along the chain. Once the request is handled, it completes its journey through the chain.
The Handler defines the interface required to handle the request, and concrete handlers handle the request in the specific manner they are responsible for.
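The pattern can be sketched in a few lines of Java. The handler names and the request prefixes here are made up for illustration; each link either handles the request or passes it down the chain:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class ChainDemo {

    // Each link either handles the request (returns a result) or declines (returns null).
    static final List<Function<String, String>> CHAIN = Arrays.asList(
            req -> req.startsWith("auth:") ? "auth handler" : null,
            req -> req.startsWith("log:")  ? "log handler"  : null,
            req -> "default handler"       // end of the chain: always handles
    );

    // Walk the chain; the request's journey ends at the first handler that takes it.
    static String dispatch(String request) {
        for (Function<String, String> handler : CHAIN) {
            String result = handler.apply(request);
            if (result != null) {
                return result;
            }
        }
        return null; // unreachable: the last link always handles the request
    }

    public static void main(String[] args) {
        System.out.println(dispatch("log:incoming message"));
        System.out.println(dispatch("something else"));
    }
}
```

The key property is that no link knows or cares which other link ends up handling the request.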

2. What is a Synapse handler?

The Synapse handler provides an abstract handler implementation that executes in the following four scenarios.

1. Request in flow
This executes when a request hits the Synapse engine.

public boolean handleRequestInFlow(MessageContext synCtx);

2. Request out flow
This executes when a request goes out from the Synapse engine.

public boolean handleRequestOutFlow(MessageContext synCtx);

3. Response in flow
This executes when a response hits the Synapse engine.

public boolean handleResponseInFlow(MessageContext synCtx);

4. Response out flow
This executes when a response goes out from the Synapse engine.

public boolean handleResponseOutFlow(MessageContext synCtx);

The following diagram shows the basic component structure of the ESB and how the above scenarios execute in the request and response flows.

3. How to write a concrete Synapse handler?

You can implement a concrete handler by implementing the SynapseHandler (org.apache.synapse.SynapseHandler) interface or by extending the AbstractSynapseHandler (org.apache.synapse.AbstractSynapseHandler) class.

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.synapse.AbstractSynapseHandler;
import org.apache.synapse.MessageContext;

public class TestHandler extends AbstractSynapseHandler {

    private static final Log log = LogFactory.getLog(TestHandler.class);

    public boolean handleRequestInFlow(MessageContext synCtx) {
        log.info("Request In Flow");
        return true;
    }

    public boolean handleRequestOutFlow(MessageContext synCtx) {
        log.info("Request Out Flow");
        return true;
    }

    public boolean handleResponseInFlow(MessageContext synCtx) {
        log.info("Response In Flow");
        return true;
    }

    public boolean handleResponseOutFlow(MessageContext synCtx) {
        log.info("Response Out Flow");
        return true;
    }
}
4. Configuration

You need to add the following configuration item to the synapse-handler.xml file (under repository/conf) to enable the deployed handler.

    <handler name="TestHandler" class="package.TestHandler"/>

5. Deployment

The handler can be deployed to the ESB as an OSGi bundle or as a JAR file.


Sunday, September 13, 2015

How does the Scheduled Failover Message Processor help with guaranteed delivery?

Before we talk about the failover message forwarding processor, it’s better to understand the big picture of the concepts and use cases. The Scheduled Failover Message Forwarding Processor is part of the bigger picture of the message store and message processor.

Message Store and Message Processor.
WSO2 ESB's message stores and message processors are used to store incoming messages and then deliver them to a particular backend with added Quality of Service (QoS), such as throttling and guaranteed delivery. The basic advantage of this pattern (MSMP) is that it allows you to send messages reliably to a backend service. The messages can be stored in different kinds of reliable storage, such as JMS or JDBC message stores. The MSMP pattern is powered by three basic components:

1. Store Mediator.
The store mediator is a Synapse mediator that can be used to store messages in a message store.

2. Message Store.
A message store is storage in the ESB for messages. WSO2 ESB comes with four types of message store implementations - In-Memory, JMS, JDBC, and RabbitMQ message stores. Users also have the option to create a custom message store with their own implementation. For more information, refer to the Message Stores section.

3. Message Processor.
The message processor is used to consume messages from the message store and send them to the defined endpoint. Currently there are forwarding and sampling message processor implementations; the ESB also provides the facility to add custom message processors. Other than forwarding messages to the endpoint, the message processor provides features that help with guaranteed delivery, such as throttling and retrying messages when the endpoint is not available.
For more information, please refer to the Message Processor documentation.

How to achieve guaranteed delivery?
Guaranteed message delivery means that once a message enters the system, it is delivered to the defined endpoint without being lost during processing. We can identify a few areas that should be considered when talking about guaranteed delivery in a message processor scenario:

1. Storing the message
Message store unavailability is the main scenario we can identify for message loss while the store mediator is trying to store a message in the message store. It can happen due to reasons such as network failures, a message store crash, or a system shutdown for maintenance. This kind of situation can be overcome by different approaches.

  • Configure a message store cluster
We can configure a message store cluster as one solution. This lets us avoid a single point of failure.
  • Define a failover message store.
ESB 4.9.0 introduced a new feature allowing you to define a failover message store, so the store mediator can store messages in the failover message store if the original message store is not available. Full details of the failover message store are discussed in a later section.

2. Retrieving the message from the message store.
We need to decide when to remove a message from the store after the message processor retrieves it, because otherwise the message can be lost in some cases. The ESB has a mechanism to signal the message store to remove a message only after it has been successfully sent to the endpoint; if the message cannot be sent successfully, the ESB does not remove it from the message store.

3. Sending the message to the defined endpoint.
This scenario is technically part of forwarding the message to the endpoint. As discussed in the previous section, messages are removed from the message store only after they are sent successfully to the endpoint. Message processors provide a retry mechanism to resend messages if the endpoint is not available. Even though the retry mechanism does not by itself provide guaranteed delivery, it helps to successfully send messages off to the endpoint.

The message processor does have more functionality which can be used to tune the processor to improve guaranteed delivery; however, that’s outside the scope of this blog post. You can find more information about the message processor from here.

Failover message store and scheduled failover message processor scenario.

As discussed in the earlier section, the failover message store is used as a solution for message store failures. The store mediator forwards messages to the failover message store if the original message store fails.

First we need to define the failover message store. It can be any type of message store available in the ESB, and no special configuration is needed to designate a store as the failover message store - when you define the original message store, you simply select the failover message store to use in failure situations. When a failure happens, all incoming messages are forwarded to the failover message store by the store mediator.

The next problem is how to move the messages that were forwarded to the failover message store back to the original message store once it becomes available again. This is where the failover message processor comes in. It is the same as the scheduled message forwarding processor, except that where the forwarding processor sends messages to an endpoint, this one forwards messages to a message store.

The following example explains how to setup a complete message processor scenario with the failover configurations.

1. Create the failover message store.
You don't need to specify any special configuration here. Keep in mind that an in-memory message store is used for this example; in-memory message stores cannot be used in clustered setups, since they cannot be shared among the cluster nodes.

  <messageStore name="failover"/>  

2. Create the original message store for storing messages.
A JMS message store is used for this example. To enable guaranteed delivery on the producer side (i.e., to configure the failover message store), you need to enable the "Producer Guaranteed Delivery" property and specify the failover message store, both located under the "Show Guaranteed Delivery Parameters" section.

     class="" name="Orginal">  
     <parameter name="">failover</parameter>  
     <parameter name="">true</parameter>  
     <parameter name="java.naming.factory.initial">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>  
     <parameter name="java.naming.provider.url">tcp://localhost:61616</parameter>  
     <parameter name="store.jms.JMSSpecVersion">1.1</parameter>  

3. Create a proxy service that sends messages to the original message store using the store mediator.

 <proxy name="Proxy1" transports="https http" startOnLoad="true" trace="disable">    
     <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>  
     <property name="OUT_ONLY" value="true"/>  
     <log level="full"/>  
     <store messageStore="Orginal"/>  

4. Define the endpoint for the scheduled message forwarding processor.
The SimpleStockQuoteService sample service is used as the backend service.

 <endpoint name="SimpleStockQuoteService">  
  <address uri=""/>  

5. Add a scheduled message forwarding processor to forward messages to the previously defined endpoint.
This link contains more information about the scheduled message forwarding processor.

     <messageProcessor class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor" messageStore="Orginal" name="ForwardMessageProcessor" targetEndpoint="SimpleStockQuoteService">  
         <parameter name="client.retry.interval">1000</parameter>  
         <parameter name="throttle">false</parameter>  
         <parameter name="max.delivery.attempts">4</parameter>  
         <parameter name="member.count">1</parameter>  
         <parameter name="max.delivery.drop">Disabled</parameter>  
         <parameter name="interval">1000</parameter>  
         <parameter name="is.active">true</parameter>  
         <parameter name="target.endpoint">SimpleStockQuoteService</parameter>  
     </messageProcessor>  

6. Add a scheduled failover message processor to forward messages from the failover message store back to the original message store.
When defining a failover message processor, you need to fill in two mandatory parameters that are central to the failover scenario:
  • Source message store.
  • Target message store.
In the failover scenario, the failover message processor sends messages from the failover store to the original store once the original store becomes available. In this configuration, the source message store should be the failover message store, and the target message store should be the original message store.

     <messageProcessor class="org.apache.synapse.message.processor.impl.failover.FailoverScheduledMessageForwardingProcessor" messageStore="failover" name="FailoverMessageProcessor">  
         <parameter name="client.retry.interval">60000</parameter>  
         <parameter name="throttle">false</parameter>  
         <parameter name="max.delivery.attempts">1000</parameter>  
         <parameter name="member.count">1</parameter>  
         <parameter name="max.delivery.drop">Disabled</parameter>  
         <parameter name="interval">1000</parameter>  
         <parameter name="is.active">true</parameter>  
         <parameter name="message.target.store.name">Orginal</parameter>  
     </messageProcessor>  

Other than the above mandatory parameters, there are a few optional parameters under the “Additional Parameters” section, described in the parameter reference below.

7. Send Request to the Proxy Service
Navigate to the /samples/axis2client directory and execute the following command to invoke the proxy service.
 ant stockquote -Daddurl=http://localhost:8280/services/Proxy1 -Dmode=placeorder  
A message similar to the following example will be printed in the Axis2 Server console.
 SimpleStockQuoteService :: Accepted order for : 7482 stocks of IBM at $ 169.27205579038733  

Parameter reference:

  • Forwarding interval (interval): The interval in milliseconds at which the processor consumes messages.
  • Retry interval (client.retry.interval): The message retry interval in milliseconds.
  • Maximum delivery attempts (max.delivery.attempts): The maximum number of redelivery attempts before deactivating the processor. This is used when the backend server is inactive and the ESB tries to resend the message.
  • Drop message after maximum delivery attempts (max.delivery.drop): If this parameter is set to Enabled, the message will be dropped from the message store after the maximum number of delivery attempts, and the message processor will remain activated. If it is disabled, the undeliverable message will not be dropped and the message processor will be deactivated. This parameter has no effect when no value is specified for the Maximum delivery attempts parameter.
  • Fault sequence name (message.processor.fault.sequence): The name of the sequence to which the fault message should be sent in case of a SOAP fault.
  • Deactivate sequence name (message.processor.deactivate.sequence): The sequence that will be executed when the processor is deactivated automatically. Automatic deactivation occurs when the maximum delivery attempts are exceeded and the Drop message after maximum delivery attempts parameter is disabled.
  • Quartz configuration file path (quartz.conf): The path to a Quartz properties file containing configuration parameters for fine-tuning the Quartz engine.
  • Cron Expression (cronExpression): The cron expression used to configure the retry pattern.
  • Task Count (Cluster Mode): The required number of worker nodes when you need to run the processor on more than one worker node. Specifying this does not guarantee that the processor will run on every worker node; there can be instances where the processor will not run on some worker nodes.
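The interplay between the Maximum delivery attempts and Drop message after maximum delivery attempts parameters can be summarized as a small decision function. This is a minimal sketch of the documented behavior, not WSO2's implementation; the function name and return shape are invented for illustration:

```python
def after_failed_delivery(attempts, max_delivery_attempts, drop_enabled):
    """Return (drop_message, deactivate_processor) after a failed attempt.

    attempts: delivery attempts made so far for this message.
    max_delivery_attempts: the max.delivery.attempts value, or None if unset.
    drop_enabled: True when max.delivery.drop is set to Enabled.
    """
    if max_delivery_attempts is None or attempts < max_delivery_attempts:
        return (False, False)   # keep the message and keep retrying
    if drop_enabled:
        return (True, False)    # drop the message; processor stays active
    return (False, True)        # keep the message; deactivate the processor
```

For example, with `max.delivery.attempts` set to 4, the fourth failure either drops the message (drop enabled, processor stays active) or deactivates the processor (drop disabled); when no maximum is set, the drop setting has no effect and retries continue.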

To test the failover scenario, you can shut down the JMS broker (the original message store) and send a few messages to the proxy service. These messages are not sent to the backend, since the original message store is not available; instead, you can see them accumulating in the failover message store. If you check the ESB log, you can see the failover message processor periodically trying to forward messages to the original message store. Once the original message store becomes available again, the failover processor sends those messages to the original store, and the forwarding message processor sends them on to the backend service.