Friday, May 20, 2016

How to monitor thread CPU usage in WSO2 products?

1. Download JConsole topthreads Plugin.

2. Add the following entries to PRODUCT_HOME/bin/wso2server.sh:
    -Dcom.sun.management.jmxremote \
    -Dcom.sun.management.jmxremote.port=PORT \
    -Dcom.sun.management.jmxremote.ssl=false \
    -Dcom.sun.management.jmxremote.authenticate=false \
    -Djava.rmi.server.hostname=IP_ADDRESS \
Replace IP_ADDRESS and PORT with your server's IP address and a port that is not already in use on that instance.
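For example, if the server's IP address were 192.168.1.10 and port 9999 were free on that machine (both values are only illustrative), the same entries would read:
    -Dcom.sun.management.jmxremote \
    -Dcom.sun.management.jmxremote.port=9999 \
    -Dcom.sun.management.jmxremote.ssl=false \
    -Dcom.sun.management.jmxremote.authenticate=false \
    -Djava.rmi.server.hostname=192.168.1.10 \
These flags are appended alongside the other -D options in the server start command of wso2server.sh.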

3. Run JConsole using the following command:
   jconsole -pluginpath PATH_TO_JAR/topthreads-1.1.jar

4. Copy "JMXServerManager JMX Service URL" from the wso2carbon logs after restart the Wso2 Server (Eg:- service:jmx:rmi://localhost:11111/jndi/rmi://localhost:9999/jmxrmi) to the Remote process with the username and password.

5. Under the Top Threads tab you can monitor the CPU usage of each thread.

Tuesday, April 5, 2016

Enable SecureVault Support for jndi.properties - WSO2 ESB - MB 3.0

We cannot use ciphertool to automate the encryption of selected elements in the jndi.properties file, because ciphertool only supports XPath notation for selecting elements; however, we can still use the manual process.

Sample [ESB_home]/repository/conf/jndi.properties file
# register some connection factories
# connectionfactory.[jndiname] = [ConnectionURL]
connectionfactory.QueueConnectionFactory = amqp://admin:admin@clientID/carbon?brokerlist
='tcp://localhost:5672'

# register some queues in JNDI using the form
# queue.[jndiName] = [physicalName]
queue.MyQueue = example.MyQueue

# register some topics in JNDI using the form
# topic.[jndiName] = [physicalName]
topic.MyTopic = example.MyTopic
1. Enable secure vault in the ESB:
sh ciphertool.sh -Dconfigure

2. Go to [ESB_home]/bin and execute the following command to generate the encrypted value for the plain-text password:
sh ciphertool.sh
3. It will prompt for the following input value. Answer: wso2carbon
[Please Enter Primary KeyStore Password of Carbon Server : ]
4. It will then prompt for a second input value.
     (Answer: according to our property file, the plain text is "amqp://admin:admin@clientID/carbon?brokerlist='tcp://localhost:5672'".)

Encryption is done Successfully
Encrypted value is :cpw74SGeBNgAVpryqj5/xshSyW5BDW9d1UW0xMZDxVeoa6xS6CFtU
+ESoR9jRjyA1JRHutZ4SfzfSgSzy2GQJ/2jQIw70IeT5EQEAR8XLGaqlsE5IlNoe9dhyLiPXEPRGq4k/BgU
QDYiBg0nU7wRsR8YXrvf+ak8ulX2yGv0Sf8=

5. Open the cipher-text.properties file, which is under [ESB_home]/repository/conf/security and add the following entry.
connectionfactory.QueueConnectionFactory=cpw74SGeBNgAVpryqj5/xshSyW5BDW9d1UW0xMZ
DxVeoa6RjyA1JRHutZ4SfzfSgSzy2GQJ/2jQIw70IeT5EQEAR8XLGaqlsE5IlNoe9dhyLiPXEPRGq4k/BgUQD
YiBg0nU7wRsR8YXrvf+ak8ulX2yGv0Sf8=
6. Open the [ESB_home]/repository/conf/jndi.properties file and update the value of the connectionfactory entry as follows:
connectionfactory.QueueConnectionFactory=secretAlias:connectionfactory.QueueConnectionFactory
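When the server restarts, secure vault needs the primary keystore password to decrypt this value, so the console will prompt for it at startup. As an optional convenience (the file names below follow standard WSO2 Carbon behaviour as far as I know; verify against your product version), you can supply the password non-interactively:

 # created in the product home; read once and deleted at the first startup
 echo "wso2carbon" > [ESB_home]/password-tmp
 # use a file named password-persist instead if the password should survive restarts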



Thursday, November 5, 2015

How SSL tunneling works in the WSO2 ESB

This blog post assumes the reader has a basic understanding of SSL tunneling and of the basic message flow of the ESB. If you are not familiar with SSL tunneling, you can refer to my previous blog post about it, and you can get a detailed idea of the message flow from this article.
I will give a brief introduction to the TargetHandler to make the concepts easier to follow. As you may already know, the TargetHandler (TH) is responsible for handling requests and responses on the backend side. It maintains a state (REQUEST_READY, RESPONSE_READY, etc.) based on the events fired by the IOReactor and executes the relevant methods. For example, when a response coming from the backend hits the ESB, the IOReactor fires the responseReceived method on the TargetHandler. The following are the basic methods of the TargetHandler and their responsibilities (a simplified sketch follows the list).

  • connect: executed when a new outgoing connection is needed.
  • requestReady: executed by the IOReactor when new outgoing HTTP request headers are ready to be sent. Inside this method the HTTP headers are written to the relevant backend.
  • outputReady: responsible for writing the request body to the specified backend service.
  • responseReceived: executed when the backend response headers arrive at the ESB.
  • inputReady: responsible for reading the backend response body.
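The real TargetHandler lives inside the Synapse PassThrough transport; the following is only a simplified, illustrative sketch (the class and method bodies are made up for this post, only the callback names mirror the list above):

public class SimplifiedTargetHandler {

    public void connect() {
        // open a new outgoing connection (to the backend, or to the proxy when one is configured)
    }

    public void requestReady() {
        // write the outgoing HTTP request headers;
        // for SSL tunneling this is where the CONNECT request is first sent to the proxy
    }

    public void outputReady() {
        // write the request body to the backend service
    }

    public void responseReceived() {
        // response headers arrived from the backend (or proxy);
        // for tunneling, a 2xx answer to CONNECT triggers the TLS upgrade of the connection
    }

    public void inputReady() {
        // read the backend response body
    }
}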

Let me explain the tunneling process step by step.


  1. If a proxy server is configured in the ESB, once a request reaches the TargetHandler side, it creates a connection with the proxy server, because all requests need to go via the proxy server.
  2. Once the request headers reach the TH side (inside the requestReady method), those headers need to be sent to the actual backend, but at this point there is still no connectivity between the ESB and the actual backend.
    This is where we start initializing the tunnel between the ESB and the actual backend service, as explained in the previous article. To initialize the tunnel, we first need to send a CONNECT request to the proxy server. If you check the TH requestReady code, you can see how the CONNECT request is sent to the proxy server.
  3. The proxy server establishes the connection with the backend service once the CONNECT request hits it.
  4. Based on the CONNECT request sent from the TH requestReady method, the proxy server responds with a 200-range status code.
    This response first hits the TH responseReceived method, which upgrades the existing connection to a secure one by initiating a TLS handshake on that channel. Internally, the existing IOSession is upgraded to an SSLIOSession using the IOSession's information.
    Since everything is now relayed to the backend server, it is as if the TLS exchange was done directly with the backend secure service.
  5. Now that the tunnel between the ESB and the backend secure service has been initialized, we need to send the actual request to the backend secure service. For that we need to trigger the TargetHandler execution again. Under the responseReceived method you can see the calls
     conn.resetInput(); conn.requestOutput(); which cause the requestReady and outputReady methods to run and send the actual request to the backend.
How to set up SSL tunneling in the ESB?

  1. First you need to set up a proxy server.
    Install Squid as described here.
  2. Configure the ESB side.
    In /repository/conf/axis2/axis2.xml, add the following parameters to the transportSender configurations for PassThroughHttpSender, PassThroughHttpSSLSender, HttpCoreNIOSender, and HttpCoreNIOSSLSender:
    <parameter name="http.proxyHost" locked="false">hostIP</parameter>
    <parameter name="http.proxyPort" locked="false">portNumber</parameter>

    where hostIP and portNumber specify the IP address and port number of the proxy server. Uncomment the following parameter in the PassThroughHttpSSLSender and HttpCoreNIOSSLSender configurations and change the value to "AllowAll":
    <parameter name="HostnameVerifier">AllowAll</parameter>

    For example, if the host and port of the proxy server are localhost:8080, your transportSender configurations for PassThroughHttpSender and PassThroughHttpSSLSender would look like this:
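    A sketch of what those configurations might look like (the class names and the non-blocking parameter follow the default axis2.xml; the SSL sender's keystore parameters are omitted here for brevity):

    <transportSender name="http" class="org.apache.synapse.transport.passthru.PassThroughHttpSender">
        <parameter name="non-blocking" locked="false">true</parameter>
        <parameter name="http.proxyHost" locked="false">localhost</parameter>
        <parameter name="http.proxyPort" locked="false">8080</parameter>
    </transportSender>

    <transportSender name="https" class="org.apache.synapse.transport.passthru.PassThroughHttpSSLSender">
        <parameter name="non-blocking" locked="false">true</parameter>
        <parameter name="http.proxyHost" locked="false">localhost</parameter>
        <parameter name="http.proxyPort" locked="false">8080</parameter>
        <parameter name="HostnameVerifier">AllowAll</parameter>
        <!-- keystore and truststore parameters from the default configuration remain here -->
    </transportSender>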

What is SSL Tunneling?

Suppose you want to access some restricted destinations and/or ports with certain applications from your computer, but you are on a restricted (corporate) network - even something like a torrent client.

How do we overcome this limitation?
What if the backend service is a secure one?
We can use SSL tunneling to overcome these issues.


What is the SSL tunneling?

SSL tunneling is when a client application on an internal network requests a web object over HTTPS through a proxy server (which in this example listens on port 8080).


An example of this is online shopping: your connection to the target e-commerce website is tunneled through the proxy server. The key word here is through. After the proxy server has established the initial connection, the client communicates with the target web server directly, within the SSL tunnel that is created once SSL negotiation has taken place.

How does it work?




  1. The client makes a tunneling request: CONNECT server-host-name:port HTTP/1.1 (or HTTP/1.0). The port number is optional and is usually 443. The client application will automatically send the CONNECT request to the proxy server first for every HTTPS request if the forward proxy is configured in the browser.  
    CONNECT www.example.com:443 HTTP/1.1
    Host: www.example.com:443

    RFC 2616 treats CONNECT as a way to establish a simple tunnel. There is more about it in RFC 2817, although the rest of RFC 2817 (upgrades to TLS within a non-proxy HTTP connection) is rarely used.
  2. The proxy accepts the connection on its port 8080, receives the request, and connects to the destination server on the port requested by the client.
  3. The proxy replies to the client that a connection is established with the 200 OK response.
  4. After this, the connection between the client and the proxy server is kept open, and the proxy server relays everything on the client-proxy connection to and from the proxy-backend connection. The client then upgrades its active, tunneled connection to an SSL/TLS connection by initiating a TLS handshake on that channel.
    Since everything is now relayed to the backend server, it's as if the TLS exchange was done directly with www.example.com:443.
    The proxy server doesn't play any role in the handshake. The TLS handshake effectively happens directly between the client and the backend server.
  5. After the secure handshake is completed, the proxy sends and receives encrypted data to be decrypted at the client or at the destination server.
  6. If the client or the destination server requests a closure on either port, the proxy server closes both connections (ports 443 and 8080) and resumes its normal activity.
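If you want to observe this exchange yourself, one easy way (assuming a forward proxy such as Squid listening on localhost:8080) is to send an HTTPS request through the proxy with curl in verbose mode; curl issues the CONNECT request and then performs the TLS handshake inside the tunnel:

 curl -v -x http://localhost:8080 https://www.example.com/
 # the verbose output shows something like:
 #   > CONNECT www.example.com:443 HTTP/1.1
 #   < HTTP/1.1 200 Connection established
 # followed by the TLS handshake and the actual GET request through the tunnel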

Sunday, October 4, 2015

How to write a Synapse Handler for the WSO2 ESB?


Synapse handlers are a new feature that comes with ESB 4.9.0. The feature provides an abstract handler implementation, and users can create their own concrete handlers that execute in the Synapse layer. The main intention of this blog post is to explain how to write a Synapse handler, along with the basic theoretical background.

1. What is a handler?

Handlers are essentially an application of the chain of responsibility pattern. Chain of responsibility allows a number of classes to attempt to handle a request independently of any other object along the chain. Once the request is handled, it completes its journey through the chain.
The Handler defines the interface required to handle the request, and concrete handlers handle the request in the specific manner they are responsible for.
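As a minimal, generic sketch of the pattern (this is not WSO2-specific; all class names here are made up for illustration):

abstract class Handler {

    private Handler next;

    Handler setNext(Handler next) {
        this.next = next;
        return next;
    }

    void handle(String request) {
        // if this handler did not fully handle the request, pass it along the chain
        if (!process(request) && next != null) {
            next.handle(request);
        }
    }

    // return true when the request has been fully handled
    abstract boolean process(String request);
}

class LoggingHandler extends Handler {
    boolean process(String request) {
        System.out.println("logging: " + request);
        return false;   // only observes; never terminates the chain
    }
}

class AuthHandler extends Handler {
    boolean process(String request) {
        // handles (and stops) any request that is not allowed
        return !request.startsWith("allowed:");
    }
}

A chain can then be built by creating a LoggingHandler, calling setNext(new AuthHandler()) on it, and invoking handle("allowed:GET /stock") on the first handler; each concrete handler decides whether the request continues down the chain.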




2. What is a Synapse handler?

A Synapse handler provides an abstract handler implementation that executes in the following four scenarios.





1. Request in flow
This executes when a request hits the Synapse engine.

public boolean handleRequestInFlow(MessageContext synCtx);

2. Request out flow
This executes when a request goes out from the Synapse engine.

public boolean handleRequestOutFlow(MessageContext synCtx);

3. Response in flow
This executes when a response hits the Synapse engine.

public boolean handleResponseInFlow(MessageContext synCtx);

4. Response out flow
This executes when a response goes out from the Synapse engine.

public boolean handleResponseOutFlow(MessageContext synCtx);

The following diagram shows the basic component structure of the ESB and how the above scenarios execute in the request and response flow.





3. How to write a concrete Synapse handler?

You can implement a concrete handler by implementing the SynapseHandler (org.apache.synapse.SynapseHandler) interface or by extending the AbstractSynapseHandler (org.apache.synapse.AbstractSynapseHandler) class.


import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.synapse.AbstractSynapseHandler;
import org.apache.synapse.MessageContext;

public class TestHandler extends AbstractSynapseHandler {

    private static final Log log = LogFactory.getLog(TestHandler.class);

    @Override
    public boolean handleRequestInFlow(MessageContext synCtx) {
        log.info("Request In Flow");
        return true;
    }

    @Override
    public boolean handleRequestOutFlow(MessageContext synCtx) {
        log.info("Request Out Flow");
        return true;
    }

    @Override
    public boolean handleResponseInFlow(MessageContext synCtx) {
        log.info("Response In Flow");
        return true;
    }

    @Override
    public boolean handleResponseOutFlow(MessageContext synCtx) {
        log.info("Response Out Flow");
        return true;
    }
}


4. Configuration

You need to add the following configuration item to the synapse-handler.xml file (under repository/conf) to enable the deployed handler.

<handlers>
    <handler name="TestHandler" class="package.TestHandler"/>
</handlers>

5. Deployment

The handler can be deployed to the ESB as an OSGi bundle or as a JAR file, as shown below.
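As a sketch (the JAR name below is hypothetical; repository/components/lib is the usual drop-in location for plain JARs in a Carbon-based product, while OSGi bundles go to repository/components/dropins):

 cp target/test-synapse-handler-1.0.0.jar [ESB_home]/repository/components/lib/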


  

Sunday, September 13, 2015

How does the scheduled failover message processor help with guaranteed delivery?

Before we talk about the failover message forwarding processor, it’s better to understand the big picture of the concepts and use cases. The Scheduled Failover Message Forwarding Processor is part of the bigger picture of the message store and message processor.

Message Store Message Processor.
WSO2 ESB's message stores and message processors are used to store incoming messages and then deliver them to a particular backend with added Quality of Service (QoS), such as throttling and guaranteed delivery. The basic advantage of the MSMP pattern is that it allows you to send messages reliably to a backend service. These messages can be kept in different reliable storage such as JMS or JDBC message stores. The MSMP pattern is powered by three basic components:



1. Store Mediator.
The Store mediator is a Synapse mediator that is used to store messages in a message store.

2. Message Store.
A message store is storage in the ESB for messages. The WSO2 ESB comes with four types of message store implementations - In-Memory, JMS, JDBC and RabbitMQ message stores. Users also have the option to create a custom message store with their own implementation. For more information, refer to the Message Stores section.

3. Message Processor.
The Message Processor is used for consuming messages from the message store and sending them to the defined endpoint. Currently there are message forwarding and message sampling processor implementations; the ESB also provides the facility to add custom message processors. Other than forwarding messages to the endpoint, the message processor provides features that help with guaranteed delivery, such as throttling and message retries when the endpoint is not available.
For more information, please refer to the Message Processor documentation.


How to achieve guaranteed delivery?
Guaranteed message delivery means that once a message enters the ecosystem, it must be delivered to the defined endpoint without being lost during processing. We can identify a few areas that should be considered when we talk about guaranteed delivery in a message processor scenario:

1. Store message
Message store unavailability is the only scenario we can identify for message loss when the store mediator tries to store a message in the message store. This can happen due to reasons such as network failures, a message store crash, or a system shutdown for maintenance. This kind of situation can be overcome by different approaches.


  • Configure a message store cluster
We can configure a message store cluster as one solution for this. This lets us avoid a single point of failure.
  • Define a failover message store.
ESB 4.9.0 introduced a new feature that allows you to define a failover message store, so that messages are stored in the failover store if the original message store is not available. Full details of the failover message store are discussed in a later section.

2. Retrieve message from message store.
We need to decide when a message should be removed from the store after the message processor retrieves it, because otherwise the message can be lost in some cases. The ESB has a mechanism to signal the message store to remove a message only after it has been successfully sent to the endpoint; the ESB will not remove the message from the message store if it cannot be successfully sent.



3. Send message to the defined endpoint.
This scenario is technically part of the 'forwarding message to the endpoint' scenario. As discussed under the "Retrieve message from message store" section, messages are removed from the message store only after they are sent successfully to the endpoint. Message processors provide a retry mechanism to resend messages if the endpoint is not available. Even though the retry mechanism does not by itself provide guaranteed delivery, it helps get messages successfully delivered to the endpoint.

The message processor does have more functionality which can be used to tune the processor to improve guaranteed delivery; however, that’s outside the scope of this blog post. You can find more information about the message processor from here.

Failover message store and scheduled failover message processor scenario.

As discussed in the earlier section, the failover message store is used as a solution for message store failures. The store mediator forwards messages to the failover message store if the original message store fails.

First we need to define the failover message store. It can be any type of message store available in the ESB. No special configuration is needed to define the failover message store itself - when you define the original message store, you simply select the failover message store to be used in failure situations. When a failure happens, all incoming messages are forwarded to the failover message store by the store mediator.

The next problem is how to move messages that were forwarded to the failover message store back to the original message store when it becomes available again. This is where the failover message processor comes in. It is the same as the scheduled message forwarding processor, except that where the forwarding processor sends messages to endpoints, this one forwards messages to a message store.

The following example explains how to set up a complete message processor scenario with the failover configurations.

1. Create the failover message store.
You don't need to specify any special configuration here. Keep in mind that an in-memory message store is used for this example, but in-memory message stores cannot be used in cluster setups, since they cannot be shared among cluster nodes.

  <messageStore name="failover"/>  

2. Create the original message store for storing messages.
A JMS message store is used for this example. To enable guaranteed delivery on the producer side (i.e. to configure the failover message store), you need to enable the "Producer Guaranteed Delivery" property and specify the failover message store, both of which are located under the "Show Guaranteed Delivery Parameters" section.

 <messageStore  
     class="org.apache.synapse.message.store.impl.jms.JmsStore" name="Orginal">  
     <parameter name="store.failover.message.store.name">failover</parameter>  
     <parameter name="store.producer.guaranteed.delivery.enable">true</parameter>  
     <parameter name="java.naming.factory.initial">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>  
     <parameter name="java.naming.provider.url">tcp://localhost:61616</parameter>  
     <parameter name="store.jms.JMSSpecVersion">1.1</parameter>  
   </messageStore>  


3. Create a proxy service to send messages to the original message store using the store mediator.

 <proxy name="Proxy1" transports="https http" startOnLoad="true" trace="disable">    
  <target>  
    <inSequence>  
     <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>  
     <property name="OUT_ONLY" value="true"/>  
     <log level="full"/>  
     <store messageStore="Orginal"/>  
    </inSequence>  
  </target>  
 </proxy>  

4. Define the endpoint for the scheduled message forwarding processor.
The SimpleStockQuote service is used as the backend service.

 <endpoint name="SimpleStockQuoteService">  
  <address uri="http://127.0.0.1:9000/services/SimpleStockQuoteService"/>  
 </endpoint>  

5. Add a scheduled message forwarding processor to forward messages to the previously defined endpoint.
This link contains more information about the scheduled message forwarding processor.

 <messageProcessor  
     class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"  
     messageStore="Orginal" name="ForwardMessageProcessor" targetEndpoint="SimpleStockQuoteService">  
     <parameter name="client.retry.interval">1000</parameter>  
     <parameter name="throttle">false</parameter>  
     <parameter name="max.delivery.attempts">4</parameter>  
     <parameter name="member.count">1</parameter>  
     <parameter name="max.delivery.drop">Disabled</parameter>  
     <parameter name="interval">1000</parameter>  
     <parameter name="is.active">true</parameter>  
     <parameter name="target.endpoint">SimpleStockQuoteService</parameter>  
   </messageProcessor>  

6. Add a scheduled failover message processor to forward messages from the failover message store to the original message store.
When you define a failover message processor, you need to fill in two mandatory parameters that are very important for the failover scenario:
  • Source message store.
  • Target message store.
In the failover scenario, the failover message processor sends messages from the failover store to the original store once the latter is available again. In this configuration, the source message store should be the failover message store and the target message store should be the original message store.


 <messageProcessor  
     class="org.apache.synapse.message.processor.impl.failover.FailoverScheduledMessageForwardingProcessor"  
     messageStore="failover" name="FailoverMessageProcessor">  
     <parameter name="client.retry.interval">60000</parameter>  
     <parameter name="throttle">false</parameter>  
     <parameter name="max.delivery.attempts">1000</parameter>  
     <parameter name="member.count">1</parameter>  
     <parameter name="max.delivery.drop">Disabled</parameter>  
     <parameter name="interval">1000</parameter>  
     <parameter name="is.active">true</parameter>  
     <parameter name="message.target.store.name">Orginal</parameter>  
   </messageProcessor>  

Other than the above mandatory parameters, there are a few optional parameters under the "Additional Parameters" section; these are summarized further below.

7. Send a request to the proxy service.
Navigate to the /samples/axis2client directory and execute the following command to invoke the proxy service.
 ant stockquote -Daddurl=http://localhost:8280/services/Proxy1 -Dmode=placeorder  
A message similar to the following example is printed on the Axis2 server console.
 SimpleStockQuoteService :: Accepted order for : 7482 stocks of IBM at $ 169.27205579038733  


Additional parameters:

  • Forwarding interval (interval): The interval, in milliseconds, at which the processor consumes messages.
  • Retry interval (client.retry.interval): The message retry interval in milliseconds.
  • Maximum delivery attempts (max.delivery.attempts): The maximum number of redelivery attempts before deactivating the processor. This is used when the backend server is inactive and the ESB tries to resend the message.
  • Drop message after maximum delivery attempts (max.delivery.drop): If this parameter is set to Enabled, the message is dropped from the message store after the maximum number of delivery attempts, and the message processor remains activated; the Maximum Delivery Attempts parameter applies when the backend is inactive and the message is resent. This parameter has no effect when no value is specified for Maximum Delivery Attempts. If this parameter is Disabled, the undeliverable message is not dropped and the message processor is deactivated.
  • Fault sequence name (message.processor.fault.sequence): The name of the sequence to which the fault message should be sent in case of a SOAP fault.
  • Deactivate sequence name (message.processor.deactivate.sequence): The sequence executed when the processor is deactivated automatically. Automatic deactivation occurs when the maximum delivery attempts are exceeded and the Drop message after maximum delivery attempts parameter is disabled.
  • Quartz configuration file path (quartz.conf): The path to the Quartz configuration file. This properties file contains the Quartz configuration parameters for fine-tuning the Quartz engine. More details can be found at http://quartz-scheduler.org/documentation/quartz-2.x/configuration/ConfigMain.
  • Cron expression (cronExpression): The cron expression used to configure the retry pattern.
  • Task count (cluster mode): The required number of worker nodes when you need to run the processor on more than one worker node. Specifying this does not guarantee that the processor will run on every worker node; there can be instances where it does not run on some worker nodes.


To test the failover scenario, you can shut down the JMS broker (the original message store) and send a few messages to the proxy service. These messages are not sent to the backend since the original message store is not available; you can see that they are stored in the failover message store instead. If you check the ESB log, you can see the failover message processor periodically trying to forward messages to the original message store. Once the original message store is available again, the failover processor sends those messages to the original store, and the forwarding message processor sends them on to the backend service.

Wednesday, September 9, 2015

Illegal key size or default parameters

When you run the pre-defined security scenarios in the WSO2 ESB, you have most probably already faced the "Illegal key size or default parameters" exception:

org.apache.axis2.AxisFault: Error in encryption
    at org.apache.rampart.handler.RampartSender.invoke(RampartSender.java:76)
    at org.apache.axis2.engine.Phase.invokeHandler(Phase.java:340)
    at org.apache.axis2.engine.Phase.invoke(Phase.java:313)
    at org.apache.axis2.engine.AxisEngine.invoke(AxisEngine.java:261)
    at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:426)
    at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:398)
    at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:224)
    at org.apache.axis2.client.OperationClient.execute(OperationClient.java:149)
    at org.apache.axis2.client.ServiceClient.sendReceive(ServiceClient.java:554)
    at org.apache.axis2.client.ServiceClient.sendReceive(ServiceClient.java:530)
    at SecurityClient.runSecurityClient(SecurityClient.java:111)
    at SecurityClient.main(SecurityClient.java:33)
Caused by: org.apache.rampart.RampartException: Error in encryption
    at org.apache.rampart.builder.SymmetricBindingBuilder.doSignBeforeEncrypt(SymmetricBindingBuilder.java:765)
    at org.apache.rampart.builder.SymmetricBindingBuilder.build(SymmetricBindingBuilder.java:86)
    at org.apache.rampart.MessageBuilder.build(MessageBuilder.java:144)
    at org.apache.rampart.handler.RampartSender.invoke(RampartSender.java:65)
    ... 11 more
Caused by: org.apache.ws.security.WSSecurityException: Cannot encrypt data; nested exception is: 
    org.apache.xml.security.encryption.XMLEncryptionException: Illegal key size or default parameters
Original Exception was java.security.InvalidKeyException: Illegal key size or default parameters
    at org.apache.ws.security.message.WSSecEncrypt.doEncryption(WSSecEncrypt.java:608)
    at org.apache.ws.security.message.WSSecEncrypt.doEncryption(WSSecEncrypt.java:461)
    at org.apache.ws.security.message.WSSecEncrypt.encryptForExternalRef(WSSecEncrypt.java:388)
    at org.apache.rampart.builder.SymmetricBindingBuilder.doSignBeforeEncrypt(SymmetricBindingBuilder.java:755)
    ... 14 more


The usual reason for this issue is that the JCE (Java Cryptography Extension) Unlimited Strength policy files are not installed in your JRE. Install them first and then re-run the scenario.
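As a rough sketch of the installation (assuming an Oracle JDK; download the Unlimited Strength policy files matching your Java version, and adjust JAVA_HOME to your environment):

 # copy the unlimited strength policy jars over the default ones
 cp local_policy.jar US_export_policy.jar $JAVA_HOME/jre/lib/security/

To confirm that the JCE policy files are installed correctly, you can run the following Java source.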


import java.security.NoSuchAlgorithmException;

import javax.crypto.Cipher;

import org.junit.Assert;

public class JCETest {

    public static void main(String args[]) {
        int maxKeyLen = 0;
        try {
            maxKeyLen = Cipher.getMaxAllowedKeyLength("AES");
        } catch (NoSuchAlgorithmException e) {
            Assert.fail();
        }

        Assert.assertEquals(2147483647, maxKeyLen);
        System.out.println(maxKeyLen);
    }
}


The maximum AES key size should be equal to 2147483647 if the JCE policy files have been installed successfully.

How to preserve HTTP headers in WSO2 ESB 4.9.0?


Preserving HTTP headers is important when invoking backend services via applications/middleware, because most of the time certain important headers are removed or modified by the applications/middleware that handle the communication.
The previous version of WSO2 ESB, 4.8.1, only supported preserving the "Server" and "User-Agent" header fields, but with the new ESB 4.9.0 we have introduced a new property (http.headers.preserve) for the Passthru (repository/conf/passthru-http.properties) and Nhttp (repository/conf/nhttp.properties) transports to preserve more HTTP headers.

Passthru transport – supported header fields
  • Location
  • Keep-Alive
  • Content-Length
  • Content-Type
  • Date
  • Server
  • User-Agent
  • Host
Nhttp transport – supported headers
  • Server
  • User-Agent
  • Date

You can specify header fields which should be preserved in a comma-separated list, as shown below.
http.headers.preserve = Location, Date, Server
Note that the properties (http.user.agent.preserve, http.server.preserve) that were used in ESB 4.8.1 for preserving headers also work in ESB 4.9.0 - we have kept them for backward compatibility.