Connecting Mule ESB to Apache web server

How can I access the data packets routed to the Apache web server in a Mule ESB flow?
After I call the exposed web service, the web service sends two responses:
1. A response to the Mule server.
2. The response XML, routed as data packets to the Apache web server.
Can someone help with how to proceed with the second flow? Thanks.

Use a Java library that understands PCAP and create your own Mule connector around it.
Reference:
List of Java PCAP libraries: https://en.wikipedia.org/wiki/Pcap#Wrapper_libraries_for_libpcap.2FWinPcap
Mule connector development kit: https://developer.mulesoft.com/docs/display/current/Anypoint+Connector+DevKit

Related

Mule custom connector connection timed out for external web service

Mule version 3.8
First I created a custom connector for a local SOAP service and successfully received a response using the connector.
Then I created a custom connector for a remote (internet) SOAP service and got a Connection timed out error. I searched the Mule docs for how to add attributes to a custom connector here and added the proxy host and port (already tested successfully with soapUI against the same remote service).
The issue is that I'm still getting the same Connection timed out error in the custom connector. I suspect the custom connector is not picking up the proxy attributes I specified.
Any advice would be very helpful. Thanks.
Update:
Solved using this SO thread
// Configure the proxy on the underlying CXF HTTP conduit
// ("client" here is the org.apache.cxf.endpoint.Client behind the generated service proxy)
HTTPConduit http = (HTTPConduit) client.getConduit();
http.getClient().setProxyServer("proxy");
http.getClient().setProxyServerPort(8080);
http.getProxyAuthorization().setUserName("user proxy");
http.getProxyAuthorization().setPassword("password proxy");

Mule HA Cluster - Application configuration issue

We are working on a Mule HA cluster PoC with two separate server nodes, and we were able to create the cluster. We developed a small dummy application with an HTTP endpoint and a reliability-pattern implementation that loops for a period and prints a value. When we deploy the application to the Mule HA cluster, it deploys successfully on the cluster and an application log file is generated on both servers, but it runs on only one server. In the application we can point the HTTP endpoint to only one server IP. Could anyone please clarify the following queries?
In our case, why is the application running on only one server (whichever server's IP the endpoint points to is the one that executes)?
Will the Mule HA cluster create a virtual IP?
If not, which IP do we need to configure in the application for HTTP endpoints?
Do we need a load balancer for HTTP-based endpoint requests? If so, which IP needs to be configured for the HTTP endpoint in the application, since we don't have a virtual IP for the Mule HA cluster?
Really appreciate any help on this.
Environment: Mule EE ESB v 3.4.2 & Private cloud.
1) You are seeing one server process requests because you are sending them to the same server each time.
2) Mule HA will not create a virtual IP.
3/4) You need to place a load balancer in front of the Mule nodes in order to distribute the load when using HTTP inbound endpoints. You do not need to decide which IP to place in the HTTP connector within the application; bind the endpoint to all interfaces and the load balancer will route each request to one of the nodes, as in the sketch below.
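As a hedged sketch (Mule 3.4-style HTTP transport; namespace declarations are omitted and the port, path and flow name are invented for illustration), each node can listen on all local interfaces and the load balancer decides which node receives a given request:

<flow name="clusteredHttpFlow">
    <!-- bind to 0.0.0.0 so the endpoint listens on every cluster node; the load balancer in front picks the node -->
    <http:inbound-endpoint host="0.0.0.0" port="8081" path="orders" exchange-pattern="request-response"/>
    <logger level="INFO" message="Handled by node #[server.host]"/>
</flow>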
Creating a Mule cluster will just allow your Mule applications to share information through its shared memory (VM transport and Object Store) and make the polling endpoints poll from only a single node. In the case of HTTP, the endpoint will listen on each of the nodes, but you need to put a load balancer in front of your Mule nodes to distribute the load. I recommend you read the High Availability documentation. But the more important question is: why do you need to create a cluster? You could have two separate Mule servers with your application deployed on each and have a load balancer send requests to them.

Some concept questions on Mulesoft

I have some questions on Mulesoft concepts; answers would be appreciated.
1. When Mule connects to an FTP server and checks for new files, does Mule delete the file by default once it is downloaded? And is FTP available both as polling-based and as event-subscription-based?
2. When we connect to a RESTful service over HTTP, can dynamic endpoints select whether HTTP or HTTPS is used, or can they be used to set the host, port and path? (Multiple choice for this question: a. Dynamic endpoints can select which transport to use. b. The host, port and path can be set using a dynamic endpoint. c. A dynamic endpoint can select whether HTTP or HTTPS is used. Which one is correct?)
3. In configuration, if
is used, does it mean my1.properties will take precedence?
Thanks!
1a) The Community FTP connector will always delete the remote file. The Enterprise FTP connector allows you to move it to another folder instead.
1b) The FTP connector is only available as a polling mechanism for reading files from an FTP server.
2) The only part that cannot be dynamic is the scheme, but you could place two endpoints (one for HTTP and another for HTTPS) inside a choice router and select which one to use dynamically (see the sketch after this answer).
3) I didn't get this one, please elaborate a bit more.
HTH,
Marcos
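As a rough sketch of the choice approach from 2) (Mule 3.x endpoints; the flow variables, hosts and ports are made up, and the HTTPS endpoint may additionally need an https:connector with TLS settings), placed inside a flow:

<choice>
    <when expression="#[flowVars.useHttps == true]">
        <!-- hypothetical flowVars set earlier in the flow decide the scheme, host and path -->
        <https:outbound-endpoint host="#[flowVars.host]" port="443" path="#[flowVars.path]" exchange-pattern="request-response"/>
    </when>
    <otherwise>
        <http:outbound-endpoint host="#[flowVars.host]" port="80" path="#[flowVars.path]" exchange-pattern="request-response"/>
    </otherwise>
</choice>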
Yes, it does. Mule EE gives you more features, as you can see in the documentation. FTP is always polling the directory.
You can set up HTTP or HTTPS. You can also use a composite-source to make your flow available over both HTTP and HTTPS (see the sketch after the example below).
I know that it is possible; I found the same requirement for FTP in the Mule forum a while ago. Hope this helps.
<!-- streaming to prevent deleting the remote file -->
<ftp:connector name="ftpConnector" streaming="true" />
<flow name="ftpBridge">
    <vm:inbound-endpoint path="fetchFtpFile" exchange-pattern="request-response"/>
    <scripting:component>
        <scripting:script engine="groovy">
            // build a dynamic FTP URI (user:password@host) from the incoming payload and fetch the file
            def ftpFileUri = "ftp://${payload.userName}:${payload.password}@${payload.host}${payload.path}?fileNameFilter=${payload.fileName}"
            muleContext.client.request(ftpFileUri, 30000L)
        </scripting:script>
    </scripting:component>
</flow>
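And a hedged sketch of the composite-source idea mentioned above (Mule 3.x; the ports and path are invented, and the referenced https:connector with its TLS key store is assumed to be configured elsewhere):

<flow name="httpAndHttpsFlow">
    <composite-source>
        <!-- the same flow is reachable over plain HTTP and over HTTPS -->
        <http:inbound-endpoint host="0.0.0.0" port="8081" path="service" exchange-pattern="request-response"/>
        <https:inbound-endpoint host="0.0.0.0" port="8443" path="service" exchange-pattern="request-response" connector-ref="httpsConnector"/>
    </composite-source>
    <logger level="INFO" message="Request received"/>
</flow>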
If you want to load properties, you can use Spring to do that. Review this link.
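For question 3 (property files), a common Mule 3 pattern is a Spring property placeholder declared in the Mule XML; the file names below are taken from the question and some.key is a hypothetical property. With a single placeholder listing several files, entries from files loaded later normally override earlier ones, but which file wins in your case depends on the exact configuration element used (which did not come through in the question):

<spring:beans>
    <!-- requires the Spring context namespace to be declared on the mule root element -->
    <context:property-placeholder location="classpath:my1.properties,classpath:my2.properties"/>
</spring:beans>

<flow name="propsDemoFlow">
    <vm:inbound-endpoint path="propsDemo" exchange-pattern="request-response"/>
    <!-- ${some.key} is resolved from the loaded property files at configuration time -->
    <logger level="INFO" message="some.key resolves to ${some.key}"/>
</flow>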

How to Load Balance Mule ESB without using Mule Management Console

I am working with Mule ESB and I am not using the Mule Management Console (MMC). I want to expose my Mule ESB as a service and load balance it, but I don't want to rely on a load balancer, because once requests go through the load balancer it becomes a single point of failure if it is down. So I need a use case for how to expose Mule as a service with optimized load balancing without using MMC (Mule Management Console).
For load balancing incoming HTTP requests over multiple Mule instances, you will need an external load balancer. Neither Mule ESB Enterprise Edition nor MMC will help you with that.
You can use a commercial one, such as an F5 BIG-IP, or set up HAProxy. To avoid the load balancer being a single point of failure, you can set up a redundant HAProxy pair.
For JMS, make sure to set up an external message broker cluster and connect to it using the normal jms:inbound-endpoint; that way each Mule node acts as a competing consumer and you achieve load balancing of messages (see the sketch below).
I would also advise you to have a look at "MuleSoft Blueprint: Load Balancing Mule for Scalability and Availability", which covers this. It is a bit dated, but most of the information in there is still valid.
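A rough sketch of that JMS setup (the ActiveMQ broker URL and queue name are assumptions; each Mule node running this flow connects to the same broker, so the broker spreads messages across the nodes):

<jms:activemq-connector name="amqConnector" brokerURL="tcp://broker-host:61616"/>

<flow name="orderConsumerFlow">
    <!-- every node subscribed to this queue competes for messages, giving load balancing without an HTTP load balancer -->
    <jms:inbound-endpoint queue="orders" connector-ref="amqConnector"/>
    <logger level="INFO" message="Processing message: #[payload]"/>
</flow>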
It's unclear which transport you are using; in any case you have only a limited number of options:
Use the Mule EE clustering feature for the VM transport.
Use a load balancer.
Use a transport that supports competing consumers, such as JMS or AMQP.
Could you provide a more detailed explanation of your deployment so I can give more exact info?

Mule ESB integration with web applications

We have more than 5 corporate applications running on different servers, built with technologies like Spring and Struts; communication between these applications is point to point. We are planning to migrate this to an ESB using Mule.
I don't quite understand how Mule works, and I have a few doubts:
Mule is running on a different server; do I need to deploy all 5 of my applications onto the Mule server?
I have a Spring application deployed on a Tomcat server; how is this application going to receive messages through Mule, and what configuration changes do I need to make on my server or the Mule server?
Any advice or tutorials?
You do not need to deploy all 5 of your applications on the Mule server.
You said that all your applications currently communicate point to point (which means they are all talking over HTTP); similarly, you can use Mule's HTTP endpoints to communicate with all 5 applications.
I.e. a Spring application talking to another Spring application can be changed into the Spring application talking to Mule, and Mule in turn talking to the other Spring application (see the sketch below).
You should learn the basics from the documentation:
http://www.mulesoft.org/documentation/display/current/Mule+Fundamentals --> browse through the navigation on the left-hand side.
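As a minimal illustration of that pattern (Mule 3.x HTTP transport; the host names, ports and paths are placeholders, not values from the question):

<flow name="springAppBridgeFlow">
    <!-- application A calls Mule here instead of calling application B directly -->
    <http:inbound-endpoint host="0.0.0.0" port="8081" path="appB/orders" exchange-pattern="request-response"/>
    <!-- Mule forwards the request to the Spring application deployed on Tomcat -->
    <http:outbound-endpoint host="tomcat-host" port="8080" path="appB/orders" exchange-pattern="request-response"/>
</flow>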
Mule is an integration tool; you do not need to change any of your existing applications. All you need is to develop a Mule application that does the mediation/orchestration.
For connecting with your Spring application you do not need to change any configuration; you just need to use an http:outbound connector inside your Mule flow.
Just go through http://www.mulesoft.org/documentation/display/current/HTTP+Transport+Reference
Mule is based on SOA principles, so your 5 corporate services do not need to be on a single system. If you want to consume the services/functionality of any of your 5 applications, expose those services as web services, which could be SOAP or REST, and call those services from inside Mule. In this case you only have to create 5 connections, and wherever required you can refer to those connections inside the Mule configuration file.
@saravanan shanmugavel, you need to use Mule ESB to orchestrate the communication between these applications. An ESB exists precisely to remove point-to-point communication: you can create a proxy service for each of your services and one flow that orchestrates the communication between them all.
All you need to do is make your applications configurable so they can be pointed at the Mule server.
Please refer to the link below, which will help you understand Mule and better orchestrate communication between the applications.
https://docs.mulesoft.com/