I am integrating an application with a database through Mule. I have exposed the database queries as SOAP web services in Mule. In all there are 4 such services. They live in 4 different flows and hence run on different ports. The problem is that there is an internal firewall between the application and Mule, and I need to tell the network administrator upfront which ports to open for inbound connections from the application to Mule.
For every new service I need another HTTP port and another change to the firewall rules. One solution I could think of is a common entry flow (and hence a single HTTP port) that delegates requests to the other flows over the VM or JMS transport. However, I suspect this can be designed in a better way that removes or minimizes the dependency on network configuration caused by multiple HTTP endpoints. Please advise.
If you were deploying four different web services (i.e. 4 different WSDLs) in a JavaEE container, you would probably use the URL path to discriminate between the services (say /accounts, /orders, ...).
You can achieve the same thing in Mule using a single HTTP endpoint and a choice router, since your four services are in four different flows (and not four different applications, in which case using JMS, or the new shared VM endpoints, would be a must).
In the choice router, you can route the HTTP requests based on the http.request.path inbound property (see: http://www.mulesoft.org/documentation/display/current/HTTP+Transport+Reference#HTTPTransportReference-HTTPProperties).
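Mule's choice router itself is configured in XML rather than Java, but to make the idea concrete, here is a rough plain-Java sketch of the same pattern: a single listener on one port, with the request path deciding which service logic handles the call. The port, paths and handler bodies are hypothetical.

    // Illustration of single-port, path-based dispatch; not actual Mule configuration.
    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class SinglePortDispatcher {
        public static void main(String[] args) throws IOException {
            // one port to open in the firewall; the path decides which "flow" handles the request
            HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
            server.createContext("/accounts", ex -> respond(ex, "accounts service"));
            server.createContext("/orders", ex -> respond(ex, "orders service"));
            server.start();
        }

        private static void respond(HttpExchange ex, String body) throws IOException {
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            ex.sendResponseHeaders(200, bytes.length);
            try (OutputStream os = ex.getResponseBody()) {
                os.write(bytes);
            }
        }
    }

In Mule, the choice router's when-expressions would test http.request.path in the same way, with each branch handing the message to one of your existing flows.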
In a microservice world, what is the recommended way of configuring the endpoint of a downstream API?
For example, if Service A needs to invoke an endpoint in Service B, we have two options:
a. Make the hostname and port number of Service B's API configurable in Service A (service-b:8080) and append the path URI in your code
or
b. Make the complete endpoint configurable in Service A (http://service-b:8080/somepath)
While I like the idea of making the endpoint configurable, it leaves a lot of room for error because the entire path needs to be specified. It also doesn't fit well when Service A needs to call multiple endpoints on Service B with potentially different paths, which would require us to configure multiple endpoints.
On the other hand, option (a) seems more scalable for the reasons mentioned above.
Most search results online just demonstrate how one service can call another, using a hardcoded URL for the demo. It would be good to know how the community handles this in real-world projects.
P.S: We use Spring Webflux and deploy to k8s.
Mostly I have seen teams use option (a), where Service B's "baseUrl" (basically https://serviceb-hostname:8080) is injected into the application as an environment property (Kubernetes ConfigMap) during deployment.
The API-specific paths are configured in the application yaml or as constants in the "proxy config" class itself (e.g. ServiceBProxy.java; proxy classes are the ones that make REST calls to dependent services like Service B).
Here is a portion of the application yaml from one of the microservices in one of my projects:
authorizationService:
  baseUri: ${authorizationServiceBaseUri}/api
tenantService:
  baseUri: ${tenantServiceBaseUri}/api/v1
  tenantsUri: ${tenantService.baseUri}/tenants
settingsService:
  baseUri: ${settingsServiceBaseUri}
iamService:
  fetchBatchSize: 500
  baseUri: ${iamServiceBaseUri}
Here the values of iamServiceBaseUri, settingsServiceBaseUri, tenantServiceBaseUri and authorizationServiceBaseUri are all injected during deployment, and each of them contains the ClusterIP with the port.
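To make option (a) concrete, here is a minimal sketch of what one of those proxy classes might look like with Spring WebFlux's WebClient. The class name, property name and path are hypothetical, not taken from the project above.

    // Hypothetical proxy for one downstream service: only the base URI comes from
    // configuration (e.g. a ConfigMap-backed property); the path is a constant in code.
    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.stereotype.Component;
    import org.springframework.web.reactive.function.client.WebClient;
    import reactor.core.publisher.Mono;

    @Component
    public class TenantServiceProxy {

        private static final String TENANTS_PATH = "/api/v1/tenants"; // API-specific path kept next to the code

        private final WebClient webClient;

        public TenantServiceProxy(@Value("${tenantServiceBaseUri}") String baseUri,
                                  WebClient.Builder builder) {
            // baseUri (e.g. http://tenant-service:8080) is injected at deployment time
            this.webClient = builder.baseUrl(baseUri).build();
        }

        public Mono<String> fetchTenant(String tenantId) {
            return webClient.get()
                    .uri(TENANTS_PATH + "/{id}", tenantId)
                    .retrieve()
                    .bodyToMono(String.class);
        }
    }

This keeps a single configurable value per downstream service (the base URI) while the paths live next to the code that uses them.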
Using a Windows Azure service and exposing a WCF service endpoint from on-premises (behind a firewall, NAT, ...), is it possible to make an intranet site available to a worker role in Azure?
Basically I want to be able to make an HttpWebRequest from a worker role in Azure to an on-premises site, for example http://intranet.domain.net. Is this possible? Or how can I make it possible?
This is not possible out of the box. You have a couple of (non-trivial) options:
1) You can build a custom "proxy" that receives requests from Service Bus (over, say, WebHttpRelayBinding) and then forwards them to the "local" IIS using HttpWebRequest or another HTTP client. Note that this is a lot of work and needs thorough testing of corner cases (e.g. authentication scenarios). Also, it doesn't work with custom domain names (e.g. intranet.domain.net) over "https".
2) Alternatively, you can split the content/web site and the business data/logic into separate pieces. You can host the content directly on a public site. That public site can then talk to a web service in the intranet over NetTcpRelayBinding (or one of the *HttpRelayBindings) and execute business logic or retrieve the data.
Currently we run a UI web role and a web service web role (WCF REST) on Azure. Each role contains 2 instances (for load balancing and meeting the SLA requirements).
The UI web role and the web service web role are within the same subscription but in different deployments. We do not want to merge the code bases (maintainability, etc.). So the UI layer is on xyz.cloudapp.net and the web service layer is on abc.cloudapp.net.
Currently, the requirement is to make the web service web role an internal endpoint, i.e. only accessible by the UI layer. The literature on configuring internal endpoints and accessing them from a different deployment is not very clear.
I am assuming that the two different roles need to be part of a single deployment for this to work. Can this be done without affecting the deployments? Any pointers in the right direction would be greatly appreciated.
Internal endpoints are only accessible within a single deployment, and they do not route through the load balancer (so if you have 2 instances of your WCF services accessible on an internal endpoint, you'd need to distribute calls between the instances yourself). This, of course, would require you to put both your web role and your WCF web role into the same deployment.
You might want to consider Service Bus for a secure way of reaching your WCF services from your web role instances. Or... expose the WCF services via an input endpoint but secure the service.
There's an approach I like to call the virtual DMZ that should meet your needs: http://brentdacodemonkey.wordpress.com/?s=virtual+dmz
It leverages ACS and WCF bindings to let you apply access control to input endpoints (which are then load balanced). Of course, if you don't want something that robust, you can go with just a plain old WCF mutual-auth scenario.
That said, David makes an excellent point. Internal endpoints are only accessible within a single deployed service. This is because that service represents an isolation boundary (think of it as a virtual LAN branch), and only input endpoints can be addressed from outside that boundary.
Have you considered using ACS (Access Control Service) to restrict access to your WCF endpoint with claims-based authentication?
There are numerous protection schemes you could provide via WCF bindings.
Internal endpoints can only be used for inter-role communication within the same deployment. If you have 2 separate deployments (abc.cloudapp.net and xyz.cloudapp.net), internal endpoints won't help you.
I need to capture and run real-time analysis on messages being exchanged between various web services implemented in Java and their client apps. The server code and config cannot be modified, and the services are hosted on various servers.
Is it possible to build a proxy layer that takes all calls from the client apps and routes them to the actual web services?
So it needs to do the following:
Accept a config file containing endpoints for various web services that need to be proxied
For each endpoint, generate a proxy URL
The client apps will point to these proxy URLs
The proxy layer will listen for traffic on these proxy URLs and route it to the real endpoints.
Track all SOAP traffic between the clients and the services and run the necessary analysis.
I considered SoapUI, but it does not seem to provide enough of the control I need for real-time analysis.
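For illustration, the kind of pass-through proxy layer I have in mind would look roughly like the sketch below. The host, port and path values are made up; a real version would read them from the config file, support HTTPS, and stream large payloads instead of buffering them.

    // Minimal logging pass-through proxy: capture the SOAP payloads, then forward to the real service.
    import com.sun.net.httpserver.HttpServer;
    import java.net.InetSocketAddress;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class LoggingSoapProxy {
        public static void main(String[] args) throws Exception {
            // in practice both values would come from the config file described above
            String proxyPath = "/proxy/orders";
            String realEndpoint = "http://backend-host:8080/ws/orders";

            HttpClient client = HttpClient.newHttpClient();
            HttpServer proxy = HttpServer.create(new InetSocketAddress(9090), 0);
            proxy.createContext(proxyPath, exchange -> {
                try {
                    byte[] requestBody = exchange.getRequestBody().readAllBytes();
                    System.out.println("SOAP request: " + new String(requestBody)); // hook for real-time analysis

                    HttpRequest.Builder forward = HttpRequest.newBuilder(URI.create(realEndpoint))
                            .POST(HttpRequest.BodyPublishers.ofByteArray(requestBody));
                    String contentType = exchange.getRequestHeaders().getFirst("Content-Type");
                    if (contentType != null) forward.header("Content-Type", contentType);
                    String soapAction = exchange.getRequestHeaders().getFirst("SOAPAction");
                    if (soapAction != null) forward.header("SOAPAction", soapAction);

                    HttpResponse<byte[]> response = client.send(forward.build(), HttpResponse.BodyHandlers.ofByteArray());
                    System.out.println("SOAP response: " + new String(response.body()));

                    exchange.sendResponseHeaders(response.statusCode(), response.body().length);
                    exchange.getResponseBody().write(response.body());
                } catch (InterruptedException e) {
                    exchange.sendResponseHeaders(502, -1);
                } finally {
                    exchange.close();
                }
            });
            proxy.start();
        }
    }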
You should start with the WCF Routing Service. Once you have working communication, you can add some custom message processing through custom behaviors or channels to grab the SOAP messages and do your analysis.
When I deploy the same service on different machines, each holding different information that I need, how can my client gracefully consume these services?
You need to define the service endpoint you want to connect to in your client's config.
You cannot define a list of endpoints - if you need load-balancing features, you need to implement those on the server side and "hide" them behind a single service endpoint.
With .NET 4 and WCF 4, you have new capabilities you could check out:
WCF 4 has a new Routing Service which you can use to receive calls on a single URL, with control over how to "distribute" those calls to the actual back-end servers.
WCF 4 also supports dynamic service discovery, so you could potentially just "yell out onto the network" and get back one service endpoint address that supports the contract you're interested in.
Resources:
Developer's Introduction to WCF 4
10-4 Show on WCF 4 Routing Service
Content-based routing with WCF 4
WCF 4.0 Routing Service
WCF 4.0 Routing Service - Failover
Using WS-Discovery in WCF 4.0
Ad-hoc Discovery with Probing messages
It sounds like you want to connect to BOTH servers; you say they have different data that you need. Well, if you already know how to build a client for one of them, the easiest way is to define a second client to access the other one. You can define as many clients as you want in the config file, and then just call them both in code.