MuleSoft: Force implementation URL to listen to the proxy only (or) secure the implementation URL

How can I force the implementation URL to listen to the proxy only in MuleSoft?
Right now the proxy can be secured using client_id, client_secret, etc. However, the implementation URL is not secured. If anyone discovers the implementation URL, it becomes a potentially risky affair.
Is there any way we can force the implementation URL to listen to the proxy only?
(or) Can we add policies to the implementation URL?

The MuleSoft documentation on setting-up-an-api-proxy states that the proxy application is nothing but a Mule application mocking the contractual behavior of the actual service implementation and making service calls to the actual API to fulfill requests. Instead of HTTP, HTTPS is recommended for enhanced security and data integrity. Since MuleSoft suggests HTTPS for the connection between the Mule proxy and the service implementation, one option would be to enforce two-way SSL between your proxy and the implementation, which lets the implementation accept requests only from legitimate clients.
Check the topic enable-two-way-ssl-in-mule for further implementation details.
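As a rough sketch of that option, a Mule 3 HTTPS listener on the implementation side can require the proxy's client certificate via its TLS context (the keystore/truststore names and passwords below are placeholders):
<!-- Implementation side (Mule 3): HTTPS listener that requires a client certificate -->
<http:listener-config name="implHttpsConfig" protocol="HTTPS" host="0.0.0.0" port="8443" doc:name="HTTPS Listener Configuration">
    <tls:context>
        <tls:key-store type="jks" path="impl-keystore.jks" keyPassword="changeit" password="changeit"/>
        <!-- Adding a trust store on the listener side makes the server request and validate the proxy's client certificate -->
        <tls:trust-store type="jks" path="trusted-proxies.jks" password="changeit"/>
    </tls:context>
</http:listener-config>
The proxy's HTTP requester would then present a certificate from the matching keystore so the implementation accepts only its calls.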
The second option would be to enable policies on the actual service implementation, i.e. enable api-auto-discovery on your service.
Although you can do this, it would add overhead for the reasons below:
You would be enforcing policies at two layers and doubling the calls to API Manager for policy sync-up, since the service implementation polls API Manager at a fixed interval to check/fetch the policies.
To enable policy application on the service implementation, the service needs to run on either the API Gateway runtime or Mule runtime 3.8 onwards, as older Mule versions do not support policies.
The implementation can be done by adding the below XML snippet to the API XML.
<api-platform-gw:api apiName="app-${env}" version="${api.version}" flowRef="api-main" create="true" apikitRef="api-config" doc:name="API Autodiscovery" />
apiName would be the API definition created in API Manager, from where you can view and manage the API
version would be the same major version of the API
flowRef would map it to the main flow reference
create is a flag to signify whether the definition needs to be created in API Manager in case it does not exist
Conclusion:
Enforce two-way SSL for client-server certificate-based authentication
Add auto-discovery to the service implementation so policies are applied at the implementation layer as well

MuleSoft documentation suggests adding a VPC. When we tested, HTTP worked inside the VPC but HTTPS did not.
Since HTTPS was a mandatory requirement and we were unable to achieve it via the VPC, we fixed it in a different way:
we added a custom header in the proxy code and we validate that header in the implementation.
This was the fix we rolled out.
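A minimal Mule 3 sketch of that approach, assuming a shared secret exposed as the property proxy.shared.token (the header name and property are placeholders):
<!-- Proxy flow: add the shared header just before the existing http:request that forwards to the implementation -->
<set-property propertyName="x-proxy-token" value="${proxy.shared.token}" doc:name="Add shared header"/>
<!-- Implementation flow: reject any request that does not carry the expected header -->
<message-filter throwOnUnaccepted="true" doc:name="Require proxy header">
    <expression-filter expression="#[message.inboundProperties['x-proxy-token'] == '${proxy.shared.token}']"/>
</message-filter>
A static header is only as secret as its transport, so this pairs best with HTTPS between the proxy and the implementation.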

Related

Mule API AutoDiscovery vs Mule API Gateway Proxy

When should we use an API Proxy versus API AutoDiscovery? After implementing both, I found that AutoDiscovery can also apply policies and analytics, which is what the API Gateway does; the only thing is that I cannot use a different URL when using AutoDiscovery. The main advantage of an API Proxy would be if my Gateway application and Mule implementation project are in different subnets, so that if my Gateway server is compromised, no one can get to my implementation network.
But if both the interface and the implementation are in the same network, and the purpose is just to call a REST endpoint, should we not go with API AutoDiscovery?
Problems with Mule API Gateway Proxy
No defined way of exception handling if we are not able to reach the implementation server.
No defined way of moving the proxy application across environments (CI/CD).
Extra HTTP hop; acceptable if the above two issues have a defined solution.
Mule API AutoDiscovery
Since this is in the Mule application, standard exception handling applies.
CI/CD is defined, as it is part of the Mule implementation project.
No extra HTTP hop.
The only thing here is that we cannot change the implementation URL; that is the only tightly coupled part.
Can someone provide insight on when we should go for the API Gateway proxy vs AutoDiscovery? Also, is there currently a way of doing exception handling in an API Gateway proxy project, and CI/CD as well?
API Autodiscovery is required if you plan to apply/unapply a policy to a particular endpoint and/or take advantage of usage statistics in the context of API Platform. API Autodiscovery is a kind of metadata that links an HTTP(S) listener to its counterpart API version in API Manager.
For example:
<api-platform-gw:api id="api.basic.path" apiName="My API" version="1.0.0" flowRef="basic.path">
</api-platform-gw:api>
<flow name="basic.path">
    <http:listener config-ref="a.http.config" path="/basic" /> <!-- assumed listener path -->
    <set-payload value="Endpoint successfully called." />
</flow>
Autogenerated Mule proxies do have autodiscovery defined. You can also develop your own project and define the corresponding autodiscovery either by using the Studio UI or by editing the XML config directly.
The proxies are meant to be used when your implementation backend is not a Mule application (for example, an existing REST-based API hosted on a Tomcat server). You can enrich the logic with custom exception handling, among other things, on the Mule side. If you'd like better exception handling in the implementation backend, you will have to implement it there.
If your implementation backend is a Mule-based application, using a proxy is not required. For most use cases, adding the corresponding autodiscovery element to the configuration file will do the trick.

How to apply different policies to service and proxy service?

I have a Mule service, named IS, deployed on a Mule runtime and proxied on an API Gateway. I'd like to set up different policies for IS and its proxy service. How can I do that?
My environment:
Mule runtime: 3.7.4
Mule API Gateway: 2.1.1
The following are two valid and equally correct solutions that you can choose from, taking into account that your implementation API is a Mule app:
Create an API on API Platform
Solution A:
Configure the autogenerated proxy to use your implementation API URL
Deploy the proxy to a correctly configured API Gateway/Mule runtime >= v3.8.0
Apply one or more policies to the tracked proxy
Solution B:
Add autodiscovery to your implementation API, using the same API name and API version as your already created API on API Platform
Deploy the implementation app to a correctly configured API Gateway/Mule runtime >= v3.8.0
Apply one or more policies to the tracked implementation app
With Solution A, you have to make sure that your implementation app is only accessible by the proxy app (e.g. with a firewall).
If your implementation API were not a Mule app, then Solution B would not be possible.
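As a sketch for Solution B, the autodiscovery element in the implementation app would reference the existing API on API Platform; the API name, version, and flow name below are assumptions matching the environment described above:
<api-platform-gw:api apiName="IS" version="1.0.0" flowRef="is-main" create="false" doc:name="API Autodiscovery"/>
The create flag is left false because the API definition already exists on API Platform.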
You can create the endpoint with a proxy, or select a Basic endpoint if you created your API outside API Manager, for example using Mule ESB. You don't need a proxy in that case, so policies will be applied directly to the API. For more details, go through the link.
https://docs.mulesoft.com/api-manager/setting-up-an-api-proxy
If you're using Mule runtime v3.8.x and the service is an HTTP/S listener, you can actually make it auto-discovered in API Manager and have policies applied directly to it, even if the Mule config was not generated using APIkit.
https://docs.mulesoft.com/api-manager/api-auto-discovery
Choose the flow that you want API Manager to manage and apply policies to.
Do note that you will need the right entitlement (API Gateway) in the Mule runtime license, and that the runtime has the right Anypoint Platform client ID/secret pair configured in wrapper.conf. The IDs should be configured automatically if you've added the Mule runtime server in Anypoint Runtime Manager.
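For reference, the wrapper.conf entries typically look roughly like the following sketch (the additional-property index numbers and values are placeholders):
wrapper.java.additional.15=-Danypoint.platform.client_id=<your_org_client_id>
wrapper.java.additional.16=-Danypoint.platform.client_secret=<your_org_client_secret>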
Here is my solution to apply a policy to the proxy service:
Create a new API using the proxy service's URL
Apply the policy to the API created in step 1
Can anyone confirm this is the correct way?

Mule API - deploy to a Mule Runtime

I am experimenting with Mule API management these days. What I have come to know is that we can deploy our API to one of these:
A Mule Runtime
An API Gateway
In the documentation, it is said that we should go with option 1 when we want to separate the implementation of our API from the orchestration. What does that mean?
Can anyone please explain in detail?
Policy management from API Platform and analytics generation can be achieved only by using a correctly configured API Gateway, which is a superset of Mule EE (the current version is API Gateway 2.1.0, which contains Mule EE 3.7.2).
Depending on your architecture you may have different solutions.
For example:
Proxy running on API Gateway, implementation API running somewhere else (e.g. Mule EE/CE, Tomcat, a COBOL server, etc.)
Proxy and implementation API running on the same API Gateway
Implementation API managed directly from API Platform without using the autogenerated proxies.
HTH :-)
Not exactly sure what they mean there, because on this page: https://developer.mulesoft.com/docs/display/current/API+Gateway they also mention this:
Note that the API Gateway, because it acts as an orchestration layer for services and APIs implemented elsewhere, is technology-agnostic. You can proxy non-Mule services or APIs of any kind, as long as they expose HTTP/HTTPS, VM, Jetty, or APIkit Router endpoints. You can also proxy APIs that you design and build with API Designer and APIkit to the API Gateway to separate the orchestration from the implementation of those APIs.
So both methods technically allow you to separate the API from the orchestration, as your API Gateway application could simply proxy another Mule application elsewhere that performs the orchestration. My understanding of the two options is:
The API Gateway is a limited offering that allows you to use a subset of Mule's connectors, transports and modules, such as APIkit and HTTP. It lets you expose an API and then use HTTP to connect to whatever backend systems you want, acting as a proxy and performing the orchestration in the API layer.
The Mule runtime option gives you much more flexibility: it allows you to compose as many applications as you want using the full range of connectors etc., and to separate the different aspects of your applications into as many layers as you want, as separately deployable entities that you can deploy to on-premise standalone instances, CloudHub, etc.
@Ryan's answer is more or less on the mark; however, if you do choose the Mule ESB offering you will lose out on the API management and governance functionality that the API Gateway provides out of the box.
These include:
Lets you enforce runtime policies and collect data for analytics
Applies policies to APIs or endpoints around security, throttling, rate limiting, and more
Extends PingFederate to serve as the identity management and OAuth provider for your APIs
Lets you require or restrict certain behaviors in a few simple steps
Lets you add or remove policies at runtime with no API downtime
Manages access to your API by issuing contract keys
Monitors the API to confirm it is meeting all contract terms
Ensures compliance with service level agreements (SLAs)
In my opinion, go with API Gateway/Manager if your API will be consumed by third-party developers with whom you might not have many interactions (think public APIs); otherwise Mule ESB should be good.
You should also be able to migrate from Mule ESB to API Manager (and vice versa) fairly easily if you need to, so I do not think you will get locked into your decision.
PS: Content copied from here

How to use my authentication filter with Websocket for Cometd deployed in Jetty?

I am using CometD 3.0.1 with Jetty 9.2.3, using the JSR 356 based WebSocket implementation (and not Jetty's own WebSocket implementation).
I have added some auth filters which basically ask for authentication headers from the request. But as the WebSocket upgrade happens as part of the WebSocketUpgradeFilter, is there a way to make authentication work here?
Authenticating via a Filter is the wrong way to accomplish authentication.
Correct Solution:
The servlet spec expects you to set up and configure the authentication and authorization layers of your application using the servlet techniques of both the container and the application metadata (WEB-INF/web.xml).
This means you set up the container-side security, either using a Jetty container-specific LoginService or using a JAAS spec configuration. Then you reference your security realms in your WEB-INF/web.xml and use them. If you have something custom, you can hook into the LoginService of your choice (even a custom one) and manage it accordingly.
JAAS and LoginService authentication and authorization are applied before all filters and servlets.
In this scenario, you'll have access to the authentication information during the upgrade process, in particular during ServerEndpointConfig.Configurator.modifyHandshake().
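For example, a minimal container-managed setup in WEB-INF/web.xml could look like the following sketch (the URL pattern, role, and realm name are placeholders, assuming a BASIC auth realm configured in the container):
<!-- Protect the CometD endpoint with container-managed security -->
<security-constraint>
    <web-resource-collection>
        <web-resource-name>cometd</web-resource-name>
        <url-pattern>/cometd/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
        <role-name>user</role-name>
    </auth-constraint>
</security-constraint>
<login-config>
    <auth-method>BASIC</auth-method>
    <realm-name>MyRealm</realm-name>
</login-config>
<security-role>
    <role-name>user</role-name>
</security-role>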
Ugly Hack Solution:
Add the org.eclipse.jetty.websocket.server.WebSocketUpgradeFilter to your WEB-INF/web.xml manually.
This then leaves it up to you to ensure that your authentication filter executes before this WebSocketUpgradeFilter in 100% of use cases (a rough sketch follows the notes below).
Caution: filter execution ordering is not part of the servlet spec. Be careful with this, as it might seem to work on your dev machine and then suddenly not work in QA or production, simply because the set of filters in the metadata ends up in a different order.
Notes:
The path spec must be /*
Async supported must be true
Dispatcher types must be REQUEST only
Do not set the contextAttributeKey init-param for that filter
All other WebSocketUpgradeFilter init-params are invalid for JSR-356 use (they are overridden by the various JSR-356 endpoint configurations)
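Under those constraints, a rough WEB-INF/web.xml sketch would look like the following (the auth filter class is hypothetical); your filter mapping is declared before the upgrade filter's in the hope that the container preserves that order:
<filter>
    <filter-name>authFilter</filter-name>
    <!-- hypothetical filter class that checks the authentication headers -->
    <filter-class>com.example.AuthHeaderFilter</filter-class>
    <async-supported>true</async-supported>
</filter>
<filter>
    <filter-name>wsUpgrade</filter-name>
    <filter-class>org.eclipse.jetty.websocket.server.WebSocketUpgradeFilter</filter-class>
    <async-supported>true</async-supported>
</filter>
<filter-mapping>
    <filter-name>authFilter</filter-name>
    <url-pattern>/*</url-pattern>
    <dispatcher>REQUEST</dispatcher>
</filter-mapping>
<filter-mapping>
    <filter-name>wsUpgrade</filter-name>
    <url-pattern>/*</url-pattern>
    <dispatcher>REQUEST</dispatcher>
</filter-mapping>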

WCF routing and service metadata

I'm building a WCF router which needs to act as a proxy for a number of internal web services (WCF and ASMX). The routing part is fairly straightforward, but I can't understand how the service metadata exchange would work in this solution.
In other words: how would a client obtain metadata for an internal service behind the router? Do I need to manually supply WSDL files to the consumer? Can I somehow set up the router to return the metadata for the appropriate internal service?
Or perhaps my architecture is completely wrong?
I see two options here:
It may be an option to create a "non-transparent" proxy, if you don't want to expose the internal addresses. The advantage is that you can do more than just route messages (i.e. such a proxy may serve as a "security boundary", unwrapping ciphered messages and passing them in the clear to the internal endpoint). It can also provide an "interoperability layer", exposing a WCF service as simple SOAP using the same data types/message XML structure. The downside is that you'll have to update its code along with the proxied services.
You may implement a WSDL rewriter. With it, you can mask the internal service URL on the fly; depending on your conditions, a simple string replace may or may not suffice.
Refer to:
Message Inspectors
IWsdlExportExtension
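For context on why metadata is an issue at all with a plain router: a typical WCF 4 RoutingService configuration (sketched below with placeholder names and addresses) exposes only the router's own generic contract, so clients cannot retrieve the internal services' WSDL from the router automatically.
<system.serviceModel>
  <services>
    <!-- The router exposes only the generic routing contract, not the internal services' metadata -->
    <service name="System.ServiceModel.Routing.RoutingService" behaviorConfiguration="routerBehavior">
      <endpoint address="" binding="basicHttpBinding"
                contract="System.ServiceModel.Routing.IRequestReplyRouter" name="routerEndpoint"/>
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="routerBehavior">
        <routing filterTableName="routingTable"/>
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <routing>
    <filters>
      <filter name="matchAll" filterType="MatchAll"/>
    </filters>
    <filterTables>
      <filterTable name="routingTable">
        <add filterName="matchAll" endpointName="InternalService"/>
      </filterTable>
    </filterTables>
  </routing>
  <client>
    <!-- Internal endpoint behind the router (placeholder address) -->
    <endpoint name="InternalService" address="http://internal-host/Service.svc"
              binding="basicHttpBinding" contract="*"/>
  </client>
</system.serviceModel>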
The same "router service" can also be used to get the individual WSDL for internal services behind the router.
Check out this thread
Have you considered using a simple HTTP proxy instead? All WCF services, whether using REST or SOAP, are at their core HTTP requests. It seems like the routing functionality (which I am assuming you are basing on hostname, URL path, or parameters) could be performed by proxying the HTTP request without needing to understand the contents. ASP.NET will do a fairly good job of sanitizing incoming requests on its own, but you could always add additional custom filtering as necessary.