xmlns:cxfrs="http://camel.apache.org/schema/cxf"
xmlns:jaxrs="http://cxf.apache.org/jaxrs"
I am trying to understand cxfrs:server and jaxrs:server in Apache Camel. How are they different?
Both elements are used to configure a server.
However, cxfrs:server configures the server used by the camel-cxfrs component to route REST requests into a Camel route; it does not invoke methods on the resource class instances. jaxrs:server, on the other hand, exposes a REST service that accepts requests and sends responses by invoking the resource class instances you configured.
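To make the contrast concrete, here is a minimal sketch, assuming Spring XML and an illustrative org.example resource class (with the prefixes above, the camel-cxf REST server element is rsServer; names and addresses are placeholders):
<!-- camel-cxfrs: the resource class only describes the REST contract; requests are handed to the Camel route and the class methods are never invoked -->
<cxfrs:rsServer id="rsServer"
                address="http://localhost:9090/route"
                serviceClass="org.example.CustomerServiceResource"/>
<camelContext xmlns="http://camel.apache.org/schema/spring">
    <route>
        <from uri="cxfrs:bean:rsServer"/>
        <setBody><constant>handled inside the Camel route</constant></setBody>
    </route>
</camelContext>
<!-- CXF jaxrs:server: requests are dispatched straight to the methods of the configured bean -->
<jaxrs:server id="restService" address="/rest">
    <jaxrs:serviceBeans>
        <ref bean="customerServiceBean"/>
    </jaxrs:serviceBeans>
</jaxrs:server>
<bean id="customerServiceBean" class="org.example.CustomerService"/>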
How to force the implementation URL to listen to the proxy only in MuleSoft?
Right now the proxy can be secured using client_id, client_secret, etc. However, the implementation URL is not secured; if anyone happens to know the implementation URL, that is a potential risk.
Is there any way we can force the implementation URL to listen to the proxy only?
Or can we add policies to the implementation URL?
MuleSoft's documentation on setting-up-an-api-proxy states that the proxy application is simply a Mule application that mimics the contract of the actual service implementation and calls the actual API to fulfil requests. Instead of HTTP, it recommends HTTPS for better security and data integrity. Since MuleSoft suggests HTTPS between the Mule proxy and the service implementation, one option leveraging that is to enforce two-way SSL between your proxy and the implementation, which ensures that only legitimate clients are accepted.
Check the topic enable-two-way-ssl-in-mule for further implementation details; a rough configuration sketch follows.
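As an illustration only, a minimal Mule 3.x sketch of the implementation side of such a setup might look like the following (keystore paths, passwords and config names are placeholders, and namespace declarations are omitted):
<!-- TLS context with a trust store: the HTTPS listener will only accept clients whose certificate chains to this trust store (two-way SSL) -->
<tls:context name="twoWayTls">
    <tls:key-store path="impl-keystore.jks" password="changeit" keyPassword="changeit"/>
    <tls:trust-store path="proxy-truststore.jks" password="changeit"/>
</tls:context>
<http:listener-config name="implHttpsConfig"
                      protocol="HTTPS"
                      host="0.0.0.0"
                      port="8443"
                      tlsContext-ref="twoWayTls"/>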
The second option would be to enable policies on the actual service implementation itself, i.e. enable API auto-discovery on your service.
Although you can do this, it would add overhead for the reasons below:
You would be enforcing policies at two layers and doubling the calls to API Manager for policy synchronisation, since the service implementation would poll API Manager at a fixed interval to check/fetch the policies.
To enable policy application on the service implementation, the service needs to run on either the API Gateway runtime or Mule runtime 3.8 onwards, as older Mule versions do not support policies.
The implementation can be done by adding the XML snippet below to the API's XML:
<api-platform-gw:api apiName="app-${env}" version="${api.version}" flowRef="api-main" create="true" apikitRef="api-config" doc:name="API Autodiscovery" />
apiName is the API definition created in API Manager, from where you can view and manage the API
version is the same major version as the API
flowRef maps it to the main flow reference
create is a flag signifying whether the definition should be created in API Manager in case it does not exist
Conclusion:
Enforce two-way SSL for certificate-based client-server authentication
Add auto-discovery to the service implementation so that policies are applied at the implementation layer as well
MuleSoft documentation suggests adding a VPC. When we tested, HTTP worked inside the VPC but HTTPS did not.
Since HTTPS was a mandatory requirement and we were unable to achieve it via the VPC, we fixed it a different way:
we added a custom header in the proxy code and validate that header in the implementation.
This was the fix we rolled out.
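A rough Mule 3.x sketch of that idea, with the header name, property placeholder and flow name being ours rather than from the original post:
<!-- Proxy side: attach a shared-secret header to every call forwarded to the implementation -->
<http:request config-ref="implRequestConfig" path="/api"
              method="#[message.inboundProperties['http.method']]">
    <http:request-builder>
        <http:header headerName="x-proxy-token" value="${proxy.shared.token}"/>
    </http:request-builder>
</http:request>
<!-- Implementation side: reject any request that does not carry the expected header -->
<choice>
    <when expression="#[message.inboundProperties['x-proxy-token'] != '${proxy.shared.token}']">
        <set-property propertyName="http.status" value="403"/>
        <set-payload value="Forbidden"/>
    </when>
    <otherwise>
        <flow-ref name="implementation-main-logic"/>
    </otherwise>
</choice>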
When should we use an API proxy versus API auto-discovery? After implementing both, I found that auto-discovery can also apply policies and collect analytics, which is what the API Gateway does; the only limitation is that I cannot use a different URL when using auto-discovery. The main advantage of an API proxy would be when my gateway application and my Mule implementation project are in different subnets, so that if my gateway server is compromised, no one can get into my implementation network.
But if both the interface and the implementation are in the same network, and the purpose is just to call a REST endpoint, should we not go with API auto-discovery?
Problems with the Mule API Gateway proxy:
No defined way of exception handling if we are not able to reach the implementation server.
No defined way of moving the proxy application across environments (CI/CD).
Extra HTTP hops, which can be acceptable if the above two issues have a defined approach.
Mule API auto-discovery:
Since this lives in the Mule application, standard exception handling applies.
CI/CD is defined, as it is the Mule implementation project itself.
No extra HTTP hop.
The only drawback is that we cannot change the implementation URL; that is the one tightly coupled aspect.
Can someone provide insight on when we should go for the API Gateway proxy versus auto-discovery? Also, is there currently a way of doing exception handling in an API Gateway project, and of handling its CI/CD?
API Autodiscovery is required if you plan to apply or unapply a policy to a particular endpoint and/or take advantage of usage statistics in the context of the API Platform. API Autodiscovery is essentially metadata that links an HTTP(S) listener to its counterpart API version in API Manager.
For example:
<api-platform-gw:api id="api.basic.path" apiName="My API" version="1.0.0" flowRef="basic.path">
</api-platform-gw:api>
<flow name="basic.path">
    <http:listener config-ref="a.http.config" path="/" />
    <set-payload value="Endpoint successfully called." />
</flow>
Autogenerated Mule Proxies do have autodiscovery defined. You can also develop your own project and define the corresponding autodiscovery either by using the Studio UI, or handling the XML config directly.
The proxies are meant to be used when your implementation backend is not a Mule application (for example, an existing REST-based API hosted on a Tomcat server). You can enrich the logic with custom exception handling, among other things, on the Mule side. If you'd like better exception handling in the implementation backend itself, you will have to implement it there.
If your implementation backend is a Mule based application, using a proxy is not required. For most use cases, adding the corresponding autodiscovery element in the configuration file will do the trick.
What is the difference between a proxy service and an API service in WSO2 ESB?
To expose my service I can provide either a proxy URL or an API URL, so in which scenarios do the two differ? In which scenario should I use a proxy and in which should I use an API?
Please help me understand.
An API has resources, so it is suitable when you have to perform multiple operations (CRUD, etc.); you can then call the particular resource that performs a particular operation.
A proxy service is suitable when you have to perform an isolated operation (a single operation).
So what you can do is create an API for multiple operations, and create proxy services for individual operations.
Moreover, an API is exposed as a REST service, whereas a proxy service is exposed as a SOAP service.
Use a proxy service to expose a SOAP web service.
You can also consume JMS messages or files with VFS through a proxy, although since ESB 4.9.0 you can use inbound endpoints for that purpose.
Use an API to expose a REST service (a short sketch of both follows below).
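To illustrate the split, here is a minimal Synapse configuration sketch: a proxy service fronting a single SOAP backend, and an API exposing a REST resource (backend URIs and names are placeholders):
<!-- Proxy service: exposes a SOAP endpoint that fronts one backend service -->
<proxy xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteProxy"
       transports="http https" startOnLoad="true">
    <target>
        <endpoint>
            <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
        </endpoint>
        <outSequence>
            <send/>
        </outSequence>
    </target>
</proxy>
<!-- API: exposes REST resources mapped to HTTP methods and URI templates -->
<api xmlns="http://ws.apache.org/ns/synapse" name="OrderAPI" context="/orders">
    <resource methods="GET" uri-template="/{orderId}">
        <inSequence>
            <send>
                <endpoint>
                    <address uri="http://localhost:8280/backend/orders"/>
                </endpoint>
            </send>
        </inSequence>
        <outSequence>
            <send/>
        </outSequence>
    </resource>
</api>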
A SOAP service in WSO2 ESB can be created in two ways: one way is by creating a custom proxy service, and the other is by creating an API. Since we can create a SOAP service with either a custom proxy or an API, I want to understand when I should create a custom proxy and when I should create an API.
Use a proxy service if you need to :
consume JMS messages (JMS Proxy)
consume files (VFS Proxy)
receive SOAP messages (publish a web service with its WSDL)
Use an API if you need to publish a REST service (generally, you want to send XML or JSON to such a service)
Have a look there : https://docs.wso2.com/display/ESB490/Creating+APIs
That depends on your requirements. If you want to expose your service as a REST service (i.e. via the HTTP methods GET/POST/PUT/DELETE), you can use APIs. Similarly, if you want to expose your service as a SOAP web service, you can use proxies.
You can also front a REST service with a proxy service.
I'm building a WCF router which needs to act as a proxy for a number of internal web services (WCF and ASMX). The routing part is fairly straightforward, but I can't understand how service metadata exchange would work in this solution.
In other words: how would a client obtain metadata for an internal service behind the router? Do I need to manually supply WSDL files to the consumer? Can I somehow setup the router to return the metadata for an appropriate internal service?
Or perhaps my architecture is completely wrong?
I see 2 options here:
It may be an option to create a "non-transparent" proxy if you don't want to expose the internal addresses. The advantage is that you can do more than just route messages (i.e. such a proxy may serve as a "security boundary", unwrapping encrypted messages and passing them on in plain form to the internal endpoint). It can also provide an interoperability layer, exposing a WCF service as plain SOAP with the same data types and message XML structure. The downside is that you'll have to update its code along with the proxied services.
You may implement a WSDL rewriter. With it, you can mask the internal service URL on-the-fly - depending on your conditions, a simple string replace may or may not suffice.
Refer to:
Message Inspectors
IWsdlExportExtension
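For context, here is a minimal sketch of a WCF 4 RoutingService host configuration (addresses and names are placeholders). It also shows why the router's own metadata doesn't help clients: the router exposes only the generic IRequestReplyRouter contract, so the internal services' WSDL still has to be supplied or rewritten using one of the options above.
<!-- Router host config: one generic router endpoint, with a filter table forwarding everything to a named client endpoint for the internal service -->
<system.serviceModel>
  <services>
    <service name="System.ServiceModel.Routing.RoutingService"
             behaviorConfiguration="routerBehavior">
      <endpoint address="" binding="basicHttpBinding"
                contract="System.ServiceModel.Routing.IRequestReplyRouter" />
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="routerBehavior">
        <routing filterTableName="routingTable" />
        <serviceMetadata httpGetEnabled="true" />
      </behavior>
    </serviceBehaviors>
  </behaviors>
  <routing>
    <filters>
      <filter name="toInternal" filterType="MatchAll" />
    </filters>
    <filterTables>
      <filterTable name="routingTable">
        <add filterName="toInternal" endpointName="internalService" />
      </filterTable>
    </filterTables>
  </routing>
  <client>
    <endpoint name="internalService"
              address="http://internal-host/InternalService.svc"
              binding="basicHttpBinding"
              contract="*" />
  </client>
</system.serviceModel>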
The same "router service" can also be used to get the individual WSDL for internal services behind the router.
Check out this thread
Have you considered using a simple HTTP proxy instead? All WCF calls using REST or SOAP are, at their core, HTTP requests. It seems like the routing functionality (which I assume you are basing on hostname, URL path, or parameters) could be performed by proxying the HTTP request without needing to understand its contents. ASP.NET will do a fairly good job of sanitizing incoming requests on its own, but you could always add additional custom filtering as necessary.