I would like to document the gRPC API in my Spring Boot application. The sources I have found on the web suggest mapping the endpoints to REST and then generating documentation from that (e.g. Swagger). I would like to find out whether there is a simpler way (without REST) to document gRPC endpoints, something like Swagger, so that the documentation can be regenerated "automatically" whenever the API changes.
I am experimenting with Mule API management these days. What I have come to know is that we can deploy our API to one of these:
A Mule Runtime
An API Gateway
In the documentation, it is said that we should go with option 1 when we want to separate the implementation of our API from the orchestration. What does that mean?
Can anyone please explain in detail?
Policy management from the API Platform and analytics generation can be achieved only by using a correctly configured API Gateway, which is a superset of Mule EE (the current version is API Gateway 2.1.0, which contains Mule EE 3.7.2).
Depending on your architecture you may have different solutions.
For example:
Proxy running on the API Gateway, implementation API running somewhere else (e.g. Mule EE/CE, Tomcat, a COBOL server, etc.)
Proxy and implementation API running on the same API Gateway
Implementation API managed directly from API Platform without using the autogenerated proxies
HTH :-)
Not exactly sure what they mean there, because on this page: https://developer.mulesoft.com/docs/display/current/API+Gateway they also mention this:
Note that the API Gateway, because it acts as an orchestration layer for services and APIs implemented elsewhere, is technology-agnostic. You can proxy non-Mule services or APIs of any kind, as long as they expose HTTP/HTTPS, VM, Jetty, or APIkit Router endpoints. You can also proxy APIs that you design and build with API Designer and APIkit to the API Gateway to separate the orchestration from the implementation of those APIs.
So both methods technically allow you to separate the API from the orchestration, as your API Gateway application could simply proxy another Mule application elsewhere that performs the orchestration. But my understanding of the two options is:
The API Gateway is a limited offering that allows you to use a subset of Mule's connectors, transports and modules, such as APIkit and HTTP. It lets you expose an API and then use HTTP to connect to whatever backend systems you want, acting as a proxy and performing the orchestration in the API layer.
The Mule runtime option gives you much more flexibility: it allows you to compose as many applications as you want using the full range of connectors etc., and to separate the different aspects of your applications into as many layers as you want, as separately deployable entities that you can deploy to on-premise standalone instances, CloudHub, etc.
Ryan's answer is more or less on the mark; however, if you do choose the Mule ESB offering you will lose out on the API management and governance functionality that the API Gateway provides OOTB.
These include:
Lets you enforce runtime policies and collect data for analytics
Applies policies to APIs or endpoints around security, throttling, rate limiting, and more
Extends PingFederate to serve as identity management and OAuth provider for your APIs
Lets you require or restrict certain behaviors in a few simple steps
Lets you add or remove policies at runtime with no API downtime
Manages access to your API by issuing contract keys
Monitors the API to confirm it is meeting all contract terms
Ensures compliance with service level agreements (SLAs)
In my opinion, go with API Gateway/Manager if your API will be consumed by third-party developers with whom you might not have many interactions (think public APIs); otherwise Mule ESB should be good.
You should also be able to migrate from Mule ESB to API Manager (and vice versa) fairly easily if you need to, so I do not think you will get locked into your decision.
PS: Content copied from here
I have a requirement to connect Mule ESB with Hybris. I didn't find a Hybris connector provided by Mule (and didn't find any sample examples either). This is the first time I am going to try Mule with Hybris. Please let me know the steps or an efficient way to connect to Hybris.
Is the Hybris URL enough to connect? Please share your thoughts on how to implement this. Thanks in advance.
Hybris provides many ways to integrate with their platform. If you have a login you can access their docs, which detail examples of integration with the platform, including JMS and the Platform Web Services.
There is no connector supplied by Mule, but you can access their Platform Web Services, which are implemented as an HTTP RESTful API, using the Mule HTTP transport or by building your own connector using the Mule DevKit.
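If you go the DevKit route, a connector is basically an annotated Java class whose @Processor methods become message processors in your flows. A very rough skeleton to illustrate the shape of it; the base URL, the /products resource and the get-product operation are assumptions for illustration, not part of any real hybris connector:

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    import org.mule.api.annotations.Configurable;
    import org.mule.api.annotations.Connector;
    import org.mule.api.annotations.Processor;

    @Connector(name = "hybris", friendlyName = "Hybris")
    public class HybrisConnector {

        // Base URL of the hybris Platform Web Services (placeholder value, set in the connector config)
        @Configurable
        private String baseUrl;

        // Usable from a flow as <hybris:get-product productCode="#[payload]"/>
        @Processor
        public String getProduct(String productCode) throws IOException {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(baseUrl + "/products/" + productCode).openConnection();
            conn.setRequestProperty("Accept", "application/xml");
            StringBuilder body = new StringBuilder();
            try (BufferedReader reader =
                         new BufferedReader(new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    body.append(line);
                }
            }
            return body.toString();
        }

        public String getBaseUrl() {
            return baseUrl;
        }

        public void setBaseUrl(String baseUrl) {
            this.baseUrl = baseUrl;
        }
    }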
The hybris wiki has pages dedicated to most of the endpoints and the request/response formats. You can view this here if you have a login: https://wiki.hybris.com/display/release5/WebService+API+-+Reference
Also, in your hybris installation there are a bunch of examples in /bin/ext-platform-optional/platformwebservices/src... and /bin/ext-platform-optional/platformwebservices/testsrc which show the actual web service implementations and their test cases using the Jersey client.
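For a quick smoke test from plain Java, a Jersey 1.x client call against one of those web services would look roughly like this; the URL, resource path and credentials are placeholders, so check the wiki page above for the real resources:

    import com.sun.jersey.api.client.Client;
    import com.sun.jersey.api.client.ClientResponse;
    import com.sun.jersey.api.client.WebResource;
    import com.sun.jersey.api.client.filter.HTTPBasicAuthFilter;

    public class HybrisRestSmokeTest {

        public static void main(String[] args) {
            Client client = Client.create();
            // The Platform Web Services are protected; these credentials are placeholders
            client.addFilter(new HTTPBasicAuthFilter("admin", "nimda"));

            // Placeholder URL and path for a single product resource
            WebResource resource =
                    client.resource("https://localhost:9002/ws410/rest/products/12345");

            ClientResponse response = resource
                    .accept("application/xml")
                    .get(ClientResponse.class);

            System.out.println(response.getStatus());
            System.out.println(response.getEntity(String.class));
        }
    }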
After going through this link on API versioning, I am trying to pass the version through headers and deploy it on AWS, but it looks like AWS does not support any custom vendor-specific MIME types (link here).
Any idea how to achieve versioning in this scenario where you do not have control over the MIME type... or any insights on how to do API versioning when using AWS?
My suggestion for a cloud-based service is "v1.api.mysite.com". It provides good flexibility. More details and reasoning here: http://timanovsky.wordpress.com/2013/01/28/api-versioning-scheme-for-cloud-rest-services/
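If you go with that scheme, the backend can pick the version off the Host header before any other routing happens. A minimal servlet-filter sketch; the attribute name and the assumption that the first host label carries the version are mine, not an AWS feature:

    import java.io.IOException;

    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;

    public class ApiVersionFilter implements Filter {

        @Override
        public void init(FilterConfig filterConfig) {
            // nothing to configure
        }

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            // For a host like "v1.api.mysite.com", the first label is the API version
            String version = request.getServerName().split("\\.")[0];
            request.setAttribute("apiVersion", version);
            chain.doFilter(req, res);
        }

        @Override
        public void destroy() {
            // nothing to clean up
        }
    }

Downstream code (or a routing layer) can then read the "apiVersion" request attribute and dispatch to the right implementation.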
Has anybody implemented workflow with Apache Sling before? How easy is it to integrate a third-party workflow engine such as Activiti with Apache Sling?
I don't know Activiti, but if you can package it in an OSGi bundle, it should be possible to use it in Sling.
The JCR and Sling observation mechanisms are very helpful in integrating with workflow systems, as they can call back into your code when content is modified in the JCR repository, in a very decoupled event-based way.
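For example, a plain JCR observation listener can kick off a workflow whenever new content shows up under a path. A rough sketch, assuming Activiti's RuntimeService is available from its ProcessEngine and that "orderApproval" is a hypothetical process definition key:

    import java.util.Collections;

    import javax.jcr.RepositoryException;
    import javax.jcr.observation.Event;
    import javax.jcr.observation.EventIterator;
    import javax.jcr.observation.EventListener;

    import org.activiti.engine.RuntimeService;

    // Register against the repository, e.g.:
    // session.getWorkspace().getObservationManager().addEventListener(
    //         listener, Event.NODE_ADDED, "/content/orders", true, null, null, false);
    public class WorkflowTriggerListener implements EventListener {

        private final RuntimeService runtimeService; // obtained from the Activiti ProcessEngine

        public WorkflowTriggerListener(RuntimeService runtimeService) {
            this.runtimeService = runtimeService;
        }

        @Override
        public void onEvent(EventIterator events) {
            while (events.hasNext()) {
                Event event = events.nextEvent();
                try {
                    // Start one process instance per newly added node, passing its path along
                    runtimeService.startProcessInstanceByKey(
                            "orderApproval",
                            Collections.<String, Object>singletonMap("contentPath", event.getPath()));
                } catch (RepositoryException e) {
                    // event.getPath() can fail; log and carry on in a real listener
                }
            }
        }
    }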