Distributed tracing library - Custom trace id - spring-cloud-sleuth

As part of our Spring application, we are using Spring Sleuth to inject the trace id & span id into requests. This works neatly with SLF4J via MDC integration, propagating both ids to the logs as well.
But we are running into issues because our organization does not use the B3 headers that Sleuth appears to be tightly coupled with. So we are looking at alternatives, such as a custom request header like "x-trace-id" that could be injected into the traces.
Our traceability is still via centralized logging (Splunk). We do not yet have a centralized collector like Zipkin, so sampling is not relevant yet. The immediate use case is to ensure log traceability; once we have a central collector for tracing, we hope sampling will be available out of the box.

Sleuth is not tightly coupled with B3; it supports AWS, B3, W3C, and custom propagation (B3 is the default): see the docs about Context Propagation.
You can change the context propagation mechanism, see the docs: How to Change The Context Propagation Mechanism?
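For your "x-trace-id" case, here is a minimal sketch of a custom propagator, adapted from the CUSTOM propagation example in the Sleuth reference docs. It assumes Spring Cloud Sleuth 3.x on Brave, with spring.sleuth.propagation.type=CUSTOM set in application.properties; the x-trace-id header name and the 64-bit lower-hex id format are assumptions taken from your question, and reusing the trace id as the span id is a simplification for a single-header scheme:

// Sketch adapted from the Sleuth docs' CUSTOM propagation example.
// Requires spring.sleuth.propagation.type=CUSTOM in application.properties.
import java.util.Collections;
import java.util.List;
import org.springframework.stereotype.Component;
// import path for StringPropagationAdapter may differ across Brave versions
import brave.internal.propagation.StringPropagationAdapter;
import brave.propagation.Propagation;
import brave.propagation.TraceContext;
import brave.propagation.TraceContextOrSamplingFlags;

@Component
class XTraceIdPropagator extends Propagation.Factory implements Propagation<String> {

    // The header name your organization uses (an assumption from the question).
    private static final String TRACE_ID_HEADER = "x-trace-id";

    @Override
    public List<String> keys() {
        return Collections.singletonList(TRACE_ID_HEADER);
    }

    @Override
    public <R> TraceContext.Injector<R> injector(Setter<R, String> setter) {
        // Write the current trace id into outgoing requests.
        return (context, request) -> setter.put(request, TRACE_ID_HEADER, context.traceIdString());
    }

    @Override
    public <R> TraceContext.Extractor<R> extractor(Getter<R, String> getter) {
        return request -> {
            String traceId = getter.get(request, TRACE_ID_HEADER);
            if (traceId == null) {
                return TraceContextOrSamplingFlags.EMPTY; // no header: start a new trace
            }
            // Assumes a 64-bit lower-hex id; with a single header we reuse the
            // trace id as the span id, which is a simplification.
            long id = Long.parseUnsignedLong(traceId, 16);
            return TraceContextOrSamplingFlags.create(
                    TraceContext.newBuilder().traceId(id).spanId(id).build());
        };
    }

    @Override
    public <K> Propagation<K> create(KeyFactory<K> keyFactory) {
        return StringPropagationAdapter.create(this, keyFactory);
    }
}

Sleuth's MDC integration is unchanged by this, so the extracted id should still land in the logs for Splunk-side correlation, and sampling should remain available later (e.g. via spring.sleuth.sampler.probability) when you add a collector.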

Related

Authenticating subject with shiro in spring application that uses atmosphere for sockets

I have a Spring-boot web application that uses Apache Shiro for security management. The web application also uses the Atmosphere framework for socket communication.
While working with it, I have a requirement to authenticate the user who is currently logged in when a /socket request reaches Atmosphere. However, when trying to access the Shiro Subject I get the following error:
No SecurityManager accessible to the calling code, either bound to the org.apache.shiro.util.ThreadContext or as a vm static singleton. This is an invalid application configuration.
I also have a filter added to my SecurityManager for /socket. Regardless, I continue to get the above error when I try to authenticate the user with Atmosphere for socket connections.
I have searched quite a lot on the web and haven't found an answer that explains what's happening thoroughly. I found many posts mentioning that the thread pool used by Atmosphere is different from the one allocated to servlet requests; thus, async requests that reach Atmosphere have no context of the original user. I also read a workaround here, which is quite old, and tried some of the things mentioned in the comments too.
Note: I am a newbie to the Spring, Shiro and Atmosphere frameworks. I understand things from a more systems-level perspective.
I would highly appreciate some explanation (or a link that might help) as to what is happening with each of these things and why it produces the error above. Most of the online material I have read regarding this seems very vague and does not offer a conclusive answer.
If I am not mistaken, when the Spring application launches it also loads the Shiro- and Atmosphere-related classes. Requests arrive at the server and are delegated to a class based on annotations. Spring stores per-request info (some cookie or session token) and subsequent requests are mapped according to it. However, the information stored between requests that hit the Atmosphere-related endpoints (e.g. onRequest) and the rest is not shared, hence I cannot use the same Subject info.
I have sincerely searched a lot trying to understand this, and would appreciate an elaborate explanation. I hope this question is not regarded as unsuitable for the forum.
Thank you
Shabir
Take a look at the docs for Thread Association in Shiro.
Your assumption seems correct (guessing, as I've never used Atmosphere): the thread pools are different. There are generally two ways to deal with this. Some frameworks allow you to add data to a "context" and pull that data back out from your running thread (much like a Servlet or Spring context). The other option, assuming you have access to manage the Atmosphere threads, is to wrap them in:
Subject.execute(...)
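For example, a minimal sketch of Shiro's thread-association API; the atmosphereExecutor parameter below is a hypothetical stand-in for wherever Atmosphere lets you submit work:

// Sketch of Shiro's thread-association API from the docs linked above.
import java.util.concurrent.ExecutorService;
import org.apache.shiro.SecurityUtils;
import org.apache.shiro.subject.Subject;

public class SocketAuthExample {

    public void onSocketRequest(ExecutorService atmosphereExecutor) {
        // Capture the Subject on a thread where it is still bound
        // (e.g. the original servlet request thread).
        Subject subject = SecurityUtils.getSubject();

        // Option 1: run work on the current thread with the Subject bound.
        subject.execute(() -> {
            SecurityUtils.getSubject().checkRole("user"); // hypothetical role; resolves correctly here
        });

        // Option 2: wrap a task so the Subject travels with it to another pool.
        Runnable work = () -> SecurityUtils.getSubject().getPrincipal();
        atmosphereExecutor.submit(subject.associateWith(work));
    }
}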

WebFlux web and webflux starter

I created a Spring Boot 2.0.0.M7 project with the webflux starter because I want to use all the asynchronous and non-blocking capabilities.
I added server.servlet.context-path, but it does not work unless I also add the web starter.
If I add both starters, can I run into issues with the non-blocking functionality?
I executed some stress tests with Gatling and received the same scores whether the web starter was present or not.
Any help with this?
If you add both spring-boot-starter-web and spring-boot-starter-webflux to your application, Spring Boot will configure it as a Spring MVC app.
This is intentional, as many Spring MVC applications will pull in the webflux dependency to leverage the new WebClient in their MVC apps. Also, as of Spring Framework 5, Spring MVC knows how to handle a few cases with Flux at the controller level.
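For instance, here is a hypothetical controller showing that case: even when Boot configures the app as a Spring MVC app (both starters present), a Flux return value still streams to the client (names here are illustrative):

// Hypothetical example: a Spring MVC controller returning a reactive type;
// MVC streams it as server-sent events.
import java.time.Duration;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@RestController
class TickController {

    @GetMapping(value = "/ticks", produces = "text/event-stream")
    Flux<Long> ticks() {
        return Flux.interval(Duration.ofSeconds(1)).take(5); // emits 0..4, one per second
    }
}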
You can always force your choice like this:
SpringApplication app = new SpringApplication(MyApplication.class);
// Force the reactive runtime even when both starters are on the classpath
app.setWebApplicationType(WebApplicationType.REACTIVE);
app.run(...);
In your case, this is not about forcing a choice but rather using something that's not supported in WebFlux.
The server.servlet.context-path configuration property is Servlet-specific, so it won't work with WebFlux. Currently, Spring Boot does not support war deployment or multiple web contexts for WebFlux applications, so there's no point in offering such a property.
The runtime model difference between the "Servlet-based" and reactive runtimes with Spring can be quite subtle, and I encourage you to watch a talk that describes those choices. The short answer is: if you're using Spring MVC with async types (DeferredResult, Flux or SseEmitter), request handling will be asynchronous, but reading and writing will still be blocking.
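To make that MVC async style concrete, a small hypothetical sketch; the request thread is released, but the eventual response is still written through blocking Servlet I/O:

// Hypothetical Spring MVC async endpoint using DeferredResult.
import java.util.concurrent.CompletableFuture;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.context.request.async.DeferredResult;

@RestController
class AsyncController {

    @GetMapping("/deferred")
    DeferredResult<String> deferred() {
        DeferredResult<String> result = new DeferredResult<>();
        // Simulate slow work off the request thread (error handling elided).
        CompletableFuture.supplyAsync(() -> "done")
                .whenComplete((value, ex) -> result.setResult(value));
        return result;
    }
}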
Properly benchmarking this is quite hard, but the results you're seeing are somewhat expected. Running server and client locally, with no latency involved, and looking only at raw throughput: none of those conditions favour the reactive model, which has a concurrency cost. If anything, this benchmark shows that the reactive stack is already quite optimized, even for non-ideal use cases!

Mule API AutoDiscovery vs Mule API GatewayProxy

When should we use an API Proxy versus API AutoDiscovery? After implementing both, I found that AutoDiscovery can also apply policies and analytics, which the API Gateway does; the only thing is I cannot use a different URL when using AutoDiscovery. The main advantage of an API Proxy would be if my Gateway application and Mule implementation project are in different subnets, so if my Gateway server is compromised, no one can get to my implementation network.
But if both the interface and the implementation are in the same network, and the purpose is just to call a REST endpoint, should we not go with API AutoDiscovery?
Problems with Mule API Gateway Proxy
No defined way of exception handling if we are not able to reach the implementation server.
No defined way of moving the Proxy Application across environments (CI/CD)
Extra HTTP hop; this can be acceptable if the above 2 issues have a defined way of being handled.
Mule API AutoDiscovery
Since this is in the Mule Application, standard Exception Handling.
CI/CD is defined as it is the Mule Implementation Project.
No Extra HTTP Hop.
The only thing here is that we cannot change the implementation URL; that is the only tightly coupled part.
Can someone provide insight on when we should go for the API Gateway vs AutoDiscovery? Also, is there currently a way of doing exception handling in an API Gateway project, and of handling CI/CD?
API Autodiscovery is required if you plan to apply/unapply a policy to a particular endpoint, and/or take advantage of usage statistics in the context of API Platform. API Autodiscovery is a piece of metadata that links an HTTP(S) listener to its counterpart API version on API Manager.
For example:
<api-platform-gw:api id="api.basic.path" apiName="My API" version="1.0.0" flowRef="basic.path" />

<flow name="basic.path">
    <http:listener config-ref="a.http.config" />
    <set-payload value="Endpoint successfully called." />
</flow>
Autogenerated Mule Proxies do have autodiscovery defined. You can also develop your own project and define the corresponding autodiscovery either by using the Studio UI, or handling the XML config directly.
The proxies are meant to be used when your implementation backend is not a Mule application (for example, an existing REST-based API hosted on a Tomcat server). You can enrich the logic with custom exception handling, among other things, on the Mule side. If you'd like better exception handling in the implementation backend, you will have to implement it there.
If your implementation backend is a Mule based application, using a proxy is not required. For most use cases, adding the corresponding autodiscovery element in the configuration file will do the trick.

Mule API - deploy to a Mule Runtime

I am experimenting with Mule API management these days. What I have come to know is that we can deploy our API to one of these:
A Mule Runtime
An API Gateway
In the documentation, it is said that we should go with option 1 when we want to separate the implementation of our API from the orchestration. What does that mean?
Can any one please explain in detail?
Policy management from API Platform and analytics generation can be achieved only by using a correctly configured API Gateway, which is a superset of Mule EE (the current version is API Gateway 2.1.0, which contains Mule EE 3.7.2).
Depending on your architecture you may have different solutions.
For example:
Proxy running on API Gateway, implementation API running somewhere else (e.g. Mule EE/CE, Tomcat, a COBOL server, etc.)
Proxy and implementation API running on the same API Gateway
Implementation API managed directly from API Platform without using the autogenerated proxies
HTH :-)
Not exactly sure what they mean there, because on this page: https://developer.mulesoft.com/docs/display/current/API+Gateway they also mention this:
Note that the API Gateway, because it acts as an orchestration layer for services and APIs implemented elsewhere, is technology-agnostic. You can proxy non-Mule services or APIs of any kind, as long as they expose HTTP/HTTPS, VM, Jetty, or APIkit Router endpoints. You can also proxy APIs that you design and build with API Designer and APIkit to the API Gateway to separate the orchestration from the implementation of those APIs.
So both methods technically allow you to separate the API from the orchestration, as your API gateway application could simply proxy another Mule application elsewhere that performs the orchestration. But my understanding of the two options is:
The API Gateway is a limited offering that allows you to use a subset of Mule's connectors, transports and modules, such as APIkit and HTTP. It lets you expose an API and then use HTTP to connect to whatever backend systems you want, acting as a proxy and performing the orchestration in the API layer.
Using the Mule runtime option gives you much more flexibility: it allows you to compose as many applications as you want using the full range of connectors, and to separate the different aspects of your applications into as many layers as you want, as separately deployable entities that you can deploy to on-premise standalone instances, CloudHub, etc.
@Ryan's answer is more or less on the mark; however, if you do choose the Mule ESB offering you will lose out on the API management and governance functionality that the API Gateway provides out of the box.
These include:
Lets you enforce runtime policies and collect data for analytics
Applies policies to APIs or endpoints around security, throttling, rate limiting, and more
Extends PingFederate to serve as identity management and OAuth provider for your APIs
Lets you require or restrict certain behaviors in a few simple steps
Lets you add or remove policies at runtime with no API downtime
Manages access to your API by issuing contract keys
Monitors the API to confirm it is meeting all contract terms
Ensures compliance with service level agreements (SLAs)
In my opinion, go with the API Gateway/Manager if your API will be consumed by third-party developers with whom you might not have many interactions (think public APIs); otherwise Mule ESB should be good.
You should also be able to migrate from Mule ESB to API Manager (and vice versa) fairly easily if you need to, so I do not think you will get locked into your decision.
PS: Content copied from here

How do I implement basic API gateway

I am working on a school project, and my task is to make a simple API gateway which can be placed between any third-party API and the end users. The gateway can be used for defining usage limits of the API or for doing some security analysis. I am totally new to this; I know the basic concept of an API gateway, but I don't know how to implement it using Java.
Can anyone please give me a starting point for implementing an API gateway?
And which frameworks should I use, and for what purpose?
Thanks,
Nixit Patel
In a nutshell, an API gateway exposes public APIs, applies policies (authentication, typically via OAuth; throttling; adherence to the defined API; caching; etc.) and then, if the call is allowed, optionally applies transformation rules and forwards it to the backend. When the backend responds, the gateway (after optionally applying transformation rules again) forwards the response to the original caller. Plus, there would typically be an API management solution around it providing a subscriber portal, user management, analytics, etc.
So basically any web service framework would work as a quick DIY solution (a bare-JDK sketch follows below).
You can also use the plugin model of an open-source load balancer such as NGINX.
Or take an open-source API Gateway to learn from it - e.g. WSO2 API Manager (the easiest way to see it in action is the hosted version: WSO2 API Cloud)
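If it helps as a starting point, here is a rough, framework-free sketch of the forwarding-plus-limits loop described above, using only the JDK's built-in HTTP server. The backend URL, port, and limit are made-up values; a real gateway would also need auth, header forwarding, request bodies, and so on:

// DIY gateway sketch using only the JDK: applies a naive per-client request
// limit, then forwards requests to a backend and relays the response.
// BACKEND, the port, and LIMIT are illustrative assumptions.
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class MiniGateway {

    static final String BACKEND = "http://localhost:9000"; // hypothetical upstream API
    static final int LIMIT = 100;                          // naive lifetime cap per client IP
    static final Map<String, AtomicInteger> COUNTERS = new ConcurrentHashMap<>();

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            try {
                // Usage limiting: count requests per client address.
                String client = exchange.getRemoteAddress().getAddress().getHostAddress();
                int seen = COUNTERS.computeIfAbsent(client, k -> new AtomicInteger())
                        .incrementAndGet();
                if (seen > LIMIT) {
                    exchange.sendResponseHeaders(429, -1); // Too Many Requests, no body
                    return;
                }
                // Forward the call to the backend and relay status + body.
                URL url = new URL(BACKEND + exchange.getRequestURI());
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod(exchange.getRequestMethod());
                int status = conn.getResponseCode();
                InputStream body = status < 400 ? conn.getInputStream() : conn.getErrorStream();
                exchange.sendResponseHeaders(status, 0); // 0 = unknown length (chunked)
                try (OutputStream out = exchange.getResponseBody()) {
                    if (body != null) {
                        body.transferTo(out); // Java 9+
                    }
                }
            } finally {
                exchange.close();
            }
        });
        server.start();
        System.out.println("Gateway on :8080 -> " + BACKEND);
    }
}

From there, each concern (auth, transformation, analytics) becomes a layer around that loop; reading the source of an open-source gateway like the WSO2 one mentioned above shows production-grade versions of each step.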