How to put a Mono's value into the current Context in Reactor - spring-webflux

I get a Mono<*> from a reactive repository and want to put its value into the context, but I could not find the right API.
Kotlin code:
val user: Mono<User> = userRepository.findById(userId)
How do I put its value into the Reactor context, like this:
userRepository.findById(userId).doOnNext {
    // get the current context...
    context["USER"] = it
}

Context represents metadata attached at the bottom of the chain, at subscription time. Each individual subscription to a Mono can have a different Context, and it is meant to let the downstream subscription communicate metadata to upstream operators, not the reverse.
I would greatly encourage you to pursue transmitting the value downstream as a first-class onNext value (for example, using map, flatMap, or a Tuple2) rather than attempting to do that by abusing the Context.
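For illustration, here is a minimal sketch of that downstream-first approach in Kotlin. The orderRepository, Order type, and findAllByUserId are hypothetical; only userRepository.findById comes from the question:

import reactor.core.publisher.Mono

// Carry the user downstream as part of the onNext signal instead of
// stashing it in the Context: flatMap keeps both values together.
fun userWithOrders(userId: String): Mono<Pair<User, List<Order>>> =
    userRepository.findById(userId)
        .flatMap { user ->
            orderRepository.findAllByUserId(user.id) // hypothetical Flux<Order> query
                .collectList()
                .map { orders -> user to orders }    // the user travels with the result
        }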

Related

Do I need to destroy a StateFlow object created in a view model manually?

Code A is from an official Android sample project here.
The author creates a val uiState, which is a MutableStateFlow. I know that MutableStateFlow is a hot flow, and that it occupies system resources once created.
Do I need to destroy a StateFlow object created in a view model myself, or will the system release it automatically when the app no longer needs it?
Code A
class InterestsViewModel(
    private val interestsRepository: InterestsRepository
) : ViewModel() {
    // UI state exposed to the UI
    private val _uiState = MutableStateFlow(InterestsUiState(loading = true))
    val uiState: StateFlow<InterestsUiState> = _uiState.asStateFlow()
    ...
}
You don't need to manually destroy the flow after you stop using it.
You mentioned that MutableStateFlow is a "hot" flow. When we talk about "hot" and "cold" flows we're really talking about how the values get emitted from the flow.
In a hot flow, there's typically a second "producer" coroutine that's doing the work to emit values. When you try to collect the flow, you won't receive any values unless the producer is actively emitting them.
In a cold flow, collecting the flow also does the work that produces the values. There's no need for a separate producer coroutine; all the work is done by the consumer.
In that sense, it's correct to call a MutableStateFlow a hot flow. To collect a value from it, you have to first emit a value to it from somewhere else.
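As a rough sketch of that distinction (the values and the delay are made up for illustration):

import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.MutableStateFlow
import kotlinx.coroutines.flow.flow

// Cold: the collector itself runs this block; every collection redoes the work.
val cold = flow {
    delay(100)     // pretend this is expensive work
    emit("loaded") // produced on the collector's own coroutine
}

// Hot: a separate producer pushes values; collectors only observe them.
val hot = MutableStateFlow("initial")
fun produce() {
    hot.value = "updated" // some other coroutine/thread emits
}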
However, that doesn't need to imply that the flow holds any resources. The flow is really just acting as a communication channel between the coroutines that are producing and consuming values. Once those coroutines are no longer using or referencing the flow, it becomes eligible for garbage collection just like any normal object.
It's a normal object and as such will be cleaned up once there are no references to it (so when both the observing view and the ViewModel cease to exist).

Is there a way to dynamically change destination in Spring Cloud Stream Reactive method?

I know that if I implement a Spring Cloud Stream function in the reactive way, I can't send to a DLQ the way the traditional imperative approach does.
To do this, I am trying to manually change the Destination in the MessageHeader so that the message goes to the DLQ Topic.
However, if an error occurs, I want to send a message to the zombie Topic using the onErrorContinue method, but that doesn't work.
Shouldn't I produce it in onErrorContinue?
input.flatMap { sellerDto ->
    // do something...
}.onErrorContinue { throwable, source ->
    log.error("error occurred!! ${throwable.message}")
    source as Message<*>
    Flux.just(
        MessageBuilder.withPayload(String(source.payload as ByteArray))
            .setHeader("spring.cloud.stream.sendto.destination", "zombie")
            .build()
    )
}
No, since with a reactive function the unit of work is the entire stream, not an individual item. I provide more details here.
That said, you can inject StreamBridge, which is always available as a bean, and use it inside your filter or any other applicable reactive operator. Basically: streamBridge.send("destination", message).
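A minimal sketch of that approach, assuming a hypothetical SellerDto, a handle() processing step, and a binding named "zombie"; per-element failures are diverted with onErrorResume and sent explicitly via StreamBridge rather than emitted from onErrorContinue:

import org.springframework.cloud.stream.function.StreamBridge
import org.springframework.messaging.Message
import org.springframework.stereotype.Component
import reactor.core.publisher.Flux
import reactor.core.publisher.Mono

@Component
class SellerProcessor(private val streamBridge: StreamBridge) {

    fun process(input: Flux<Message<SellerDto>>): Flux<SellerDto> =
        input.flatMap { message ->
            handle(message) // hypothetical per-message processing returning Mono<SellerDto>
                .onErrorResume {
                    streamBridge.send("zombie", message) // route the failure out-of-band
                    Mono.empty()                         // drop the element, keep the stream alive
                }
        }
}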

WebFlux Controllers Returning Flux and Backpressure

In Spring WebFlux I have a controller similar to this:
@RestController
@RequestMapping("/data")
public class DataController {

    @GetMapping(produces = MediaType.APPLICATION_JSON_VALUE)
    public Flux<Data> getData() {
        return <data from database using reactive driver>
    }
}
What exactly is subscribing to the publisher?
What (if anything) is providing backpressure?
For context I'm trying to evaluate if there are advantages to using Spring WebFlux in this specific situation over Spring MVC.
Note: I am not a developer of the Spring framework, so any comments are welcome.
What exactly is subscribing to the publisher?
It is a long-lived subscription to the port (the server initialization itself). Accordingly, ReactorHttpServer has the method:
@Override
protected void startInternal() {
    DisposableServer server = this.reactorServer.handle(this.reactorHandler).bind().block();
    setPort(((InetSocketAddress) server.address()).getPort());
    this.serverRef.set(server);
}
The Subscriber is the bind method, which (as far as I can see) does request(Long.MAX_VALUE), so no back pressure management here.
The important part for request handling is the method handle(this.reactorHandler). The reactorHandler is an instance of ReactorHttpHandlerAdapter. Further up the stack (within the apply method of ReactorHttpHandlerAdapter) is the DispatcherHandler.class. The java doc of this class starts with " Central dispatcher for HTTP request handlers/controllers. Dispatches to registered handlers for processing a request, providing convenient mapping facilities.". It has the central method:
@Override
public Mono<Void> handle(ServerWebExchange exchange) {
    if (this.handlerMappings == null) {
        return createNotFoundError();
    }
    return Flux.fromIterable(this.handlerMappings)
            .concatMap(mapping -> mapping.getHandler(exchange))
            .next()
            .switchIfEmpty(createNotFoundError())
            .flatMap(handler -> invokeHandler(exchange, handler))
            .flatMap(result -> handleResult(exchange, result));
}
Here, the actual request processing happens. The response is written within handleResult. It now depends on the actual server implementation, how the result is written.
For the default server, i.e. Reactor Netty, it will be a ReactorServerHttpResponse. Here you can see the method writeWithInternal, which takes the publisher result of the handler method and writes it to the underlying HTTP connection:
@Override
protected Mono<Void> writeWithInternal(Publisher<? extends DataBuffer> publisher) {
    return this.response.send(toByteBufs(publisher)).then();
}
One implementation of NettyOutbound.send( ... ) is reactor.netty.channel.ChannelOperations. For your specific case of a Flux return value, this implementation manages the NIO within MonoSendMany. That class subscribes with a SendManyInner, which manages backpressure by implementing Subscriber: its onSubscribe does request(128). I guess Netty internally uses the TCP ACK to signal successful transmission.
So,
What (if anything) is providing backpressure?
... yes, backpressure is provided, e.g. by SendManyInner, though other implementations exist as well.
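To make the bounded-demand idea concrete, here is a small standalone Kotlin sketch in the same spirit (the request sizes are illustrative):

import org.reactivestreams.Subscription
import reactor.core.publisher.BaseSubscriber
import reactor.core.publisher.Flux

fun main() {
    Flux.range(1, 1_000)
        .subscribe(object : BaseSubscriber<Int>() {
            override fun hookOnSubscribe(subscription: Subscription) {
                request(128) // bounded initial demand instead of Long.MAX_VALUE
            }

            override fun hookOnNext(value: Int) {
                // process the value, then signal capacity for one more
                request(1)
            }
        })
}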
For context I'm trying to evaluate if there are advantages to using Spring WebFlux in this specific situation over Spring MVC.
I think it is definitely worth evaluating. As for performance, the result will likely depend on the number of concurrent requests and maybe also on the type of your Data class. Generally speaking, WebFlux is usually the preferred choice for high-throughput, low-latency situations, and we generally see better hardware utilization in our environments. Assuming you take your data from a database, you will probably get the best results with a database driver that also supports reactive access. Besides performance, backpressure management is always a good reason to have a look at WebFlux. Since we adopted WebFlux, our data platform has had no more stability problems (not to claim there are no other ways to build a stable system, but here many issues are solved out of the box).
As a side note: I recommend having a closer look at Schedulers; we recently gained 30% CPU time by choosing the right one for slow DB accesses.
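For example, a minimal sketch of moving a slow or blocking lookup off the event loop (blockingRepository and Data are placeholder names):

import reactor.core.publisher.Flux
import reactor.core.scheduler.Schedulers

fun data(): Flux<Data> =
    Flux.defer { Flux.fromIterable(blockingRepository.findAll()) } // hypothetical blocking call
        .subscribeOn(Schedulers.boundedElastic())                  // keep the Netty event loop free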
EDIT:
In https://docs.spring.io/spring/docs/current/spring-framework-reference/web-reactive.html#webflux-fn-handler-functions the reference documentation explicitly says:
ServerRequest and ServerResponse are immutable interfaces that offer JDK 8-friendly access to the HTTP request and response. Both request and response provide Reactive Streams back pressure against the body streams.
What exactly is subscribing to the publisher?
The framework (Spring, in this case).
In general, you shouldn't subscribe in your own application - the framework should be subscribing to your publisher when necessary. In the context of spring, that's whenever a relevant request hits that controller.
What (if anything) is providing backpressure?
In this case, it's only restricted by the speed of the connection (I believe WebFlux will look at the underlying TCP layer) and then request data as required. Whether your upstream Flux listens to that backpressure, though, is another story - it may, or it may just flood the consumer with as much data as it can.
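If you need to bound an over-producing upstream yourself, Reactor has explicit operators for it; a quick sketch (the numbers are illustrative):

import reactor.core.publisher.Flux

val tamed: Flux<Int> =
    Flux.range(1, 1_000_000)
        .limitRate(128)            // cap the demand signalled upstream
        .onBackpressureBuffer(256) // buffer when the producer ignores demand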
For context I'm trying to evaluate if there are advantages to using Spring WebFlux in this specific situation over Spring MVC.
The main advantage is being able to hold huge numbers of connections open with only a few threads - so no overhead of context switching. (That's not the sole advantage, but most of the advantages generally boil down to that point.) Usually, this is only an advantage worth considering if you need to hold in the region of thousands of connections open at once.
The main disadvantage is that reactive code looks very different from standard Java code, and is usually necessarily more complex as a result. Debugging is also harder - vanilla stack traces become all but useless, for instance (though there are tools & techniques to make this easier).

Best strategy for creating a child container (or isolated scope) with Microsoft.Extensions.DependencyInjection

In my AspNetCore application, I process messages that arrive from a queue. In order to process a message, I need to resolve some services. Some of those services have a dependency on ITenantId, which I bind using information from the received message. To solve this, the processing of a message starts with the creation of a child container, which I then use to create an IServiceScope from which I resolve all the needed dependencies.
The messages can be processed in parallel, so the scopes must be isolated from each other.
I can see two ways of creating the child container, but I'm not sure which is best in terms of performance, memory churn, etc.:
Option A: Each time a message arrives, clone the IServiceCollection into a new ServiceCollection, and rebind ITenantId on the cloned instance.
Option B: When the program starts, create an immutable copy of the IServiceCollection (using ImmutableList<ServiceDescriptor> or ImmutableArray<ServiceDescriptor>). Each time a message arrives, replace ITenantId (resulting in a new instance of ImmutableList<ServiceDescriptor>) and call CreateScope() on the new immutable instance.
The thing I don't like about option A is that the whole collection of services needs to be cloned every time a message arrives. I'm not sure whether the immutable collections in option B handle this in a smarter way.
Both options cause the creation of a new container instance for each incoming message. Although this allows each message to run in a completely isolated bubble, it has severe implications for the performance and memory use of the application. Creating container instances is expensive, and resolving a registered instance for the first time (per container) causes the generation of expression trees, the compilation of delegates, and their JIT compilation. This can even cause memory leaks.
Besides, it also means that any registered singleton will have a lifetime equal to that of the scoped classes: state can no longer be shared across messages.
So instead, I propose Option 3:
Use only one container instance and don't call BuildServiceProvider more than once
Create an ITenantId implementation that allows setting the Id after instantiation
Register that implementation as Scoped
At the start of every new IServiceScope, resolve that implementation and set its Id.
This might look as follows:
class TenantIdImpl : ITenantId
{
    public Guid Id { get; set; } // settable
}

// Startup:
services.AddScoped<TenantIdImpl>();
services.AddScoped<ITenantId>(c => c.GetRequiredService<TenantIdImpl>());

// In the message pipeline:
using (var scope = provider.CreateScope())
{
    var tenant = scope.ServiceProvider.GetRequiredService<TenantIdImpl>();
    tenant.Id = messageEnvelope.TenantId;

    var handler =
        scope.ServiceProvider.GetRequiredService<IMessageHandler<TMessage>>();
    handler.Handle(messageEnvelope.Message);
}
This particular model, where you store state inside your object graph, is called the Closure Composition Model; I explain it in more detail on my blog.

Access Service Provider Context in HystrixCommand's RunFallbackAsync

I am working to add the Hystrix CircuitBreaker pattern to an existing ASP.NET Core microservice, using Steeltoe CircuitBreaker, while maintaining the existing logging functionality with minimal refactoring (or as little as I can hope for).
Currently, an incoming HTTP request goes through the following layers:
Controller -> Service -> DerivedProvider -> AbstractProvider (and out to downstream service)
With Hystrix, I would like it to be:
Controller -> Service -> HystrixCommand<> -> DerivedProvider (via HystrixCommand's ExecuteAsync) -> AbstractProvider
Lots of context is stored in the providers, which is passed down through the layers via constructors, and logging is then happening in the AbstractProvider using that context, regardless of the outgoing call's result. The AbstractProvider also supports a fair amount of custom logic, such as optional pre and post execution callbacks. The post callback is invoked when a non-success response message is returned. Needless to say, changing the layers drastically doesn't appear easy to me, with my current understanding.
After reviewing the Hystrix documentation and Steeltoe CircuitBreaker documentation I am unclear if I can maintain, and access, the provider and its context within the HystrixCommand<>.RunFallbackAsync().
Perhaps the answer relates to the lifecycle hooks you can override, like onFallbackStart(HystrixInvokable commandInstance)?
Ultimately, the goal is simply to make sure that any existing callback/logging functionality is not lost by wrapping these existing providers in a HystrixCommand. I am failing to understand how the HystrixCommand manages the providers and its context, and when/where you do or do not have access to them. Any suggestions or direction you can offer would be very much appreciated! Cheers!
Hystrix commands can be added to the service container or can be "new'd" (i.e. new MyHystrixCommand(...)), whichever makes the most sense for your situation.
Remember that Hystrix commands cannot be reused, i.e. once you create and execute a command, you must not try to reuse it.
Clearly, if you are new'ing the HystrixCommand, you can define whatever arguments you want in the constructor and supply it with the right arguments (i.e. state) it needs to execute.
If you are injecting it into a controller or another service, then before you use it you can initialize it with whatever state you want using properties, and then execute it.