What is the Flux equivalent of Mono.zipDelayError? - spring-webflux

I want to trigger the two calls below in parallel and combine the Mono and the Flux.
Mono<EmpAddressDetail> empAddDetail = getTimeoutDuration()
        .flatMapDelayError(duration -> timeoutWrappedEmpDetailFlux(Service.getemeEmpAddress(empno),
                duration, Exception.ErrorCode.TIMED_OUT), CONCURRENCY, PREFETCH);

Flux<Employee> empInfo = getTimeoutDuration()
        .flatMap(duration -> mapEmpTypes(empTypes)
                .map(empTypedata -> Tuples.of(duration, empTypedata)))
        .flatMapDelayError(durationEmpTuple -> getEmpdetails(empno, durationEmpTuple.getT1(), durationEmpTuple.getT2())
                .filter(empdetails -> requestTypes.contains(empdetails.getType()))
                .doOnNext(empdetails -> empdetails.setEmpId(empno)), CONCURRENCY, PREFETCH);
I tried Mono.zipDelayError, but I am looking for an alternative on Flux that does not require converting my Flux into a Mono. Is there any method on Flux that triggers the calls in parallel and merges the results?
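As far as I know, Flux has no zipDelayError; the closest static delay-error operators on Flux are mergeDelayError and mergeSequentialDelayError, and both need a common element type. One pattern that subscribes to both sources up front, without collecting the Flux into a Mono, is to cache() the Mono and zip it against the Flux. Below is a minimal sketch with hypothetical stand-in sources (the names and types are placeholders, not the services from the question):

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import reactor.util.function.Tuple2;

public class ZipMonoWithFlux {
    public static void main(String[] args) {
        // Hypothetical stand-ins for empAddDetail and empInfo.
        Mono<String> addressDetail = Mono.fromCallable(() -> "addr-42")
                .cache();                                  // run the underlying call only once
        Flux<String> employees = Flux.just("e1", "e2", "e3");

        // Flux.zip subscribes to both publishers immediately, so the two calls start together;
        // repeating the cached Mono pairs its single value with every element of the Flux.
        Flux<Tuple2<String, String>> combined = Flux.zip(employees, addressDetail.repeat());

        combined.doOnNext(t -> System.out.println(t.getT1() + " @ " + t.getT2()))
                .blockLast();                              // blocking only for the demo
    }
}

Note that Flux.zip itself does not delay errors; if delay-error semantics are essential and both publishers can be mapped to a common element type, Flux.mergeDelayError(prefetch, a, b) is the closest built-in alternative.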

Related

Spring Flux stage executes on a different thread when using reactive Redis template

When one of the intermediate stages uses the reactive Redis template, the thread in the subsequent stages changes.
Does it mean the reactive Redis template has its own scheduler, which affects every stage moving forward?
If the first point is true, should I be switching to my own scheduler for processing afterwards?
Code example
return Flux.fromArray(new String[]{"1", "2", "3"})
        .map(s -> s.toUpperCase())
        .flatMap(s -> {
            System.out.println(Thread.currentThread().getName());
            return redisTemplate.opsForValue().set(s, s);
        })
        .map(aBoolean -> {
            System.out.println(Thread.currentThread().getName());
            return aBoolean.booleanValue();
        })
        .log();
The println inside the flatMap outputs reactor-http-nio-3.
The println inside the following map executes on lettuce-nioEventLoop-5-1.
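Lettuce, the driver behind the reactive Redis template, completes its results on its own Netty event-loop threads, which is why everything downstream of the Redis call runs on lettuce-nioEventLoop-*. If the processing that follows is heavy or blocking, one option is to hop onto a scheduler you control with publishOn, which only affects the operators below it. A minimal sketch along the lines of the snippet above; the choice of Schedulers.boundedElastic() is just an assumption:

import org.springframework.data.redis.core.ReactiveRedisTemplate;
import reactor.core.publisher.Flux;
import reactor.core.scheduler.Schedulers;

// Minimal sketch: hop off the Redis driver's event loop before further processing.
Flux<Boolean> setAndLog(ReactiveRedisTemplate<String, String> redisTemplate) {
    return Flux.fromArray(new String[]{"1", "2", "3"})
            .map(String::toUpperCase)
            .flatMap(s -> redisTemplate.opsForValue().set(s, s)) // completes on a lettuce-nioEventLoop thread
            .publishOn(Schedulers.boundedElastic())              // operators below run on this scheduler
            .map(ok -> {
                System.out.println(Thread.currentThread().getName()); // now prints boundedElastic-*
                return ok;
            })
            .log();
}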

How to integrate Akka.Net Streams with ASP.NET Core or Giraffe

When I use Giraffe, or ASP.NET Core in general, I can create an actor system, add it as a service, and then get it through the request handler to select any actor and ask/tell it a message.
Whether using Cluster.Sharding or a normal user actor, I know there will be a single instance of the actor in the whole system processing multiple messages.
How can I do the same communication with Streams? They don't seem to be referenced in the router or the actor system the way actor paths are: Actor References, Paths and Addresses.
Should this be done differently?
Copying from the IO section, I could materialize one graph to handle each request, but in general I communicate with "singletons" like Domain-Driven Design aggregate roots to handle the domain logic (that's why the sharding module). I'm not sure how to make singleton sinks that can be used in the newly materialized graph in the request handler, as there must be only one sink for all the requests.
There are many ways to integrate Akka streams with external systems. The one that makes for an easy recipient here would be Source.queue (somewhat similar to System.Threading.Channels, and predating them). You can materialize your stream at initialization time and then register the queue endpoints in Giraffe DI - this way you don't pay the cost of initializing the same stream on every request:
open Akka.Streams
open Akkling
open Akkling.Streams
open FSharp.Control.Tasks.Builders
let run () = task {
    use sys = System.create "sys" <| Configuration.defaultConfig()
    use mat = sys.Materializer()
    // construct a stream with an async queue on both ends and a buffer for 10 elements
    let sender, receiver =
        Source.queue OverflowStrategy.Backpressure 10
        |> Source.map (fun x -> x * x)
        |> Source.toMat (Sink.queue) Keep.both
        |> Graph.run mat
    // send data to the queue - quite often the result can simply be ignored
    match! sender.OfferAsync 2 with
    | :? QueueOfferResult.Enqueued -> ()    // successful
    | :? QueueOfferResult.Dropped -> ()     // doesn't happen with OverflowStrategy.Backpressure
    | :? QueueOfferResult.QueueClosed -> () // the queue has already been closed
    | :? QueueOfferResult.Failure as f -> eprintfn "Unexpected failure: %O" f.Cause
    // try to receive data from the queue
    match! receiver.AsyncPull() with
    | Some data -> printfn "Received: %i" data
    | None -> printfn "Stream has been prematurely closed"
    // asynchronously close the queue
    sender.Complete()
    do! sender.WatchCompletionAsync()
}

What does it mean that an object handle has so many TimerQueueTimer references

I have an app in which I suspect a memory leak. Not only in the heap, but it seems to me the whole working set is growing for each request that is made to my app. I am trying to debug it according to these instructions but I am having a hard time interpreting what I see. I am using the dotnet-dump tool to analyze a dump.
All in all I have 618 DocumentClient instances if I interpret it correctly. Of course that will add up to a lot of data in strings, byte arrays etc.
Statistics:
MT Count TotalSize Class Name
00007f853c355110 618 187872 Microsoft.Azure.Cosmos.DocumentClient
Here is a snippet of a single reference taken from the method table of the document client. See the pastebin for full reference. It continues for 1200+ lines with mostly TimerQueueTimer references.
00007F85AF2F10D8 (strong handle)
-> 00007F84C80FBAD8 System.Object[]
-> 00007F84C80FBB00 System.Threading.ThreadLocal`1+LinkedSlotVolatile[[System.Collections.Concurrent.ConcurrentBag`1+WorkStealingQueue[[System.IDisposable, System.Private.CoreLib]], System.Collections.Concurrent]][]
-> 00007F84C80FBB40 System.Threading.ThreadLocal`1+LinkedSlot[[System.Collections.Concurrent.ConcurrentBag`1+WorkStealingQueue[[System.IDisposable, System.Private.CoreLib]], System.Collections.Concurrent]]
-> 00007F84C80FBB70 System.Collections.Concurrent.ConcurrentBag`1+WorkStealingQueue[[System.IDisposable, System.Private.CoreLib]]
-> 00007F84C80FBBB0 System.IDisposable[]
-> 00007F84C80FBA90 System.Diagnostics.DiagnosticListener+DiagnosticSubscription
-> 00007F84C80FAF30 Microsoft.ApplicationInsights.AspNetCore.DiagnosticListeners.HostingDiagnosticListener
-> 00007F84C80EB450 Microsoft.ApplicationInsights.Extensibility.TelemetryConfiguration
-> 00007F84C80D5688 Microsoft.ApplicationInsights.Extensibility.Implementation.ApplicationId.ApplicationInsightsApplicationIdProvider
-> 00007F84C80D5A60 Microsoft.ApplicationInsights.Extensibility.Implementation.ApplicationId.ProfileServiceWrapper
-> 00007F84C80D5A88 System.Net.Http.HttpClient
-> 00007F84C80D5AD0 System.Net.Http.HttpClientHandler
-> 00007F84C80D5B00 System.Net.Http.SocketsHttpHandler
-> 00007F84D80D1018 System.Net.Http.RedirectHandler
-> 00007F84D80D1000 System.Net.Http.HttpConnectionHandler
-> 00007F84D80D0D38 System.Net.Http.HttpConnectionPoolManager
-> 00007F84D80D0F70 System.Threading.Timer
-> 00007F84D80D0FE8 System.Threading.TimerHolder
-> 00007F84D80D0F88 System.Threading.TimerQueueTimer
-> 00007F84C80533A0 System.Threading.TimerQueue
-> 00007F84D910F3C0 System.Threading.TimerQueueTimer
-> 00007F84D910EE58 System.Threading.TimerQueueTimer
-> 00007F84D910A680 System.Threading.TimerQueueTimer
https://pastebin.com/V8CNQjR7
Do I have an Application Insights or Cosmos memory leak? Why are there so many TimerQueueTimer references?
await Task.Delay creates a new TimerQueueTimer on every call.
A lot of TimerQueueTimers is a sign that someone is using await Task.Delay() in a loop instead of a simple new Timer().
-> Microsoft.Azure.Cosmos.Routing.GlobalEndpointManager+<StartRefreshLocationTimer>d__25
-> Microsoft.Azure.Cosmos.Routing.GlobalEndpointManager
It looks like the GlobalEndpointManager of Microsoft.Azure.Cosmos uses await Task.Delay every time an exception is thrown in the StartRefreshLocationTimer method of the GlobalEndpointManager.cs class.
You can try a few things here:
1) Check which exception is thrown and how to avoid it.
My guess is that this should help log the exception:
DefaultTrace.TraceSource.Listeners.Add(new System.Diagnostics.ConsoleTraceListener())
(check example)
2) Make sure ShouldRefreshEndpoints returns false, if that is OK for your app :)

How can I model a parallel flow that branches back into a regular flow?

I have a BPMN process that should handle 2 alternative scenarios:
TaskA -> TaskB -> Last Task
OR
TaskA -> TaskX -> (TaskY and TaskB in parallel) -> Last Task
I can't find the proper way to join the parallel tasks.
I have designed a solution, but it doesn't look right to me:
for the first scenario, the parallel gateway looks like a fork rather than a join.
How should I design this case (without having to duplicate tasks)?
I think the following diagram does just what you want:
I use an inclusive gateway that will always take the transition going to "Task B" and, based on a condition, also execute "Task Y" in parallel.
The same condition is also used to include or skip "Task X".
I created a runnable version of this process for Bonita BPM and it seems to behave as you expect.

Why does an ets table survive ct:init_per_testcase but not init_per_suite?

I have a common test suite that attempts to create an ets table for use in all suites and all test cases. It looks like so:
-module(an_example_SUITE).
-include_lib("common_test/include/ct.hrl").
-compile(export_all).

all() -> [ets_tests].

init_per_suite(Config) ->
    TabId = ets:new(conns, [set]),
    ets:insert(TabId, {foo, 2131}),
    [{table, TabId} | Config].

end_per_suite(Config) ->
    ets:delete(?config(table, Config)).

ets_tests(Config) ->
    TabId = ?config(table, Config),
    [{foo, 2131}] = ets:lookup(TabId, foo).
The ets_tests function fails with a badarg. Creating/destroying the ets table per testcase, however, works; that version looks like so:
-module(an_example_SUITE).
-include_lib("common_test/include/ct.hrl").
-compile(export_all).

all() -> [ets_tests].

init_per_testcase(_TestCase, Config) ->
    TabId = ets:new(conns, [set]),
    ets:insert(TabId, {foo, 2131}),
    [{table, TabId} | Config].

end_per_testcase(_TestCase, Config) ->
    ets:delete(?config(table, Config)).

ets_tests(Config) ->
    TabId = ?config(table, Config),
    [{foo, 2131}] = ets:lookup(TabId, foo).
Running this, I find that it functions beautifully.
I'm confused by this behavior and unable to determine, from the docs, why this would happen. Questions:
Why does this happen?
How can I have an ets table that is shared between the per-suite and per-testcase functions?
As was already mentioned in the answer by Pascal and as discussed in the User Guide, only init_per_testcase and end_per_testcase run in the same process as the testcase. Since ETS tables are bound to an owner process, your only way to have an ETS table persist for a whole suite or group is to give it away or to define a heir process.
You can easily spawn a process in your init_per_suite or init_per_group functions, set it as heir for the ETS table, and pass its pid along in the config.
To clean up, all you need to do is kill this process in your end_per_suite or end_per_group functions.
-module(an_example_SUITE).
-include_lib("common_test/include/ct.hrl").
-compile(export_all).

all() -> [ets_tests].

ets_owner() ->
    receive
        stop -> exit(normal);
        _Any -> ets_owner()
    end.

init_per_suite(Config) ->
    Pid = spawn(fun ets_owner/0),
    TabId = ets:new(conns, [set, protected, {heir, Pid, []}]),
    ets:insert(TabId, {foo, 2131}),
    [{table, TabId}, {table_owner, Pid} | Config].

end_per_suite(Config) ->
    ?config(table_owner, Config) ! stop.

ets_tests(Config) ->
    TabId = ?config(table, Config),
    [{foo, 2131}] = ets:lookup(TabId, foo).
You also need to make sure you can still access your table from the testcase process, by making it either protected or public.
An ets table is attached to a process and destroyed as soon as that process ends, unless you use the give_away function (which I fear is not feasible in this case).
As stated in the common_test doc, each test case and the init_per_suite and end_per_suite functions run in separate processes, so the ets table is destroyed as soon as you leave the init_per_suite function.
From the common_test doc:
init_per_suite and end_per_suite will execute on dedicated Erlang
processes, just like the test cases do. The result of these functions
is however not included in the test run statistics of successful,
failed and skipped cases.
From the ets doc:
The default owner is the process that created the table. Table
ownership can be transferred at process termination by using the heir
option or explicitly by calling give_away/3.