How to switch Flux concat back to main thread? - spring-webflux

When using Flux.concat to concatenate multiple publishers, if one of the publishers runs on another thread (via publishOn), then all downstream values are published on that thread instead of the main thread. Is there a way to publish all the values on the main thread?
@Test
public void test() {
    Flux.concat(
            Flux.just(3),
            Flux.just(1).publishOn(Schedulers.newParallel("hai")),
            Flux.just(2))
        .collectList()
        .log()
        .subscribe();
}
You can see that onNext runs on the "hai" thread instead of main:
####<2022-04-02 12:09:57.391> <DEBUG> <reactor.util.Loggers> <main> <> - <Using Slf4j logging framework>
####<2022-04-02 12:09:59.666> <INFO> <reactor.Mono.CollectList.1> <main> <> - <| onSubscribe([Fuseable] MonoCollectList.MonoCollectListSubscriber)>
####<2022-04-02 12:09:59.671> <INFO> <reactor.Mono.CollectList.1> <main> <> - <| request(unbounded)>
####<2022-04-02 12:09:59.678> <INFO> <reactor.Mono.CollectList.1> <hai-1> <> - <| onNext([3, 1, 2])>
####<2022-04-02 12:09:59.688> <INFO> <reactor.Mono.CollectList.1> <hai-1> <> - <| onComplete()>
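As a hedged sketch (illustrative only, not taken from the thread): Reactor cannot publish back onto the main thread itself, because main is not a Scheduler. You can either pin everything downstream of concat to one well-known scheduler with publishOn, or block on the calling thread and consume the result there:
// Sketch only: "main" is not a Scheduler, so Reactor cannot publish back onto it without blocking.
@Test
public void test() {
    // Option 1: pin all downstream signals to one well-known scheduler.
    Flux.concat(
            Flux.just(3),
            Flux.just(1).publishOn(Schedulers.newParallel("hai")),
            Flux.just(2))
        .collectList()
        .publishOn(Schedulers.single())   // onNext/onComplete now arrive on the single scheduler
        .log()
        .subscribe();

    // Option 2: block the calling (main) thread and consume the value there.
    List<Integer> values = Flux.concat(
            Flux.just(3),
            Flux.just(1).publishOn(Schedulers.newParallel("hai")),
            Flux.just(2))
        .collectList()
        .block();                         // the list is available on main here
    System.out.println(values);
}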

Related

RxJava Grouped Flowable with Conflation

I'm trying to create a flow for a fast producer with a slow consumer for FX (foreign exchange) prices. The basic idea is that prices coming from the source should be sent to the consumer as fast as possible.
The following is important for the working of this flow:
While the consumer is busy submitting prices, new prices must be received from the price source (in other words we don't want to slow down the producer at any stage).
When the consumer is finished processing its current rate, it is then available to process another rate from the source.
The consumer is only ever interested in the latest published price for a ccy pair - in other words, we want prices to be conflated if the consumer does not keep up.
The consumer can (and should) process submissions in parallel, as long as these submissions are of a different ccy pair. For example, EUR/USD can be submitted in parallel with GBP/USD, but while EUR/USD is busy, other EUR/USD rates must be conflated.
Here is my current attempt:
public class RxTest {
    private static final Logger LOG = LoggerFactory.getLogger(RxTest.class);

    CcyPair EUR_USD = new CcyPair("EUR/USD");
    CcyPair GBP_USD = new CcyPair("GBP/USD");
    CcyPair USD_JPY = new CcyPair("USD/JPY");

    List<CcyPair> ALL_CCY_PAIRS = Empty.<CcyPair>vector()
            .plus(EUR_USD)
            .plus(GBP_USD)
            .plus(USD_JPY);

    @Test
    void testMyFlow() throws Exception {
        AtomicInteger rateGenerater = new AtomicInteger(0);
        Flowable.<Rate>generate(emitter -> {
                    // fast producer: a random ccy pair every 0-100 ms
                    MILLISECONDS.sleep(ThreadLocalRandom.current().nextInt(100));
                    final CcyPair ccyPair = ALL_CCY_PAIRS.get(ThreadLocalRandom.current().nextInt(3));
                    final String rate = String.valueOf(rateGenerater.incrementAndGet());
                    emitter.onNext(new Rate(ccyPair, rate));
                })
                .subscribeOn(Schedulers.newThread())
                .doOnNext(rate -> LOG.info("Process: {}", rate))
                .groupBy(Rate::ccyPair)
                .map(Flowable::publish)
                .doOnNext(ConnectableFlowable::connect)
                .flatMap(grp -> grp.map(rate -> rate))
                .onBackpressureLatest()
                .observeOn(Schedulers.io())
                .subscribe(onNext -> {
                    // slow consumer: 500 ms per rate
                    LOG.info("Long running process: {}", onNext);
                    MILLISECONDS.sleep(500);
                    LOG.info("Long running process complete: {}", onNext);
                });
        MILLISECONDS.sleep(5000);
    }

    record CcyPair(String name) {
        public String toString() {
            return name;
        }
    }

    record Rate(CcyPair ccyPair, String rate) {
        public String toString() {
            return ccyPair + "->" + rate;
        }
    }
}
Which gives me this output:
09:27:05,743 INFO [RxTest] [RxNewThreadScheduler-1] - Process: EUR/USD->1
09:27:05,764 INFO [RxTest] [RxCachedThreadScheduler-1] - Long running process: EUR/USD->1
09:27:05,805 INFO [RxTest] [RxNewThreadScheduler-1] - Process: USD/JPY->2
...
09:27:06,127 INFO [RxTest] [RxNewThreadScheduler-1] - Process: GBP/USD->9
09:27:06,165 INFO [RxTest] [RxNewThreadScheduler-1] - Process: EUR/USD->10
09:27:06,214 INFO [RxTest] [RxNewThreadScheduler-1] - Process: USD/JPY->11
09:27:06,265 INFO [RxTest] [RxCachedThreadScheduler-1] - Long running process complete: EUR/USD->1
09:27:06,265 INFO [RxTest] [RxCachedThreadScheduler-1] - Long running process: USD/JPY->2
09:27:06,302 INFO [RxTest] [RxNewThreadScheduler-1] - Process: USD/JPY->12
09:27:06,315 INFO [RxTest] [RxNewThreadScheduler-1] - Process: EUR/USD->13
...
09:27:06,672 INFO [RxTest] [RxNewThreadScheduler-1] - Process: EUR/USD->23
09:27:06,695 INFO [RxTest] [RxNewThreadScheduler-1] - Process: GBP/USD->24
09:27:06,758 INFO [RxTest] [RxNewThreadScheduler-1] - Process: EUR/USD->25
09:27:06,773 INFO [RxTest] [RxCachedThreadScheduler-1] - Long running process complete: USD/JPY->2
09:27:06,773 INFO [RxTest] [RxCachedThreadScheduler-1] - Long running process: EUR/USD->3
There are a number of problems here:
The consumer is not getting the "latest" rate per ccy pair, but the next one. This is obviously because there is an internal buffer and the backpressure has not kicked in yet. I don't want to wait for this backpressure to kick in, I simply want whatever is the latest emission to go to the consumer next.
The consumer is not processing in parallel - while EUR/USD is processing, for example, the other ccy pairs are being buffered.
Some notes/thoughts:
I'm very new to RxJava and am struggling to find common patterns/idioms for tackling these kinds of problems, so please be patient :)
I'm not at all sure that backpressure is the right way to achieve this. RxJava has many interesting operators like window(), cache(), takeLast() etc., but none of them seem to work exactly like I want. I would have liked an operator like "conflate" or such - I'm sure there is something that can achieve this, just not sure how.
I struggle to understand how a slow consumer can tell the flow that "I'm busy, just conflate everything until I'm done". Can this only be done via scheduling on threads? That seems worrisome because what if the consumer is asynchronous - how will it tell the producer to hold on while it's busy?
You can use groupBy to create a subflow for each currency pair, but then you need to put those subflows onto their own threads. RxJava buffers in many places, so skipping items while some part of the pipeline is busy can be difficult.
source
    .groupBy(Rate::ccyPair)
    .flatMap(group -> {
        return group
                .onBackpressureLatest()
                // delay(0) hops each group onto the computation scheduler
                .delay(0, TimeUnit.MILLISECONDS)
                .doOnNext(rate -> {
                    LOG.info("Long running process: {}", rate);
                    MILLISECONDS.sleep(500);
                    LOG.info("Long running process complete: {}", rate);
                });
    }, false, 128, 1)
    .subscribe();
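For completeness, a possible end-to-end wiring of that idea against the generator from the question; this is a sketch under assumptions, not tested code: rateSource stands in for the Flowable.generate(...) producer above, and the delay(0) hop is swapped for observeOn with a prefetch of 1, which conflates within each pair while letting different pairs run in parallel.
// Sketch only: rateSource, LOG and MILLISECONDS are borrowed from the question's test.
rateSource
    .subscribeOn(Schedulers.newThread())            // keep the producer on its own thread
    .groupBy(Rate::ccyPair)                         // one subflow per currency pair
    .flatMap(group -> group
            .onBackpressureLatest()                 // conflate while this pair's consumer is busy
            .observeOn(Schedulers.io(), false, 1)   // own worker per pair, request one rate at a time
            .doOnNext(rate -> {
                LOG.info("Long running process: {}", rate);
                MILLISECONDS.sleep(500);            // simulated slow consumer
                LOG.info("Long running process complete: {}", rate);
            }),
        false, Integer.MAX_VALUE)                   // subscribe to every group concurrently
    .subscribe();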

How to use delayElements() with Flux merge?

I am following a tutorial and I believe my code is the same as the instructor's, but I don't understand why delayElements() is not working.
Here is the caller method:
public static void main(String[] args) {
    FluxAndMonoGeneratorService fluxAndMonoGeneratorService = new FluxAndMonoGeneratorService();
    fluxAndMonoGeneratorService.explore_merge()
            .doOnComplete(() -> System.out.println("Completed !"))
            .onErrorReturn("asdasd")
            .subscribe(System.out::println);
}
If I write the method without delayElements() as:
public Flux<String> explore_merge() {
    Flux<String> abcFlux = Flux.just("A", "B", "C");
    Flux<String> defFlux = Flux.just("D", "E", "F");
    return Flux.merge(abcFlux, defFlux);
}
Then the output in the console is (as expected):
00:53:19.443 [main] DEBUG reactor.util.Loggers$LoggerFactory - Using Slf4j logging framework
A
B
C
D
E
F
Completed !
BUILD SUCCESSFUL in 1s
But I want to use delayElements() to test merge() method as:
public Flux<String> explore_merge() {
    Flux<String> abcFlux = Flux.just("A", "B", "C").delayElements(Duration.ofMillis(151));
    Flux<String> defFlux = Flux.just("D", "E", "F").delayElements(Duration.ofMillis(100));
    return Flux.merge(abcFlux, defFlux);
}
Then nothing happens, neither onComplete nor onErrorReturn, and the output is just this:
0:55:22: Executing ':reactive-programming-using-reactor:FluxAndMonoGeneratorService.main()'...
> Task :reactive-programming-using-reactor:generateLombokConfig UP-TO-DATE
> Task :reactive-programming-using-reactor:compileJava
> Task :reactive-programming-using-reactor:processResources NO-SOURCE
> Task :reactive-programming-using-reactor:classes
> Task :reactive-programming-using-reactor:FluxAndMonoGeneratorService.main()
00:55:23.715 [main] DEBUG reactor.util.Loggers$LoggerFactory - Using Slf4j logging framework
BUILD SUCCESSFUL in 1s
What is the reason for this? (I was expecting at least an onError, but there is nothing...)
Note: mergeWith() is also not working with delayElements().
subscribe is not a blocking operation, and delayElements schedules emissions on another thread (by default the parallel Scheduler). As a result, your program exits before the elements are emitted. Here is a test:
@Test
void mergeWithDelayElements() {
    Flux<String> abcFlux = Flux.just("A", "B", "C").delayElements(Duration.ofMillis(151));
    Flux<String> defFlux = Flux.just("D", "E", "F").delayElements(Duration.ofMillis(100));

    StepVerifier.create(Flux.merge(abcFlux, defFlux))
            .expectNext("D", "A", "E", "B", "F", "C")
            .verifyComplete();
}
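If you want to keep the plain main() example instead of a test, a minimal sketch is to block the calling thread until the merged Flux completes; blocking is fine in a small demo's main(), but should be avoided inside reactive code paths.
// Sketch: block main() until the merged Flux completes, so the JVM does not
// exit before the delayed elements are emitted.
public static void main(String[] args) {
    FluxAndMonoGeneratorService fluxAndMonoGeneratorService = new FluxAndMonoGeneratorService();
    fluxAndMonoGeneratorService.explore_merge()
            .doOnNext(System.out::println)
            .doOnComplete(() -> System.out.println("Completed !"))
            .onErrorReturn("asdasd")
            .blockLast();   // waits for onComplete (or error) on the calling thread
}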

Receive message from an Elm process

I'm toying around with Elm processes in order to learn more about how they work. In parts of this, I'm trying to implement a timer.
I bumped into an obstacle, however: I can't find a way to access the result of a process' task in the rest of the code.
For a second, I hoped that if I make the task resolve with a Cmd, the Elm runtime would be kind enough to perform that effect for me, but that was a naive idea:
type Msg
    = Spawned Process.Id
    | TimeIsUp

init _ =
    ( Nothing
    , Task.perform Spawned (Process.spawn backgroundTask)
    )

backgroundTask : Task.Task y (Platform.Cmd.Cmd Msg)
backgroundTask =
    Process.sleep 1000
        -- pathetic attempt to send a Msg starts here
        |> Task.map
            (always
                <| Task.perform (always TimeIsUp)
                <| Task.succeed ()
            )
        -- and ends here
        |> Task.map (Debug.log "Timer finished") -- logs "Timer finished: <internals>"

update msg state =
    case msg of
        Spawned id ->
            ( Just id, Cmd.none )

        TimeIsUp ->
            ( Nothing, Cmd.none )

view state =
    case state of
        Just id ->
            text "Running"

        Nothing ->
            text "Time is up"
The docs say
there is no public API for processes to communicate with each other.
I'm not sure if that implies that a process can't communicate with the rest of the app.
Is there any way to have update function receive a TimeIsUp once the process exits?
There is one way but it requires a port of hell:
make a fake HTTP request from the process,
then intercept it via JavaScript
and pass it back to Elm.
port ofHell : (() -> msg) -> Sub msg

subscriptions _ =
    ofHell (always TimeIsUp)

backgroundTask : Task.Task y (Http.Response String)
backgroundTask =
    Process.sleep 1000
        -- nasty hack starts here
        |> Task.andThen
            (always
                <| Http.task
                    { method = "EVIL"
                    , headers = []
                    , url = ""
                    , body = Http.emptyBody
                    , resolver = Http.stringResolver (always Ok "")
                    , timeout = Nothing
                    }
            )
Under the hood, Http.task invokes new XMLHttpRequest(), so we can intercept it by redefining that constructor.
<script src="elm-app.js"></script>
<div id="hack"></div>
<script>
  var app = Elm.Hack.init({
    node: document.getElementById('hack')
  })

  var OrigXHR = window.XMLHttpRequest
  window.XMLHttpRequest = function () {
    var req = new OrigXHR()
    var origOpen = req.open
    req.open = function (method) {
      if (method == 'EVIL') {
        app.ports.ofHell.send(null)
      }
      return origOpen.apply(this, arguments)
    }
    return req
  }
</script>
The solution is not production ready, but it does let you continue playing around with Elm processes.
Elm Processes aren't a fully fledged API at the moment. It's not possible to do what you want with the Process library on its own.
See the notes in the docs for Process.spawn:
Note: This creates a relatively restricted kind of Process because it cannot receive any messages. More flexibility for user-defined processes will come in a later release!
and the whole Future Plans section, eg.:
Right now, this library is pretty sparse. For example, there is no public API for processes to communicate with each other.

Fail/Error in cordapp-template FlowTest progressTracker hasn't been started

When I try to run this test for the flows in the cordapp-template:
@Test
fun flowRecordTransactionInBothVaults() {
    val flow = IOUFlow.Initiator(1, b.info.legalIdentity)
    val future = a.services.startFlow(flow).resultFuture
    net.runNetwork()
    val signedTx = future.getOrThrow()
    for (node in listOf(a, b)) {
        assertEquals(signedTx, node.storage.validatedTransactions.getTransaction(signedTx.id))
    }
}
I get this error: Progress tracker hasn't been started
[INFO ] 15:14:52.144 [Mock network] AbstractNetworkMapService.processRegistrationRequest - Added node CN=Mock Company 3,O=R3,L=New York,C=US to network map
[WARN ] 15:14:52.172 [Mock network] [a11087fc-381d-4547-8736-5265c334c71f].maybeWireUpProgressTracking - ProgressTracker has not been started
[WARN ] 15:14:52.191 [Mock network] [0dcfa270-b1af-40e9-92f1-411334cf0c73].run - Terminated by unexpected exceptionkotlin.NotImplementedError: An operation is not implemented: not implemented
at com.template.flow.IOUFlow$Acceptor$call$1.checkTransaction(TemplateFlow.kt:225) ~[main/:?]
at net.corda.flows.SignTransactionFlow.call(CollectSignaturesFlow.kt:201) ~[corda-core-0.13.0.jar:?]
at net.corda.flows.SignTransactionFlow.call(CollectSignaturesFlow.kt:177) ~[corda-core-0.13.0.jar:?]
[WARN ] 15:14:52.202 [Mock network] [a11087fc-381d-4547-8736-5265c334c71f].run - Terminated by unexpected exceptionnet.corda.core.flows.FlowSessionException: Counterparty flow on CN=Mock Company 3,O=R3,L=New York,C=US had an internal error and has terminated
at net.corda.node.services.statemachine.FlowStateMachineImpl.erroredEnd(FlowStateMachineImpl.kt:382) ~[corda-node
[WARN ] 15:14:52.203 [Mock network] [a11087fc-381d-4547-8736-5265c334c71f].uncaughtException - Caught exception from flowjava.lang.IllegalStateException: Progress tracker has already ended
at net.corda.core.utilities.ProgressTracker.endWithError
The actual code is much longer, but I think these are the relevant parts. Is this a known issue? How can I fix it?
When implementing SignTransactionFlow, you must provide a real implementation of checkTransaction. The stack trace shows that it currently throws kotlin.NotImplementedError, i.e. it is still the TODO() placeholder, which is what terminates the counterparty flow.

How to read from console in "not main" process

How can I read from stdin in a new process? I can read a line and print it only in the main process. Should I pass a console device (or something similar) to get_line, or is it not possible?
My code:
-module(inputTest).
-compile([export_all]).

run() ->
    Message = io:get_line("[New process] Put sth: "),
    io:format("[New process] data: ~p~n", [Message]).

main() ->
    Message = io:get_line("[Main] Put sth: "),
    io:format("[Main] data: ~p~n", [Message]),
    spawn(?MODULE, run, []).
The problem is that your main/0 process spawns run/0 and then immediately exits. You should make main/0 wait until run/0 is finished. Here's how you can do that:
-module(inputTest).
-compile([export_all]).

run(Parent) ->
    Message = io:get_line("[New process] Put sth: "),
    io:format("[New process] data: ~p~n", [Message]),
    Parent ! {self(), ok}.

main() ->
    Message = io:get_line("[Main] Put sth: "),
    io:format("[Main] data: ~p~n", [Message]),
    Pid = spawn(?MODULE, run, [self()]),
    receive
        {Pid, _} ->
            ok
    end.
After spawning run/1 — and note that we changed it to pass our process ID to it — we wait to receive a message from it. In run/1 once we print to the output, we send the parent a message to let it know we're done. Running this in an erl shell produces the following:
1> inputTest:main().
[Main] Put sth: main
[Main] data: "main\n"
[New process] Put sth: run/1
[New process] data: "run/1\n"
ok