What is the difference between flux cache(), replay() and publish() if creating a hot publisher? - spring-webflux

What is the difference between Flux cache(), replay() and publish() when creating a hot publisher? Which operator suits which use case best?
The following samples replay all 5 elements for the three different operators.
cache():
var flux = Flux.fromStream(Stream.of(1,2,3,4,5))
.delayElements(Duration.ofSeconds(1)).cache();
flux.doOnNext(v -> System.out.println("First: " + v))
.subscribe();
Thread.sleep(5000);
flux.doOnNext(v -> System.out.println("Second: " + v))
.subscribe();
Thread.sleep(10000);
replay():
var flux = Flux.fromStream(Stream.of(1,2,3,4,5))
.delayElements(Duration.ofSeconds(1)).replay();
flux.doOnNext(v -> System.out.println("First: " + v))
.subscribe();
Thread.sleep(5000);
flux.doOnNext(v -> System.out.println("Second: " + v))
.subscribe();
flux.connect();
Thread.sleep(10000);
publish():
var flux = Flux.fromStream(Stream.of(1,2,3,4,5))
.delayElements(Duration.ofSeconds(1)).publish();
flux.doOnNext(v -> System.out.println("First: " + v))
.subscribe();
Thread.sleep(5000);
flux.doOnNext(v -> System.out.println("Second: " + v))
.subscribe();
flux.connect();
Thread.sleep(10000);
One possible variation of the printed output:
First: 1
First: 2
First: 3
First: 4
Second: 1
Second: 2
Second: 3
Second: 4
First: 5
Second: 5

cache() is a convenience alias for .replay().autoConnect(1), i.e. it performs the connect() for you as soon as the first subscriber comes in.
But since it replays the whole history, the second subscriber still sees all elements.
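For example, spelling the alias out explicitly behaves the same way as the cache() sample above (a sketch, assuming the same imports as the samples):
var flux = Flux.fromStream(Stream.of(1, 2, 3, 4, 5))
        .delayElements(Duration.ofSeconds(1))
        .replay()          // ConnectableFlux that buffers and replays the full history
        .autoConnect(1);   // connects automatically once the first subscriber arrives
// subscribing twice as in the cache() sample prints the same kind of output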
From your replay() and publish() examples you might think there is no difference between the two, but that is only because you call connect() AFTER both subscribers have subscribed.
If you were to move the connect() call before the second subscriber, you would see that with publish() the late subscriber misses everything emitted before it subscribed (possibly all values), whereas replay() would still replay the whole history to it, despite it being late.
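For example, here is the publish() sample with connect() moved right after the first subscriber (a minimal sketch, assuming the same imports as the samples above):
var flux = Flux.fromStream(Stream.of(1, 2, 3, 4, 5))
        .delayElements(Duration.ofSeconds(1)).publish();
flux.doOnNext(v -> System.out.println("First: " + v))
        .subscribe();
flux.connect();   // the source starts emitting now, for the first subscriber only
Thread.sleep(5000);
// this late subscriber only receives elements emitted after this point
// (here at most the final element); with replay() it would first get the full history
flux.doOnNext(v -> System.out.println("Second: " + v))
        .subscribe();
Thread.sleep(10000);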

Related

Receive message from an Elm process

I'm toying around with Elm processes in order to learn more about how they work. As part of this, I'm trying to implement a timer.
I bumped into an obstacle, however: I can't find a way to access the result of a process' task in the rest of the code.
For a second, I hoped that if I make the task resolve with a Cmd, the Elm runtime would be kind enough to perform that effect for me, but that was a naive idea:
type Msg
    = Spawned Process.Id
    | TimeIsUp

init _ =
    ( Nothing
    , Task.perform Spawned (Process.spawn backgroundTask)
    )

backgroundTask : Task.Task y (Platform.Cmd.Cmd Msg)
backgroundTask =
    Process.sleep 1000
        -- pathetic attempt to send a Msg starts here
        |> Task.map
            ( always
                <| Task.perform (always TimeIsUp)
                <| Task.succeed ()
            )
        -- and ends here
        |> Task.map (Debug.log "Timer finished") -- logs "Timer finished: <internals>"

update msg state =
    case msg of
        Spawned id ->
            ( Just id, Cmd.none )

        TimeIsUp ->
            ( Nothing, Cmd.none )

view state =
    case state of
        Just id ->
            text "Running"

        Nothing ->
            text "Time is up"
The docs say
there is no public API for processes to communicate with each other.
I'm not sure if that implies that a process can't communicate with the rest of the app.
Is there any way to have update function receive a TimeIsUp once the process exits?
There is one way but it requires a port of hell:
make a fake HTTP request from the process,
then intercept it via JavaScript
and pass it back to Elm.
port ofHell : (() -> msg) -> Sub msg

subscriptions _ =
    ofHell (always TimeIsUp)

backgroundTask : Task.Task y (Http.Response String)
backgroundTask =
    Process.sleep 1000
        -- nasty hack starts here
        |> Task.andThen
            ( always
                <| Http.task
                    { method = "EVIL"
                    , headers = []
                    , url = ""
                    , body = Http.emptyBody
                    , resolver = Http.stringResolver (always Ok "")
                    , timeout = Nothing
                    }
            )
Under the hood, Http.task invokes new XMLHttpRequest(), so we can intercept it by redefining that constructor.
<script src="elm-app.js"></script>
<div id="hack"></div>
<script>
var app = Elm.Hack.init({
    node: document.getElementById('hack')
})

var OriginalXHR = window.XMLHttpRequest
window.XMLHttpRequest = function () {
    var req = new OriginalXHR()
    var originalOpen = req.open
    req.open = function (method) {
        if (method == 'EVIL') {
            app.ports.ofHell.send(null)
        }
        return originalOpen.apply(this, arguments)
    }
    return req
}
</script>
The solution is not production ready, but it does let you continue playing around with Elm processes.
Elm Processes aren't a fully fledged API at the moment. It's not possible to do what you want with the Process library on its own.
See the notes in the docs for Process.spawn:
Note: This creates a relatively restricted kind of Process because it cannot receive any messages. More flexibility for user-defined processes will come in a later release!
and the whole Future Plans section, e.g.:
Right now, this library is pretty sparse. For example, there is no public API for processes to communicate with each other.

Spring AMQP + RabbitMQ RPC How to execute/delegate different methods in a class

I spent the entire day trying to get this to work. I've been following tutorials 5 & 6 for Spring AMQP on the RabbitMQ tutorials page.
Is it possible for a single class to execute a different method based on some property? E.g. Routing key?
I've tried this so far to no avail:
@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = "requests"),
        exchange = @Exchange(value = "ourexchange"),
        key = "doFunc1")
)
public String func1(long id) {
    System.out.println("func1 " + id);
    return "func1 " + id;
}

@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = "requests"),
        exchange = @Exchange(value = "ourexchange"),
        key = "doFunc2")
)
public String func2(long id) {
    System.out.println("func2 " + id);
    return "func2 " + id;
}
In my client I did this:
public void send() {
    System.out.println(" [x] Get func1( account_id: " + accountId + ")");
    String response = (String) template.convertSendAndReceive
            ("ourexchange", "doFunc1", accountId);
    System.out.println(" [.] Got '" + response + "'");

    System.out.println(" [x] Get func2( account_id: " + accountId + ")");
    response = (String) template.convertSendAndReceive
            (exchange.getName(), "doFunc2", accountId);
    System.out.println(" [.] Got '" + response + "'");
}
I've got it "somewhat" to work but it appears to work in a round-robin fashion where the first method is called then the next one.
I've already considered the explanation here: Single Queue, multiple @RabbitListener but different services
But since the signatures of both methods look identical, I'm not sure it's possible.
Do note that I'm a beginner to the concepts of AMQP (like I've just read about the basics today). Am I doing this right or am I misunderstanding the usage?
The @RabbitListener infrastructure doesn't perform routing based on the routing key. You should use a different queue for each method and let RabbitMQ do the routing at the exchange level.
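For example, a minimal sketch of the two-queue approach (the per-method queue names requests.func1 and requests.func2 are illustrative, not from your code; the default direct exchange type routes on the exact key):
@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = "requests.func1"),    // its own queue
        exchange = @Exchange(value = "ourexchange"),
        key = "doFunc1"))
public String func1(long id) {
    return "func1 " + id;
}

@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = "requests.func2"),    // its own queue
        exchange = @Exchange(value = "ourexchange"),
        key = "doFunc2"))
public String func2(long id) {
    return "func2 " + id;
}
The client code stays the same: a message sent with routing key doFunc1 now lands only on the queue that func1 consumes.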
Alternatively, if you must use a single queue for some reason, you can pass the RECEIVED_ROUTING_KEY as a @Header parameter to your listener and delegate to different methods from the listener, as sketched below.
I've got it "somewhat" to work but it appears to work in a round-robin fashion where the first method is called then the next one.
That's because RabbitMQ sees 2 consumers and will round-robin the messages. You need to use 2 queues or a single method and do the routing therein.
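If you do stay with a single queue, a minimal sketch of the @Header-based delegation (the handle method name and the key array are illustrative):
@RabbitListener(bindings = @QueueBinding(
        value = @Queue(value = "requests"),
        exchange = @Exchange(value = "ourexchange"),
        key = {"doFunc1", "doFunc2"}))               // one queue bound with both keys
public String handle(long id,
        @Header(AmqpHeaders.RECEIVED_ROUTING_KEY) String routingKey) {
    // route inside the listener based on the key the message was published with;
    // func1/func2 are now plain methods without @RabbitListener
    return "doFunc1".equals(routingKey) ? func1(id) : func2(id);
}
AmqpHeaders.RECEIVED_ROUTING_KEY comes from org.springframework.amqp.support.AmqpHeaders and @Header from org.springframework.messaging.handler.annotation.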

ReactiveX collect elements processed before a failure

I'm using RxJava to create a background job synchronizing my db.
It connects to an external source and starts processing entries, mapping them and inserting them in the db.
When it ends I need the list of all the elements processed. I can get it when everything goes right, but how can I collect the elements processed so far if something fails during the flow?
final List<String> res = Observable.create(onSubscribe)
.buffer(4)
.flatMap(TestRx::doStuff)
.buffer(8)
.map(TestRx::calculateList)
.toList()
.toBlocking()
.single();
System.out.println("strings = " + res);
What I would like is a way such that if doStuff or calculateList throws an exception, the flow stops and returns the list with everything it processed up to the error.
List<String> res = Observable.create(onSubscribe)
.buffer(4)
.flatMap(TestRx::doStuff)
.onErrorResumeNext(Observable.empty()) // turn error into completion
.buffer(8)
.map(TestRx::calculateList)
.onErrorResumeNext(Observable.empty()) // turn error into completion
.toList()
.toBlocking()
.single();
System.out.println("strings = " + res);

RxJava timeout without emitting error?

Is there a variant of timeout that does not emit a Throwable?
I would like a complete event to be emitted instead.
You don't need to map errors with onErrorResumeNext. You can just provide a backup observable using:
timeout(long,TimeUnit,Observable)
It would be something like:
.timeout(500, TimeUnit.MILLISECONDS, Observable.empty())
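In context, a minimal sketch (RxJava 1.x; the source here never emits, so the timeout always fires):
Observable<String> data = Observable.never();
data.timeout(500, TimeUnit.MILLISECONDS, Observable.<String>empty())
    .subscribe(
        item -> System.out.println("next: " + item),
        error -> System.out.println("error: " + error),  // never called: the timeout switches to the empty fallback
        () -> System.out.println("completed"));          // prints after ~500 ms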
You can resume from an error with another Observable, for example:
Observable<String> data = ...
data.timeout(1, TimeUnit.SECONDS)
.onErrorResumeNext(Observable.empty())
.subscribe(...);
A simpler solution that does not use Observable.timeout (thus it does not generate an error with the risk of catching unwanted exceptions) might be to simply take until a timer completes:
Observable<String> data = ...
data.takeUntil(Observable.timer(1, TimeUnit.SECONDS))
.subscribe(...);
You can always use onErrorResumeNext, which receives the error and lets you emit whatever item you want:
/**
 * Here we can see how onErrorResumeNext works and emits an item in case
 * an error occurs in the pipeline and an exception is propagated.
 */
@Test
public void observableOnErrorResumeNext() {
    Subscription subscription = Observable.just(null)
            .map(Object::toString)
            .doOnError(failure -> System.out.println("Error:" + failure.getCause()))
            .retryWhen(errors -> errors.doOnNext(o -> count++)
                            .flatMap(t -> count > 3 ? Observable.error(t) : Observable.just(null)),
                    Schedulers.newThread())
            .onErrorResumeNext(t -> {
                System.out.println("Error after all retries:" + t.getCause());
                return Observable.just("I save the world from extinction!");
            })
            .subscribe(s -> System.out.println(s));
    new TestSubscriber((Observer) subscription).awaitTerminalEvent(500, TimeUnit.MILLISECONDS);
}

JMeter: Recallable BeanShell Assertion?

I'm performing API testing of basic CRUD functionality. For the creation of each record type in each thread group, I am using the same BeanShell Assertion template customized to each thread group.
import org.apache.jmeter.services.FileServer;

if (ResponseCode != null && ResponseCode.equals("200") == true) {
    SampleResult.setResponseOK();
}
else if (!ResponseCode.equals("200") == true) {
    Failure = true;
    FailureMessage = "Creation of a new {insert record type} record failed. Response code " + ResponseCode + "."; // displays in Results Tree
    print("Creation of a new {insert record type} record failed: Response code " + ResponseCode + "."); // goes to stdout
    log.warn("Creation of a new {insert record type} record failed: Response code " + ResponseCode); // this goes to the JMeter log file

    // Static elements or calculations
    part1 = "\n FAILED TO CREATE NEW {insert record type} RECORD via POST. The response code is: \"";
    part2 = "\". \n\n - If \'Non-HTTP response code - org.apache.jorphan.util.JMeterStopThreadException\' is received, verify the payload file still exists. \n - For response code = 409, \n\t a) check the payload for validity.\n\t b) verify the same {insert record type} name doesn't already exist in the {insert table name} table. If found, delete record and re-run the test. \n - For response code = 500, verify the database and its host server are reachable. \n";

    // Open file(s)
    FileOutputStream f = new FileOutputStream(FileServer.getFileServer().getBaseDir() + "\\error.log", true);
    //FileOutputStream f = new FileOutputStream("c:\\error.log", true);
    PrintStream p = new PrintStream(f);

    // Write data to file
    p.println(part1 + ResponseCode + part2);

    // Close file(s)
    p.close();
    f.close();
}
Is there a way to make this re-callable as opposed to having to repeat it in each thread group? Right now I'm up to 20 thread groups, thus 20 versions of this same assertion.
I've looked at multiple pages on this site and also at How to Use BeanShell: JMeter's Favorite Built-in Component, but I'm not finding a solution to this. Any feedback is appreciated.
If you place your Assertion (any Assertion) at the same level as the Thread Groups, i.e. directly under the Test Plan, it will be applied to every sampler in every Thread Group on each iteration, so you only need to define it once.
See How to Use JMeter Assertions in Three Easy Steps guide which clarifies assertions scope, cost and best practices.