Retrieve all flux elements in StepVerifier - spring-webflux

I am working on testing a Flux. I don't know exactly how many elements the Flux has. Initially I tried StepVerifier and ran into issues because I do not know the elements. Later I referred to this question and tried the same approach, but I am getting the error below:
java.lang.AssertionError: expectation "expectComplete" failed (expected: onComplete(); actual: onNext
My understanding is that my code is expecting a complete signal, but the flux has some more elements left (so it gives onNext() instead of onComplete()). Please help me understand where I am going wrong. Below is my code:
StepVerifier.create(flux)
        .recordWith(ArrayList::new)
        .consumeRecordedWith(elements -> { assertThat(elements.size()).isGreaterThan(0); })
        .verifyComplete();

You're not actually consuming your Flux, you're just setting up what happens when it's consumed. Your verifyComplete() call then fails, understandably, because the Flux hasn't been consumed at all, and it's thus not complete!
You need to add a thenConsumeWhile() call to actually consume it.
If you really need to use AssertJ as you do above, then you can do:
StepVerifier.create(flux)
        .recordWith(ArrayList::new)
        .thenConsumeWhile(x -> true)
        .consumeRecordedWith(elements -> {
            assertThat(elements.isEmpty()).isFalse();
        })
        .verifyComplete();
However, there's no need for AssertJ here - the reactor test package is enough, and adding additional testing frameworks makes the testing code much less clear IMHO. So if you're not wedded to AssertJ, just do:
StepVerifier.create(flux)
        .recordWith(ArrayList::new)
        .thenConsumeWhile(x -> true)
        .expectRecordedMatches(elements -> !elements.isEmpty())
        .verifyComplete();
Note that in real-world use, you'd probably want to adjust the predicate in thenConsumeWhile so that it runs a check against each element in turn, too. I've also adjusted the above code to use isEmpty() rather than checking if size()>0, as it's semantically clearer while achieving the same purpose.
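For illustration, here is a minimal sketch of such a per-element check. It is not taken from the answer above; the test class name, the Flux contents, and the non-empty-string predicate are placeholder assumptions.

import java.util.ArrayList;

import org.junit.jupiter.api.Test;
import reactor.core.publisher.Flux;
import reactor.test.StepVerifier;

class FluxElementCheckTest {

    @Test
    void everyElementSatisfiesTheCheck() {
        // Placeholder data; in real code this would be the Flux under test.
        Flux<String> flux = Flux.just("a", "b", "c");

        StepVerifier.create(flux)
                .recordWith(ArrayList::new)
                // The predicate is evaluated against each element in turn,
                // so it can carry a real per-element assertion.
                .thenConsumeWhile(s -> !s.isEmpty())
                .expectRecordedMatches(elements -> !elements.isEmpty())
                .verifyComplete();
    }
}

If an element ever fails the predicate, it is left unconsumed, and the verifyComplete() step then reports the unexpected onNext, so the check both documents and enforces the invariant.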

From the same issue, with something new: I had so many entries in my flux that they couldn't fit into memory (yes, those test case fixtures were designed that way)...
So buffering everything into a List wasn't an option.
And I tried different API methods on StepVerifier and found the following to work:
StepVerifier.create(myFlux)
        .thenConsumeWhile(Predicate<T>, Consumer<T>)
        .verifyComplete();
I literally did
StepVerifier.create(myFlux)
        .thenConsumeWhile(__ -> true, entry -> {
            // assertions
        })
        .verifyComplete();

Related

How to handle a forEach loop that returns early based on an enum except for one path in Kotlin

We have an API service call that returns a bunch of validation messages. In each message there is a string that contains an error code.
Our implementation converts the validation string into an enum value and then we process the enumeration, as there are some error codes we just don't care about.
The question becomes, how to handle the loop of messages in a Kotlin way:
response.validationErrors?.forEach {
    val mediaFailure = decodeValidationMessage(it.message)
    if (mediaFailure != MediaFailure.Unknown) {
        return when (mediaFailure) {
            MediaFailure.Encrypted -> DomainResponse(ErrorReasonCode.ERR_DOCUMENT_ENCRYPTED)
            MediaFailure.NotSupported -> Response.validationFailed()
            MediaFailure.InternalError -> Response.serviceFailed()
            else -> throw NotImplementedError()
        }
    }
}
Here we loop through all the messages, and once a message's error is not Unknown, we return the appropriate response to the caller.
However, IntelliJ wants the else path, even though the if prevents that from happening.
Is there a proper Kotlin way of implementing this kind of loop?
From what I understood, you want to return a response for the first mediaFailure which is not MediaFailure.Unknown and you don't want that throw NotImplementedError() part in your function.
One way to fix this is to remove the if condition and continue the forEach loop when MediaFailure.Unknown is found.
response.validationErrors?.forEach {
    val mediaFailure = decodeValidationMessage(it.message)
    return when (mediaFailure) {
        MediaFailure.Encrypted -> DomainResponse(ErrorReasonCode.ERR_DOCUMENT_ENCRYPTED)
        MediaFailure.NotSupported -> Response.validationFailed()
        MediaFailure.InternalError -> Response.serviceFailed()
        MediaFailure.Unknown -> return@forEach // continue the loop
    }
}
I think this is one of the many cases when it pays to step back from the code a bit and try to look at the big picture. To ask “What's the ultimate goal here? What am I trying to achieve with this code?”
(In traditional, lower-level languages, almost anything you want to do with a list or array requires a loop, so you get into the habit of reaching for a for or while without thinking. But there are often alternative approaches in Kotlin that can be more concise, clearer, and harder to get wrong. They tend to be more about what you're trying to achieve, rather than how.)
In this case, it looks you want to find the first item which decodes to give a known type (i.e. not MediaFailure.Unknown), and return a value derived from that.
So here's an attempt to code that:
val message = response.validationErrors?.asSequence()
    ?.map { decodeValidationMessage(it.message) }
    ?.firstOrNull { it != MediaFailure.Unknown }

return when (message) {
    MediaFailure.Encrypted -> DomainResponse(ErrorReasonCode.ERR_DOCUMENT_ENCRYPTED)
    MediaFailure.NotSupported -> Response.validationFailed()
    MediaFailure.InternalError, null -> Response.serviceFailed()
    else -> throw NotImplementedError()
}
This is still fairly similar to your code, and it's about as efficient. (Thanks to the asSequence(), it doesn't decode any more messages than it needs to.) But the firstOrNull() makes clear what you're looking for; and it's obvious that you go on to process only that one message — a fact which is rather lost in the original version.
(If there are no valid messages, message will be null and so this will return serviceFailed(), as per comments.)
There are of course many ways to skin a cat, and I can think of several variations. (It's often a worthwhile exercise to come up with some — if nothing else, it gives you more confidence in the version you end up with!) Try to pick whichever seems clearest, simplest, and best matches the big picture of what you're doing; that tends to work out best in the long run.

Mono.Defer() vs Mono.create() vs Mono.just()?

Could someone help me to understand the difference between:
Mono.defer()
Mono.create()
Mono.just()
How to use it properly?
Mono.just(value) is the most primitive - once you have a value you can wrap it into a Mono and subscribers down the line will get it.
Mono.defer(monoSupplier) lets you provide the whole expression that supplies the resulting Mono instance. The evaluation of this expression is deferred until somebody subscribes. Inside of this expression you can additionally use control structures like Mono.error(throwable) to signal an error condition (you cannot do this with Mono.just).
Mono.create(monoSinkConsumer) is the most advanced method, giving you full control over the emitted values. Instead of needing to return a Mono instance from the callback (as with Mono.defer), you get control over the MonoSink<T>, which lets you emit values through the MonoSink.success(), MonoSink.success(value), and MonoSink.error(throwable) methods.
Reactor documentation contains a few good examples of possible Mono.create use cases: link to doc.
The general advice is to use the least powerful abstraction to do the job: Mono.just -> Mono.defer -> Mono.create.
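As a rough illustration of that ordering, here is a small sketch. It is not taken from the answer; the class name, the timestamp values, and the sink callback body are placeholder assumptions.

import java.time.Instant;

import reactor.core.publisher.Mono;

public class MonoCreationSketch {

    public static void main(String[] args) {
        // Mono.just: the value is computed once, eagerly, at assembly time.
        Mono<Instant> just = Mono.just(Instant.now());

        // Mono.defer: the supplier runs on every subscription, so each
        // subscriber gets a freshly assembled Mono (and a fresh timestamp here).
        Mono<Instant> deferred = Mono.defer(() -> Mono.just(Instant.now()));

        // Mono.create: full control over the MonoSink, e.g. for bridging a callback API.
        Mono<String> created = Mono.create(sink -> {
            try {
                sink.success("value produced inside the sink callback");
            } catch (Exception e) {
                sink.error(e);
            }
        });

        just.subscribe(System.out::println);
        deferred.subscribe(System.out::println);
        created.subscribe(System.out::println);
    }
}

Subscribing to just a second time would print the same instant again, while a second subscription to deferred would produce a new one, which is exactly the eager-versus-deferred distinction described above.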
Although in general I agree with (and praise) @IlyaZinkovich's answer, I would be careful with the advice
The general advice is to use the least powerful abstraction to do the job: Mono.just -> Mono.defer -> Mono.create.
In the reactive approach, especially for beginners, it's very easy to misjudge which abstraction actually is the "least powerful" one. I am not saying anything different from @IlyaZinkovich; I am just describing one specific aspect in more detail.
Here is one specific use case where the more powerful abstraction Mono.defer() is preferable to Mono.just(), but which might not be visible at first glance.
See also:
https://stackoverflow.com/a/54412779/2886891
https://stackoverflow.com/a/57877616/2886891
We use switchIfEmpty() as a subscription-time branching:
// First ask provider1
provider1.provide1(someData)
        // If provider1 did not provide the result, ask the fallback provider provider2
        .switchIfEmpty(provider2.provide2(someData))

public Mono<MyResponse> provide2(MyRequest someData) {
    // The Mono assembly is needed only in some corner cases,
    // but in fact it always happens
    return Mono.just(someData)
            // expensive data processing which might even fail at assembly time
            .map(...)
            .map(...)
            ...
}
provider2.provide2() should receive someData only when provider1.provide1() does not return a result, and/or the assembly of the Mono returned by provider2.provide2() is expensive and may even fail when called on the wrong data.
In this case defer() is preferable, even if it might not be obvious at first glance:
provider1.provide1(someData)
        // ONLY IF provider1 did not provide the result, assemble another Mono with provider2.provide2()
        .switchIfEmpty(Mono.defer(() -> provider2.provide2(someData)))

Gmock - InSequence vs RetiresOnSaturation

I don't understand the following gmock example:
{
    InSequence s;
    for (int i = 1; i <= n; i++) {
        EXPECT_CALL(turtle, GetX())
            .WillOnce(Return(10 * i))
            .RetiresOnSaturation();
    }
}
When I remove .RetiresOnSaturation() the above code works the same way - GetX returns 10, 20, and so on. What is the reason to use .RetiresOnSaturation() when we also use an InSequence object? Could you explain that?
In the exact example given, RetiresOnSaturation() doesn't change anything. Once the final expectation in the sequence is saturated, that expectation remains active but saturated; a further call would cause the test to fail.
RetiresOnSaturation() is generally used when overlaying expectations. For example:
#include <gmock/gmock.h>

using ::testing::InSequence;
using ::testing::Return;

class Turtle {
public:
    virtual int GetX() = 0;
};

class MockTurtle : public Turtle {
public:
    MOCK_METHOD0(GetX, int());
};

TEST(GmockStackoverflow, QuestionA)
{
    MockTurtle turtle;
    // General expectation - Perhaps set on the fixture class?
    EXPECT_CALL(turtle, GetX()).WillOnce(Return(0));
    // Extra expectation
    EXPECT_CALL(turtle, GetX()).WillOnce(Return(10)).RetiresOnSaturation();

    turtle.GetX();
    turtle.GetX();
}
This property can be used in combination with InSequence when the sequence of expected events overlays another expectation. In this scenario the last expectation in the sequence must be marked RetiresOnSaturation(). Note that only the last expectation needs to be marked, because when an expectation in the sequence is saturated, it retires the prerequisite expectations.
The example below demonstrates how this might work out in practice. Removing RetiresOnSaturation() causes the test to fail.
TEST(GmockStackoverflow, QuestionB)
{
    MockTurtle turtle;
    EXPECT_CALL(turtle, GetX()).WillOnce(Return(0));
    {
        InSequence s;
        EXPECT_CALL(turtle, GetX()).WillOnce(Return(10));
        EXPECT_CALL(turtle, GetX()).WillOnce(Return(10)).RetiresOnSaturation();
    }
    turtle.GetX();
    turtle.GetX();
    turtle.GetX();
}
From my experience, some (ok, possibly many) developers run into a problem, such as a gtest error message, discover that RetiresOnSaturation() makes the problem go away, and then get into the habit of liberally sprinkling RetiresOnSaturation() throughout their unit tests, because it solves problems. This is apparently easier than reasoning about what the test case is supposed to accomplish. On the other hand, I like to think in terms of what has to happen (according to the documented API contract) in what order, which can be a partial order if you use After() or don't have everything in the same sequence, and that makes more expressive constructs like InSequence or After() come naturally to my mind.
So, as Adam Casey stated, there is no technical reason, but IMO there could be an issue of magical thinking or insufficient training.
I recommend avoiding RetiresOnSaturation(). There are some general issues with it (like causing confusing warning messages, see example below), but mostly it is too low level when compared to the alternatives, and is almost never needed if you have clean contracts, and use the previously mentioned alternatives correctly. You could say it's the goto of gtest expectations…
Addendum A: Example of when a gratuitous RetiresOnSaturation() makes for worse messages, and yes, I have seen such code:
EXPECT_CALL(x, foo()).WillOnce(Return(42)).RetiresOnSaturation();
If x.foo() is called more than once, let's say twice, then, without RetiresOnSaturation(), you would have received an error message like "No matching expectation for foo() … Expected: to be called once … Actual: called twice (oversaturated)", which is about as specific as possible. But because of RetiresOnSaturation(), you will only get an "Unexpected function call foo()" warning, which is confusing and meaningless.
Addendum B: In your example, it is also possible that a refactoring to use InSequence was made after the fact, and the person doing the refactoring didn't realize that RetiresOnSaturation() was now redundant. You could do a "blame" in your version control system to check.

Alternative to the try (?) operator suited to iterator mapping

In the process of learning Rust, I am getting acquainted with error propagation and the choice between unwrap and the ? operator. After writing some prototype code that only uses unwrap(), I would like to remove unwrap from reusable parts, where panicking on every error is inappropriate.
How would one avoid the use of unwrap in a closure, like in this example?
// todo is VecDeque<PathBuf>
let dir = fs::read_dir(&filename).unwrap();
todo.extend(dir.map(|dirent| dirent.unwrap().path()));
The first unwrap can be easily changed to ?, as long as the containing function returns Result<(), io::Error> or similar. However, the second unwrap, the one in dirent.unwrap().path(), cannot be changed to dirent?.path() because the closure must return a PathBuf, not a Result<PathBuf, io::Error>.
One option is to change extend to an explicit loop:
let dir = fs::read_dir(&filename)?;
for dirent in dir {
    todo.push_back(dirent?.path());
}
But that feels wrong - the original extend was elegant and clearly reflected the intention of the code. (It might also have been more efficient than a sequence of push_backs.) How would an experienced Rust developer express error checking in such code?
How would one avoid the use of unwrap in a closure, like in this example?
Well, it really depends on what you wish to do upon failure.
Should the failure be reported to the user, or be silent?
If reported, should only one failure be reported, or all of them?
If a failure occurs, should it interrupt processing?
For example, you could perfectly well decide to silently ignore all failures and just skip the entries that fail. In this case, Iterator::filter_map combined with Result::ok is exactly what you are asking for.
let dir = fs::read_dir(&filename)?;
todo.extend(dir.filter_map(Result::ok).map(|dirent| dirent.path()));
The Iterator interface is full of goodies, it's definitely worth perusing when looking for tidier code.
Here is a solution based on the filter_map suggested by Matthieu. It calls Result::map_err to ensure the error is "caught" and logged, and then uses Result::ok and filter_map to drop the failed entries from the iteration:
fn log_error(e: io::Error) {
    eprintln!("{}", e);
}
// The closure gives `?` something to return into; its error is logged at the end.
(|| -> io::Result<()> {
    let dir = fs::read_dir(&filename)?;
    todo.extend(dir
        .filter_map(|res| res.map_err(log_error).ok())
        .map(|dirent| dirent.path()));
    Ok(())
})().unwrap_or_else(log_error)

FreeMarker ?has_content on Iterator

How does FreeMarker implement .iterator()?has_content for an Iterator?
Does it attempt to get the first item to decide whether to render,
and does it keep it for the iteration? Or does it start another iteration?
I have found this
static boolean isEmpty(TemplateModel model) throws TemplateModelException
{
    if (model instanceof BeanModel) {
        ...
    } else if (model instanceof TemplateCollectionModel) {
        return !((TemplateCollectionModel) model).iterator().hasNext();
        ...
    }
}
But I am not sure what FreeMarker wraps the Iterator type into.
Is it TemplateCollectionModel?
It doesn't get the first item; it just calls hasNext(). And yes, with the default ObjectWrapper (see the object_wrapper configuration setting), it will be treated as a TemplateCollectionModel, though one that can be listed only once. But prior to 2.3.24 (unreleased as I write this), there are some glitches to look out for (see below). Also, if you are using pure BeansWrapper instead of the default ObjectWrapper, there's a glitch (see below too).
Glitch (2.3.23 and earlier): If you are using the default ObjectWrapper (you should), and the Iterator is not fetched again for the subsequent listing, that is, the same already-wrapped value is reused, then it will freak out, telling you that an Iterator can be listed only once.
Glitch 2, with pure BeansWrapper: It will just always say that it has content, even if the Iterator is empty. That's also fixed in 2.3.24; however, you need to create the BeansWrapper itself (i.e., not just the Configuration) with 2.3.24 incompatibleImprovements for the fix to be active.
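As a rough sketch of that setup, not taken from the answer and written against the standard FreeMarker 2.3.x API as I understand it (the class names and the version value should be double-checked against your release):

import freemarker.ext.beans.BeansWrapper;
import freemarker.ext.beans.BeansWrapperBuilder;
import freemarker.template.Configuration;
import freemarker.template.Version;

public class BeansWrapperSetupSketch {

    public static Configuration configure() {
        Version v2324 = new Version(2, 3, 24);

        // incompatibleImprovements must be set on the wrapper itself,
        // not only on the Configuration, for the empty-Iterator fix to apply.
        BeansWrapper wrapper = new BeansWrapperBuilder(v2324).build();

        Configuration cfg = new Configuration(v2324);
        cfg.setObjectWrapper(wrapper);
        return cfg;
    }
}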
Note that <#list ...>...<#else>...</#list> has been and is working in all cases (even before 2.3.24).
Last but not least, thanks for bringing this topic to my attention. I have fixed these in 2.3.24 because of that. (A nightly build can be made from here: https://github.com/apache/incubator-freemarker/tree/2.3-gae)