Map<String, Mono<byte[]>> map = new HashMap<>();
List<User> userList = new ArrayList<>();
map.entrySet().stream().forEach(entry -> {
    if (entry.getValue() == null) {
        log.info("Data not found for key {} ", entry.getKey());
    } else {
        entry.getValue().log().map(value -> {
            try {
                return User.parseFrom(value);
            } catch (InvalidProtocolBufferException e) {
                e.printStackTrace();
            }
            return null;
        }).log().subscribe(p -> userList.add(p));
    }
});
Here entry.getValue() is a MonoNext, and User.parseFrom accepts a byte[].
I am new to the reactive programming world. How do I resolve this MonoNext to the value it actually holds? I tried using flatMap instead, but that did not work either.
Any suggestions appreciated! Thanks in advance!
MonoNext (an internal Reactor implementation of Mono) emits its value asynchronously, which means that it might not yet have the value when evaluated in your code. The only way to retrieve the value is to subscribe to it (either manually or as part of a Reactor pipeline using flatMap and others) and wait until the Mono emits its item.
Here is what your code would look like if placed in a Reactor pipeline using flatMap:
Map<String, Mono<byte[]>> map = new HashMap<>();
List<User> userList = Flux.fromIterable(map.entrySet())
    .filter(entry -> entry.getValue() != null)
    .doOnDiscard(Map.Entry.class, entry -> log.info("Data not found for key {} ", entry.getKey()))
    .flatMap(entry -> entry.getValue()
        .log()
        // parseFrom throws a checked exception, so it cannot be passed
        // to map as a plain method reference
        .map(value -> {
            try {
                return User.parseFrom(value);
            } catch (InvalidProtocolBufferException e) {
                throw Exceptions.propagate(e);
            }
        })
        .onErrorResume(error -> Mono.fromRunnable(error::printStackTrace)))
    .collectList()
    .block();
Note that the block operator will wait until all items are retrieved. If you want to stay asynchronous, you can remove the block and return a Mono<List<User>>, or additionally remove the collectList to return a Flux<User>.
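For illustration, the fully asynchronous variant could look like this. This is a minimal sketch assuming the same map and protobuf User type as above; getUsers is a hypothetical method name, not from the original answer:
public Mono<List<User>> getUsers(Map<String, Mono<byte[]>> map) {
    return Flux.fromIterable(map.entrySet())
        .filter(entry -> entry.getValue() != null)
        .flatMap(Map.Entry::getValue)
        .map(value -> {
            try {
                return User.parseFrom(value);
            } catch (InvalidProtocolBufferException e) {
                throw Exceptions.propagate(e);
            }
        })
        .collectList(); // no block(): the caller subscribes and receives the list when ready
}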
Related
Hello, I recently started studying WebFlux.
Sometimes I encounter tasks where I need to build a simple DTO and return it.
Take, for example, a usual DTO class:
@Data
@Builder
public static class Dto {
    private long id;
    private String code1;
    private String code2;
}
And a primitive service with two methods...
@Nullable Mono<String> getCode1(long id);
@Nullable String getCode2(long id);
And I wrote a method that builds the resulting Mono:
private Mono<Dto> fill(long id) {
    var dto = Dto.builder()
            .id(id)
            .build();
    // doOnNext
    var dtoMono1 = service.getCode1(id)
            .map(code -> {
                dto.setCode1(code);
                return dto;
            })
            .doOnNext(innerDto -> innerDto.setCode2(service.getCode2(id)));
    // map
    var dtoMono2 = service.getCode1(id)
            .map(code -> {
                dto.setCode1(code);
                return dto;
            })
            .map(unused -> service.getCode2(id))
            .map(code -> {
                dto.setCode2(code);
                return dto;
            });
    // just
    var dtoMono3 = Mono.just(dto)
            .flatMap(innerDto -> service.getCode1(innerDto.getId()));
    // fromCallable
    var dtoMono4 = Mono.fromCallable(() -> dto)
            .subscribeOn(Schedulers.boundedElastic())
            .flatMap(innerDto -> service.getCode1(innerDto.getId()));
    return dtoMono1; // pick one of the variants above
}
QUESTION:
Is it possible to simply create the DTO and use it in the WebFlux call chain, or do I need to wrap it in Mono.just or Mono.fromCallable (and what are the pros and cons)?
Is it better to fill in values via doOnNext or via map? An extra line (return dto) appears in the case of map, and some people also told me that if, for example, NULL comes, doOnNext will miss it and go on with the current dto. On the other hand, map is used to transform the object, while doOnNext is more for debugging and logging.
Thank you...
How about using the zip operator in such a case?
I hope this example can help you:
private Mono<Dto> fill(long id) {
    return Mono.zip(someService.getCode1(id), Mono.just(someService.getCode2(id)))
        .map(tuple ->
            Dto.builder()
                .id(id)
                .code1(tuple.getT1())
                .code2(tuple.getT2())
                .build()
        );
}
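On the Mono.just vs Mono.fromCallable part of the question (my addition, not part of the answer above): Mono.just captures its value eagerly at assembly time, while Mono.fromCallable defers the call until subscription. A minimal sketch of the difference, reusing someService from the answer:
// Eager: getCode2 runs right here, once, even if nobody ever subscribes,
// and Mono.just throws a NullPointerException if it returns null.
Mono<String> eager = Mono.just(someService.getCode2(id));

// Lazy: getCode2 runs on each subscription; a null result simply
// completes the Mono empty instead of throwing.
Mono<String> lazy = Mono.fromCallable(() -> someService.getCode2(id));
Note that since getCode2 is @Nullable, zipping with the lazy variant would complete empty when it returns null, so a fallback such as .defaultIfEmpty("") may be needed.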
I am new to Reactor. I tried to create a Flux from an Iterable, and then I want to convert my object into a String using an ObjectMapper. The IDE warns about this part of the code, new ObjectMapper().writeValueAsString(event), with the message "Inappropriate blocking method call". There is no compile error. Could you suggest a solution?
Flux.fromIterable(Arrays.asList(new Event(), new Event()))
    .flatMap(event -> {
        try {
            return Mono.just(new ObjectMapper().writeValueAsString(event));
        } catch (JsonProcessingException e) {
            return Mono.error(e);
        }
    })
    .subscribe(jsonString -> {
        System.out.println("jsonString = " + jsonString);
    });
I will give you an answer, but I am not sure this is what you want. writeValueAsString blocks the thread, and you don't get the benefits of reactive code if you block; that's why the IDE warns you. You can create the Mono with a MonoSink, like below.
AtomicReference<ObjectMapper> objectMapper = new AtomicReference<>(new ObjectMapper());
Flux.fromIterable(Arrays.asList(new Event(), new Event()))
    .flatMap(event -> {
        return Mono.create(monoSink -> {
            try {
                monoSink.success(objectMapper.get().writeValueAsString(event));
            } catch (JsonProcessingException e) {
                monoSink.error(e);
            }
        });
    })
    .cast(String.class) // this cast gives you the exact data type you want to continue the pipeline with
    .subscribe(jsonString -> {
        System.out.println("jsonString = " + jsonString);
    });
Please try out this method and check that the warning is gone.
It doesn't matter that objectMapper was a plain Java object as you had it (if you don't change its configuration, it is safe to share); the AtomicReference is not necessary for your case.
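As an aside (my addition, not part of the answers above), a more common idiom for wrapping a blocking call like Jackson serialization is Mono.fromCallable, offloaded to Reactor's scheduler for blocking work; fromCallable signals any thrown JsonProcessingException as an error automatically:
ObjectMapper mapper = new ObjectMapper(); // reuse one instance; it is thread-safe for serialization
Flux.fromIterable(Arrays.asList(new Event(), new Event()))
    .flatMap(event -> Mono.fromCallable(() -> mapper.writeValueAsString(event))
        // boundedElastic is intended for blocking calls
        .subscribeOn(Schedulers.boundedElastic()))
    .subscribe(jsonString -> System.out.println("jsonString = " + jsonString));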
I am using Spring hexagonal architecture (ports and adapters), as my application needs to read a stream of data from the source topic, process/transform the data, and send it to the destination topic.
My application needs to do the following actions:
Read the data (which will have the callback URL).
Make an HTTP call with the URL in the incoming data (using WebClient).
Get the actual data, which needs to be transformed into another format.
Send the transformed data to the outgoing topic.
Here is my code:
public Flux<TargetData> getData(Flux<Message<EventInput>> message)
{
    return message
        .flatMap(it -> {
            Event event = objectMapper.convertValue(it.getPayload(), Event.class);
            String eventType = event.getHeader().getEventType();
            String callBackURL = "";
            if (DISTRIBUTOR.equals(eventType)) {
                callBackURL = event.getHeader().getCallbackEnpoint();
                WebClient client = WebClient.create();
                Flux<NodeInput> nodeInputFlux = client.get()
                    .uri(callBackURL)
                    .headers(httpHeaders -> {
                        httpHeaders.setContentType(MediaType.APPLICATION_JSON);
                        List<MediaType> acceptTypes = new ArrayList<>();
                        acceptTypes.add(MediaType.APPLICATION_JSON);
                        httpHeaders.setAccept(acceptTypes);
                    })
                    .exchangeToFlux(response -> {
                        if (response.statusCode().equals(HttpStatus.OK)) {
                            System.out.println("Response is OK");
                            return response.bodyToFlux(NodeInput.class);
                        }
                        return Flux.empty();
                    });
                nodeInputFlux.subscribe(nodeInput -> {
                    SourceData source = objectMapper.convertValue(nodeInput, SourceData.class);
                    // return Flux.fromIterable(this.TransformImpl.transform(source));
                });
            }
            return Flux.empty();
        });
}
The commented line in the above code gives a compilation error, since the subscribe callback does not allow return values.
I need a solution without using block here.
Please help me here. Thanks in advance.
I think I understood the logic. What you may want is this:
public Flux<TargetData> getData(Flux<Message<EventInput>> message) {
    return message
        .flatMap(it -> {
            // 1. Marshalling and unmarshalling operations are CPU-expensive and could harm the event loop
            return Mono.fromCallable(() -> objectMapper.convertValue(it.getPayload(), Event.class))
                .subscribeOn(Schedulers.parallel());
        })
        .filter(event -> {
            // 2. Moving your if-statement to a filter - same behavior
            String eventType = event.getHeader().getEventType();
            return DISTRIBUTOR.equals(eventType);
        })
        // Here is trick 1 - the request below returns a Flux of SourceData that we flatten
        // into a single Flux<SourceData> instead of a Flux<List<SourceData>>
        .flatMap(event -> {
            // This WebClient should not be created here. It should be a singleton injected into your class
            WebClient client = WebClient.create();
            return client.get()
                .uri(event.getHeader().getCallbackEnpoint())
                .accept(MediaType.APPLICATION_JSON)
                .exchangeToFlux(response -> {
                    if (response.statusCode().equals(HttpStatus.OK)) {
                        System.out.println("Response is OK");
                        return response.bodyToFlux(SourceData.class);
                    }
                    return Flux.empty();
                });
        })
        // Here is trick 2 - supposing that transform returns an Iterable of TargetData, this gives you
        // a flattened Flux<TargetData> instead of a Flux<List<TargetData>>
        .flatMapIterable(source -> this.TransformImpl.transform(source));
}
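To expand on the comment about the WebClient (a sketch of one common approach, not part of the original answer; CallbackClient is a hypothetical name): build it once and inject it, for example via the constructor, assuming Spring Boot's auto-configured WebClient.Builder:
@Component
public class CallbackClient {
    private final WebClient client;

    // Building the client once avoids re-creating connection resources per event
    public CallbackClient(WebClient.Builder builder) {
        this.client = builder.build();
    }

    public Flux<SourceData> fetch(String callbackUrl) {
        return client.get()
            .uri(callbackUrl)
            .accept(MediaType.APPLICATION_JSON)
            .retrieve()
            .bodyToFlux(SourceData.class);
    }
}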
I have a generic screen that subscribes to an RxJava2 Flowable that returns a List. It then displays the content of the list.
I now have a use case, though, where I need to collect data from multiple endpoints, emit data once some of the calls complete, and then emit data again once the remaining ones complete.
I'm doing this using Flowable.create(), but I've seen a lot of posts saying that there's usually a better and safer way than using create. I believe that is the case here, since I need to subscribe to an observable within the observable, which ideally I wouldn't want to do.
Because I subscribe within, I know the emitter can become cancelled inside the observable while other network calls are completing, so I've added checks to ensure it doesn't throw an error after it's disposed, which do work (at least in testing...). [I also just remembered that, if I kept it like this, I could dispose of the inner subscription when the outer one is disposed.]
The first two calls may be incredibly fast (or instant), which is why I want to emit the first result right away; the following four network calls, which rely on that data, may take time to process.
It looks roughly like this right now...
return Flowable.create<List<Object>>({ activeEmitter ->
    Single.zip(
        single1(),
        single2(),
        BiFunction { single1Result: Object, single2Result: Object ->
            if (single1Result.something || single2Result.somethingElse) {
                activeEmitter.onNext(function(single1Result, single2Result)) // returns a list
            }
            Single.zip(
                single3(single1Result),
                single4(single2Result),
                single5(single1Result),
                single6(single2Result),
                Function4 { single3Result: Object,
                            single4Result: Object,
                            single5Result: Object,
                            single6Result: Object ->
                    ObjectHolder(single1Result, single2Result, single3Result, single4Result, single5Result, single6Result)
                }
            )
        }
    ).flatMap { innerSingle ->
        innerSingle.flatMap { objectHolder ->
            Single.just(parseObjects(objectHolder))
        }
    }.subscribeBy(
        onError = { error ->
            if (!activeEmitter.isCancelled) {
                activeEmitter.onError(error)
            }
        },
        onSuccess = { results ->
            if (!activeEmitter.isCancelled) {
                activeEmitter.onNext(results)
                activeEmitter.onComplete()
            }
        }
    )
}, BackpressureStrategy.BUFFER)
I can't figure out another way to return a Flowable that emits the results of multiple different network calls without doing it like this.
Is there a different/better way I can't find?
I worked this out given ctranxuan's response. Posting it so he can tweak/optimize it, and then I'll accept his answer.
return Single.zip(single1(), single2(),
    BiFunction { single1Result: Object, single2Result: Object ->
        Pair(single1Result, single2Result)
    }
).toFlowable()
    .flatMap { single1AndSingle2 ->
        if (isFirstLoad) {
            createItemOrNull(single1AndSingle2.first, single1AndSingle2.second)?.let { result ->
                Single.just(listOf(result)).mergeWith(proceedWithFinalNetworkCalls(single1AndSingle2))
            } ?: proceedWithFinalNetworkCalls(single1AndSingle2).toFlowable()
        } else {
            proceedWithFinalNetworkCalls(single1AndSingle2).toFlowable()
        }
    }.doOnComplete {
        isFirstLoad = false
    }

fun proceedWithFinalNetworkCalls(single1AndSingle2: Pair<Object, Object>): Single<List<Object>> {
    return Single.zip(
        single3(single1AndSingle2.first),
        single4(single1AndSingle2.second),
        single5(single1AndSingle2.first),
        single6(single1AndSingle2.second),
        Function4 { single3Result: Object,
                    single4Result: Object,
                    single5Result: Object,
                    single6Result: Object ->
            ObjectHolder(single1AndSingle2.first, single1AndSingle2.second, single3Result, single4Result, single5Result, single6Result)
        }
    ).map { objectHolder -> parseObjects(objectHolder) } // map the holder to the List the screen consumes
}
Sorry, it's in Java, but from what I've understood, something like this may be a possible solution:
public static void main(String[] args) {
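    // cache() ensures single1 is subscribed only once: the immediate first
    // emission and the zip below both reuse its cached result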
    final Single<String> single1 = single1().cache();
    single1.map(List::of)
            .mergeWith(single1.zipWith(single2(), Map::entry)
                    .flatMap(entry -> Single.zip(
                            single3(entry.getKey()),
                            single4(entry.getValue()),
                            single5(entry.getKey()),
                            single6(entry.getValue()),
                            (el3, el4, el5, el6) -> objectHolder(entry.getKey(), entry.getValue(), el3, el4, el5, el6))))
            .subscribe(System.out::println,
                    System.err::println);
    Flowable.timer(1, MINUTES) // Just to block the main thread for a while
            .blockingSubscribe();
}
private static List<String> objectHolder(final String el1,
                                         final String el2,
                                         final String el3,
                                         final String el4,
                                         final String el5,
                                         final String el6) {
    return List.of(el1, el2, el3, el4, el5, el6);
}
static Single<String> single1() {
    return Single.just("s1");
}
static Single<String> single2() {
    return Single.just("s2");
}
static Single<String> single3(String value) {
    return single("s3", value);
}
static Single<String> single4(String value) {
    return single("s4", value);
}
static Single<String> single5(String value) {
    return single("s5", value);
}
static Single<String> single6(String value) {
    return single("s6", value);
}
static Single<String> single(String value1, String value2) {
    return Single.just(value1).map(l -> l + "_" + value2);
}
This outputs:
[s1]
[s1, s2, s3_s1, s4_s2, s5_s1, s6_s2]
I was wondering if anybody has found a way to stub/mock logic inside a lambda without changing the lambda's visibility?
public List<Item> processFile(String fileName) {
    // do some magic..
    Function<String, List<String>> reader = (name) -> {
        List<String> items = new ArrayList<>();
        try (BufferedReader br = new BufferedReader(new FileReader(name))) {
            String output;
            while ((output = br.readLine()) != null) {
                items.add(output);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        return items;
    };
    List<String> lines = reader.apply("file.csv");
    // do some more magic..
}
I would say the rule is that if a lambda expression is so complex that you feel the need to mock out bits of it, it's probably too complex. It should be broken into smaller pieces that are composed together, or perhaps the model needs to be adjusted to make it more amenable to composition.
I will say that Andrey Chaschev's answer, which suggests parameterizing a dependency, is a good one and is probably applicable in some situations. So, +1 for that. One could continue this process and break the processing down into smaller pieces, like so:
public List<Item> processFile(
        String fileName,
        Function<String, BufferedReader> toReader,
        Function<BufferedReader, List<String>> toStringList,
        Function<List<String>, List<Item>> toItemList)
{
    List<String> lines = null;
    try (BufferedReader br = toReader.apply(fileName)) {
        lines = toStringList.apply(br);
    } catch (IOException ioe) { /* ... */ }
    return toItemList.apply(lines);
}
A couple of observations on this, though. First, this doesn't work as written, since the various lambdas throw pesky IOExceptions, which are checked, and the Function type isn't declared to throw that exception. Second, the lambdas you have to pass to this function are monstrous. Even though this doesn't work (because of the checked exceptions), I wrote it out:
void processAnActualFile() {
    List<Item> items = processFile(
        "file.csv",
        fname -> new BufferedReader(new FileReader(fname)), // ERROR: uncaught IOException
        br -> {
            List<String> result = new ArrayList<>();
            String line;
            while ((line = br.readLine()) != null) {
                result.add(line);
            }
            return result;
        }, // ERROR: uncaught IOException
        stringList -> {
            List<Item> result = new ArrayList<>();
            for (String line : stringList) {
                result.add(new Item(line));
            }
            return result;
        });
}
Ugh! I think I've discovered a new code smell:
If you have to write a for-loop or while-loop inside a lambda, you're doing something wrong.
A few things are going on here. First, the I/O library is really composed of different pieces of implementation (InputStream, Reader, BufferedReader) that are tightly coupled. It's really not useful to try to break them apart. Indeed, the library has evolved so that there are some convenience utilities (such as the NIO Files.readAllLines) that handle a bunch of leg work for you.
The more significant point is that designing functions that pass aggregates (lists) of values among themselves, and composing these functions, is really the wrong way to go. It leads every function to have to write a loop inside of it. What we really want to do is write functions that each operate on a single value, and then let the new Streams library in Java 8 take care of the aggregation for us.
The key function to extract here is from the code described by the comment "do some more magic", which converts List<String> into List<Item>. We want to extract the computation that converts one String into an Item, like this:
class Item {
    static Item fromString(String s) {
        // do a little bit of magic
    }
}
Once you have this, then you can let the Streams and NIO libraries do a bunch of the work for you:
public List<Item> processFile(String fileName) {
    try (Stream<String> lines = Files.lines(Paths.get(fileName))) {
        return lines.map(Item::fromString)
                    .collect(Collectors.toList());
    } catch (IOException ioe) {
        ioe.printStackTrace();
        return Collections.emptyList();
    }
}
(Note that more than half of this short method is for dealing with the IOException.)
Now if you want to do some unit testing, what you really need to test is that little bit of magic. So you wrap it into a different stream pipeline, like this:
void testItemCreation() {
    List<Item> result =
        Arrays.asList("first", "second", "third")
              .stream()
              .map(Item::fromString)
              .collect(Collectors.toList());
    // make assertions over result
}
(Actually, even this isn't quite right. You'd want to write unit tests for converting a single line into a single Item. But maybe you have some test data somewhere, so you'd convert it to a list of items this way, and then make global assertions over the relationship of the resulting items in the list.)
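For completeness, such a per-line test might look like this minimal sketch (my illustration; the assertions depend on what Item.fromString actually does):
void testSingleItemCreation() {
    Item item = Item.fromString("first");
    // make assertions over the fields of item here
}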
I've wandered pretty far from your original question of how to break apart a lambda. Please forgive me for indulging myself.
The lambda in the original example is pretty unfortunate since the Java I/O libraries are quite cumbersome, and there are new APIs in the NIO library that turn the example into a one-liner.
Still, the lesson here is that instead of composing functions that process aggregates, compose functions that process individual values, and let streams handle the aggregation. This way, instead of testing by mocking out bits of a complex lambda, you can test by plugging together stream pipelines in different ways.
I'm not sure if that's what you're asking, but you could extract a lambda from the lambda, i.e. move it to another class or pass it in as-is as a parameter. In the example below I mock the reader creation:
public static void processFile(String fileName, Function<String, BufferedReader> readerSupplier) {
    // do some magic..
    Function<String, List<String>> reader = (name) -> {
        List<String> items = new ArrayList<>();
        try (BufferedReader br = readerSupplier.apply(name)) {
            String output;
            while ((output = br.readLine()) != null) {
                items.add(output);
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        return items;
    };
    List<String> lines = reader.apply(fileName);
    // do some more magic..
}
public static void main(String[] args) {
    // mocked call
    processFile("file.csv", name -> new BufferedReader(new StringReader("line1\nline2\n")));
    // original call
    processFile("1.csv", name -> {
        try {
            return new BufferedReader(new FileReader(name));
        } catch (FileNotFoundException e) {
            throw new RuntimeException(e);
        }
    });
}