Merging Mono and Flux in Spring WebFlux

Let's say I have a method store(Flux<DataBuffer> bufferFlux) which receives some data as a flux of DataBuffers, calculates an identifier, creates an AsynchronousFileChannel and then uses DataBufferUtils to write the data to the channel.
I started like this. Please note that the following code will not work; it is only meant to illustrate how I create the file channel and how I would like to write the data, releasing the used buffers and closing the channel afterwards.
public Mono<Void> store(Flux<DataBuffer> bufferFlux) {
    var channelMono = Mono.defer(() -> {
        try {
            log.info("opening file {}", filePath);
            return Mono.just(AsynchronousFileChannel
                    .open(filePath, StandardOpenOption.CREATE_NEW, StandardOpenOption.WRITE));
        } catch (IOException ex) {
            log.error("error opening file", ex);
            return Mono.error(ex);
        }
    });

    // calculate identifier
    // store buffers to AsynchronousFileChannel
    return DataBufferUtils
            .write(bufferFlux, fileChannel)
            .doOnNext(DataBufferUtils.releaseConsumer())
            .doFinally(f -> {
                try {
                    fileChannel.close();
                } catch (IOException ioException) {
                    log.error("error closing file channel", ioException);
                }
            })
            .then();
}
The problem is that I have just started with reactive programming and have no clue how to bring these two building blocks together, so that
the data is written to the channel
all buffers are gracefully released
the channel is closed after writing the data
the whole operation just signals complete or error (I guess this is what Mono<Void> is used for)
Can anyone help me choose the right operators, or point out a conceptual problem on my side (perhaps there is a good reason why I cannot find a suitable operator)? :)
Thank you!
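One possible way to wire these pieces together (a minimal sketch, not tested, assuming the same filePath and log fields as in the snippet above) is Flux.using, which opens the channel per subscription, writes the buffers to it, and closes it when the stream terminates:

public Mono<Void> store(Flux<DataBuffer> bufferFlux) {
    return Flux.using(
            // resource supplier: open the channel lazily, when the pipeline is subscribed to
            () -> AsynchronousFileChannel
                    .open(filePath, StandardOpenOption.CREATE_NEW, StandardOpenOption.WRITE),
            // write the buffers to the channel and release each buffer once it has been written
            channel -> DataBufferUtils
                    .write(bufferFlux, channel)
                    .doOnNext(DataBufferUtils.releaseConsumer()),
            // cleanup callback: runs on complete, error and cancel
            channel -> {
                try {
                    channel.close();
                } catch (IOException ex) {
                    log.error("error closing file channel", ex);
                }
            })
            .then();
}

If your Spring version has it, DataBufferUtils.write(Publisher, Path, OpenOption...) does the same thing in a single call and returns a Mono<Void> directly.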

Related

Asynchronous programming for IotHub Device Registration in Java?

I am currently trying to implement a Java web service (REST API) where the endpoint creates the device in the IoT Hub and updates the device twin.
There are two methods available in the azure-iot SDK. One is
addDevice(deviceId, authenticationtype)
and the other is
addDeviceAsync(deviceId, authenticationtype)
I just wanted to figure out which one I should use in the web service (as a best practice). I am not very strong in multithreading/concurrency, so I was hoping to get people's expertise on this. Any suggestion/link related to this is much appreciated.
Thanks.
The async version of addDevice is basically the same. If you use addDeviceAsync, the addDevice call is submitted to an executor thread, so you are not blocked on it.
Check line 269 of RegistryManager, which does exactly that: https://github.com/Azure/azure-iot-sdk-java/blob/master/service/iot-service-client/src/main/java/com/microsoft/azure/sdk/iot/service/RegistryManager.java#L269
public CompletableFuture<Device> addDeviceAsync(Device device) throws IOException, IotHubException
{
    if (device == null)
    {
        throw new IllegalArgumentException("device cannot be null");
    }

    final CompletableFuture<Device> future = new CompletableFuture<>();
    executor.submit(() ->
    {
        try
        {
            Device responseDevice = addDevice(device);
            future.complete(responseDevice);
        }
        catch (IOException | IotHubException e)
        {
            future.completeExceptionally(e);
        }
    });
    return future;
}
You could also build your own async wrapper and call addDevice() from there.
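For instance, a minimal sketch of such a wrapper (registryManager, executor and the surrounding service class are assumed here; the SDK itself does not provide this method):

// registryManager is an instance of com.microsoft.azure.sdk.iot.service.RegistryManager,
// executor is any java.util.concurrent.ExecutorService owned by your service
public CompletableFuture<Device> registerDeviceAsync(Device device) {
    return CompletableFuture.supplyAsync(() -> {
        try {
            // the blocking addDevice call runs on your executor, not on the request thread
            return registryManager.addDevice(device);
        } catch (IOException | IotHubException e) {
            // surface the checked exceptions through the returned future
            throw new CompletionException(e);
        }
    }, executor);
}

Whether you use the SDK's addDeviceAsync or a wrapper like this, the work still happens on a background thread pool; the main benefit is that your REST endpoint is not blocked while IoT Hub is called.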

Catching errors on actor construction in Akka TestKit

I am trying to learn unit testing with Akka.
I have a situation where one of my tests was failing because an exception was thrown during actor construction, and I was wondering what the best way would be to capture it and log or rethrow it. As it stands, I had to attach a debugger to see where it threw.
I thought that I could perhaps create another actor which does the logging and, on error, have a message sent to it. The breakpoints I put in the ErrorActor were never hit, though. It seems as though the RootActor failed and timed out before the message was sent/received.
Is there something I'm doing wrong here, or am I fundamentally off base with this? What is the recommended way to catch errors in unit tests?
Thanks very much
[Fact]
public void CreateRootActor()
{
    // Arrange
    var props = Props.Create(() => new RootActor());
    Sys.ActorOf(Props.Create(() => new TestErrorActor(TestLogger)), ActorPaths.ErrorActor.Name); // register my test actor

    // Act
    var actor = new TestActorRef<RootActor>(this.Sys, props);

    // Assert
    Assert.IsType<RootActor>(actor.UnderlyingActor);
}
public class RootActor : ReceiveActor
{
    private ITenantRepository tenantRepository;

    public RootActor(ILifetimeScope lifetimeScope)
    {
        try
        {
            this.tenantRepository = lifetimeScope.Resolve<ITenantRepository>(); // this throws
        }
        catch (Exception e)
        {
            Context.ActorSelection(ErrorActor.Name).Tell(new TestErrorActor.RaiseError(e));
            throw;
        }
        ....
I got around this by using Akka.Logger.Serilog and a try / catch in the RootActor. I deleted the ErrorActor.

What's the point of the use function in Kotlin

I'm trying to use the inline function use with a FileInputStream instead of the classic try/catch on IOException, so that
try {
    val fis = FileInputStream(file)
    // file handling...
} catch (e: IOException) {
    e.printStackTrace()
}
becomes
FileInputStream(file).use { fis ->
    // do stuff with file
}
My question is: why use the function use if it still throws exceptions? Do I have to wrap use in a try/catch? This seems ridiculous.
From Kotlin documentation:
Executes the given block function on this resource and then closes it
down correctly whether an exception is thrown or not.
When you use an object that implements the Closeable interface, you need to call the close() method when you are done with it, so it releases any system resources associated with the object.
You need to be careful to close it even when an exception is thrown. This kind of situation is error prone, because you might not know how to handle it properly or might simply forget, so it is better to automate the pattern. That's exactly what the use function does.
Your try/catch does not close the resource, so you are comparing apples to oranges. If you close the resource in a finally block:
val fis = FileInputStream(file)
try {
    ...
} catch (e: IOException) {
    ...
} finally {
    fis.close()
}
This is definitely more verbose than use, which handles closing the resource for you.

Ignoring offers to coroutine channels after closing

Is there a good way to have channels ignore offers once closed without throwing an exception?
Currently, it seems like only try/catch would work, as isClosedForSend isn't atomic.
Alternatively, is there a problem if I just never close a channel at all?
For my specific use case, I'm using channels as an alternative to Android livedata (as I don't need any of the benefits beyond sending values from any thread and listening from the main thread). In that case, I could listen to the channel through a producer that only sends values when I want to, and simply ignore all other inputs.
Ideally, I'd have a solution where the ReceiveChannel can still finish listening, but where SendChannel will never crash when offered a new value.
Channels throw this exception by design, as a means of correct communication.
If you absolutely must have something like this, you can use an extension function of this sort:
private suspend fun <E> Channel<E>.sendOrNothing(e: E) {
    try {
        this.send(e)
    } catch (closedException: ClosedSendChannelException) {
        println("It's fine")
    }
}
You can test it with the following piece of code:
val channel = Channel<Int>(capacity = 3)

launch {
    try {
        for (i in 1..10) {
            channel.sendOrNothing(i)
            delay(50)
            if (i == 5) {
                channel.close()
            }
        }
        println("Done")
    } catch (e: Exception) {
        e.printStackTrace()
    } finally {
        println("Finally")
    }
}

launch {
    for (c in channel) {
        println(c)
        delay(300)
    }
}
As you'll notice, the producer will start printing "It's fine" once the channel is closed, but the consumer will still be able to read the first 5 values.
Regarding your second question: it depends.
Channels don't have such a big overhead, and neither do suspended coroutines. But a leak is a leak, you know.
I ended up posting an issue on the repo, and the solution was to use BroadcastChannel. You can create a new ReceiveChannel through openSubscription, and closing that subscription will not close the SendChannel.
This more accurately reflects RxJava's PublishSubject.

Convert writes to OutputStream into a Flux<DataBuffer> usable by ServerResponse

I have a legacy library that I have to use to retrieve a file. This legacy library doesn't return an InputStream, as you would usually expect for reading data, but instead expects to be passed an open OutputStream that it can write to.
I have to write a WebFlux REST service that writes this OutputStream's data to the org.springframework.web.reactive.function.server.ServerResponse body.
legacyLib.BlobRead(outputStream); // writes the blob to an OutputStream that has to be provided by me, and that somehow has to end up in the ServerResponse
Since I want to pass along the Stream directly to the ServerResponse, I probably have to do something like this, right?
ServerResponse.ok().body(magicOutputStreamToFluxConverter(), DataBuffer.class);
Here is the important part of the RequestHandler. I left out some error handling/catching of exceptions that might generally not be needed. Note that I publishOn a different Scheduler for the read (or at least, that's what I wanted to do), so that this blocking read doesn't interfere with my main event loop threads:
private Mono<ServerResponse> writeToServerResponse(@NotNull FPTag tag) {
    final long blobSize = tag.getBlobSize();
    return ServerResponse.ok()
            .contentType(MediaType.APPLICATION_OCTET_STREAM)
            .body(Flux.<DataBuffer>create((FluxSink<DataBuffer> emitter) -> {
                // for a really big blob I want to read it in chunks, so that my server doesn't use too much memory
                for (int i = 0; i < blobSize; i += tagChunkSize) {
                    // new DataBuffer that is written to, then emitted later
                    DefaultDataBuffer dataBuffer = new DefaultDataBufferFactory().allocateBuffer();
                    try (OutputStream outputStream = dataBuffer.asOutputStream()) {
                        // write to the OutputStream of the DataBuffer
                        tag.BlobReadPartial(outputStream, i, tagChunkSize, FPLibraryConstants.FP_OPTION_DEFAULT_OPTIONS);
                        // don't know if flushing is strictly necessary
                        outputStream.flush();
                    } catch (IOException | FPLibraryException e) {
                        log.error("Error reading + writing from tag to http outputstream", e);
                        emitter.error(e);
                    }
                    emitter.next(dataBuffer);
                }
                // if blob is finished, send "complete" to my flux of DataBuffers
                emitter.complete();
            }, FluxSink.OverflowStrategy.BUFFER)
            .publishOn(Schedulers.newElastic("centera"))
            .doOnComplete(() -> closeQuietly(tag)), DataBuffer.class);
}