I am trying to create a communication controller for a hardware device that always responds with some delay. If I only requested one value, I could create a Single<ByteArray> and do the final conversion in .subscribe { ... }.
But when I request more than one value, I need to make sure that the second request only happens after the first request has fully completed.
Is that something I can do with RxJava, e.g. with defer? Or should I create a queue of my own and handle the sequence of events manually?
We're using RxJava anyway (and I'm obviously new to it), and of course it would be nice to use it for this purpose as well. But is that a good use case?
Edit:
Code that I could use, but that wouldn't be generic enough:
hardware.write(byteArray)
    .subscribe(
        {
            hardware.receiveResult().take(1)
                .doFinally { /* dispose code */ }
                .subscribe(
                    { /* onSuccess */ },
                    { /* onError */ }
                )
                .let { disposable = it }
        },
        { /* onError */ }
    )
All the code for the next request in the queue could be put in the inner onSuccess, and then the next one in that onSuccess. That would execute sequentially, but it wouldn't be generic enough: any other class that makes a request would end up spoiling my sequence.
I am searching for a solution that builds up the queue automatically in the hardware communication controller class.
A long time has passed, the project developed further, and we found a solution a while ago. Now I want to share it here:
fun writeSequential(data1: ByteArray, data2: ByteArray) {
disposable = hardwareWrite(data1)
.zipWith(hardwareWrite(data2))
.subscribe(
{
/* handle results.
it.first will be the first response,
it.second the second. */
},
{ /* handle error */ }
)
compositeDisposable.add(disposable)
}
fun hardwareWrite(data: ByteArray): Single<ByteArray> {
    // Return the Single so that callers can zip/concat it; the emitter is
    // completed later by hardwareRead() once the device has answered.
    return Single.create { emitter ->
        hardware.write(data)
            .subscribe(
                { hardwareRead(emitter) },
                { emitter.onError(it) }
            )
    }
}
fun hardwareRead(emitter: SingleEmitter<ByteArray>): Disposable {
    return hardware.receiveResult()
        .take(1)
        .timeout( /* your timeout */ )
        .single( /* default value */ )
        .doFinally { /* cleanup queue */ }
        .subscribe(
            { emitter.onSuccess(it) },
            { emitter.onError(it) }
        )
}
The solution is not perfect, and now I see that the middle part doesn't do anything with the Disposable it gets back.
Also, in our example it's a bit more complicated, as hardwareWrite doesn't fire immediately but gets queued. This way we ensure that the hardware is accessed sequentially and the results don't get mixed up.
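For reference, here is a minimal sketch of how that queueing could look (illustrative names, not the exact production code): every request is pushed onto a PublishSubject, and concatMapSingle guarantees that the next hardwareWrite only starts once the previous one has completed.
data class Request(val data: ByteArray, val emitter: SingleEmitter<ByteArray>)

private val queue = PublishSubject.create<Request>()

// One long-lived subscription drains the queue strictly in order.
private val queueSubscription = queue
    .concatMapSingle { request ->
        hardwareWrite(request.data)
            .doOnSuccess { request.emitter.onSuccess(it) }
            .doOnError { request.emitter.onError(it) }
            .onErrorReturnItem(ByteArray(0)) // swallow the error so the queue keeps running
    }
    .subscribe()

// Callers get a Single that completes once their request has had its turn.
fun enqueueWrite(data: ByteArray): Single<ByteArray> =
    Single.create { emitter -> queue.onNext(Request(data, emitter)) }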
Still, I hope this might help someone who is looking for a solution and is maybe new to Kotlin and/or RxJava (like I was at the beginning of the project).
Related
I am trying to write to a peripheral in Android (Kotlin) using RxAndroidBle. The application writes to the peripheral, and the peripheral then responds indicating whether the write request was successful.
Based on its evaluation of the information sent to it, the peripheral sends one response to the app if it received the expected information and a different response otherwise. In summary, it is a scenario very similar to an HTTP POST request: information is sent and the server responds with a status depending on whether the information meets the requirements. I already managed to connect and read information from the peripheral in the following way:
override fun connectDeviceToGetInfoHardwareByBle(mac: String): Observable<Resource<HardwareInfoResponse>> {
val device: RxBleDevice = bleClient.getBleDevice(mac)
return Observable.defer {
device.bluetoothDevice.createBond()// it is a blocking function
device.establishConnection(false) // return Observable<RxBleConnection>
}
.delay(5, TimeUnit.SECONDS)
.flatMapSingle { connection ->
connection.requestMtu(515)
.flatMap {
Single.just(connection)
}
}
.flatMapSingle {
it.readCharacteristic(UUID.fromString(GET_HARDWARE_INFORMATION_CHARACTERISTIC))
.map { byteArray ->
evaluateHardwareInfoResponse(byteArray = byteArray)
}
}
.map {
Resource.success(data = it)
}
.take(1)
.onErrorReturn {
Timber.i("Rointe Ble* Error getting ble information. {$it}")
Resource.error(data = null, message = it.message.toString())
}
.doOnError {
Timber.i("Rointe Ble*","Error getting ble information."+it)
}
.subscribeOn(ioScheduler)
.observeOn(uiScheduler)
}
As you can see, the MTU is needed by the peripheral, and it answers what I need. After that response I close the BLE connection, and the app does another independent job on the network (HTTP). Then it needs to connect again, but this time it is necessary to write JSON information to the peripheral; the device analyzes that JSON and gives some answers that I need in return. How do I implement a write that waits for a response from the peripheral? Is it necessary to do a long write for the JSON, given that I'm requesting a larger MTU on the connection? I'm developing this in Kotlin under the Repository pattern.
The JSON sent is this:
{
"data": {
"id_hardware": "[ID_HARDWARE]",
"product_brand": <value>,
"product_type": <value>,
"product_model": <value>,
"nominal_power": <value>,
"industrialization_process_date": <value>,
"platform_api_path": "[Host_API_REST]",
"platform_streaming_path": "[Host_STREAMING]",
"updates_main_path": "[Host_UPDATES]",
"updates_alternative_path": "[Host_ALTERNATIVE_UPDATES]",
"check_updates_time": <value>,
"check_updates_day": <value>,
"auth_main_path": "[Host_AUTHORIZATION]",
"auth_alternative_path": "[Host_BACKUP_AUTHORIZATION]",
"analytics_path": "[Host_ANALYTICS]",
"idToken": "[ID_TOKEN]",
"refreshToken": "[REFRESH_TOKEN]",
"expiresIn": "3600",
"apiKey": "[API_KEY]",
"factory_wifi_ssid": "[FACTORY_WIFI_SSID]",
"factory_wifi_security_type": "[FACTORY_WIFI_TYPE]",
"factory_wifi_passphrase": "[FACTORY_WIFI_PASS]",
"factory_wifi_dhcp": 1,
"factory_wifi_device_ip": "[IPv4]",
"factory_wifi_subnet_mask": "[SubNetMask_IPv4]",
"factory_wifi_gateway": "[IPv4]"
},
"factory_version": 1,
"crc": ""
}
The peripheral analyzes that JSON and gives me some answers according to the JSON sent.
Now, the way I try to do the write expecting a response is this:
private fun setupNotifications(connection: RxBleConnection): Observable<Observable<ByteArray>> =
connection.setupNotification(UUID.fromString(SET_FACTORY_SETTINGS_CHARACTERISTIC))
private fun performWrite(connection: RxBleConnection, notifications: Observable<ByteArray>, data: ByteArray): Observable<ByteArray> {
return connection.writeCharacteristic(UUID.fromString(SET_FACTORY_SETTINGS_CHARACTERISTIC), data).toObservable()
}
override fun connectDeviceToWriteFactorySettingsByBle(mac: String, data: ByteArray): Observable<Resource<HardwareInfoResponse>> {
val device: RxBleDevice = bleClient.getBleDevice(mac)
return Observable.defer {
//device.bluetoothDevice.createBond()// it is a blocking function
device.establishConnection(false) // return Observable<RxBleConnection>
}
.delay(5, TimeUnit.SECONDS)
.flatMapSingle { connection ->
connection.requestMtu(515)
.flatMap {
Single.just(connection)
}
}
.flatMap(
{ connection -> setupNotifications(connection).delay(5, TimeUnit.SECONDS) },
{ connection, deviceCallbacks -> performWrite(connection, deviceCallbacks, data) }
)
.flatMap {
it
}
//.take(1) // after the successful write we are no longer interested in the connection so it will be released
.map {
Timber.i("Rointe Ble: Result write: ok ->{${it.toHex()}}")
Resource.success(data = evaluateHardwareInfoResponse(it))
}
//.take(1)
.onErrorReturn {
Timber.i("Rointe Ble: Result write: failed ->{${it.message.toString()}}")
Resource.error(data = HardwareInfoResponse.NULL_HARDWARE_INFO_RESPONSE, message = "Error write on device.")
}
.doOnError {
Timber.i("Rointe Ble*","Error getting ble information."+it)
}
//.subscribeOn(ioScheduler)
.observeOn(uiScheduler)
}
As can be seen, the MTU is negotiated to the maximum and a single packet is sent (the JSON shown above).
When I run my code it connects but shows this error:
com.polidea.rxandroidble2.exceptions.BleCannotSetCharacteristicNotificationException:
Cannot find client characteristic config descriptor (code 2) with
characteristic UUID 4f4a4554-4520-4341-4c4f-520001000002
Any help on Kotlin?
Thanks a lot!!
When I run my code it connects but shows this error:
com.polidea.rxandroidble2.exceptions.BleCannotSetCharacteristicNotificationException:
Cannot find client characteristic config descriptor (code 2) with
characteristic UUID 4f4a4554-4520-4341-4c4f-520001000002
You can fix this in two ways:
Change your peripheral code to include a Client Characteristic Config Descriptor on the characteristic that you want to use notifications on – this is the preferred way, as it makes the peripheral conform to the Bluetooth specification
Use COMPAT mode when setting up the notification, which does not set the CCCD value at all (see the sketch below)
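If changing the peripheral firmware is not an option, here is a minimal sketch of option 2, assuming the same connection and characteristic constant as in the question (NotificationSetupMode.COMPAT skips writing the Client Characteristic Config Descriptor):
private fun setupNotifications(connection: RxBleConnection): Observable<Observable<ByteArray>> =
    connection.setupNotification(
        UUID.fromString(SET_FACTORY_SETTINGS_CHARACTERISTIC),
        NotificationSetupMode.COMPAT
    )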
How do I clear the characteristic UUID cache? What seems to happen is that the library remembers the last registered UUIDs in a cache. How do I clear this cache?
It is possible to clear the cache by using BluetoothGatt#refresh and subsequently getting the new set of services, which allows bypassing the library's UUID helper. You then need to use the functions that accept a BluetoothGattCharacteristic instead of a UUID.
Code that refreshes BluetoothGatt:
RxBleCustomOperation<Void> bluetoothGattRefreshCustomOp = (bluetoothGatt, rxBleGattCallback, scheduler) -> {
try {
Method bluetoothGattRefreshFunction = bluetoothGatt.getClass().getMethod("refresh");
boolean success = (Boolean) bluetoothGattRefreshFunction.invoke(bluetoothGatt);
if (!success) return Observable.error(new RuntimeException("BluetoothGatt.refresh() returned false"));
return Observable.<Void>empty().delay(500, TimeUnit.MILLISECONDS);
} catch (NoSuchMethodException e) {
return Observable.error(e);
} catch (IllegalAccessException e) {
return Observable.error(e);
} catch (InvocationTargetException e) {
return Observable.error(e);
}
};
Code that discovers services bypassing the library caches:
RxBleCustomOperation<List<BluetoothGattService>> discoverServicesCustomOp = (bluetoothGatt, rxBleGattCallback, scheduler) -> {
boolean success = bluetoothGatt.discoverServices();
if (!success) return Observable.error(new RuntimeException("BluetoothGatt.discoverServices() returned false"));
return rxBleGattCallback.getOnServicesDiscovered()
.take(1) // so this RxBleCustomOperation will complete after the first result from BluetoothGattCallback.onServicesDiscovered()
.map(RxBleDeviceServices::getBluetoothGattServices);
};
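A hedged usage sketch (written in Kotlin to match the question) of how the two custom operations above might be run on an established RxBleConnection; connection.queue(...) is the RxAndroidBle entry point for custom operations:
connection.queue(bluetoothGattRefreshCustomOp)
    .ignoreElements()
    .andThen(connection.queue(discoverServicesCustomOp))
    .subscribe(
        { services -> /* pick BluetoothGattCharacteristic objects from the freshly discovered services */ },
        { throwable -> /* handle error */ }
    )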
I use a Java Selector for both the server and the client. For the server side it works perfectly: it blocks the thread when I call select() and wakes up when I change the interest ops and it is ready for that operation.
But unfortunately it does not work the same way for the socket client. It blocks the thread and does not wake up for reading or writing when I change the interestOps.
Creation of client connection:
selector = Selector.open()
SocketChannel.open().apply {
configureBlocking(false)
connect(address)
val key = register(selector, SelectionKey.OP_READ or SelectionKey.OP_CONNECT)
val connection = ClientConnection(key) // Some stuff to hold the key for events
key.attach(connection)
}
Handle selection inside while loop:
val readyChannels = selector.select()
if (readyChannels == 0) continue
val keyIterator = selector.selectedKeys().iterator()
while (keyIterator.hasNext()) {
val key = keyIterator.next()
when (key.readyOps()) {
SelectionKey.OP_CONNECT -> {
val socket = (key.channel() as SocketChannel)
socket.finishConnect()
key.interestOps(key.interestOps() and SelectionKey.OP_CONNECT.inv())
// WORKS FINE!!!!!
key.interestOps(key.interestOps() and SelectionKey.OP_WRITE)
// Does not work at all. Selector will not wake up!
Thread(){
key.interestOps(key.interestOps() and SelectionKey.OP_WRITE)
}.start()
}
SelectionKey.OP_READ -> readPackets(key)
SelectionKey.OP_WRITE -> writePackets(key)
SelectionKey.OP_READ or SelectionKey.OP_WRITE -> {
writePackets(key)
readPackets(key)
}
}
keyIterator.remove()
}
So: changing the interestOps from a different thread does not work for client sockets, but it works fine for server sockets.
Workarounds I found:
selector.select(300) -> use a timeout so the selector wakes up periodically
selector.selectNow() -> use the non-blocking method and check the number of events
selector.wakeup() -> keep the selector instance and wake it up manually
The question is: why does it not work? Did I make a mistake? Did I miss something?
UPD: Server side socket and selector
Creation of server socket:
selector = Selector.open()
serverSocket = ServerSocketChannel.open().apply {
socket().bind(address)
configureBlocking(false)
register(selector, SelectionKey.OP_ACCEPT)
}
Iteration of the selector inside Loop:
val readyChannels = selector.select()
if (readyChannels == 0) continue
val keyIterator = selector.selectedKeys().iterator()
while (keyIterator.hasNext()) {
val key = keyIterator.next()
when (key.readyOps()) {
SelectionKey.OP_ACCEPT -> {
val socket = serverSocket.accept().apply {
configureBlocking(false)
}
val client = clientFactory.createClient(selector,socket)
// Coroutines with Another thread context.
// There interestOps will be changed to send first data
_selectionAcceptFlow.tryEmit(client)
}
SelectionKey.OP_READ -> readPackets(key)
SelectionKey.OP_WRITE -> writePackets(key)
SelectionKey.OP_READ or SelectionKey.OP_WRITE -> {
writePackets(key)
readPackets(key)
}
}
keyIterator.remove()
}
If you change key.interestOps from a separate thread, you are creating a race condition between that call and the call to selector.select() in the client loop.
Your initial call to register does not contain SelectionKey.OP_WRITE. The first event triggered will be SelectionKey.OP_CONNECT. When handling that event, you indicate that in the future you are also interested in processing OP_WRITE.
If you do that in the same thread, then you are guaranteed that the interestOps are set the way you want them before the client loop reaches the call to selector.select(). If there is an OP_WRITE event available, you will process it immediately; otherwise the call blocks until it is available.
If you do that in a separate thread, then, depending on timing, you may run into a case where the client loop reaches the call to selector.select() and blocks even though there is an OP_WRITE event available. Since the separate thread did not yet change the interestOps, the OP_WRITE event is ignored.
I've included a self-contained example (client sending a message to server). To test different cases, you can comment/uncomment sections around line 90.
import java.net.InetSocketAddress
import java.nio.ByteBuffer
import java.nio.channels.SelectionKey
import java.nio.channels.Selector
import java.nio.channels.ServerSocketChannel
import java.nio.channels.SocketChannel
import java.util.concurrent.CountDownLatch
val address = InetSocketAddress("localhost", 5454)
fun main() {
val serverSocketSignal = CountDownLatch(1)
Thread {
startServer(serverSocketSignal)
}.start()
Thread {
startClient(serverSocketSignal)
}.start()
}
fun startServer(serverSocketSignal: CountDownLatch) {
//prepare server socket
val selector = Selector.open()
val serverSocket = ServerSocketChannel.open().apply {
socket().bind(address)
configureBlocking(false)
register(selector, SelectionKey.OP_ACCEPT)
}
serverSocketSignal.countDown();
//run server loop
while (true) {
println("Server loop")
val readyChannels = selector.select()
if (readyChannels == 0) continue
val keyIterator = selector.selectedKeys().iterator()
while (keyIterator.hasNext()) {
val key = keyIterator.next()
when (key.readyOps()) {
SelectionKey.OP_ACCEPT -> {
println("Server ACCEPT")
val socket = serverSocket.accept().apply {
configureBlocking(false)
}
socket.register(selector, SelectionKey.OP_READ)
}
SelectionKey.OP_READ -> {
val buffer = ByteBuffer.allocate(1024)
val count = (key.channel() as SocketChannel).read(buffer)
val message = String(buffer.array(), 0, count)
println("Server READ - " + message)
}
}
keyIterator.remove()
}
}
}
fun startClient(serverSocketSignal: CountDownLatch) {
serverSocketSignal.await();
//prepare client socket
val selector = Selector.open()
SocketChannel.open().apply {
configureBlocking(false)
connect(address)
register(selector, SelectionKey.OP_CONNECT or SelectionKey.OP_READ)
}
//run client loop
while (true) {
println("Client loop")
val readyChannels = selector.select()
if (readyChannels == 0) continue
val keyIterator = selector.selectedKeys().iterator()
while (keyIterator.hasNext()) {
val key = keyIterator.next()
when (key.readyOps()) {
SelectionKey.OP_CONNECT -> {
println("Client CONNECT")
val socket = (key.channel() as SocketChannel)
socket.finishConnect()
key.interestOpsAnd(SelectionKey.OP_CONNECT.inv())
/*
This works
*/
key.interestOps(SelectionKey.OP_WRITE)
/*
This doesn't work because we're And-ing the interestOps and the OP_WRITE op was not specified when calling register()
*/
// key.interestOpsAnd(SelectionKey.OP_WRITE)
/*
This may or may not work, depending on which thread gets executed first
- it will work if the setting interestOps=OP_WRITE in the new thread gets executed before the selector.select() in the client loop
- it will not work if selector.select() in the client loop gets executed before setting interestOps=OP_WRITE in the new thread,
since there won't be anything to process and the selector.select() gets blocked
On my machine, pausing the client loop even for a small duration was enough to change the result (e.g. the Thread.sleep(1) below).
* */
// Thread {
// println("Client setting interestedOps to OP_WRITE from new thread")
// key.interestOps(SelectionKey.OP_WRITE)
// }.start()
// //Thread.sleep(1)
}
SelectionKey.OP_WRITE -> {
println("Client WRITE")
val buffer = ByteBuffer.wrap("test message from client".toByteArray());
(key.channel() as SocketChannel).write(buffer)
key.interestOps(0)
}
}
keyIterator.remove()
}
}
}
As for why it works for you on the server side: you would have to share the full code for the server and the client (it might be a timing issue, or your selector might be woken up by some event you did not intend to listen for). The snippets provided in the question do not contain enough information.
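For completeness, a small sketch of the selector.wakeup() workaround mentioned in the question: if the interest set really has to be changed from another thread, waking the selector up afterwards removes the dependency on timing (key refers to the SelectionKey from the loops above).
Thread {
    key.interestOps(SelectionKey.OP_WRITE)
    // Unblocks a select() that is already waiting (or makes the next select() return
    // immediately), so the changed interest set is picked up without relying on timing.
    key.selector().wakeup()
}.start()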
I have a method like below in my Spring boot app.
public Flux<Data> search(SearchRequest request) {
Flux<Data> result = searchService.search(request);//this returns Flux<Data>
Mono<List<Data>> listOfData = result.collectList();
// doThisAsync() // here I want to pass this list and run some processing on it
// the processing should happen async and the search method should return immediately.
return result;
}
//this method uses the complete List<Data> returned by above method
public void doThisAsync(List<Data> data) {
//do some processing here
}
Currently, I'm using an @Async-annotated service class with doThisAsync, but I don't know how to pass the List<Data>, because I don't want to call block().
All I have is Mono<List<Data>>.
My main problem is how to process this Mono separately and the search method should return the Flux<Data>.
1. If your fire-and-forget is already async and returns Mono/Flux
public Flux<Data> search(SearchRequest request)
{
return searchService.search(request)
.collectList()
.doOnNext(data -> doThisAsync(data).subscribe()) // add error logging here or inside doThisAsync
.flatMapMany(Flux::fromIterable);
}
public Mono<Void> doThisAsync(List<Data> data) {
//do some async/non-blocking processing here like calling WebClient
}
2. If your fire-and-forget does blocking I/O
public Flux<Data> search(SearchRequest request)
{
return searchService.search(request)
.collectList()
.doOnNext(data -> Mono.fromRunnable(() -> doThisAsync(data))
.subscribeOn(Schedulers.elastic()) // delegate to proper thread to not block main flow
.subscribe()) // add error logging here or inside doThisAsync
.flatMapMany(Flux::fromIterable);
}
public void doThisAsync(List<Data> data) {
//do some blocking I/O on calling thread
}
Note that in both of the above cases you lose backpressure support. If doThisAsync slows down for some reason, the data producer won't care and will keep producing items. This is a natural consequence of the fire-and-forget mechanism.
Have you considered running the processing in separate threads using publishOn like in the example below?
This may not be exactly what you are asking for but allows you to continue with other matters while the processing of the results in the flux is done by one or more threads, four in my example, from a dedicated scheduler (theFourThreadScheduler).
@Test
public void processingInSeparateThreadTest() {
final Scheduler theFourThreadScheduler = Schedulers.newParallel("FourThreads", 4);
final Flux<String> theResultFlux = Flux.just("one", "two", "three", "four", "five", "six", "seven", "eight");
theResultFlux.log()
.collectList()
.publishOn(theFourThreadScheduler)
.subscribe(theStringList -> {
doThisAsync(theStringList);
});
System.out.println("Subscribed to the result flux");
for (int i = 0; i < 20; i++) {
System.out.println("Waiting for completion: " + i);
try {
Thread.sleep(300);
} catch (final InterruptedException theException) {
}
}
}
private void doThisAsync(final List<String> inStringList) {
for (final String theString : inStringList) {
System.out.println("Processing in doThisAsync: " + theString);
try {
Thread.sleep(500);
} catch (final InterruptedException theException) {
}
}
}
Running the example produces the following output, showing that the processing performed in doThisAsync() happens in the background.
Subscribed to the result flux
Waiting for completion: 0
Processing in doThisAsync: one
Waiting for completion: 1
Processing in doThisAsync: two
Waiting for completion: 2
Waiting for completion: 3
Processing in doThisAsync: three
Waiting for completion: 4
Waiting for completion: 5
Processing in doThisAsync: four
Waiting for completion: 6
Processing in doThisAsync: five
Waiting for completion: 7
Waiting for completion: 8
Processing in doThisAsync: six
Waiting for completion: 9
Processing in doThisAsync: seven
Waiting for completion: 10
Waiting for completion: 11
Processing in doThisAsync: eight
Waiting for completion: 12
Waiting for completion: 13
Waiting for completion: 14
Waiting for completion: 15
Waiting for completion: 16
Waiting for completion: 17
Waiting for completion: 18
Waiting for completion: 19
References:
Reactor 3 Reference: Schedulers
UPDATE 2023/01/31
Actually, you should use .subscribeOn() in any case, because even if your fire-and-forget function returns Mono<Void>, it is not guaranteed that the executing thread will switch within that reactive chain, or that it will switch immediately (it depends on the code inside the fire-and-forget function, more specifically on the operators used in the chain).
So you may run into a situation where your fire-and-forget function is executed on the same thread that called it, and your method will not return until that function has completed.
The case when fire-and-forget function returns Publisher<Void>:
public Flux<Data> search(SearchRequest request) {
return searchService.search(request)
.collectList()
.doOnNext(data ->
// anyway call subscribeOn(...)
fireAndForgetOperation(data)
.subscribeOn(...)
.subscribe()
)
.flatMapMany(Flux::fromIterable);
}
public Mono<Void> fireAndForgetOperation(List<String> list) {
...
}
The case when fire-and-forget function is just a common void returning method:
public Flux<Data> search(SearchRequest request) {
return searchService.search(request)
.collectList()
.doOnNext(data ->
Mono.fromRunnable(() -> fireAndForgetOperation(data))
.subscribeOn(...)
.subscribe()
)
.flatMapMany(Flux::fromIterable);
}
public void fireAndForgetOperation(List<String> list) {
...
}
Also, you should consider which Scheduler to provide, depending on the nature of your fire-and-forget function.
Basically there are two scenarios:
1) If your fire-and-forget function does CPU-bound work, specify Schedulers.parallel() inside subscribeOn().
2) If your fire-and-forget function does IO work (in this case it does not matter whether it is blocking or non-blocking IO), specify Schedulers.boundedElastic() inside subscribeOn().
So, using this approach, you will truly return immediately after firing your fire-and-forget function.
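For illustration, a sketch of the blocking/IO case with the placeholders filled in (written in Kotlin, but otherwise the same shape as the Java snippets above; all names are taken from the question and the snippets):
fun search(request: SearchRequest): Flux<Data> =
    searchService.search(request)
        .collectList()
        .doOnNext { data ->
            Mono.fromRunnable<Void> { fireAndForgetOperation(data) }
                .subscribeOn(Schedulers.boundedElastic()) // IO work, per scenario 2 above
                .subscribe()
        }
        .flatMapMany { Flux.fromIterable(it) }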
I have an actor system that randomly fails because messages are being delivered to dead letters. By "fails" I mean it just does not complete.
Message [UploadFileFromDropboxSuccessMessage] from
akka://MySystem-Actor-System/user/...../DropboxToBlobSourceSubmissionUploaderActor/DropboxToBlobSourceFileUploaderActor--1
to
akka://MySystem-Actor-System/user/.../DropboxToBlobSourceSubmissionUploaderActor
was not delivered. [5] dead letters encountered. This logging can be
turned off or adjusted with configuration settings
'akka.log-dead-letters' and 'akka.log-dead-letters-during-shutdown'.
private void InitialState()
{
Receive<UploadFileFromDropboxMessage>(msg =>
{
var sender = Sender;
var self = Self;
var parent = Parent;
var logger = Logger;
UploadFromDropboxToBlobStorageAsync(msg.File, msg.RelativeSourceRootDirectory, msg.BlobStorageDestinationRootPath).ContinueWith(o =>
{
if (!o.IsFaulted)
{
parent.Tell(new UploadFileFromDropboxSuccessMessage(msg.File.Path, o.Result), self);
}
else
{
parent.Tell(new UploadFileFromDropboxFailureMessage(msg.File.Path), self);
}
}, TaskContinuationOptions.ExecuteSynchronously);
});
}
I also tried
private void InitialState()
{
Receive<UploadFileFromDropboxMessage>(msg =>
{
try
{
var result = UploadFromDropboxToBlobStorageAsync(msg.File, msg.RelativeSourceRootDirectory, msg.BlobStorageDestinationRootPath).Result;
Parent.Tell(new UploadFileFromDropboxSuccessMessage(msg.File.Path, result));
}
catch (Exception ex)
{
Parent.Tell(new UploadFileFromDropboxFailureMessage(msg.File.Path, ex));
}
});
}
This happens randomly, for both the success and the failure message. I have checked parent.IsNobody() and this returns false. In the documentation it says that delivery to a local actor can fail:
if the mailbox does not accept the message (e.g. full BoundedMailbox)
if the receiving actor fails while processing the message or is
already terminated
I can't imagine a use case where this is true, but I also don't really know how to check this from the context of my current actor (even if it's just for logging purposes).
EDIT: Does Akka have a limit on the total number of messages in the entire system?
EDIT: This happens, I would say, 10% of the time.
EDIT: Eventually I discovered it was an actor way higher in the tree being killed. I am still confused why IsNobody() returned false if it is indeed dead.
I am trying to build a kind of hub service that can emit through a hot Flux (output), but where you can also register/unregister Flux producers/publishers (input).
I know I can do something like:
class Hub<T> {
/**
* @return unregister function
*/
Function<Void, Void> registerProducer(final Flux<T> flux) { ... }
Disposable subscribe(Consumer<? super T> consumer) {
if (out == null) {
// obviously this will not work!
out = Flux.merge(producer1, producer2, ...).share();
}
return out;
}
}
... but as these "producers" are registered and unregistered, how do I add a new Flux source to the existing, already-subscribed Flux? Or remove an unregistered source from it?
TIA!
Flux is immutable by design, so as you've implied in the question, there's no way to just "update" an existing Flux in situ.
Usually I'd recommend avoiding using a Processor directly. However, this is one of the (rare-ish) cases where a Processor is probably the only sane option, since you essentially want to be publishing elements dynamically based on the producers that you're registering. Something similar to:
class Hub<T> {
private final FluxProcessor<T, T> processor;
private final FluxSink<T> sink;
public Hub() {
this.processor = DirectProcessor.<T>create().serialize();
this.sink = processor.sink();
}
public Disposable registerProducer(Flux<T> flux) {
return flux.subscribe(sink::next);
}
public Flux<T> read() {
return processor;
}
}
If you want to remove a producer, then you can keep track of the Disposable returned from registerProducer() and call dispose() on it when done.
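A short usage sketch (in Kotlin, with illustrative values) of how a producer could be registered and later removed through the returned Disposable:
val hub = Hub<String>()
val reader = hub.read().subscribe { value -> println("received: $value") }

val registration = hub.registerProducer(Flux.just("a", "b", "c"))
// ... later, when this producer should no longer feed the hub:
registration.dispose()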