eProsima Fast DDS: Where is the actual send function triggered? (SHM, UDP)

I send a CacheChange with a defined fragment size via eProsima Fast DDS in SHM mode (and UDP). I want to understand where the fragments are actually sent. For a better understanding I activated the logging (see below). However, a function call appears that I cannot trace back.
What I did: I followed the write function of the publisher down to the sync_delivery function of the StatefulWriter (I use reliable QoS). Inside the sync_delivery function is the send_data_or_fragments function, where each fragment is added to a message. In between the loop iterations that add the fragments there is a call to send, which I do not understand.
It seems to me that there is a parallel thread for the send function. Where is it triggered? Where is the send function called? Is it similar for UDP? By the way, the logging is similar for UDP and matches what I saw in Wireshark.
I appreciate any help.
This is my logging. I removed the timestamps here for a better overview.
Adding CacheChange:
[DATA_WRITER Info] Fragment size of cache change: 543 -> Function set_fragment_size_on_change
[RTPS_WRITER_HISTORY Info] Change 1 added with 6564 bytes -> Function add_change_
[RTPS_WRITER Info] Sending relevant changes as DATA/DATA_FRAG messages -> Function add_data_frag
[RTPS_WRITER Info] Sending INFO_TS message -> Function add_info_ts_in_buffer
Adding Fragment 1:
[RTPS_WRITER Info] Fragment Number is 1 and fragment start is 0 -> Function add_data_frag
[RTPS_WRITER Info] Fragment 1 added to Functor -> Function send_data_or_fragments
[RTPS_WRITER Info] Sending relevant changes as DATA/DATA_FRAG messages -> Function add_data_frag
[RTPS_WRITER Info] Sending INFO_TS message -> Function add_info_ts_in_buffer
Adding Fragment 2:
[RTPS_WRITER Info] Fragment Number is 2 and fragment start is 543 -> Function add_data_frag
[RTPS_MSG_OUT Info] (ID:140551501941760) SharedMemTransport: 628 bytes to port 7413 -> Function send
[RTPS_WRITER Info] Fragment 2 added to Functor -> Function send_data_or_fragments
[RTPS_WRITER Info] Sending relevant changes as DATA/DATA_FRAG messages -> Function add_data_frag
[RTPS_WRITER Info] Sending INFO_TS message -> Function add_info_ts_in_buffer
Adding Fragment 3:
[RTPS_WRITER Info] Fragment Number is 3 and fragment start is 1086 -> Function add_data_frag
[RTPS_MSG_OUT Info] (ID:140551501941760) SharedMemTransport: 628 bytes to port 7413 -> Function send
[RTPS_WRITER Info] Fragment 3 added to Functor -> Function send_data_or_fragments
[RTPS_WRITER Info] Sending relevant changes as DATA/DATA_FRAG messages -> Function add_data_frag
[RTPS_WRITER Info] Sending INFO_TS message -> Function add_info_ts_in_buffer
Adding Fragment 4:
[RTPS_WRITER Info] Fragment Number is 4 and fragment start is 1629 -> Function add_data_frag
[RTPS_MSG_OUT Info] (ID:140551501941760) SharedMemTransport: 628 bytes to port 7413 -> Function send
...
This is the send function I would like to trace back:
bool SharedMemTransport::send(
        const std::shared_ptr<SharedMemManager::Buffer>& buffer,
        const Locator& remote_locator)
{
    if (!push_discard(buffer, remote_locator))
    {
        return false;
    }

    logInfo(RTPS_MSG_OUT,
            "(ID:" << std::this_thread::get_id() << ") " << "SharedMemTransport: " << buffer->size() << " bytes to port " <<
            remote_locator.port);

    return true;
}

I think the trick is here: RTPSMessageGroup follows the RAII idiom. The message is automatically sent when the RTPSMessageGroup is destroyed and, as far as I can tell, also whenever the current buffer cannot hold the next fragment, in which case the group flushes the pending message first. That would explain why the send log line appears in between two fragment additions, and why the remaining data is sent when the group goes out of scope.
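To make this concrete, here is a minimal, self-contained sketch of that RAII pattern. All class names, function names, and sizes are hypothetical and chosen for illustration; this is not the actual Fast DDS implementation:

#include <cstddef>
#include <iostream>
#include <string>

// Hypothetical transport; in the real stack the call would end up in
// SharedMemTransport::send() or its UDP equivalent.
struct Transport
{
    void send(const std::string& buffer)
    {
        std::cout << "send " << buffer.size() << " bytes" << std::endl;
    }
};

class MessageGroup
{
public:
    MessageGroup(Transport& transport, std::size_t max_size)
        : transport_(transport), max_size_(max_size) {}

    // Called once per fragment, like add_data_frag inside the loop of
    // send_data_or_fragments.
    void add_fragment(const std::string& frag)
    {
        if (buffer_.size() + frag.size() > max_size_)
        {
            flush(); // buffer full: send the pending message before adding more
        }
        buffer_ += frag;
    }

    ~MessageGroup()
    {
        flush(); // whatever is left is sent when the group is destroyed
    }

private:
    void flush()
    {
        if (!buffer_.empty())
        {
            transport_.send(buffer_);
            buffer_.clear();
        }
    }

    Transport& transport_;
    std::size_t max_size_;
    std::string buffer_;
};

int main()
{
    Transport transport;
    {
        MessageGroup group(transport, 600); // scope mimics the writer's delivery function
        for (int i = 0; i < 4; ++i)
        {
            group.add_fragment(std::string(543, 'x')); // 543-byte fragments, as in the log
        }
    } // destructor flushes the last fragment here
    return 0;
}

With these numbers the sketch performs a send while each subsequent fragment is being added, plus one final send from the destructor, which matches the ordering of the send log lines above.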

Related

What is the threading model of Spring Reactor? .subscribe() seems not to utilise more than a single thread

Still pretty new to Reactive Programming concepts, so please bear with me. I did more testing with reactive chains; below is the code:
Flux
    .range(0, 10000)
    .doOnNext(i -> {
        System.out.println("start " + i + " Thread: " + Thread.currentThread().getName());
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    })
    .flatMap(i -> {
        System.out.println("end" + i + " Thread: " + Thread.currentThread().getName());
        return Mono.just(i);
    })
    .doOnError(err -> {
        throw new RuntimeException(err.getMessage());
    })
    .subscribe();

System.out.println("Thread: " + Thread.currentThread().getName() + " Hello world");
My main confusion here is the output:
start 0 Thread: main
end0 Thread: main
start 1 Thread: main
end1 Thread: main
start 2 Thread: main
end2 Thread: main
start 3 Thread: main
end3 Thread: main
start 4 Thread: main
end4 Thread: main
Thread: main Hello world
How come no default thread pool is spawned/used to process each integer more efficiently? Also, this is based on my understanding of subscribing to the publisher:
Subscribe is an asynchronous process. It means that when you call subscribe, it launch processing in the background, then return immediately.
However, my observation seems to differ: I observed blocking behavior here, since the printing of "Hello World" had to wait for the processing of the Flux reactive chain to finish first, as that chain is using and blocking(?) the main thread.
Uncommenting subscribeOn() gives a different, 'correct' sort of behavior:
Flux
    .range(0, 10000)
    .doOnNext(i -> {
        System.out.println("start " + i + " Thread: " + Thread.currentThread().getName());
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    })
    .flatMap(i -> {
        System.out.println("end" + i + " Thread: " + Thread.currentThread().getName());
        return Mono.just(i);
    })
    .doOnError(err -> {
        throw new RuntimeException(err.getMessage());
    })
    .subscribeOn(Schedulers.boundedElastic())
    .subscribe();

System.out.println("Thread: " + Thread.currentThread().getName() + " Hello world");
Thread: main Hello world
start 0 Thread: boundedElastic-1
end0 Thread: boundedElastic-1
start 1 Thread: boundedElastic-1
end1 Thread: boundedElastic-1
start 2 Thread: boundedElastic-1
My understanding is that because we now specify a thread pool that the reactive chain must use for its processing, the main thread is free to continue unblocked, asynchronously printing "Hello World" first.
Likewise, if we were to now replace .subscribe() with .blockLast():
Flux
    .range(0, 5)
    .doOnNext(i -> {
        System.out.println("start " + i + " Thread: " + Thread.currentThread().getName());
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
    })
    .flatMap(i -> {
        // if (i == 100) return Mono.error(new RuntimeException("Died cuz i = 100"));
        System.out.println("end" + i + " Thread: " + Thread.currentThread().getName());
        return Mono.just(i);
    })
    .doOnError(err -> {
        throw new RuntimeException(err.getMessage());
    })
    .subscribeOn(Schedulers.boundedElastic())
    .blockLast();

System.out.println("Thread: " + Thread.currentThread().getName() + " Hello world");
The resulting behavior, expectedly, changes to blocking instead of asynchronous. If my understanding is correct, this is because even though we specified a different thread pool (not main) for the reactive chain's processing, the main thread, being the caller thread, is still blocked until the reactive chain returns either a success or an error signal.
Is my understanding so far sound? And where am I going wrong in my understanding of Project Reactor's default threading behavior?
I advise you to read the section related to threading in the official documentation. It should provide you with a good understanding of what happens. I will try to sum it up as well as I can.
Why is the Flux processed on the calling thread?
Flux and Mono objects model a suspendable, resumable chain of operations. It means the engine can "park" actions, and later schedule their execution on an available thread.
Now, when you call subscribe on a Publisher, you start the chain of operations. Its first actions are launched on the calling thread, as cited from the doc:
Unless specified, the topmost operator (the source) itself runs on the Thread in which the subscribe() call was made.
In case your flow is simple enough, there is a good chance that the entire flux/mono will be processed on the same thread.
Does this mean it is executed synchronously, in the program foreground?
That might give the illusion of synchronous processing, but it is not.
We can already see it in your first example. You create a range of ten thousand values, but only five values are printed before the Thread: main Hello world message. It shows that the processing has started, but has been "paused" to let your main program continue.
We can also see this more clearly in the following example:
// Assemble the flow of operations
Flux<String> flow = Flux.range(0, 5)
        .flatMap(i -> Mono
                .just("flatMap on [" + Thread.currentThread().getName() + "] -> " + i)
                .delayElement(Duration.ofMillis(50)));

// Launch processing
Disposable execStatus = flow.subscribe(System.out::println);
System.out.println("SUBSCRIPTION DONE");

// Prevent the program from stopping before the flow is over.
do {
    System.out.println("Waiting for the flow to end...");
    Thread.sleep(50);
} while (!execStatus.isDisposed());

System.out.println("FLUX PROCESSED");
This program prints:
SUBSCRIPTION DONE
Waiting for the flow to end...
flatMap on [main] -> 0
flatMap on [main] -> 1
flatMap on [main] -> 3
Waiting for the flow to end...
flatMap on [main] -> 4
flatMap on [main] -> 2
FLUX PROCESSED
If we look at it, we can see that messages from the flow are interleaved with those of the main program, so even though the flow is executed on the main thread, it is processed in the background nonetheless.
This confirms what is stated in the subscribe(Consumer) apidoc:
Keep in mind that since the sequence can be asynchronous, this will immediately return control to the calling thread.
Why are no other threads involved?
Now, why has no other thread been used? Well, this is a complex question. In this case, I would say that the engine decided that no other thread was necessary to perform the pipeline with good performance. Switching between threads has a cost, so if it can be avoided, I think Reactor avoids it.
Can other threads be involved?
The documentation states:
Some operators use a specific scheduler from Schedulers by default
It means that depending on the pipeline, its tasks might be dispatched on threads provided by a Scheduler.
In fact, flatMap should do that. If we only slightly modify the example, we will see operations being dispatched to the parallel scheduler. All we have to do is limit the concurrency (yes, I know, this is not very intuitive). By default, flatMap uses a concurrency factor of 256, meaning it can have up to 256 inner operations running at the same time (roughly explained). Let's constrain it to 2:
Flux<String> flow = Flux.range(0, 5)
        .flatMap(i -> Mono
                .just("flatMap on [" + Thread.currentThread().getName() + "] -> " + i)
                .delayElement(Duration.ofMillis(50)),
            2);
Now, the program prints:
SUBSCRIPTION DONE
Waiting for the flow to end...
flatMap on [main] -> 0
Waiting for the flow to end...
flatMap on [main] -> 1
Waiting for the flow to end...
flatMap on [parallel-1] -> 2
flatMap on [parallel-2] -> 3
Waiting for the flow to end...
flatMap on [parallel-3] -> 4
FLUX PROCESSED
We see that operations 2, 3, and 4 have happened on threads named parallel-x. These are threads spawned by Schedulers.parallel().
NOTE: subscribeOn and publishOn methods can be used to gain more fine-grained control over threading.
About blocking
Now, do the block, blockFirst, and blockLast methods change how operations are scheduled/executed? The answer is no.
When you use block, internally the flow is subscribed just as with a subscribe() call. However, once the flow is triggered, instead of returning, Reactor internally uses the calling thread to loop and wait until the flux is done, as I've done in my first example above (but they do it in a much cleverer way).
We can try it. Using the constrained flatMap, what is printed if we use block instead of manually looping?
The program:
Flux.range(0, 5)
    .flatMap(i -> Mono
            .just("flatMap on [" + Thread.currentThread().getName() + "] -> " + i)
            .delayElement(Duration.ofMillis(50)),
        2)
    .doOnNext(System.out::println)
    .blockLast();
System.out.println("FLUX PROCESSED");
prints:
flatMap on [main] -> 0
flatMap on [main] -> 1
flatMap on [parallel-1] -> 2
flatMap on [parallel-2] -> 3
flatMap on [parallel-3] -> 4
FLUX PROCESSED
We see that, as previously, the flux used both the main thread and parallel threads to process its elements. But this time, the main program was "halted" until the flow had finished: blockLast prevented our program from resuming until the Flux completed.

LoadProhibited on ESP32 during ISR execution

I have set up an interrupt that changes a boolean to true and in void loop(), I am constantly checking if the boolean is true like so:
TTGOClass *ttgo;
bool irq = false;

void setup()
{
    Serial.begin(115200);
    ttgo = TTGOClass::getWatch();
    ttgo->begin();
    ttgo->openBL();

    ttgo->tft->fillScreen(TFT_BLACK);
    ttgo->tft->drawString("T-Watch AXP202", 25, 50, 4);
    ttgo->tft->setTextFont(4);
    ttgo->tft->setTextColor(TFT_WHITE, TFT_BLACK);

    pinMode(AXP202_INT, INPUT_PULLUP);
    attachInterrupt(AXP202_INT, [] {
        irq = true;
    }, FALLING);

    //!Clear IRQ unprocessed first
    ttgo->power->enableIRQ(AXP202_PEK_SHORTPRESS_IRQ | AXP202_VBUS_REMOVED_IRQ | AXP202_VBUS_CONNECT_IRQ | AXP202_CHARGING_IRQ, true);
    ttgo->power->clearIRQ();
}

void loop()
{
    if (irq) {
        irq = false;
        ttgo->power->readIRQ();
        if (ttgo->power->isVbusPlugInIRQ()) {
            ttgo->tft->fillRect(20, 100, 200, 85, TFT_BLACK);
            ttgo->tft->drawString("Power Plug In", 25, 100);
        }
        if (ttgo->power->isVbusRemoveIRQ()) {
            ttgo->tft->fillRect(20, 100, 200, 85, TFT_BLACK);
            ttgo->tft->drawString("Power Remove", 25, 100);
        }
        if (ttgo->power->isPEKShortPressIRQ()) {
            ttgo->tft->fillRect(20, 100, 200, 85, TFT_BLACK);
            ttgo->tft->drawString("PowerKey Press", 25, 100);
        }
        ttgo->power->clearIRQ();
    }
    delay(1000);
}
The T-Watch 2020 (an ESP32 smartwatch) runs on a battery, and this weird method found in its example sketches (the code above is an example, not the actual code, which is quite chaotic) wastes quite a lot of precious battery power. So I tried moving the code from the if block inside void loop() right into the attachInterrupt(AXP202_INT, [] handler, but I need to execute lengthy commands there, such as Serial.print() and delay() (they are absolutely necessary, and I think they are the cause of the problem), and the ESP32 crashes with the following error:
16:18:32.486 -> Guru Meditation Error: Core 0 panic'ed (LoadProhibited). Exception was unhandled.
16:18:32.486 -> Core 0 register dump:
16:18:32.486 -> PC : 0x4009947b PS : 0x00060233 A0 : 0x8009894f A1 : 0x3ffbd150
16:18:32.486 -> A2 : 0x3ffba5a8 A3 : 0x3ffbd2dc A4 : 0x00000001 A5 : 0x00000001
16:18:32.486 -> A6 : 0x00060223 A7 : 0x00000000 A8 : 0x00000000 A9 : 0x3ffba5a8
16:18:32.486 -> A10 : 0x3ffba5a8 A11 : 0x00060023 A12 : 0x00060021 A13 : 0x3ffc0910
16:18:32.486 -> A14 : 0x00000003 A15 : 0x00060023 SAR : 0x00000000 EXCCAUSE: 0x0000001c
16:18:32.519 -> EXCVADDR: 0x00000004 LBEG : 0x40093948 LEND : 0x40093964 LCOUNT : 0x00000000
16:18:32.519 ->
16:18:32.519 -> ELF file SHA256: 0000000000000000
16:18:32.519 ->
16:18:32.519 -> Backtrace: 0x4009947b:0x3ffbd150 0x4009894c:0x3ffbd170 0x4009726f:0x3ffbd190 0x400972fd:0x3ffbd1b0 0x40098fbe:0x3ffbd1d0 0x4009909f:0x3ffbd210 0x40096b06:0x3ffbd240
16:18:32.519 ->
16:18:32.519 -> Rebooting...
the code is like this:
TTGOClass *ttgo;
bool axpIrq = false; // axpIrq for button press and power plug in/remove events.
bool lenergy = false;
bool BLaudio = false;
bool keepAwake = false;

// some initializations
ttgo = TTGOClass::getWatch();
ttgo->begin();
ttgo->lvgl_begin();

pinMode(AXP202_INT, INPUT_PULLUP);
attachInterrupt(AXP202_INT, [] {
    axpIrq = true;
    ttgo->power->readIRQ();
    if (ttgo->power->isPEKShortPressIRQ()) {
        //ttgo->power->clearIRQ();
        Serial.println("button pressed");
        low_energy();
    }
    ttgo->power->clearIRQ();
}, FALLING);
low_energy is defined in another file:
void low_energy() {
    //portENTER_CRITICAL(&synch);
    if (!keepAwake) {
        if (ttgo->bl->isOn()) {
            //Serial.println("backlight on, turning off");
            ttgo->closeBL();
            ttgo->stopLvglTick();
            ttgo->bma->enableStepCountInterrupt(false);
            ttgo->displaySleep();
            //lenergy = true;
            gpio_wakeup_enable((gpio_num_t)AXP202_INT, GPIO_INTR_LOW_LEVEL);
            gpio_wakeup_enable((gpio_num_t)BMA423_INT1, GPIO_INTR_HIGH_LEVEL);
            esp_sleep_enable_gpio_wakeup();
            if (!BLaudio) {
                setCpuFrequencyMhz(20);
                esp_light_sleep_start();
                //Serial.println("BLaudio is off, light sleep starts");
            } else {
                //Serial.println("BLaudio is on, light sleep won't start");
            }
            //Serial.println("screen off");
        } else {
            //Serial.println("Waking up");
            setCpuFrequencyMhz(160);
            ttgo->startLvglTick();
            ttgo->displayWakeup();
            ttgo->rtc->syncToSystem();
            lv_disp_trig_activity(NULL);
            ttgo->openBL();
            ttgo->bma->enableStepCountInterrupt();
            displayTime(true);
        }
    }
    //portEXIT_CRITICAL(&synch);
}
I've tried replacing bool axpIrq = false; with volatile bool axpIrq = false;, and uncommenting portENTER_CRITICAL(&synch); and portEXIT_CRITICAL(&synch);, still with no results. If I am right that the lengthy commands are the problem, how can I execute them while the CPU continues normally in void loop() (a callback executed on the second core? I don't know)? If I am not right, what is the actual problem and how can I solve it?
Two problems here.
First, interrupt handlers need to be defined using the IRAM_ATTR attribute in order to ensure that they're already loaded into instruction memory (IRAM). The ESP32 understandably doesn't like having to load code from flash to RAM in order to service an interrupt. You need to make sure it's already there. If you don't, you see exactly the error you're seeing. Instruction RAM is also an extremely limited resource; you don't want to occupy more of it than is absolutely necessary.
You're specifying a lambda expression as the interrupt handler. I'm not sure whether you can use IRAM_ATTR with a lambda. To be sure, break out the interrupt handler into a function that's properly defined using IRAM_ATTR:
void IRAM_ATTR handle_interrupt() {
    axpIrq = true;
    ttgo->power->readIRQ();
    if (ttgo->power->isPEKShortPressIRQ()) {
        //ttgo->power->clearIRQ();
        Serial.println("button pressed");
        low_energy();
    }
    ttgo->power->clearIRQ();
}

...

attachInterrupt(AXP202_INT, handle_interrupt, FALLING);

...
Second, you're doing WAY too much in your interrupt handler. I quoted your existing code above, but there's no way that's going to work. Unless you really know what you're doing, your handler shouldn't do much more than set a volatile boolean flag variable and return. In this case, you're calling into the TTGO library, Serial, and possibly Bluetooth, it looks like.
You'd need to ensure that every single function you call, and every single function those functions call, and so on, is defined using IRAM_ATTR. You're not going to want to do that, because you'd need to modify lots of third-party code, and it would take up a lot of instruction RAM - possibly more than there is.
You also don't know that any of these functions you're calling from the interrupt handler are re-entrant. You don't know if they lock out interrupts or not. It's simply not safe to call them unless you do know this - their internal data structures may be in an inconsistent state when they're interrupted and re-entered. The best thing to assume with any high level libraries is that they're not intended to be called from interrupt handlers.
Finally, interrupts are intended to be handled quickly so that the normal flow of code can be resumed.
Your first chunk of code isn't a "weird method" - it's the normal way that interrupts are handled in Arduino code on the ESP32, and it's how you write interrupt driven code that does much more work than is safe to do from inside an interrupt service routine. This is the "right" way to do it in an ESP32 Arduino program, which is why the examples showed this kind of code.
Your real problem seems to be how to do this without killing your battery. That is a much bigger set of issues; you'll need to learn about ESP32 sleep modes and ESP-IDF/FreeRTOS tasks: how to create one and how to wake it up from an interrupt handler (a rough sketch of that pattern follows below). This all depends on the rest of the watch code as well, and on how it's designed to save power. If you go further with this, the best thing to do is to come back and post questions about the specific problems you encounter while trying to make it work.
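To give an idea of the direction, here is a rough, self-contained sketch of the usual FreeRTOS "deferred interrupt" pattern on the ESP32: the ISR only notifies a task, and all heavy work runs in task context. The pin number, task name, stack size, and core assignment are hypothetical, not the watch's actual code:

#include <Arduino.h>

static TaskHandle_t workerHandle = nullptr;
static const int INT_PIN = 35; // hypothetical interrupt pin

// The ISR does the bare minimum: notify the worker task and return.
void IRAM_ATTR onInterrupt()
{
    BaseType_t woken = pdFALSE;
    vTaskNotifyGiveFromISR(workerHandle, &woken); // cheap and ISR-safe
    if (woken == pdTRUE) {
        portYIELD_FROM_ISR(); // let the worker run immediately
    }
}

// All lengthy work (Serial, library calls, delays) happens here,
// in normal task context, where it is safe.
void workerTask(void*)
{
    for (;;) {
        ulTaskNotifyTake(pdTRUE, portMAX_DELAY); // sleep until the ISR notifies us
        Serial.println("button pressed");
        // e.g. read and clear the power-chip IRQs here, update the display, ...
    }
}

void setup()
{
    Serial.begin(115200);
    xTaskCreatePinnedToCore(workerTask, "worker", 4096, nullptr, 1,
                            &workerHandle, 1); // run the worker on core 1
    pinMode(INT_PIN, INPUT_PULLUP);
    attachInterrupt(INT_PIN, onInterrupt, FALLING);
}

void loop()
{
    // nothing to poll; the worker task reacts to interrupts
}

Because the worker blocks in ulTaskNotifyTake until the ISR notifies it, this removes the up-to-one-second latency of polling a flag in loop() with delay(1000), while keeping the ISR itself minimal.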

Difference between flux.parallel.runOn().flatMap( blockingcall //Mono.fromCallable) vs flux.flatMap(blockingcall.subscribeOn() //Mono.fromCallable)

What is the difference between the two implementations below for spawning parallel computation on the elements emitted by a Flux?
Flux.fromIterable(list)
    .parallel()
    .runOn(Schedulers.boundedElastic())
    .flatMap(items -> Mono.fromCallable(() -> blocking database call)
        .subscribeOn(Schedulers.boundedElastic()))
    .map(dbResponse -> dbResponse.stream()
        .map(singleObj -> createAdifferentObject(dbResponse))
        .collect(Collectors.toList()))
    .block()
And
Flux.fromIterable(list)
    .flatMap(items -> Mono.fromCallable(() -> blocking database call)
        .subscribeOn(Schedulers.boundedElastic()))
    .map(dbResponse -> dbResponse.stream()
        .map(singleObj -> createAdifferentObject(dbResponse))
        .collect(Collectors.toList()))
    .block()
For the first piece of code, I referred to this block:
return Flux.fromIterable(urls)
    .flatMap(url ->
        //wrap the blocking call in a Mono
        Mono.fromCallable(() -> blockingWebClient.get(url))
            //ensure that Mono is subscribed in an boundedElastic Worker
            .subscribeOn(Schedulers.boundedElastic())
    ); //each individual URL fetch runs in its own thread!
from this blog: https://spring.io/blog/2019/12/13/flight-of-the-flux-3-hopping-threads-and-schedulers
For the second piece of code: in general, everyone in our team/org uses it. The documentation (https://projectreactor.io/docs/core/release/api/reactor/core/scheduler/Schedulers.html) says that the Schedulers factory methods (without the new prefix) return a shared instance. But in terms of flow, shouldn't that call be outside the flatMap? If not, how does it work?

Always finish Mono.usingWhen on cancel signal

We are writing a reactive WebFlux application that basically locks a resource, does some amount of work in a closure, and at the end unlocks the resource with either the initial version or the updated one from the closure.
Ex:
Mono<ProductLock> lock = service.lock();

Mono.usingWhen(lock,
    (ProductLock state) -> service.doLongOperationAsync(state),
    (ProductLock state) -> service.unlock(state));
ProductLock is the definition of a locked product, meaning we can perform operations on it via HTTP APIs across multiple microservices.
service.doLongOperationAsync() - calls a few HTTP APIs which ARE EXPECTED to ALWAYS be finished once started (failure is normal, but if the operation started, it needs to be finished, because an HTTP call cannot be rolled back)
service.unlock() - this operation MUST be called only after successful or failed execution of doLongOperationAsync.
In happy-path scenarios everything works as expected: on success the product is unlocked, on failure it is also unlocked.
The problems come when the client calling our service (SoapUI, Postman, any real client) drops the connection or times out: a cancel signal is generated and propagates up to the code above.
At this point, anything within service.doLongOperationAsync is stopped, and service.unlock is called asynchronously on cancel.
Question: how can we prevent this from happening?
The requirements are:
once doLongOperationAsync has started - it must finish
service.unlock(state) - must be called ONLY after doLongOperationAsync, even on cancel.
Spring Boot repro
MRE:
@Bean
public RouterFunction<ServerResponse> route() {
    return RouterFunctions.route().GET("/work", request -> processRequest()
        .flatMap(res -> ServerResponse.ok().bodyValue(res))).build();
}

private Mono<String> processRequest() {
    //Need unlock to execute exactly after doWorkAsync in any case
    return Mono.usingWhen(lock(), this::doWorkAsync, this::unlock)
        .doOnNext((id) -> System.out.println("Request processed:" + id))
        .doOnCancel(() -> System.out.println("Request cancelled"));
}

private Mono<UUID> lock() {
    return Mono.defer(() -> Mono.just(UUID.randomUUID())
        .doOnNext(id -> System.out.println("Locked:" + id)));
}

//Need this to finish no matter what
private Mono<String> doWorkAsync(UUID lockID) {
    return Mono.just(lockID).map(UUID::toString)
        .doOnNext(id -> System.out.println("Start working on:" + lockID))
        .delayElement(Duration.ofSeconds(10))
        .doOnNext(id -> System.out.println("Finished work on:" + id))
        // Should never be called
        .doOnCancel(() -> System.out.println("Processing cancelled:" + lockID));
}

private Mono<Void> unlock(UUID lockID) {
    return Mono.fromRunnable(() -> System.out.println("Unlocking:" + lockID));
}
Apparently, what does work for the use case is using Mono/Flux.create with a sink. This way the work is initiated when subscribed, but is still not cancelled even when cancellation is requested:
Mono.create(sink ->
    doWorkAsync().subscribe(sink::success, sink::error, null, sink.currentContext()));
When using usingWhen, the whole usingWhen chain should be placed inside create, so that a downstream cancel stops at the create boundary and never propagates into the lock/work/unlock sequence.

Ejabberd module to send an acknowledge message

This is the Erlang code I am using to send a message acknowledgement. While using it I am getting an error; the error log is given below.
My code:
-module(mod_ack).
-behaviour(gen_mod).

%% public methods for this module
-export([start/2, stop/1]).
-export([on_user_send_packet/3]).

-include("logger.hrl").
-include("ejabberd.hrl").
-include("jlib.hrl").

%% add and remove hook module on startup and close
start(Host, _Opts) ->
    ?INFO_MSG("mod_echo_msg starting", []),
    ejabberd_hooks:add(user_send_packet, Host, ?MODULE, on_user_send_packet, 0),
    ok.

stop(Host) ->
    ?INFO_MSG("mod_echo_msg stopping", []),
    ejabberd_hooks:delete(user_send_packet, Host, ?MODULE, on_user_send_packet, 0),
    ok.

on_user_send_packet(From, To, Packet) ->
    return_message_reciept_to_sender(From, To, Packet),
    Packet.

return_message_reciept_to_sender(From, _To, Packet) ->
    IDS = xml:get_tag_attr_s("id", Packet),
    ReturnRecieptType = "serverreceipt",
    %% ?INFO_MSG("mod_echo_msg - MsgID: ~p To: ~p From: ~p", [IDS, _To, From]),
    send_message(From, From, ReturnRecieptType, IDS, "").

send_message(From, To, TypeStr, IDS, BodyStr) ->
    XmlBody = {xmlelement, "message",
               [{"type", TypeStr},
                {"from", jlib:jid_to_string(From)},
                {"to", jlib:jid_to_string(To)},
                {"id", IDS}],
               [{xmlelement, "body", [],
                 [{xmlcdata, BodyStr}]}]},
    ejabberd_router:route(From, To, XmlBody).
I have removed the other modules where I used the user_send_packet hook, but I am still getting the error. I have also updated the error log.
Error log:
2015-10-06 07:13:45.796 [error] <0.437.0>#ejabberd_hooks:run_fold1:371 {function_clause,[{xml,get_tag_attr_s,[<<"id">>,{jid,<<"xxxxxx">>,<<"xxxxxx">>,<<>>,<<"xxxxxx">>,
<<"xxxxxx">>,<<>>}],[{file,"src/xml.erl"},{line,210}]},{mod_ack,return_message_reciept_to_sender,3,
[{file,"src/mod_ack.erl"},{line,36}]},{mod_ack,on_user_send_packet,4,[{file,"src/mod_ack.erl"},{line,30}]},{ejabberd_hooks,safe_apply,3,[{file,"src/ejabberd_hooks.erl"},{line,385}]},{ejabberd_hooks,run_fold1,4,[{file,"src/ejabberd_hooks.erl"},{line,368}]},{ejabberd_c2s,session_established2,2,[{file,"src/ejabberd_c2s.erl"},{line,1296}]},{p1_fsm,handle_msg,10,[{file,"src/p1_fsm.erl"},{line,582}]},{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,237}]}]}
You seem to have an interfering module (mod_send_receipt), or some other code registering to the hook. Your module is called mod_ack, and it is not the one generating the crash. The crash happens because a hook is registered to run the function mod_send_receipt:on_user_send_packet, which is 'undef': it does not exist or is not exported.
The ejabberd docs define the user_send_packet hook as an arity-4 function:
user_send_packet(Packet, C2SState, From, To) -> Packet
You are registering an arity-3 function, so when ejabberd tries to call your on_user_send_packet function, it passes 4 arguments and gets the undef function exception.
In order for your callback function to actually be called, you will need to match its argument list to what ejabberd will be sending, i.e.:
on_user_send_packet(Packet, _C2SState, From, To)