RxDart listen not working after Future delay - rxdart

I'm trying to use autoConnect in RxDart, whose description is:
Returns an Observable that automatically connects (at most once) to this ConnectableObservable when the first Observer subscribes.
Observable<int> o = Observable.range(1, 2).publish().autoConnect();
o.listen((v) { print('Observer 1: $v'); });
o.listen((v) { print('Observer 2: $v'); });
await Future.delayed(const Duration(microseconds: 1));
o.listen((v) { print('Observer 3: $v'); });
Output:
Observer 1: 1
Observer 2: 1
Observer 1: 2
Observer 2: 2
As you can see, no output from the third Observer.
I get similar output if I use ConnectableObservable directly and put the .connect() call before the delay, and also while using refCount().
But for refCount, I think it's expected as the description is:
Returns an Observable that stays connected to this ConnectableObservable
as long as there is at least one subscription to this
ConnectableObservable.
I don't understand why autoConnect behaves like refCount (what are the differences between them),
and also why the Observer added after the delay is unable to listen.
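One plausible way to picture the reported behavior is with a toy model in plain Python (this is not RxDart; ConnectableStream and all names below are invented for illustration): once a connectable stream connects on the first subscription, a synchronous source emits everything and completes immediately, so a subscriber that arrives after a delay has nothing left to receive.

```python
# Toy model of a connectable "hot" stream with autoConnect-like behavior:
# it starts emitting on the first subscription, and late subscribers
# miss whatever was already emitted.
class ConnectableStream:
    def __init__(self, values):
        self._values = values
        self._subscribers = []
        self._connected = False

    def listen(self, callback):
        self._subscribers.append(callback)
        if not self._connected:          # autoConnect: connect on first listen
            self._connected = True
            for v in self._values:       # synchronous source: emits right away
                for sub in list(self._subscribers):
                    sub(v)
            # stream is now complete; nothing left for later subscribers

received_1, received_3 = [], []
stream = ConnectableStream([1, 2])
stream.listen(received_1.append)   # first Observer: triggers connection, gets 1, 2
stream.listen(received_3.append)   # "late" Observer: stream already completed
print(received_1)  # [1, 2]
print(received_3)  # []
```

In this simplified model the late subscriber gets nothing for the same structural reason: the values were delivered while it was not yet subscribed, and a completed stream never replays them.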

Check the condition when a timer is running in CAPL (CANoe)

I am running a script in CAPL where I am supposed to notice a change in the value of a signal (for example: signal B) coming from the ECU. At the start of the timer, I change the value of another signal (for example: signal A) and send it to the ECU over the CAN bus. While the timer is running, I want to see the changed value of signal B coming from the ECU as a response to the changed value of signal A. After the timer has run out, I want to reset signal A back to its original value.
*Note: I have called the signals Signal A and Signal B only to make the question easier to follow.
Signal A changes the value from 2 to 0.
Signal B has original value of 61, and the changed value can be any number between 0-60.
Timer runs for 4 seconds.
I am using a while loop with the condition (isTimerActive(timer)==1) to check for the change in the value of signal B while the timer is running.
Below is the code:
variables
{
  msTimer Execute;
}
on key 'c'
{
  setTimer(Execute, 4000);
  write("Test starts");
  setSignal(SignalA, 2);
  while (isTimerActive(Execute) == 1)
  {
    if ($SignalB != 61)
    {
      write("Test pass");
    }
    else
    {
      write("Test fail");
    }
  }
}
on timer Execute
{
  write("Test over");
  setSignal(SignalA, 0);
}
on timer Execute
{
write("Test over");
setSignal(Signal A, 0);
}
When I execute this code, the value of signal A changes to 2, but there is no change in the value of signal B. Is (isTimerActive(timer)==1) in the while loop the correct condition for my problem?
Also, when I run with (isTimerActive(timer)==1), CANoe becomes unresponsive and I have to stop it using the Task Manager.
Any ideas on how I can correct my code and get the desired response?
Thanks and best regards
CAPL is event-driven. Your only choice is to react to events by implementing event handlers, i.e. the functions starting with on ....
During execution of an event handler, the system basically blocks everything until the event handler has finished.
Literally nothing else happens, no sysvars change, no signals change, no timers expire, no bus messages are handled, and so on.
For test modules and test units the story is a little different: there you have the possibility to wait during execution of your code using the various testWaitFor... methods.
With your current implementation of on key 'c' you basically block the system, since you have a while loop there waiting for a timer to expire.
As stated above, this blocks everything and you have to kill CANoe.
Fortunately changes of signals are also events that can be handled.
Something like this should do:
Remove the while block and instead add another event handler like this:
on signal SignalB
{
  if (isTimerActive(Execute))
  {
    if ($SignalB != 61)
    {
      write("Test pass");
    }
    else
    {
      write("Test fail");
    }
  }
}
The code is called when SignalB changes. It then checks whether the Timer is still running and checks the value of the signal.
Instead of $SignalB inside of the handler you can also write this.
In an event handler this is always the object that has caused the event.
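The event-driven pattern the answer describes can be sketched in Python (CAPL itself only runs inside CANoe; Simulator and all names below are invented for illustration): each handler runs to completion, and instead of busy-waiting for the timer, a signal-change handler checks a timer-active flag when the change event arrives.

```python
# Sketch of the event-driven pattern: handlers run one at a time to
# completion, so nothing may busy-wait; the signal-change handler
# reacts to the event and consults a flag instead.
class Simulator:
    def __init__(self):
        self.timer_active = False
        self.log = []

    def on_key_c(self):                  # ~ "on key 'c'"
        self.timer_active = True         # ~ setTimer(Execute, 4000)
        self.log.append("Test starts")

    def on_signal_b(self, value):        # ~ "on signal SignalB"
        if self.timer_active:            # ~ isTimerActive(Execute)
            self.log.append("Test pass" if value != 61 else "Test fail")

    def on_timer_execute(self):          # ~ "on timer Execute"
        self.timer_active = False
        self.log.append("Test over")

sim = Simulator()
sim.on_key_c()
sim.on_signal_b(42)       # change arrives while timer runs -> "Test pass"
sim.on_timer_execute()
sim.on_signal_b(10)       # timer no longer active -> ignored
print(sim.log)  # ['Test starts', 'Test pass', 'Test over']
```

The point of the design is the same as in the answer: the check moves from a blocking loop into a handler that the runtime invokes for you when the signal actually changes.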

Why doesn't my code execute from top to bottom in Kotlin?

I'm making a RecyclerView. When using an init block, I have a problem.
I expected the code to be executed in order, but it's out of order.
Some code:
inner class TodoListFragmentRecyclerViewAdapter : RecyclerView.Adapter<RecyclerView.ViewHolder>() {
    val doListDTOs = ArrayList<DoListDTO>()

    init {
        Log.e("1", "1")
        doListListenerRegistration = fireStore.collection("doList")
            .whereEqualTo("doListName", todoList_name)
            .orderBy("doListTimestamp", Query.Direction.DESCENDING)
            .limit(100)
            .addSnapshotListener { querySnapshot, firebaseFirestoreException ->
                if (querySnapshot == null) return@addSnapshotListener
                doListDTOs.clear()
                for (snapshot in querySnapshot.documents) {
                    val item = snapshot.toObject(DoListDTO::class.java)
                    doListDTOs.add(item!!)
                    Log.e("2", doListDTOs.toString())
                    notifyDataSetChanged()
                }
            }
        Log.e("3", doListDTOs.toString())
    }
}
I want the log to show in the order below:
1 -> 2 -> 3
but the actual output is
1 -> 3 -> 2
Why is this?
As an additional issue, because of the above order, doListDTOs.toString() is empty at log 3, but doListDTOs.toString() at log 2 has some value.
If it's not an ordering problem, I'd be grateful if you could tell me what the problem was.
When you connect to Firestore and request data from the Firestore DB, you are actually making a network call, which runs on a background thread.
The main thread first prints log 1, then starts a new thread A (for example), and lastly prints log 3.
But notice that 2 is only printed once the data has been read by thread A and returned to the callback on the main thread, and that's exactly why 3 shows before 2.
If you need to run 3 after 2, you have to put that code inside the Firestore callback; when that task is completed, you initiate your next step.
Hope it helps!
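The same ordering can be reproduced with a plain Python sketch (illustrative names, no Firestore involved): the "network call" runs on a background thread, so the statement after starting it executes before the callback fires.

```python
# Why the logs appear as 1 -> 3 -> 2: the fetch runs on a background
# thread, so the line after starting it (log 3) runs before the
# callback (log 2) is invoked with the results.
import threading
import time

order = []

def fetch_async(callback):
    def worker():
        time.sleep(0.05)                # simulated network latency
        callback(["doc1", "doc2"])      # delivered later, like addSnapshotListener
    t = threading.Thread(target=worker)
    t.start()
    return t

order.append("1")                       # ~ Log.e("1", "1")
items = []
t = fetch_async(lambda docs: (items.extend(docs), order.append("2")))
order.append("3")                       # ~ Log.e("3", ...) -- items still empty here
t.join()                                # wait so we can inspect the final state
print(order)   # ['1', '3', '2']
print(items)   # ['doc1', 'doc2']
```

Anything that must see the fetched data belongs inside the callback, which is exactly the fix the answer suggests.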

How do actors in Kotlin work when run on different threads?

In the actor example from the official kotlinlang.org documentation, an actor is launched 100 000 times which simply increments a counter inside the actor. Then a get request is sent to the actor and the counter is sent in the response with the correct amount (100 000).
This is the code:
// The messages
sealed class CounterMsg
object IncCounter : CounterMsg() // one-way message to increment counter
class GetCounter(val response: CompletableDeferred<Int>) : CounterMsg() // a two-way message to get the counter

// The actor
fun CoroutineScope.counterActor() = actor<CounterMsg> {
    var counter = 0 // actor state
    for (msg in channel) { // iterate over incoming messages
        when (msg) {
            is IncCounter -> counter++
            is GetCounter -> msg.response.complete(counter)
        }
    }
}
fun main() {
    runBlocking {
        val counterActor = counterActor()
        GlobalScope.massiveRun {
            counterActor.send(IncCounter) // run action 100000 times
        }
        val response = CompletableDeferred<Int>()
        counterActor.send(GetCounter(response))
        println("Counter = ${response.await()}")
        counterActor.close()
    }
}
I have trouble understanding what would happen if the counterActor coroutine executed on multiple threads. If the coroutines ran on different threads, the variable 'counter' in the actor would potentially be susceptible to a race condition, would it not?
Example: one thread runs a coroutine that receives on the channel, and then on another thread a coroutine could also receive, and both of them try to update the counter variable at the same time, thus updating the variable incorrectly.
In the text that follows the code example, it says:
It does not matter (for correctness) what context the actor itself is executed in. An actor is a coroutine and a coroutine is executed sequentially, so confinement of the state to the specific coroutine works as a solution to the problem of shared mutable state.
I'm having a hard time understanding this. Could someone elaborate on what this exactly means, and why a race condition does not occur? When I run the example I see all coroutines run on the same main thread, so I cannot prove my theory of the race condition.
"actor is launched 100 000 times"
No, actor is launched exactly 1 time, at the line
val counterActor = counterActor()
Then it receives 100000 messages, from 100 coroutines working in parallel on different threads. But they do not increment the variable counter directly, they only add messages to the actor's input message queue. Indeed, this operation, implemented in the kotlinx.coroutines library, is made thread-safe.
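The same confinement idea can be sketched in Python (illustrative only, not kotlinx.coroutines): only one consumer thread ever touches the counter, while many producer threads merely enqueue messages on a thread-safe queue, so the counter needs no lock of its own.

```python
# Actor-style state confinement: the counter is touched by exactly one
# thread; other threads communicate with it only via a thread-safe queue.
import queue
import threading

inbox = queue.Queue()
result = {}

def actor():
    counter = 0                     # state confined to this one thread
    while True:
        msg = inbox.get()           # thread-safe receive, like the channel
        if msg == "inc":
            counter += 1
        elif msg == "get":          # two-way message, like GetCounter
            result["counter"] = counter
            return

consumer = threading.Thread(target=actor)
consumer.start()

# 100 producer threads, each sending 1000 increment messages.
producers = [threading.Thread(target=lambda: [inbox.put("inc") for _ in range(1000)])
             for _ in range(100)]
for p in producers:
    p.start()
for p in producers:
    p.join()

inbox.put("get")                    # all "inc" messages are already queued
consumer.join()
print(result["counter"])  # 100000
```

The producers race only on the queue, which is built to be thread-safe; the mutable counter is read and written by a single thread, which is why no race on it can occur.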

Ensure that AMQP exchange binding exists before publishing

The System Layout
We have three systems:
An API Endpoint (Publisher and Consumer)
The RabbitMQ Server
The main application/processor (Publisher and consumer)
System 1 and 3 both use Laravel, and use PHPAMQPLIB for interaction with RabbitMQ.
The path of a message
System 1 (the API Endpoint) sends a serialized job to the RabbitMQ Server for System 3 to process. It then immediately declares a new randomly named queue, binds an exchange to that queue with a correlation ID - and starts to listen for messages.
Meanwhile, system 3 finishes the job, and once it does, responds back with details from that job to RabbitMQ, on the exchange, with the correlation ID.
The issue and what I've tried
I often find that this process fails. The job gets sent and received, and the response gets sent - but system 1 never reads this response, and I don't see it published in RabbitMQ.
I've done some extensive debugging of this without getting to a root cause. My current theory is that System 3 is so quick at returning a response, that the new queue and exchange binding hasn't even been declared yet from System 1. This means the response from System 3 has nowhere to go, and as a result vanishes. This theory is mainly based on the fact that if I set jobs to be processed at a lower frequency on System 3, the system becomes more reliable. The faster the jobs process, the more unreliable it becomes.
The question is: How can I prevent that? Or is there something else that I'm missing? I of course want these jobs to process quickly and efficiently without breaking the Request/Response-pattern.
I've logged output from both systems - both are working with the same correlation IDs, and System 3 gets an ACK upon publishing - whilst System 1 has a declared queue with no messages that eventually just times out.
Code Example 1: Publishing a Message
/**
 * Helper method to publish a message to RabbitMQ
 *
 * @param $exchange
 * @param $message
 * @param $correlation_id
 * @return bool
 */
public static function publishAMQPRouteMessage($exchange, $message, $correlation_id)
{
    try {
        $connection = new AMQPStreamConnection(
            env('RABBITMQ_HOST'),
            env('RABBITMQ_PORT'),
            env('RABBITMQ_LOGIN'),
            env('RABBITMQ_PASSWORD'),
            env('RABBITMQ_VHOST')
        );
        $channel = $connection->channel();
        $channel->set_ack_handler(function (AMQPMessage $message) {
            Log::info('[AMQPLib::publishAMQPRouteMessage()] - Message ACK');
        });
        $channel->set_nack_handler(function (AMQPMessage $message) {
            Log::error('[AMQPLib::publishAMQPRouteMessage()] - Message NACK');
        });
        $channel->confirm_select();
        $channel->exchange_declare(
            $exchange,
            'direct',
            false,
            false,
            false
        );
        $msg = new AMQPMessage($message);
        $channel->basic_publish($msg, $exchange, $correlation_id);
        $channel->wait_for_pending_acks();
        $channel->close();
        $connection->close();
        return true;
    } catch (Exception $e) {
        return false;
    }
}
Code Example 2: Waiting for a Message Response
/**
 * Helper method to fetch messages from RabbitMQ.
 *
 * @param $exchange
 * @param $correlation_id
 * @return mixed
 */
public static function readAMQPRouteMessage($exchange, $correlation_id)
{
    $connection = new AMQPStreamConnection(
        env('RABBITMQ_HOST'),
        env('RABBITMQ_PORT'),
        env('RABBITMQ_LOGIN'),
        env('RABBITMQ_PASSWORD'),
        env('RABBITMQ_VHOST')
    );
    $channel = $connection->channel();
    $channel->exchange_declare(
        $exchange,
        'direct',
        false,
        false,
        false
    );
    list($queue_name, ,) = $channel->queue_declare(
        '',
        false,
        false,
        true,
        false
    );
    $channel->queue_bind($queue_name, $exchange, $correlation_id);
    $callback = function ($msg) {
        return self::$rfcResponse = $msg->body;
    };
    $channel->basic_consume(
        $queue_name,
        '',
        false,
        true,
        false,
        false,
        $callback
    );
    if (!count($channel->callbacks)) {
        Log::error('[AMQPLib::readAMQPRouteMessage()] - No callbacks registered!');
    }
    while (self::$rfcResponse === null && count($channel->callbacks)) {
        $channel->wait();
    }
    $channel->close();
    $connection->close();
    return self::$rfcResponse;
}
Grateful for any advice you can offer!
I may be missing something, but when I read this:
System 1 (the API Endpoint) sends a serialized job to the RabbitMQ Server for System 3 to process. It then immediately declares a new randomly named queue, binds an exchange to that queue with a correlation ID - and starts to listen for messages.
My first thought was "why do you wait until the message is sent before declaring the return queue?"
In fact, we have a whole series of separate steps here:
Generating a correlation ID
Publishing a message containing that ID to an exchange for processing elsewhere
Declaring a new queue to receive responses
Binding the queue to an exchange using the correlation ID
Binding a callback to the new queue
Waiting for responses
The response cannot come until after step 2, so we want to do that as late as possible. The only step that can't come before that is step 6, but it's probably convenient to keep steps 5 and 6 close together in the code. So I would rearrange the code to:
Generating a correlation ID
Declaring a new queue to receive responses
Binding the queue to an exchange using the correlation ID
Publishing a message containing the correlation ID to an exchange for processing elsewhere
Binding a callback to the new queue
Waiting for responses
This way, however quickly the response is published, it will be picked up by the queue declared in step 2, and as soon as you bind a callback and start waiting, you will process it.
Note that there is nothing readAMQPRouteMessage knows that publishAMQPRouteMessage doesn't, so you can freely move code between them. All you need when you want to consume from the response queue is its name, which you can either save into a variable and pass around, or generate yourself rather than letting RabbitMQ name it. For instance, you could name it after the correlation ID it is listening for, so that you can always work it out with simple string manipulation, e.g. "job_response.{$correlation_id}".
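The race the answer fixes can be simulated with an in-memory stand-in for the broker (plain Python, no real RabbitMQ; FakeBroker is invented for illustration): a direct exchange drops a message whose routing key has no matching binding, so binding before publishing is what makes the fast response safe.

```python
# In-memory simulation of the binding race: an unroutable message on a
# direct exchange is silently dropped, exactly like the lost responses.
class FakeBroker:
    def __init__(self):
        self.bindings = {}   # (exchange, routing_key) -> queue (a list)

    def queue_bind(self, exchange, routing_key):
        q = []
        self.bindings[(exchange, routing_key)] = q
        return q

    def publish(self, exchange, routing_key, body):
        q = self.bindings.get((exchange, routing_key))
        if q is not None:
            q.append(body)   # no binding -> message silently dropped

# Original order: the fast response is published BEFORE the bind exists.
broker = FakeBroker()
broker.publish("responses", "corr-1", "job result")   # dropped!
q1 = broker.queue_bind("responses", "corr-1")
print(q1)  # [] -- response lost

# Reordered: bind first, then the response has somewhere to land.
broker2 = FakeBroker()
q2 = broker2.queue_bind("responses", "corr-1")
broker2.publish("responses", "corr-1", "job result")
print(q2)  # ['job result']
```

This mirrors real RabbitMQ behavior for non-mandatory publishes to a direct exchange, and shows why moving the queue declaration and binding ahead of the job publish removes the failure mode.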

WCF Async deadlock?

Has anyone run into a situation where a WaitAny call returns a valid handle index, but the Proxy.End call blocks? Or does anyone have recommendations on how best to debug this? I've tried tracing, performance counters (to check the max percentages), and logging everywhere.
The test scenario: 2 async requests are going out (there's a bit more to the full implementation); the 1st Proxy.End call returns successfully, but the subsequent one blocks. I've checked the WCF trace and don't see anything particularly interesting. Note that it is querying an endpoint that exists in the same process as well as one on a remote machine (= 2 async requests).
As far as I can see the call is going through on the service implementation side for both queries, but it just blocks on the subsequent End call. It seems to work with just a single call, regardless of whether it is sending the request to a remote machine or to itself; so it is something to do with the multiple queries or some other factor causing the lockup.
I've tried different "concurrencymode"s and "instancecontextmode"s but it doesn't seem to have any bearing on the result.
Here's a cut down version of the internal code for parsing the handle list:
ValidationResults IValidationService.EndValidate()
{
    var results = new ValidationResults();
    if (_asyncResults.RemainingWaitHandles == null)
    {
        results.ReturnCode = AsyncResultEnum.NoMoreRequests;
        return results;
    }
    var waitArray = _asyncResults.RemainingWaitHandles.ToArray();
    if (waitArray.GetLength(0) > 0)
    {
        int handleIndex = WaitHandle.WaitAny(waitArray, _defaultTimeOut);
        if (handleIndex == WaitHandle.WaitTimeout)
        {
            // Timeout on signal for all handles occurred
            // Close proxies and return...
        }
        var asyncResult = _asyncResults.Results[handleIndex];
        results.Results = asyncResult.Proxy.EndServerValidateGroups(asyncResult.AsyncResult);
        asyncResult.Proxy.Close();
        _asyncResults.Results.RemoveAt(handleIndex);
        _asyncResults.RemainingWaitHandles.RemoveAt(handleIndex);
        results.ReturnCode = AsyncResultEnum.Success;
        return results;
    }
    results.ReturnCode = AsyncResultEnum.NoMoreRequests;
    return results;
}
and the code that calls this:
validateResult = validationService.EndValidateSuppression();
while (validateResult.ReturnCode == AsyncResultEnum.Success)
{
    // Update progress step
    //duplexContextChannel.ValidateGroupCallback(progressInfo);
    validateResult = validationService.EndValidateSuppression();
}
I've commented out the callbacks on the initiating node (FYI it's actually an 3-tier setup, but the problem is isolated to this 2nd tier calling the 3rd tier - the callbacks go from the 2nd tier to the 1st tier which have been removed in this test). Thoughts?
Sticking to the solution I left in my comment: simply avoid chaining a callback to async calls that have different destinations (i.e. proxies).