Semantics of Timed Games and Channel Synchronisation in UPPAAL - verification

I'm struggling to understand how timed games work together with (broadcast) synchronization in UPPAAL (TiGa / Stratego). Imagine the following example:
Forcing the environment:
Here, the edge receiving an event over the broadcast channel a is controlled by the environment. From what I understand, the semantics of broadcast synchronization enforce a transition in P2 from the initial state to P2.F as soon as the event is sent in P1 (assuming edge P2.<init> -> P2.F is enabled).
So naturally I would expect a strategy to exist for the controller to force the environment to transition to P2.F. This strategy would simply tell the controller to take the transition P1.<init> -> P1.F.
However, when calling the query control: A<> P2.F, UPPAAL TiGa and Stratego tell me that there is no such strategy and that the counter-strategy for the environment is simply to stay in P2.<init> and wait forever.
Being forced by the environment:
When controller and environment switch sides, it looks a little different.
In that case, the query control: A[] (not P2.F) is not satisfied, indicating that there is no chance for the controller to prevent the environment from forcing a transition to P2.F.
In both examples, A[] P1.F imply P2.F and E<> P1.F hold.
I'm curious about why the environment seems to be able to evade a transition that a controller can't, or if anyone can point me to some place where the timed game semantics of UPPAAL TiGa or Stratego are explained in detail.
Thank you all!

Does the Operator-SDK guarantee the first call to Reconcile after restarting the operator?

I have experimentally established that after restarting the operator, Reconcile is called for each object that the operator is watching. Is it guaranteed or is it a side effect of something?
It's guaranteed / intentional behavior: the objects returned by the initial List call that populates the cache are sent as create events to the relevant handlers. The reasoning is that resources could have become stale while the controller was not running.
In fact, this can cause issues: if your controller is watching ConfigMaps, for example, the memory usage of the controller pod on startup may be extremely high or even exceed its resource quotas.

How to handle stream of inputs and generate output based on input combination in UML State machine diagram

The following is a safety controller with inputs and outputs.
The conditions for designing the state machine are given below:
Here SignalOk, SignalWeak and SignalLost are measurements of the signal quality of the steering angle, while the SteeringAngle signal itself contains the original steering data. After 3 consecutive SignalOk events, the system controller shall output ValidSignal with the steering angle data; in all other cases the signal is considered a CorruptSignal. I am using UML 2 state charts (Harel charts). This is what I have done so far:
N.B.: Parallel states and broadcasting are not supported yet, but nested states are.
I don't know how to model this stream of inputs in a state machine; any kind of help will be appreciated.
First, I would recommend renaming the states so that they don't resemble actions. I suggest naming them First Ok received, Second Ok received and Ok confirmed.
Since the SteeringAngle shall be ignored the first two times, the only transition triggered by it should be an internal transition in Ok confirmed. This transition will also invoke ValidSignal.
Nothing is specified about the order of SteeringAngle and SignalOk. Therefore, SteeringAngle should be deferred in Second Ok received. This way, even if it comes first, it will stay in the event pool.
Any reception of SignalWeak or SignalLost should return to Ready. You could do this with a local transition from Operational to Ready.
One additional recommendation: define an initial state in Operational and have the SystemOk transition target Operational instead. The effect is the same, but it results in a better separation of the two top-level states.
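If it helps to see the intended run-time behaviour, here is a rough, self-contained C++ sketch of the logic described above. It is not generated from the UML model, and all class, state and function names are made up for illustration; the deferred SteeringAngle is modelled as a one-element buffer that is replayed when Ok confirmed is entered.
#include <iostream>
#include <optional>

enum class State { Ready, FirstOkReceived, SecondOkReceived, OkConfirmed };

class SafetyController {
public:
    void signalOk() {
        switch (state_) {
            case State::Ready:            state_ = State::FirstOkReceived;  break;
            case State::FirstOkReceived:  state_ = State::SecondOkReceived; break;
            case State::SecondOkReceived: state_ = State::OkConfirmed; replayDeferred(); break;
            case State::OkConfirmed:      break; // already confirmed, stay here
        }
    }

    void signalWeakOrLost() {
        // Any SignalWeak or SignalLost returns to Ready and drops deferred data.
        state_ = State::Ready;
        deferred_.reset();
    }

    void steeringAngle(double angle) {
        if (state_ == State::OkConfirmed) {
            std::cout << "ValidSignal(" << angle << ")\n"; // internal transition in Ok confirmed
        } else if (state_ == State::SecondOkReceived) {
            deferred_ = angle;                             // defer until the third SignalOk arrives
        }
        // In Ready / First Ok received the angle is ignored here; emit CorruptSignal
        // instead if your requirement demands an explicit output in those states.
    }

private:
    void replayDeferred() {
        if (deferred_) { double a = *deferred_; deferred_.reset(); steeringAngle(a); }
    }

    State state_ = State::Ready;
    std::optional<double> deferred_;
};

int main() {
    SafetyController c;
    c.signalOk();
    c.signalOk();
    c.steeringAngle(12.5); // deferred in Second Ok received
    c.signalOk();          // third Ok: prints ValidSignal(12.5)
}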

GPUImage gpus_ReturnNotPermittedKillClient crash using GPUImageFilter

I'm using GPUImageFilter in a chain, and most of the time it works OK. I've recently come across a few random crashes that match the symptoms in this GitHub issue (albeit I'm using GPUImageFilter, not live capture or video). I'm trying to find a suitable method that ensures I've cleared the framebuffer and wrapped up any other GPUImage-related activity in willResignActive.
Currently I have:
[[GPUImageContext sharedFramebufferCache] purgeAllUnassignedFramebuffers];
Is this sufficient? Should I use something else instead of, or in addition to, this?
As indicated there, seeing gpus_ReturnNotPermittedKillClient in a stack trace is almost always due to OpenGL ES operations being performed while your application is in the background or is just about to go to the background.
To deal with this, you need to guarantee that all GPUImage-related work is finished before your application heads to the background. You'll want to listen for delegate notifications that your application is heading to the background, and make sure all processing is complete before that delegate callback exits. The suggestion there by henryl is one way to ensure this. Add the following near the end of your delegate callback:
runSynchronouslyOnVideoProcessingQueue(^{
    // Intentionally left empty: by the time this block runs, every GPUImage
    // operation enqueued before it on the video processing queue has finished.
});
What that will do is inject a synchronous block into the video processing pipeline (which runs on a background queue). Your delegate callback will block the main thread at that point until this block has a chance to execute, guaranteeing that all processing blocks before it have finished. That will make sure all pending operations are done (assuming you don't add new ones) before your application heads to the background.
There is a slight chance of this introducing a deadlock in your application, but I don't think any of my code in the processing pipeline calls back into the main queue. You might want to watch out for that, because if I do still have something in there that does that, this will lock your application. That internal code would need to be fixed if so.

blocked requests in io_service

I have implemented a client-server program using the boost::asio library.
In my implementation, there are times when io_service.run() blocks indefinitely. When I pass another request to the io_service, the blocked call resumes and executes normally.
Is there any way to see what the pending requests inside the io_service queue are?
I have not used a work object to block the run call!
There is no official way to query the io_service for all pending requests. However, there are a few techniques to debug the problem:
Boost 1.47 introduced handler tracking. Simply define BOOST_ASIO_ENABLE_HANDLER_TRACKING and Boost.Asio will write debug output, including timestamps, an identifier, and the operation type, to the standard error stream.
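For illustration, a minimal sketch of enabling handler tracking on a toy program with a single timer (assuming Boost >= 1.47; the define must appear before any Boost.Asio header is included, or be passed on the compiler command line as -DBOOST_ASIO_ENABLE_HANDLER_TRACKING):
#define BOOST_ASIO_ENABLE_HANDLER_TRACKING
#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io_service;

    boost::asio::deadline_timer timer(io_service, boost::posix_time::seconds(1));
    timer.async_wait([](const boost::system::error_code& ec)
    {
        std::cout << "timer fired: " << ec.message() << std::endl;
    });

    // With tracking enabled, Boost.Asio writes lines such as
    // "@asio|<timestamp>|<id>*<id>|deadline_timer@0x...async_wait" to stderr,
    // so you can see which asynchronous operations are still outstanding
    // while run() is blocked.
    io_service.run();
}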
Attach a debugger and dig through the layers to find and examine the operation queues. This answer covers both understanding handler tracking and using a debugger to examine an operation queue for the epoll_reactor.
Finally, if you believe it is a bug, then it may be worth updating to the latest version or checking the revision history for relevant changes. Regardless, describing the problem in more detail may allow others to help identify the source of the problem and potential solutions.
Now, I spent a few hours reading and experimenting (I need more boost::asio functionality for work as well), and it turns out: kind of.
But it is not as straightforward or readable as one might hope.
Under the hood (well, under the outermost hood), the io_service has a bunch of other services registered, which do the work that the async_ operations of their respective areas require.
These are the "Services" described in the reference.
Now, sadly, the services stay registered whether there is work to do or not. For example, if your io_service has a UDP socket, it will still have all the corresponding services, even if the socket itself is inactive.
But you can ask your io_service which services it has. Let's say you want to know whether your io_service, called m_io_service, has a UDP datagram_socket_service. Then you can call something like:
if (boost::asio::has_service<boost::asio::datagram_socket_service<boost::asio::ip::udp> >(m_io_service))
{
    // Whatever
}
That does not help a lot, because it will be true regardless of whether the socket is active or not. But after you know that you have that service, you can get a reference to it using use_service instead of has_service, with the same elegant amount of <>.
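For example, sticking with the hypothetical m_io_service and the UDP service from the snippet above (this is only a sketch):
typedef boost::asio::datagram_socket_service<boost::asio::ip::udp> udp_service;

if (boost::asio::has_service<udp_service>(m_io_service))
{
    // use_service returns a reference to the very instance the io_service
    // uses internally for UDP sockets (it would create one if none existed).
    udp_service& service = boost::asio::use_service<udp_service>(m_io_service);

    // Inspecting it further requires an implementation_type, which is owned
    // by the socket object itself (see below).
    (void)service;
}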
And now you can inspect the service to see what it is up to. Sadly, it will not tell you the names of the outstanding handlers (probably partly because it does not know them), but if it is a socket service, you can get hold of an implementation_type and with that check whether the socket currently is_open, or find its local_endpoint as well as its remote_endpoint.
In the case of a deadline_timer_service you can, among other things, find out when it expires_at.
See the reference for more information on what each service is and is not willing to tell you:
http://www.boost.org/doc/libs/1_54_0/doc/html/boost_asio/reference.html
This information should then hopefully allow you to determine which async_ operation did not return.
And if not, at the very least you can cancel any unexpectedly active services.

How to keep track of MailCore operations

I'm trying to build an OS X mail client using MailCore2, and I need to know which operations are currently running and what state they are in (think Mail.app's activity monitor window).
There are a few things in the API that I could use: the MCOIMAPSession object has an operationQueueRunningChangeBlock property, but it only tells me when the session changes state (running => not running), which is insufficient.
Right now I think I'll have to subclass/wrap those to do what I want.
MailCore does not provide an API to track running operations, nor should we, because that is your job. A typical pattern to implement this would be to either subclass the operation classes to tag each one with some kind of activity object, or to aggregate activities in a separate queue, pushing and popping as operations are enqueued and dequeued respectively. The completion blocks of each request in the Objective-C interface should provide enough of the state of each operation for you, and some specific kinds of operations even include progress blocks/hooks.