Is it necessary to synchronize access to the device handle when calling vkWaitForFences? The specification does not mention any need for this, but it doesn't mention the call being free-threaded, either. Some places, namely most of the vkCreateXXX functions, do mention this as a requirement. Given the explicit nature of the spec, I'd expect more precise wording (rather than none at all, as in this case).
I suspect the answer is "no", but I am unable to trust my intuition with this API or implementations behind it.
It would be strange (useless, actually) if it were necessary to guard a call to this function.
The spec uses the terms "external synchronization" and "host synchronization" to talk about objects/parameters where the application must ensure non-concurrent use. The rules are described in Section 2.5 "Threading Behavior", and in the "Host Synchronization" block after most commands. Anything not listed can be used concurrently.
I'm not sure why you think the device parameter is supposed to be externally synchronized for vkCreate*; I couldn't find anything in the spec to support that. The device object is almost never externally synchronized.
None of the parameters to vkWaitForFences is listed as Host Synchronized. But the fence(s) passed to vkQueueSubmit and vkResetFences are host synchronized, so you can't pass a fence to one of those calls while there is another thread waiting for the fence. But you could have two threads waiting on the same fence, or one thread calling vkGetFenceStatus while another thread is waiting on it.
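To make that concrete, here is a minimal sketch in Kotlin using the LWJGL Vulkan bindings; the device and fence are assumed to have been created elsewhere, and the single-value overloads are LWJGL conveniences, so treat the exact signatures as an assumption:

import org.lwjgl.vulkan.VK10.*
import org.lwjgl.vulkan.VkDevice
import kotlin.concurrent.thread

fun concurrentFenceUse(device: VkDevice, fence: Long) {
    // Fine: vkWaitForFences has no "Host Synchronization" entry, so two
    // threads may wait on the same fence concurrently.
    val w1 = thread { vkWaitForFences(device, fence, true, -1L) } // -1L == UINT64_MAX
    val w2 = thread { vkWaitForFences(device, fence, true, -1L) }

    // Also fine: polling the fence status while other threads wait on it.
    while (vkGetFenceStatus(device, fence) == VK_NOT_READY) Thread.sleep(1)
    w1.join(); w2.join()

    // NOT fine without external synchronization: vkResetFences lists its
    // fences as host synchronized, so this call must not overlap with a
    // wait (or another reset) on another thread.
    vkResetFences(device, fence)
}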
I'm implementing a custom Kotlin CoroutineScope that deals with receiving, handling and responding to messages over a WebSocket connection. The scope's lifecycle is tied to the WebSocket session, so it's active as long as the WebSocket is open. As part of the coroutine scope's context, I've installed a custom exception handler that will close the WebSocket session if there's an unhandled error. It's something like this:
val handler = CoroutineExceptionHandler { _, exception ->
log.error("Closing WebSocket session due to an unhandled error", exception)
session.close(POLICY_VIOLATION)
}
I was surprised to find that the exception handler doesn't just receive exceptions, but is actually invoked for all unhandled throwables, including subtypes of Error. I'm not sure what I should do with these, since I know from the Java API documentation for Error that "an Error [...] indicates serious problems that a reasonable application should not try to catch".
One particular situation that I ran into recently was an OutOfMemoryError due to the amount of data being handled for a session. The OutOfMemoryError was received by my CoroutineExceptionHandler, meaning it was logged and the WebSocket session was closed, but the application continued running. That makes me uncomfortable, because I know that an OutOfMemoryError can be thrown at any point during code execution and as a result can leave the application in an irrecoverable state.
My first question is this: why does the Kotlin API choose to pass these errors to the CoroutineExceptionHandler for me, the programmer, to handle?
And my second question, following directly from that, is: what is the appropriate way for me to handle it? I can think of at least three options:
Continue to do what I'm doing now, which is to close the WebSocket session where the error was raised and hope that the rest of the application can recover. As I said, that makes me uncomfortable, particularly when I read answers like this one, in response to a question about catching OutOfMemoryError in Java, which recommends strongly against trying to recover from such errors.
Re-throw the error, letting it propagate to the thread. That's what I would normally do in any other situation where I encounter an Error in normal (or framework) code, on the basis that it will eventually cause the JVM to crash. In my coroutine scope, though (as with multithreading in general), that's not an option. Re-throwing the exception just ends up sending it to the thread's UncaughtExceptionHandler, which doesn't do anything with it.
Initiate a full shutdown of the application. Stopping the application feels like the safest thing to do, but I'd like to make sure I fully understand the implications. Is there any mechanism for a coroutine to propagate a fatal error to the rest of the application, or would I need to code that capability myself? Is propagation of 'application-fatal' errors something the Kotlin coroutines API designers have considered, or might consider in a future release? How do other multithreading models typically handle these kinds of errors?
Why does the Kotlin API choose to pass these errors to the CoroutineExceptionHandler for me, the programmer, to handle?
The Kotlin docs on exceptions state:
All exception classes in Kotlin are descendants of the class Throwable.
So it seems the Kotlin documentation uses the term exception for all kinds of Throwable, including Error.
Whether an exception in a coroutine should be propagated is actually a result of choosing the coroutine builder (cf. Exception propagation):
Coroutine builders come in two flavors: propagating exceptions automatically (launch and actor) or exposing them to users (async and produce).
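A minimal Kotlin sketch of the two flavors (assuming kotlinx.coroutines on the classpath; the scope uses a SupervisorJob so one failure doesn't tear the whole scope down):

import kotlinx.coroutines.*

fun main() = runBlocking {
    val handler = CoroutineExceptionHandler { _, e -> println("handler saw: $e") }
    val scope = CoroutineScope(SupervisorJob() + handler)

    // launch propagates automatically: the failure goes straight to the
    // CoroutineExceptionHandler.
    scope.launch { error("boom from launch") }.join()

    // async exposes the exception to the user: nothing surfaces until
    // someone calls await() and handles the rethrow.
    val deferred = scope.async { error("boom from async") }
    try {
        deferred.await()
    } catch (e: IllegalStateException) {
        println("await rethrew: $e")
    }
}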
If you receive unhandled exceptions at the WebSocket scope, it indicates a non-recoverable problem down the call chain. Recoverable exceptions are expected to be handled at the closest possible invocation level. So it is quite natural that you don't know how to respond at the WebSocket scope; the exception indicates a problem with the code you are invoking.
The coroutine functions then choose the safe path and cancel the parent job (which includes cancelling its child jobs), as stated in Cancellation and exceptions:
If a coroutine encounters an exception other than CancellationException, it cancels its parent with that exception. This behaviour cannot be overridden and is used to provide stable coroutines hierarchies for structured concurrency.
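A small sketch of that non-overridable upward cancellation (again assuming kotlinx.coroutines):

import kotlinx.coroutines.*

fun main() = runBlocking {
    val handler = CoroutineExceptionHandler { _, e -> println("handler saw: $e") }
    val scope = CoroutineScope(Job() + handler)

    scope.launch {
        launch { delay(Long.MAX_VALUE) }               // long-running sibling
        launch { throw IllegalStateException("boom") } // failing child
    }.join()

    // The failure cancelled the parent job, its sibling, and the scope itself.
    println("scope still active? ${scope.isActive}")   // prints: false
}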
What is the appropriate way for me to handle it?
In any case: Try to log it first (as you do already). Consider providing as much diagnostic data as feasible (including a stack trace).
Remember that the coroutines library has already cancelled jobs for you. In many cases, this would be just good enough. Don't expect the coroutines library to do more than this (not now, not in a future release). It does not have the knowledge to do better. The application server typically provides a configuration for exception handling, e.g. as in Ktor.
Beyond that, it depends, and may involve heuristics and trade-offs. Don't blindly follow "best practices". You know your application's design and requirements better than others. Some aspects to consider:
For efficient operations, restore impacted services automatically and as quickly and seamlessly as reasonable. Sometimes the easy way (shutting down and restarting everything that might be affected) is good enough.
Evaluate the impact of recovering from an unknown state. Is it just a minor glitch which is easily noticed, or do people's lives depend on the outcome? In case of uncaught exceptions: Is the application designed in a way that resources are released and transactions rolled back? Can dependent systems continue unaffected?
If you have control over the functions called, you might introduce a separate exception class (hierarchy) for recoverable exceptions (which have only a transitory and non-damaging effect) and treat them differently (see the sketch after this list).
When trying to recover a partially working system, consider a staged approach and handle follow-up failures:
If it is sufficient to shut down your coroutines only, leave it at that. You might even keep the WebSocket session open and send a restart indication message to the client. Consider the chapter on Supervision in the Kotlin coroutines documentation.
If that would be unsafe (or a follow-up error occurs), consider shutting down the thread. This is not relevant for coroutines dispatched across different threads, but it is a proper solution for systems without inter-thread coupling.
If that would still be unsafe (or a follow-up error occurs), shut down the entire JVM. It all may depend on the exception's underlying cause.
If your application modifies persistent data, make sure it is crash-proof by design (e.g. via atomic transactions or other automatic recovery strategies).
If a design goal of your entire application is to be crash-proof, consider a crash-only software design instead of (possibly complex) shutdown procedures.
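Here is the sketch promised above for a recoverable-exception hierarchy. All names are hypothetical; the point is that only failures you know to be transitory get retried, while everything else (including Error) propagates:

open class RecoverableException(message: String, cause: Throwable? = null) :
    RuntimeException(message, cause)

class TransientNetworkException(cause: Throwable) :
    RecoverableException("transient network failure", cause)

// Retries only failures explicitly marked as recoverable; the final
// attempt (and any non-recoverable throwable) propagates to the caller.
suspend fun <T> withRecovery(attempts: Int = 3, block: suspend () -> T): T {
    repeat(attempts - 1) {
        try {
            return block()
        } catch (e: RecoverableException) {
            // log and fall through to the next attempt
        }
    }
    return block()
}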
In case of an OutOfMemoryError, if the cause was a singular event (e.g. one giant allocation), recovery could proceed in stages as described above. On the other hand, if the JVM cannot even allocate tiny bits, forcibly terminating the JVM via Runtime.halt() might prevent cascading follow-up errors.
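As a sketch of that last-resort decision (initiateGracefulShutdown() is a hypothetical application hook, not a library call):

fun initiateGracefulShutdown() { /* release resources, then exit normally */ }

fun handleFatal(t: Throwable) {
    if (t is OutOfMemoryError) {
        // A normal shutdown itself allocates memory and may fail in cascades.
        // Runtime.halt() terminates immediately, skipping shutdown hooks and
        // finalization.
        Runtime.getRuntime().halt(1)
    } else {
        initiateGracefulShutdown()
    }
}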
I am wondering: which Tell method should be used by default?
The docs at http://getakka.net/docs/working-with-actors/sending-messages hint that Tell(message, sender) is the preferred way of sending a message. However, looking at the Akka.NET code, it seems that Tell(message) calls the two-argument version anyway, with the sender field filled in automatically.
Apart from the two-argument version of Tell calling into simpler code (fewer ifs under the hood), is there another reason why it should be used instead of the single-argument version (when calling from inside an actor)?
I lean towards calling things with the least amount of dependencies necessary to achieve the task at hand.
Anyway, aside from that: the article you are referring to is really saying to favour Tell over Ask<>. I do not think the intent is to specify which overload is preferable. Often you will use the one with Sender because you want a response to go to a different actor.
Calling Tell(blah, Self) seems horribly redundant, which is probably why the overload exists. The times you need to be careful are when you are telling from a place where you do not have a reference to Self or a suitable Sender, e.g. from tests.
Another common scenario is at a service layer; here (i.e. at the surface of the system) you will often find Ask<> is appropriate if a response is required synchronously. The point of that article is that reactive systems often do not want to be synchronous (ask-based), so you should have tell-based pathways throughout (e.g. in a web context, to an Rx hub, maybe).
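The question is about Akka.NET, but the sender-argument distinction is the same in the JVM Akka classic API, which Akka.NET mirrors; a Kotlin sketch (assuming akka-actor on the classpath, names illustrative):

import akka.actor.AbstractActor
import akka.actor.ActorRef

class Forwarder(private val worker: ActorRef) : AbstractActor() {
    override fun createReceive(): AbstractActor.Receive = receiveBuilder()
        .match(String::class.java) { msg ->
            // Like Tell(message): replies from worker come back to this actor.
            worker.tell(msg, self)
            // Like Tell(message, sender): replies bypass this actor and go
            // straight to whoever originally sent msg to us.
            worker.tell(msg, sender)
        }
        .build()
}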
Basically, if I have lots of synchronized methods in a monitor, will this effectively avoid deadlocks?
In general, no, it does not guarantee the absence of deadlocks. Please have a look at the code examples at Deadlocks and Synchronized methods and Deadlock in Java: two classes, A and B, containing only synchronized methods, are enough to produce a perfect deadlock.
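A Kotlin sketch of the shape of those examples: both classes contain only synchronized methods, yet the two threads acquire the two monitors in opposite orders and deadlock.

class A {
    @Synchronized fun foo(b: B) { Thread.sleep(100); b.last() } // holds A's monitor, wants B's
    @Synchronized fun last() {}
}

class B {
    @Synchronized fun bar(a: A) { Thread.sleep(100); a.last() } // holds B's monitor, wants A's
    @Synchronized fun last() {}
}

fun main() {
    val a = A()
    val b = B()
    Thread { a.foo(b) }.start()
    Thread { b.bar(a) }.start()
    // With the sleeps in place, both threads almost certainly block forever.
}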
Also, in my opinion, your wording "Java monitor with Synchronised Methods", although conceptually correct, slightly deviates from the terminology accepted in Java. For example, the java.lang.Object.wait() javadoc puts it the following way:
"The current thread must own this object's monitor"
That implicitly suggests that the object and the monitor are not the same thing. Instead, the monitor is something we don't directly see or address.
Good day all,
I'm having a hell of a time figuring out which multithreading approach to utilize in my current work project. Since I've never written a multithreaded app in my life, this is all confusing and very overwhelming. Without further ado, here's my background story:
I've been assigned to take over work on a control application for a piece of test equipment in my company's R&D lab. The program has to be able to send and receive serial communications with three different devices semi-concurrently. The original program was written in VB 6 (no multithreading), and I had planned on just modding it to work with the newer products that need to be tested, until it posed a safety hazard: the UI locked up due to excessive serial communications during a test, which resulted in part of the tester hardware blowing up. So I decided to try rewriting the app in VB.Net, as I'm more comfortable with it to begin with and because I thought multithreading might help solve this problem.
My plan was to send commands to the other pieces of equipment from the main app thread and spin the receiving ends off into their own threads so that the main thread wouldn't lock up when timing is critical. However, I've yet to come to terms with my options. To add to my problems, I need to display the received communications in separate rich text boxes as they're received, while the data from one particular device needs to be parsed by the main program, but only the text that results from the most current test (I need the text box to contain all received data, though).
So far, I've investigated delegates, handling the threads myself, and have just begun looking into BackgroundWorkers. I tried to use delegates earlier today, but couldn't figure out a way to update the text boxes. Would I need to use a callback function to do this, since I can't do it in the body of the delegate function itself? The problem I see with handling threads myself is figuring out how to pass data back and forth between the thread and the rest of the program. BackgroundWorkers, as I said, I've only just started investigating, so I'm not sure what to think of them yet.
I should also note that the plan was for the spawned threads to run continuously until somehow triggered to stop. Is this possible with any of the above options? Are there other options I haven't discovered yet?
Sorry for the length and the fact that I seem to ramble through disjointed bits of info, but I'm on a tight deadline and stressed out to the point I can't think straight! Any advice/info/links are more than appreciated. I just need help weighing the options so I can pick a direction and move forward. Thanks to everybody who took the time to read this mess!
OK, serial ports, inter-thread comms, display stuff in GUI components like RichTextBox, need to parse incoming data quickly to decode the protocol and fire into a state-machine.
Are all three serial ports going to fire into the same 'processControl' state-machine?
If so, then you should probably do this by assembling event/data objects and queueing them to the state-machine run by one thread (see BlockingCollection). This is hugely safer and easier to understand/debug than locking up the state-engine with a mutex.
Define a 'comms' class to hold data and carry it around the system. It should have a 'command' enum so that threads that get one can do the right thing by switching on the enum; an 'Event' member that can be set to whatever is used by the state-engine; a 'bool loadChar(char inChar)' that can have char-by-char data thrown into it and will return 'true' only if a complete, validated protocol-unit has been assembled, checked and parsed into data members; a 'string textify()' method that dumps info about the contained data in text form; a general 'status' string to hold text stuff; and an 'errorMess' string and Exception member.
You probably get the idea - this comms class can transport anything around the system. It's encapsulated so that a thread can use its data and methods without reference to any other instance of comms - it does not need any locking. It can be queued to work threads on a BlockingCollection and BeginInvoked to the GUI thread for displaying stuff.
In the serialPort objects, create a comms at startup and load a member with the serialPort instance. When the DataReceived event fires, get the data from the args a char at a time and feed each one into comms.loadChar(). If the loadChar call returns true, queue the comms instance to the state-machine input BlockingCollection and then immediately create another comms and start loading up the new one with data. Just keep doing that forever - loading up comms instances with chars until they hold a validated protocol unit and queueing them to the state-machine. It may be that each serial port has its own protocol - OK, so you may need three comms descendants that override loadChar to correctly decode their own protocol.
In the state-machine thread, just take() comms objects from the input and do the state-engine thing, using the current state and the Event from the comms object. If the SM action routine decides to display something, BeginInvoke the comms to the GUI thread with the command set to 'displaySomeStuff'. When the GUI thread gets the comms, it can case-switch on the command to decide what to display/whatever.
Anyway, that's how I build all my process-control type apps. Data flows around the system in 'comms' object instances; no comms object is ever operated on by more than one thread at a time. It's all done by message-passing on BlockingCollection (or similar) queues, or BeginInvoke() if going to the GUI thread.
The only locks are in the queues and so are encapsulated. There are no explicit locks at all. This means there can be no explicit deadlocks at all. I do get headaches, but I don't get lockups.
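The advice above is .NET-flavoured (BlockingCollection); here is a Kotlin/JVM sketch of the same pattern using a BlockingQueue, with all names illustrative:

import java.util.concurrent.LinkedBlockingQueue

enum class Command { PROTOCOL_UNIT, STOP }

// Carries data around the system; only one thread touches an instance at a time.
class Comms(val command: Command) {
    private val buffer = StringBuilder()

    // Returns true once a complete protocol unit has been assembled.
    // Real code would do protocol-specific framing and validation here.
    fun loadChar(c: Char): Boolean {
        buffer.append(c)
        return c == '\n'
    }

    fun textify(): String = buffer.toString().trim()
}

fun main() {
    val toStateMachine = LinkedBlockingQueue<Comms>()

    // The single state-machine thread; the only locks are inside the queue.
    val sm = Thread {
        while (true) {
            val msg = toStateMachine.take()
            if (msg.command == Command.STOP) break
            println("state machine got: ${msg.textify()}")
        }
    }.apply { start() }

    // Stand-in for a serial DataReceived handler: fill a Comms until a unit
    // completes, queue it, then immediately start filling a fresh one.
    var current = Comms(Command.PROTOCOL_UNIT)
    for (c in "hello\nworld\n") {
        if (current.loadChar(c)) {
            toStateMachine.put(current)
            current = Comms(Command.PROTOCOL_UNIT)
        }
    }
    toStateMachine.put(Comms(Command.STOP))
    sm.join()
}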
Oh - don't go near 'Thread.Join()'.
I'm currently implementing a client-client approach for a simulation in Objective-C, with two computers (mac1 and mac2).
I have a class Client, and each computer has an instance of Client on it (client1, client2). I expect that both clients will be synchronized: they will both be equal apart from their memory locations.
When a user presses a key on mac1, I want both client1 and client2 to receive a given method from class Client (so that they stay synchronized, i.e. they are the same apart from their memory location on each mac).
For this approach, my current idea is to make two methods:
- (void) sendSelector:(Client*)toClient,...;
- (void) receiveSelector:(Client*)fromClient,...;
sendSelector: uses NSStringFromSelector() to transform the method into an NSString and sends it over the network (let's not worry about sending strings over the net for now).
On the other hand, receiveSelector: uses NSSelectorFromString() to transform an NSString back into a selector.
My first question/issue is: to what extent is this approach "standard" for networking with Objective-C?
My second question:
And the method's arguments? Is there any way of "packing" a given class instance and sending it over the network? I understand the pointer problem when packing, but every instance in my program has a unique identity, so that should be no problem, since both clients will know how to retrieve the object from its identity.
Thanks for your help
Let me address your second question first:
And the method's arguments? Is there any way of "packing" a given class instance and sending it over the network?
Many Cocoa classes implement/adopt the NSCoding protocol. This means they support a default implementation for serializing to a byte stream, which you could then send over the network. You would be well advised to use the NSCoding approach unless it's fundamentally not suited to your needs for some reason (i.e. use the highest level of abstraction that gets the job done).
Now for the more philosophical side of your first question; I'll rephrase your question as "is it a good approach to use serialized method invocations as a means of communication between two clients over a network?"
First, you should know that Objective-C has a not-often-used-any-more, but reasonably complete, implementation for handling remote invocations between machines with a high level of abstraction. It was called Distributed Objects. Apple appears to be shoving it under the rug to some degree (with good reason -- keep reading), but I was able to find an old cached copy of the Distributed Objects Programming Topics guide. You may find it informative. AFAIK, all the underpinnings of Distributed Objects still ship in the Objective-C runtime/frameworks, so if you wanted to use it, if only to prototype, you probably could.
I can't speculate as to the exact reasons that you can't seem to find this document on developer.apple.com these days, but I think it's fair to say that, in general, you don't want to be using a remote invocation approach like this in production, or over insecure network channels (for instance: over the Internet.) It's a huge potential attack vector. Just think of it: If I can modify, or spoof, your network messages, I can induce your client application to call arbitrary selectors with arbitrary arguments. It's not hard to see how this could go very wrong.
At a high level, let me recommend coming up with some sort of protocol for your application, with some arbitrary wire format (another person mentioned JSON -- It's got a lot of support these days -- but using NSCoding will probably bootstrap you the quickest), and when your client receives such a message, it should read the message as data and make a decision about what action to take, without actually deriving at runtime what is, in effect, code from the message itself.
From a "getting things done" perspective, I like to share a maxim I learned a while ago: "Make it work; Make it work right; Make it work fast. In that order."
For prototyping, maybe you don't care about security. Maybe when you're just trying to "make it work" you use Distributed Objects, or maybe you roll your own remote invocation protocol, as it appears you've been thinking of doing. Just remember: you really need to "make it work right" before releasing it into the wild, or those decisions you made for prototyping expedience could cost you dearly. The best approach here will be to create a class or group of classes that abstracts away the network protocol and wire format from the rest of your code, so you can swap out networking implementations later without having to touch all your code.
One more suggestion: I read in your initial question a desire to 'keep an object (or perhaps an object graph) in sync across multiple clients.' This is a complex topic, but you may wish to employ a "Command Pattern" (see the Gang of Four book, or any number of other treatments in the wild.) Taking such an approach may also inherently bring structure to your networking protocol. In other words, once you've broken down all your model mutation operations into "commands" maybe your protocol is as simple as serializing those commands using NSCoding and shipping them over the wire to the other client and executing them again there.
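To make the Command Pattern idea concrete, a small sketch (in Kotlin for illustration; in Cocoa you would serialize the commands with NSCoding instead of java.io.Serializable, and all names here are hypothetical):

import java.io.Serializable

interface Command : Serializable {
    fun apply(model: SimulationModel)
}

data class SimulationModel(var x: Int = 0, var y: Int = 0)

// One model mutation == one serializable command.
data class MoveBy(val dx: Int, val dy: Int) : Command {
    override fun apply(model: SimulationModel) {
        model.x += dx
        model.y += dy
    }
}

fun main() {
    val local = SimulationModel()
    val remote = SimulationModel()

    val cmd: Command = MoveBy(3, 4)
    cmd.apply(local)
    // ... serialize cmd, ship it to the other client, deserialize it there ...
    cmd.apply(remote)

    check(local == remote) // both clients end up in the same state
}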
Hopefully this helps, or at least gives you some starting points and things to consider.
These days it would seem that the most standard way is to package everything up in JSON.