When should I use semMCreate vs. semBCreate? - vxworks

When should I use semMCreate vs. semBCreate? In my opinion, they are both the same.

semMCreate creates a mutex semaphore and semBCreate creates a binary semaphore, and they are actually quite different.
Very basically: you use a mutex to protect a critical section of code (mutual exclusion), and you use a binary semaphore to synchronize tasks, e.g. to signal that some event has occurred. The distinction matters especially in VxWorks, where mutex semaphores additionally offer priority inheritance, deletion safety, and recursive takes, which binary semaphores do not.
You can find more information in the answers to this question.

Related

Kotlin Coroutines: Do we need to synchronize shared state?

From the official guide and the samples on the web, I didn't see any mention of locking or synchronization, or of how safe it is to modify a shared variable in multiple launch or async calls.
Coroutines bring a concurrent programming model that may result in simultaneously executed code. Just as with thread-based libraries, you have to take care of synchronization, as noted in the docs:
Coroutines can be executed concurrently using a multi-threaded dispatcher like the Dispatchers.Default. It presents all the usual concurrency problems. The main problem being synchronization of access to shared mutable state. Some solutions to this problem in the land of coroutines are similar to the solutions in the multi-threaded world, but others are unique.
With Kotlin coroutines you can make use of familiar strategies such as thread-safe data structures, confining execution to a single thread, or using locks (e.g. Mutex).
Besides these common patterns, Kotlin coroutines encourage a "share by communication" style. Concretely, an actor can be shared between coroutines, which communicate with it by sending and receiving messages instead of touching shared state directly. Also have a look at Channels.

Vulkan: vkWaitForFences synchronizing access to VkDevice

Is it necessary to synchronize access to the device handle when calling vkWaitForFences? The specification does not mention any need for this, but it doesn't mention the call being free-threaded, either. Some places, namely most of the vkCreateXXX functions, do mention this as a requirement. Given the explicit nature of the spec, I'd expect more precise wording (rather than none at all in this case).
I suspect the answer is "no", but I am unable to trust my intuition with this API or implementations behind it.
It would be strange (useless, actually) if it were necessary to guard a call to this function.
The spec uses the terms "external synchronization" and "host synchronization" to talk about objects/parameters where the application must ensure non-concurrent use. The rules are described in Section 2.5 "Threading Behavior", and in the "Host Synchronization" block after most commands. Anything not listed can be used concurrently.
I'm not sure why you think that the device parameter is supposed to be externally synchronized for vkCreate*, I couldn't find something in the spec to support that. The device object is almost never externally synchronized.
None of the parameters to vkWaitForFences is listed as Host Synchronized. But the fence(s) passed to vkQueueSubmit and vkResetFences are host synchronized, so you can't pass a fence to one of those calls while there is another thread waiting for the fence. But you could have two threads waiting on the same fence, or one thread calling vkGetFenceStatus while another thread is waiting on it.

Vulkan: Any downsides to creating pipelines, command-buffers, etc one at a time?

Some Vulkan objects (e.g. VkPipelines, VkCommandBuffers) can be created/allocated in arrays (using size + pointer parameters). At a glance, this appears to be done to make it easier to code common usage patterns. But in some cases (e.g. when creating a C++ RAII wrapper), it's nicer to create them one at a time. It is, of course, simple to achieve this.
However, I'm wondering whether there are any significant downsides to doing this?
(I guess this may vary depending on the actual object type being created - but I didn't think it'd be a good idea to ask the same question for each object)
Assume that, in both cases, objects are likely to be created in a first-created-last-destroyed manner, and that - while the objects are individually created and destroyed - this will likely happen in a loop.
Also note:
VkCommandBuffers are also deallocated in arrays.
VkPipelines are destroyed individually.
Are there any reasons I should modify my RAII wrapper to allow for array-based creation/destruction? For example, will it save memory (significantly)? Will single-creation reduce performance?
Remember that pipeline creation does not require external synchronization. That means the implementation is going to handle its own mutexes and so forth. As such, it makes sense to avoid hammering those internal mutexes whenever possible.
Also, pipeline creation is slow, so being able to batch it up and push it off to another thread is very useful.
Command buffer creation doesn't have either of these concerns. So there, you should feel free to allocate whatever CBs you need. However, multiple creation will never harm performance, and it may help it. So there's no reason to avoid it.
Vulkan is an API designed around modern graphics hardware. If you know you want to create a certain number of objects up front, you should use the batch functions if they exist, as the driver may be able to optimize creation/allocation, resulting in potentially better performance.
Whether you actually see better performance depends on the driver and the type of your workload, but there is clearly potential for it.
If you create only one or ten command buffers in your application, then it does not matter.
In most cases the difference will be less than 5 %, so if you do not care about that (e.g. your application already runs at 500 FPS), then it does not matter either.
Then again, C++ is a versatile language. I think this is a non-problem. You would simply have a static member function or a class that would construct/initialize N objects (there's probably a pattern name for that).
The destruction may be trickier. You can again have a static member function that destroys N objects, but it would not be called automatically, and it is annoying to have null/husk objects around. The destructor would still be called on VK_NULL_HANDLE. There is also the problem that a pool reset or destruction would invalidate all the command-buffer C++ objects, so there's probably no way to do it cleanly/simply.

Java Monitors: Does having a Java monitor with Synchronised Methods Avoid Deadlocks?

Basically, if I have lots of synchronised methods in a monitor, will this effectively avoid deadlocks?
In general, no, it does not guarantee the absence of deadlocks. Please have a look at the code examples at
Deadlocks and Synchronized methods and Deadlock in Java. Two classes, A and B, with only synchronized methods can generate a perfect deadlock.
Also, in my opinion, your wording "Java monitor with Synchronised Methods", although conceptually correct, slightly deviates from the one accepted in Java. For example, the java.lang.Object.wait() javadoc puts it the following way:
"The current thread must own this object's monitor"
That implicitly suggests that the object and the monitor are not the same thing. Instead, the monitor is something we don't directly see or address.

Am I missing any points in my argument in favor of atomic properties?

I read this question (and several others):
What's the difference between the atomic and nonatomic attributes?
I fully understand (at least I hope so :-D ) how the atomic/nonatomic specifier for properties works:
Atomic guarantees that a "read" operation won't be interrupted by a "write" operation.
Nonatomic doesn't guarantee this.
Neither atomic nor nonatomic solve race conditions, where one thread is reading and two threads are writing. There is no way to predict what result the read operation will return. This needs to be solved by additional synchronization.
Neither atomic nor nonatomic guarantee overall data integrity; one thread could set one property while another thread sets a second property in a state which is inconsistent with the state of the first property. This also needs to be solved by additional synchronization.
What makes my eyebrows raise is that people are divided into two camps:
Pro atomic: It makes sense to use nonatomic only as a performance optimization.
And if you are not optimizing, then you should always use atomic because of point 1. This way you won't read some complete garbage from this property in a multi-threaded application. And sure, if you care about points 2 and 3, you need to add more synchronization on top of it.
Against atomic: It doesn't make sense to use atomic at all.
Since atomic doesn't solve all the problems in a multi-threaded application, it doesn't make sense to use it at all, since you will need to add more synchronization code on top of it anyway. It will just make things slower.
I am leaning to the pro-atomic camp, but I want to do a sanity check that I didn't miss anything.
Lacking a very specific question (though still a good question), I'll answer with personal experience, FWIW.
In general, concurrency design is hard. With modern conveniences like GCD and ARC, the tools for implementing concurrent systems have certainly improved. However, the architecture of concurrency is still very hard.
And, generally, the hard part has nothing to do with individual properties; individual getters and setters. Concurrency is something that is implemented at a higher level.
The current state of the art is concurrency in isolation. That is, the parts of your app that are running concurrently are doing so using isolated graphs of objects that have extremely minimal connection to the rest of your application (typically, the "connections" are via callbacks that bundle up a bit of state and toss it over to some other queue, often the main queue for updating the UI).
By keeping the concurrency surface area -- the # of entry points into your code that must be concurrency safe -- to an absolute minimum, you reduce both complexity and the amount of time you'll spend debugging really weird, oft irreproducible, concurrency problems (that'll eat at your sanity).
Given all that, the value of atomic properties is pretty minimal. Sure, they can be useful along what should be the very very small set of interfaces -- of API -- that might be banged upon from multiple threads, but that is about it.
If you have objects for which the accessors are being banged on rapidly, making them atomic can be a significant performance hit, but premature optimization is the devil's fingers at play.