Is WriteFile thread safe? I mean, can I write to the same file from multiple threads simultaneously without synchronization? MSDN says nothing about the thread safety of WriteFile.
Yes, it is thread safe on its own, i.e. it prevents the system from crashing: the Win32 API maintains internal locking when writing files, and those locks are byte-range locks. You can read more here:
File locking
I have an asynchronous NSURLConnection inside an NSOperation running in concurrent mode.
I know that by default, the NSOperationQueue creates the thread for a (non-concurrent) NSOperation.
But in the concurrent case, does that mean I have one thread inside another thread?
If so, is that good practice?
(An example: http://www.dribin.org/dave/blog/archives/2009/05/05/concurrent_operations/)
Thanks :)
Dribin's code has some bugs in it, which are addressed in a project exploring concurrent operations and in a smaller, light-weight concurrent web fetcher. What is happening here is that the operation queue creates and manages one or more threads. The NSURLConnection uses a single thread - it does not create another one. So really there is just one thread consumed in this model.
Is it possible in VB.NET to suspend and resume processes?
I would like to entirely pause a process or an external thread.
Warning: .Suspend() has been deprecated. The reason is that suspending another thread is not even remotely safe.
Let's say that somewhere at a lower level, a thread is performing some important operation, like writing bytes, and you suspend it before it finishes. What happens to that range of bytes? It stays half-written until you call Resume(), which can lead to damaging results.
With that said:
Thread.Sleep(timeInMs)
will pause the current thread for the given number of milliseconds.
If you are looking to Suspend the current thread then
Thread.CurrentThread.Suspend()
and to resume the current thread
Thread.CurrentThread.Resume()
You will need to import the namespace to use this code as is.
Imports System.Threading
or change it to
System.Threading.Thread.CurrentThread.Suspend()
Be careful, and if you're not multi-threading and handling your own threads, I don't recommend it. There is a reason why MS deprecated these methods. Keep that in mind.
This is the right way to do what (I think) you need.
You have to call SuspendThread() and ResumeThread() (Win32 API functions) on all the threads of the target process...
I have a problem regarding the behaviour of the pthread function pthread_rwlock_wrlock. The specification linked above states that when one thread has locked the lock for writing and the same thread locks it again, the behaviour is undefined (I could actually observe this: on x86 Linux the call is a no-op, while on PowerPC Linux it stalls the thread).
The behaviour I need would be a read write lock that has the following characteristics:
read-locking by a thread succeeds if:
the lock is not held by any thread
the lock is read-locked by zero or more threads (including the calling thread), and possibly also read- or write-locked by the calling thread itself
write-locking succeeds when:
the lock is not held by any other thread
only the current thread is holding the lock (for reading or writing)
With a pthread_mutex_t, the recursiveness of the lock can be controlled via an initialization flag, but this is not possible for pthread_rwlock_t.
What are my options? I've never actually had to implement this kind of concurrency primitive in C, and I think I'm missing some obvious solution here.
To be honest, recursive locking does have some uses, but generally it's a hack. I can't seem to find the article right now, but Butenhof has a nice rant on this.
Back to the question. You could keep a thread-specific flag that signals "I have the lock". Set it right after locking and unset it before unlocking. Since the owning thread is the only one accessing it, you are safe. So when trying to lock, you simply check first: "Hey, is this thing locked already?"
As a side note: are you sure the design is okay if a thread tries to lock twice?
EDIT
Found the article.
But if that's all that's necessary, why does POSIX have recursive
mutexes?
Because of a dare.
I need to lock a mutex before making an asynchronous request, and then unlock the mutex in the callback of this request, which runs on another thread.
Apple's documentation says:
Warning: The NSLock class uses POSIX threads to implement its locking behavior. When sending an unlock message to an NSLock object, you must be sure that message is sent from the same thread that sent the initial lock message. Unlocking a lock from a different thread can result in undefined behavior.
How can I avoid this "undefined behaviour" and make it work as expected?
Better yet: use an NSOperationQueue or a GCD queue as your synchronization primitive.
Locks are expensive, and semaphores are, more or less, a lock with a counter.
Queue-based coding is far more efficient, especially when using the built-in queuing mechanisms.
Use an NSCondition for this, to signal other threads that they can safely pass now.
Don't use a mutex for this. Use a semaphore initialized to 1, or some other locking mechanism that allows cross-thread communication/locking.
Rgds,
Martin
I have some code like this:
- (void)doDatabaseFetch {
    ...
    @synchronized(self) {
        ...
    }
}
and many objects that call doDatabaseFetch as the user uses the view.
My problem is, I have an operation (navigating to the next view) that also requires a database fetch, so it hits the same synchronized block and waits its turn! I would ideally like this operation to kill all the waiting threads, or to give this thread a higher priority, so that it can execute immediately.
Apple says that
The recommended way to exit a thread is to let it exit its entry point routine normally. Although Cocoa, POSIX, and Multiprocessing Services offer routines for killing threads directly, the use of such routines is strongly discouraged.
So I don't think I should kill the threads... but how can I let them exit normally if they're waiting on a synchronized block? Will I have to write my own semaphore to handle this behavior?
Thanks!
Nick.
The first question to ask here: do you really need a critical section so big that many threads pile up waiting to enter? What you are doing here is serializing parallel execution, i.e. making your program single-threaded again (but slower). Reduce the lock scope as much as possible, think about reducing contention at the application level, and use appropriate synchronization tools (wait/signal) - you'll find that you pretty much never need to kill threads. I know it's very general advice, but it really helps to think that way.
Typically you cannot terminate a thread that is waiting on a synchronized block. If you need that sort of behavior, you should use a timed wait-and-signal paradigm, so that waiting threads sleep but can be woken. Moreover, each time the timed wait expires, a thread has the opportunity not to go back to sleep but to exit or take some other path (even if you don't choose to terminate it).
Synchronized blocks are designed for uncontested locks: on an uncontested lock, the synchronization should be pretty close to a no-op, but as soon as the lock becomes contested it has a very detrimental effect on application performance, more so than simply serializing your parallel program.
I'm not an Objective-C expert by any means, but I'm sure there are more advanced synchronization patterns available, such as barriers, conditions, and atomics.