I am calling a large method from multiple threads, and it has been quite difficult to prevent deadlocks and race conditions using SyncLock around global field increments. The method calls other methods, and I am wondering whether the threads could also race inside those chained methods(?).
My thought is that if I instead instantiate a class, start the thread in its constructor, and turn every call made by the first method into a call on its own instantiated object, race conditions should be avoided.
An instantiated object owns its methods, so I believe the methods and sub-methods of a single instance should never race with each other. Following that logic, I could instantiate the class numerous times instead of even using threads and let the GC catch up (which may be inefficient?).
Perhaps this would work better instead of SyncLock:
Interlocked.Increment(myGlobalVariable)
Edit: I don't believe your theory that putting the methods in classes will help prevent a data race. It should not matter where the methods are defined: if they attempt to modify the same data from different threads without an effective synchronization technique (e.g., a mutex), then a data race is a definite possibility.
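For illustration, here is a sketch in Java rather than VB.NET (the same reasoning applies on any runtime; all names here are invented for the example). Giving each thread its own object does not help while those objects still update the same shared field, whereas an atomic increment, the JVM analogue of Interlocked.Increment, removes the race on that one counter:

import java.util.concurrent.atomic.AtomicInteger;

class Worker {
    // Shared by all Worker instances, so creating more Workers does not remove the race.
    static int unsafeCounter = 0;
    static final AtomicInteger safeCounter = new AtomicInteger();

    void doWork() {
        unsafeCounter++;               // read-modify-write: can race
        safeCounter.incrementAndGet(); // atomic, like Interlocked.Increment
    }
}

public class RaceDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[8];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                Worker w = new Worker();   // a fresh object per thread...
                for (int j = 0; j < 100_000; j++) w.doWork();  // ...still races on the shared field
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        // unsafeCounter is usually less than 800000; safeCounter is always exactly 800000.
        System.out.println(Worker.unsafeCounter + " vs " + Worker.safeCounter.get());
    }
}

Note that an atomic increment only protects that single counter update; an invariant spanning several fields still needs a lock such as SyncLock.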
Related
I have a block of code like this in Kotlin:
synchronized(this) {
    // do some work that produces a String
}.also { /* it: String */
    logger.log(it)
}
Can another thread come along and, with unlucky timing, change the it variable before the logging happens? (There are a lot of threads executing this piece of code concurrently.)
To expand on comments:
The synchronized block returns a reference that's passed into the also block as it; that reference is local to the thread, and so there's no possibility of it being affected by other threads.
In general, there's no guarantee about the object that that reference points to: if other threads have a reference to it, they could potentially change its state (or that of other objects it refers to). But in this case, it's a String; and in Kotlin, Strings are completely immutable. So there's no risk of that here.
Taking those together: the logging in OP's code is indeed thread-safe!
However:
We can't tell whether there could be race conditions or other concurrency issues within the synchronized block, before the String gets created and returned. It synchronizes on this, which prevents two threads simultaneously executing it on the same object; but if there are two instances of the class, each one could have a single thread running it. So, for example, there could be an issue if the block uses some common object that's not completely thread-safe, or some instance property that's set to the same reference in both instances, or if there's sharing in a more roundabout way. Obviously, this will depend upon the nature of the work done in that block, what objects it accesses, and what it does with them. So that's worth bearing in mind too.
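To make that concrete, here is a sketch of the same pitfall in Java syntax (the JVM lock semantics are the same as Kotlin's synchronized; the names are invented for the example):

import java.util.ArrayList;
import java.util.List;

class Producer {
    // Shared by every Producer instance: synchronized(this) does NOT protect it,
    // because each instance locks on a different object.
    static final List<String> sharedLog = new ArrayList<>();

    String produce() {
        synchronized (this) {
            String result = "item-" + System.nanoTime();
            sharedLog.add(result);   // two threads, each with its own Producer, can run this line concurrently
            return result;
        }
    }
}

Because the non-thread-safe ArrayList is shared across instances, it can still be corrupted; guarding it would require synchronizing on a lock object shared by both instances, or using a thread-safe collection.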
I am currently interning in a company and just starting to get into their code. I noticed that they have tasks that use singleton classes, but inside the singleton class there is a future object that is used to fetch thread dumps.
The code goes something like this:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class SingletonClass {
    private static final SingletonClass INSTANCE = new SingletonClass();
    private final ExecutorService x = Executors.newFixedThreadPool(1);

    public static SingletonClass getInstance() { return INSTANCE; }

    void methodThatFetchesThreadDumps() {
        // a Future returned by x.submit(...) is used here
    }
}
Is it a good idea to use a future inside a singleton? What happens if the task using this singleton runs twice and overlaps? Wouldn’t using the singleton multiple times cause the future to give unexpected behavior?
This isn't necessarily a bad thing. The Future will make sure that the objects it returns are safely visible across threads. The thread pool is fixed at a size of 1, so if there are concurrent requests, the second task waits until the only worker thread becomes available, by which time that thread has handed off the result of the previous task. No overlap should occur.
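As a sketch of that behaviour (hypothetical names, not the company's actual code): a singleton holding a single-threaded executor serialises the submitted tasks, and Future.get() both blocks the caller and safely publishes the result to the calling thread:

import java.util.concurrent.*;

class DumpFetcher {
    private static final DumpFetcher INSTANCE = new DumpFetcher();
    // One worker thread: submitted tasks queue up and run strictly one at a time.
    private final ExecutorService executor = Executors.newFixedThreadPool(1);

    static DumpFetcher getInstance() { return INSTANCE; }

    String fetchThreadDump() throws Exception {
        Future<String> future = executor.submit(() -> {
            // stand-in for the real thread-dump work
            return "thread dump taken at " + System.currentTimeMillis();
        });
        // get() blocks until the single worker has produced this task's result,
        // so overlapping callers simply wait their turn.
        return future.get(30, TimeUnit.SECONDS);
    }
}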
When creating an iOS program, is there any performance hit if I pass around SELs (@selector values) and invoke them in other classes? Is this significantly slower than normal method invocations?
Why would there be a performance hit for messaging (the one thing Objective-C is famous for) from other classes? Of course, compared to C functions, there is some overhead (thanks to the addition of two more parts to a method call). Selectors are simply data types, so passing a value of type SEL around is no more costly than passing a BOOL or an int. However, to actually call a method from a passed selector, the creation of an NSInvocation object is recommended, which would slightly increase the overhead.
And you are more or less safe in Objective-C, since messages to nil (you did mention other classes) simply return nil.
It might not have much of an effect except on the first run: the compiler creates references to each object, its dependencies, and the classes at compile time, so loading may be a bit slower (and then only in very large programs), but after that there shouldn't be much difference, provided no very large objects are created in the intermediate steps and no heavy dynamic operations are involved. Here I am talking only about calling a local function versus calling the same function from a different class.
Anyway, why would you use a selector to refer to a function in a different class?
From my limited knowledge, a selector is simply an encoding of a method name. Given that in Objective-C methods are called by sending messages to objects, I don't see why there should be a performance difference between an explicit method call ([object method]) and an indirect call through a selector ([objectDelegate performSelector:selector]).
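There is no direct Java equivalent of a SEL, but the cost profile described above can be sketched on the JVM with a passed-around method reference (invented names): handing the reference over is as cheap as passing any other value, and only the eventual indirect invocation adds a small amount of dispatch overhead.

import java.util.function.Supplier;

class Greeter {
    String greeting() { return "hello"; }
}

public class IndirectCallDemo {
    // Passing the "callable thing" around is just passing a reference,
    // comparable in cost to passing a SEL value.
    static String callLater(Supplier<String> supplier) {
        // The actual invocation goes through one extra level of indirection.
        return supplier.get();
    }

    public static void main(String[] args) {
        Greeter greeter = new Greeter();
        String direct = greeter.greeting();              // direct call
        String indirect = callLater(greeter::greeting);  // call through a passed reference
        System.out.println(direct + " " + indirect);
    }
}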
I just created a singleton method, and I would like to know what the @synchronized() directive does, as I use it frequently but do not know its meaning.
It declares a critical section around the code block. In multithreaded code, @synchronized guarantees that only one thread can be executing the code in that block at any given time.
If you aren't aware of what it does, then your application probably isn't multithreaded, and you probably don't need to use it (especially if the singleton itself doesn't need to be thread-safe).
Edit: Adding some more information that wasn't in the original answer from 2011.
The @synchronized directive prevents multiple threads from entering any region of code that is protected by a @synchronized directive referring to the same object. The object passed to the @synchronized directive is the object that is used as the "lock." Two threads can be in the same protected region of code if a different object is used as the lock, and you can also guard two completely different regions of code using the same object as the lock.
Also, if you happen to pass nil as the lock object, no lock will be taken at all.
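Java's synchronized(object) block follows the same lock-identity rule, so a short JVM sketch (invented names) can illustrate which regions exclude each other:

class LockIdentityDemo {
    private final Object lockA = new Object();
    private final Object lockB = new Object();

    void firstRegion() {
        synchronized (lockA) {
            // Only one thread at a time can be here OR in secondRegion(),
            // because both regions use lockA as the lock.
        }
    }

    void secondRegion() {
        synchronized (lockA) {
            // Guarded by the same lock object as firstRegion().
        }
    }

    void independentRegion() {
        synchronized (lockB) {
            // Different lock object: a thread can be here while another
            // thread is inside firstRegion() or secondRegion().
        }
    }
}

(One difference from @synchronized: passing null as the lock in Java throws a NullPointerException instead of silently skipping the lock.)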
From the Apple documentation:
The @synchronized directive is a convenient way to create mutex locks on the fly in Objective-C code. The @synchronized directive does what any other mutex lock would do—it prevents different threads from acquiring the same lock at the same time.
The documentation provides a wealth of information on this subject. It's worth taking the time to read through it, especially given that you've been using it without knowing what it's doing.
The @synchronized directive is a convenient way to create mutex locks on the fly in Objective-C code.
The @synchronized directive does what any other mutex lock would do—it prevents different threads from acquiring the same lock at the same time.
Syntax:
@synchronized(key)
{
    // thread-safe code
}
Example:
- (void)appendExisting:(NSString *)val
{
    @synchronized (oldValue) {
        // stringByAppendingFormat: returns a new string, so store the result back
        oldValue = [oldValue stringByAppendingFormat:@"-%@", val];
    }
}
With that change, the code above is thread-safe: multiple threads can update the value without interfering with each other.
The above is just a contrived example...
The @synchronized block automatically handles locking and unlocking for you. With @synchronized you get an implicit lock associated with the object you are synchronizing on. There is a very informative discussion on this topic here: How does @synchronized lock/unlock in Objective-C?
Excellent answer here:
Help understanding class method returning singleton
with further explanation of the process of creating a singleton.
@synchronized is a thread-safety mechanism. The piece of code written inside this block becomes part of a critical section, which only one thread can execute at a time.
@synchronized applies the lock implicitly, whereas NSLock requires you to lock and unlock explicitly.
It only helps you achieve thread safety; it does not guarantee it. What I mean is: you can hire an expert driver for your car, yet that still doesn't guarantee the car won't meet with an accident. However, the probability becomes very small.
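The implicit-versus-explicit distinction also exists on the JVM, which may make it easier to picture: a synchronized block acquires and releases its monitor automatically, while an explicit lock such as ReentrantLock (roughly playing the role of NSLock here; invented names) must be released by hand.

import java.util.concurrent.locks.ReentrantLock;

class ImplicitVsExplicit {
    private final Object monitor = new Object();
    private final ReentrantLock lock = new ReentrantLock();
    private int value;

    void incrementImplicit() {
        synchronized (monitor) {   // lock acquired and released for you,
            value++;               // even if the block throws
        }
    }

    void incrementExplicit() {
        lock.lock();               // you acquire the lock yourself...
        try {
            value++;
        } finally {
            lock.unlock();         // ...and you must remember to release it
        }
    }
}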
Most iPhone code examples use the nonatomic attribute on their properties, even those that involve [NSThread detachNewThreadSelector:...]. However, is this really an issue if you are not accessing those properties on the separate thread?
If that is the case, how can you be sure nonatomic properties won't be accessed on a different thread in the future, at which point you may have forgotten that those properties were declared nonatomic? This can create difficult bugs.
Besides setting all properties to atomic, which can be impractical in a large app and may introduce new bugs, what is the best approach in this case?
Please note that these questions are specifically about iOS, not the Mac in general.
First, know that atomicity by itself does not ensure thread safety for your class; it simply generates accessors that will set and get your properties in a thread-safe way. This is a subtle distinction. To create thread-safe code, you will very likely need to do much more than simply use atomic accessors.
Second, another key point to know is that your accessors can be called from background or foreground threads safely regardless of atomicity. The key here is that they must never be called from two threads simultaneously. Nor can you call the setter from one thread while simultaneously calling the getter from another, etc. How you prevent that simultaneous access depends on what tools you use.
That said, to answer your question: you can't know for sure that your accessors won't be accessed on another thread in the future. This is why thread safety is hard, and a lot of code isn't thread-safe. In general, if you're making a framework or library, then yes, you can try to make your code thread-safe for the purposes of defensive programming, or you can leave it non-thread-safe. The atomicity of your properties is only a small part of that. Whichever you choose, though, be sure to document it so users of your library don't have to wonder.
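To illustrate the first point above with a JVM-flavoured sketch (Java here, since the idea is language-independent; the class and method names are made up): atomic accessors make each individual read and write safe on its own, but a compound operation built from them can still race.

import java.util.concurrent.atomic.AtomicInteger;

class Counter {
    // "Atomic" accessors: each individual get and set is safe by itself,
    // roughly what an atomic Objective-C property gives you.
    private final AtomicInteger count = new AtomicInteger();

    int getCount() { return count.get(); }
    void setCount(int newValue) { count.set(newValue); }

    // Still NOT thread-safe: another thread can update count between
    // the get and the set, so increments can be lost.
    void incrementBroken() { setCount(getCount() + 1); }

    // Thread safety needs the whole read-modify-write to be one operation.
    void incrementSafe() { count.incrementAndGet(); }
}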