Does the JVM know anything about the java.util.concurrent package?

In the try block below, there are three statements that must be executed in that order. Is there any possibility that the statements could run out of order? Does the JVM look ahead into the j.u.c classes, see the synchronization indicators (synchronized, volatile), and figure out that it must not reorder the execution?
private Deque<Integer> deque = new LinkedList<Integer>();
private Lock lock = new ReentrantLock();
private Condition condition = lock.newCondition();

class Producer implements Runnable {
    @Override
    public void run() {
        while (true) {
            try {
                lock.lock();
                deque.add(1);
                condition.signalAll();
            } finally {
                lock.unlock();
            }
        }
    }
}

The JVM will never reorder method calls unless it has fully inlined them. Any side effect at all can hide behind a call, so unless the JVM can prove there are no such side effects, it cannot reorder the calls.

The JVM understands and obeys ordering such that you will not be able to observe anything illegal. It can (and will) take shortcuts in areas that are not observable to you. This is true both generally for Java code and specifically for the j.u.c classes (where the JVM has extra knowledge of the internals, so it can optimize further while remaining safe).
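To make the "nothing illegal is observable" part concrete, here is a hypothetical Consumer counterpart to the Producer above (a sketch, not part of the original question). The Lock and Condition establish happens-before edges: the producer's unlock happens-before the consumer's subsequent lock acquisition, so when the consumer wakes from await() and re-acquires the lock it is guaranteed to observe the element added by deque.add(1), whatever reordering the JVM performs internally.

class Consumer implements Runnable {
    @Override
    public void run() {
        while (true) {
            lock.lock();
            try {
                // await() atomically releases the lock and re-acquires it before
                // returning, so the producer's add(1) happens-before the poll() below
                while (deque.isEmpty()) {
                    condition.await();
                }
                Integer value = deque.poll();
                // ... use value ...
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            } finally {
                lock.unlock();
            }
        }
    }
}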

How to avoid if..else (or any conditionals) when deciding which method to call?

How can I follow the Open/Closed Principle without violating LSP when deciding which method to invoke with different parameters in a statically typed language?
Consider a requirement like:
Action 1: perform DB operation on Table 1
Action 2: perform DB operation on Table 2 based on input
Action 3: do nothing
Code for the above requirement would look like:
process(obj) {
    if (obj.type === action1) {
        db.updateTable1()
    }
    if (obj.type === action2) {
        db.updateTable2(obj.status)
    }
    if (obj.type === action3) {
        // Maybe log that action 3 was received
    }
}
I figured out a way to follow OCP in the above code for additional actions: move the body of each if statement into a method and maintain a map keyed by action name.
However, it feels like this solution violates LSP, because the method wrapping the contents of the first if block takes no parameter, while the method wrapping the contents of the second if block takes one.
Either I force all methods to follow the same signature, following OCP but violating LSP, or I give up OCP itself and live with the chain of if statements.
A simple solution would be to define a strategy which executes the code currently contained in the if / else if / else branches:
interface Strategy {
    String getType();
    void apply();
}
The strategies need to be registered:
class Executor {
    private Map<String, Strategy> strategies = new HashMap<>();

    void registerStrategy(Strategy strategy) {
        strategies.put(strategy.getType(), strategy);
    }

    // "Request" stands in for whatever object carries the type field
    void process(Request obj) {
        if (strategies.containsKey(obj.type)) {
            // apply might execute db.updateTable1(),
            // depending on the interface's implementation
            strategies.get(obj.type).apply();
        } else {
            System.out.println("No strategy registered for type: " + obj.type);
        }
    }
}
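A minimal sketch of how the wiring might look (the db reference, the inline strategies, and the request object are illustrative, not part of the original answer):

Executor executor = new Executor();

executor.registerStrategy(new Strategy() {
    public String getType() { return "action1"; }
    public void apply() { db.updateTable1(); }
});

executor.registerStrategy(new Strategy() {
    public String getType() { return "action3"; }
    public void apply() { /* maybe just log that action 3 was received */ }
});

// dispatches on the request's type with no if/else chain
executor.process(request);

For actions that need per-call data, such as obj.status for action 2, one variation is to widen the interface to apply(Request obj) so that every strategy receives the same argument; keeping the signatures uniform is exactly what the LSP concern in the question asks for.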
The tradeoffs you recognise are unfortunately what you'll have to deal with when working with OOP in Java, C++, C#, etc., as these systems are dynamically put together and SOLID more or less addresses the resulting flaws. The SOLID principles are intended to provide guidance; I wouldn't follow them dogmatically.
I hoped to find an example of the command pattern written by better programmers than myself, but I only found poor examples that didn't really address your question.
The problem of associating an intent (a string or enum, a button click) with an action (an object, a lambda) will always require a level of indirection that we have to deal with. Some layers of abstraction are acceptable, for example: never call a model or service directly from a view. You could also think about implementing an event dispatcher and corresponding listeners, which would help with loose coupling, but at some lower level you'll still have to look up all the listeners...
The nature of obj is ambiguous, but I would recommend defining a well-typed interface and passing it throughout your code, where each class implementing the interface is equivalent to one of your 'actions'. Here's an example of what that might look like in TypeScript:
interface someDBInterface {
    performAction(): void;
}

function process(obj: someDBInterface) {
    obj.performAction();
}

class action1 implements someDBInterface {
    performAction() {
        // db.updateTable1();
    }
}

class action2 implements someDBInterface {
    status: any;
    performAction() {
        // db.updateTable2(this.status);
    }
}

class action3 implements someDBInterface {
    performAction() {
        // Maybe log that action 3 was received
    }
}
If this doesn't meet your requirements, feel free to reach out :)

How to lock two coroutines but allow original coroutine to enter

I launch a coroutine to do some work, and I need that work to hold a mutex. However, sometimes the doWork() function calls one() again, and a deadlock happens.
private val scope = CoroutineScope(Dispatchers.IO)
private val a = A()

fun start() {
    scope.launch {
        a.one()
    }
}
Then
class A {
    private val mutex = Mutex()

    suspend fun one() {
        mutex.withLock {
            doWork()
        }
    }
}
What I am doing causes a deadlock, because the mutex is already locked when one() is re-entered. Ideally I would have something like @Synchronized in Java, which lets the same thread back in, but I know coroutines are not threads.
Is there anything I can use to solve this? I cannot change the problem too much, because I cannot change some of this code myself.
Use communicating coroutines
You said you can't change some of the code, so this solution may not be an option for you. Locks aren't often a good fit with coroutines, though. A more idiomatic solution is to manage your shared resources by having different coroutines communicate with one another.
Instead of using a lock, you make it so that only one coroutine is ever allowed to access the shared resource. Other coroutines may send it work to do, but they may not access the shared resource directly. This guarantees that only one thing ever accesses the shared resource at any given time.
Say our shared resource is a function doSomething() that isn't thread-safe and should only be called by one thread at a time. We launch an actor coroutine that will receive requests. This coroutine is the 'owner' of the shared resource. Anything that wants to call doSomething() must do so by sending a request to this actor. Many things may send requests to the actor, but it will process the requests one at a time. Each time the actor receives a request, it simply calls the doSomething() function. Here I've used a Request class which can contain whatever parameters you need to pass to the shared function. It looks like this:
data class Request(...)

fun start() {
    val requests = scope.actor<Request> {
        consumeEach { request ->
            doSomething(request)
        }
    }
    scope.launch {
        requests.send(Request(...))
    }
}

suspend fun doSomething(request: Request) {
    // do some non-thread-safe work
}

Kotlin/Native multithreading using coroutines

I've been having a shot at Kotlin Multiplatform and it's brilliant, but threading stumps me. The freezing of state between threads makes sense conceptually, and it works fine in simple examples where small objects or primitives are passed back and forth, but in real-world applications I can't get around InvalidMutabilityException.
Take the following common code snippet from an Android app:
class MainViewModel(
    private val objectWhichContainsNetworking: ObjectWhichContainsNetworking
) {
    private var coroutineSupervisor = SupervisorJob()
    private var coroutineScope: CoroutineScope = CoroutineScope(Dispatchers.Main + coroutineSupervisor)

    private fun loadResults() {
        // Here: Show loading
        coroutineScope.launch {
            try {
                val result = withContext(Dispatchers.Default) { objectWhichContainsNetworking.fetchData() }
                // Here: Hide loading and show results
            } catch (e: Exception) {
                // Here: Hide loading and show error
            }
        }
    }
}
Nothing very complex, but if it is used in common code and run from Kotlin/Native then, pow, InvalidMutabilityException on MainViewModel.
It seems the reason for this is that anything passed to withContext is frozen recursively, so because objectWhichContainsNetworking is a property of MainViewModel and is used inside withContext, MainViewModel gets caught in the freeze.
So my question is: is this just a limitation of the current Kotlin/Native memory model? Or perhaps of the current version of coroutines? And is there any way around this?
Note: coroutines version 1.3.9-native-mt, Kotlin version 1.4.0.
Edit 1:
So it appears that the slimmed-down code above actually works fine. It turns out the incriminating code was a mutable var in the view model (used to keep a reference to the last view state), which becomes frozen and then throws the exception when it is mutated. I'm going to try using Flow/Channels to ensure no var reference is needed and see if that fixes the overall problem.
Note: if there is a way to avoid MainViewModel being frozen in the first place it would still be fantastic!
Edit 2:
Replaced the var with Flow. I couldn't get standard flow collection working on iOS until I used the helpers here: https://github.com/JetBrains/kotlinconf-app/blob/master/common/src/mobileMain/kotlin/org/jetbrains/kotlinconf/FlowUtils.kt.
MainViewModel still gets frozen, but as all its state is immutable it's no longer a problem. Hope it helps someone!
In your original code, you are referencing a field of the parent object, which causes you to capture the whole parent and freeze it. It is not an issue with coroutines. Coroutines follow the same rules as all the other concurrency libraries in Kotlin/Native: the lambda is frozen when it crosses threads.
class MainViewModel(
    private val objectWhichContainsNetworking: ObjectWhichContainsNetworking
) {
    //yada yada
    private fun loadResults() {
        coroutineScope.launch {
            try {
                val result = withContext(Dispatchers.Default) {
                    // The reference to objectWhichContainsNetworking is a field reference,
                    // so it captures (and freezes) the whole view model
                    objectWhichContainsNetworking.fetchData()
                }
            } catch (e: Exception) {}
        }
    }
}
To prevent this from happening:
class MainViewModel(
    private val objectWhichContainsNetworking: ObjectWhichContainsNetworking
) {
    init {
        ensureNeverFrozen()
    }
    //Etc
}
The most complicated thing about the memory model is this: getting used to what's being captured and avoiding it. It's not that hard once you get used to it, but you need to learn the basics.
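For instance, one common way to avoid the accidental capture (a sketch reusing objectWhichContainsNetworking from the question, not code from the original answer) is to copy the field into a local val first, so the lambda captures only that object rather than the whole view model:

private fun loadResults() {
    // Copy the field into a local; the withContext lambda then captures only
    // this object, not the enclosing MainViewModel
    val networking = objectWhichContainsNetworking
    coroutineScope.launch {
        val result = withContext(Dispatchers.Default) {
            networking.fetchData()
        }
        // Here: use result on the main thread
    }
}

The captured object itself (and the lambda) still gets frozen when it crosses threads, so it must be safely shareable, but the view model and its mutable state stay unfrozen.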
I've talked about this at length:
Practical Kotlin/Native Concurrency
Kotlin Native Concurrency Hands On
KotlinConf KN Concurrency
The memory model is changing, but it'll be quite a while before that lands. Once you get used to the current memory model, the immutability issues are generally straightforward to diagnose.

A method can only be called after the object is initialized - how to proceed?

I am finding a recurring pattern in my day-to-day coding, as follows:
var foo = new Foo();
foo.Initialize(params);
foo.DoSomething();
In these cases, foo.Initialize is absolutely needed so that foo can actually DoSomething; otherwise some of foo's properties would still be null/uninitialized.
Is there a pattern for this? How can I be sure DoSomething will only/always be called after Initialize? And how should I proceed if it isn't: raise an exception, silently ignore it, check some flag...?
Essentially you're saying Initialize is a constructor. So that code really should be part of the constructor:
var foo = new Foo(params);
foo.DoSomething();
That's exactly what a constructor is for: it's code which is guaranteed to run before any of the object methods are run, and its job is to check pre-conditions and provide a sane environment for other object methods to run.
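As a small illustration (a sketch only; the Bar dependency and the null check are assumptions, not code from the question), the constructor can validate its inputs so that any successfully constructed object is immediately safe to use:

public class Foo
{
    private readonly Bar _bar;

    public Foo(Bar bar)
    {
        // Pre-condition check: construction fails fast instead of leaving
        // a half-initialized object around
        _bar = bar ?? throw new ArgumentNullException(nameof(bar));
    }

    public void DoSomething()
    {
        // _bar is guaranteed to be initialized here
    }
}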
If there really is a lot of work taking place in the initialization, then I can certainly see the argument that it's "too much to put in a constructor". (I'm sure somebody with deeper familiarity with language mechanics under the hood could provide some compelling explanations on the matter, but I'm not that person.)
It sounds to me like a factory would be useful here. Something like this:
public class Foo
{
    private Foo()
    {
        // trivial initialization operations
    }

    private void Initialize(SomeType params)
    {
        // non-trivial initialization operations
    }

    public static Foo CreateNew(SomeType params)
    {
        var result = new Foo();
        result.Initialize(params);
        return result;
    }
}
And the consuming code becomes:
var foo = Foo.CreateNew(params);
foo.DoSomething();
All manner of additional logic could be put into that factory, including a variety of sanity checks of the params or validating that heavy initialization operations completed successfully (such as if they rely on external resources). It would be a good place to inject dependencies as well.
This basically comes down to a matter of cleanly separating concerns. The constructor's job is to create an instance of the object, the initializer's job is to get the complex object ready for intended use, and the factory's job is to coordinate these efforts and only return ready-for-use objects (handling any errors accordingly).

Why should Finalize be protected?

Reading this MSDN article, I came across this simple example, which really fits my case since I am writing some RAII classes over some native C++ interfaces that do the whole job (and I am doing this for the first time):
ref class Wrapper {
    Native *pn;
public:
    // resource acquisition is initialization
    Wrapper( int val ) { pn = new Native( val ); }
    // this will do our disposition of the native memory
    ~Wrapper() { delete pn; }
    void mfunc();
protected:
    // an explicit Finalize() method - as a failsafe
    !Wrapper() { delete pn; }
};
This class corresponds exactly to what I have written so far, except that I had not implemented the Finalize method. While wondering about its peculiarity and usage, and before I can grasp it much more deeply... I was wondering whether it is common practice and a good habit to put the finalizer method in protected scope.
The access modifier for a finalizer is essentially ignored as there are special rules for finalizers:
They can't be called directly (even from within the class itself).
When called by the system, they automatically call their base class finalizers.
Officially, the finalizer is a protected virtual method declared on Object: http://msdn.microsoft.com/en-us/library/system.object.finalize.aspx. In C# you cannot place an accessibility modifier on the finalizer.
In C++/CLI, you can specify any access modifier, but it is essentially ignored. That is, making it public or private changes nothing: the special rules are still enforced.
So I'd say just continue to make it protected, based on convention.