Is Kotlin synchronized() not locking basic types?

class Notification(val context: Context, title: String, message: String) {
    private val channelID = "TestMessages"

    companion object ID {
        var s_notificationID = -1
    }

    init {
        var notificationID = -1
        synchronized(s_notificationID) {
            if (++s_notificationID == 0)
                createNotificationChannel()
            notificationID = s_notificationID
        }
The above is being called simultaneously from two threads. A breakpoint in createNotificationChannel() clearly showed that sometimes s_notificationID equals 1.
However, if I change
synchronized(s_notificationID)
to synchronized(ID)
then it seems to lock fine.
Is synchronized() not locking basic types? And if so, why does it compile?

A look at the generated JVM bytecode indicates that the ID example looks like
synchronized(ID) { ... }
which is what you'd expect. However, the s_notificationID example looks more like
synchronized(Integer.valueOf(s_notificationID)) { ... }
In Java, we can only synchronize on objects, not on primitives. Kotlin mostly removes this distinction, but it looks like you've found one place where the implementation still seeps through. Since s_notificationID is an int as far as the JVM is concerned (hence, not an object) but synchronized expects an object, Kotlin is "smart" enough to wrap the value in Integer.valueOf on demand. Unfortunately for you, that produces wildly inconsistent results, because
This method will always cache values in the range -128 to 127, inclusive, and may cache other values outside of this range.
So for small numbers, this is guaranteed to lock on some cached object in memory that you don't control. For large ones, it may be a fresh object (hence always unlocked) or it might again end up on a cached object out of your hands.
The lesson here, it seems, is: Don't synchronize on primitive types.
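A quick standalone sketch (not part of the question's code, with made-up values) makes the caching visible by boxing Ints by hand, the same way the compiler does for synchronized:
fun main() {
    val small1: Any = 100
    val small2: Any = 100
    println(small1 === small2)   // true: both box to the same cached Integer, so they would share one monitor

    val big1: Any = 10_000
    val big2: Any = 10_000
    println(big1 === big2)       // typically false: two distinct boxed objects, i.e. two different locks
}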

Silvio Mayolo explained why it is not a good idea to synchronize on primitives (actually, I think the compiler should warn about this). But I believe there is another problem with this code, probably the main one that makes your synchronized blocks work in parallel.
The problem is that you replace the value of s_notificationID. Even if it would be an object, not a primitive, your synchronized blocks would still run in parallel, because each call to synchronized uses a different object. This is why in Java we usually synchronize on this and not on a field that we need to modify.
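As a sketch of that fix, assuming the rest of the class stays as in the question: synchronize on something that is never reassigned, for example a dedicated lock object (the lock property below is an addition, not part of the original code):
companion object ID {
    val lock = Any()              // dedicated lock object; never reassigned
    var s_notificationID = -1
}

init {
    var notificationID = -1
    synchronized(lock) {          // or synchronized(ID): any stable object works
        if (++s_notificationID == 0)
            createNotificationChannel()
        notificationID = s_notificationID
    }
}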

TL;DR The lesson here, it seems, is: Don't synchronize on primitive types.
synchronized(i) where i is Int, is actually synchronized(Integer.valueOf(i)).
Only in the range -128 to 127 is this value guaranteed to be cached.
Another fact is that ++i cannot be viewed as a mutation of the "object" i, but rather as replacing i with a new "object" whose value is i+1.
Thank you broot & Silvio Mayolo for the above.
Experiments I did confirm the above.
In my original code I removed the ++ from ++s_notificationID. Amazingly or not, the lock now worked.
With that change in place, I then changed var s_notificationID = -1 to var s_notificationID = -1000. Even more amazingly, the lock stopped working again.
Still, I think this anomaly of basic types undermines Kotlin's attempt to treat basic types as objects, and it should be mentioned clearly in the Kotlin documentation.

Related

Should I use an explicit return type for a String variable in Kotlin?

In Kotlin, we can declare a read-only string variable either with an explicit type annotation or without one (inferred), as below.
val variable_name = "Hello world"
or
val variable_name: String = "Hello world"
I'm trying to figure out which is the best approach in Kotlin and why. Any ideas?
If this is a public variable, using an explicit return type is always a good idea.
It can make the code easier to read and use. This is why your IDE probably shows the return type anyway, even when you omit it from the code. It's less important for simple properties like yours where the return type is easy to see at a glance, but when the property or method is more than a few lines it makes much more difference.
It prevents you from accidentally changing the type. With an explicit return type, if you change the contents of the property so that it doesn't actually return the correct type, you'll get an immediate compile error in that method or property. With an implicit type, if you change the contents of the method you could see cascading errors throughout your code base, making it hard to find the source of the error.
It can actually speed up your IDE! See this blog post from the JetBrains team for more information.
For private variables, explicit return types are much less important, because the above points don't generally apply.
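A small made-up illustration of the second point (the property names are hypothetical):
// Explicit type: a careless edit fails right at the declaration.
val greeting: String = "Hello world"
// val greeting: String = 42        // compile error here, where the mistake was made

// Inferred type: the declaration still compiles if someone changes the value,
// and every caller that expected a String breaks instead.
val inferredGreeting = "Hello world"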
Personally, either one works and neither is wrong, but I would choose the latter in a team project, where growth in project size and handovers (members leaving, new hires, or, worse, people being shuffled around) are likely. I also consider the latter more of a courtesy.
Regardless of the discipline every member follows, be it clean architecture, design patterns, or clean coding, bloated code and files still crop up occasionally in big projects, so the latter helps anyone, especially new members, recognize at first glance what data type they are dealing with.
Again, this is not about right or wrong. Kotlin is designed to be idiomatic, and type inference like this was added so that code can be shorter and cleaner, which is one of its many promises; but regardless of the language, it is sometimes at the developer's discretion whether the code is readable or not.
This also applies to function return types. I always specify mine so that the "new guy" or any other developer understands my function signatures right away, saving them tons of brain cells figuring out what's going on.
fun isValidEmail(): Boolean = if (condition) true else false
fun getValidatedPerson(): Person = repository.getAuthenticatedPersonbyId(id)
fun getCurrentVisibleScreen(): @Composable () -> Unit = composables.get()
fun getCurrentContext(): Context = if (isActivity) activityContext else applicationContext

How to achieve lateinit effect for primitive types?

I've read Why doesn't Kotlin allow to use lateinit with primitive types?.
However, there is a benefit to using lateinit: if an error is caused by a missing initialization, that is immediately clear from the error message. But for primitive types that cannot use lateinit, such as Int, the user has to assign a placeholder value such as 0. If the appropriate value should be much greater than 0 and must be determined later, and the user then forgets to initialize it, the program only fails somewhere down the line. Is there any way to make whoever reads that error message immediately realize that the error is not caused by something else?
thanks a lot.
And using var v: Int? = null instead is very bad, because it makes operations like v-- much more complex.
The answer you linked explains why it is technically impossible to support lateinit for primitive types. So even if there are benefits to having it, then... well, see above, it is technically impossible.
You can use a property delegate for a very similar effect:
var v by Delegates.notNull<Int>()
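A minimal, self-contained sketch of how the delegate behaves (the class and property names are made up):
import kotlin.properties.Delegates

class Config {
    // Acts like lateinit for an Int: no dummy default value needed.
    var port by Delegates.notNull<Int>()
}

fun main() {
    val config = Config()
    // println(config.port)  // would throw IllegalStateException because port was never initialized
    config.port = 8080
    println(config.port)     // 8080
}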

Kotlin [1..n] constructor parameter

Is there a way to enforce 1..* parameters in Kotlin that will still allow the spread operator?
I've tried:
class Permission(
    // 1..n compliance
    accessiblePage: Webpage,
    vararg accessiblePages: Webpage
) {
And that does enforce 1..*, but it also means that Permission(*pages) won't work, so that's a pretty awkward interface.
Is there an easy way to enforce 1..* without a runtime constructor error?
There is, unfortunately, no way to check this in Kotlin at compile time aside from the way you mentioned. Since vararg parameters are really just syntactic sugar for an array, your code is essentially
class Permission(
    accessiblePage: Webpage,
    accessiblePages: Array<Webpage>
)
So the question then becomes "Can you ensure that an array has at least one element in it at compile time?" For most languages, that's a clear no, although the Kotlin team did at one point experiment with it:
[C]urrently, Kotlin compiler doesn't collect static information about collections size. FYI, at some point Kotlin team tried to collect such information and use it for warnings about possible IndexOutOfBoundException and stuff like that, but it was found that there were a very little demand on such diagnostics in real-life projects, so, given complexity of such analysis, it was abandoned[.]
(https://github.com/Kotlin/KEEP/issues/139#issuecomment-405551324)
It's possible that this metadata will be added at some point, but you shouldn't expect it soon.
That said, you could always combine a runtime check in the case of an Array with an overloaded signature in the case of varargs. This would mean that your vararg example would work the same, but passing an array to the function would subject it to a runtime check (you'd also not have to use the spread operator anymore):
class Permission(
    accessiblePage: Webpage,
    vararg accessiblePages: Webpage
) {
    // Secondary constructor for plain arrays: checks at runtime, then delegates to the primary constructor.
    constructor(accessiblePages: Array<Webpage>) : this(
        accessiblePages.firstOrNull()
            ?: throw IllegalArgumentException("Must have at least one accessible page."),
        *accessiblePages.drop(1).toTypedArray()
    )
}
called like
val permission1 = Permission(Webpage(), Webpage())
val permission2 = Permission() // Would fail at compile time
val pages = arrayOf<Webpage>()
val permission3 = Permission(pages) // Would fail at runtime. Note also the lack of the spread operator.

Determine whether a String is a compile-time constant

Given a reference to any String, is it possible to programmatically determine whether this is a reference to a compile time constant?
Or if it's not, then whether it's stored in the intern pool without doing s.intern() == s?
isConst("foo") -> true
isConst("foo" + "bar") -> true // 2 literals, 1 compile time string
isConst(SomeClass.SOME_CONST_STRING) -> true
isConst(readFromFile()) -> false
isConst(readFromFile().intern()) -> false // true would be acceptable too
(context for comments below: the question originally asked about literals)
To clarify the original question, every string literal is a compile-time constant, but not every compile-time constant has to originate from a string literal.
At runtime, there is no difference between a String object that has been constructed for a compile-time constant or constructed by other means. Strings constructed for compile-time constants are automatically added to a pool, but other strings may be added to the same pool manually via intern(). Since strings are constructed and added lazily, it is even possible to construct and add a string manually, so that compile-time constants with the same value get resolved to that string later-on. This answer exploits this possibility, to detect when the String instance for a compile-time constant is actually resolved.
It’s possible to derive from that answer a method to simply detect whether a string is in the pool or not:
public static boolean isInPool(String s) {
    return s == new String(s.toCharArray()).intern();
}
new String(s.toCharArray()) constructs a string with the same contents, which is not in the pool and calling intern() on it must resolve to the same reference as s if s refers to an instance in the pool. Otherwise, intern() may resolve to another existing object or add our string or a newly constructed string and return a reference to it, depending on the implementation, but in either case, the returned reference will be different to s.
Note that this method has the side effect of adding a string to the pool if it wasn’t there before, which will stay there at least to the next garbage collection cycle, perhaps up to the next full gc, depending on the implementation.
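For illustration, here is a Kotlin port of the same check, run against the examples from the question (readFromFile() is stood in for by a string built at runtime; the exact interning behaviour is implementation-dependent):
fun isInPool(s: String): Boolean =
    s === String(s.toCharArray()).intern()

fun main() {
    println(isInPool("foo"))             // true: literals are pooled
    println(isInPool("foo" + "bar"))     // true: folded into one compile-time constant
    val runtime = StringBuilder("fo").append("o").toString()
    println(isInPool(runtime))           // false: built at runtime, not pooled
    println(isInPool(runtime.intern()))  // true once it has been interned
}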
The test method might be nice for debugging or satisfying curiosity, but there is no point in ever using it in production code. Application code should not depend on that property and the use case proposed in a comment, enforcing pooled strings in performance critical code, is not a good idea.
Besides the point that the test itself is expensive and counteracting the purpose of performance improvement, the underlying assumption that pooled strings are better than non-pooled is flawed. Not being in the pool doesn’t imply that the application will perform an expensive reconstruction every time it invokes the performance critical code. It may simply hold a reference in a variable or use a HashMap, both approaches way more efficient than calling intern(). In fact, even temporary strings can be the most efficient solution in some cases.
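If the goal is simply to reuse string instances in performance-critical code, a plain map-based cache already achieves that without touching the intern pool (a hypothetical sketch, not part of the answer above):
// Hypothetical canonicalizing cache: keeps one instance per distinct value.
object StringCache {
    private val cache = HashMap<String, String>()

    fun canonical(s: String): String = cache.getOrPut(s) { s }
}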

Will code written in this style be optimized out by RVO in C++11?

I grew up in the days when passing around structures was bad mojo because they are often large, so pointers were always the way to go. Now that C++11 has quite good RVO (right value optimization), I'm wondering if code like the following will be efficient.
As you can see, my class has a bunch of vector structures (not pointers to them). The constructor accepts value structures and stores them away.
My -hope- is that the compiler will use move semantics so that there really is no copying of data going on; the constructor will (when possible) just assume ownership of the values passed in.
Does anyone know if this is true, and happens automagically, or do I need a move constructor with the && syntax and so on?
// ParticleVertex
//
// Class that represents the particle vertices
class ParticleVertex : public Vertex
{
public:
    D3DXVECTOR4 _vertexPosition;
    D3DXVECTOR2 _vertexTextureCoordinate;
    D3DXVECTOR3 _vertexDirection;
    D3DXVECTOR3 _vertexColorMultipler;

    ParticleVertex(D3DXVECTOR4 vertexPosition,
                   D3DXVECTOR2 vertexTextureCoordinate,
                   D3DXVECTOR3 vertexDirection,
                   D3DXVECTOR3 vertexColorMultipler)
    {
        _vertexPosition = vertexPosition;
        _vertexTextureCoordinate = vertexTextureCoordinate;
        _vertexDirection = vertexDirection;
        _vertexColorMultipler = vertexColorMultipler;
    }

    virtual const D3DVERTEXELEMENT9 * GetVertexDeclaration() const
    {
        return particleVertexDeclarations;
    }
};
Yes, indeed you should trust the compiler to optimally "move" the structures:
Want Speed? Pass By Value
Guideline: Don’t copy your function arguments. Instead, pass them by value and let the compiler do the copying
In this case, you'd move the arguments into the constructor call:
ParticleVertex myPV(std::move(pos),
                    std::move(textureCoordinate),
                    std::move(direction),
                    std::move(colorMultipler));
In many contexts, the std::move will be implicit, e.g.
D3DXVECTOR4 getFooPosition() {
    D3DXVECTOR4 result;
    // bla
    return result; // NRVO, std::move only required with MSVC
}
ParticleVertex myPV(getFooPosition(), // implicit rvalue-reference moved
RVO means Return Value Optimization, not "right value optimization".
RVO is an optimization performed by the compiler when a function returns by value and it is clear that the code returns a temporary object created in the body, so the copy can be avoided: the function constructs the returned object directly.
What C++11 introduces is move semantics. Move semantics allows us to "move" a resource from a temporary into a target object.
But moving implies that the object the resource came from is left in a valid but unspecified state after the move, effectively unusable. That is not (I think) what you want in your class, because the vertex data is used by the class whether or not the user calls this function.
So, use the usual return by const reference to avoid copies.
On the other hand, DirectX provides handles to the resources (pointers), not the real resources. Pointers are basic types and copying them is cheap, so don't worry about performance. In your case, you are using 2D/3D vectors; copying them is cheap too.
Personally, I think that returning a pointer to an internal resource is always a very bad idea. In this case, the best approach is to return by const reference.