Kotlin lazy var throwing ClassCastException: kotlin.UNINITIALIZED_VALUE

I've reached the hair-pulling-out stage with this one! I have a lazy var declared in a Kotlin data class. It is a Boolean and calculates its value based on other fields in the class.
The way that the Lazy.kt implementation works is to create an anonymous object called UNINITIALIZED_VALUE to use as the basis for determining whether or not the lazy var has been initialized. Each lazy var instance will have its own unique instance of UNINITIALIZED_VALUE. This object is stored in the _value field of the Lazy object.
When the value of the lazy var is accessed, a check is made to see whether the current value matches UNINITIALIZED_VALUE by doing a reference comparison. If it still matches (i.e. the value has not yet been initialized), the initializer is called to create the value, which is then stored in the lazy var's _value field.
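To make the mechanism concrete, here is a minimal, single-threaded sketch of that sentinel pattern. This is not the actual Lazy.kt source (the standard library adds thread-safety around the same idea); it just shows the shape of the check described above.

// Minimal sketch of the sentinel pattern used by lazy delegates (not the real stdlib code).
private object UNINITIALIZED_VALUE

class SketchLazy<T>(initializer: () -> T) {
    private var initializer: (() -> T)? = initializer
    private var _value: Any? = UNINITIALIZED_VALUE

    @Suppress("UNCHECKED_CAST")
    val value: T
        get() {
            if (_value === UNINITIALIZED_VALUE) {   // reference comparison against the sentinel
                _value = initializer!!()            // first access: run the initializer
                initializer = null
            }
            // If the sentinel ever survives the check above, this cast is what blows up
            // with "ClassCastException: kotlin.UNINITIALIZED_VALUE cannot be cast to ..."
            return _value as T
        }
}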
What appears to be happening in my case is that the UNINITIALIZED_VALUE object is being created and assigned to the _value field during the lazy var's creation - so far so good. However, when I attempt to retrieve the value, inexplicably, the _value field somehow holds a different UNINITIALIZED_VALUE object from the one originally stored there, so the reference comparison fails, the lazy implementation thinks the value has already been set, attempts to cast the UNINITIALIZED_VALUE object to a Boolean, and boom! Exception!
What's really weird is that I am the only person in the team experiencing this issue - I have deleted and reinstalled everything - Android Studio, full git clone, gradle cache, emulators. I can't think of anything else and am unable to see the UNINITIALIZED_VALUE object being set twice with the debugger.

Related

Kotlin modifying dataclass object key from map changes the reference after modifying variable

I have a MutableMap whose keys are objects of a data class (User) and whose values are lists of another data class (Dog). If I have a variable holding a User object, put it in the MutableMap, and test whether the map contains that User, it says true. But if, after putting the user in the MutableMap, I change one of the attributes of the User object through the variable that holds it, the map says that it doesn't contain the user object any more.
This is an example
data class User(
    var name: String,
    var project: String,
)

data class Dog(
    var kind: String
)

fun main(args: Array<String>) {
    var mapUserDogs: MutableMap<User, MutableList<Dog>> = mutableMapOf()
    var userSelected = User("name2", "P2")

    mapUserDogs.put(
        User("name1", "P1"),
        mutableListOf(Dog("R1"), Dog("R2"))
    )
    mapUserDogs.put(
        userSelected,
        mutableListOf(Dog("R21"), Dog("R31"))
    )

    println(userSelected)
    println(mapUserDogs.keys.toString())
    println(mapUserDogs.contains(userSelected))
    println(mapUserDogs.values.toString())
    println("\n")

    userSelected.name = "Name3"

    println(userSelected)
    println(mapUserDogs.keys.toString())
    println(mapUserDogs.contains(userSelected))
    println(mapUserDogs.values.toString())
}
The print statements show this:
User(name=name2, project=P2)
[User(name=name1, project=P1), User(name=name2, project=P2)]
true
[[Dog(kind=R1), Dog(kind=R2)], [Dog(kind=R21), Dog(kind=R31)]]
User(name=Name3, project=P2)
[User(name=name1, project=P1), User(name=Name3, project=P2)]
false
[[Dog(kind=R1), Dog(kind=R2)], [Dog(kind=R21), Dog(kind=R31)]]
Process finished with exit code 0
But it doesn't make sense. Why does the map say that it doesn't contain the user object when it clearly still holds the reference to it after being modified?
User(name=Name3, project=P2)
[User(name=name1, project=P1), User(name=Name3, project=P2)]
The user in the keys collection was also changed when I modified the userSelected variable, so the object now has "Name3" as its name both in the variable and in the map's keys, but the map still says it doesn't contain it.
What can I do so that I can change the attributes of the userSelected object and have the map still return true from contains()? Doing the same process in reverse shows the same behaviour: if I get the user from the map and modify it, the userSelected variable is also modified, but if I later test whether the map contains userSelected, it says false.
What can I do so that I can change the attributes of the userSelected object and have the map still return true from contains()?
There is nothing you can do that preserves both your ability to look up the entry in the map and your ability to modify the key.
Make your data class immutable (val instead of var, etc.), and when you need to change a mapping, remove the old key and put in the new key. That's really the only useful thing you can do.
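For instance, a minimal sketch of that remove-and-re-insert approach, reusing the classes from the question but with val properties (copy() builds the replacement key; the renameUser helper is just for illustration):

data class User(val name: String, val project: String)
data class Dog(val kind: String)

fun renameUser(map: MutableMap<User, MutableList<Dog>>, oldKey: User, newName: String) {
    val dogs = map.remove(oldKey) ?: return        // take the old mapping out first
    map[oldKey.copy(name = newName)] = dogs        // re-insert under the new key
}

fun main() {
    val map = mutableMapOf(User("name2", "P2") to mutableListOf(Dog("R21"), Dog("R31")))
    renameUser(map, User("name2", "P2"), "Name3")
    println(map.containsKey(User("Name3", "P2")))  // true: lookups keep working
}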
To add to Louis Wasserman's correct answer:
This is simply the way that maps work in Kotlin: their contract requires that keys don't change significantly once stored. The docs for java.util.Map* spell this out:
Note: great care must be exercised if mutable objects are used as map keys. The behavior of a map is not specified if the value of an object is changed in a manner that affects equals comparisons while the object is a key in the map.
The safest approach is to use only immutable objects as keys. (Note that not just the object itself, but any objects it references, and so on, must all be immutable for it to be completely safe.)
You can get away with mutable keys as long as, once the key is stored in the map, you're careful never to change anything that would affect the results of calling equals() on it. (This may be appropriate if the object needs some initial set-up that can't all be done in its constructor, or to avoid having both mutable and immutable variants of a class.) But it's not easy to guarantee, and leaves potential problems for future maintenance, so full immutability is preferable.
The effects of mutating keys can be obvious or subtle. As OP noticed, mappings may appear to vanish, and maybe later reappear. But depending on the exact map implementation, it may cause further problems such as errors when fetching/adding/removing unrelated mappings, memory leaks, or even infinite loops. (“The behaviour… is not specified” means that anything can happen!)
What can I do so that I can change the attributes of the userSelected object and have the map still return true from contains()?
What you're trying to do there is to change the mapping. If you store a map from key K1 to value V, and you mutate the key to hold K2, then you're effectively saying “K1 no longer maps to V; instead, K2 now maps to V.”
So the correct way to do that is to remove the old mapping, and then add the new one. If the key is immutable, that's what you have to do — but even if the key is mutable, you must remove the old mapping before changing it, and then add a new mapping after changing it, so that it never changes while it's stored in the map.
(* The Kotlin library docs don't address this, unfortunately — IMHO this is one of many areas in which they're lacking, as compared to the exemplary Java docs…)
That happens because data classes in Kotlin are compared by value, unlike regular classes which are compared by reference. When you use a data class as a key, the map gets searched for a User with the same string values for the name and project fields, not for the object itself in memory.
For example:
data class User(
    var name: String,
    var project: String,
)

val user1 = User("Daniel", "Something Cool")
val user2 = User("Daniel", "Something Cool")

println(user1 == user2) // true
works because, even though they are different objects (and thus different references), they have the same name and project values.
However, if I were to do this:
user1.name = "Christian"
println(user1 == user2) // false
the answer would be false because they don't share the same value for all of their fields.
If I made User a standard class:
class User(
    var name: String,
    var project: String,
)

val user1 = User("Daniel", "Something Cool")
val user2 = User("Daniel", "Something Cool")

println(user1 == user2) // false
the answer would be false because they are different references, even though they share the same values.
For your code to work the way you want, make User a regular class instead of a data class.
That's the key difference between regular classes and data classes: both are passed around by reference, but a regular class compares instances by identity, while a data class generates equals() and hashCode() from the property values in its primary constructor. Data classes are essentially collections of values with (optionally) some methods attached to them; regular classes are individual objects.
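To see why the lookup fails even though the very same instance is still reachable through keys (as the question's output shows): mutableMapOf gives you a hash-based map (a LinkedHashMap on the JVM), which files each key under the hashCode it had at insertion time, and a data class's generated hashCode changes when its properties change. A small sketch, assuming that default hash-based map:

data class User(var name: String, var project: String)

fun main() {
    val user = User("name2", "P2")
    val hashAtInsertion = user.hashCode()
    val map = mutableMapOf(user to "some dogs")

    user.name = "Name3"                           // mutate the key after insertion

    println(user.hashCode() == hashAtInsertion)   // false: the generated hashCode changed
    println(map.containsKey(user))                // false: the lookup probes the wrong bucket
    println(map.keys.first() === user)            // true: the same instance is still stored as a key
}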

What's the difference between an object and a data object?

The other day I noticed that I sometimes put data in front of objects and other times not:
object A
data object B
What's the difference between an object and a data object?
The fact that data is allowed on an object declaration is in fact a bug (KT-6486) which should be fixed.
data is a modifier which causes the compiler to generate equals, hashCode, toString, copy and componentN functions. It doesn't make much sense when applied to an object declaration, for two reasons:
An object declaration cannot have a constructor, and all these functions work based on properties defined in the primary constructor.
There's only one instance of any object at runtime.
So no componentN functions would be generated, copy can't work, and the generated equals/hashCode/toString implementations will be equivalent to the default ones from Any which are based on identity.
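As a quick illustration of the contrast (a sketch, unrelated to the original question's code):

// What the data modifier buys a class with a primary constructor...
data class Point(val x: Int, val y: Int)

// ...versus a plain object declaration, which has no primary constructor
// and only one instance, so it just inherits Any's identity-based members.
object Config

fun main() {
    println(Point(1, 2) == Point(1, 2))   // true: generated structural equals
    println(Point(1, 2).copy(y = 3))      // Point(x=1, y=3): generated copy and toString
    println(Config == Config)             // true, but only because it is the same instance
    println(Config)                       // default Any.toString, something like Config@1b6d3586
}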

Jackson ObjectMapper conversion issues

I am new to using org.codehaus.jackson.map.ObjectMapper, and while working with it I ran into some issues that I wanted to run by the experts.
I am using org.codehaus.jackson.map.ObjectMapper to convert my Java objects to JSON and then pass the result to my views for further use. I am wiring it all up with Spring, and here is what I am facing:
During conversion, it converts all my float or double (basically decimal) values to integers. Is there a way I can stop this from happening, by simply disabling or enabling a feature in my ObjectMapper wiring or in some other way?
I also noticed that the object I am converting (which contains other objects inside it) loses the order of elements inside a collection. Say I have an object that has a Collection of other objects as an instance variable, and that inner collection has its elements in the order "XYZ"; when it gets converted, it loses this order and comes up with its own.
How can I preserve exactly the same order that I had in my Java object before it was sent for conversion?
Some of the methods in the object I am converting return boolean values and have some logic inside, and sometimes, because of differences in the data, they end up throwing a NullPointerException during the conversion. I know I should check for nulls before conversion, but is there a way for the Jackson mapper to ignore these null objects when it encounters them during the conversion?

GPARs async functions and passing references that are being updated by another thread

I am using GPARs asynchronous functions to fire off a process as each line in a file is parsed.
I am seeing some strange behavior that makes me wonder if I have an issue with thread safety.
Let's say I have a current object that is being loaded up with values from the current row in an input spreadsheet, like so:
class Uploader {
    MyRowObject currentRowObject
}
Once it has all the values from the current row, I fire off an async closure that looks a bit like this:
Closure processCurrentRowObject = { ->
    myService.processCurrentRowObject(currentRowObject)
}.asyncFun()
It is defined in the same class, so it has access to the currentRowObject.
While that is off and running, I parse the next row, and start by creating a new object:
MyObject currentObject = new MyObject()
and start loading it up with values.
I assumed this would be safe, and that the asynchronous function would keep pointing to the previous object. However, I wonder whether, because I am letting the closure bind to the reference, the reference is somehow getting updated inside the async function, and I am pulling the object instance out from under it, so to speak, changing it while the function is still trying to work on the previous instance.
If so, any suggestions for fixing? Or am I safe?
Thanks!
I'm not sure I fully understand your case, however, here's a quick tip.
Since it is always dangerous to share a single mutable object among threads, I'd recommend completely separating the row objects used for different rows:
final localRowObject = currentRowObject
currentRowObject = null

Closure processCurrentRowObject = { ->
    myService.processCurrentRowObject(localRowObject)
}.asyncFun()

Ensuring inserts after a call to a custom NHibernate IIdentifierGenerator

The setup
Some of the "old old old" tables of our database use an exotic primary key generation scheme [1] and I'm trying to overlay this part of the database with NHibernate. This generation scheme is mostly hidden away in a stored procedure called, say, 'ShootMeInTheFace.GetNextSeededId'.
I have written an IIdentifierGenerator that calls this stored proc:
public class LegacyIdentityGenerator : IIdentifierGenerator, IConfigurable
{
    // ... snip ...

    public object Generate(ISessionImplementor session, object obj)
    {
        var connection = session.Connection;

        using (var command = connection.CreateCommand())
        {
            SqlParameter param;

            session.ConnectionManager.Transaction.Enlist(command);

            command.CommandText = "ShootMeInTheFace.GetNextSeededId";
            command.CommandType = CommandType.StoredProcedure;

            param = command.CreateParameter() as SqlParameter;
            param.Direction = ParameterDirection.Input;
            param.ParameterName = "#sTableName";
            param.SqlDbType = SqlDbType.VarChar;
            param.Value = this.table;
            command.Parameters.Add(param);

            // ... snip ...

            command.ExecuteNonQuery();

            // ... snip ...

            return ((IDataParameter)command
                .Parameters["#sTrimmedNewId"]).Value as string;
        }
    }
}
The problem
I can map this in the XML mapping files and it works great, BUT....
It doesn't work when NHibernate tries to batch inserts, such as in a cascade, or when the session is not Flush()ed after every call to Save() on a transient entity that depends on this generator.
That's because NHibernate seems to be doing something like
for (each thing that I need to save)
{
    [generate its id]
    [add it to the batch]
}
[execute the sql in one big batch]
This doesn't work because the generator asks the database every time, so NHibernate just ends up getting the same ID generated multiple times, since it hasn't actually saved anything yet.
The other NHibernate generators like IncrementGenerator seem to get around this by asking the database for the seed value once and then incrementing the value in memory during subsequent calls in the same session. I would rather not do this in my implementation if I have to, since all of the code that I need is sitting in the database already, just waiting for me to call it correctly.
Is there a way to make NHibernate actually issue the INSERT after each call to generate an ID for entities of a certain type? Fiddling with the batch size settings doesn't seem to help.
Do you have any suggestions/other workarounds besides re-implementing the generation code in memory or bolting on some triggers to the legacy database? I guess I could always treat these as "assigned" generators and try to hide that fact somehow within the guts of the domain model....
Thanks for any advice.
The update: 2 months later
It was suggested in the answers below that I use an IPreInsertEventListener to implement this functionality. While this sounds reasonable, there were a few problems with this.
The first problem was that setting the id of an entity to the AssignedGenerator and then not actually assigning anything in code (since I was expecting my new IPreInsertEventListener implementation to do the work) resulted in an exception being thrown by the AssignedGenerator, since its Generate() method essentially does nothing but check to make sure that the id is not null, throwing an exception otherwise. This is worked around easily enough by creating my own IIdentifierGenerator that is like AssignedGenerator without the exception.
The second problem was that returning null from my new IIdentifierGenerator (the one I wrote to overcome the problems with the AssignedGenerator) resulted in the innards of NHibernate throwing an exception, complaining that a null id was generated. Okay, fine, I changed my IIdentifierGenerator to return a sentinel string value, say, "NOT-REALLY-THE-REAL-ID", knowing that my IPreInsertEventListener would replace it with the correct value.
The third problem, and the ultimate deal-breaker, was that IPreInsertEventListener runs so late in the process that you need to update both the actual entity object as well as an array of state values that NHibernate uses. Typically this is not a problem and you can just follow Ayende's example. But there are three issues with the id field relating to the IPreInsertEventListeners:
The property is not in the #event.State array but instead in its own Id property.
The Id property does not have a public set accessor.
Updating only the entity but not the Id property results in the "NOT-REALLY-THE-REAL-ID" sentinel value being passed through to the database since the IPreInsertEventListener was unable to insert in the right places.
So my choice at this point was to use reflection to get at that NHibernate property, or to really sit down and say "look, the tool just wasn't meant to be used this way."
So I went back to my original IIdentifierGenerator and made it work for lazy flushes: it got the high value from the database on the first call, and then I re-implemented that ID generation function in C# for subsequent calls, modeling this after the Increment generator:
private string lastGenerated;

public object Generate(ISessionImplementor session, object obj)
{
    string identity;

    if (this.lastGenerated == null)
    {
        identity = GetTheValueFromTheDatabase();
    }
    else
    {
        identity = GenerateTheNextValueInCode();
    }

    this.lastGenerated = identity;

    return identity;
}
This seems to work fine for a while, but like the increment generator, we might as well call it the TimeBombGenerator. If there are multiple worker processes executing this code in non-serializable transactions, or if there are multiple entities mapped to the same database table (it's an old database, it happened), then we will get multiple instances of this generator with the same lastGenerated seed value, resulting in duplicate identities.
##$##$#.
My solution at this point was to make the generator cache a dictionary of WeakReferences to ISessions and their lastGenerated values. This way, the lastGenerated is effectively local to the lifetime of a particular ISession, not the lifetime of the IIdentifierGenerator, and because I'm holding WeakReferences and culling them out at the beginning of each Generate() call, this won't explode in memory consumption. And since each ISession is going to hit the database table on its first call, we'll get the necessary row locks (assuming we're in a transaction) we need to prevent duplicate identities from happening (and if they do, such as from a phantom row, only the ISession needs to be thrown away, not the entire process).
It is ugly, but more feasible than changing the primary key scheme of a 10-year-old database. FWIW.
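The pattern itself (cache the last value per owning session, held weakly so the cache dies with the session) is general. Here is a rough sketch in Kotlin rather than C#, with a hypothetical Session type and a placeholder fetchSeedFromDatabase() standing in for the stored-procedure call, since the original is NHibernate-specific:

import java.util.Collections
import java.util.WeakHashMap

// Hypothetical stand-ins for the NHibernate session and the database call described above.
class Session
fun fetchSeedFromDatabase(session: Session): Long = 1000L   // placeholder for the real stored-procedure call

// WeakHashMap keys do not keep sessions alive, so each cached value
// lives exactly as long as the session it belongs to.
private val lastGeneratedBySession: MutableMap<Session, Long> =
    Collections.synchronizedMap(WeakHashMap())

fun nextId(session: Session): Long {
    val next = lastGeneratedBySession[session]
        ?.let { it + 1 }                       // later calls in this session: increment in memory
        ?: fetchSeedFromDatabase(session)      // first call in this session: hit the database
    lastGeneratedBySession[session] = next
    return next
}

fun main() {
    val session = Session()
    println(nextId(session))   // 1000: first call fetches the seed
    println(nextId(session))   // 1001: subsequent calls increment in memory
}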
[1] If you care to know about the ID generation: you take a substring(len - 2) of all of the values currently in the PK column, cast them to integers and find the max, add one to that number, sum that number's digits, and append the sum of those digits as a checksum. (If the database has one row containing "1000001", then we would get max 10000, +1 equals 10001, checksum is 02, resulting in the new PK "1000102".) Don't ask me why.
A potential workaround is to generate and assign the ID in an event listener rather than using an IIdentifierGenerator implementation. The listener should implement IPreInsertEventListener and assign the ID in OnPreInsert.
Why don't you just make private string lastGenerated; static?