Declaring a computed Python-level property in pydantic

I have a class deriving from pydantic.BaseModel and would like to create a "fake" attribute, i.e. a computed property. The property builtin does not seem to work with Pydantic in the usual way. Below is the MWE, where the class stores value and defines a read/write property called half with the obvious meaning. Reading the property works fine with Pydantic, but the assignment fails.
I know Pydantic modifies low-level details of attribute access; perhaps there is a different way to define a computed field in Pydantic?
import pydantic

class Object(object):
    def __init__(self, *, value): self.value = value
    half = property(lambda self: .5*self.value, lambda self, h: setattr(self, 'value', h*2))

class Pydantic(pydantic.BaseModel):
    class Config:
        extra = 'allow'
    value: float
    half = property(lambda self: .5*self.value, lambda self, h: setattr(self, 'value', h*2))

o, p = Object(value=1.), Pydantic(value=1.)
print(o.half, p.half)
o.half = p.half = 2
print(o.value, p.value)
outputs (value=1. was not modified by assigning half in the Pydantic case):
0.5 0.5
4 1.0

I happened to be working on the same problem today. Officially it is not supported yet, as discussed here.
However, I did find the following example which works well:
from pydantic import BaseModel, validator

class Person(BaseModel):
    first_name: str
    last_name: str
    full_name: str = None

    @validator("full_name", always=True)
    def composite_name(cls, v, values, **kwargs):
        return f"{values['first_name']} {values['last_name']}"
Do make sure your derived field comes after the fields you want to derive it from, else the values dict will not contain the needed values (e.g. full_name comes after first_name and last_name that need to be fetched from values).
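For reference, a minimal usage sketch of the validator approach above (assuming pydantic v1, where validators receive the values dict; the names are purely illustrative):

p = Person(first_name="Ada", last_name="Lovelace")
print(p.full_name)   # Ada Lovelace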

Instead of using a property, here's an example which shows how to use pydantic.root_validator to compute the value of an optional field:
https://daniellenz.blog/2021/02/20/computed-fields-in-pydantic/
I've adapted this for a similar application:
import typing
from pydantic import BaseModel, PositiveInt, conlist, constr, root_validator

class Section(BaseModel):
    title: constr(strip_whitespace=True)
    chunks: conlist(min_items=1, item_type=Chunk)
    size: typing.Optional[PositiveInt] = None
    role: typing.Optional[typing.List[str]] = []
    license: constr(strip_whitespace=True)

    @root_validator
    def compute_size(cls, values) -> typing.Dict:
        if values["size"] is None:
            values["size"] = sum([
                chunk.get_size()
                for chunk in values["chunks"]
            ])
        return values
In this case each element of the discriminated union chunks has a get_size() method to compute its size. If the size field isn't specified explicitly in serialization (e.g., input from a JSON file) then it gets computed.
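Chunk itself isn't shown here (in the linked post it is a discriminated union); a single hypothetical stand-in is enough to illustrate the mechanism. Its field, its get_size() implementation, and the example values below are made up, and it has to be defined before Section so the conlist annotation resolves:

class Chunk(BaseModel):
    text: str

    def get_size(self) -> int:
        # hypothetical: a chunk's size is just the length of its text
        return len(self.text)

section = Section(title="Intro", chunks=[Chunk(text="hello world")], license="CC-BY-4.0")
print(section.size)   # 11, filled in by the root validator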

Created a pip package that allows you to easily create computed properties.
Here you can check it out:
https://pypi.org/project/pydantic-computed/
Using the package, the example of computing half of a value would look like this:
from pydantic import BaseModel
from pydantic_computed import Computed, computed

class SomeModel(BaseModel):
    value: float
    value_half: Computed[float]

    @computed("value_half")
    def compute_value_half(value: float):
        return value / 2
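Assuming the package behaves as the snippet above suggests (value_half is derived, so only value needs to be supplied), usage would be:

model = SomeModel(value=10.0)
print(model.value_half)   # 5.0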

Related

Kotlin: Omitting enum name when it's unambiguous

With Swift enums you can omit the name of the enum in cases where only a value of that type can be used.
So when given the enum (Swift/Kotlin)
enum (class) CompassPoint {
    case north
    case south
    case east
    case west
}
Swift only needs the enum's name when creating a new variable:
// type unclear, enum name needed
var directionToHead = CompassPoint.west
// type clear, enum name can be dropped
directionToHead = .east
// type clear, enum name can be dropped
switch directionToHead {
case .north:
    print("Lots of planets have a north")
case .south:
    print("Watch out for penguins")
case .east:
    print("Where the sun rises")
case .west:
    print("Where the skies are blue")
}
While in Kotlin, for the same situation you'd have to write
// type unclear, enum name needed
var directionToHead = CompassPoint.west
// type clear, enum name still needed
directionToHead = CompassPoint.east
// each case needs the enum name
when(directionToHead) {
    CompassPoint.north -> println("Lots of planets have a north")
    CompassPoint.south -> println("Watch out for penguins")
    CompassPoint.east -> println("Where the sun rises")
    CompassPoint.west -> println("Where the skies are blue")
}
Is there a reason for this, and/or are there situations in Kotlin where just .north or north can be used?
Edit: It seems importing the enum 'fixes' this and is necessary even when the enum is defined in the same file as it is used.
While this helped practically, I still don't understand why the import is needed.
Just use an import, so you can use the enum values without the enum name:
import CompassPoint.*
As for why the import is needed: simply because the enum isn't treated specially. import CompassPoint.* lets you write just <name> for anything you'd write as CompassPoint.<name> without it (provided the name doesn't conflict with anything else). If you happen to be in the file where CompassPoint is defined, it works exactly the same.
You can refer to the values as north etc. without imports inside the enum definition, exactly like you can refer to an object's members inside this object:
object Obj {
    val v = 0
    val v1 = v
}

val v2 = Obj.v
or
import Obj.* // or import Obj.v
val v2 = v
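Applied to the question's example, a minimal sketch (the package name compass and the describe function are made up for illustration):

// Compass.kt
package compass

enum class CompassPoint { north, south, east, west }

// Main.kt
package compass

import compass.CompassPoint.*   // brings north, south, east, west into scope

fun describe(directionToHead: CompassPoint) = when (directionToHead) {
    north -> "Lots of planets have a north"
    south -> "Watch out for penguins"
    east -> "Where the sun rises"
    west -> "Where the skies are blue"
}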
FWIW, the Kotlin team is considering implementing unqualified enum constants when the enum type is unambiguous.
They're currently doing a feature survey to gather feedback, and unqualified enum constants are on there too.

Use apply on a data class in Kotlin

I know how to use the apply function on a normal Kotlin class but have not been able to use it with a data class:
data class Person(name: String)

val person = Person().apply {
    name = "Tony Stark"
}
I get a compile message of:
No value passed for parameter 'name'
The issue is that name is only a constructor parameter and not a property; a data class requires its primary constructor parameters to be val or var anyway. Fix it like this:
data class Person(val name: String)
The apply function works the same with any class, but there are two errors in your code snippet:
The parameter in the Person constructor is not marked var or val, so the class has no name property at all. It would be better to make it a var so its value can be changed.
You declared the constructor with one parameter, but you are calling an empty constructor, which is an error.
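Putting both fixes together, one way to make the original snippet compile (the default value for name is just one option to make the empty constructor call work):

data class Person(var name: String = "")

val person = Person().apply {
    name = "Tony Stark"
}

println(person)   // Person(name=Tony Stark)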

Kotlin: Intrinsics.areEqual infinite loop (stack overflow)

java.lang.StackOverflowError
at kotlin.jvm.internal.Intrinsics.areEqual(Intrinsics.java:164)
at plugin.interaction.inter.teleports.Category.equals(Category.kt)
at kotlin.jvm.internal.Intrinsics.areEqual(Intrinsics.java:164)
at plugin.interaction.inter.teleports.Destination.equals(Destination.kt)
This happens from a .equals comparison between two data classes that are not related by inheritance.
Major bug.
data class Category(val name: String, val destinations: MutableList<Destination>)
data class Destination(val category: Category, val name: String)
Data classes in Kotlin are just syntactic sugar for Java POJOs.
The main culprit in your example is this cycle:
val destinations: MutableList<Destination> in Category &
val category: Category in Destination
The generated equals of a data class compares every primary-constructor property, so comparing a Category compares its destinations, each Destination compares its category, and so on forever, which is what blows the stack. You must remove this cycle by moving either of the two properties out of the primary data class constructor.
However, there is also a much bigger side effect: data class Category(..) is mutable, which makes it (and any other data class that uses Category in its primary constructor!) unsafe to use as a key in any hash-based collection. For more information, see: Are mutable hashmap keys a dangerous practice?
Given that data classes are meant for pure data, I recommend removing val category: Category from data class Destination(..) and changing the type of val destinations: MutableList<Destination> in data class Category(..) to a read-only List<Destination>. After those changes, the only ways to break the immutable state are to perform unsafe casts from Kotlin or to create an instance of the class from Java.
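A minimal sketch of that recommendation (class and property names taken from the question):

data class Category(val name: String, val destinations: List<Destination>)
data class Destination(val name: String)

With the back-reference gone, the generated equals/hashCode terminate, and nothing in either primary constructor is mutable.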
If you however absolutely require a backreference to categories in destinations (and aren't using your classes in hashmaps/-sets/etc.), you could either make Destination a regular class and implement equals/hashCode yourself, or move the category out of the primary constructor. This is a bit tricky, but can be done with a secondary constructor:
data class Destination private constructor(val name: String) {
    private lateinit var _category: Category
    val category get() = _category

    constructor(category: Category, name: String) : this(name) {
        _category = category
    }
}
Well, in my case I was overriding the equals method like this:
override fun equals(other: Any?): Boolean {
    // some code here
    if (other == this)
        return true
    // some code here
}
equals and == in Java
In Java, equals (e.g. str1.equals(str2)) checks the content of the two objects (for custom objects you must override equals and compare the objects' values yourself; otherwise Object's equals just compares references, the same as ==), whereas the == operator (e.g. str1 == str2) compares the references of the two objects.
== in Kotlin
In Kotlin, == always calls the equals method. For a data class the generated equals compares the contents, while for a normal class that does not override equals it falls back to a reference comparison.
So in my overridden equals method, when other == this is evaluated it calls equals again, which calls equals again, and so on in an infinite loop.
To make it work we need to change == to === (which compares references), like this:
if (other === this)
    return true
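Putting it together, a typical hand-written equals that uses === only for the identity check (a sketch; which properties to compare is up to you, here only the non-cyclic name):

class Destination(val category: Category, val name: String) {
    override fun equals(other: Any?): Boolean {
        if (other === this) return true          // identity check, no recursion
        if (other !is Destination) return false
        return other.name == name                // compare only the non-cyclic property
    }

    override fun hashCode(): Int = name.hashCode()
}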
Note: .equals and == behave the same until you use them with Float or Double. .equals disagrees with the IEEE 754 standard for floating-point arithmetic: it returns false when -0.0 is compared with 0.0, whereas == and === return true.
You can check the reference here.
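A quick check of that corner case (Double literals; the same holds for Float):

val a = -0.0
val b = 0.0

println(a == b)        // true:  == follows IEEE 754
println(a.equals(b))   // false: equals distinguishes -0.0 from 0.0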

Scala class inheritance

Tagged as homework.
I'm having trouble in the object-oriented world while trying to implement a class.
I'm implementing various functions that act on lists, which I'm using to mock a set.
I'm not too worried about my logic for finding the union, for example, but really just about the structure.
For example:
abstract class parentSet[T] protected () {
  def union(other:parentSet[T]):parentSet[T]
}
Now I want a new class extending parentSet:
class childSet[T] private (l: List[T]) extends parentSet[T] {
  def this() = this(List())
  private val elems = l
  val toList = List[T] => new List(l)
  def union(other:parentSet[T]):childSet[T] = {
    for (i <- this.toList) {
      if (other contains i) {}
      else {l :: i}
    }
    return l
  }
}
Upon compiling, I receive errors such as: type childSet is not found in def union, nor is type T to keep it parametric. It also complains that toList is not a member of the object, to name a few.
Where in my syntax am I wrong?
EDIT
Now I've got that figured out:
def U(other:parentSet[T]):childSet[T] = {
  var w = other.toList
  for (i <- this.toList) {
    if (!(other contains i)) {w = i::w}
  }
  return new childSet(w)
}
Now, I'm trying to do the same operations with map, and this is what I'm working on/with:
def U(other:parentSet[T]):MapSet[T] = {
  var a = Map[T,Unit]
  for (i <- this.toList) {
    if (!(other contains i)) {a = a + (i->())}
  }
  return new MapSet(elems + (a->()))
}
I still want to use toList to make it easily traversable, but I'm still getting type errors while messing with maps..
This code has a few problems:
It seems that you are not realizing that List[T] is an immutable type, meaning you cannot change its value once created. So if you have a List[T] and you call the :: method to prepend a value, the function returns a new list and leaves your existing one unchanged. Scala has mutable collections such as ListBuffer which may behave more like you expect. So when you return l, you're actually returning the original list.
Also, you have the order wrong when using ::. It should be i :: l, since any method whose name ends in : binds to the right, so :: is invoked on the list to its right.
Lastly, in your union method you are doing (other contains i). Maybe it's just the Scala syntax that's confusing you, but this is the same as doing (other.contains(i)) and clearly contains is not a defined method of parentSet. It is a method on the List[T] type, but you're not calling contains on a list.
You tagged this as homework so I'm not going to fix your code, but I think you should:
Look at some examples of correct Scala code involving lists, try here for starters
Play around in the Scala REPL and try creating and working with some lists, so you get a feel for how immutable collections work.
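For instance, a quick illustration of the points above (something you can paste into the REPL):

val l = List(2, 3)
val l2 = 1 :: l          // :: is called on the list to its right and returns a new list
println(l2)              // List(1, 2, 3)
println(l)               // List(2, 3) -- the original list is unchanged
println(l contains 2)    // true -- the same as l.contains(2)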
To answer your direct question: even though childSet inherits from parentSet, the original method declares parentSet as its return type, not childSet. You can either keep parentSet as the return type, or declare the overriding method's return type to be any subtype of parentSet.

Why is method overloading not defined for different return types?

In Scala, you can overload a method by having methods that share a common name, but which either have different arities or different parameter types. I was wondering why this wasn't also extended to the return type of a method? Consider the following code:
class C {
  def m: Int = 42
  def m: String = "forty two"
}

val c = new C
val i: Int = c.m
val s: String = c.m
Is there a reason why this shouldn't work?
Thank you,
Vincent.
Actually, you can make it work by the magic of implicits, as follows:
scala> case class Result(i: Int,s: String)
scala> class C {
| def m: Result = Result(42,"forty two")
| }
scala> implicit def res2int(res: Result) = res.i
scala> implicit def res2str(res: Result) = res.s
scala> val c = new C
scala> val i: Int = c.m
i: Int = 42
scala> val s: String = c.m
s: String = forty two
scala>
You can of course have overloading for methods which differ by return type, just not for methods which differ only by return type. For example, this is fine:
def foo(s: String) : String = s + "Hello"
def foo(i: Int) : Int = i + 1
That aside, the answer to your question is evidently that it was a design decision: the return type is part of the method signature as anyone who has experienced an AbstractMethodError can tell you.
Consider however how allowing such overloading might work in tandem with sub-typing:
class A {
  def foo: Int = 1
}
val a: A = //...lookup an A
val b = a.foo
This is perfectly valid code of course, and the compiler would uniquely resolve the method call. But what if I subclass A as follows:
class B extends A {
  def foo: String = "Hello"
}
This causes the original code's resolution of which method is being called to be broken. What should b be? I have logically broken some existing code by subtyping some existing class, even though I have not changed either that code or that class.
The main reason is complexity: with a "normal" compiler approach, you go inside-out (from the inner expression to the outer scope), building your binary step by step; if you add return-type-only differentiation, you have to switch to a backtracking approach, which greatly increases compile time and compiler complexity (= bugs!).
Also, if you return a subtype or a type that can be automatically converted to the other, which method should you choose? You'd give ambiguity errors for perfectly valid code.
Not worth the trouble.
All in all, you can easily refactor your code to avoid return-type-only overload, for example by adding a dummy parameter of the type you want to return.
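A sketch of that dummy-parameter refactoring applied to the question's class (the dummy argument exists only to select the overload):

class C {
  def m(dummy: Int): Int = 42
  def m(dummy: String): String = "forty two"
}

val c = new C
val i: Int = c.m(0)      // the Int dummy picks the Int-returning overload
val s: String = c.m("")  // the String dummy picks the String-returning overload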
I've never used scala, so someone whack me on the head if I'm wrong here, but this is my take.
Say you have two methods whose signatures differ only by return type.
If you're calling that method, how does the compiler (interpreter?) know which method you actually want to be calling?
I'm sure in some situations it might be able to figure it out, but what if, for example, one of your return types is a subclass of the other? It's not always easy.
Java doesn't allow overloading of return types, and since scala is built on the java JVM, it's probably just a java limitation.
(Edit)
Note that Covariant returns are a different issue. When overriding a method, you can choose to return a subclass of the class you're supposed to be returning, but cannot choose an unrelated class to return.
In order to differentiate between functions with the same name and argument types but different return types, some extra syntax is required, or else an analysis of the site of the expression.
Scala is an expression-oriented language (every statement is an expression). Expression-oriented languages generally prefer the meaning of an expression to depend only on the scope it is evaluated in, not on what happens to its result, so the expression foo() in i_take_an_int( foo() ), in i_take_any_type( foo() ), and as a standalone statement should all call the same version of foo().
There's also the issue that adding overloading by return type to a language with type inference will make the code completely incomprehensible - you'd have to keep an incredible amount of the system in mind in order to predict what will happen when code gets executed.
All answers that say the JVM does not allow this are straight up wrong. You can overload based on return type. Surprisingly, the JVM does allow this; it's the compilers for languages that run on the JVM that don't allow this. But there are ways to get around compiler limitations in Scala.
For example, consider the following snippet of code:
object Overload {
  def foo(xs: String*) = "foo"
  def foo(xs: Int*) = "bar"
}
This will throw a compiler error, because varargs (indicated by the * after the argument type) erase to Seq:
Error:(217, 11) double definition:
def foo(xs: String*): String at line 216 and
def foo(xs: Int*): String at line 217
have same type after erasure: (xs: Seq)String
  def foo(xs: Int*) = "bar";
However, if you change the value of the second foo to 3 instead of "bar" (thereby changing its return type from String to Int), as follows:
object Overload {
  def foo(xs: String*) = "foo"
  def foo(xs: Int*) = 3
}
... you won't get a compiler error.
So you can do something like this:
val x: String = Overload.foo()
val y: Int = Overload.foo()
println(x)
println(y)
And it will print out:
foo
3
However, the caveat of this approach is that you have to add a varargs parameter as the last (or only) argument of each overloaded function, each with its own distinct type.
Source: http://www.drmaciver.com/2008/08/a-curious-fact-about-overloading-in-scala/