I wondered what slices are in Rust. As it turns out, a slice reference is just a data pointer and a length. I looked at the source for indexing and found this:
impl<T> ops::Index<usize> for [T] {
    type Output = T;

    fn index(&self, index: usize) -> &T {
        // NB built-in indexing
        &(*self)[index]
    }
}
I'm not a Rust expert, but &(*self) looks like a pointer to me, and there is no pointer indexing in Rust as far as I know. So how does this indexing work? Is it just a compiler built-in thing?
Is it just a compiler built-in thing?
Yes. The comment in the source code says the same. [T] is an unsized type and needs some extra rules anyway. For example, unsized types can't live on the stack (they are pretty difficult to handle in general). But references to unsized types consist of a pointer and a size (more precisely, "something that completes the type").
Note, however, that the expression is evaluated like this: &((*self)[index]). This means that self (of type &[T]) is dereferenced to type [T] and then indexed. Indexing yields a T, but we only want a reference to it, hence the &.
So I have this function in Kotlin:
fun getJooqOperator(field: TableField<*, *>, value: String): org.jooq.Condition {
    // In this case, "this" is the operator enum.
    // The combination of field and operator is strictly handled in the front-end (Elm)
    return when (this) {
        EQ -> (field as TableField<*, String>).eq(value)
        NEQ -> (field as TableField<*, String>).ne(value)
        GT -> (field as TableField<*, Int>).gt(Integer.parseInt(value))
    }
}
This function is used in a Kotlin class that deserializes some JSON from the front-end. From this JSON, the class builds a jOOQ query based on the user's input. This part is about using the correct operator with the corresponding column and input value.
However, while this does not result in compile errors, IntelliJ does complain about the casting of the field. I know for sure that these fields can be cast safely for the specific operator enum.
I don't want to apply the suggested quick-fix, Change type arguments to <*, *>, because without the cast the jOOQ condition won't work with the values.
Is there a way to rewrite my function properly, or can I safely ignore these warnings?
Casting
[...] or can I safely ignore these warnings?
I'm not sure what kinds of safety you're expecting here. Your assumption that EQ and NEQ are String based, whereas GT is Int based, is quite a strong one. You probably have your reasons for this. If you encode things this way, and know why that is, then yes, you can safely ignore the warnings. This doesn't translate to such casts being "safe" in general; they're not. The * translates to any unknown type, yet in your cast, you make an assumption that effectively, the type should have been known, it's just not possible to express it in your type system.
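If you do decide to keep the casts, one option is simply to suppress the warning at the declaration. Just a sketch (the enum name Operator is my guess, since your snippet doesn't show it):

import org.jooq.TableField

enum class Operator {
    EQ, NEQ, GT;

    @Suppress("UNCHECKED_CAST") // we know which value type each operator expects
    fun getJooqOperator(field: TableField<*, *>, value: String): org.jooq.Condition =
        when (this) {
            EQ -> (field as TableField<*, String>).eq(value)
            NEQ -> (field as TableField<*, String>).ne(value)
            GT -> (field as TableField<*, Int>).gt(Integer.parseInt(value))
        }
}

This doesn't make the casts any safer; it just documents that you've made the assumption deliberately.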
Coercion
You can always use data type coercion as well, e.g.
return when (this) {
    EQ -> field.coerce(SQLDataType.VARCHAR).eq(value)
    NEQ -> field.coerce(SQLDataType.VARCHAR).ne(value)
    GT -> field.coerce(SQLDataType.INTEGER).gt(Integer.parseInt(value))
}
That said, this does have an effect on the expression tree you're building, unlike the unsafe cast. It's probably not an important effect in your case.
class Example(private val childrenByParent: HashMap<String, List<String>>) {

    private val parents: List<String> = childrenByParent.keys.toList()

    fun getChildrenCount(parentPosition: Int): Int {
        return childrenByParent[parents[parentPosition]].size
        // error, recommends using "?." or "!!"
    }
}
The compiler won't let me call size directly but I don't understand why. There are no nullable types in sight.
If I let the compiler infer the type by doing this:
val infer = childrenByParent[parents[parentPosition]]
I can see that it assumes it's a List<String>?
It seems that I'm still quite confused about nullability. I would appreciate some help; I have a feeling I'm doing something incredibly dumb, but after some searching and testing I have failed to fix this.
I would like this function not to use ?. or, even worse, !!. Is that possible, at least while sticking with HashMap and List<String>?
HashMap.get(Object) returns null when there is no element matching the key you provided, so its return type is effectively nullable, regardless of whether the values are or not.
So unfortunately you have to account for the case in which the key doesn't exist: either handle the missing key explicitly, or assert the value as non-null with !! if you are sure the key exists.
Otherwise you can use HashMap.containsKey(String) to ensure the key exists and then you can be confident that using !! on the value won't result in a NullPointerException.
However, as @gidds pointed out, this is not naturally thread-safe without some more work, so it might be best to just handle the case of the key not being in the map. Also, I cannot actually think of many cases where you could be certain the key exists and where a Map is still the most appropriate data structure to use.
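For example, a minimal sketch of both options in your getChildrenCount (falling back to 0 when the key is missing, or failing fast via getValue, which throws NoSuchElementException for an absent key):

fun getChildrenCount(parentPosition: Int): Int {
    val key = parents[parentPosition]

    // Option 1: handle the missing key explicitly with a default.
    return childrenByParent[key]?.size ?: 0

    // Option 2: fail fast if the key really must exist.
    // return childrenByParent.getValue(key).size
}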
Also, even though it is not the case here, remember that null-safety is a Kotlin feature, so when you use classes originally written in Java, whether a value can be null is unknown. The IDE will usually show this as Type!, where the single ! tells you it is a platform type.
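For illustration (this uses a standard Java API, not anything from your code):

// System.getProperty is a Java method without nullability annotations,
// so Kotlin sees its return type as the platform type String!.
val home = System.getProperty("user.home")

// Assigning a platform-typed value to a non-null type compiles,
// but fails at runtime if the value turns out to be null
// (the property name here is obviously made up).
val missing: String = System.getProperty("some.property.that.does.not.exist")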
In a language with dependent types you can have Type-in-Type, which simplifies the language and gives it a lot of power. This makes the language logically inconsistent, but that might not be a problem if you are interested only in programming and not in theorem proving.
The Cayenne paper (Cayenne is a dependently typed language for programming) says about Type-in-Type that "the unstratified type system would make it impossible during type checking to determine if an expression corresponds to a type or a real value and it would be impossible to remove the types at runtime" (section 2.4).
I have two questions about this:
In some dependently typed languages (like Agda) you can explicitly say which variables should be erased. In that case does Type-in-Type still cause problems?
We could extend the hierarchy one extra step with Kind where Type : Kind and Kind : Kind. This is still inconsistent but it seems that now you can know if a term is a type or a value. Is this correct?
the unstratified type system would make it impossible during type checking to determine if an expression corresponds to a type or a real value and it would be impossible to remove the types at runtime
This is not correct. Type-in-type prevents erasure of proofs, but it does not prevent erasure of types, assuming that we have parametric polymorphism with no typecase operation. Recent GHC Haskell is an example of a system which supports type-in-type, type erasure and type-level computation at the same time, but which does not support proof erasure. In dependently typed settings, we always know if a term is a type or not; we just check whether its type is Type.
Type erasure is just erasure of all things with type Type.
Proof erasure is more complicated. Let's assume that we have a Prop universe like in Coq, which is intended to be a universe of computationally irrelevant types. Here, we can use some proof p : Bool = Int to coerce Bools to Ints. If the language is consistent, there is no closed proof of Bool = Int, so closed program execution never encounters such a coercion. Thus, closed program execution is safe even if we erase all coercions.
If the language is inconsistent, and the only way of proving a contradiction is by an infinite loop, there is a diverging closed proof of Bool = Int. Now closed program execution can actually hit a proof of falsehood; but we can still have type safety, by requiring that coercion evaluates the proof argument. Then the program loops whenever we coerce by falsehood, so execution never reaches the unsound parts of the program.
Probably the key point here is that A = B : Prop supports coercion, which eliminates into a computationally relevant universe, whereas a parametric Type universe has no elimination principle at all and cannot influence computation.
Erasure can be generalized in several ways. For example, we may have any inductive data type with a single constructor (and no stored data which is not available from elsewhere, e.g. type indices), and try to erase every matching on that constructor. This is again sound if the language is total, and not otherwise. If we don't have a Prop universe, we can still do erasure like this. IIRC Idris does this a lot.
I just want to add a note that I believe is related to the question. Formality, a minimal proof language based on self-types, is non-terminating. I was involved in a Reddit discussion about whether Formality can segfault. One way that could happen is if you could prove Nat == String, cast 42 :: Nat to 42 :: String and then print it as if it were a string, for example. But that's not the case. While you can prove Nat == String in Formality:
nat_is_string: Nat == String
  nat_is_string
And you can use it to cast a Nat to a String:
nat_str: String
  42 :: rewrite x in x with nat_is_string
If you attempt to print nat_str, your program will not segfault; it will just hang. That's because you can't erase the equality evidence in Formality. To understand why, let's look at the definition of Equal.rewrite (which is used to cast 42 to String):
Equal.rewrite<A: Type, a: A, b: A>(e: Equal(A,a,b))<P: A -> Type>(x: P(a)): P(b)
  case e {
    refl: x
  } : P(e.b)
Once we erase the types, the normal form of rewrite becomes λe. λx. e(x). The e there is the equality evidence. In the example above, the normal form of nat_str is not 42, but nat_is_string(42). Since nat_is_string is an equality proof, it has two options: either it halts and becomes the identity, in which case it just returns 42, or it hangs forever. In this case it doesn't halt, so nat_is_string(42) will never return 42. As such, it can't be printed, and any attempt to use it will cause your entire program to hang, but not segfault.
So, in short, the insight is that self types allow us to encode Equal and rewrite/subst, and to erase all the type information, but not the equality evidence itself.
How can I make an ArrayList of functions, and call each function easily? I have already tried making an ArrayList<Function<Unit>>, but when I tried to do this:
functionList.forEach { it }
and this:
for(i in 0 until functionList.size) functionList[i]
When I tried doing this: it(), and this: functionList[i](), it wouldn't even compile in IntelliJ. How can I do this in Kotlin? Also, does the "Unit" in ArrayList<Function<Unit>> refer to the return value or to the parameters?
Just like this:
val funs: List<() -> Unit> = listOf({}, { println("fun") })
funs.forEach { it() }
The compiler can successfully infer the type of funs here, which is List<() -> Unit>. Note that () -> Unit is a function type in Kotlin; it represents a function that takes no arguments and returns Unit.
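The same pattern extends to functions that take parameters; for instance (the lambdas here are just illustrative):

// A list of functions from Int to Int.
val transforms: List<(Int) -> Int> = listOf({ it + 1 }, { it * 2 })

// Call each one with an argument.
transforms.forEach { f -> println(f(3)) } // prints 4, then 6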
I think there are two problems with the use of the Function interface here.
The first problem is that it doesn't mean what you might think. As I understand it, it's a very general interface, implemented by all functions, however many parameters they take (or none). So it doesn't have any invoke() method. That's what the compiler is complaining about.
Function has several sub-interfaces, one for each 'arity' (i.e. one for each number of parameters): Function0 for functions that take no parameters, Function1 for functions taking one parameter, and so on. These have the appropriate invoke() methods. So you could probably fix this by replacing Function by Function0.
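For example, a rough sketch of that fix (I wouldn't actually recommend it over the function-type syntax described below):

// A () -> Unit lambda can be stored as a Function0<Unit>; on the JVM they are the same type.
val first: () -> Unit = { println("first") }
val second: () -> Unit = { println("second") }

val functionList: ArrayList<Function0<Unit>> = arrayListOf(first, second)

fun callAll() {
    functionList.forEach { it.invoke() } // or simply it(), since invoke is an operator function
}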
But that leads me on to the second problem, which is that the Function interfaces aren't supposed to be used this way. I think they're mainly for Java compatibility and/or for internal use by the compiler.
It's usually much better to use the Kotlin syntax for function types: (P1, P2...) -> R. This is much easier to read, and avoids these sorts of problems.
So the real answer is probably to replace Function<Unit> by () -> Unit.
Also, in case it's not clear, Kotlin doesn't have a void type. Instead, it has a type called Unit, which has exactly one value. This might seem strange, but it makes better sense in the type system, as it lets the compiler distinguish functions that return without a meaningful value from those which never return at all. (The latter might always throw an exception or exit the process. They can be defined to return Nothing -- a type with no values at all.)
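A tiny illustration of that distinction (the function names are made up):

// Returns Unit: it finishes normally, just without a meaningful value.
fun logIt(msg: String) {
    println(msg)
}

// Returns Nothing: it can never return normally at all.
fun fail(msg: String): Nothing {
    throw IllegalStateException(msg)
}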
When I talk about Go here, I mean the gc compiler implementation.
As far as I know, Go performs escape analysis.
The following idiom is seen pretty often in Go code:
func NewFoo() *Foo
Escape analysis would notice that Foo escapes NewFoo and allocate Foo on the heap.
This function could also be written as:
func NewFoo(f *Foo)
and would be used like
var f Foo
NewFoo(&f)
In this case, as long as f doesn't escape anywhere else, f could be allocated on the stack.
Now to my actual question.
Would it be possible for the compiler to optimize every func foo() *Foo into func foo(f *Foo), possibly even over multiple levels where a Foo is returned at each level?
If not, in what kind of cases does this approach fail?
Thank you in advance.
(Not quite an answer but too big for a comment.)
From the comments it seems you might be interested in this small example:
package main

type Foo struct {
    i, j, k int
}

func NewFoo() *Foo {
    return &Foo{i: 42}
}

func F1() {
    f := NewFoo()
    f.i++
}

func main() {
    F1()
}
On Go 1.5, running go build -gcflags="-m" gives:
./new.go:7: can inline NewFoo
./new.go:11: can inline F1
./new.go:12: inlining call to NewFoo
./new.go:16: can inline main
./new.go:17: inlining call to F1
./new.go:17: inlining call to NewFoo
./new.go:8: &Foo literal escapes to heap
./new.go:12: F1 &Foo literal does not escape
./new.go:17: main &Foo literal does not escape
So it inlines NewFoo into F1 into main (and says that it could further inline main if someone were to call it).
Although it does say that in NewFoo itself &Foo escapes, it does not escape when inlined.
The output from go build -gcflags="-m -S" confirms this with a main initializing the object on the stack and not doing any function calls.
Of course this is a very simple example and any complications (e.g. calling fmt.Print with f) could easily cause it to escape.
In general, you shouldn't worry about this too much unless profiling has told you that you have a problem area and you are trying to optimize a specific section of code.
Idiomatic and readable code should trump optimization.
Also note that go test -bench=. -benchmem (or, preferably, using testing.B's ReportAllocs method) can report on allocations in benchmarked code, which can help identify something doing more allocations than expected or desired.
After doing some more research I found what I was looking for.
What I was describing is apparently called "Return value optimization" and is quite feasible, which pretty much answers my question about whether this is possible in Go as well.
Further information about this can be found here:
What are copy elision and return value optimization?