Does implementation of ++i vs. i++ vary from language to language?

I recently read:
"The expressions (++i) and (i++) have values and side effects.
The side effect is that the value in i is increased by 1.
The value of (i++) is the value before the increment and
the value of (++i) is the value after the increment,
but whether the increment or the evaluation takes place first,
is not part of C."
I know the evaluative step takes place first in Java... is it the same for all other languages?
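For concreteness, here is the distinction I mean, sketched in C++ (the variable names are just for illustration):
#include <iostream>

int main() {
    int i = 5;
    int a = i++;    // a gets the old value (5); afterwards i is 6
    int j = 5;
    int b = ++j;    // j is incremented first, so b gets the new value (6)
    std::cout << a << " " << b << "\n";    // prints: 5 6
    // The caveat in the quote: inside a larger expression, C does not pin down
    // exactly when the increment side effect occurs relative to the other
    // subexpressions, which is why things like f(i++, i) are best avoided.
    return 0;
}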

At least in C++, operators can be overloaded, so the semantics of ++i and i++ are not guaranteed - they can in fact be overloaded to do very different things, and can even be made to do something that has nothing to do with increment. So the answer to your question is that no - in at least one language, the postfix and prefix ++ operator for classes can do whatever the programmer wishes.
But just because someone can do that doesn't mean they should. Since the pre- and post-increment operators have very well-known semantics, (decent) C++ programmers try not to violate them, lest users of the code be most surprised.
A good example of operator overloading in C++ is the STL iterators. Containers like linked lists define iterator classes that overload the pre-increment and post-increment operators in such a way that they mimic pointers (iterators in C++ are in fact a generalization of pointers).
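For reference, this is the conventional shape of the two overloads in C++ (a minimal sketch; the Counter class is made up, and a real iterator would also overload *, ==, and so on). The dummy int parameter is what marks the postfix form:
struct Counter {
    int value = 0;

    Counter& operator++() {        // prefix ++c: increment, then return *this
        ++value;
        return *this;
    }

    Counter operator++(int) {      // postfix c++: copy the old state, increment, return the copy
        Counter old = *this;
        ++value;
        return old;
    }
};
Nothing in the language forces an overload to keep these semantics; the convention holds only because breaking it would surprise readers, which is exactly the point above.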

Related

In a dependently typed programming language is Type-in-Type practical for programming?

In a language with dependent types you can have Type-in-Type, which simplifies the language and gives it a lot of power. This makes the language logically inconsistent, but that might not be a problem if you are interested only in programming and not in theorem proving.
The Cayenne paper (a dependently typed language for programming) says of Type-in-Type that "the unstratified type system would make it impossible during type checking to determine if an expression corresponds to a type or a real value and it would be impossible to remove the types at runtime" (section 2.4).
I have two questions about this:
In some dependently typed languages (like Agda) you can explicitly say which variables should be erased. In that case does Type-in-Type still cause problems?
We could extend the hierarchy one extra step with Kind where Type : Kind and Kind : Kind. This is still inconsistent but it seems that now you can know if a term is a type or a value. Is this correct?
the unstratified type system would make it impossible during type checking to determine if an expression corresponds to a type or a real value and it would be impossible to remove the types at runtime
This is not correct. Type-in-type prevents erasure of proofs, but it does not prevent erasure of types, assuming that we have parametric polymorphism with no typecase operation. Recent GHC Haskell is an example of a system which supports type-in-type, type erasure and type-level computation at the same time, but which does not support proof erasure. In dependently typed settings, we always know if a term is a type or not; we just check whether its type is Type.
Type erasure is just erasure of all things with type Type.
Proof erasure is more complicated. Let's assume that we have a Prop universe like in Coq, which is intended to be a universe of computationally irrelevant types. Here, we can use some p : Bool = Int proof to coerce Bool-s to Int. If the language is consistent, there is no closed proof of Bool = Int so closed program execution never encounters such coercion. Thus, closed program execution is safe even if we erase all coercions.
If the language is inconsistent, and the only way of proving contradiction is by an infinite loop, there is a diverging closed proof of Bool = Int. Now, closed program execution can actually hit a proof of falsehood; but we can still have type safety, by requiring that coercion must evaluate the proof argument. Then, the program loops whenever we coerce by falsehood, so execution never reaches the unsound parts of the program.
Probably the key point here is that A = B : Prop supports coercion, which eliminates into a computationally relevant universe, but a parametric Type universe has no elimination principle at all and cannot influence computation.
Erasure can be generalized in several ways. For example, we may have any inductive data type with a single constructor (and no stored data which is not available from elsewhere, e.g. type indices), and try to erase every matching on that constructor. This is again sound if the language is total, and not otherwise. If we don't have a Prop universe, we can still do erasure like this. IIRC Idris does this a lot.
I just want to add a note that I believe is related to the question. Formality, a minimal proof language based on self-types, is non-terminating. I was involved in a Reddit discussion about whether Formality can segfault. One way that could happen is if you could prove Nat == String, cast 42 :: Nat to 42 :: String and then print it as if it were a string, for example. But that's not the case. While you can prove Nat == String in Formality:
nat_is_string: Nat == String
nat_is_string
And you can use it to cast a Nat to a String:
nat_str: String
42 :: rewrite x in x with nat_is_string
If you attempt to print nat_str, your program will not segfault; it will just hang. That's because you can't erase the equality evidence in Formality. To understand why, let's look at the definition of Equal.rewrite (which is used to cast 42 to String):
Equal.rewrite<A: Type, a: A, b: A>(e: Equal(A,a,b))<P: A -> Type>(x: P(a)): P(b)
case e {
refl: x
} : P(e.b)
Once we erase the types, the normal form of rewrite becomes λe. λx. e(x). The e there is the equality evidence. In the example above, the normal form of nat_str is not 42, but nat_is_string(42). Since nat_is_string is an equality proof, it has two options: either it halts and becomes the identity, in which case it will just return 42, or it hangs forever. In this case, it doesn't halt, so nat_is_string(42) will never return 42. As such, it can't be printed, and any attempt to use it will cause your entire program to hang, but not segfault.
So, in short, the insight is that self types allow us to encode Equal and rewrite/subst, and to erase all the type information, but not the equality evidence itself.

Constant vs Unchanged Variable?

Is there any advantage to using a constant (unchangeable) rather than just not changing a variable?
Depending on your language and compiler, a constant may get inlined and optimized when built. A variable will likely eat up stack space even if it never changes.
By making the value constant, the compiler can just substitute it. If you have x / 2, for example, the compiler can compute the value and use that instead of having to emit code to retrieve the value of x and then divide it by 2.
Also, you don't have to worry about accidentally changing the value. For example, in C-like languages you might accidentally type if (x = 2) when you meant if (x == 2) which will change the value of x if it's a variable.
Anyone maintaining your code in the future (including you) won't have to look around to see where (if anywhere) a constant is changed when finding a bug or adding a feature - they'll know right off the bat that it can't be changed.
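A small C++ sketch of the folding and typo points above (the names are arbitrary): the constant can be folded straight into the division, and the classic = / == typo becomes a compile error when the operand is const.
const int divisor = 2;            // the compiler may substitute this value directly

int half(int x) {
    return x / divisor;           // can be compiled as x / 2, with no load of divisor
}

void check(const int x) {
    // if (x = 2) { }             // compile error: cannot assign to a const, typo caught
    if (x == 2) { /* ... */ }     // the comparison that was actually intended
}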
In some program languages, declaring something to be constant will allow a compiler to make optimizations which would not otherwise be possible. Further, declaring something to be constant can be a useful way of documenting that there are places in the code which might be broken should the value change.
Unfortunately, some programming languages sometimes do evil things with values that are declared constant. For example, in some .NET languages, if a value type which is declared read-only is passed by modifiable reference, the compiler will, rather than refusing to allow such an action, instead make a copy and pass that. Such implicit copying will impair efficiency, and may result in unexpected semantics.

Most appropriate data structure for dynamic languages field access

I'm implementing a dynamic language that will compile to C#, and it implements its own reflection API (.NET's is too slow, and the DLR is limited to more recent and more fully featured implementations).
For this, I've implemented a simple .GetField(string f) and .SetField(string f, object val) interface. Until recently, the implementation just switched over all possible field-name strings and performed the corresponding action.
Also, this dynamic language has the possibility to define anonymous objects. For those anonymous objects, at first, I had implemented a simple hash algorithm.
Now I am looking for ways to optimize the dynamic parts of the language, and I have come to suspect that a hash algorithm for anonymous objects would be overkill. This is because the objects are usually small: they contain 2 or 3 fields, normally, and very rarely more than 15. It would take more time to actually hash the string and perform the lookup than to simply test for equality against all of them. (This is not tested, just theoretical.)
The first thing I did was, at compile time, to create a red-black tree for each anonymous object declaration and lay it out into an array, so that the object can search it in a very optimized way.
I am still divided, though, on whether that's the best way to do this. I could go for a perfect hashing function. Even more radically, I'm thinking about dropping the need for strings and actually working with a struct of two longs.
Those two longs would each be encoded to hold 10 chars (A-Za-z0-9_), which is usually enough for a field name. For fields longer than this, a special (slower) function receiving a string would also be provided.
The result will be that strings will be inlined (not references), and their comparisons will be as cheap as a long comparison.
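To make that concrete, this is roughly the encoding I have in mind, sketched in C++ purely for illustration (the real code generation targets C#, and packName and the specific 6-bit codes are placeholders, not a finished design):
#include <cstdint>
#include <string>

// Pack up to 10 characters from [A-Za-z0-9_] into one 64-bit word, 6 bits per
// character; a full field name would use two such words, and names longer than
// 20 characters would fall back to the slower string-based path.
std::uint64_t packName(const std::string& s) {          // assumes s.size() <= 10
    std::uint64_t packed = 0;
    for (char c : s) {
        std::uint64_t code;
        if (c >= 'A' && c <= 'Z')      code = 1 + (c - 'A');   // 1..26
        else if (c >= 'a' && c <= 'z') code = 27 + (c - 'a');  // 27..52
        else if (c >= '0' && c <= '9') code = 53 + (c - '0');  // 53..62
        else                           code = 63;              // '_' (and anything unexpected)
        packed = (packed << 6) | code;                         // 0 marks an empty slot
    }
    return packed;
}

bool sameField(std::uint64_t a, std::uint64_t b) {
    return a == b;    // comparing two field names is now a single integer comparison
}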
Anyway, it's a little hard to find good information about this kind of optimization, since it is normally discussed at the VM level, not for an implementation that compiles to a static language.
Does anyone have any thoughts or tips about the best data structure to handle dynamic calls?
Edit:
For now, I'm really going with the string as long representation and a linear binary tree lookup.
I don't know if this is helpful, but I'll chuck it out just in case: if this is compiling to C#, do you know the complete list of fields at compile time? As an idea, if your code reads
// dynamic
myObject.foo = "some value";
myObject.bar = 32;
then during the parse, your symbol table can build an int for each field name;
// parsing code
symbols[0] == "foo"
symbols[1] == "bar"
then generate code using arrays or lists;
// generated c#
runtimeObject[0] = "some value"; // assign myobject.foo
runtimeObject[1] = 32; // assign myobject.bar
and build up reflection as a separate array;
runtimeObject.FieldNames[0] == "foo"; // Dictionary<int, string>
runtimeObject.FieldIds["foo"] == 0; // Dictionary<string, int>
As I say, thrown out in the hope it'll be useful. No idea if it will!
Since you are likely to be using the same field and method names repeatedly, something like string interning would work well to quickly generate keys for your hash tables. It would also make string equality comparisons constant-time.
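A rough sketch of what interning could look like (in C++ purely for illustration; NameInterner and its methods are invented names): each distinct name is assigned a small integer id exactly once, and from then on hash keys and equality checks work on the ids rather than the strings.
#include <string>
#include <unordered_map>
#include <vector>

class NameInterner {
    std::unordered_map<std::string, int> ids_;
    std::vector<std::string> names_;
public:
    int intern(const std::string& name) {
        auto it = ids_.find(name);
        if (it != ids_.end()) return it->second;      // already interned: reuse the id
        int id = static_cast<int>(names_.size());
        ids_.emplace(name, id);
        names_.push_back(name);
        return id;
    }
    const std::string& nameOf(int id) const { return names_[id]; }
};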
For such a small data set (expected upper bound of 15) I think almost any hashing will be more expensive than a tree or even a list lookup, but that is really dependent on your hashing algorithm.
If you want to use a dictionary/hash then you'll need to make sure the objects you use for the key return a hash code quickly (perhaps a single constant hash code that's built once). If you can prevent collisions inside of an object (sounds pretty doable) then you'll gain the speed and scalability (well for any realistic object/class size) of a hash table.
Something that comes to mind is Ruby's symbols and message passing. I believe Ruby's symbols act as constants that are essentially just memory references. So comparison is constant-time, they are very lightweight, and you can use symbols like variables (I'm a little hazy on this and don't have a Ruby interpreter on this machine). Ruby's method "calling" really turns into message passing. Something like obj.func(arg) turns into obj.send(:func, arg) (:func is the symbol). I would imagine that the symbol makes looking up the message handler (as I'll call it) inside the object pretty efficient, since its hash code most likely doesn't need to be calculated the way it does for most objects.
Perhaps something similar could be done in .NET.

Separate Namespaces for Functions and Variables in Common Lisp versus Scheme

Scheme uses a single namespace for all variables, regardless of whether they are bound to functions or other types of values. Common Lisp separates the two, such that the identifier "hello" may refer to a function in one context, and a string in another.
(Note 1: This question needs an example of the above; feel free to edit it and add one, or e-mail the original author with it and I will do so.)
However, in some contexts, such as passing functions as parameters to other functions, the programmer must explicitly distinguish that he's specifying a function variable, rather than a non-function variable, by using #', as in:
(sort (list '(9 A) '(3 B) '(4 C)) #'< :key #'first)
I have always considered this to be a bit of a wart, but I've recently run across an argument that this is actually a feature:
...the important distinction actually lies in the syntax of forms, not in the type of objects. Without knowing anything about the runtime values involved, it is quite clear that the first element of a function form must be a function. CL takes this fact and makes it a part of the language, along with macro and special forms which also can (and must) be determined statically. So my question is: why would you want the names of functions and the names of variables to be in the same namespace, when the primary use of function names is to appear where a variable name would rarely want to appear?

Consider the case of class names: why should a class named FOO prevent the use of variables named FOO? The only time I would be referring to the class by the name FOO is in contexts which expect a class name. If, on the rare occasion I need to get the class object which is bound to the class name FOO, there is FIND-CLASS.
This argument does make some sense to me from experience; there is a similar case in Haskell with field names, which are also functions used to access the fields. This is a bit awkward:
data Point = Point { x, y :: Double {- lots of other fields as well -} }
isOrigin p = (x p == 0) && (y p == 0)
This is solved by a bit of extra syntax, made especially nice by the NamedFieldPuns extension:
isOrigin2 Point{x,y} = (x == 0) && (y == 0)
So, to the question, beyond consistency, what are the advantages and disadvantages, both for Common Lisp vs. Scheme and in general, of a single namespace for all values versus separate ones for functions and non-function values?
The two different approaches have names: Lisp-1 and Lisp-2. A Lisp-1 has a single namespace for both variables and functions (as in Scheme) while a Lisp-2 has separate namespaces for variables and functions (as in Common Lisp). I mention this because you may not be aware of the terminology since you didn't refer to it in your question.
Wikipedia refers to this debate:
Whether a separate namespace for functions is an advantage is a source of contention in the Lisp community. It is usually referred to as the Lisp-1 vs. Lisp-2 debate. Lisp-1 refers to Scheme's model and Lisp-2 refers to Common Lisp's model. These names were coined in a 1988 paper by Richard P. Gabriel and Kent Pitman, which extensively compares the two approaches.
Gabriel and Pitman's paper titled Technical Issues of Separation in Function Cells and Value Cells addresses this very issue.
Actually, as outlined in the paper by Richard Gabriel and Kent Pitman, the debate is about Lisp-5 against Lisp-6, since there are several other namespaces already there; the paper mentions type names, tag names, block names, and declaration names. edit: this seems to be incorrect, as Rainer points out in the comment: Scheme actually seems to be a Lisp-1. The following is largely unaffected by this error, though.
Whether a symbol denotes something to be executed or something to be referred to is always clear from the context. Throwing functions and variables into the same namespace is primarily a restriction: the programmer cannot use the same name for a thing and an action. What a Lisp-5 gets out of this is merely that it avoids some syntactic overhead for referencing something from a different namespace than the one the current context implies. edit: this is not the whole picture, just the surface.
I know that Lisp-5 proponents like the fact that functions are data, and that this is expressed in the language core. I like the fact that I can call a list "list" and a car "car" without confusing my compiler, and functions are a fundamentally special kind of data anyway. edit: this is my main point: separate namespaces are not a wart at all.
I also liked what Pascal Costanza had to say about this.
I've met a similar distinction in Python (unified namespace) vs Ruby (distinct namespaces for methods vs non-methods). In that context, I prefer Python's approach -- for example, with that approach, if I want to make a list of things, some of which are functions while others aren't, I don't have to do anything different with their names, depending on their "function-ness", for example. Similar considerations apply to all cases in which function objects are to be bandied around rather than called (arguments to, and return values from, higher-order functions, etc, etc).
Non-functions can be called, too (if their classes define __call__, in the case of Python -- a special case of "operator overloading") so the "contextual distinction" isn't necessarily clear, either.
However, my "lisp-oid" experience is/was mostly with Scheme rather than Common Lisp, so I may be subconsciously biased by the familiarity with the uniform namespace that in the end comes from that experience.
The name of a function in Scheme is just a variable with the function as its value. Whether I do (define (x y) (z y)) or (let ((x (lambda (y) (z y)))) ...), I'm defining a function that I can call. So the idea that "a variable name would rarely want to appear there" is kind of specious as far as Scheme is concerned.
Scheme is a characteristically functional language, so treating functions as data is one of its tenets. Having functions be a type of their own that's stored like all other data is a way of carrying on the idea.
The biggest downside I see, at least for Common Lisp, is understandability. We can all agree that it uses different namespaces for variables and functions, but how many does it have? In PAIP, Norvig showed that it has "at least seven" namespaces.
When one of the language's classic books, written by a highly respected programmer, can't even say for certain how many namespaces there are, I think there's a problem. I don't have a problem with multiple namespaces, but I wish the language were, at the least, simple enough that somebody could understand this aspect of it entirely.
I'm comfortable using the same symbol for a variable and for a function, but in the more obscure areas I resort to using different names out of fear (colliding namespaces can be really hard to debug!), and that really should never be the case.
There are good things about both approaches. However, I find that when it matters, I prefer having both a function LIST and a variable LIST to having to spell one of them incorrectly.

const vs enum in D

Check out this quote from here, towards the bottom of the page. (I believe the quoted comment about consts applies to invariants as well.)
Enumerations differ from consts in that they do not consume any space in the final outputted object/library/executable, whereas consts do.
So apparently value1 will bloat the executable, while value2 is treated as a literal and doesn't appear in the object file.
const int value1 = 0xBAD;
enum int value2 = 42;
Back in C++ I always assumed this was for legacy reasons, and old compilers that couldn't optimize away constants. But if this is still true in D, there must be a deeper reason behind this. Anyone know why?
Just like in C++, an enum in D seems to be a "conserved integer literal" (edit: amazing, D2 even supports floats and strings). Its enumerators have no location. They are just immaterial as values without identity.
Placing enum like this is new in D2. It defines a manifest constant rather than a variable: it is not an lvalue (so you also cannot take its address). An
enum int a = 10; // new in D2
is like
enum : int { a = 10 }
if I can trust my poor D knowledge. So, a here is not an lvalue: it has no location, and you can't take its address. A const, however, has an address. If you have a global (not sure whether this is the right D terminology) const variable, the compiler usually can't optimize it away, because it doesn't know what modules can access that variable or could take its address. So it has to allocate storage for it.
I think if you have a local const, the compiler can still optimize it away just as in C++, because the compiler knows by looking at its scope whether or not anyone is interested in its address or whether everyone just takes its value.
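The same distinction can be seen in C++, which is presumably where the D idiom came from (a small sketch; the names are mine):
enum { kAnswer = 42 };          // a pure value: no object, no storage, no address
const int answer = 42;          // an object: it has an address, so it may need storage

const int* p = &answer;         // fine; this is why the compiler may have to keep it around
// const int* q = &kAnswer;     // error: an enumerator is not an lvalue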
Your actual question, why enum/const is the same in D as in C++, seems to be unanswered. Sadly there exists no good reason for this choice whatsoever. I believe that this was just an unintentional side effect in C++ that became a de facto pattern. In D the same pattern was needed, and Walter Bright decided that it should be done as in C++ so that those coming from there would recognize what to do ... In fact, before this rather (IMHO) silly decision, the keyword manifest was used instead of enum for this use case.
I think a good compiler/linker should still remove the constant. It's just that with the enum, it's actually guaranteed in the spec. The difference is primarily a matter of semantics. (Also keep in mind that 2.0 isn't complete yet)
The real purpose of enum being expanded syntactically to support single manifest constants, from what I understand, is that Don Clugston, a D template guru, was doing some crazy stuff with templates. He kept running into long build times, ridiculous compiler memory usage, etc. because the compiler kept creating internal data structures for const variables. One key thing about const/immutable variables compared to enums is that const/immutable variables are lvalues and can have their address taken. This means there is some extra overhead for the compiler. This usually doesn't matter, but when you're executing really complicated compile-time metaprograms, even if const variables are optimized away, this is still significant overhead at compile time.
It sounds like the enum value will be used "inline" in expressions, whereas the const will actually take storage and any expression referencing it will load the value from that storage.
This sounds similar to the difference between const and readonly in C#. The former is a compile-time constant and the latter is a run-time constant. This definitely affected versioning of assemblies (since assemblies referencing a const receive a copy of the value at compile time and would not pick up a change to the value if the referenced assembly was rebuilt with a different value).