Variable Overloading - language-construct

This question originated in a discussion with a colleague and it is purely academic.
Is there any programming language that has variable overloading?
In Java and many other languages, there is function overloading, where multiple functions/methods with the same name can be declared, and the compiler chooses which function to execute, based on the parameters the function was called with.
Is there any programming language (including exotic ones) that uses variable overloading, where multiple variables with the same name but different types can be created and the compiler chooses the variable based on the required type?
E.g.
int x = 1;
String x = "test";
print(x); // prints "test" because the print function requires a string.
I can't think of a reason why you would want this, so the question is purely academic.

Related

About Kotlin and functions

So I've started learning Kotlin and I have a question about functions.
In Kotlin you can do the JavaScript thing of creating a variable that can hold any type. But functions need to have their parameters typed.
So is the practice in Kotlin to type all variables anyway?
Is it not kind of pointless allowing the variables to be untyped but forcing types for the parameters and return values of functions?
When you write
val x = "Pizza"
Kotlin infers from the declaration that x is a String; there is no magic going on. If you try
var x = "Pizza"
x = 42
it won't work, because x is of type String.
Kotlin compiles to the JVM, and like Java it is a statically typed language, which means the type of a variable has to be known at compile time.
Other languages, like JavaScript, are dynamically typed: variable types don't have to be known until runtime, which can make the developer's life a bit easier (or harder).
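For contrast, here is a minimal sketch in Python, a dynamically typed language, where rebinding a name to a value of a different type is perfectly legal -- exactly what the Kotlin snippet above rejects at compile time:

```python
x = "Pizza"              # x is bound to a str
print(type(x).__name__)  # str

x = 42                   # rebinding to an int is fine in a dynamic language
print(type(x).__name__)  # int
```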

How to declare variables with a type in Lua

Is it possible to create variables to be a specific type in Lua?
E.g. int x = 4
If this is not possible, is there at least some way to have a fake "type" shown before the variable so that anyone reading the code will know what type the variable is supposed to be?
E.g. function addInt(int x=4, int y=5), where x/y could still be any type of variable? I find it much easier to put the variable's type before it rather than a comment above the function to let readers know what type it is supposed to be.
The sole reason I'm asking isn't to limit the variable to a specific data type, but simply to have the ability to put a data type before the variable, whether it does anything or not, to let the reader know what type it is supposed to be, without getting an error.
You can do this using comments:
local x = 4 -- int
function addInt(x --[[int]], y --[[int]])
You can make the syntax a = int(5) from your other comment work using the following:
function int(a) return a end
function string(a) return a end
function dictionary(a) return a end
a = int(5)
b = string "hello, world!"
c = dictionary({foo = "hey"})
Still, this doesn't really offer any benefits over a comment.
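For comparison, optional type annotations in Python solve exactly this documentation problem; a sketch (by default they check nothing at runtime, so they behave much like the Lua comments above):

```python
def add_int(x: int = 4, y: int = 5) -> int:
    # The annotations document intent only; Python does not enforce them.
    return x + y

print(add_int())          # 9
print(add_int("a", "b"))  # ab -- still accepted, just like in Lua
```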
The only way I can think of to do this, would be by creating a custom type in C.
Lua Integer type
No. But I understand your goal is to improve understanding when reading and writing function calls.
Stating the expected data type of parameters adds only a little in terms of giving a specification for the function. Also, some function parameters are polymorphic, accepting a specific value, or a function or table from which to obtain the value for a context in which the function operates. See string.gsub, for example.
When reading a function call, the only thing known at the call site is the name of the variable or field whose value is being invoked as a function (sometimes read as the "name" of the function) and the expressions being passed as actual parameters. It is sometimes helpful to refactor parameter expressions into named local variables to add to the readability.
When writing a function call, the name of the function is key. The names of the formal parameters are also helpful. But still, names (like types) do not comprise much of a specification. The most help comes from embedded structured documentation used in conjunction with an IDE that infers the context of a name and performs content assistance and presentations of available documentation.
luadoc is one such system of documentation. You can write luadoc for the functions you declare.
Eclipse Koneki LDT is one such IDE. Due to the dynamic nature of Lua, it is a difficult problem, so LDT is not always as helpful as one would like. (To be clear, LDT does not use luadoc; it evolved its own embedded documentation system.)

Why would a programming language need such a way to declare variables?

I'm learning C and I kinda know how to program on Mathematica.
On Mathematica, I can declare a variable simply by writing:
a=9
a="b"
a=9.5
And it seems that Mathematica understands naturally what kind of variable this is simply by reading it and finding some kind of pattern in it (int, char, float). I guess Python has the same feature.
While on C, I must say what it is first:
int num;
char ch;
float f;
num=9;
ch='b';
f=9.5;
I'm aware that this extends to other languages. So my question is: why would a programming language need this kind of variable declaration?
References on the topic are going to be very useful.
Mathematica, Python, and other "dynamically typed" languages have variables that consist of a value and a type, whereas "statically typed" languages like C have variables that consist of just a value. This means not only that less memory is needed for storing variables, but also that dynamically typed languages have to set and examine the type tag at runtime to know what kind of value a variable contains, whereas in statically typed languages the type, and thus what operations can or need to be performed on it, is known at compile time. As a result, statically typed languages are typically considerably faster.
In addition, modern statically typed languages (such as C# and C++11) have type inference, which often makes it unnecessary to mention the type. In some very advanced statically typed languages with type inference like Haskell, large programs can be written without ever specifying a type, providing the efficiency of statically typed languages with the terseness and convenience of dynamically typed languages.
Declarations are necessary for C to be compiled into an efficient binary. Basically, it's a big part of why C is much faster than Mathematica.
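A sketch in Python of what it means for a variable to carry its type at runtime: the interpreter keeps a type tag with every value and must consult it before each operation.

```python
a = 9
print(type(a))   # <class 'int'>
a = "b"
print(type(a))   # <class 'str'>
a = 9.5
print(type(a))   # <class 'float'>

# Every use of `+` on `a` must first inspect this tag to choose between
# integer addition, float addition, string concatenation, and so on.
```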
In contrast to what most other answers seem to suggest, this has little to do with types, which could easily be inferred (let alone efficiency). It is all about unambiguous semantics of scoping.
In a language that allows non-trivial nesting of language constructs it is important to have clear rules about where a variable belongs, and which identifiers refer to the same variable. For that, every variable needs an unambiguous scope that defines where it is visible. Without explicit declarations of variables (whether with or without type annotations) that is not possible in the general case.
Consider a simple function (you can construct similar examples with other forms of nested scope):
function f() {
  i = 0
  while (i < 10) {
    doSomething()
    i = i + 1
  }
}

function g() {
  i = 0
  while (i < 20) {
    f()
    i = i + 1
  }
}
What happens? To tell, you need to know where i will be bound: in the global scope or in the local function scopes? The latter implies that the variables in both functions are completely separate, whereas the former makes them share -- and makes this particular example loop forever (although the global scope may be what is intended in other examples).
Contrast the above with
function f() {
  var i = 0
  while (i < 10) {
    doSomething()
    i = i + 1
  }
}

function g() {
  var i = 0
  while (i < 20) {
    f()
    i = i + 1
  }
}
vs
var i

function f() {
  i = 0
  while (i < 10) {
    doSomething()
    i = i + 1
  }
}

function g() {
  i = 0
  while (i < 20) {
    f()
    i = i + 1
  }
}
which makes the different possible meanings perfectly clear.
In general, there are no good rules that are able to (1) guess what the programmer really meant, and (2) are sufficiently stable under program extensions or refactorings. It gets nastier the bigger and more complex programs get.
The only way to avoid hairy ambiguities and surprising errors is to require explicit declarations of variables -- which is what all reasonable languages do. (This is language design 101 and has been for 50 years, which, unfortunately, doesn't prevent new generations of language "designers" from repeating the same old mistake over and over again, especially in so-called scripting languages. Until they learn the lesson the hard way and correct the mistake, e.g. JavaScript in ES6.)
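Python is a concrete example of this trade-off: there are no variable declarations, so an assignment inside a function creates a local by default, and you must opt into the surrounding scope explicitly with the `global` keyword. A sketch of the example above:

```python
i = 0  # module-level variable

def f():
    i = 0  # creates a *new* local i; the module-level i is untouched
    while i < 10:
        i += 1
    return i

def g():
    global i  # explicitly opt into the module-level i
    i = 0
    while i < 20:
        f()
        i += 1
    return i

print(f())  # 10
print(g())  # 20 -- and it terminates, because f's i is local
```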
Variable types are necessary for the compiler to be able to verify that correct value types are assigned to a variable. The underlying needs vary from language to language.
This doesn't have anything to do with types. Consider JavaScript, which has variable declarations without types:
var x = 8;
y = 8;
The reason is that JavaScript needs to know whether you are referring to an old name, or creating a new one. The above code would leave any existing xs untouched, but would destroy the old contents of an existing y in a surrounding scope.
For a more extensive example, see this answer.
Fundamentally, you are comparing two kinds of languages: ones that are executed by high-level virtual machines or interpreters (Python, Mathematica, ...) and others that are compiled down to native binaries (C, C++, ...) executed on a physical machine. If you can define your own virtual machine, you have enormous flexibility in how dynamic and powerful your language is, whereas a physical machine is quite limited, with a small set of basic structures and instructions.
While it's true that some languages that are compiled to virtual machines or interpreted still require types to be declared (Java, C#), this is done largely for performance: imagine having to deduce every type at run time, or having to use a base type for every possible value. That would consume quite a bit of resources, not to mention make it hard to implement a JIT (just-in-time compiler) that runs some things natively.

Is currying the same as overloading?

Is currying for functional programming the same as overloading for OO programming? If not, why? (with examples if possible)
Thanks
Currying is not specific to functional programming, and overloading is not specific to object-oriented programming.
"Currying" is the use of functions to which you can pass fewer arguments than required to obtain a function of the remaining arguments. i.e. if we have a function plus which takes two integer arguments and returns their sum, then we can pass the single argument 1 to plus and the result is a function for adding 1 to things.
In Haskellish syntax (with function application by adjacency):
plusOne = plusCurried 1
three = plusOne 2
four = plusCurried 2 2
five = plusUncurried 2 3
In vaguely Cish syntax (with function application by parentheses):
plusOne = plusCurried(1)
three = plusOne(2)
four = plusCurried(2)(2)
five = plusUncurried(2, 3)
You can see in both of these examples that plusCurried is invoked on only one argument, and the result is something that can be bound to a variable and then invoked on another argument.
The reason you think of currying as a functional-programming concept is that it sees the most use in functional languages whose syntax has application by adjacency, because in that syntax currying becomes very natural. The applications of plusCurried and plusUncurried to define four and five in the Haskellish syntax merge to become completely indistinguishable, so you can just have all functions be fully curried always (i.e. have every function be a function of exactly one argument, where some of them return other functions that can then be applied to more arguments). Whereas in the Cish syntax with application by parenthesised argument lists, the definitions of four and five look completely different, so you need to distinguish between plusCurried and plusUncurried.
Also, the imperative languages that led to today's object-oriented languages never had the ability to bind functions to variables or pass them to other functions (this is known as having first-class functions), and without that facility there's nothing you can actually do with a curried function other than invoke it on all its arguments, and so no point in having them. Some of today's OO languages still don't have first-class functions, or only gained them recently.
The term currying also refers to the process of turning a function of multiple arguments into one that takes a single argument and returns another function (which takes a single argument, and may return another function which ...), and "uncurrying" can refer to the process of doing the reverse conversion.
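The two directions can be sketched in Python (the names plus_curried and plus_uncurried mirror the pseudocode above):

```python
def plus_uncurried(a, b):
    return a + b

def plus_curried(a):
    def add_a(b):          # closes over a
        return a + b
    return add_a

plus_one = plus_curried(1)
three = plus_one(2)          # 3
four = plus_curried(2)(2)    # 4
five = plus_uncurried(2, 3)  # 5
```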
Overloading is an entirely unrelated concept. Overloading a name means giving multiple definitions with different characteristics (argument types, number of arguments, return type, etc), and have the compiler resolve which definition is meant by a given appearance of the name by the context in which it appears.
A fairly obvious example of this is that we could define plus to add integers, but also use the same name plus for adding floating point numbers, and we could potentially use it for concatenating strings, arrays, lists, etc, or to add vectors or matrices. All of these have very different implementations that have nothing to do with each other as far as the language implementation is concerned, but we just happened to give them the same name. The compiler is then responsible for figuring out that plus stringA stringB should call the string plus (and return a string), while plus intX intY should call the integer plus (and return an integer).
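As a sketch, Python has no overloading of plain function names, but `functools.singledispatch` lets the runtime choose an implementation by the type of the first argument, approximating the compile-time resolution described above:

```python
from functools import singledispatch

@singledispatch
def plus(a, b):
    raise TypeError(f"no plus defined for {type(a).__name__}")

@plus.register
def _(a: int, b):       # chosen when the first argument is an int
    return a + b

@plus.register
def _(a: str, b):       # chosen when the first argument is a str
    return a + b        # string concatenation

print(plus(2, 3))          # 5
print(plus("foo", "bar"))  # foobar
```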
Again, there is no inherent reason why this concept is an "OO concept" rather than a functional programming concept. It simply happened that it fit quite naturally in statically typed object-oriented languages that were developed; if you're already resolving which method to call by the object that the method is invoked on, then it's a small stretch to allow more general overloading. Completely ad-hoc overloading (where you do nothing more than define the same name multiple times and trust the compiler to figure it out) doesn't fit as nicely in languages with first-class functions, because when you pass the overloaded name as a function itself you don't have the calling context to help you figure out which definition is intended (and programmers may get confused if what they really wanted was to pass all the overloaded definitions). Haskell developed type classes as a more principled way of using overloading; these effectively do allow you to pass all the overloaded definitions at once, and also allow the type system to express types a bit like "any type for which the functions f and g are defined".
In summary:
currying and overloading are completely unrelated
currying is about applying functions to fewer arguments than they require in order to get a function of the remaining arguments
overloading is about providing multiple definitions for the same name and having the compiler select which definition is used each time the name is used
neither currying nor overloading are specific to either functional programming or object-oriented programming; they each simply happen to be more widespread in historical languages of one kind or another because of the way the languages developed, causing them to be more useful or more obvious in one kind of language
No, they are entirely unrelated and dissimilar.
Overloading is a technique for allowing the same code to be used at different types -- often known in functional programming as polymorphism (of various forms).
A polymorphic function:
map :: (a -> b) -> [a] -> [b]
map f [] = []
map f (x:xs) = f x : map f xs
Here, map is a function that operates on any list. It is polymorphic -- it works just as well with a list of Int as with a list of trees of hashtables. It is also higher-order, in that it is a function that takes a function as an argument.
Currying is the transformation of a function that takes a structure of n arguments, into a chain of functions each taking one argument.
In curried languages, you can apply any function to some of its arguments, yielding a function that takes the rest of the arguments. The partially-applied function is a closure.
And you can transform a curried function into an uncurried one (and vice versa) by applying the transformation invented by Curry and Schönfinkel.
curry :: ((a, b) -> c) -> a -> b -> c
-- curry converts an uncurried function to a curried function.
uncurry :: (a -> b -> c) -> (a, b) -> c
-- uncurry converts a curried function to a function on pairs.
Overloading is having multiple functions with the same name, having different parameters.
Currying is where a function of multiple parameters can have some of them fixed in advance, so that you may be left with a function of just one parameter, for example.
So, if you have a graphing function in 3 dimensions, you may have:
justgraphit(double[] x, double[] y, double[] z), and you want to graph it.
By currying you could have:
var fx = justgraphit(xlist)
where fx is now a function of the two remaining parameters. Then, later on, the user picks another axis (date) and you set the y, so now you have:
var fy = fx(ylist)
Then, later, you graph the information by just looping over some data, and the only variability left is the z parameter.
This makes complicated functions simpler, as you don't have to keep passing arguments that are largely fixed, so readability increases.
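The same idea can be sketched with `functools.partial` in Python; `just_graph_it` here is a hypothetical stand-in for the graphing function above:

```python
from functools import partial

def just_graph_it(x, y, z):
    # hypothetical stand-in: pretend this draws a 3D graph
    return (x, y, z)

fx = partial(just_graph_it, [1, 2, 3])       # x axis fixed
fy = partial(fx, ["2021", "2022", "2023"])   # y axis fixed too

# Only z varies in the loop now.
for zs in ([4.0], [5.0]):
    print(fy(zs))
```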

What other programming languages have a Smalltalk-like message-passing syntax?

What languages are there with a message-passing syntax similar to Smalltalk's? Objective-C is the only one I'm familiar with. Specifically, I was wondering if any other language implementations exist which allow for syntax in a form like: [anObject methodWithParam:aParam andParam:anotherParam], having messages that allow for named parameters as part of method definitions.
In general I find that this syntax can be conducive to more consistent method names that more clearly show the methods' intent, and that the price you pay in wordiness is generally worth it. I would love to know if there are any other languages that support this.
Here is a list of languages supporting keyword messages syntax similar to Smalltalk:
Objective-J, Objective-Modula-2. These are language extensions similar to Objective-C.
Strongtalk, a Smalltalk dialect with optional strong-typing
F-script, an embeddable Smalltalk dialect with APL-inspired array operations extensions.
Self
Newspeak
Slate
Atomo, an embeddable language for Haskell
In addition to the other languages mentioned here, Fancy:
osna = City new: "Osnabrück"
p = Person new: "Christopher" age: 23 city: osna
p println
berlin = City new: "Berlin"
p go_to: berlin
p println
See e.g. Self.
Also, many languages support optional named parameters, e.g. Python or C# (starting with v4).
Python and Common Lisp (probably among others) allow for keyword arguments. You can make calls to functions which include the parameter names.
These are not equivalent to Obj-C's method names, because the keyword args ignore position, but they answer your question about readability.*
make_an_omelet(num_eggs=2, cheese=u"Gruyère", mushrooms=True)
(make-a-salad :greens 'boston-lettuce
              :dressing 'red-wine-vinaigrette
              :other-ingredients '(hazelnuts dried-apricots))
This is not, of course, message passing, just plain old function calling.
*They have other uses than this, such as specifying default values.
Ada supports named parameters.
function Minimum (A, B : Integer) return Integer is
begin
   if A <= B then
      return A;
   else
      return B;
   end if;
end Minimum;
Then call:
Answer := Minimum (A=>100, B=>124);
Ruby can send messages to objects in order to call their methods, pretty much like objc does:
class Foo
  def bar(a, b)
    a + b
  end
end

f = Foo.new
f.send(:bar, 4, 5)
# => 9
Indeed, among other things, this is what makes the integration between Cocoa and Ruby possible in MacRuby.
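Python can do the analogous dynamic lookup with `getattr`, finding the method by name at runtime -- a rough analog of Ruby's `send`, though not keyword-message syntax:

```python
class Foo:
    def bar(self, a, b):
        return a + b

f = Foo()
result = getattr(f, "bar")(4, 5)   # look up "bar" by name, then call it
print(result)  # 9
```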
Erlang does not claim to be object-oriented, but message passing is a key concept in that language.