Is there a language that enables variable types to be changed?

I'm a pretty junior-level developer (first-year CS student) and I've been learning about the differences between statically typed and dynamically typed languages. Correct me if I'm wrong, but it's my understanding that a dynamically typed language allows the programmer to initialize a variable without giving it a type, then give that variable a type later in the program. Just for the sake of curiosity, are there any languages out there that allow you to change the type/class of an object without initializing a brand new variable?

I think that what you're looking for is weak typing. Note that weak vs. strong typing is not the same as static vs. dynamic typing.

It all depends on what you call a brand new variable. For example, in PHP:
<?php
$var = NULL; // $var is now of type null
$var = 1; // $var is now of type integer
?>
And so on. However, there is no guarantee that the space previously used for storing the NULL value is now used for storing the 1, so you could say that you just got yourself a brand new variable with the same name.

It depends on how you define types, but JavaScript doesn't have "classes" and allows you to easily change the interface of an object.
I don't know of any language with a strong OO basis that allows you to do something like:
typeof dog // Dog
dog.turnIntoCat()
typeof dog // Cat
However almost all OO languages support something like:
typeof dog // Dog
cat = dog.turnIntoCat()
typeof cat // Cat
And I think all dynamically typed languages (at least all that I know of) allow this:
typeof dog // Dog
dog = new Cat()
typeof dog // Cat

There are a lot of definitions of static/dynamic typing and strong/weak typing, so it's hard to answer any general question very concretely. That being said, the (very high level) definition I use for them tends to convey the general idea fairly well (at least, I think so).
Static vs Dynamic Typing
A statically typed language applies types to variables. The variable count can be defined as an integer. It can only hold integer values.
A dynamically typed language applies types to values, but not variables. The value 123 is an integer and "abc" is a string, but the variable result could be assigned to either or both at different points in time.
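A rough illustration in TypeScript (count and result are made-up names); a plain annotation behaves statically, while the any escape hatch behaves like a dynamically typed variable:
// Static typing: the variable itself carries the type.
let count: number = 123;
// count = "abc";        // rejected at compile time - count may only hold numbers

// Dynamic-style typing: the value carries the type, the variable does not.
let result: any = 123;   // currently holds an integer value
result = "abc";          // now holds a string value - perfectly legal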
Strong vs Weak Typing
In a strongly typed language, a value has a type and it is only that type. For example, "123" is a string whereas 123 is an integer. You can't treat the string as an integer or vice versa. You can convert between them (e.g. "123".toInt() or similar), but you can't just treat one type as another (i.e. the following wouldn't be valid: "123" + 456 == 579).
In a weakly typed language, a value is just a value and you can treat it as various types depending on its use. For example, you CAN say "123" + 234 and get a useful result (357 or "123234", depending on the language).
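A small sketch of that difference, using JavaScript/TypeScript semantics for the weak case; the strong case is only approximated here by an explicit conversion, since a strongly typed language would reject the mixed expression outright:
// Weak typing: the number is silently coerced and the result is the string "123234".
const weak = "123" + 234;                    // "123234"

// Strong typing forces an explicit conversion before the values can be mixed.
const strong = parseInt("123", 10) + 234;    // 357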
There are a LOT of grey areas between static and dynamic, and between strong and weak, but the definitions above give a general idea.
On a related topic, there's also explicit vs. implicit typing (the programmer designates types vs. the compiler figures them out), which is a really interesting topic all on its own.

Related

Object-oriented Lua sans underlying associative container mechanics

Is there any syntactic sugar for using object-oriented Lua by leveraging the array part of the Lua table construct?
-- foo_index == number
local foo_index = global_bar_object_prototype.foo;
bar[foo_index]("hello world"];
--vs.
-- file 2 bar.foo type == function
bar.foo("hello world");
-- both versions call the same function with the same input
I was hoping LuaJIT would do inter-chunk string interning to optimise/cache the string-key access, giving it array-like access characteristics. However, my naive benchmark disproved that assumption. I am hoping my benchmark logic is flawed, in which case I would not need to look for syntactic sugar.
What are the idioms that give object-oriented Lua O(1) function-lookup characteristics for high-performance scripting purposes? I'm sure game interface programmers have seen these first-hand.
Not sure I understand the question, but if you're asking whether there is a way to define a bar table as an object such that
bar[foo_index]("hello")
will work, yes, there is a way: the metatable of bar should define __index so that it can take an integer as a key and return the associated method. Somewhere in the bar "constructor" you define the mapping of indices to "methods"; the __index metamethod would look up that mapping and return the function.
You would probably also add a method that takes a method name and returns the corresponding index, so the caller doesn't have to know what mapping the constructor creates:
foo_index = bar.getMethodIndex('foo')
bar[foo_index]("hello")
Another optimization allowed by Lua is:
foo_meth = bar.foo
foo_meth(bar, "hello")

Enum defining forms in Objective-C

What is the difference between
typedef enum {
...
} Name;
and
enum {
...
};
typedef NSUInteger Name;
? If functionality is the same, what is the second form good for? Isn't it unnecessarily messy?
enum is as old as C, and therefore a part of Objective-C.
It is just an explicit way of coding an int type. It's quite useful for debugging, and most newer compilers can make optimizations based on it (which you can safely ignore when writing code). It's most useful in making your code more readable (to anyone else, or to yourself after you've slept).
typedef enum {
...
} NameType;
would be followed by
NameType name;
and that's typically the preferred style for a typedef.
Your second example does not tie the typedef to the enumeration, so nothing indicates that the values you specified are the only ones a Name should hold.
Note that this does not prevent you from executing
name = 10244; // some non-valid value not listed in the enumeration
but some compilers might generate a warning in that case.
I ran across Apple's use of the following today:
enum {
NSFetchedResultsChangeInsert = 1,
NSFetchedResultsChangeDelete = 2,
NSFetchedResultsChangeMove = 3,
NSFetchedResultsChangeUpdate = 4
};
typedef NSUInteger NSFetchedResultsChangeType;
They do this because they really want NSFetchedResultsChangeType to be whatever type they have defined NSUInteger to be. This can be an int, but it can also be something else. With values of 1, 2, 3, and 4 it's somewhat irrelevant to us what the type is, but they are coding to a different level of abstraction because they are a tools provider.
You shouldn't necessarily look to Apple for coding-style hints here, though: their choices are driven by their needs as a tools provider. If you see something that looks like a cleaner/better way to code, it usually is. As Kevin mentioned, API stability is of paramount importance for them.
Edit (Jan 2013): If you have access to the WWDC 2012 session videos, you should watch Session 405 - Modern Objective-C, 6:00-10:00. There is discussion of a new syntax in the newer compiler that allows explicit sizing of the type and tight binding of values to types (borrowed from C++11):
enum NSFetchedResultsChangeType : NSUInteger {
NSFetchedResultsChangeInsert = 1,
NSFetchedResultsChangeDelete = 2,
NSFetchedResultsChangeMove = 3,
NSFetchedResultsChangeUpdate = 4
};
The former defines a type name to refer to an enum; this is the way most enums are named in C. The latter is a bit different, though, and it's prevalent in the Cocoa frameworks. There are two reasons to use it. The first is when your enum defines a bitfield: you'd want this form because when you provide a "Name" value you'll actually be providing a combination of the enum values. In other words, if you say something like
[self doSomethingWithBitfield:(Enum1 | Enum2)]
you're not passing a value of Name but rather an integer that's a combination of the two.
However, the Cocoa frameworks use this idiom even for non-bitfield values, for a very good reason: API stability. According to the C standard, the underlying integral type of an enum is required to be able to contain all values in the enum, but is otherwise chosen by the compiler. This means that adding a new enum value could change the integral type of the enum (e.g. adding -1 can make it signed, adding 6 billion can make it a long long, etc.). This is a bad thing from an API stability standpoint, because the type encodings of methods that take values of this enum could change unexpectedly and potentially break existing code and binaries. In order to prevent this, the Cocoa frameworks generally define the type as being an NSUInteger (or NSInteger if they need negative numbers), so the API and type encodings stay stable.

What's the benefit of postfixing arrays with "List" for variables?

I have seen numerous developers postfixing variable names with a "List" when it's an array. What's the point of this and would you encourage this style? For example:
// Java
String[] fileList;
String file;
// PHP
$fileList = array();
$file = '';
The idea applies to any language with support for arrays.
The idea? Readability - you can tell at a glance that the variable is a collection. The same can be achieved by pluralizing the variable name (e.g. files).
If the fact that the data type is a list is significant (assuming several different collection types), then using that postfix is the right thing to do (as opposed to simply being a collection).
I personally tend to pluralize variable names, using a list postfix only if it adds information or can't be inferred from the name otherwise (say a list of lists).
I think this is mostly a matter of taste. I tend to name things to reflect what they are or what they do. So if I have a collection or array of File instances, the variable would most probably be named files, or have a more specific name if the context allows it. Naming an array of Files fileList is, in my humble opinion, plain wrong because, at least in Java, an array is not a List. But then, the compiler won't complain...
More complex collections like a Map get names like keyToValue. So if I had a map which assigns teachers to classrooms this would be called teacherToRoom in my code. I hate grepping through the code to find out what the variables are meant to do, so I try to be as specific as needed with the names.
In conclusion, it's all about correct code, and variable names cannot influence that outcome from the compiler's perspective. But they can very well affect the outcome when it comes to humans working with the code, so it's best to do whatever works for the majority of people working on a codebase.

What is the difference between the concept of 'class' and 'type'?

I know this question has already been asked, but I didn't quite get the answer; I would like to know which is the base one, class or type. I have a few questions, please clear these up for me:
Is type the base of a programming data type?
Is type hard-coded into the language itself, while a class is something we can define ourselves?
What are untyped languages? Please give some examples.
Type is not something that falls only under OOP concepts - I mean, it is not restricted to the OOP world?
Please clear this up for me, thanks.
I haven't worked with many languages. My questions are mainly in terms of Java, C#, and Objective-C.
1/ I think "type" is essentially what people mean when they talk about a data type.
2/ No, we can define both types and classes ourselves. An object of class A has type A. For example, if we define String s = "123"; then s has type String and belongs to class String. But the reverse is not always correct.
For example:
class B {}
class A extends B {}
B b = new A();
then you can say b has type B and belongs to both classes A and B, but b doesn't have type A.
3/ An untyped language is a language that allows you to change the type of a variable, as in JavaScript:
var s = "123"; // type string
s = 123; // then type integer
4/ I don't know much about this, but I think it is not restricted to OOP; it can apply to procedural programming as well.
It may well depend on the language. I treat types and classes as the same thing in OO, only making a distinction between a class (the definition of a family of objects) and an instance (or object), a specific concrete occurrence of a class.
I come originally from a C world where there was no real difference between language-defined types like int and types that you made yourself with typedef or struct.
Likewise, in C++, there's little difference (probably none) between std::string and any class you put together yourself, other than the fact that std::string will almost certainly be bug-free by now. The same isn't always true of our own code :-)
I've heard people suggest that types are classes without methods but I don't believe that distinction (again because of my C/C++ background).
There is a fundamental difference in some languages between built-in types (integral in the sense of integrated, rather than integer) and class types: classes can be extended, but int and float (C++ examples) cannot.
In OOP languages, a class specifies the definition of an object. In many cases, that object can serve as a type for things like parameter matching in a function.
So, for an example, when you define a function, you specify the type of data that should be passed to the function and the type of data that is returned:
int AddOne(int value) { return value+1; } uses int types for the return value and the parameter being passed in.
In languages that have both, the concepts of type and class/object can almost become interchangeable. However, there are many languages that do not have both. For instance, I believe that standard C has no support for custom-defined objects, but it certainly does still have types. On the other hand, both PHP and JavaScript are examples of languages where type is very loosely defined (basically, types are either a single item, a collection/array/object, or undefined [JS only]), but they have full support for classes/objects.
Another key difference: you can have methods and custom-functions associated with a class/object, but not with a standard data-type.
Hopefully that clarified some. To answer your specific questions:
In some ways, type could be considered a base concept of programming, yes.
Yes, with the exception that classes can be treated as types in functions, as in the example above.
An untyped language is one that lets you use any type of variable interchangeably, meaning that you can handle a string with the same code that handles an int, for instance. In practice, most 'untyped' languages actually implement a concept called duck typing, so named from the saying 'if it acts like a duck, it should be treated like a duck': they attempt to use any variable as the type that makes sense for the code encountered. Again, PHP and JavaScript are two languages that do this (a small sketch follows these answers).
Very true, type is applicable outside of the OOP world.
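To make the duck-typing point in answer 3 concrete, here is a small TypeScript sketch (Duck, Mallard, Robot and makeItQuack are made-up names); TypeScript's structural typing applies the same "if it quacks, treat it as a duck" rule at compile time:
interface Duck { quack(): string; }

class Mallard { quack() { return "quack"; } }
class Robot { quack() { return "beep"; } }     // never declared as a Duck, but it quacks

function makeItQuack(d: Duck): string {
  return d.quack();                            // only the shape of the value matters
}

console.log(makeItQuack(new Mallard()));       // "quack"
console.log(makeItQuack(new Robot()));         // "beep"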

Non-nullable reference types

I'm designing a language, and I'm wondering if it's reasonable to make reference types non-nullable by default, and use "?" for nullable value and reference types. Are there any problems with this? What would you do about this:
class Foo {
Bar? b;
Bar b2;
Foo() {
b.DoSomething(); //valid, but will cause exception
b2.DoSomething(); //?
}
}
My current language design philosophy is that nullability should be something a programmer is forced to ask for, not given by default on reference types (in this, I agree with Tony Hoare - Google for his recent QCon talk).
On this specific example, with the non-nullable b2, it wouldn't even pass static checks: conservative analysis cannot guarantee that b2 isn't null, so the program is not semantically meaningful.
My ethos is simple enough. References are an indirection handle to some resource, which we can traverse to obtain access to that resource. Nullable references are either an indirection handle to a resource, or a notification that the resource is not available, and one is never sure up front which semantics are being used. This gives either a multitude of checks up front (Is it null? No? Yay!), or the inevitable NPE (or equivalent). Most programming resources are, these days, not massively resource constrained or bound to some finite underlying model - null references are, simplistically, one of...
Laziness: "I'll just bung a null in here". Which frankly, I don't have too much sympathy with
Confusion: "I don't know what to put in here yet". Typically also a legacy of older languages, where you had to declare your resource names before you knew what your resources were.
Errors: "It went wrong, here's a NULL". Better error reporting mechanisms are thus essential in a language
A hole: "I know I'll have something soon, give me a placeholder". This has more merit, and we can think of ways to combat this.
Of course, solving each of the cases that NULL currently caters for with a better linguistic choice is no small feat, and may add more confusion than it helps. We can always move to immutable resources, so NULL in its only useful states (error, and hole) isn't much real use. Imperative techniques are here to stay, though, and I'm frankly glad - this makes the search for better solutions in this space worthwhile.
Having reference types be non-nullable by default is the only reasonable choice. We are plagued by languages and runtimes that have screwed this up; you should do the Right Thing.
This feature was in Spec#. They defaulted to nullable references and used ! to indicate non-nullables. This was because they wanted backward compatibility.
In my dream language (of which I'd probably be the only user!) I'd make the same choice as you, non-nullable by default.
I would also make it illegal to use the . operator on a nullable reference (or anything else that would dereference it). How would you use them? You'd have to convert them to non-nullables first. How would you do this? By testing them for null.
In Java and C#, the if statement can only accept a bool test expression. I'd extend it to accept the name of a nullable reference variable:
if (myObj)
{
// in this scope, myObj is non-nullable, so can be used
}
This special syntax would be unsurprising to C/C++ programmers. I'd prefer a special syntax like this to make it clear that we are doing a check that modifies the type of the name myObj within the truth-branch.
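For what it's worth, TypeScript with strictNullChecks implements essentially this narrowing today; a minimal sketch, with MyClass and myObj as illustrative names:
class MyClass { doSomething() { console.log("done"); } }

let myObj: MyClass | null = Math.random() > 0.5 ? new MyClass() : null;

// myObj.doSomething();   // compile error: myObj is possibly null

if (myObj) {
  myObj.doSomething();    // fine: inside this branch the type is narrowed to MyClass
}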
I'd add a further bit of sugar:
if (SomeMethodReturningANullable() into anotherObj)
{
// anotherObj is non-nullable, so can be used
}
This just gives the name anotherObj to the result of the expression on the left of the into, so it can be used in the scope where it is valid.
I'd do the same kind of thing for the ?: operator.
string message = GetMessage() into m ? m : "No message available";
Note that string message is non-nullable, but so are the two possible results of the test above, so the assignment is valid.
And then maybe a bit of sugar for the presumably common case of substituting a value for null:
string message = GetMessage() or "No message available";
Obviously or would only be validly applied to a nullable type on the left side, and a non-nullable on the right side.
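For comparison, TypeScript/JavaScript later added a nullish-coalescing operator, ??, that behaves much like this proposed or (Kotlin spells the same idea ?:); getMessage below is a made-up stand-in:
function getMessage(): string | null {
  return Math.random() > 0.5 ? "hello" : null;
}

// message is non-nullable: either the real value or the fallback.
const message: string = getMessage() ?? "No message available";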
(I'd also have a built-in notion of ownership for instance fields; the compiler would generate the IDisposable.Dispose method automatically, and the ~Destructor syntax would be used to augment Dispose, exactly as in C++/CLI.)
Spec# had another syntactic extension related to non-nullables, due to the problem of ensuring that non-nullables had been initialized correctly during construction:
class SpecSharpExampleClass
{
private string! _nonNullableExampleField;
public SpecSharpExampleClass(string s)
: _nonNullableExampleField(s)
{
}
}
In other words, you have to initialize fields in the same way as you'd call other constructors with base or this - unless of course you initialize them directly next to the field declaration.
Have a look at the Elvis operator proposal for Java 7. This does something similar, in that it encapsulates a null check and method dispatch in one operator, with a specified return value if the object is null. Hence:
String s = mayBeNull?.toString() ?: "null";
checks whether mayBeNull is null, assigning the string "null" to s if so, and the value of mayBeNull.toString() if not. Food for thought, perhaps.
A couple of examples of similar features in other languages:
boost::optional (C++)
Maybe (Haskell)
There's also Nullable<T> (from C#) but that is not such a good example because of the different treatment of reference vs. value types.
In your example you could add a conditional message send operator, e.g.
b?->DoSomething();
to send a message to b only if it is non-null.
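That conditional send exists today as the optional-chaining / null-conditional operator ?. in C#, Swift, Kotlin and TypeScript/JavaScript. A tiny TypeScript sketch with an illustrative Bar class:
class Bar { doSomething() { console.log("did something"); } }

const b: Bar | null = Math.random() > 0.5 ? new Bar() : null;

b?.doSomething();   // the call is made only when b is neither null nor undefined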
Have nullability be a configuration setting, enforceable in the author's source code. That way, you allow people who like nullable objects by default to enjoy them in their source code, while allowing those who want all their objects non-nullable by default to have exactly that. Additionally, provide keywords or another facility to explicitly mark which of your declarations of objects and types can be nullable and which cannot, with something like nullable and non-nullable, to override the global defaults.
For instance
/// "translation unit 1"
#set nullable
{ /// Scope of default override, making all declarations within the scope nullable implicitly
Bar bar; /// Can be null
non-null Foo foo; /// Overridden, cannot be null
nullable FooBar foobar; /// Overridden, can be null, even without the scope definition above
}
/// Same style for opposite
/// ...
/// Top-bottom, until reset by scoped-setting or simply reset to another value
#set nullable;
/// Nullable types implicitly
#clear nullable;
/// Can also use '#set nullable = false' or '#set not-nullable = true'. Ugly, but human mind is a very original, mhm, thing.
Many people argue that giving everyone what they want is impossible, but if you are designing a new language, try new things. Tony Hoare introduced the concept of null in 1965 because he could not resist (his own words), and we have been paying for it ever since (also his own words; he regrets it). The point is, smart, experienced people make mistakes that cost the rest of us; don't take anyone's advice on this page as if it were the only truth, including mine. Evaluate and think about it.
I've read many many rants on how it's us poor inexperienced programmers who really don't understand where to really use null and where not, showing us patterns and antipatterns that are meant to prevent shooting ourselves in the foot. All the while, millions of still inexperienced programmers produce more code in languages that allow null. I may be inexperienced, but I know which of my objects don't benefit from being nullable.
Here we are, 13 years later, and C# did it.
And, yes, this is the biggest improvement in languages since Barbara and Stephen invented types in 1974:
Programming With Abstract Data Types
Barbara Liskov, Massachusetts Institute of Technology, Project MAC, Cambridge, Massachusetts
Stephen Zilles, Cambridge Systems Group, IBM Systems Development Division, Cambridge, Massachusetts
Abstract
The motivation behind the work in very-high-level languages is to ease the programming task by providing the programmer with a language containing primitives or abstractions suitable to his problem area. The programmer is then able to spend his effort in the right place; he concentrates on solving his problem, and the resulting program will be more reliable as a result. Clearly, this is a worthwhile goal. Unfortunately, it is very difficult for a designer to select in advance all the abstractions which the users of his language might need. If a language is to be used at all, it is likely to be used to solve problems which its designer did not envision, and for which the abstractions embedded in the language are not sufficient. This paper presents an approach which allows the set of built-in abstractions to be augmented when the need for a new data abstraction is discovered. This approach to the handling of abstraction is an outgrowth of work on designing a language for structured programming. Relevant aspects of this language are described, and examples of the use and definitions of abstractions are given.
I think null values are good: they are a clear indication that you did something wrong. If you fail to initialize a reference somewhere, you'll get an immediate notice.
The alternative would be that values are sometimes initialized to a default value. Logical errors are then a lot more difficult to detect, unless you put detection logic in those default values, which would amount to the same thing as getting a null pointer exception.
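A small TypeScript sketch of that trade-off (Account and owner are made-up names): the null fails loudly at the point of use, while a silently supplied default keeps the program running with wrong data:
class Account {
  owner: string | null = null;   // someone forgot to set this
}

const a = new Account();

// Null as an error signal: this is a compile error under strictNullChecks,
// and a runtime TypeError without them - either way the mistake is visible.
// console.log(a.owner.length);

// Default value instead: the program keeps going and the logic error hides.
const owner = a.owner ?? "";
console.log(owner.length);       // prints 0 and nobody notices anything is wrong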