Why would a programming language need such a way of declaring variables?

I'm learning C, and I know a bit of programming in Mathematica.
In Mathematica, I can declare a variable simply by writing:
a=9
a="b"
a=9.5
And it seems that Mathematica naturally understands what kind of value it is (int, char, float) simply by reading it and recognizing some kind of pattern. I guess Python has the same feature.
In C, however, I must say what it is first:
int num;
char ch;
float f;
num=9;
ch='b';
f=9.5;
I'm aware that this extends to other languages. So my question is: why would a programming language need this kind of variable declaration?
References on the topic would be very useful.

Mathematica, Python, and other "dynamically typed" languages have variables that consist of a value plus a type tag, whereas in "statically typed" languages like C a variable is just a value. Not only does this mean that less memory is needed for storing variables, but dynamically typed languages have to set and examine the type tag at runtime to know what kind of value the variable contains, whereas with statically typed languages the type, and thus what operations can be or need to be performed on it, is known at compile time. As a result, statically typed languages are generally considerably faster.
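To make this concrete, here is a rough sketch (in C++, with invented names) of what a dynamically typed variable amounts to under the hood: every operation has to inspect a type tag at runtime, whereas a C int is just the bytes of the value.

#include <iostream>

// Hypothetical model of a dynamically typed variable: a value plus a type tag.
enum class Tag { Int, Float, Str };

struct DynValue {
    Tag tag;
    union {
        int i;
        double f;
        const char* s;
    };
};

// Every operation must check the tag at runtime before touching the value.
void print(const DynValue& v) {
    switch (v.tag) {
        case Tag::Int:   std::cout << v.i << "\n"; break;
        case Tag::Float: std::cout << v.f << "\n"; break;
        case Tag::Str:   std::cout << v.s << "\n"; break;
    }
}

int main() {
    DynValue a;
    a.tag = Tag::Int; a.i = 9;     // a = 9
    print(a);
    a.tag = Tag::Str; a.s = "b";   // a = "b" (the type changes at runtime)
    print(a);
}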
In addition, modern statically typed languages (such as C# and C++11) have type inference, which often makes it unnecessary to mention the type. In some very advanced statically typed languages with type inference like Haskell, large programs can be written without ever specifying a type, providing the efficiency of statically typed languages with the terseness and convenience of dynamically typed languages.
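As a small illustration (not from the question itself): with C++11's auto the type is inferred, but it is still fixed at compile time.

#include <string>
#include <vector>

int main() {
    auto n = 9;                   // deduced as int at compile time
    auto x = 9.5;                 // deduced as double
    auto s = std::string("b");    // deduced as std::string
    std::vector<int> v{1, 2, 3};
    auto it = v.begin();          // deduced iterator type, no long spelling needed
    // n = "b";                   // still rejected at compile time: n is an int
    (void)n; (void)x; (void)s; (void)it;
}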

Declarations are necessary for C to be compiled into an efficient binary. Basically, it's a big part of why C is much faster than Mathematica.

In contrast to what most other answers seem to suggest, this has little to do with types, which could easily be inferred (let alone efficiency). It is all about unambiguous semantics of scoping.
In a language that allows non-trivial nesting of language constructs it is important to have clear rules about where a variable belongs, and which identifiers refer to the same variable. For that, every variable needs an unambiguous scope that defines where it is visible. Without explicit declarations of variables (whether with or without type annotations) that is not possible in the general case.
Consider a simple function (you can construct similar examples with other forms of nested scope):
function f() {
    i = 0
    while (i < 10) {
        doSomething()
        i = i + 1
    }
}
function g() {
    i = 0
    while (i < 20) {
        f()
        i = i + 1
    }
}
What happens? To tell, you need to know where i will be bound: in the global scope or in the local function scopes? The latter implies that the variables in both functions are completely separate, whereas the former will make them share, and will make this particular example loop forever (although the global scope may be what is intended in other examples).
Contrast the above with
function f() {
    var i = 0
    while (i < 10) {
        doSomething()
        i = i + 1
    }
}
function g() {
    var i = 0
    while (i < 20) {
        f()
        i = i + 1
    }
}
vs
var i
function f() {
    i = 0
    while (i < 10) {
        doSomething()
        i = i + 1
    }
}
function g() {
    i = 0
    while (i < 20) {
        f()
        i = i + 1
    }
}
which makes the different possible meanings perfectly clear.
In general, there are no good rules that are able to (1) guess what the programmer really meant, and (2) are sufficiently stable under program extensions or refactorings. It gets nastier the bigger and more complex programs get.
The only way to avoid hairy ambiguities and surprising errors is to require explicit declarations of variables -- which is what all reasonable languages do. (This is language design 101 and has been for 50 years, which, unfortunately, doesn't prevent new generations of language "designers" from repeating the same old mistake over and over again, especially in so-called scripting languages. Until they learn the lesson the hard way and correct the mistake, e.g. JavaScript in ES6.)

Variable types are necessary for the compiler to be able to verify that correct value types are assigned to a variable. The underlying needs vary from language to language.

This doesn't have anything to do with types. Consider JavaScript, which has variable declarations without types:
var x = 8;
y = 8;
The reason is that JavaScript needs to know whether you are referring to an old name, or creating a new one. The above code would leave any existing xs untouched, but would destroy the old contents of an existing y in a surrounding scope.
For a more extensive example, see this answer.

Fundamentally, you are comparing two kinds of languages: those that are executed by high-level virtual machines or interpreters (Python, Mathematica, ...) and those that are compiled down to native binaries (C, C++, ...) running on a physical machine. If you can define your own virtual machine, you obviously have enormous flexibility in how dynamic and powerful your language can be, whereas a physical machine is quite limited, with a small number of structures, a basic instruction set, and so on.
While it's true that some languages that compile to virtual machines or are interpreted still require types to be declared (Java, C#), this is done mainly for performance. Imagine having to deduce the type at run time, or having to use a base type for every possible value: it would consume quite a bit of resources, not to mention make it much harder to implement a JIT (just-in-time compiler) that runs some things natively.

Related

Variable Overloading

This question originated in a discussion with a colleague and it is purely academic.
Is there any programming language that has variable overloading?
In Java and many other languages, there is function overloading, where multiple functions/methods with the same name can be declared, and the compiler chooses which function to execute, based on the parameters the function was called with.
Is there any programming language (including exotic ones) that uses variable overloading, where multiple variables with the same name but different types can be created and the compiler chooses the variable based on the required type?
E.g.
int x = 1;
String x = "test";
print(x); // prints "test" because the print function requires a string.
I can't think of a reason why you would want this, so the question is purely academic.

Most appropriate data structure for dynamic language field access

I'm implementing a dynamic language that will compile to C#, and it's implementing its own reflection API (.NET's is too slow, and the DLR is limited only to more recent and resourceful implementations).
For this, I've implemented a simple .GetField(string f) and .SetField(string f, object val) interface. Until recently, the implementation just switched over all possible field string values and performed the corresponding action.
Also, this dynamic language has the possibility to define anonymous objects. For those anonymous objects, at first, I had implemented a simple hash algorithm.
By now, I am looking for ways to optimize the dynamic parts of the language, and I have come to the conclusion that a hash algorithm for anonymous objects would be overkill. This is because the objects are usually small: I'd say they contain 2 or 3 fields, normally. Very rarely would they contain more than 15 fields. It would take more time to actually hash the string and perform the lookup than to simply test for equality against them all. (This is not tested, just theoretical.)
The first thing I did was to create, at compile time, a red-black tree for each anonymous object declaration and lay it out in an array, so that the object can look fields up in a very optimized way.
I am still divided, though, on whether that's the best way to do this. I could go for a perfect hashing function. Even more radically, I'm thinking about dropping the need for strings and actually working with a struct of 2 longs.
Those two longs would be encoded to support 10 chars (A-Za-z0-9_) each, which is mostly a good prediction of the size of the fields. For fields larger than this, a special (slower) function receiving a string would also be provided.
The result will be that strings will be inlined (not references), and their comparisons will be as cheap as a long comparison.
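Hypothetically, the packing described above might look something like this (the names and bit layout here are assumptions for illustration, not the poster's actual code):

#include <cstdint>
#include <iostream>
#include <optional>
#include <string>

// Map one identifier character to a 6-bit code (1..63); 0 means "empty slot".
static std::optional<uint64_t> charCode(char c) {
    if (c >= 'A' && c <= 'Z') return 1 + (c - 'A');    // 1..26
    if (c >= 'a' && c <= 'z') return 27 + (c - 'a');   // 27..52
    if (c >= '0' && c <= '9') return 53 + (c - '0');   // 53..62
    if (c == '_') return 63;
    return std::nullopt;                               // not encodable
}

// Pack a field name of up to 10 chars from [A-Za-z0-9_] into one 64-bit key.
// Returns nullopt for longer or non-conforming names (fall back to the slow path).
std::optional<uint64_t> packFieldName(const std::string& name) {
    if (name.size() > 10) return std::nullopt;
    uint64_t key = 0;
    for (char c : name) {
        auto code = charCode(c);
        if (!code) return std::nullopt;
        key = (key << 6) | *code;                      // 6 bits per character
    }
    return key;
}

int main() {
    auto a = packFieldName("foo");
    auto b = packFieldName("bar");
    // Comparing two field names is now a single integer comparison.
    std::cout << (a && b && *a == *b ? "same" : "different") << "\n";
}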
Anyway, it's a little hard to find good information about this kind of optimization, since it is normally considered at the VM level, not in a static-language compilation setting.
Does anyone have any thoughts or tips about the best data structure to handle dynamic calls?
Edit:
For now, I'm really going with the string as long representation and a linear binary tree lookup.
I don't know if this is helpful, but I'll chuck it out in case;
If this is compiling to C#, do you know the complete list of fields at compile time? So as an idea, if your code reads
// dynamic
myObject.foo = "some value";
myObject.bar = 32;
then during the parse, your symbol table can build an int for each field name;
// parsing code
symbols[0] == "foo"
symbols[1] == "bar"
then generate code using arrays or lists;
// generated c#
runtimeObject[0] = "some value"; // assign myobject.foo
runtimeObject[1] = 32; // assign myobject.bar
and build up reflection as a separate array;
runtimeObject.FieldNames[0] == "foo"; // Dictionary<int, string>
runtimeObject.FieldIds["foo"] === 0; // Dictionary<string, int>
As I say, thrown out in the hope it'll be useful. No idea if it will!
Since you are likely to be using the same field and method names repeatedly, something like string interning would work well to quickly generate keys for your hash tables. It would also make string equality comparisons constant-time.
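A minimal sketch of that interning idea (invented names, C++ used just for illustration):

#include <string>
#include <unordered_map>
#include <vector>

// Intern field/method names once, so keys become small integers and
// equality checks between names are constant time.
class SymbolTable {
public:
    int intern(const std::string& name) {
        auto it = ids_.find(name);
        if (it != ids_.end()) return it->second;   // already interned
        int id = static_cast<int>(names_.size());
        ids_.emplace(name, id);
        names_.push_back(name);
        return id;
    }
    const std::string& name(int id) const { return names_[id]; }
private:
    std::unordered_map<std::string, int> ids_;
    std::vector<std::string> names_;
};

int main() {
    SymbolTable syms;
    int a = syms.intern("foo");
    int b = syms.intern("foo");
    return a == b ? 0 : 1;   // the same name always maps to the same key
}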
For such a small data set (an expected upper bound of 15) I think almost any hashing will be more expensive than a tree or even a list lookup, but that is really dependent on your hashing algorithm.
If you want to use a dictionary/hash then you'll need to make sure the objects you use for the key return a hash code quickly (perhaps a single constant hash code that's built once). If you can prevent collisions inside of an object (sounds pretty doable) then you'll gain the speed and scalability (well for any realistic object/class size) of a hash table.
Something that comes to mind is Ruby's symbols and message passing. I believe Ruby's symbols act essentially as constant references to memory, so comparison is constant-time, they are very lightweight, and you can use symbols like variables (I'm a little hazy on this and don't have a Ruby interpreter on this machine). Ruby's method "calling" really turns into message passing: something like obj.func(arg) turns into obj.send(:func, arg) (":func" is the symbol). I would imagine that the symbol makes looking up the message handler (as I'll call it) inside the object pretty efficient, since its hash code most likely doesn't need to be calculated the way it does for most objects.
Perhaps something similar could be done in .NET.

Can you write any algorithm without an if statement?

This site tickled my sense of humour - http://www.antiifcampaign.com/ but can polymorphism work in every case where you would use an if statement?
Smalltalk, which is considered a "truly" object-oriented language, has no "if" statement, and it has no "for" statement, no "while" statement. There are other examples (like Haskell), but this is a good one.
Quoting Smalltalk has no “if” statement:
Some of the audience may be thinking that this is evidence confirming their suspicions that Smalltalk is weird, but what I’m going to tell you is this:
An “if” statement is an abomination in an Object Oriented language.
Why? Well, an OO language is composed of classes, objects and methods, and an “if” statement is inescapably none of those. You can’t write “if” in an OO way. It shouldn’t exist.
Conditional execution, like everything else, should be a method. A method of what? Boolean.
Now, funnily enough, in Smalltalk, Boolean has a method called ifTrue:ifFalse: (that name will look pretty odd now, but pass over it for now). It’s abstract in Boolean, but Boolean has two subclasses: True and False. The method is passed two blocks of code. In True, the method simply runs the code for the true case. In False, it runs the code for the false case. Here’s an example that hopefully explains:
(x >= 0) ifTrue: [
'Positive'
] ifFalse: [
'Negative'
]
You should be able to see ifTrue: and ifFalse: in there. Don’t worry that they’re not together.
The expression (x >= 0) evaluates to true or false. Say it’s true, then we have:
true ifTrue: [
'Positive'
] ifFalse: [
'Negative'
]
I hope that it’s fairly obvious that that will produce ‘Positive’.
If it was false, we’d have:
false ifTrue: [
'Positive'
] ifFalse: [
'Negative'
]
That produces ‘Negative’.
OK, that’s how it’s done. What’s so great about it? Well, in what other language can you do this? More seriously, the answer is that there aren’t any special cases in this language. Everything can be done in an OO way, and everything is done in an OO way.
I definitely recommend reading the whole post and Code is an object from the same author as well.
That website is against using if statements for checking if an object has a specific type. This is completely different from if (foo == 5). It's bad to use ifs like if (foo instanceof pickle). The alternative, using polymorphism instead, promotes encapsulation, making code infinitely easier to debug, maintain, and extend.
Being against ifs in general (doing a certain thing based on a condition) will gain you nothing. Notice how all the other answers here still make decisions, so what's really the difference?
Explanation of the why behind polymorphism:
Take this situation:
void draw(Shape s) {
    if (s instanceof Rectangle)
        //treat s as rectangle
    if (s instanceof Circle)
        //treat s as circle
}
It's much better if you don't have to worry about the specific type of an object, generalizing how objects are processed:
void draw(Shape s) {
    s.draw();
}
This moves the logic of how to draw a shape into the shape class itself, so we can now treat all shapes the same. This way if we want to add a new type of shape, all we have to do is write the class and give it a draw method instead of modifying every conditional list in the whole program.
This idea is everywhere in programming today, the whole concept of interfaces is all about polymorphism. (Shape is an interface defining a certain behavior, allowing us to process any type that implements the Shape interface in our method.) Dynamic programming languages take this even further, allowing us to pass any type that supports the necessary actions into a method. Which looks better to you? (Python-style pseudo-code)
def multiply(a, b):
    if (a is string and b is int):
        # repeat a b times
    if (a is int and b is int):
        # multiply a and b
or using polymorphism:
def multiply(a, b):
    return a * b
You can now use any 2 types that support the * operator, allowing you to use the method with types that haven't even been created yet.
See polymorphism and what is polymorphism.
Though not OOP-related: In Prolog, the only way to write your whole application is without if statements.
Yes, actually: you can have a Turing-complete language that has no "if" per se and only allows "while" statements:
http://cseweb.ucsd.edu/classes/fa08/cse200/while.html
As for OO design, it makes sense to use an inheritance pattern rather than switches based on a type field in certain cases... That's not always feasible or necessarily desirable though.
#ennuikiller: conditionals would just be a matter of syntactic sugar:
if (test) body; is equivalent to x=test; while (x) {x=nil; body;}
if-then-else is a little more verbose:
if (test) ifBody; else elseBody;
is equivalent to
x = test; y = true;
while (x) {x = nil; y = nil; ifBody;}
while (y) {y = nil; elseBody;}
The primitive data structure is a list of lists. You could say two scalars are equal if they are lists of the same length. You would loop over them simultaneously using the head/tail operators and see if they stop at the same point.
Of course that could all be wrapped up in macros.
The simplest Turing-complete language is probably Iota. It contains only 2 symbols ('i' and '*').
Yep. if statements imply branches, which can be very costly on a lot of modern processors, particularly PowerPC. Many modern PCs do a lot of pipeline re-ordering, so branch mis-predictions can cost on the order of 30+ cycles per branch miss.
In console programming it's sometimes faster to just execute the code and ignore the result than to check whether you should execute it!
Simple branch avoidance in C:
if (++i >= 16)
{
    i = 0;
}
can be re-written as
i = (i + 1) & 15;
However, if you want to see some real anti-if fu then read this
Oh and on the OOP question - I'll replace a branch mis-prediction with a virtual function call? No thanks....
The reasoning behind the "anti-if" campaign is similar to what Kent Beck said:
Good code invariably has small methods and small objects. Only by factoring the system into many small pieces of state and function can you hope to satisfy the “once and only once” rule. I get lots of resistance to this idea, especially from experienced developers, but no one thing I do to systems provides as much help as breaking it into more pieces.
If you don't know how to factor a program with composition and inheritance, then your classes and methods will tend to grow bigger over time. When you need to make a change, the easiest thing will be to add an IF somewhere. Add too many IFs, and your program will become less and less maintainable, and still the easiest thing will be to add more IFs.
You don't have to turn every IF into an object collaboration; but it's a very good thing when you know how to :-)
You can define True and False with objects (in a pseudo-python):
class True:
def if(then,else):
return then
def or(a):
return True()
def and(a):
return a
def not():
return False()
class False:
def if(then,else):
return false
def or(a):
return a
def and(a):
return False()
def not():
return True()
I think it is an elegant way to construct booleans, and it proves that you can replace every if by polymorphism, but that's not the point of the anti-if campaign. The goal is to avoid writing things such as (in a pathfinding algorithm):
if type == Block or type == Player:
    # You can't pass through this
else:
    # You can
But rather call an is_traversable method on each object. In a sense, that's exactly the inverse of pattern matching. "if" is useful, but in some cases it is not the best solution.
I assume you are actually asking about replacing if statements that check types, as opposed to replacing all if statements.
To replace an if with polymorphism requires a method in a common supertype you can use for dispatching, either by overriding it directly, or by reusing overridden methods as in the visitor pattern.
But what if there is no such method, and you can't add one to a common supertype because the super types are not maintained by you? Would you really go to the lengths of introducing a new supertype along with subtypes just to get rid of a single if? That would be taking purity a bit far in my opinion.
Also, both approaches (direct overriding and the visitor pattern) have their disadvantages: Overriding the method directly requires that you implement your method in the classes you want to switch on, which might not help cohesion. On the other hand, the visitor pattern is awkward if several cases share the same code. With an if you can do:
if (o instanceof OneType || o instanceof AnotherType) {
    // complicated logic goes here
}
How would you share the code with the visitor pattern? Call a common method? Where would you put that method?
So no, I don't think replacing such if statements is always an improvement. It often is, but not always.
I used to write code a lot in the way recommended by the anti-if campaign, using either callbacks in a delegate dictionary or polymorphism.
It's quite a beguiling argument, especially if you are dealing with messy code bases, but to be honest, although it's great for a plugin model or for simplifying large nested if statements, it does make navigation and readability a bit of a pain.
For example, F12 (Go To Definition) in Visual Studio will take you to an abstract class (or, in my case, an interface definition).
It also makes quick visual scanning of a class very cumbersome, and adds an overhead in setting up the delegates and lookup hashes.
Following the anti-if campaign's recommendations to the extent they appear to suggest looks like 'ooh, new shiny thing' programming to me.
As for the other constructs put forward in this thread, although they were offered in the spirit of a fun challenge, they are just substitutes for an if statement and don't really address the underlying beliefs of the anti-if campaign.
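For reference, the "delegate dictionary" approach mentioned above might look roughly like this in C++ (a sketch with made-up names, not anyone's actual code):

#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    // Map a command name to a callback instead of branching on it with a chain of ifs.
    std::unordered_map<std::string, std::function<void()>> handlers = {
        {"start", [] { std::cout << "starting\n"; }},
        {"stop",  [] { std::cout << "stopping\n"; }},
    };

    std::string cmd = "start";
    auto it = handlers.find(cmd);
    if (it != handlers.end())
        it->second();                        // dispatch without a chain of ifs
    else
        std::cout << "unknown command\n";
}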
You can avoid ifs in your business logic code if you keep them in your construction code (factories, builders, providers, etc.). Your business logic code will be much more readable, easier to understand, and easier to maintain or extend. See: http://www.youtube.com/watch?v=4F72VULWFvc
Haskell doesn't even have if statements, being pure functional. ;D
You can do it without if per se, but you can't do it without a mechanism that allows you to make a decision based on some condition.
In assembly, there's no if statement. There are conditional jumps.
In Haskell, for instance, rather than writing an explicit if you typically define a function in several clauses (using pattern matching or guards). I forgot the exact syntax, but it's something like this:
pseudo-haskell:
def posNeg(x < 0):
return "negative"
def posNeg(x == 0):
return "zero"
def posNeg(x):
return "positive"
When you call posNeg(a), the interpreter looks at the value of a: if it's < 0 it chooses the first definition, if it's == 0 it chooses the second definition, and otherwise it falls through to the third definition.
So while languages like Haskell and Smalltalk don't have the usual C-style if statement, they have other means of allowing you to make decisions.
This is actually a coding game I like to play with programming languages. It's called "if we had no if" which has its origins at: http://wiki.tcl.tk/4821
Basically, if we disallow the use of conditional constructs in the language (no if, no while, no for, no unless, no switch, etc.), can we recreate our own IF function? The answer depends on the language and what language features we can exploit (remember, using regular conditional constructs is cheating, so no ternary operators!).
For example, in Tcl, a function name is just a string and any string (including the empty string) is allowed for anything (function names, variable names etc.). So, exploiting this we can do:
proc 0 {true false} {uplevel 1 $false; # execute false code block, ignore true}
proc 1 {true false} {uplevel 1 $true; # execute true code block, ignore false}
proc _IF {boolean true false} {
    $boolean $true $false
}
#usage:
_IF [expr {1<2}] {
    puts "this is true"
} {
    #else:
    puts "this is false"
}
or in JavaScript we can abuse the loose typing and the fact that almost anything can be cast to a string, and combine that with its functional nature:
function fail (discard, execute) {execute()}
function pass (execute, discard) {execute()}
var truth_table = {
    'false' : fail,
    'true'  : pass
};
function _IF (expr) {
    return truth_table[!!expr];
}
//usage:
_IF(3==2)(
    function(){alert('this is true')},
    //else
    function(){alert('this is false')}
);
Not all languages can do this sort of thing. But languages I like tend to be able to.
The idea of polymorphism is to call an object without first verifying the class of that object.
That doesn't mean the if statement should not be used at all; it means you should avoid writing
if (object.isArray()) {
    // Code to execute when the object is an array.
} else if (object.isString()) {
    // Code to execute if the object is a string.
}
It depends on the language.
Statically typed languages should be able to handle all of the type checking by sharing common interfaces and overloading functions/methods.
Dynamically typed languages might need to approach the problem differently since type is not checked when a message is passed, only when an object is being accessed (more or less). Using common interfaces is still good practice and can eliminate many of the type checking if statements.
While some constructs are usually a sign of code smell, I am hesitant to eliminate any approach to a problem a priori. There may be times when type checking via if is the expedient solution.
Note: Others have suggested using switch instead, but that is just a clever way of writing more legible if statements.
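As a rough illustration of the overloading point above (the names here are invented): the compiler resolves which function to call at compile time, so no runtime type-checking if is needed.

#include <iostream>
#include <string>

void describe(int x) { std::cout << "int: " << x << "\n"; }
void describe(const std::string& s) { std::cout << "string: " << s << "\n"; }

int main() {
    describe(42);                      // resolved to the int overload at compile time
    describe(std::string("hello"));    // resolved to the string overload
}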
Well, if you're writing in Perl, it's easy!
Instead of
if (x) {
    # ...
}
you can use
unless (!x) {
    # ...
}
;-)
In answer to the question, and as suggested by the last respondent, you need some if statements to detect state in a factory. At that point you then instantiate a set of collaborating classes that solve the state specific problem. Of course, other conditionals would be required as needed, but they would be minimized.
What would be removed of course would be the endless procedural state checking rife in so much service based code.
Interesting that Smalltalk is mentioned, as that's the language I used before being dragged across into Java. I don't get home as early as I used to.
I thought about adding my two cents: you can optimize away ifs in many languages where the second part of a boolean expression is not evaluated when it won't affect the result.
With the and operator, if the first operand evaluates to false, then there is no need to evaluate the second one. With the or operator, it's the opposite - there's no need to evaluate the second operand if the first one is true. Some languages always behave like that, others offer an alternative syntax.
Here's an if - elseif - else code made in JavaScript by only using operators and anonymous functions.
document.getElementById("myinput").addEventListener("change", function(e) {
    (e.target.value == 1 && !function() {
        alert('if 1');
    }()) || (e.target.value == 2 && !function() {
        alert('else if 2');
    }()) || (e.target.value == 3 && !function() {
        alert('else if 3');
    }()) || (function() {
        alert('else');
    }());
});
<input type="text" id="myinput" />
This makes me want to try defining an esoteric language where blocks implicitly behave like self-executing anonymous functions and return true, so that you would write it like this:
(condition && {
    action
}) || (condition && {
    action
}) || {
    action
}

const vs enum in D

Check out this quote from here, towards the bottom of the page. (I believe the quoted comment about consts apply to invariants as well)
Enumerations differ from consts in that they do not consume any space
in the final outputted object/library/executable, whereas consts do.
So apparently value1 will bloat the executable, while value2 is treated as a literal and doesn't appear in the object file.
const int value1 = 0xBAD;
enum int value2 = 42;
Back in C++ I always assumed this was for legacy reasons, and old compilers that couldn't optimize away constants. But if this is still true in D, there must be a deeper reason behind this. Anyone know why?
Just like in C++, an enum in D seems to be a "conserved integer literal" (edit: amazing, D2 even supports floats and strings). Its enumerators have no location. They are just immaterial as values without identity.
Placing enum there is new in D2. It does not define a new runtime variable, and the result is not an lvalue (so you also cannot take its address). An
enum int a = 10; // new in D2
Is like
enum : int { a = 10 }
if I can trust my poor D knowledge. So, a here is not an lvalue (it has no location and you can't take its address). A const, however, has an address. If you have a global (not sure whether that's the right D terminology) const variable, the compiler usually can't optimize it away, because it doesn't know which modules can access that variable or could take its address. So it has to allocate storage for it.
I think if you have a local const, the compiler can still optimize it away just as in C++, because the compiler knows by looking at its scope whether or not anyone is interested in its address or whether everyone just takes its value.
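For comparison, a rough C++ analogue of the distinction being discussed (this is C++, not D):

const int value1 = 0xBAD;   // an lvalue: if its address is ever taken, the
                            // compiler has to emit storage for it
enum { value2 = 42 };       // a pure manifest constant: no address, no storage,
                            // always folded into the expressions that use it

int main() {
    const int* p = &value1;        // fine: value1 has an address
    // const int* q = &value2;     // error: value2 is not an lvalue
    return value2 + *p;
}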
Your actual question (why enum/const is the same in D as in C++) seems to be unanswered. Sadly, there exists no good reason for this choice whatsoever. I believe that this was just an unintentional side effect in C++ that became a de facto pattern. In D the same pattern was needed, and Walter Bright decided that it should be done as in C++ so that those coming from there would recognize what to do... In fact, before this rather (IMHO) silly decision, the keyword manifest was used instead of enum for this use case.
I think a good compiler/linker should still remove the constant. It's just that with the enum, it's actually guaranteed in the spec. The difference is primarily a matter of semantics. (Also keep in mind that 2.0 isn't complete yet)
The real purpose of enum being expanded syntactically to support single manifest constants, from what I understand, is that Don Clugston, a D template guru, was doing some crazy stuff with templates. He kept running into long build times, ridiculous compiler memory usage, etc., because the compiler kept creating internal data structures for const variables. One key thing about const/immutable variables compared to enums is that const/immutable variables are lvalues and can have their address taken. This means there is some extra overhead for the compiler. This usually doesn't matter, but when you're executing really complicated compile-time metaprograms, even if const variables are optimized away, it is still significant overhead at compile time.
It sounds like the enum value will be used "inline" in expressions, whereas the const will actually take storage and any expression referencing it will load the value from that storage.
This sounds similar to the difference between const and readonly in C#. The former is a compile-time constant and the latter is a run-time constant. This definitely affected versioning of assemblies (since assemblies referencing a const would receive a copy of its value at compile time and would not see a change to the value if the referenced assembly was rebuilt with a different value).

Does static typing mean that you have to cast a variable if you want to change its type?

Are there any other ways of changing a variable's type in a statically typed language like Java and C++, except 'casting'?
I'm trying to figure out what the main difference is in practical terms between dynamic and static typing and keep finding very academic definitions. I'm wondering what it means in terms of what my code looks like.
Make sure you don't get static vs. dynamic typing confused with strong vs. weak typing.
Static typing: Each variable, method parameter, return type etc. has a type known at compile time, either declared or inferred.
Dynamic typing: types are ignored/don't exist at compile time
Strong typing: each object at runtime has a specific type, and you can only perform those operations on it that are defined for that type.
Weak typing: runtime objects either don't have an explicit type, or the system attempts to automatically convert types wherever necessary.
These two distinctions can be combined freely:
Java is statically and strongly typed
C is statically and weakly typed (pointer arithmetics!)
Ruby is dynamically and strongly typed
JavaScript is dynamically and weakly typed
Generally, static typing means that a lot of errors are caught by the compiler which would be runtime errors in a dynamically typed language; but it also means that you spend a lot of time worrying about types, in many cases unnecessarily (see interfaces vs. duck typing).
Strong typing means that any conversion between types must be explicit, either through a cast or through the use of conversion methods (e.g. parsing a string into an integer). This means more typing work, but has the advantage of keeping you in control of things, whereas weak typing often results in confusion when the system does some obscure implicit conversion that leaves you with a completely wrong variable value that causes havoc ten method calls down the line.
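For example, a small illustrative C++ snippet of the explicit-conversion style that strong typing pushes you towards:

#include <iostream>
#include <string>

int main() {
    std::string input = "42";
    int n = std::stoi(input);               // explicit conversion: string to int
    double d = static_cast<double>(n) / 5;  // explicit cast avoids integer division
    std::cout << n << " " << d << "\n";
    // int bad = input;                     // rejected: no implicit string-to-int conversion
}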
In C++/Java you can't change the type of a variable.
Static typing: A variable has one type assigned at compile type and that does not change.
Dynamic typing: A variable's type can change at runtime, e.g. in JavaScript:
js> x="5" <-- String
5
js> x=x*5 <-- Int
25
The main difference is that in dynamically typed languages you don't know until you go to use a method at runtime whether that method exists. In statically typed languages the check is made at compile time and the compilation fails if the method doesn't exist.
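A tiny illustrative example (invented types) of that compile-time check:

struct Dog {
    void bark() {}
};

int main() {
    Dog d;
    d.bark();      // the compiler verifies at compile time that bark() exists
    // d.meow();   // would fail to compile: 'meow' is not a member of Dog
}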
I'm wondering what it means in terms of what my code looks like.
The type system does not necessarily have any impact on what code looks like, e.g. languages with static typing, type inference and implicit conversion (like Scala for instance) look a lot like dynamically typed languages. See also: What To Know Before Debating Type Systems.
You don't need explicit casting. In many cases implicit casting works.
For example:
int i = 42;
float f = i; // f ~= 42.0
int b = f;   // b == 42
class Base {
public:
    virtual ~Base() {}   // Base must be polymorphic for dynamic_cast to work
};
class Subclass : public Base {
};
Subclass *subclass = new Subclass();
Base *base = subclass; // Legal
Subclass *s = dynamic_cast<Subclass *>(base); // == subclass. Performs type checking. If base isn't a Subclass, NULL is returned instead. (This is type-safe explicit casting.)
You cannot, however, change the type of a variable. You can use unions in C++, though, to achieve some sort of dynamic typing.
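A rough modern sketch of that union idea using std::variant (C++17); the details are illustrative only:

#include <iostream>
#include <string>
#include <variant>

int main() {
    // One variable that can hold an int, a double, or a string,
    // but the set of possible types is still fixed at compile time.
    std::variant<int, double, std::string> v = 9;
    v = 9.5;
    v = std::string("b");

    // Access requires checking or visiting the currently held alternative.
    std::visit([](const auto& value) { std::cout << value << "\n"; }, v);
}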
Let's look at Java for the statically typed language and JavaScript for the dynamic one. In Java, for objects, the variable is a reference to an object. The object has a runtime type and the reference has a type. The type of the reference must be the type of the runtime object or one of its ancestors. This is how polymorphism works. You have to cast to go down the hierarchy to a more specific reference type, but not up. The compiler ensures that these conditions are met. In a language like JavaScript, your variable is just that, a variable. You can have it point to whatever object you want, and you don't know its type until you check.
For conversions, though, there are lots of methods like toInteger and toFloat in Java to do a conversion and generate an object of a new type with the same relative value. In JavaScript there are also conversion methods, but they generate new objects too.
Your code should actually not look very much different, regardless of whether you are using a statically typed language or not. Just because you can change the data type of a variable in a dynamically typed language doesn't mean that it is a good idea to do so.
In VBScript, for example, Hungarian notation is often used to indicate the preferred data type of a variable. That way you can easily spot when the code is mixing types. (This was not the original use of Hungarian notation, but it's pretty useful.)
By keeping to the same data type, you avoid situations where it's hard to tell what the code actually does, and situations where the code simply doesn't work properly. For example:
Dim id
id = Request.QueryString("id") ' this variable is now a string
If id = "42" Then
    id = 142 ' sometimes turned into a number
End If
If id > 100 Then ' will not work properly for strings
Using Hungarian notation you can spot code that is mixing types, like:
lngId = Request.QueryString("id") ' putting a string in a numeric variable
strId = 42 ' putting a number in a string variable