Is static typing a subset of dynamic typing?

I was going to add this as a comment to my previous question about type theory, but I felt it probably deserved its own exposition:
If you have a dynamic typing system and you add a "type" member to each object and verify that this "type" is a specific value before executing a function on the object, how is this different than static typing? (Other than the fact that it is run-time instead of compile-time).

Technically, it is actually the other way round: a "dynamically typed" language is a special case of a statically typed language, namely one with only a single type (in the mathematical sense). That at least is the viewpoint of many in the type systems community.
Edit regarding static vs dynamic checking: only local properties can be checked dynamically, whereas properties that require some kind of global knowledge cannot. Think of properties such as something being unique, something not being aliased, a computation being free of race conditions. A suitable static type system can verify such properties, because it has the ability to establish certain invariants on the context of the expression that is being checked.

Static typing happens at compile-time, not at run-time! And that difference is essential!!
See B. Pierce's book Types and Programming Languages for more.


why are languages generally either statically typed or dynamically typed (not both)?

I don't understand this. I understand the pros and cons of each, but why don't languages like Python let you specify a variable's type at initialization, and function argument and return types, when you wish, so the interpreter won't waste time checking them at runtime in programs, or just the parts of your code, where speed is important, and leave them unspecified when it isn't?
It just seems like a waste of time for users to switch between languages somewhat needlessly in these situations, and for the developers of a language to lose some users, or not have them use the language for all of their projects, because of this.
Initializing a variable with a specific type in a dynamically typed language would be pointless, because the variable could be reassigned a value of a different type later on. And the type of a variable is determined by the value assigned to it anyway. So making static type declarations optional wouldn't actually provide any extra functionality.
Second, compile-time checking of function arguments wouldn't work either, because the types of the values passed in can't be determined until runtime. And functions can be coded to check the types of their own arguments in a dynamically typed language, so there's no need to implement another system for this.
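To illustrate that last point, here is a minimal Python sketch (the function name and error message are made up for this example) of a function checking its own argument types at runtime, which is all a dynamically typed language needs:
def repeat_string(text, times):
    # The function itself verifies its argument types when it is called.
    if not isinstance(text, str) or not isinstance(times, int):
        raise TypeError("repeat_string expects (str, int)")
    return text * times

print(repeat_string("ab", 3))    # ababab
# repeat_string("ab", "3")       # would raise TypeError at the call, not at "compile time"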

What is open recursion?

What is open recursion? Is it specific to OOP?
(I came across this term in this tweet by Daniel Spiewak.)
just copying http://www.comlab.ox.ac.uk/people/ralf.hinze/talks/Open.pdf:
"Open recursion Another handy feature offered by most languages with objects and classes is the ability for one method body to invoke another method of the same object via a special variable called self or, in some langauges, this. The special behavior of self is that it is late-bound, allowing a method defined in one class to invoke another method that is defined later, in some subclass of the first. "
This paper analyzes the possibility of adding OO to ML, with regards to expressivity and complexity. It has the following excerpt on objects, which seems to make this term relatively clear –
3.3. Objects
The simplest form of object is just a record of functions that share a common closure environment that carries the object state (we can call these simple objects). The function members of the record may or may not be defined as mutually recursive. However, if one wants to support inheritance with overriding, the structure of objects becomes more complicated. To enable open recursion, the call-graph of the method functions cannot be hard-wired, but needs to be implemented indirectly, via object self-reference. Object self-reference can be achieved either by construction, making each object a recursive, self-referential value (the fixed-point model), or dynamically, by passing the object as an extra argument on each method call (the self-application or self-passing model). In either case, we will call these self-referential objects.
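The "self-application" model described in the excerpt can be sketched in a few lines of Python without using classes at all (the helper names make_base and make_derived are invented for this illustration): an object is just a record (here a dict) of functions that each take the object itself as an explicit extra argument, so the call graph is left open and is resolved through self-reference.
def make_base():
    return {
        "method1": lambda self: self["method2"](self),   # open: goes through self
        "method2": lambda self: print("base method2"),
    }

def make_derived():
    obj = make_base()
    obj["method2"] = lambda self: print("derived method2")   # override
    return obj

b = make_base()
b["method1"](b)      # prints "base method2"

d = make_derived()
d["method1"](d)      # prints "derived method2" -- method1 picks up the override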
The name "open recursion" is a bit misleading at first, because it has nothing to do with the recursion that normally is used (a function calling itself); and to that extent, there is no closed recursion.
It basically means, that a thing is referring to itself. I can only guess, but I do think that the term "open" comes from open as in "open for extension".
In that sense an object is open to extension, but still referring to itself.
Perhaps a small example can shed some light on the concept.
Imagine you write a Python class like this one:
class SuperClass:
    def method1(self):
        self.method2()

    def method2(self):
        print(self.__class__.__name__)
If you run this with
s = SuperClass()
s.method1()
It will print "SuperClass".
Now we create a subclass from SuperClass and override method2:
class SubClass(SuperClass):
    def method2(self):
        print(self.__class__.__name__)
and run it:
sub = SubClass()
sub.method1()
Now "SubClass" will be printed.
Still, we only call method1() as before. Inside method1(), method2() is called, but both are bound to the same reference (self in Python, this in Java). When SuperClass is subclassed, method2() is overridden, which means that an object of SubClass refers to a different version of this method.
That is open recursion.
In most cases, you override methods and call the overridden methods directly.
This scheme here is using an indirection over self-reference.
P.S.: I don't think this has been invented but discovered and then explained.
Open recursion allows one method of an object to invoke other methods of the same object from within, through a special variable like this or self.
In short, open recursion is about something not actually specific to OOP, but more general.
The relation with OOP comes from the fact that many typical "OOP" languages have this property, but it is essentially not tied to any distinguishing feature of OOP.
So there are different meanings, even within the same "OOP" language. I will illustrate this later.
Etymology
As mentioned here, the term was likely coined in the famous TAPL by BCP, which illustrates the meaning using concrete OOP languages.
TAPL does not define "open recursion" formally. Instead, it points out that the "special behavior of self (or this) is that it is late-bound, allowing a method defined in one class to invoke another method that is defined later, in some subclass of the first".
Nevertheless, neither "open" nor "recursion" depends on a language being object-oriented. (Actually, it also has nothing to do with static types.) So the interpretation (or the informal definition, if any) in that source is overspecified in nature.
Ambiguity
The mention in TAPL clearly shows that the "recursion" is about "method invocation". However, it is not that simple in real languages, which usually do not have primitive semantic rules for the recursive invocation itself. Real languages (including the ones considered OOP languages) usually specify the semantics of such invocation through the notation of method calls. As syntactic devices, such calls are subject to the evaluation of some kind of expression that relies on the evaluation of its subexpressions. These evaluations imply the resolution of the method name, under some independent rules. Specifically, such rules are about name resolution, i.e. determining the denotation of a name (typically a symbol, an identifier, or some "qualified" name expression) in the subexpression. Name resolution usually respects scoping rules.
OTOH, the "late-bound" property emphasizes how to find the target implementation of the named method. This is a shortcut in the evaluation of specific call expressions, but it is not general enough, because entities other than methods can also have such "special" behavior, or even make such behavior not special at all.
A notable ambiguity comes from this insufficient treatment: what does a "binding" mean? Traditionally, a binding can be modeled as a pair of a (scoped) name and its bound value, i.e. a variable binding. In the special treatment of "late-bound" entities, the set of allowed entities is smaller: methods instead of all named entities. Besides considerably undermining the abstraction power of the language rules at the meta level (in the language specification), it does not remove the need for the traditional meaning of a binding (because there are other, non-method entities), and is hence confusing. "Late-bound" is at least an instance of bad naming; instead of "binding", a more proper word would be "dispatching".
Worse, the usage in TAPL directly mixes the two meanings when dealing with "recursion". The "recursion" behavior is all about finding the entity denoted by some name, not something specific to method invocation (even in those OOP languages).
The title of the chapter (Case Study: Imperative Objects) also suggests some inconsistency. Obviously, the so-called late binding of method invocation has nothing to do with imperative state, because resolving the dispatch does not require mutable invocation metadata. (In the popular implementation technique, the virtual method table need not be modifiable.)
Openness
The use of "open" here looks like mimic to open (lambda) terms. An open term has some names not bound yet, so the reduction of such a term must do some name resolution (to compute the value of the expression), or the term is not normalized (never terminate in evaluation). There is no difference between "late" or "early" for the original calculi because they are pure, and they have the Church-Rosser property, so whether "late" or not does not alter the result (if it is normalized).
This is not the same in the language with potentially different paths of dispatching. Even that the implicit evaluation implied by the dispatching itself is pure, it is sensitive to the order among other evaluations with side effects which may have dependency on the concrete invocation target (for example, one overrider may mutate some global state while another can not). Of course in a strictly pure language there can be no observable differences even for any radically different invocation targets, a language rules all of them out is just useless.
Then there is another problem: why it is OOP-specific (as in TAPL)? Given that the openness is qualifying "binding" instead of "dispatching of method invocation", there are certainly other means to get the openness.
One notable instance is the evaluation of a procedure body in traditional Lisp dialects. There can be unbound symbols in the body and they are only resolved when the procedure being called (rather than being defined). Since Lisps are significant in PL history and the are close to lambda calculi, attributing "open" specifically to OOP languages (instead of Lisps) is more strange from the PL tradition. (This is also a case of "making them not special at all" mentioned above: every names in function bodies are just "open" by default.)
It is also arguable that the OOP style of self/this parameter is equivalent to the result of some closure conversion from the (implicit) environment in the procedure. It is questionable to treat such features primitive in the language semantics.
(It may be also worth noting, the special treatment of function calls from symbol resolution in other expressions is pioneered by Lisp-2 dialects, not any of typical OOP languages.)
More cases
As mentioned above, different meanings of "open recursion" may coexist in the same "OOP" language.
C++ is the first instance here, because there are sufficient reasons for them to coexist.
In C++, name resolution is entirely static, normatively called name lookup. The rules of name lookup vary between different scopes. Most of them are consistent with the identifier lookup rules in C (except for the allowance of implicit declarations in C but not in C++): you must first declare a name, and then the name can be looked up later (lexically) in the source code; otherwise the program is ill-formed (and the implementation of the language is required to issue an error). This strict dependency between names is considerably "closed", because there is no later chance to recover from the error, so you cannot directly have names referring to each other mutually across different declarations.
To work around the limitation, there can be additional declarations whose sole duty is to break the cyclic dependency. Such declarations are called "forward" declarations. Using forward declarations still does not require "open" recursion, because every well-formed use must statically see a previous declaration of the name, so no name lookup requires additional "late" binding.
However, C++ classes have special name lookup rules: some entities in the class scope can be referred to before their declaration. This makes mutually recursive use of names across different declarations possible without any additional "forward" declarations to break the cycle. This is exactly "open recursion" in the TAPL sense, except that it is not about method invocation.
Moreover, C++ does have "open recursion" as described in TAPL: the this pointer and virtual functions. The rules for determining the target (the final overrider) of a virtual function are independent of the name lookup rules. A non-static member defined in a derived class generally just hides the entities with the same name in the base classes. The dispatching rules kick in only on virtual function calls, after name lookup (the order is guaranteed, since evaluation of C++ function calls is strict, i.e. applicative). It is also easy to bring a base-class name into scope with a using-declaration, without worrying about the kind of the entity.
Such a design can be seen as an instance of separation of concerns. The name lookup rules allow some generic static analysis in the language implementation without special treatment of function calls.
OTOH, Java has more complex rules that mix name lookup with other rules, including the rules for identifying overriders. Name shadowing in Java subclasses is specific to the kind of entity, and it is more complicated to distinguish overriding from overloading/shadowing/hiding/obscuring for the different kinds. There is also no counterpart to C++'s using-declarations in the definition of subclasses. Such complexity does not make Java any more or less "OOP" than C++, anyway.
Other consequences
Collapsing the binding involved in name resolution and the dispatching of method invocation leads not only to ambiguity, complexity and confusion, but also to more difficulties at the meta level. Here "meta" means that name binding can expose properties not only within the source language semantics, but also in the meta languages: either the formal semantics of the language or its implementation (say, the code implementing an interpreter or a compiler).
For example, as in traditional Lisps, binding-time can be distinguished from evaluation-time, because program properties revealed at binding-time (value bindings in the immediate contexts) are closer to meta properties than evaluation-time properties (like the concrete values of arbitrary objects). An optimizing compiler can schedule code generation based on binding-time analysis, either statically at compile-time (when the body is to be evaluated more than once) or deferred to runtime (when the compilation would be too expensive). There is no such option for a language that blindly assumes all resolution in closed recursion to be faster than open recursion (and even makes them syntactically different in the first place). In this sense, OOP-specific open recursion is not only not as handy as advertised in TAPL, but a premature optimization: it gives up metacompilation too early, not in the language implementation, but in the language design.

In what cases should public fields be used instead of properties? [duplicate]

Possible Duplicate:
Public Data members vs Getters, Setters
In what cases should public fields be used, instead of properties or getter and setter methods (where there is no support for properties)? Where exactly is their use recommended, and why, or, if it is not, why are they still allowed as a language feature? After all, they break the Object-Oriented principle of encapsulation where getters and setters are allowed and encouraged.
If you have a constant that needs to be public, you might as well make it a public field instead of creating a getter property for it.
Apart from that, I don't see a need, as far as good OOP principles are concerned.
They are there and allowed because sometimes you need the flexibility.
That's hard to tell, but in my opinion public fields are only valid when using structs.
struct Simple
{
    public int Position;
    public bool Exists;
    public double LastValue;
};
But different people have different thoughts about it:
http://kristofverbiest.blogspot.com/2007/02/public-fields-and-properties-are-not.html
http://blogs.msdn.com/b/ericgu/archive/2007/02/01/properties-vs-public-fields-redux.aspx
http://www.markhneedham.com/blog/2009/02/04/c-public-fields-vs-automatic-properties/
If your compiler does not optimize getter and setter invocations, the access to your properties might be more expensive than reading and writing fields (call stack). That might be relevant if you perform many, many invocations.
But, to be honest, I know no language where this is true. At least in both .NET and Java this is optimized well.
From a design point of view I know no case where using fields is recommended...
Let's first look at the question of why we need accessors (getters/setters) at all. You need them to be able to override the behaviour when assigning or reading a value. You might want to add caching, or return a calculated value instead of a stored field.
Your question can now be rephrased as: do I always want this behaviour? I can think of cases where it is not useful at all: structures (what structs were in C). Passing a parameter object, or a class wrapping multiple values to be inserted into a Collection, are cases where one actually does not need accessors: the object is merely a container for variables.
There is one single reason(*) why to use get instead of a public field: lazy evaluation. I.e. the value you want may be stored in a database, or may be expensive to compute, and you don't want your program to initialize it at startup, but only when needed.
There is one single reason(*) why to use set instead of a public field: other-field modifications. I.e. you change the value of other fields when the value of the target field changes.
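As a language-neutral illustration of those two reasons (sketched here in Python; the Report class, the db object and its load_summary method are invented for this example):
class Report:
    def __init__(self, db):
        self._db = db
        self._summary = None        # not loaded yet
        self._total = 0.0
        self._total_with_tax = 0.0

    @property
    def summary(self):
        # Lazy evaluation: fetch from the database only on first access.
        if self._summary is None:
            self._summary = self._db.load_summary()
        return self._summary

    @property
    def total(self):
        return self._total

    @total.setter
    def total(self, value):
        # Other-field modification: keep a dependent field in sync.
        self._total = value
        self._total_with_tax = value * 1.2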
Forcing the use of get and set on every field contradicts the YAGNI principle.
If you want to expose the value of a field from an object, then expose it! It is completely pointless to create an object with four independent fields and mandate that all of them use get/set or property access.
*: Other reasons, such as a possible data type change, are pointless. In fact, wherever you use a = o.get_value() instead of a = o.value, if you change the type returned by get_value() you have to change the code at every use, just as if you had changed the type of value.
The main reason has nothing to do with OOP encapsulation (though people often say it does), and everything to do with versioning.
Indeed, from the OOP position one could argue that fields are better than "blind" properties, as a lack of encapsulation is clearer than something that pretends to encapsulate and then blows it away. If encapsulation is important, then it should be easy to see when it isn't there.
A property called Foo will not be treated the same from the outside as a public field called Foo. In some languages this is explicit (the language doesn't directly support properties, so you've got a getFoo and a setFoo) and in some it is implicit (C# and VB.NET directly support properties, but they are not binary-compatible with fields and code compiled to use a field will break if it's changed to a property, and vice-versa).
If your Foo just does a "blind" get and set of an underlying field, then there is currently no encapsulation advantage over exposing the field.
However, if there is a later requirement to take advantage of encapsulation to prevent invalid values (you should always prevent invalid values, but maybe you didn't realise some were invalid when you first wrote the class, or maybe "valid" has changed with a scope change), to wrap memoised evaluation, to trigger other changes in the object, to trigger an on-change event, to prevent expensive needless equivalent sets, and so on, then you can't make that change without breaking running code.
If the class is internal to the component in question, this isn't a concern, and I'd say use fields if fields read sensibly, under the general YAGNI principle. However, YAGNI doesn't play quite so well across component boundaries (if I need my component to work today, I am probably also going to need it to work tomorrow, after you've changed the component mine depends on), so it can make sense to pre-emptively use properties.

Duck typing, must it be dynamic?

Wikipedia used to say* about duck-typing:
In computer programming with object-oriented programming languages, duck typing is a style of dynamic typing in which an object's current set of methods and properties determines the valid semantics, rather than its inheritance from a particular class or implementation of a specific interface.
(* Ed. note: Since this question was posted, the Wikipedia article has been edited to remove the word "dynamic".)
It says about structural typing:
A structural type system (or property-based type system) is a major class of type system, in which type compatibility and equivalence are determined by the type's structure, and not through explicit declarations.
It contrasts structural subtyping with duck-typing as so:
[Structural systems] contrasts with ... duck typing, in which only the part of the structure accessed at runtime is checked for compatibility.
However, the term duck-typing seems to me at least to intuitively subsume structural sub-typing systems. In fact Wikipedia says:
The name of the concept [duck-typing] refers to the duck test, attributed to James Whitcomb Riley, which may be phrased as follows: "when I see a bird that walks like a duck and swims like a duck and quacks like a duck, I call that bird a duck."
So my question is: why can't I call structural subtyping duck-typing? Do there even exist dynamically typed languages which can't also be classified as being duck-typed?
Postscript:
As someone named daydreamdrunk on reddit.com so eloquently put it: "If it compiles like a duck and links like a duck ..."
Post-postscript
Many answers seem to be basically just rehashing what I already quoted here, without addressing the deeper question: why not use the term duck-typing to cover both dynamic typing and structural sub-typing? If you only want to talk about duck-typing and not structural sub-typing, then just call it what it is: dynamic member lookup. My problem is that nothing about the term duck-typing says to me that it only applies to dynamic languages.
C++ and D templates are a perfect example of duck typing that is not dynamic. It is definitely:
typing in which an object's current set of methods and properties determines the valid semantics, rather than its inheritance from a particular class or implementation of a specific interface.
You don't explicitly specify an interface that your type must inherit from to instantiate the template. It just needs to have all the features that are used inside the template definition. However, everything gets resolved at compile time, and compiled down to raw, inscrutable hexadecimal numbers. I call this "compile time duck typing". I've written entire libraries from this mindset that implicit template instantiation is compile time duck typing and think it's one of the most under-appreciated features out there.
Structural Type System
A structural type system compares one entire type to another entire type to determine whether they are compatible. For two types A and B to be compatible, A and B must have the same structure – that is, every method on A and on B must have the same signature.
Duck Typing
Duck typing considers two types to be equivalent for the task at hand if they can both handle that task. For two types A and B to be equivalent to a piece of code that wants to write to a file, A and B both must implement a write method.
Summary
Structural type systems compare every method signature (entire structure). Duck typing compares the methods that are relevant to a specific task (structure relevant to a task).
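As a rough illustration (a Python sketch; the Writable protocol and the save_structural and save_duck names are invented for this example), Python's typing.Protocol gives a structural type that a static checker such as mypy verifies against the declared shape, while the duck-typed version declares nothing and is only exercised at runtime:
from typing import Protocol

class Writable(Protocol):                 # structural type: only the declared shape matters
    def write(self, data: str) -> int: ...

def save_structural(out: Writable, data: str) -> None:
    # A static checker accepts any type whose write() matches the declared
    # signature; no inheritance from Writable is required.
    out.write(data)

def save_duck(out, data):
    # Duck typing: no declared constraint at all. Anything with a usable
    # write() works, and the check happens only when this line runs.
    out.write(data)
With a static checker, save_structural rejects an argument whose write() has the wrong signature before the program runs; save_duck only fails (or succeeds) at the moment the call is executed.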
Duck typing means: if it just fits, it's OK.
This applies to both dynamically typed
def foo obj
  obj.quak()
end
or statically typed, compiled languages
template <typename T>
void foo(T& obj) {
    obj.quak();
}
The point is that in both examples, no information about the type has been given. Only when the code is used (either at runtime or at compile-time!) are the types checked, and if all requirements are fulfilled, the code works. Values don't have an explicit type at their point of declaration.
Structural typing relies on explicitly typing your values, just as usual; the difference is just that the concrete type is identified not by inheritance but by its structure.
A structurally typed code (Scala-style) for the above example would be
def foo(obj : { def quak() : Unit }) {
    obj.quak()
}
Don't confuse this with the fact that some structurally typed languages like OCaml combine this with type inference in order to prevent us from defining the types explicitly.
I'm not sure if it really answers your question, but...
Templated C++ code looks very much like duck-typing, yet is static, compile-time, structural.
template<typename T>
struct Test
{
    void op(T& t)
    {
        t.set(t.get() + t.alpha() - t.omega(t, t.inverse()));
    }
};
It's my understanding that structural typing is used by type inferencers and the like to determine type information (think Haskell or OCaml), while duck typing doesn't care about "types" per se, just that the thing can handle a specific method invocation/property access, etc. (think respond_to? in Ruby or capability checking in Javascript).
There are always going to be examples from some programming languages that violate some definitions of various terms. For example, ActionScript supports doing duck-typing style programming on instances that are not technically dynamic.
var x:Object = new SomeClass();
if ("begin" in x) {
x.begin();
}
In this case we tested if the object instance in "x" has a method "begin" before calling it instead of using an interface. This works in ActionScript and is pretty much duck-typing, even though the class SomeClass() may not itself be dynamic.
There are situations in which dynamic duck typing and the similar static-typed code (in i.e. C++) behave differently:
template <typename T>
void foo(T& obj) {
    if(obj.isAlive()) {
        obj.quak();
    }
}
In C++, the object must have both the isAlive and quak methods for the code to compile; for the equivalent code in dynamically typed languages, the object only needs to have the quak method if isAlive() returns true. I interpret this as a difference between structure (structural typing) and behavior (duck typing).
(However, I reached this interpretation by taking Wikipedia's "duck-typing must be dynamic" at face value and trying to make it make sense. The alternate interpretation that implicit structural typing is duck typing is also coherent.)
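To make the contrast concrete, here is a hedged Python counterpart (the DeadDuck class is invented for the example): the dynamically typed version only needs quak() to exist on objects for which isAlive() actually returns true.
def foo(obj):
    if obj.isAlive():
        obj.quak()

class DeadDuck:
    def isAlive(self):
        return False
    # no quak() defined at all

foo(DeadDuck())   # works: quak is never looked up, so its absence is never noticed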
I see "duck typing" more as a programming style, whereas "structural typing" is a type system feature.
Structural typing refers to the ability of the type system to express types that include all values that have certain structural properties.
Duck typing refers to writing code that just uses the features of values that it is passed that are actually needed for the job at hand, without imposing any other constraints.
So I could use structural types to code in a duck typing style, by formally declaring my "duck types" as structural types. But I could also use structural types without "doing duck typing". For example, if I write interfaces to a bunch of related functions/methods/procedures/predicates/classes/whatever by declaring and naming a common structural type and then using that everywhere, it's very likely that some of the code units don't need all of the features of the structural type, and so I have unnecessarily constrained some of them to reject values on which they could theoretically work correctly.
So while I can see how there is common ground, I don't think duck typing subsumes structural typing. The way I think about them, duck typing isn't even a thing that might have been able to subsume structural typing, because they're not the same kind of thing. Thinking of duck typing in dynamic languages as just "implicit, unchecked structural types" is missing something, IMHO. Duck typing is a coding style you choose to use or not, not just a technical feature of a programming language.
For example, it's possible to use isinstance checks in Python to fake OO-style "class-or-subclass" type constraints. It's also possible to check for particular attributes and methods, to fake structural type constraints (you could even put the checks in an external function, thus effectively getting a named structural type!). I would claim that neither of these options is exemplifying duck typing (unless the structural types are quite fine grained and kept in close sync with the code checking for them).
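A small sketch of those two styles of explicit checking (class and function names made up for the example); in the sense above, neither of them is really duck typing:
class Duck:
    def quack(self):
        return "quack"
    def walk(self):
        return "waddle"

def describe_nominal(animal):
    # Faking a class-or-subclass constraint: nominal, not duck typing.
    if not isinstance(animal, Duck):
        raise TypeError("expected a Duck (or subclass)")
    return animal.quack()

def describe_structural(animal):
    # Faking a structural constraint: check for the required attributes explicitly.
    if not (hasattr(animal, "quack") and hasattr(animal, "walk")):
        raise TypeError("expected something duck-shaped")
    return animal.quack()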

When should weak types be discouraged?

When should weak types be discouraged? Are weak types discouraged in big projects? If the left side is strongly typed, like the following, would that be an exception to the rule?
int i = 5
string sz = i
sz = sz + "1"
i = sz
Do any languages support syntax similar to the above? Tell me more about the pros and cons of weak types and the situations where they matter.
I think you are confusing "weak typing" with "dynamic typing".
The term "weak typing" means "not strongly typed", which means that the value of a memory location is allowed to vary from what it's type indicates it should be.
C is an example of a weakly typed language. It allows code like this to be written:
typedef struct
{
    int x;
    int y;
} FooBar;

FooBar foo;
char *pStr = (char *) &foo;   /* reinterpret the struct's memory as characters */
pStr[0] = 'H';
pStr[1] = 'i';
pStr[2] = '\0';
That is, it allows a FooBar instance to be treated as if it was an array of characters.
In a strongly typed language, that would not be allowed. Either a compiler error would be generated, or a run time exception would be thrown, but never, at any time, would a FooBar memory address contain data that was not a valid FooBar.
C#, Java, Lisp, JavaScript, and Ruby are examples of languages where this type of thing would not be allowed. They are strongly typed.
Some of those languages are "statically typed", which means that variable types are assigned at compile time, and some are "dynamically typed", which means that variable types are not known until runtime. "Static vs Dynamic" and "Weak vs Strong" are orthogonal issues. For example, Lisp is a "strong dynamically typed" language, whereas "C" is a "weak statically typed language".
Also, as others have pointed out, there is a distinction between "inferred types" and types specified by the programmer. The "var" keyword in C# is an example of type inference. However, it's still a statically typed construct because the compiler infers the type of a variable at compile time, rather than at runtime.
So, what your question really is asking is:
What are the relative merits and drawbacks of static typing, dynamic typing, weak typing, strong typing, inferred static types, and user-specified static types?
I provide answers to all of these below:
Static typing
Static typing has 3 primary benefits:
Better tooling support
A reduced likelihood of certain types of bugs
Performance
The user experience and accuracy of things like IntelliSense and refactoring are improved greatly in a statically typed language because of the extra information that the static types provide. If you type "a." in a code editor and "a" has a static type, then the compiler knows everything that could legally come after the "." and can thus show you an accurate completion list. It's possible to support some scenarios in a dynamically typed language, but they are much more limited.
Also, in a program without compiler errors a refactoring tool can identify every place a particular method, variable, or type is used. It's not possible to do that in a dynamically typed language.
The second benefit is somewhat controversial. Proponents of statically typed languages like to make that claim. Opponents of statically typed languages, however, contend that the bugs they catch are trivial and that they would get caught by testing anyway. But you do get notification of things like misspelled variable or method names up front, which can be helpful.
Statically typed languages also enable better "data flow analysis", which when combined with things like Microsoft's SAL (or similar tools) can help find potential security problems.
Finally, with static typing, compilers can do a lot more optimization, and so can produce faster code.
Drawbacks:
The main drawback for static typing is that it restricts the things you can do. You can write programs in dynamically typed languages that you can't write in statically typed languages. Ruby on Rails is a good example of this.
Dynamic Typing
The big advantage of dynamic typing is that it's much more powerful than static typing. You can do a lot of really cool stuff with it.
Another one is that it requires less typing. You don't have to specify types all over the place.
Drawbacks:
Dynamic typing has two main drawbacks:
You don't get as much "hand holding" from the compiler or IDE
It's not suitable for critical performance scenarios. For example, no one writes OS Kernels in Ruby.
Strong typing:
The biggest benefit of strong typing is security. Enforcing strong typing usually requires some type of runtime support. If a program can prove type safety, then a lot of security issues, such as buffer overruns, just go away.
Weak typing:
The big drawback of strong typing, and the big benefit of weak typing, is performance.
When you can access memory any way you like, you can write faster code. For example, a database can swap objects out to disk just by writing out their raw bytes, without needing to resort to things like "ISerializable" interfaces. A video game can throw away all the data associated with one level just by running a single free on a large buffer, rather than running destructors for many small objects.
Being able to do those things requires weak typing.
Type inference
Type inference allows a lot of the benefits of static typing without requiring as much typing.
User specified types
Some people just don't like type inference because they like to be explicit. This is more of a style thing.
Weak typing is an attempt at language simplification. While this is a worthy goal, weak typing is a poor solution.
Weak typing such as is used in COM Variants was an early attempt to solve this problem, but it is fraught with peril and frankly causes more trouble than it's worth. Even Visual Basic programmers, who will put up with all sorts of rubbish, correctly pegged this as a bad idea and backronymed Microsoft's ETC (Extended Type Conversion) to Evil Type Cast.
Do not confuse inferred typing with weak typing. Inferred typing is strong typing inferred from context at compile time. A good example is the var keyword, used in C# to declare a variable suitable to receive the value of a LINQ expression.
By contrast, weak typing is inferred each and every time an expression is evaluated. This is illustrated in the question's sample code. Another example would be use of untyped pointers in C. Very handy yet begging for trouble.
Inferred typing addresses the same issue as weak typing, without introducing the problems associated with weak typing. It is therefore a preferred alternative whenever the host language makes it available.
They should almost always be discouraged. The only type of code that I can think of where it would be required is low-level code that requires some pointer voodoo.
And to answer your question, C supports code like that (except of course for not having a string type), and that sounds like something PHP or Perl would have (but I could be totally wrong on that).
"
When should weak types be discouraged? Are weak types discouraged in
big projects? If the left side is strongly typed like the following
would that be an exception to the rule?
int i = 5 string sz = i sz = sz + "1" i = sz
Does any languages support similar syntax to the above? Tell me more
about pros and cons to weak types and situations related.
"
Perhaps you could program your own library to do that.
In C++ you can use something called an "operator overload", which lets a variable of one type be initialized or assigned from a value of another type. That is what makes the statement
std::string str = "Hello World";
work, even though any text between quotes is interpreted as an array of chars (std::string defines a conversion from const char*). To get similar behaviour for your own types, you would define a function like this (where the variable's type is T and B is the type you want to assign from):
T& T::operator= ( const B& s );
Please note that this is a class member function.
Also note that you will probably want some way of reversing this manipulation if you want to use it liberally, for example a conversion operator declared in T:
operator B() const;
C++ is powerful enough to allow you to make an object generally weakly typed, but if you want to treat it purely weakly typed, you will want to make just a single variable type that can be used as any primitive, and use only functions that take a pointer to void.
Believe me, it is a lot easier to use strongly typed programming when it is available.
I personally prefer strong typing, because I don't need to worry about the errors that come up when I don't know what a variable is meant to do. For example, if I wanted to write a function to talk to a person, and that function used the person's height, weight, name, number of children, and so on, but you gave me a color, I would get an error because you can't meaningfully determine most of those things for a color with any simple algorithm.
As far as the pros of weak typing go, you might want to get used to loosely typed programming if you are writing something to be run within another program (e.g. a web browser or a UNIX shell). JavaScript and shell script are weakly typed.
I would suggest that assembly language is one of the only hardware-level weakly typed languages, but the flavor of assembly language I've seen attaches a type to each variable depending on the allocated size, i.e. word, dword, qword.
I hope I gave you a good explanation and did not put any words in your mouth.
Weak types are by their very nature less robust than strong types, because you don't tell the machine exactly what to do; instead, the machine has to figure out what you meant. This often works quite adequately, but in general it is not clear what the result should be. What is, for example, a string multiplied by a float?
Do any languages support similar syntax to the above?
Perl allows you to treat some numbers and strings interchangeably. For example, "5" + "1" will give you 6. The problem with this sort of thing in general is that it can be hard to avoid ambiguity: should "5" + 1 be "51" or "6"? Perl gets around this by having a separate operator for string concatenation, and reserving + for numeric addition.
Other languages would have to sort out whether you mean to do a concatenation or an addition, and (if relevant) what type or representation the result will be.
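For comparison, a minimal sketch of how a strongly typed dynamic language handles the same ambiguity: Python refuses to guess and makes the programmer resolve it explicitly.
"5" + "1"      # "51"  -- '+' on two strings always means concatenation
int("5") + 1   # 6     -- explicit conversion, then numeric addition
# "5" + 1      # TypeError: can only concatenate str (not "int") to str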
I did ASP/VBScript coding and worked with legacy code without "option strict", which allows weak typing.
It was hell many times, especially in the hands of less experienced programmers. We got all sorts of stupid errors that took ages to diagnose.
One of the stupid examples was like this:
'Config
Dim pass
pass = "asdasd"

If NOT pass = Request("p") Then
    Response.Write "login failed"
    Response.End()
End If
So far so good, but if the user changes pass to an integer password, guess what: it won't work anymore, because int pass != string pass (from the querystring). I thought it was supposed to work, but it didn't; I can't remember the exact piece of code.
I hate weak typing; instead of a stupid debugging session I can spend a few extra seconds typing the exact type of a variable.
Simply put, in my experience, especially in big projects and especially with inexperienced developers, it's just trouble.