What's the benefit of case-sensitivity in a programming language? [duplicate]

My first programming experiences were with the Basic family (MSX Basic, QBasic, VB).
None of these is case-sensitive. Now, it might be because of those first experiences, but I've never grasped the benefit of a language being case-sensitive. On the contrary, I think it is a source of unneeded overhead and bugs, and it still annoys me when I use e.g. Java or C.
Now, I just read about Clojure (a Lisp dialect) and noticed, to my surprise, that one of its differences from Lisp is case-sensitivity.
So: what is actually the benefit (to the programmer) of having a case-sensitive language?
The only things I can think of are:
it doubles the number of available symbols
visual feedback and easier reading for complex variables using techniques like CamelCase, e.g. HopCount
However, the first argument doesn't hold, because doubling the symbol space is itself a major source of bugs (it is bad practice to use hopcount and HopCount in one method).
The second argument doesn't hold either, as a decent IDE can provide this in another way. A good example is the VBA IDE, which has a very good approach: the language is case-insensitive, but as soon as you type a variable it changes it to the case used in its definition. For example, if you defined Dim thisIsMyVariable As String, it will change any occurrence of thisismyvariable into thisIsMyVariable. That gives the programmer an immediate clue that the variable was typed correctly (because it changed appearance).
Edit: added ... benefit to the programmer ...

One point is, like you said, visual aid. Most programming languages (and even frameworks) have conventions on how to capitalize variables, names, etc.
Also, it enforces using uniform names everywhere, so you don't have a mess with the same variable referred to as "var", "Var" or even "VaR".
I can't remember ever having bugs related to capitalization, so that point seems kind of contrived to me.
Using two variables of the same name but different capitalization sounds to me like a conscious attempt to shoot yourself in the foot. Almost everywhere, different capitalization conventions signify objects of completely different kinds (classes, variables, methods and so on), so it's pretty hard to make such a mistake: the semantics are completely different.
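To make that concrete, here is a minimal Java sketch (hopCount is an invented name): with case sensitivity, a miscapitalized identifier is a compile-time error instead of a silent alias.
public class CaseDemo {
    public static void main(String[] args) {
        int hopCount = 3;
        // The next line would not compile: "cannot find symbol: HopCount".
        // A case-insensitive Basic dialect would silently treat it as hopCount.
        // System.out.println(HopCount);
        System.out.println(hopCount); // prints 3
    }
}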
I'd like to think of it in this way: what do we gain by NOT having case-sensitivity?
We introduce ambiguity, we encourage sloppiness and poor style.
This is a slightly subjective matter of course.

Many naming conventions demand that symbols denoting objects from different semantic classes (types, functions, variables) follow their own casing rules. In Java, for example, type names always begin with an uppercase letter, while variables, member function names etc. begin with a lowercase letter. This effectively puts type names in a different namespace and gives a visual clue as to what a statement actually means.
// declare and initialize a new Point
Point point = new Point();
// calls a static member function of type Point
Point.fooBar();
// calls a member function of Point
point.moveTo(x, y);


Reasoning for Language-Required Variable Name Prefixes

The browser-based software StudyTRAX ( http://wiki.studytrax.com ), used for research data management, allows for custom form and form variable management via JavaScript. However, a StudyTRAX "variable" (essentially, a representation of both an element of a form [HTML properties included] and its corresponding parameter, with some data typing/etc.) must be referred to with #<varname>, while regular JavaScript variables will just be <varname>.
Is this sort of thing done to make parsing easier, or is it just to distinguish between the two so that researchers who aren't so technologically-inclined won't have as much trouble figuring out what they're doing? Given the nature of JavaScript, I would think the StudyTRAX "variables" are just regular JavaScript objects defined in such a way to make form design and customization simpler, and thus the latter would make more sense, but am I wrong?
Also, I know that there are other programming languages that do require specific variable prefixes (though I can't think of some off the top of my head at the moment); what is/was the usual reasoning for that choice in language design?
Two-part answer. StudyTRAX is almost certainly using a preprocessor to do some magic. JavaScript makes this relatively easy, but not as easy as a Lisp would; you still need to parse the code. By prefixing, the parser can ignore a lot of JavaScript's complicated syntax and get to the good part without needing a "picture perfect" compiler. Actually, a lot of templating systems do this. It is an implementation of Lisp's quasi-quote (see Greenspun's Tenth Rule).
As for prefixes in general, the best way to understand them is to try to write a parser for a language without them. For very dynamic and pure languages like Lisp and JavaScript, where everything is a list / object, it is not too bad. When you get to languages where methods are distinct from objects, or where functions are not first-class, the parser starts having to ask itself what kind of thing "foo" refers to. An annoying example from Ruby: an unprefixed identifier is either a local variable or a method implicitly called on self. In Rails there are a few functions that are implemented with method_missing. Person.find_first_by_rank works fine, but
class Person < ActiveRecord::Base
  def promotion(name)
    p = find_first_by_rank
    [...]
  end
end
gives an error because find_first_by_rank looks like it might be a local variable and Ruby is scared to call method_missing on something that might just be a misspelled local variable.
Now imagine trying to distinguish between instance variables (prefix @), class variables (prefix @@), global variables (prefix $), constants (first letter capitalized), method names and local variables (no prefix, lowercase) by context alone.
(From a compiler & language design hobbyist.)
Your question is more specific to the StudyTRAX software.
In the early days of programming, variables in Basic used suffixes like $ (for strings, e.g. a$) to distinguish them from numeric values. Today, some programming languages such as PHP prefix variables with "$". FORTRAN treated variables starting with I through N as integers, and the rest as floats.
Transforming, and later executing, code is a complex task; that's why many language designers take shortcuts like requiring prefixes or suffixes.
Many colleges and universities have specialized courses ("Compilers", "Automata", "Language Design") on transforming code from a programming language into something the computer can execute, because it is not an easy task.
Perl requires different variable prefixes, depending on the type of data:
$scalar = 4.2;
@array = (1, 4, 9, 16);
%map = ("foo" => 42, "bar" => 17, "baz" => 137);
As I understand it, this is so the reader can immediately identify what kind of object they're dealing with. It's not a matter of whether the reader is technologically inclined or not: if you reduce the programmer's cognitive load, he can use his brainpower for more important things than figuring out fiddly syntactic details.
Whether Perl's design is successful in this respect is another question, but I believe that's the reasoning behind the feature.

Objective-C class naming convention vs Uncle Bob

In Chapter 2: Meaningful Names, Uncle Bob writes:
Don't Add Gratuitous Context
In an imaginary application called "Gas Station Deluxe," it is a bad idea to prefix every class with GSD. Frankly, you are working against your tools. You type G and then press the completion key and are rewarded with a mile-long list of every class in your system
Actually, that's what I discovered during my first days with Objective-C, a bit more than one year ago. After Java it was quite disappointing, but I thought I was the only one annoyed about that :)
I understand that the "Clean Code" book refers to Java most of the time, and that Java has namespaces (packages), unlike Objective-C.
Do you use a 2-3 letter prefix in your classes if you're building an app, not a library?
What do you think, is it bad language design, language "feature" or Uncle Bob wasn't right here?
Perhaps the key word here is gratuitous. In Objective-C, prefixes serve the important purpose of reducing the chance of name collisions. In other languages like Java and C++, the existence of support for namespaces makes the use of prefixes gratuitous (and a violation of the oft-cited DRY principle). In Objective-C, however, prefixes are meaningful, useful, and not gratuitous.
I was tempted to close this question, but I don't think I've seen a similar one asked before and it's a valid question. Here are my rather disorganized thoughts on the matter.
Many languages have a feature called namespaces, where the "fully qualified" class name is prefixed by a hierarchical series of names. For example, the String class in Java is, properly, java.lang.String, and a custom class is properly com.whatever.foobar.MyClass.
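For instance, two JDK classes that share a simple name coexist because their fully qualified names differ; a small Java sketch:
public class NamespaceDemo {
    public static void main(String[] args) {
        // java.util.Date and java.sql.Date share the simple name "Date";
        // the package part of the fully qualified name keeps them apart.
        java.util.Date now = new java.util.Date();
        java.sql.Date today = new java.sql.Date(now.getTime());
        System.out.println(now + " / " + today);
    }
}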
Unfortunately, namespaces have never been added to Objective-C, which means that Objective-C symbols (class names, protocol names, and a few various other types) cannot be placed in a namespace even when using Objective-C++ (which has a namespace feature for functions, constants, structures, etc.)
The only solution to prevent symbol collisions in shared code, then, is to use some form of name mangling to make your symbol names unique. In Objective-C, the convention is to use a prefix of two characters (sometimes the number varies) for all your classes.
This Uncle Bob fellow is a twit for telling you not to do this, because while you won't end up with a program that doesn't compile, you'll lose any benefit of namespacing that prefixes still offer. Does your app use plugins? You need to prefix. Does your app have a public API? You need to prefix.
In theory, code within a single application that never touches the outside world can do without prefixes, but screw it--keep coding cleanly, and add a prefix even there. It'll save you grief later.
Personally, I almost never use prefixes. The only exception is classes that are somehow connected to each other and belong together as a group.
An example:
Some client app for chat. Let's call that chat an ExampleChat.
Then I'd use ECMessage, ECUser, ECRoom, etc., so I can easily see at a glance which classes belong there.
Or if I make some custom cells for a UITableView, I'd use prefixes to keep them all close to each other and not struggle with searching for them in a "mile-long list". Example:
ECTextMessageCell, ECSoundMessageCell, ECUploadMessageCell, ECJoinOrLeaveMessageCell, etc.
That's my personal opinion, which may not be the best one. But it's still the easiest for me.
Hope it helps
Well, if you do not have namespaces, name conflicts are likely to occur. You can see that in a lot of C libraries: they all use some kind of prefix. So I guess there are good reasons to have those prefixes and other reasons not to use them. But how big a problem can it be to modify completion to just ignore the prefix, or to type three letters instead of just one?
So in the end it seems to me a matter of taste. I guess it is more important to have well-structured classes with prefixes than a mess of classes without prefixes.
It has nothing to do with bad language design, IMHO. There was a time when software was not everywhere, so why would one spend extra effort on namespaces? And as we can see, even nowadays languages without namespaces are still used.
I would say that the world is not black or white. I do programming in Java, with packages, and yes, it is annoying to have a prefix in each class, just as it is annoying and arguable to start interfaces with I (as .NET does).
Sometimes it annoys me in Objective-C too; however, it has some legitimacy if your language has no packages, since you can 'build' artificial groups of classes like 'NS', 'UI', 'MK' and so on in Objective-C and Cocoa.
Beyond avoiding collisions, one of the benefits that name prefixes give is that you're immediately aware of what type you're really dealing with. Suppose you had the following code:
Color c = ...;
MultiValueMap m = ...;
From a cursory glance at the code, and depending on what libraries you've used, those types could come from a number of different sources. You may have to look up which include/import statement was made to understand what the type can do (e.g. you want to modify it but it's missing a method that you're sure is there).
In the iOS world, you would immediately know whether it's a UIColor vs. a CGColor and gain immediate context.
In the past at WWDC, Apple would host a session where they explained Cocoa/Objective-C coding conventions. I believe they mention this aspect of name prefixes so you might want to find one of the recordings that are made available. Other C developers (e.g. Linux kernel developers) also do not seem to think highly of C++ namespaces (among other C++ features) for various reasons.

What are some best practices to follow when naming variables?

What are some recommended best practices to follow when naming variables? Global variables?
When working with a solution containing many projects, ensure that all public names indicate a relevant context. Do not use identical names in different projects: compilation will work, but maintenance can be a nightmare.
To a large extent it does not matter what standards you decide to adopt. The most important factor is that you stick to it! Consistency is really important and as long as you manage that your code will be significantly easier to read and maintain in the future.
As one idea, you could check out the Hungarian notation used for Win32 and C++ programming under Windows.
Notation Definition (PDF)
Keep your names meaningful; the code should document itself. Avoid abbreviations: the length of a name isn't usually a problem in most languages.
Boolean variables should begin with is* or has*; try to choose a name that avoids requiring negation in tests, as the ! can easily be missed.
Group variables associated with an item by using a common prefix, e.g. documentTitle, documentType, documentSize, etc. (see the sketch after this list).
Avoid using numbers to distinguish variables unless an index is involved.
Forget about Hungarian notation.
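A minimal Java sketch of the tips above (Document and all member names are invented for illustration):
public class Document {
    // variables grouped by a common prefix
    private String documentTitle;
    private String documentType;
    private long documentSize;

    // boolean names read as predicates, so tests rarely need negation
    public boolean hasTitle() {
        return documentTitle != null && !documentTitle.isEmpty();
    }

    public boolean isLarge() {
        return documentSize > 10_000_000;
    }
}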
Some broad strokes:
Use i, j, k for loop variables. It's very common practice and easy to understand.
For boolean (true/false) variables, use predicate names like isDirectory or canExecute.
Whether you camelCase or use_underscores is just a matter of preference.
It may be a good idea to decorate variables with Hungarian notation describing the meaning of the variable, e.g. iMax could be the index of the maximum element in an array. It's less useful to decorate names with the language-level type information. For a very entertaining explanation of the difference, and why one is good and the other bad, see Joel's essay.
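A small Java sketch of that "meaning, not machine type" flavor of Hungarian notation; iMax records which element is largest, not that the variable is an int:
public class MaxIndex {
    public static void main(String[] args) {
        int[] values = {3, 17, 8, 42, 5};
        int iMax = 0; // index of the maximum element seen so far
        for (int i = 1; i < values.length; i++) {
            if (values[i] > values[iMax]) {
                iMax = i;
            }
        }
        System.out.println("largest value " + values[iMax] + " at index " + iMax);
    }
}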
Best not to start them with numbers or symbols in some languages. Also, don't use reserved words of the language you're using. For example, in C# you wouldn't want to name a variable "if", "else", "void", "try", etc.
I'm by no means an experienced programmer, but I've somewhat had it drilled into me at college and uni, and have seen it on sites like this, that when naming variables they should mean something.
Maybe this is an education thing, but it does make sense - the variable name should make it easily apparent what that variable is used for, anywhere in your code. It comes down to, I think, the fact that code shouldn't need masses of comments - it should explain itself. Variable naming is a part of that.

Explanation of naming conventions for classes, methods and variables

I'm currently in University and they're pretty particular about following their standards.
They've told me this:
All classes must start with a capital letter
Correct
public class MyClass {}
Incorrect
public class myClass {}
public class _myClass {}
All methods must start with a lowercase letter
Correct
public void doSomething() {}
Incorrect
public void DoSomething() {}
public void _doSomething() {}
All variables must start with a lowercase letter
Correct
string myString;
Incorrect
string MyString;
string _myString;
Yet in my last year of programming, I've been finding that people are using much different rules. It wouldn't matter if it were just a few people using the different rules, but almost everywhere I see these different practices being used.
So I just wanted to know what the reasoning behind the above standards is and why some of these other standards are being used: (are they wrong/old standards?)
Most methods I've seen start with a capital letter rather than a lowercase one (pretty much all of Microsoft's methods I've been using from their imported namespaces). This is probably the most common one I've seen that I don't understand.
A lot of people use _ for class variables.
I've seen capitals on variables ie. string MyString;
I know I've missed a few as well, if you can think of any that you could add in and give an explanation for that would be helpful. I know everyone develops their own coding styles, but many of these practices have reasons behind them and I would rather stick with what makes the most sense.
There is no objectively compelling reason to choose one coding style over another.
The most important thing is to agree on a coding style with the people you are working with. And to help you all agree on one, your professor gave you a coding style.
Most of the time it is just a matter of point of view. So just follow your professor's coding style if you have to code for the university.
standards are arbitrary, like which side of the road to drive on; just do it like they tell you to do it ;-)
Most people are talking about naming convention style, but there are other things to consider when approaching naming conventions, such as what you actually name a routine.
Routine (method, function, and procedure) names should typically be in the form of a strong verb + object, regardless of how you format them. For example:
paginateResponse()
or
empty_input_buffer()
as (respectively) opposed to
dealWithResponse()
or
process_input_buffer()
Both "dealWith" and "process" are verbs, but they are ambiguous and cause any other programmers working with your code in the future to have to consult the actual routine definition to determine what it really does.
"Strong" verbs, on the other hand, as shown in the first two examples, are much more powerful in their descriptive power and really pin down what the routine is doing.
This makes your code easier to read as it is self-documenting and leads to higher levels of cohesion.
Also, as a personal point of style, I try to avoid at all costs using "my" in any name.
Standards are only standards if they are followed, and every company or institution has their own standards. It is one of the worst parts of programming. :D
Speaking specifically about the leading _: from my experience, it is mostly used on variables that are declared private within a class. They are usually coupled with an accessor method of the same name without the leading _.
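In Java-flavored code that convention might look like this (Account and _owner are invented names for illustration):
public class Account {
    // the private field carries the leading underscore...
    private String _owner;

    public Account(String owner) {
        _owner = owner;
    }

    // ...and the accessor is the same name without it
    public String owner() {
        return _owner;
    }
}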
I am trying to follow the rules from Framework Design Guidelines: Conventions, Idioms, and Patterns for Reusable .NET Libraries by Krzysztof Cwalina and Brad Abrams
Guidelines in this book are presented in four major forms: Do, Consider, Avoid, and Do not. These directives help focus attention on practices that should always be used, those that should generally be used, those that should rarely be used, and those that should never be used. Every guideline includes a discussion of its applicability, and most include a code example to help illuminate the dialogue.
Also, you can use FxCop to check your compliance with those rules.
Standards help with readability, and therefore improve maintainability. (because when you can read the code faster, easier and more accurately, you can debug and repair it, or enhance it, in less time and with less effort.)
They have no effect on reliability or availability, because the computer doesn't care what the variables are named or how the source code is formatted.
If your code is well-organized and readable, you have achieved the objective, regardless of whether or not it conforms exactly to anyone else's "standard".
This says nothing, of course, about how to handle the environment where "standards" are high on someone's list of developer evaluation tools, or management metrics...
I see logic behind capitalisation of classes and variables; it means you can do things like
Banana banana; // declares a new Banana called banana
I've been learning Qt recently, and they follow your conventions to the letter. I wouldn't ever follow Microsoft's naming conventions!
The standards I've seen echo what's in the Framework Design Guidelines. In the examples you've stated above, I don't see you distinguishing between visibility (public/private).
For example:
Public facing methods should be PascalCase: public void MyMethod() ...
Parameters to methods should be camelCase: public void MyMethod(string myParameter) ...
Fields, which should always be private, should be camelCase. Some prefer the underscore prefix (I do) to distinguish them from method parameters.
The best bet on standards is to have your team agree upon conventions up front when the project kicks off, you'll find everything much more consistent.
Coding styles are based on personal preferences and to a large extent the features of the language that you're using.
My personal take is that it's more important to be consistent with a convention than to pick the "right one". People can be dogmatic about their preferred style, and things can often devolve into a religious war.
All classes must start with a capital letter - This goes hand-in-hand with variable naming and helps prevent confusion that would arise if you had both classes and variables named with the same rules. My preference is a capital letter because I'm used to it and it follows the guidelines for my preferred language (C#).
All methods must start with a lowercase letter - same goes, although I start my methods with an uppercase character (as per C# guidelines).
All variables must start with a lowercase letter - this, I believe, depends on your language's scoping features. Often people prefix variables (usually with an underscore or a character like "g") to indicate a variable's scope ("g" might mean "global"). This can help prevent confusion where variables have the same names in different scopes. My C#-driven preference: all variables start with a lowercase letter, and I use "this." to reference a member variable of the same name where scope is a problem (this usually only occurs in a class's constructor).
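The same trick exists in Java, where a constructor parameter shadows the field of the same name; a minimal sketch:
public class Point {
    private int x;
    private int y;

    public Point(int x, int y) {
        // "this.x" is the field; the bare "x" is the parameter
        this.x = x;
        this.y = y;
    }
}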
I can't let 3. go by without mentioning Hungarian notation (which is grossly misused and misunderstood). Joel has a great article that helped me understand these better.
In addition to the main point (any specific standard is essentially arbitrary, but it's important to have some agreed-upon standard), I'd also add that some standards are ubiquitous enough to have achieved the status of the "correct" way to do things.
For example, in Java, class names in professional code are always in CamelCase. I'll qualify that "always" by saying that your code will compile if you break the standard, and you may occasionally find some open source projects that break the convention as well, but I believe most people would take that as a sign that the author is not too familiar with the language. Most of your professor's guidelines are fairly standard (for Java, in any case). Being radically different in this case, apart from annoying your professor, will probably irritate total strangers ;)
It's interesting to me that some languages seem to have taken this standardization to heart, and enforce capitalization to have specific meaning (e.g. Haskell).
The rules you're citing are those used pretty universally in the Java world.
Are you doing Java code at university? If not, it may be that they were previously teaching Java, then switched to C# but kept the naming conventions.

When should weak types be discouraged?

When should weak types be discouraged? Are weak types discouraged in big projects? If the left side is strongly typed like the following would that be an exception to the rule?
int i = 5
string sz = i
sz = sz + "1"
i = sz
Do any languages support syntax similar to the above? Tell me more about the pros and cons of weak types and related situations.
I think you are confusing "weak typing" with "dynamic typing".
The term "weak typing" means "not strongly typed", which means that the value of a memory location is allowed to vary from what it's type indicates it should be.
C is an example of a weakly typed language. It allows code like this to be written:
typedef struct
{
    int x;
    int y;
} FooBar;

FooBar foo;
/* Reinterpret the struct's memory as characters; the cast keeps the
   compiler quiet, but nothing stops the reinterpretation itself. */
char *pStr = (char *)&foo;
pStr[0] = 'H';
pStr[1] = 'i';
pStr[2] = '\0';
That is, it allows a FooBar instance to be treated as if it was an array of characters.
In a strongly typed language, that would not be allowed. Either a compiler error would be generated, or a run time exception would be thrown, but never, at any time, would a FooBar memory address contain data that was not a valid FooBar.
C#, Java, Lisp, JavaScript, and Ruby are examples of languages where this kind of thing would not be allowed. They are strongly typed.
Some of those languages are "statically typed", which means that variable types are assigned at compile time, and some are "dynamically typed", which means that variable types are not known until runtime. "Static vs Dynamic" and "Weak vs Strong" are orthogonal issues. For example, Lisp is a "strong dynamically typed" language, whereas "C" is a "weak statically typed language".
Also, as others have pointed out, there is a distinction between "inferred types" and types specified by the programmer. The "var" keyword in C# is an example of type inference. However, it's still a statically typed construct because the compiler infers the type of a variable at compile time, rather than at runtime.
So, what your question really is asking is:
What are the relative merits and drawbacks of static typing, dynamic typing, weak typing, strong typing, inferred static types, and user-specified static types?
I provide answers to all of these below:
Static typing
Static typing has 3 primary benefits:
Better tooling support
Reduced likelihood of certain types of bugs
Performance
The user experience and accuracy of things like IntelliSense and refactoring are improved greatly in a statically typed language because of the extra information the static types provide. If you type "a." in a code editor and "a" has a static type, then the compiler knows everything that could legally come after the "." and can thus show you an accurate completion list. It's possible to support some of these scenarios in a dynamically typed language, but they are much more limited.
Also, in a program without compiler errors a refactoring tool can identify every place a particular method, variable, or type is used. It's not possible to do that in a dynamically typed language.
The second benefit is somewhat controversial. Proponents of statically typed languages like to make that claim. Opponents contend that the bugs static typing catches are trivial and would be caught by testing anyway. But you do get notification of things like misspelled variable or method names up front, which can be helpful.
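A tiny Java sketch of that up-front notification (Person and getName are invented names): the typo is rejected before the program ever runs, where a dynamic language would fail only when the line executes.
public class Person {
    private final String name;

    public Person(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public static void main(String[] args) {
        Person p = new Person("Ada");
        // p.getNmae(); // would not compile: "cannot find symbol"
        System.out.println(p.getName());
    }
}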
Statically typed languages also enable better "data flow analysis", which when combined with things like Microsoft's SAL (or similar tools) can help find potential security problems.
Finally, with static typing, compilers can do a lot more optimization, and so can produce faster code.
Drawbacks:
The main drawback for static typing is that it restricts the things you can do. You can write programs in dynamically typed languages that you can't write in statically typed languages. Ruby on Rails is a good example of this.
Dynamic Typing
The big advantage of dynamic typing is that it's much more powerful than static typing. You can do a lot of really cool stuff with it.
Another one is that it requires less typing. You don't have to specify types all over the place.
Drawbacks:
Dynamic typing has two main drawbacks:
You don't get as much "hand holding" from the compiler or IDE
It's not suitable for performance-critical scenarios. For example, no one writes OS kernels in Ruby.
Strong typing:
The biggest benefit of strong typing is security. Enforcing strong typing usually requires some kind of runtime support. If a program can prove type safety, then a lot of security issues, such as buffer overruns, just go away.
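For example, a strongly typed runtime refuses to read past the end of an array rather than handing back whatever bytes happen to sit there; a minimal Java sketch:
public class BoundsDemo {
    public static void main(String[] args) {
        int[] buffer = new int[4];
        try {
            System.out.println(buffer[10]); // every access is bounds-checked
        } catch (ArrayIndexOutOfBoundsException e) {
            // In weakly typed C, the same read would quietly return
            // adjacent memory, the root of many buffer-overrun exploits.
            System.out.println("out-of-bounds access rejected: " + e.getMessage());
        }
    }
}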
Weak typing:
The big drawback of strong typing, and the big benefit of weak typing, is performance.
When you can access memory any way you like, you can write faster code. For example a database can swap objects out to disk just by writing out their raw bytes, and not needing to resort to things like "ISerializable" interfaces. A video game can throw away all the data associated with one level by just running a single free on a large buffer, rather than running destructors for many small objects.
Being able to do those things requires weak typing.
Type inference
Type inference allows a lot of the benefits of static typing without requiring as much typing.
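Java (10 and later) has the same feature as C#'s var; the type is still fixed at compile time, it just isn't spelled out. A minimal sketch:
import java.util.ArrayList;

public class InferenceDemo {
    public static void main(String[] args) {
        var names = new ArrayList<String>(); // inferred as ArrayList<String>
        names.add("inferred");
        // names.add(42); // would not compile: the list is still statically typed
        System.out.println(names);
    }
}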
User specified types
Some people just don't like type inference because they like to be explicit. This is more of a style thing.
Weak typing is an attempt at language simplification. While this is a worthy goal, weak typing is a poor solution.
Weak typing such as is used in COM Variants was an early attempt to solve this problem, but it is fraught with peril and frankly causes more trouble than it's worth. Even Visual Basic programmers, who will put up with all sorts of rubbish, correctly pegged this as a bad idea and backronymed Microsoft's ETC (Extended Type Conversion) to Evil Type Cast.
Do not confuse inferred typing with weak typing. Inferred typing is strong typing inferred from context at compile time. A good example is the var keyword, used in C# to declare a variable suitable to receive the value of a LINQ expression.
By contrast, weak typing is inferred each and every time an expression is evaluated. This is illustrated in the question's sample code. Another example would be use of untyped pointers in C. Very handy yet begging for trouble.
Inferred typing addresses the same issue as weak typing, without introducing the problems associated with weak typing. It is therefore a preferred alternative whenever the host language makes it available.
They should almost always be discouraged. The only type of code that I can think of where it would be required is low-level code that requires some pointer voodoo.
And to answer your question: C supports code like that (except, of course, for not having a string type), and it sounds like something PHP or Perl would have (but I could be totally wrong about that).
"
When should weak types be discouraged? Are weak types discouraged in
big projects? If the left side is strongly typed like the following
would that be an exception to the rule?
int i = 5 string sz = i sz = sz + "1" i = sz
Does any languages support similar syntax to the above? Tell me more
about pros and cons to weak types and situations related.
"
Perhaps you could program your own library to do that.
In C++ you can use something called an "operator overload", which means that you can declare a variable of one type to be initialized as a variable of another type. That is what makes the statement:
std::string str = "Hello World";
work, even though any text between quotes is interpreted as an array of chars.
Specifically, you would define a function like this (where T is the variable's type and B is the type you want to assign from):
T& T::operator= ( const B s );
Please note that this is a class's member function.
Also note that you will probably want some sort of function that reverses this manipulation if you want to use it liberally - something like:
B& B::operator= ( const T s );
C++ is powerful enough to let you make an object generally weakly typed, but if you want to treat it as purely weakly typed, you will want to make just a single variable type that can be used as any primitive, and use only functions that take a pointer to void.
Believe me, it is a lot easier to use strongly typed programming when it is available.
I personally prefer strong typing, because I don't need to worry about the errors that come when I don't know what a variable is meant to do. For example, if I wanted to write a function to talk to a person (a function that used the person's height, weight, name, number of children, etc.) but you gave me a color, I would get an error, because you can't determine most of those things for a color with any simple algorithm.
As far as the pros of weak typing go, you might want to get used to loosely typed programming if you are writing something to run within another program (i.e. a web browser or a UNIX shell). JavaScript and shell script are weakly typed.
I would suggest that assembly language is one of the only hardware-level weakly typed languages, but the flavor of assembly I've seen attaches a type to each variable depending on the allocated size, i.e. word, dword, qword.
I hope I gave you a good explanation and did not put any words in your mouth.
Weak types are by their very nature less robust than strong types, because you don't tell the machine exactly what to do; instead the machine has to figure out what you meant. This often works quite adequately, but in general it is not clear what the result should be. What is, for example, a string multiplied by a float?
Does any languages support similar syntax to the above?
Perl allows you to treat some numbers and strings interchangeably. For example, "5" + "1" will give you 6. The problem with this sort of thing in general is that it can be hard to avoid ambiguity: should "5" + 1 be "51" or "6"? Perl gets around this by having a separate operator for string concatenation, and reserving + for numeric addition.
Other languages would have to sort out whether you mean to do a concatenation or an addition, and (if relevant) what type or representation the result will be.
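For contrast, Java settles the same ambiguity by rule rather than by coercion: + with a String operand always means concatenation, and numeric addition has to be requested explicitly. A small sketch:
public class ConcatDemo {
    public static void main(String[] args) {
        System.out.println("5" + 1);                   // "51": concatenation wins
        System.out.println(Integer.parseInt("5") + 1); // 6: addition is explicit
    }
}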
I did ASP/VBScript coding and worked with legacy code without "Option Strict", which allows weak typing.
It was hell many times, especially in the hands of less experienced programmers: we got stupid errors that took ages to diagnose.
One of the stupid examples was like this:
'Config
Dim pass
pass = "asdasd"

If Not pass = Request("p") Then
    Response.Write "login failed"
    Response.End
End If
So far so good, but if someone changes pass to an integer password, guess what: it won't work anymore, because an int pass is not equal to the string pass coming from the query string. I thought it was supposed to work, but it didn't; I can't remember the exact piece of code.
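In a statically typed language this class of bug dies at compile time. A hedged Java sketch of the same check (the names mirror the VBScript above):
public class LoginCheck {
    public static void main(String[] args) {
        int pass = 123456;          // the "integer password" from the story
        String requestP = "123456"; // query-string values always arrive as strings
        // if (pass != requestP) { ... } // would not compile: int vs. String
        if (!String.valueOf(pass).equals(requestP)) { // the conversion is forced into the open
            System.out.println("login failed");
        }
    }
}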
I hate weak typing; instead of a stupid debugging session I'd rather spend a few extra seconds typing the exact type of a variable.
Simply put, in my experience, especially in big projects and especially with inexperienced developers, it's just trouble.