When should weak types be discouraged? Are weak types discouraged in big projects? If the left side is strongly typed, like the following, would that be an exception to the rule?
int i = 5
string sz = i
sz = sz + "1"
i = sz
Do any languages support syntax similar to the above? Tell me more about the pros and cons of weak types and the situations they relate to.
I think you are confusing "weak typing" with "dynamic typing".
The term "weak typing" means "not strongly typed", which means that the value of a memory location is allowed to vary from what it's type indicates it should be.
C is an example of a weakly typed language. It allows code like this to be written:
typedef struct
{
int x;
int y;
} FooBar;
FooBar foo;
char * pStr = (char *)&foo;  /* reinterpret the struct's memory as characters */
pStr[0] = 'H';
pStr[1] = 'i';
pStr[2] = '\0';
That is, it allows a FooBar instance to be treated as if it were an array of characters.
In a strongly typed language, that would not be allowed. Either a compiler error would be generated, or a run time exception would be thrown, but never, at any time, would a FooBar memory address contain data that was not a valid FooBar.
C#, Java, Lisp, JavaScript, and Ruby are examples of languages where this type of thing would not be allowed. They are strongly typed.
Some of those languages are "statically typed", which means that variable types are assigned at compile time, and some are "dynamically typed", which means that variable types are not known until runtime. "Static vs Dynamic" and "Weak vs Strong" are orthogonal issues. For example, Lisp is a "strong dynamically typed" language, whereas "C" is a "weak statically typed language".
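For instance, Python sits in the opposite corner from C: strongly but dynamically typed. A minimal sketch:
# Dynamic: the type travels with the value, not the variable.
x = 5          # x currently holds an int
x = "five"     # now it holds a str; nothing is checked before runtime

# Strong: the runtime refuses to reinterpret one type as another.
try:
    x + 2      # str + int has no implicit conversion
except TypeError as err:
    print(err) # "can only concatenate str (not "int") to str"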
Also, as others have pointed out, there is a distinction between "inferred types" and types specified by the programmer. The "var" keyword in C# is an example of type inference. However, it's still a statically typed construct because the compiler infers the type of a variable at compile time, rather than at runtime.
So, what your question really is asking is:
What are the relative merits and drawbacks of static typing, dynamic typing, weak typing, strong typing, inferred static types, and user-specified static types?
I provide answers to all of these below:
Static typing
Static typing has 3 primary benefits:
Better tooling support
A reduced likelihood of certain types of bugs
Performance
The user experience and accuracy of things like IntelliSense and refactoring are improved greatly in a statically typed language because of the extra information that the static types provide. If you type "a." in a code editor and "a" has a static type, then the compiler knows everything that could legally come after the "." and can thus show you an accurate completion list. It's possible to support some of these scenarios in a dynamically typed language, but they are much more limited.
Also, in a program without compiler errors a refactoring tool can identify every place a particular method, variable, or type is used. It's not possible to do that in a dynamically typed language.
The second benefit is somewhat controversial. Proponents of statically typed languages like to make that claim. Opponents contend that the bugs these checks catch are trivial, and that they would get caught by testing anyway. But you do get notification of things like misspelled variable or method names up front, which can be helpful.
Statically typed languages also enable better "data flow analysis", which when combined with things like Microsoft's SAL (or similar tools) can help find potential security problems.
Finally, with static typing, compilers can do a lot more optimization, and so can produce faster code.
Drawbacks:
The main drawback for static typing is that it restricts the things you can do. You can write programs in dynamically typed languages that you can't write in statically typed languages. Ruby on Rails is a good example of this.
Dynamic Typing
The big advantage of dynamic typing is that it's much more powerful than static typing. You can do a lot of really cool stuff with it.
Another advantage is that it requires less typing: you don't have to specify types all over the place.
Drawbacks:
Dynamic typing has 2 main drawbacks:
You don't get as much "hand holding" from the compiler or IDE
It's not suitable for performance-critical scenarios. For example, no one writes OS kernels in Ruby.
Strong typing:
The biggest benefit of strong typing is security. Enforcing strong typing usually requires some type of runtime support. If a program can prove type safety, then a lot of security issues, such as buffer overruns, just go away.
Weak typing:
The big drawback of strong typing, and the big benefit of weak typing, is performance.
When you can access memory any way you like, you can write faster code. For example, a database can swap objects out to disk just by writing out their raw bytes, without resorting to things like "ISerializable" interfaces. A video game can throw away all the data associated with one level with a single free of a large buffer, rather than running destructors for many small objects.
Being able to do those things requires weak typing.
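Strongly typed environments usually need an explicit escape hatch for that kind of raw-byte access. As a rough illustration (not weak typing itself, but the byte-level view it buys), Python's ctypes can dump a struct's raw bytes much like the database example:
import ctypes

# A C-style struct whose in-memory layout is directly accessible.
class FooBar(ctypes.Structure):
    _fields_ = [("x", ctypes.c_int), ("y", ctypes.c_int)]

foo = FooBar(72, 105)    # 72 is 'H' and 105 is 'i' in ASCII
raw = bytes(foo)         # the whole struct as raw bytes, no serializer involved
print(raw)               # b'H\x00\x00\x00i\x00\x00\x00' on a little-endian machine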
Type inference
Type inference allows a lot of the benefits of static typing without requiring as much typing.
User specified types
Some people just don't like type inference because they like to be explicit. This is more of a style thing.
Weak typing is an attempt at language simplification. While this is a worthy goal, weak typing is a poor solution.
Weak typing such as is used in COM Variants was an early attempt to solve this problem, but it is fraught with peril and frankly causes more trouble than it's worth. Even Visual Basic programmers, who will put up with all sorts of rubbish, correctly pegged this as a bad idea and backronymed Microsoft's ETC (Extended Type Conversion) to Evil Type Cast.
Do not confuse inferred typing with weak typing. Inferred typing is strong typing inferred from context at compile time. A good example is the var keyword, used in C# to declare a variable suitable to receive the value of a LINQ expression.
By contrast, weak typing is inferred each and every time an expression is evaluated. This is illustrated in the question's sample code. Another example would be use of untyped pointers in C. Very handy yet begging for trouble.
Inferred typing addresses the same issue as weak typing, without introducing the problems associated with weak typing. It is therefore a preferred alternative whenever the host language makes it available.
They should almost always be discouraged. The only type of code that I can think of where it would be required is low-level code that requires some pointer voodoo.
And to answer your question, C supports code like that (except of course for not having a string type), and that sounds like something PHP or Perl would have (but I could be totally wrong on that).
"
When should weak types be discouraged? Are weak types discouraged in
big projects? If the left side is strongly typed like the following
would that be an exception to the rule?
int i = 5 string sz = i sz = sz + "1" i = sz
Does any languages support similar syntax to the above? Tell me more
about pros and cons to weak types and situations related.
"
Perhaps you could program your own library to do that.
In C++ you can use operator overloading (together with converting constructors), which lets a variable of one type be initialized or assigned from a value of another type. That is what makes the statement:
std::string str = "Hello World";
work, even though any text between quotes is interpreted as an array of chars.
Specifically, you would define a member function (where the variable's type is T and B is the type you want to assign from):
T& T::operator= ( const B& s );
Please note that this is a class member function. Also note that you will probably want some way of reversing this manipulation if you want to use it liberally, such as a conversion operator on T:
T::operator B () const;
C++ is powerful enough to let you make an object generally weakly typed, but if you want to treat it purely weakly typed, you will want to make a single variable type that can be used as any primitive, and use only functions that take a pointer to void.
Believe me, it is a lot easier to use strongly typed programming when it is available.
I personally prefer strong typing, because I don't need to worry about the errors that come when I don't know what a variable is meant to do. For example, if I wrote a function to talk to a person - one that used the person's height, weight, name, number of children, and so on - and you gave me a color instead, I would want an error, because no simple algorithm can determine those things for a color.
As far as the pros of weak typing go, you might want to get used to loosely typed programming if you are writing something to run within another program (e.g., a web browser or a UNIX shell). JavaScript and shell script are weakly typed.
I would suggest that assembly language is about the only hardware-level weakly typed language, though the flavors of assembly I've seen attach a size to each variable depending on its allocation, i.e. word, dword, qword.
I hope I gave you a good explanation and did not put any words in your mouth.
Weak types are by their very nature less robust than strong types, because you don't tell the machine exactly what to do - instead the machine has to figure out what you meant. This often works quite adequately, but in general it is not clear what the result should be. What is, for example, a string multiplied by a float?
Do any languages support syntax similar to the above?
Perl allows you to treat some numbers and strings interchangeably. For example, "5" + "1" will give you 6. The problem with this sort of thing in general is that it can be hard to avoid ambiguity: should "5" + 1 be "51" or "6"? Perl gets around this by having a separate operator for string concatenation, and reserving + for numeric addition.
Other languages would have to sort out whether you mean to do a concatenation or an addition, and (if relevant) what type or representation the result will be.
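Python, for instance, resolves the ambiguity by refusing to guess; a short sketch:
# Mixing str and int is a TypeError in Python, so the programmer
# must state which operation is meant.
print(int("5") + 1)    # 6    - numeric addition, conversion made explicit
print("5" + str(1))    # '51' - string concatenation, conversion made explicit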
I did ASP/VBScript coding and worked with legacy code without "Option Strict", which allows weak typing.
It was hell at times, especially in the hands of less experienced programmers. We got all sorts of silly errors that took ages to diagnose.
One of the stupid examples was like this:
'Config
Dim pass
pass = "asdasd"

If NOT pass = Request("p") Then
    Response.Write "login failed"
    Response.End()
End If
So far so good, but if the password is changed to a number, guess what: it won't work any more, because an integer pass is never equal to the string pass coming from the query string. I thought it was supposed to work, but it didn't; I can't remember the exact piece of code.
I hate weak typing. Instead of a stupid debugging session, I'd rather spend a few extra seconds typing the exact type of a variable.
Simply put, in my experience, especially in big projects and especially with inexperienced developers, it's just trouble.
When working in a language which is considered strongly typed, does static code analysis offer anything that dynamic code analysis cannot?
To answer your question, yes, a strongly typed language which does static checking offers benefits.
Why?
As an example, consider a programming language that does static type checking (a functional language like OCaml), versus a language like Python that does dynamic type checking.
Static type checking allows for type safety before the code is ever executed, whereas dynamic type checking only checks for type safety at runtime.
What this means is that if you use the wrong types in a language with static type checking, the error is caught at compile time and the program will not execute at all: all those type errors are caught before the code is run. Only if no problems are encountered will it execute.
In a dynamically typed language, on the other hand, the program will compile and run even if there are unresolved type errors, and if during execution it encounters a type error it cannot resolve, it will throw an exception and quit.
On small programs these don't look like a big difference, but think about it at scale: if your program takes a long time to compute something, and the dynamically typed language only catches the error near the end of the execution, you have just wasted a lot of time and resources, right? (At least, this was the example that helped me understand what static checking offers that dynamic checking does not.)
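A tiny Python sketch of that late failure (the function names are invented for illustration); a static checker would flag the concatenation before anything ran, but the interpreter only trips over it after the expensive work is done:
import time

def expensive_computation():
    time.sleep(2)               # stand-in for hours of real work
    return 42

def report(value):
    # The bug below is only discovered when this line actually executes.
    return "result: " + value   # TypeError: should have been str(value)

answer = expensive_computation()
print(report(answer))           # fails here, after the time has been spent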
In case you're wondering, Java is a strongly typed language. (See also the SO Q&A on whether C is strongly typed or not.)
There are subtle differences between a strongly/weakly typed language and a static/dynamic one. Strong/weak refers to how strict a language is with its types, whereas static/dynamic refers to when type checking happens (compile time or runtime).
Hopefully that answers your question!
Some references:
(yes, Wikipedia is not a reference, but it gives the best examples for this case, IMO)
Static type checking
Dynamic Programming Languages
Why Python is dynamic but also strongly typed
The browser-based software StudyTRAX (http://wiki.studytrax.com), used for research data management, allows custom form and form-variable management via JavaScript. However, a StudyTRAX "variable" (essentially a representation of both an element of a form, HTML properties included, and its corresponding parameter, with some data typing, etc.) must be referred to as #<varname>, while regular JavaScript variables are just <varname>.
Is this sort of thing done to make parsing easier, or is it just to distinguish between the two so that researchers who aren't so technologically inclined won't have as much trouble figuring out what they're doing? Given the nature of JavaScript, I would think the StudyTRAX "variables" are just regular JavaScript objects defined in such a way as to make form design and customization simpler, so the latter would make more sense - but am I wrong?
Also, I know there are other programming languages that require specific variable prefixes (though I can't think of any off the top of my head at the moment); what is/was the usual reasoning for that choice in language design?
Two-part answer. StudyTRAX is almost certainly using a preprocessor to do some magic. JavaScript makes this relatively easy, though not as easy as a Lisp would: you still need to parse the code. By prefixing, the parser can ignore a lot of the complicated syntax of JavaScript and get to the good part without needing a "picture perfect" compiler. Actually, a lot of templating systems do this. It is essentially an implementation of Lisp's quasi-quote (see Greenspun's Tenth Rule).
As for prefixes in general, the best way to understand them is to try to write a parser for a language without them. For very dynamic and pure languages like Lisp and JavaScript, where everything is a list / object, it is not too bad. But when you get to languages where methods are distinct from objects, or where functions are not first class, the parser has to keep asking itself what kind of thing "foo" refers to. An annoying example from Ruby: an unprefixed identifier is either a local variable or a method implicitly called on self. In Rails there are a few functions implemented with method_missing. Person.find_first_by_rank works fine, but
class Person < ActiveRecord::Base
def promotion(name)
p = find_first_by_rank
[...]
end
end
gives an error, because find_first_by_rank looks like it might be a local variable and Ruby is scared to call method_missing on something that might just be a misspelled local variable.
Now imagine trying to distinguish between instance variables (prefix @), class variables (prefix @@), global variables (prefix $), constants (first letter capitalized), method names, and local variables (no prefix, lower case) by context alone.
(From a compiler & language hobbyist designer.)
Your question is mostly specific to the StudyTRAX software.
In the early days of programming, variables in BASIC used suffixes such as $ (for strings, "a$") to distinguish them from numeric values. Today, some programming languages such as PHP prefix variables with "$". FORTRAN implicitly typed variables starting with I through N as integers, and those starting with other letters as floats.
Transforming, and later executing, some code is a complex task; that's why many language designers use shortcuts like adding prefixes or suffixes to programming languages.
Many colleges and universities have specialized courses on transforming code from a programming language into something the computer can run - "Compilers", "Automata", "Language Design" - because it's not an easy task.
Perl requires different variable prefixes, depending on the type of data:
$scalar = 4.2;
@array = (1, 4, 9, 16);
%map = ("foo" => 42, "bar" => 17, "baz" => 137);
As I understand it, this is so the reader can immediately identify what kind of object they're dealing with. It's not a matter of whether the reader is technologically inclined or not: if you reduce the programmer's cognitive load, he can use his brainpower for more important things than figuring out fiddly syntactic details.
Whether Perl's design is successful in this respect is another question, but I believe that's the reasoning behind the feature.
When writing interpreted languages, is it faster to have weak typing or strong typing?
I was wondering this because the faster dynamically typed interpreted languages out there (Lua, JavaScript) - and in fact most interpreted languages - use weak typing.
But on the other hand strong typing gives guarantees weak typing does not, so, are optimization techniques possible with one that aren't possible with the other?
By strongly typed I mean no implicit conversions between types. For example, this would be illegal in a strongly typed language but (possibly) legal in a weakly typed one: "5" * 2 == 10. JavaScript especially is notorious for these type conversions.
It seems to me that the question is going to be hard to answer with explicit examples because of the lack of "strongly typed interpreted languages" (using the definitions I understand from the question comments).
I cannot think of any language that is interpreted and does not have implicit conversions, and I think this is for two reasons:
Interpreted languages tend not to be statically typed. I think this is because if you are going to implement a statically typed language then, historically, compilation is relatively easy and gives you a significant performance advantage.
If a language is not statically typed, then it is forced into having implicit conversions. The alternative would make life too hard for the programmer (they would have to keep track of types, invisible in the source, to avoid runtime errors).
So, in practice, all interpreted languages are weakly typed. But the question of a performance increase or decrease implies a comparison with some that are not - at least, it does if we want to get into a discussion of different, existing implementation strategies.
Now you might reply, "well, imagine one". OK. Then you are asking for the performance difference between code that detects the need for a conversion at runtime and code where the programmer has explicitly added the conversion - that is, between dynamically detecting the need for a conversion and calling an explicit function specified by the programmer.
On the face of it, detection is always going to add some overhead (in a [late-]compiled language that can be ameliorated by a JIT, but you are asking about interpreters). But if you want fail-fast behaviour (type errors), then even the explicit conversion has to check types, so in practice I imagine the difference is relatively small.
And this feeds back to the original point: since the performance cost of weak typing is low (given all the other constraints/assumptions in the question), and the usability costs of the alternative are high, most (all?) interpreted languages support implicit conversion.
[Sorry if I am still not understanding; I am worried I am missing something.]
[Edit: maybe a better way of asking the same(?) thing would be "what are the comparative advantages/disadvantages of the various ways that dynamic (late-binding?) languages handle type conversion?" I think there you could argue that Python's approach is particularly powerful (expressive), while having costs similar to other interpreted languages - and the question avoids having to argue whether Python or any other language is "weakly typed" or not.]
With strongly typed I mean no implicit conversions between types.
"5" * 2 == 10
The problem is that "weak typing" is not a well-defined term, since there are two very different ways such "implicit conversions" can happen, which have pretty much the opposite effect on performance:
The "scripting language way": values have a runtime type and the language implicitly applies semantic rules to convert between types (such as formatting a binary number as a decimal string) when an operation calls for the different type. This will tend to decrease performance since it A) requires there to be type information at runtime and b) requires that this information be checked. Both of these requirements introduce overhead.
The "C way": at runtime, it's all just bytes. If you can convince the compiler to apply an operation that takes a 4 byte integer on a string, then depending on how exactly you do it, either the first 4 bytes of that string will simply be treated as if they were a (probably very large) integer, or you get a buffer overrun. Or demons flying out of your nose. This method requires no overhead and leads to very fast performance (and very spectacular crashes).
My first programming experiences were with the BASIC family (MSX BASIC, QBasic, VB). None of these is case-sensitive. Now, it might be because of these first experiences, but I've never grasped the benefit of a language being case-sensitive. On the contrary, I think it is a source of unneeded overhead and bugs, and it still annoys me when I use e.g. Java or C.
Now, I just read about Clojure (a Lisp dialect) and noticed - to my surprise - that one of its differences from Lisp is case-sensitivity.
So: what is actually the benefit (to the programmer) of having a case-sensitive language?
The only things I can think of are:
double the number of symbols
visual feedback and easier reading for complex variables using techniques like CamelCase, e.g. HopCount
However, the first argument doesn't hold, because it is a major source of bugs (it's bad practice to use hopcount and HopCount in the same method).
The second argument doesn't hold either, as a decent IDE can provide this in another way. A good example is the VBA IDE, which has a very good approach: the language is case-insensitive, but as soon as you type a variable it changes it to the case used in its definition. For example, if you defined Dim thisIsMyVariable As String, it will change any occurrence of thisismyvariable into thisIsMyVariable. That gives the programmer an immediate clue that the variable was typed correctly (because it changed appearance).
Edit: added ... benefit to the programmer ...
One point is, like you said, visual aid. Most programming languages (and even frameworks) have conventions on how to capitalize variables, names, etc.
Also, it enforces using uniform names everywhere, so you don't have a mess with the same variable referred to as "var", "Var" or even "VaR".
I can't remember ever having bugs related to capitalization, so that point seems kind of contrived to me.
Using two variables of the same name but different capitalization sounds to me like a conscious attempt to shoot yourself in the foot. Different capitalization conventions almost everywhere signify objects of completely different types (classes, variables, methods and so on), so it's pretty hard to make such a mistake given the completely different semantics.
I'd like to think of it in this way: what do we gain by NOT having case-sensitivity?
We introduce ambiguity, we encourage sloppiness and poor style.
This is a slightly subjective matter of course.
Many naming conventions demand that symbols denoting objects from different semantic classes (types, functions, variables) follow their own casing rules. In Java, for example, type names always begin with an upper-case letter, while variables, member function names, etc. begin with a lower-case letter. This effectively puts type names in a different namespace and gives a visual clue as to what a statement actually means.
// declare and initialize a new Point
Point point = new Point();

// call a static member function of type Point
Point.fooBar();

// call a member function on the instance point
point.moveTo(x, y);
What features do you wish were in common languages? More precisely, I mean features which generally don't exist at all but would be nice to see, rather than, "I wish dynamic typing was popular."
I've often thought that "observable" would make a great field modifier (like public, private, static, etc.)
GameState {
observable int CurrentScore;
}
Then, other classes could declare an observer of that property:
ScoreDisplay {
observe GameState.CurrentScore(int oldValue, int newValue) {
...do stuff...
}
}
The compiler would wrap all access to the CurrentScore property with notification code, and observers would be notified immediately upon the value's modification.
Sure you can do the same thing in most programming languages with event listeners and property change handlers, but it's a huge pain in the ass and requires a lot of piecemeal plumbing, especially if you're not the author of the class whose values you want to observe. In which case, you usually have to write a wrapper subclass, delegating all operations to the original object and sending change events from mutator methods. Why can't the compiler generate all that dumb boilerplate code?
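For a sense of how much plumbing that is, here is roughly the hand-rolled version in Python (all names invented for illustration); the proposed observable modifier would generate all of this:
class GameState:
    def __init__(self):
        self._current_score = 0
        self._observers = []        # callbacks taking (old_value, new_value)

    def observe_current_score(self, callback):
        self._observers.append(callback)

    @property
    def current_score(self):
        return self._current_score

    @current_score.setter
    def current_score(self, new_value):
        old_value = self._current_score
        self._current_score = new_value
        for callback in self._observers:    # notify on every modification
            callback(old_value, new_value)

state = GameState()
state.observe_current_score(lambda old, new: print(f"score: {old} -> {new}"))
state.current_score = 10                    # prints: score: 0 -> 10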
I guess the most obvious answer is Lisp-like macros. Being able to process your code with your code is wonderfully "meta" and allows some pretty impressive features to be developed from (almost) scratch.
A close second is double or multiple dispatch in languages like C++. I would love it if polymorphism could extend to the parameters of a virtual function.
I'd love for more languages to have a type system like Haskell. Haskell utilizes a really awesome type inference system, so you almost never have to declare types, yet it's still a strongly typed language.
I also really like the way you declare new types in Haskell. I think it's a lot nicer than, e.g., object-oriented systems. For example, to declare a binary tree in Haskell, I could do something like:
data Tree a = Node a (Tree a) (Tree a) | Empty
So the composite data types are more like algebraic types than objects. I think it makes reasoning about the program a lot easier.
Plus, mixing in type classes is a lot nicer. A type class is a set of operations that a type supports -- sort of like an interface in a language like Java, but more like a mixin in a language like Ruby, I guess. It's kind of cool.
Ideally, I'd like to see a language like Python, but with data types and type classes like Haskell instead of objects.
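Python can at least approximate that algebraic flavor nowadays with frozen dataclasses and unions; a minimal sketch of the Tree above:
from dataclasses import dataclass
from typing import Union

# A rough Python analogue of the Haskell declaration:
#   data Tree a = Node a (Tree a) (Tree a) | Empty
@dataclass(frozen=True)
class Empty:
    pass

@dataclass(frozen=True)
class Node:
    value: int
    left: "Tree"
    right: "Tree"

Tree = Union[Node, Empty]    # the "algebraic" sum of the two cases

tree: Tree = Node(2, Node(1, Empty(), Empty()), Node(3, Empty(), Empty()))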
I'm a big fan of closures / anonymous functions.
my $y = "world";
my $x = sub { print @_, $y };
&$x( 'hello' );    # helloworld
and
my $adder = sub {
    my $reg = $_[0];
    return sub { return $reg + $_[0]; };
};
print $adder->(4)->(3);    # 7
I just wish they were more commonplace.
Things from Lisp I miss in other languages:
Multiple return values (sketched after this list)
required, keyword, optional, and rest parameters (freely mixable) for functions
functions as first class objects (becoming more common nowadays)
tail call optimization
macros that operate on the language, not on the text
consistent syntax
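The first and third items at least translate directly into Python; a quick sketch:
def min_max(values):
    # "Multiple return values" via a tuple that unpacks at the call site.
    return min(values), max(values)

lo, hi = min_max([3, 1, 4, 1, 5])
print(lo, hi)                            # 1 5

# Functions as first-class objects: passed around like any other value.
def apply_twice(f, x):
    return f(f(x))

print(apply_twice(lambda n: n * 2, 3))   # 12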
To start things off, I wish the standard for strings was to use a prefix if you wanted escape codes, rather than escapes being the default. E.g., in C# you can prefix with @ for a verbatim string; similarly, Python has the r prefix for raw strings. I'd rather use @/r when I don't want a raw string and do need escape codes.
More powerful templates that are actually designed to be used for metaprogramming, rather than C++ templates that are really designed for relatively simple generics and are Turing-complete almost by accident. The D programming language has these, but it's not very mainstream yet.
An immutable keyword. Yes, you can make immutable objects, but that's a lot of pain in most languages.
class JustAClass
{
    private readonly int id;
    private readonly MyClass obj;

    public MyClass Obj
    {
        get
        {
            return obj;
        }
    }
}
Apparently it seems JustAClass is an immutable class, but that's not the case: another object holding the same reference can still modify the obj object.
So it would be better to introduce a new immutable keyword; when immutable is used, that object would be treated as immutable.
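Some languages are edging toward this. As a rough analogue, Python's frozen dataclasses bundle the intent into a single flag, though the freeze is shallow, so reference-typed fields still need to be immutable themselves:
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)    # fields become read-only after construction
class JustAClass:
    id: int
    obj: tuple             # the freeze is shallow: pick immutable field types too

a = JustAClass(1, (2, 3))
try:
    a.id = 5
except FrozenInstanceError as err:
    print("immutable:", err)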
I like some of the array manipulation capabilities found in the Ruby language. I wish we had some of that built into .Net and Java. Of course, you can always create such a library, but it would be nice not to have to do that!
Also, static indexers are awesome when you need them.
Type inference. It's slowly making its way into the mainstream languages, but it's still not good enough. F# is the gold standard here.
I wish there was a self-reversing assignment operator, which rolled back when out of scope. This would be to replace:
type datafoobak = item.datafoobak
item.datafoobak = 'tootle'
item.handledata()
item.datafoobak = datafoobak
with this
item.datafoobak #=# 'tootle'
item.handledata()
One could explicitly roll back such changes, but they'd also roll back once out of scope. This kind of feature would perhaps be a bit error-prone, but it would also make for much cleaner code in some cases. Some sort of shallow clone might be a more effective way to do this:
itemclone = item.shallowclone
itemclone.datafoobak='tootle'
itemclone.handledata()
However, shallow clones might have issues if their functions modified their internal data... though so would reversible assignments.
I'd like to see single-method and single-operator interfaces:
interface Addable<T> --> HasOperator( T = T + T)
interface Splittable<T> --> HasMethod( T[] = T.Split(T) )
...or something like that...
I envision it as being a typesafe implementation of duck-typing. The interfaces wouldn't be guarantees provided by the original class author. They'd be assertions made by a consumer of a third-party API, to provide limited type-safety in cases where the original authors hadn't anticipated.
(A good example of this in practice would be the INumeric interface that people have been clamoring for in C# since the dawn of time.)
In a duck-typed language like Ruby, you can call any method you want, and you won't know until runtime whether the operation is supported, because the method might not exist.
I'd like to be able to make small guarantees about type safety, so that I can polymorphically call methods on heterogeneous objects, as long as all of those objects have the method or operator that I want to invoke.
And I should be able to verify the existence of the methods/operators I want to call at compile time. Waiting until runtime is for suckers :o)
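Python's typing.Protocol has since landed very close to this wish: the consumer of an API writes down the shape it needs, and a static checker such as mypy verifies it before runtime while the code stays duck-typed. A minimal sketch:
from typing import Protocol

class Splittable(Protocol):
    # An assertion made by the consumer of an API,
    # not a guarantee provided by the original class author.
    def split(self, sep: str) -> list: ...

def first_field(value: Splittable) -> object:
    return value.split(",")[0]

print(first_field("a,b,c"))    # str satisfies Splittable structurally -> 'a'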
Lisp style macros.
Multiple dispatch.
Tail call optimization.
First class continuations.
Call me silly, but I don't think every feature belongs in every language. It's the "jack of all trades, master of none" syndrome. I like having a variety of tools available, each one of which is the best it can be for a particular task.
Functional operations like map, flatMap, foldLeft, foldRight, and so on. A type system like Scala's (builder safety). Compilers that remove high-level libraries at compile time, while still having them available if you run in "interpreted" or "less-compiled" mode (speed... sometimes you need it).
There are several good answers here, but I will add some:
1 - The ability to get a string representation of the current and calling code, so that I could easily output a variable name and its value, or print the name of the current class or function, or a stack trace, at any time.
2 - Pipes would be nice too. This feature is common in shells but uncommon in other types of languages.
3 - The ability to easily delegate any number of methods to another class. This looks like inheritance, but even in the presence of inheritance, once in a while we need some kind of wrapper or stub that cannot be implemented as a child class, and forwarding all the methods requires a lot of boilerplate code (see the sketch below).
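Point 3 is one place a dynamic language can already cut the boilerplate; in Python, for example, a single __getattr__ hook does all the forwarding. A minimal sketch:
class Stub:
    """A wrapper that forwards anything it doesn't define to the wrapped object."""

    def __init__(self, wrapped):
        self._wrapped = wrapped

    def __getattr__(self, name):
        # Called only for attributes not found on the stub itself, so this
        # one hook replaces dozens of hand-written forwarding methods.
        return getattr(self._wrapped, name)

items = Stub([])
items.append(3)           # forwarded to the underlying list
print(items._wrapped)     # [3]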
I'd like a language that was much more restrictive and was designed around producing good, maintainable code without any trickiness. Also, it should be designed to give the compiler the ability to check as much as possible at compile time.
Start with a newish, VM-based, heavily OO language.
Remove complexities like Operator Overloading and multiple inheritance if they exist.
Force all non-final variables to Private.
Members should default to "Final" but should have a "Variable" tag to override it. (This may require built-in support for the builder pattern to be fully effective).
Variables should not allow a "Null" value by default, but variables and parameters should have a "nullable" tag that indicates that null is acceptable for that variable.
It would also be nice to be able to avoid some common questionable patterns:
Some built-in way to simplify IoC/DI to eliminate singletons.
Eliminate Java-style checked exceptions so people stop putting in empty catches.
Finally, focus on code readability:
Named Parameters
Remove the ability to create methods more than, say, 100 lines long.
Add some complexity analysis to help detect complicated methods and classes.
I'm sure I haven't named a tenth of the possibilities, but basically I'm talking about something that compiles to the same bytecode as C# or Java but is so restrictive that a programmer can hardly help but write good code.
And yes, I know there are lint-type tools that will do some of this, but I've never seen them on any project I've worked on (and they wouldn't physically run on the code I'm working on now, for instance), so they aren't being very helpful. I would love to see a compile actually fail when you type in a 101-line method...