Why is the loop statement "continue" not called "next" in many programming languages? - language-design

Why, in many programming languages, is the loop statement called "continue" rather than "next"?
"continue" makes no sense to me and doesn't match its actual functionality. In fact, it discontinues the current loop iteration.

I guess the answer is that continue comes from C (or even its predecessors). Many languages are of the C family or at least influenced by C.
As for it not making sense, I never had any trouble with it. I didn't meet a programming language that used next before I met C (so your mileage may vary). As I see it, continue continues looping (and break discontinues looping).
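To make those two readings concrete, here is a small sketch in Python (which inherits both keywords from C); whether you read continue as "carry on looping" or "skip to the next iteration", the observable behavior is the same:

# continue skips the rest of the current iteration but keeps looping;
# break stops the loop entirely.
for n in range(5):
    if n == 2:
        continue   # 2 is skipped; the loop carries on with 3
    if n == 4:
        break      # the loop stops entirely before 4 is printed
    print(n)       # prints 0, 1, 3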

I agree with several posts that "continue" was inherited from C. K&R were creating a modern programming language while keeping much of the efficiency and flexibility of assembly language. I think from the assembly language perspective "continue" makes more sense than "next". (Note: It has been several decades since I've even seen any assembly language code.)

Related

Are there any interpreted languages in which you can dynamically modify the interpreter?

I've been thinking about this writing (apparently) by Mark Twain in which he starts off writing in English but throughout the text makes changes to the rules of spelling so that by the end he ends up with something probably best described as pseudo-German.
This made me wonder if there is an interpreter for some established language in which one has access to the interpreter itself, so that you can change the syntax and structure of the language as you go along. For example, if is usually a keyword; is there a language that would let you change or redefine it on the fly? Imagine beginning a console session in one language and, by the end, working in another.
Clearly one could write an interpreter and run it, and perhaps there is no concrete distinction between doing this and modifying the interpreter. I'm not sure about this. Perhaps there are limits to the modifications you can make dynamically to any given interpreter?
These more open questions aside, I would simply like to know if there are any known interpreters that allow this at all? Or, perhaps, this ability is just a matter of extent and my question is badly posed.
There are certainly languages in which this kind of self-modifying behavior at the level of the language syntax itself is possible. Lisp programs can contain macros, which allow among other things the creation of new control constructs on the fly, to the extent that two Lisp programs that depend on extensive macro programming can look almost as if they are written in two different languages. Forth is somewhat similar in that a Forth interpreter provides a core set of just a dozen or so primitive operations on which a program must be built in the language of the problem domain (frequently some kind of real-world interaction that must be done precisely and programmatically, such as industrial robotics). A Forth programmer creates an interpreter that understands a language specific to the problem he or she is trying to solve, then writes higher-level programs in that language.
In general the common idea here is that of languages or systems that treat code and data as equivalent and give the user just as much power to modify one as the other. Every Lisp program is a Lisp data structure, for example. This is in contrast to a language such as Java, in which a sharp distinction is made between the program code and the data that it manipulates.
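Python is not homoiconic, but a rough sketch using nothing more than compile and exec can still illustrate what treating code as data buys you; this is only an approximation of what Lisp macros give you natively:

# A rough sketch of the code-as-data idea: here the "data" being
# manipulated is just source text.
source = "def greet(name):\n    return 'hello, ' + name\n"
namespace = {}
exec(compile(source, "<generated>", "exec"), namespace)  # the string is the program
print(namespace["greet"]("world"))    # hello, world

# Because the source is ordinary data, the running program can rewrite it
# and install the new definition on the fly.
source = source.replace("hello", "goodbye")
exec(compile(source, "<generated>", "exec"), namespace)
print(namespace["greet"]("world"))    # goodbye, world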
A related subject is that of self-modifying low-level code, which was a fairly common technique among assembly-language programmers in the days of minicomputers with complex instruction sets, and which spilled over somewhat into the early 8-bit and 16-bit microcomputer worlds. In this programming idiom, for purposes of speed or memory savings, a program would be written with the "awareness" of the location where its compiled or interpreted instructions would be stored in memory, and could alter in place the actual machine-level instructions byte by byte to affect its behavior on the fly.
Forth is the most obvious thing I can think of. It's concatenative and stack based, with the fundamental atom being a word. So you write a stream of words and they are performed in the order in which they're written with the stack being manipulated explicitly to effect parameter passing, results, etc. So a simple Forth program might look like:
6 3 + .
That is four words: 6, 3, + and "." (the full stop). The two numbers push their values onto the stack. The plus symbol pops the last two items from the stack, adds them and pushes the result. The full stop outputs whatever is at the top of the stack.
A fundamental part of Forth is that you define your own words. Since all words are first-class members of the runtime, in effect you build an application-specific grammar. Having defined the relevant words you might end up with code like:
red circle draw
That would draw a red circle.
Forth interprets each sequence of words when it encounters them. However it distinguishes between compile-time and ordinary words. Compile-time words do things like have a sequence of words compiled and stored as a new word. So that's the equivalent of defining subroutines in a classic procedural language. They're also the means by which control structures are implemented. But you can also define your own compile-time words.
As a net result a Forth program usually defines its entire grammar, including relevant control words.
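To make that model concrete, here is a toy sketch in Python of an interpreter in that style. It is a deliberate simplification, not real Forth, but it runs the 6 3 + . example and shows how : ... ; lets a program define its own words:

# A toy sketch of a Forth-style word interpreter (a rough approximation of
# the model described above, not real Forth).
def interpret(program):
    stack = []
    words = {}                                # user-defined words

    def execute(tokens):
        i = 0
        while i < len(tokens):
            tok = tokens[i]
            if tok == ":":                    # compile-time behaviour: define a new word
                name = tokens[i + 1]
                end = tokens.index(";", i)
                words[name] = tokens[i + 2:end]
                i = end
            elif tok in words:                # a user-defined word runs on the same stack
                execute(words[tok])
            elif tok == "+":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif tok == ".":                  # print whatever is on top of the stack
                print(stack.pop())
            else:
                stack.append(int(tok))        # everything else is treated as a number
            i += 1

    execute(program.split())

interpret("6 3 + .")                          # prints 9
interpret(": add-ten 10 + ;  7 add-ten .")    # defining and using a new word: prints 17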
You can read a basic introduction here.
Prolog is a homoiconic language, allowing meta-interpreters (MIs) to be written in a variety of ways. A meta-interpreter - interpreting the interpreter - is a common and useful native construct in Prolog.
See this page for an introduction to the topic. An interesting and practical technique illustrated there is partial execution:
The overhead incurred by implementing these things using MIs can be compiled away using partial evaluation techniques.

Why do most object oriented languages not support coroutines?

I'm currently preparing for an exam. One of the questions I found in an old exam is:
"Why do most object oriented languages not support coroutines? (Hint: It's not because they support threads)"
The problem is, that I can't find a good answer. Of course you don't need coroutines if you have object orientation, but it would still be very useful to have them in some cases.
I think it is for ideological reasons. In OOP, the main entity that represents state is the object; nothing else should have state. Coroutines become one more carrier of state, and that slightly contradicts OOP. C# has a minor form of coroutine, the yield statement, but it is purely a feature of C#, not of the CLR or .NET itself: when compiled, all the state variables become fields of a hidden class, because nothing except an object can have state in .NET.
The purpose of a question like this in an exam is not to see if you know the answer. (There doesn't need to be a right answer.) Rather, it is to determine whether the student has developed the ability to think and reason within the subject domain.
If I were to answer this question I would observe: a) An actor-model is very much a merging of object-orientation with coroutines, in the sense that actors (agents) could receive and process messages concurrently. b) The real reason coroutines are not often in OOP languages is the same as the reason coroutines are not often in any mainstream language, viz. coroutines are difficult to implement in the presence of a conventional stack.
My response is almost certainly too late to help the original poster. I thought I'd respond anyway, as coroutines and other forms of concurrency are currently a popular topic.
This is just a guess:
A coroutine uses the state of the subroutine to alter its return value, whereas a method on an object can use the object state to alter its return value.
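A small Python sketch of that contrast (just an illustration of where the state lives, not a claim about any particular language's design): the generator keeps its state in the suspended function itself, while the OO version keeps the equivalent state in an object.

# Coroutine-style: the counter's state lives in the suspended function itself.
def counter():
    n = 0
    while True:
        n += 1
        yield n                  # suspend here; n survives between calls

# OO-style: the equivalent state lives in an object instead.
class Counter:
    def __init__(self):
        self.n = 0
    def next_value(self):
        self.n += 1
        return self.n

gen = counter()
obj = Counter()
print(next(gen), next(gen))                  # 1 2
print(obj.next_value(), obj.next_value())    # 1 2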
This sounds to me like a lousy question for an exam -- it's highly subjective, and there is no one right answer or even best answer. To make a long story short, I don't think anybody can do much more than guess at this.
My own guess is that it's mostly because the languages that have included coroutines (e.g., Concurrent Pascal and Concurrent C, which actually supported the C++ of the time; Ada tasks are sort of similar as well) have never become particularly popular, even though the designs are technically quite good. To a degree, that's probably a matter of timing as much as anything else. By the time multiprocessor computers became available to make parallel computing a real goal for most programmers, these languages were already mostly forgotten.
From a technical viewpoint, I'm not sure anybody has much new to add -- mostly what's needed is a good "sales pitch" to make Concurrent C or Ada 95 (etc.) sound like something new and innovative enough to get people to at least try them out. Of course, implementations from decades ago were often single-threaded under the hood -- that would need updating. I'm sure Ada 95 implementations, for example, have been updated so they can use multiple cores quite nicely. That doesn't seem to have done a lot for its popularity, though (e.g., here on SO, the ada tag has currently been used only 90 times).
The idea of an object is to isolate state. Everything you need should be present in that object. A coroutine will 'break' this idea because now an object is no longer an isolated state, but rather it depends on another object.
Well, actually both Simula 67 and Smalltalk 80 - the definitive and ultimate OO languages - did support coroutines perfectly. So I doubt the idea of coroutines is fundamentally incompatible with OOP per se. It's more likely to be a coincidence, the sort of question like "why is the cool thing X not supported in mainstream languages/operating systems/etc.?"

Pros and Cons of VB & VBA?

On another programming related website, I saw this line in someone's signature. This is NOT the first time I've seen such sentiments, although this is the harshest:
"People who work in VB or any variant
thereof are not programmers, they are
circus chimps throwing feces into an
IDE..."
VBA is my bread and butter and I can automate quite a bit of stuff with it. Yes, I know it lacks polish and some functionality, but why so much negativity toward it? On the flip side, what do other languages have that VB doesn't?
VB6, VBScript, and VBA have this reputation because they just aren't industrial-strength languages. Notably:
No OOP. Sure, you have classes and modules, but no inheritance. VB isn't a low-level language; it needs real objects.
No first-class functions, so you can't even simulate OOP or polymorphism.
Lack of a well-developed class library. VB6 has a small library of built-in functions, and almost all other functionality is delegated to Windows calls or (usually pricey) third-party components.
Lousy error-handling. ON ERROR RESUME NEXT is a pox on the planet.
Although it's not the fault of the language, VBA earned a bad reputation by association with MS Access.
Of course, VB wasn't really intended to be an industrial strength language, so maybe nothing mentioned above is really proper criticism of the language at all. Fortunately VB.NET and the latest versions of VBA fix everything above, so VB.NET is on par with any other "serious" language in the marketplace.
[anecdote]
In defense of VB, I find most people criticize the language just to go along with the status quo, not because they've actually used it.
A few years ago, in a chatroom, I ran across a young neophyte railing against a VB6 developer for using such a crappy language. I innocently asked "what's wrong with VB?"
The first thing he said was "Because it's a WINDOWS language!" So I pointed out that Borland Delphi is a Windows-only language*, but I've never heard anyone malign it for that reason. (* There was a product called Kylix which cross-compiles to Linux, but it's expensive, buggy, and discontinued. It's been a while since I've used Delphi, but last I'd heard, it's still not ready for Linux.)
So, he said "It has a HORRIBLE SYNTAX!" Is that really the reason people hate this language? I'd say Perl, Lisp, and C++ are worse on the eyes than VB.
Next, he says "It's too easy to learn!" Well, I'd consider that a point in favor of the language. I'll never write a GUI by hand if I have a drag-and-drop designer at my disposal. What else you got?
So finally, grasping at straws, he comments "It has... no string manipulation functions". Left, Right, Mid, Replace, InStr and Trim. QED noob.
Interestingly, VB has features found in some "hacker" languages, namely variant datatypes and duck typing. Compiled code performed reasonably well, interop between COM and native Windows DLLs was easy, and the GUI editor basically set the bar for all future RAD development.
[/anecdote]
Read some of Joel Spolsky's articles and you'll feel better about yourself. From his article Working on CityDesk, Part Three:
Visual Basic is an extremely productive way to write code, especially GUI code. Want bold text on a dialog box? It's one click in VB. Now try doing it in MFC. You have to create a subclassed control, it's a big mess, you have to know all about LOGFONTS and Windows window subclassing and a bunch of other things and you need about three lines of code once you have the magic class.
But many VB programs are spaghetti, either because they're done as quick and dirty one-offs, or because they're written by hack programmers without training in object oriented programming, or even structured programming.
What I wondered was, what happens if you take top-notch C++ programmers who dream in pointers, and let them code in VB. What I discovered at Fog Creek was that they become super-efficient coding machines. The code looks pretty good, it's object-oriented and robust, but you don't waste time using tools that are at a level lower than you need. I've spent years writing code for C++/MFC and years writing code in Visual Basic, and let me tell you, VB is just much, much more productive.
This simplicity attracts a lot of new programmers. Saying there are a lot of bad programmers using Visual Basic does not mean Visual Basic is a bad language; it simply means that Visual Basic is accessible to bad programmers (AKA new programmers).
I work in a place where all the code is C#, not VB .NET. One developer wrote most of the code. You know how he achieved this feat? Easy: He copied-and-pasted all over the place. A given method might have anywhere from a few to hundreds of copies throughout the system.
Good developers can be good in any language. Crappy developers can be crappy in any language.
Also, just to note: VB, VBA, and VB.NET are three different languages, even though they share some similar syntax. There's no real difference between VB.NET and C# (besides the keywords/syntax), so we shouldn't lump VB (6 and before) and VBA in with VB.NET.
The real problem that many programmers have with "VB" (just say all 3 of the languages) is really more about the people using it. Most of the time "VB" programmers have less formal education and write sloppier code. That's not true for all "VB" programmers (and that doesn't mean there's not sloppy code written in C++, Java, C#, etc.). It's just the typical expectation that someone who doesn't use VB has when they hear about VB programs.
Meh, these are just religious bigots.
There is no one true language, and most experienced folks not only know that, but instantly recognize these statements as a glaring sign of inexperience.
Average developer quality seems to be inversely proportional to popularity of language * ease of use of language. VB is very easy, and is/was widely used.
This is because
A) there's a demand for coders in popular languages, so every employer has to either lower their standards, raise their pay or go without developers.
B) people without a clue can still appear moderately productive in easy to use languages. There are enough libraries and GUI tools that they can slap together something that looks useful, even if it's complete garbage under the hood.
There's nothing inherently wrong with VB when used in the domains it was intended for, by people who know what they're doing. The same is true for almost any tool/language.
I dislike the language, but that's mostly because I worked with a vb-like language which stripped out absolutely anything that might be considered an advantage and forced "best practices" that really didn't make sense.
The biggest problem I have with VB is that there is an almost direct track from clueless non-programmer -> part time Excel/Access scripter -> VBA "guru" -> VB "programmer" -> lead programmer on the most important project in the company.
Honestly I wouldn't have believed it if I didn't see someone follow that path right in front of my eyes. I even tried to mentor the guy so that he would be familiar with OOP, exception based error handling, etc. but he just dug his head in the sand and wrote everything procedurally because that had always worked for him.
I have had a chance to work with VB.NET, and as long as I treated it like an object-oriented .NET language first and VB second, it wasn't so bad. It would never be my first choice for a new project, though.
I'm visiting SO in between writing VBScript code and that statement really rings true to me -- I am currently a circus chimp. If you don't know anything else, VB and its variants seem like great languages.
In my opinion, the reason for the negativity is one basic statement -- On Error Resume Next. This makes bad code a feature of the language. If it didn't have this, it wouldn't have near the bad publicity...
Most every developer I know has worked at one point or another with a VB developer, or a developer with a heavy VB background that just didn't have a clue. Unfortunately, as with most things, all we remember are the bad things about something. So we relate VB to bad programming.
It is certainly not true that all VB programmers are poor developers, but when everybody has stories about "this one old VB guy I used to work with," the stereotype spreads.

How can a compiler that recognizes iterators be implemented?

I have been using iterators for a while and I love them.
But although I have thought hard about it, I could not figure out how "a compiler that recognizes the iterators" could be implemented. I have also researched it, but could not find any resource explaining the situation in a compiler-design context.
To elaborate, most of the articles about Iterators imply there is some sort of 'magic' implementing the desired behaviour. They suggest the compiler maintains a state machine in order to follow where the execution is (where the last 'yield return' is seen). I am especially interested in this property of Iterators that enables the lazy evaluation.
By the way, I know what state machines are, have already taken a compiler design course, and have studied the Dragon Book. But apparently, I cannot relate what I have studied to the 'magic' of csc.
Any knowledge or differing thoughts are appreciated.
It's simpler than it seems. The compiler can decompose the iterator function into individual chunks; chunks are divided by yield statements.
The state machine just needs to keep track of which chunk we're currently in and, upon the next invocation of the iterator, jump directly to that chunk. We also need to keep track of all local variables (of course).
Then, we need to consider a few special cases, in particular loops containing yields. Fortunately, IL (but not C# itself) allows goto to jump into loops and resume them.
Notice that there are some very complicated edge cases, e.g. C# doesn't allow yield in finally blocks because it would be very difficult (impossible?) to leave the function upon yield, and later resume the function, perform clean-up, re-throw any exception and preserve the stack trace.
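C# specifics aside, the transformation can be sketched in Python: first the iterator as the programmer writes it, then a hand-written class doing roughly what the compiler-generated state machine does. This is a sketch of the idea only, not the actual output of csc.

# What the programmer writes: the yields divide the body into chunks.
def first_two_then_sum(a, b):
    yield a
    yield b
    yield a + b

# Roughly what the compiler builds instead: a state machine object. The
# local variables (a, b) become fields and the current chunk is tracked
# in self.state.
class FirstTwoThenSum:
    def __init__(self, a, b):
        self.a, self.b = a, b
        self.state = 0
    def __iter__(self):
        return self
    def __next__(self):
        if self.state == 0:
            self.state = 1
            return self.a            # chunk 0 ends at the first yield
        if self.state == 1:
            self.state = 2
            return self.b            # chunk 1 ends at the second yield
        if self.state == 2:
            self.state = 3
            return self.a + self.b   # chunk 2 ends at the third yield
        raise StopIteration          # the body has run to completion

print(list(first_two_then_sum(2, 3)))    # [2, 3, 5]
print(list(FirstTwoThenSum(2, 3)))       # [2, 3, 5]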
Eric Lippert has posted an in-depth description of the process. (Read the articles he has linked to, as well!)
One thing I would try would be to write a short example in C#, compile it, and then use Reflector on it. I think that this "yield return" thing is just syntax sugar, so you should be able to see how the compiler handles it in the output of the disassembler.
But, well, I don't really know much about these things so maybe I'm completely wrong.

What would be the impediments to creating a "Europanto"-type universal scripting language?

After switching back and forth between several scripting languages this week, I found myself thinking how similar they all are. Yet I'm always reaching for Google (or nowadays SO) to remember details like what the local equivalents of "instanceof" and "endswith" are, or the right syntax to declare an interface, or whatever.
This reminded me of the (human) language Europanto. Just pick some vaguely English syntax and some vaguely Romance/Germanic/Slavic vocabulary, and it's all good!
So what would happen if we tried to do the same thing with a scripting language. In the mood for Python-style indented blocks today? Fine! Want to use a prototype object? Ok! Can only remember how to spell the PHP names of some library function? No problem!
Anyway, that's the wild and crazy idea. Since we need a question that admits concrete answers, let's tighten it up like this:
What would be the most significant conflicts in creating a scripting language that permitted all the native syntax and library functions of [Python, Ruby, PHP, Perl, shell, and JavaScript], such that you could freely intermix code blocks and function names between languages?
And let's say that any particular construction should be consistent at the statement level. So we'll allow:
foreach( $foo as $bar )
{
if $foo == 2:
print "hi"
}
but not, say,
foreach( $foo as $bar )
{
if $foo == 2:
print "hi"
endif
end
Conflicts can include: parser ambiguities; name collision; conflicting semantics for objects or functions or closures; etc. I'm guessing that scope will be a ginormous issue, but you tell me.
I'll start this as "community wiki" from the get go, so if you think it's a fun question but want to make it more rigorous, feel free to edit.
I would suggest that the main problem is recognising what the syntax of each statement is supposed to be.
In any case, what is the point? Almost all scripting languages have facilities to do much the same things, which is why people tend to master one that they use consistently, and stick with it.
The main difficulty would be to allow people to maintain it. With a well-defined language you can only print a certain way and do sys.argv a certain way. Once you allow multiple syntaxes, there is no sane way to search for all the uses of sys.argv in the code base you have.
At the syntactical level the only problem I can see would be to detect which block has which syntax, then separate them and parse them with specific parsers. Of course, given very small statements there could be ambiguities as to which language it is; you could argue that it doesn't matter, but it may be the case that in different languages the same string of characters does different things, so this could be a subtle issue.
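As a toy illustration of how quickly naive per-statement detection runs into that subtlety (the heuristics below are hypothetical placeholders, not a real design; real parsers for each language would be needed):

# A toy sketch of per-statement syntax detection in Python.
def guesses(stmt):
    votes = set()
    if "$" in stmt or stmt.lstrip().startswith("foreach("):
        votes.add("php/perl")            # sigils suggest PHP or Perl
    if stmt.rstrip().endswith(":"):
        votes.add("python")              # a colon-terminated block header
    if not votes:
        votes.add("ambiguous")           # e.g. "x = 1" parses in several languages
    return votes

for stmt in ["foreach( $foo as $bar )", "if $foo == 2:", "x = 1"]:
    print(stmt, "->", guesses(stmt))
# "if $foo == 2:" collects two votes at once, which is exactly the subtle issue.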
At the API level you would have lots of different methods of doing the same thing, but in a subtly different way or as a subset of it. So, for example, you might have no way of doing Java's string.startsWith() in, let's say, PHP, so you would do something different; or no way of doing PHP's strstr() (which returns the part of the string from the found needle to the end), and you would implement something different for that, or even think differently about the problem. Then you would have to have all those different API methods of doing the same things, and that would be a huge API to implement, support and (god forbid) learn.
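To see the kind of overlap involved, here are the two behaviours mentioned above written out in Python (a sketch; the combined language would have to carry both, plus every other language's variants):

s = "hello world"

print(s.startswith("hello"))        # Java-style startsWith: a yes/no answer -> True

def strstr_like(haystack, needle):  # PHP-style strstr: the string from the needle onward
    i = haystack.find(needle)
    return haystack[i:] if i >= 0 else None

print(strstr_like(s, "world"))      # "world"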
At the wetware level the code written by others would be totally unreadable unless you know a ton of languages and their subtle differences. I think it is difficult enough to learn a single programming language down to the smallest details, so it is not practical at all to create this kind of frankensteinish beast. I can think of one exception: use as an algorithm description language, which is already how it works in universities all over the world, where a teacher takes some language of his or her liking and makes the code as readable as possible for a human, without needing to implement a parser for it.
As a side note, I think this kind of system could be implemented with the least effort by somehow utilizing .NET's CLR, where you have a ton of different languages each compiling to the same bytecode and accessing the same variables and stuff. All you'd need to do is split the code into clusters of different languages, then compile them separately with their respective compilers, and then just merge the bytecode and somehow make sure they all point to the same variables and functions when the same names are mentioned across the different languages.
I have begun to see that syntax is but one property of a language. And most of them look like C to me. The purpose of a language (object oriented, strong typing, etc) is something else again. It starts to look like syntax is not the most important aspect.
I went and read the wikipedia entry...
Europanto is a linguistic jest presented as a "constructed language" with a hodge-podge vocabulary
"Hodge-podge" sounds like the way Perl has been described to me!
I found a rather detailed discussion of closures in Ruby. It sounds like getting Ruby's behavior to coexist with JavaScript's or Python's would require some kind of ugly disambiguation.
If anybody were to add Perl to the list of languages to be covered, I think its lexical scoping rules would present a related problem?