Apparently it used to be, according to this terrific account by Ed Nather. How about today? That is, is it possible, with enough knowledge of CPU/FPU/GPU/etc. architecture, to write machine code that is more efficient than what would be produced by a mainstream assembler (nasm, GAS, etc.), in any scenario? How about for GPU kernels?
EDIT: "Not constructive"? Please. This question produced #Pointy's answer, which was quite enlightening to anyone not that familiar with how assemblers work. Someone has favorited it. The fact that Pointy is, endearingly, one of the close-voters is a nice touch but hey if it's the best answer it gets accepted.
Two things:
There's no single thing called "assembly language". An assembler is a program that translates a textual encoding of instructions for a particular architecture into a form suitable for execution. Exactly what facilities a particular assembler exposes is up to its designer. Many CPU architectures have several assemblers available.
Because an assembler's job is to provide a "friendly" way for a person to request a precise sequence of machine instructions (and other aspects of a program, such as initialized memory locations, reserved blocks of storage, directives to the runtime executive, etc), if it's possible to produce a program by hand that can't be produced by some particular assembler then that really only means you've got an inadequate assembler. The assembler that Intel developed for the iAPX 86 series (not Microsoft's masm, which was a weak imitator) had a fairly typical macro facility, and it also had a sort of "micro macro" facility that would allow the dictionary of opcode mnemonics (things like MOV, ADD, BNE, etc) to be extended arbitrarily. With an assembler like that, it would clearly be possible to create any piece of code you desired.
The real topic for concern in programming is whether burdening the programmer with the responsibility for choosing a strategy for getting work done by a computer in extreme detail is worthwhile for performance. The question of course has no single answer, because there are many possible situations, many different computing devices, and mostly because things change all the time. In 1959, for example, the computing task of translating a higher-level language like FORTRAN into machine code was itself a significant workload for computers. Understanding of how programming languages should even work was in its infancy.
Today, then, the only reason to know "machine language" (and note that the word "language" isn't really accurate) is to create an instruction sequence when there's no available (or convenient) assembler. That's assuming that explicitly creating a particular instruction sequence is better than using a higher-level language for some reason. Even then, it's generally the case that if you were doing that now you'd be writing software in some high-level language to emit the chosen instruction sequence; that is, you'd effectively create a "domain-specific assembler" for some task. A good example would be the code in something like a virtual machine interpreter that builds machine language blocks on-the-fly, like a Java or JavaScript VM.
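As a concrete (and hedged) illustration of that last point, here is a minimal sketch of such a "domain-specific assembler" in C++, assuming a Linux x86-64 target where an anonymous mapping may be both writable and executable: it emits a hand-encoded instruction sequence (mov eax, 42 then ret) into executable memory and calls it.

#include <sys/mman.h>
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    const std::uint8_t code[] = {
        0xB8, 0x2A, 0x00, 0x00, 0x00,  // mov eax, 42
        0xC3                           // ret
    };
    // Ask the OS for a page we can write to and then execute.
    void* mem = mmap(nullptr, sizeof(code),
                     PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (mem == MAP_FAILED) return 1;
    std::memcpy(mem, code, sizeof(code));
    auto fn = reinterpret_cast<int (*)()>(mem);
    std::printf("%d\n", fn());         // prints 42
    munmap(mem, sizeof(code));
    return 0;
}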
The assembler takes assembly language and turns it into machine code. Ideally, though it is not always the case, the assembly language has a one-to-one relationship with the machine code instructions. Most of the time, the translation from the assembly-language syntax for an instruction into machine code will be identical whether the assembler does it or it is done by hand. Naturally there are some don't-care bits from time to time, and the assembler and the human may choose different don't-care bits, so the result doesn't have to be a bit-for-bit match, but length-wise and speed-wise the two will be identical, no difference at all.
The differences between the human and the assembler software, if any, will be where the assembly language does not have a one-to-one relationship with the machine code, and/or where for various reasons the programmer wants the assembler to take care of something. This could be pseudo-instructions, or macros, or things having to do with externally defined variables.
Assembly language is a loaded term, since it is defined by the particular assembler; you can have many different and incompatible assembly languages for the same processor. And you can have assembly languages with instances where the language does not completely describe all the information needed to choose the specific instruction: near vs. far jumps, for example, in some instruction sets with some assemblers.
So if you want to compare apples to apples, there will be no difference between hand-assembled code and software-assembled code. Apples to apples meaning the code in question is written precisely enough, with no vagueness, that both the software and the human assembler can assemble it. If you do find differences other than don't-care bits, it probably has to do with an optimization, which means the human assembler changed the code; to make the comparison fair, the matching assembly language can and should be changed to match. Such a difference would have nothing to do with human versus software assemblers, but with one programmer's program as compared to another's. Basically, you could and would get the same result in assembly language with the software assembler.
A skilled assembly-language programmer who is targeting a very specific run-time environment can probably produce code which will run better than a compiler would produce. Depending upon the nature of the code, the performance improvement may or may not be significant relative to the amount of work required.
On the other hand, frameworks such as Java or .NET allow a programmer to compile software to an "intermediate" form which can, on demand, be translated into machine code that includes specific optimizations for the environment where it's actually running. Code which is compiled to run on "any platform", when run by a framework engine that was hand-tweaked for the platform it's running on, may not run as well as assembly code that was hand-tweaked for that platform, but it will run better than code which was hand-tweaked to optimize performance on some other platform.
"Apparently it used to be, according to this terrific account by Ed Nather. "
I remember reading a story like that decades ago.
It's been doing the rounds for a long time.
The blackjack bit is an embellishment, but the code dropping out at the right point, and the weeks spent trying to figure out how it worked before the penny finally dropped, definitely ring a very old bell.
I'm a firmware developer and I usually develop firmware in C or assembly. However, I came across a project completely implemented in C++ in our embedded library. Now, I know object-oriented languages can be used at the hardware level, but I would like to know why they aren't that popular when developing embedded systems.
The real reason: conceptual complexity. C and assembly provide a simple mental model for tracking what is going on in the system. Object-oriented programs require a more complex model that makes it harder to reason about what is going on.
Embedded systems tend to be environments that call for very tight control over what is going on in the system, versus the more open-ended server and PC environments. This calls for simple and transparent programming constructs. Both C and assembly provide a high level of visibility into what is really going on in the system at the lowest hardware level.
Object-oriented languages in general, and C++ in particular, abstract away many of the details of what is going on in the system when code is executed, thus making it much harder to reason about the inner workings of the system.
Here is an example to explain what I mean. Consider the following code snippet:
i++;
Seeing this in a C program gives us a mostly accurate idea of what it does, an order-of-magnitude sense of how many CPU cycles are used, how many registers are involved, and so on.
Now what will the same line do in a C++ program? Well, it depends. It depends on what type i is and on how the ++ operator has been overloaded. See what I mean?
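To make that concrete, here is a hypothetical sketch (the class is invented for illustration): the same i++ can range from a single increment instruction to a full object copy plus a heap allocation.

#include <vector>

struct AuditedCounter {
    long value = 0;
    std::vector<long> history;          // heap-backed log of past values
    AuditedCounter operator++(int) {    // postfix ++, as in "j++"
        AuditedCounter old = *this;     // full copy, vector included
        ++value;
        history.push_back(value);       // may allocate
        return old;
    }
};

int main() {
    int i = 0;
    i++;            // almost certainly one increment instruction

    AuditedCounter j;
    j++;            // copy constructor, vector growth, destructor
    return 0;
}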
None of this is to say that C++ or object orientation is bad. It is not. It does, however, require a much more complex mental model if one is interested in the minute details of what is really going on in the system, as many embedded developers feel they need to be.
From a technical point of view, embedded systems have limited resources. Object-oriented languages tend to create much bigger binaries than pure procedural ones, so many would choose something that's as light as possible. For instance, I work for a smartcard company and my team is the one handling extremely low-cost cards, with RAM ranging from only 1.5-1.75 KB and EEPROM from 96-136 KB. For this kind of embedded environment, most object-oriented languages (especially heavy ones such as Java) won't fit. We don't even use ANY standard C library; everything is written from scratch for size. C++ might fit, with proper coding technique and compiler options that don't generate RTTI, that minimize virtual method tables, that use only stack-based objects, etc., but it's just my guess.
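To illustrate that guess, here is a minimal sketch of the kind of C++ that can fit a tiny target: a fixed-capacity, stack-based class with no heap, no virtual functions, and no exceptions. The build flags shown are real GCC/Clang options; the class itself is invented for illustration.

// Build with size-focused options, e.g.:
//   g++ -Os -fno-rtti -fno-exceptions -fno-threadsafe-statics main.cpp
#include <cstddef>
#include <cstdint>

template <std::size_t N>
class ByteBuffer {                      // no vtable, no RTTI, no heap
public:
    bool push(std::uint8_t b) {
        if (len_ == N) return false;    // report failure instead of throwing
        data_[len_++] = b;
        return true;
    }
    std::size_t size() const { return len_; }
private:
    std::uint8_t data_[N] = {};
    std::size_t  len_ = 0;
};

int main() {
    ByteBuffer<16> buf;                 // lives entirely on the stack
    buf.push(0x3F);
    return static_cast<int>(buf.size());
}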
"OO languages" is too broad. There are lots of object-oriented languages with radically different characteristics. "It can be done in C++" doesn't imply "it can be done in any OO language". Good luck for writing a Python program for a less powerful AVR MCU for example. The device has 2kB of RAM and 32kB of Flash memory, the Python interpreter itself doesn't even fit into them.
C++ is a language with high-level and low-level parts at the same time. It's object-oriented, but in the end your nice OO code will be compiled down to raw machine code, just as if you had written it in C or assembly directly. Some other object-oriented languages, which are considered "higher-level" (or high-level-only, rather), can't do the same thing. It's all about the implementation of a particular language, really.
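A small invented example of that point: a non-virtual C++ member function compiles down to an ordinary call with this passed as a hidden argument, so the OO version and the plain C-style version below generate essentially the same machine code.

struct Gpio {
    volatile unsigned* reg;
    void set(unsigned bit) { *reg |= (1u << bit); }   // no runtime OO overhead
};

// Roughly what the compiler sees, written out by hand:
typedef struct { volatile unsigned* reg; } CGpio;
static void cgpio_set(CGpio* self, unsigned bit) { *self->reg |= (1u << bit); }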
One addition to what all others said:
We don't write much C++ embedded code because the customer demands it that way. In my field, code may need to get a certification, and certification guidelines only exist for C, not C++.
Therefore the project has to be implemented in C even if C++ would lead to a better product.
Because many developers "think" that C++ is not suitable for the embedded environment at all because of code size and performance, and this is not true in most cases!
I strongly recommend reading slides that discuss C++ usage for embedded systems and address C++ myths like:
The "Bloat" Myth
The "Poor Performance" Myth
Many compiler vendors for embedded targets provide C++ compilers, like Keil, IAR, and CodeRed; processor manufacturers also provide their toolchains with C++ compilers, e.g. Texas Instruments, Freescale, ...
Generally, developers need to consider C++ when starting a new project and decide whether to use it based on the project's needs and on what OOP/C++ can provide to get the job done on time and on cost.
My development style leads me to write a lot of throw-away "assisting" code,
whether for automatic generation of code parts, for semi-automated testing, or generally to build dummies, prototypes, or temporary "sparring partners" for the main development; I know I'm not the only one...
Since I frequently work both under Windows and Unices, I'd like to non-exclusively focus on a single "Swiss Army knife" tool that can work in both environments with limited differences, and that would allow me to do the usual stuff: text parsing, DB access, sockets, and nontrivial filesystem and process manipulation.
Until now, under Unix I've used a bit of Perl and massive amounts of shell scripts, but the latter are a bit limited, and Perl... despite being very capable and having modules for an incredible array of duties, I sincerely find it too "hostile" for anything that goes beyond 100 lines of code.
What would you suggest?
Scripting is not a requirement; it would be OK to use more statically-styled languages IF that makes development faster (getting programs to actually do their work, and possibly in a human-readable state) and if it doesn't become nightmarish to handle errors/exceptions and to adapt to dynamic environments (e.g. I don't like to hardwire data/DB table structure into my code, especially by hand).
I've been intrigued by Python and Ruby, but maybe Groovy (with its ability to access the huge Java class library and its compact syntax) or something else is better suited.
Thanks a lot in advance!
(Meanwhile, on a completely different note, Scala looks really tempting just for the cleanliness of it, but that's, probably, a completely different story... unless you tell me the opposite?)
Python is arguably one of the best choices. Its biggest benefit is that it has a huge built-in library for doing all sorts of stuff. It is also mature, very cross-platform, actively developed, and has many support options (mailing lists, newsgroups, etc).
In addition, it has a built-in GUI toolkit (tkinter) for those times when you need to write a quick GUI to get input from a user or display output from a running process. And if you don't like tkinter, there are other cross-platform GUI toolkits available.
I suggest Python.
For me it has a sweet spot of good libraries, documentation, community, cross-platform functionality, and ease of writing/reading.
It fills a similar niche to Perl's, but if you find Perl to be 'hostile' for longer scripts, you will probably like Python, especially when compared to Ruby, which feels more Perl-y, IMHO.
As an aside, all of these are quite easy to just try out - why not do that?
Then you can decide for yourself instead of trusting the questionable wisdom of an online forum (:
I think that Python and Ruby are your best bets, depending on exactly how you think and code.
I personally find Python EXTREMELY readable and its syntax is highly intuitive. I've heard Python described as "pseudo-code plus colons."
On the other hand, once you get past its slightly bizarre syntax, Ruby makes for high-speed development. It's built around DRY principles and convention over configuration, which is great for rapid prototyping.
There are other languages, especially Haskell and the Lisp dialects, that can make for super-rapid prototyping, but they don't have as large a supportive community, so libraries and discussion are in shorter supply.
How can programming in assembly help in achieving optimization?
The most likely way programming in assembly can improve your code is by improving you: teaching you more about what is happening at a low level. Acquiring the discipline of optimization there can help you make good decisions in higher-level languages.
As far as actually helping one program: as others have noted it's rarely worth it. It's just possible you can use it as a kind of advanced profile-driven optimization: try many variations until you find one that's best on your particular problem.
To start with this: write a program in C or C++ or whatever compiled language you normally use, fire up your debugger, disassemble a small but nontrivial function, and have a think about why the compiler did what it did. Then try writing a small bit of inline assembler yourself. On modern systems, assembly is most easily embedded within C rather than written from scratch.
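For instance, a first inline-assembler experiment might look like the following sketch. It assumes GCC or Clang extended asm on x86-64; the syntax is compiler- and target-specific.

#include <cstdio>

int add(int a, int b) {
    asm("add %1, %0"    // AT&T order: add the source (%1) into the dest (%0)
        : "+r"(a)       // a is read and written, kept in a register
        : "r"(b));      // b is a read-only input
    return a;
}

int main() {
    std::printf("%d\n", add(2, 3));   // prints 5
    return 0;
}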
Or alternatively, get a teeny machine like a PIC and make it flash a LED...
These days, you have to be very good at assembly to beat the compiler.
I can do it any day of the week, but only by viewing the compiler's output first.
And then, if it gains more than a couple of percentage points I'd be surprised.
These days, I only program in assembly when I'm doing something the compiler can't do.
In principle, you can write highly-optimized code in assembly because the compiler is limited to specific, general-purpose optimizations that should apply to many programs, while you can be creative and use your knowledge of this particular program.
To take a simple example, back when I was new to this business compilers were very limited in their ability to optimize register usage. You know that to perform any sort of arithmetic or logical operation, the CPU must generally load one of the values into a register, then perform the operation on the other, then save the result? Like to add two numbers together -- and I'll use a pseudo-assembler here because I don't know what assembly languages you know and I've forgotten most of the details myself -- you'd write something like this:
LOAD A,value1
ADD A,value2
STORE A,destination
Compilers used to generate the loads for every operation. So if your C program said:
x=x+y;
z=z+x;
The compiler would generate something like:
LOAD A,x
ADD A,y
STORE A,x
LOAD A,z
ADD A,x
STORE A,z
But a human could observe that by the time we get to the second statement, register A already contains x, and addition is commutative, so we could optimize this to:
LOAD A,x
ADD A,y
STORE A,x
ADD A,z
STORE A,z
Et cetera. One could go through all sorts of tiny micro-optimizations like this. I used to do that all the time back when I was young and the world was green.
But over the years compilers have gotten much smarter, and CPUs have gotten more powerful so the micro-optimizations don't matter as much.
Thus, I haven't written any assembly language code in, wow, probably 15 years. I used to read the assembly generated by the compiler when debugging, sometimes it would give a clue to a subtle problem, but I haven't done that in years now either.
I don't think compilers are even written in assembly any more. Instead, you write the first draft of the compiler in a high level language on some other computer, i.e. you write a cross-compiler to get yourself off the ground.
I suspect the only real use of assembly today is for extremely constrained environments, embedded systems and that sort of thing; and for programs that have to deal intimately with the hardware, like device drivers.
I'd be interested to hear whether there are any assembly programmers on this forum who care to tell us why they still program in assembly.
Programming in assembly won't, in and of itself, optimize your code. The main thing about assembly is that it allows you to have very low-level access and to choose exactly what instructions the processor executes.
Since you won't have some compiler generating the assembly for you, you can perform code optimizations when you write the program yourself, if you know how.
So, you think you are smarter than GCC's optimizing compiler?
If not, then fuhgeddaboudit (learning assembly for the sake of getting better at optimization). That would be akin to learning Scheme for the sake of getting better at recursion :)
In general, the compiler will do a fairly good job at generating optimal code. There are, however, cases where writing your own assembly can result in even more optimized (in terms of space and/or speed) code.
Typically, this happens when there is something that you know about the target system that the compiler doesn't. Compilers are designed to work on a variety of systems; if you want to take advantage of something unique to your target system, sometimes you have to go in and do it yourself.
Here's an example. A few months ago, I was writing some code for a MIPS-based embedded system. There are many different types of MIPS CPUs, and some support certain opcodes that others do not. My compiler would generate MIPS code using the set of assembly operations that all MIPS architectures support. However, I knew that my chip could do more. I had a subroutine that needed to count the number of leading zeroes in a 32-bit number. The compiler synthesized this into a loop that took about 10 lines of assembly to do. I re-wrote it in one line by using the CLZ opcode that was designed to do just this. I knew that my chip supported the opcode but the compiler didn't.
Admittedly, situations like this aren't very common; when they do pop up, however, it's nice to have enough of a background in assembly to take advantage of them.
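The same contrast can be sketched in C++ (my sketch, not the answerer's original code): a portable counting loop versus a compiler intrinsic that maps to a single instruction (CLZ on ARM/MIPS, LZCNT or BSR on x86) when the target supports it. Note that __builtin_clz is a GCC/Clang extension whose result is undefined for an input of 0.

#include <cstdint>

int clz_loop(std::uint32_t x) {       // what a generic compiler might emit
    int n = 0;
    while (n < 32 && !(x & 0x80000000u)) {
        x <<= 1;
        ++n;
    }
    return n;
}

int clz_builtin(std::uint32_t x) {    // typically a single opcode on capable chips
    return x ? __builtin_clz(x) : 32;
}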
Sometimes one will need to perform a task which maps particularly well onto some CPU instructions, but does not fit well into any high-level-language constructs. For example, on many processors one may easily perform extended-precision arithmetic using something like:
add r0,r4
addc r1,r5
addc r2,r6
addc r3,r7
This will regard r3:r2:r1:r0 and r7:r6:r5:r4 as numbers four words long, adding the second to the first. Four nice easy instructions, and anyone who understands assembly would know what they do. I know of no way to perform the same task in C without generating bigger and slower object code, and an incomprehensible mess of source code to boot.
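For comparison, here is a hedged sketch of the same idea in portable C++. Without access to the hardware carry flag, each limb must recompute the carry explicitly, which is exactly the kind of mess described above.

#include <cstdint>

// a += b, where a and b are little-endian arrays of four 64-bit limbs.
void add256(std::uint64_t a[4], const std::uint64_t b[4]) {
    std::uint64_t carry = 0;
    for (int i = 0; i < 4; ++i) {
        std::uint64_t sum = a[i] + b[i];
        std::uint64_t c1  = sum < a[i];   // carry out of the raw add
        a[i]  = sum + carry;
        carry = c1 | (a[i] < sum);        // carry out of adding the carry in
    }
}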
A somewhat more extreme but specialized real-world example: Given two arrays array1[0..63] and array2[0..63], compute array1[0]*array2[0] + array1[1]*array2[1] + array1[2]*array2[2] ... + array1[63]*array2[63]. On a DSP I used, the computation could be done in machine code in about 75 machine cycles (about 67 of which are a repeating MAC instruction). There's no way C code could come anywhere close.
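For reference, here is what that loop computes, in plain C++ (a sketch; 16-bit elements are assumed, and the accumulator is widened because DSPs typically provide an extra-wide accumulator for exactly this reason). The DSP's repeating MAC instruction effectively executes the loop body once per cycle, which compiled C on a general-purpose core of that era could not match.

#include <cstdint>

std::int64_t dot64(const std::int16_t a[64], const std::int16_t b[64]) {
    std::int64_t acc = 0;
    for (int i = 0; i < 64; ++i)
        acc += static_cast<std::int32_t>(a[i]) * b[i];  // multiply-accumulate
    return acc;
}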
About the only time I can think of using assembly language for optimizing code is when you need something very specific, like a GPIO on a microcontroller toggling between high and low exactly every 9 clock cycles. That's too short a time to manage with an interrupt, and higher-level language compilers don't normally offer this kind of control over the instruction stream.
Typically you wouldn't program in assembly. You would program in C, and then look at the generated assembly to see what optimizations (or not) the C compiler made automatically. Adjusting your C code (to allow for better vectorization, for example) will let the compiler rearrange the code better, which will give you optimized assembly.
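A hypothetical example of such an adjustment: the __restrict qualifier (a common compiler extension) promises that the arrays don't overlap, which frees the compiler to vectorize the loop. Compare the generated assembly with and without it (e.g. g++ -O3 -S).

void scale(float* __restrict dst, const float* __restrict src,
           float k, int n) {
    for (int i = 0; i < n; ++i)
        dst[i] = src[i] * k;   // candidate for SIMD once aliasing is ruled out
}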
More likely than your being able to beat the compiler at writing assembly code: knowing how typical tasks translate to assembly may help you write better high-level-language code.
Typically you do not resort to assembly for optimization purposes. If this is possible, usually someone will already have provided the essential code ready for you to call, for example in the form of a linear algebra library.
Likewise, assembly offers direct access to the processor (e.g. for atomicity, time measurement, I/O), but the important accesses will already have been made accessible from your high-level language.
Compilers do a good job of generating assembler.
However, there's a bad reason why hand-written assembler is faster. Since it's harder to write, you write less of it.
It would be nice if programmers could discipline themselves to get the same job done in minimal code, regardless of language.
When writing assembly, or even just the straight raw bytes an assembler outputs, you can write programs that use hardware-specific features of the computer, or make something otherwise very carefully specified.
There might be really high benefits if your program does the optimized part far more often than it does anything else. Always set up benchmarks before attempting optimizations.
The downside is that your hand-written assembly works on less hardware. It may even end up limited to one hardware model and revision!
It's rare that you ever can or need to write assembly routines, because commonly written software must work on almost every piece of hardware you find, and your kitten.
There's one interesting application if you know assembly: you can then write programs that produce assembly routines. Though it's mostly only fun, unless you keep it really small so you can port it easily.
Read the Graphics Programming Black Book by Michael Abrash
In most modern applications, it can't to any significant degree.
Inter-Process Communication Affects Application Response Time explains why algorithms are unlikely to be bottlenecks. (But always profile - never guess.)
In general, programming in assembly will increase time-to-market, bug density, and maintenance costs. Instead, strive for simplicity and readability in your code.
As poolie mentioned, the main benefit of learning assembly today is a deeper understanding of software and hardware. From that perspective, there's quite a bit of information on Steve Gibson's site.
If you understood why there is sometimes a need to write asm, you would appreciate both its strengths and its costs (headaches for you).
I was talking with some of the mentors in a local robotics competition for 7th and 8th grade kids. The robot was using PBASIC and the Parallax Basic Stamp. One of the major issues was that this was a short-term project that required building the robot, teaching the kids to program in PBASIC, and having them program the robot, all in only 2 hours or so a week over a couple of months. PBASIC is kinda nice in that it has built-in features to do everything, but information overload is possible due to this.
My thoughts are that simplicity is key.
When you have kids struggling to grasp:
if X>10 then <DOSOMETHING>
There is not much point in throwing "proper" object oriented programming at them.
What are the essentials needed to foster an interest in programming?
Edit:
I like the notion of an interpreted language on the PC as a learning tool. Due to the target platforms more than likely being somewhat resource-constrained, I would like to target languages that are appropriate for embedded work. (Python and even Lua require more resources than the target is likely to have. And I actually kinda like Lua.) I suppose that is one of the few virtues BASIC has: it has been run on systems with less than 4K for over 30 years. C may not be a bad option if there are some "friendly" tools available, such as Ch.
The most important thing is not having a lot of boilerplate to make the simplest program run.
If you start off with a bunch of
import Supercalifragilistic from <expialidocious>
public void inherited security model=<apartment>
public : main .....
And tell them "not to worry, you aren't supposed to understand that": you are going to put off both the brightest and the dumbest.
The nice thing about Python is that printing "hello world" is just: print "hello world"
Fun, quick results. Capture the attention span of the kid.
Interactive shells (command lines), like the ones most scripting languages offer, that let the student just type one- or two-liners are a big deal.
python:
>>> 1+1
2
Boom, instant feedback, kid thinks "the computer is talking back". Kids love that. Remember Eliza, anyone?
If they get bogged down in installing an IDE, creating a project, bleh bleh bleh, sometimes the tangents will take you away from the main topic.
BASIC is good too.
Look for some things online, like "SIMPLE": http://www.simplecodeworks.com/website.html
A team of researchers, beginning at Rice, then spreading out to Brown, Chicago, Northeastern, Northwestern, and Utah, has been studying this question for about 15 years. I can't summarize all their discoveries here, but here are some of their most important findings:
Irregular syntax can be a barrier to entry.
The language should be divided into concentric subsets, and you should choose a subset appropriate to the student's level of knowledge. For example, their smallest subset is called the "Beginning Student" language.
The compiler's error messages should be matched to the students' level of knowledge. If you are using subsets, different subsets might give different messages for the same error.
Beginners find it difficult to learn the phase distinction: separate phases for type checking and run time, with different kinds of errors. For this reason, beginners do better with a language where types are checked at run time, i.e., a dynamically typed language.
Beginners find it difficult to reason about mutable variables and mutable objects. If you teach pure functional programming, by contrast, you can leverage students' experience with high-school and middle-school algebra.
Beginning students are more engaged by an interactive programming environment than by the old edit-compile-link-go model.
Beginning students are engaged by splash and by interactivity. It's good if your language's standard library provides built-in support for creating and displaying images. It's better if those images are supported within the interactive programming environment, instead of requiring a separate player or viewer. And it's even better if your standard library can support moving images, or some other kind of animation.
Interestingly, they have got very good results with just 2D images. Even though we are all surrounded by examples of 3D computer graphics, students seem to get very engaged working with just two-dimensional images.
These results have been obtained primarily with college students, and they have been replicated at over 20 universities. However, the research team has also done some work with high-school and middle-school students. The first papers on that work are just coming out, so I'm less aware of the new findings and am not able to summarize them.
When you have kids struggling to grasp:
if X>10 then <DOSOMETHING>
Maybe it's a sign they shouldn't be doing programming?
What are the essentials needed to foster an interest in programming?
To see success with little or no effort. To create something running in a matter of minutes. A lot of programming languages can offer that, including the scary C++.
In order to avoid complications with #includes, multiple source files, modularization, and compilation, why not look elsewhere? Try writing some Excel macros, or use any other software with a basic built-in scripting language to automate certain tasks.
Another idea could be to play with web pages. It is not exactly programming, but it's at least easy to achieve something and show it to others with pride.
This has been said on SO before, but... try Scratch. It's an incredible learning tool for kids. It teaches the basics of programming concepts in a hands-on and language-independent way. After a bit of playing around with it they can get down to learning a specific language's implementation of the concepts they already understand.
The common theme in languages that are easy for beginners (especially children) to pick up is that there's very little barrier to entry, plus immediate feedback. If "hello world" doesn't look a lot like print "Hello, world!", it's going to be harder for people to pick up. The following features along those lines come to mind:
Interpreted, or incrementally JIT compiled (which looks like an interpreter to the user)
No boilerplate
No attempt to enforce a specific programming style (e.g. Java requiring that everything be in a class definition, or Haskell enforcing purely functional design)
Dynamic typing
Implicit coercion (maybe)
A REPL
Breaking the problem (read: program) down into a small set of sections (modules) that do one thing and do it very well.
You have to get them to stop thinking like a user and start thinking like a programmer. They need to take it one step at a time. Ask them what they have to think about in order to figure out the problem themselves, and then write those down as steps. If you can, break each step down even further in the same manner. When done, you will have the program in English, making it simpler to program for real.
I did this with a friend who just could not get it, and now he can. He used to look at something I did and be bewildered, and I would point out that he had done more complex things than that.
One of the more persistently present arguments I have had with other programmers is whether or not one's first language should require explicit type declarations. Many are of the opinion that learning a language which requires you to explicitly declare type information is one which will teach you to program typefully. Conversely, it can be said that dynamic languages might present a less demanding learning curve. It goes either way, I suppose.
My advice: start with a simple model of how a computer works. I am particular to stack machines as good tools for teaching computation.
Remember that beginners are learning two disciplines at the same time: how computers work and the abstract logic involved (the basics of Computer Science), plus how to write programs that match their intended logic (learning a specific language's syntax and idioms). You have to address both concerns in an interwoven fashion in order for the students to quickly become effective. This is also the reason experienced programmers can often pick up new languages quickly.
It's worth noting that Python grew out of a project for a language named ABC, which was targeted at beginners. For example, the colon before an indented block isn't strictly required for parsing, but was found to improve readability:
if some_condition:
do_this()
I've got three words: Karel the Robot.
It's a really, really simple 'language' designed to teach people the basics of programming.
Look for it on the web. You can look at this, though I've never tried it:
http://karel.sourceforge.net/
While this isn't related to programming a robot, I think web programming is a great place to start with kids that age. It's how I started at that exact age. It easily translates to something kids understand if they use the web at all. Start with HTML, throw in JavaScript, and soon they want to be doing features requiring server-side scripting of some sort, and things progress from there.
With the kind of kids who are already interested in robotics, though, I'd actually go for a different language like the ones already described. If you want to work in a field like robotics, you don't need to be convinced to try something hard. You need to be challenged.
A few years ago I saw a presentation at Ignite! Seattle from one of the people working on the project now known as Kodu, which was envisioned as a programming language for children. He spent time talking about which common language features could simply be thrown out in a programming environment meant to teach fundamentals.
A lot of cherished imperative constructs, like C-style for loops, were simply left out in favor of a simple object-messaging approach. Object-oriented programming isn't hard to understand when you think about "objects" and "messages"; the hard part is when you deal with things that programmers, but not children, care about, like inheritance and contracts and sweeping abstractions. I've got this thing (noun), now act on it (verb), in this way (adverb like quickly), when thing (sees/bumps into) something (with some attribute) (that's your if). Events are really conditions, and have all of the power of conditions, but it's up to the runtime to identify when those events happen.
This kind of event and messaging driven approach probably translates even better to robots than procedural programming would, anyway, so it might be a good way to look at the problem. Try not to think about what you'd "need" to know to program in C or Pascal or something; think about what you'd want to be able to make something do.
I'm looking at doing embedded coding for a device that runs at approximately 20 MHz and has 6 MB of RAM, with a 32-bit ARM processor. Can anyone suggest the best / most appropriate language for programming an embedded system? I'm considering:
Lua
TinyPy
C
Java ME
C#
someone has suggested JavaScript
Any suggestions? Thanks
Edit - looks like C and Lua are the winners. Cheers all!
Edit - Real time is not an issue; it's more the limited RAM/CPU dictating things.
If you're bringing the device up from scratch or interfacing directly with non-standard peripherals, C is really the only way to go.
If you've already got an embedded OS or can port one without difficulty, you might have more flexibility in adding one of the more script-y languages. C# is out of the question unless you're on WinCE, and then you'll be restricted to .NET Micro.
Beyond that, "best" has little meaning without describing what your device is going to be used for. Some languages have better support for certain tasks than others.
C is probably your best bet for such limited cpu resources.
I've used Lua on an ARM OMAP processor. Lua's tight integration with C allows going to the metal whenever you need, and its small size makes it suitable for a wide range of platforms. I developed the UI for my firmware in Lua on my mac and then brought it over to the embedded platform with no changes.
While the OMAP processor was beefy enough to run other languages like Java or Python, I didn't know what hardware I was targeting when I started the code. Lua was a safe bet.
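As a sketch of that tight C integration (the Lua 5.x C API calls below are real; the toggle_led callback is invented for illustration):

extern "C" {
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
}
#include <cstdio>

static int toggle_led(lua_State* L) {
    int pin = static_cast<int>(luaL_checkinteger(L, 1));
    std::printf("toggling LED on pin %d\n", pin);   // stand-in for hardware I/O
    return 0;                                       // number of Lua return values
}

int main() {
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);
    lua_register(L, "toggle_led", toggle_led);      // expose the C function to scripts
    luaL_dostring(L, "for i = 1, 3 do toggle_led(i) end");
    lua_close(L);
    return 0;
}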
I'd be tempted to go with straight C, but then I've been writing C for nearly 30 years. Lua and TinyPy seem too new and experimental to me; embedded devices need to be very robust.
Java ME has good points. I don't know about C# in an embedded world.
It's important to specify what you expect this device to do. Is it some sort of control application? Does it have to implement algorithms? What about floating-point support? GUIs? Is performance critical? Are you planning on using an OS?
Answering these questions is a crucial prerequisite to picking a programming language.
That said, embedded systems have to be reliable, so I'd go for some tested solution. C is probably the most solid and best-supported option for ARM chips, but YMMV depending on your specific needs.
C is certainly the most used language in embedded systems.
It also seems to be the most talked-about language in general: http://www.langpop.com/
Edit: hmm. I just noticed that the 'embedded' you seem to be describing is not about adding an automation language to an application, but about squeezing an application into an embedded platform. As others suggest, unless you really need it, skip embeddable languages and program your application in C. There is nearly no runtime overhead for that, except for what you actually use.
In no particular order, Lua, JavaScript, and Tcl are all quite well suited to embedding. Lua has been the easiest for me to embed. JavaScript might be the fastest. All three have good handling for untrusted code, but Tcl's is the most robust: for example, untrusted code can itself run untrusted code (if it's trusted to do that much).
Unless you have an RTOS available that supports a variety of alternate languages, C or C++ (depending on your compiler chain) is the way to go.
Your decision is most likely to be determined by the tools available for this processor.
C is by far the most supported language for embedded processors, so you can't go far wrong with that, and it will be good experience if you have to write software for other chips in the future.
C++ is becoming more popular for embedded systems. Beyond that, it depends on your priorities (time to market, resource usage, speed), and the quality of the tools you use.
C is the best.