Background
Last year, I did an internship in a physics research group at a university. In this group, we mostly used LabVIEW to write programs for controlling our setups, doing data acquisition and analyzing our data. For the first two purposes, that works quite OK, but for data analysis, it's a real pain. On top of that, everyone was mostly self-taught, so code that was written was generally quite a mess (no wonder that every PhD quickly decided to rewrite everything from scratch). Version control was unknown, and impossible to set up because of strict software and network regulations from the IT department.
Now, things actually worked out surprisingly OK, but how do people in the natural sciences do their software development?
Questions
Some concrete questions:
What languages/environments have you used for developing scientific software, especially data analysis? What libraries? (for example, what do you use for plotting?)
Was there any training for people without any significant background in programming?
Did you have anything like version control, and bug tracking?
How would you go about trying to create a decent environment for programming, without getting too much in the way of the individual scientists (especially physicists are stubborn people!)
Summary of answers thus far
The answers (or my interpretation of them) thus far: (2008-10-11)
Languages/packages that seem to be the most widely used:
LabVIEW
Python
with SciPy, NumPy, PyLab, etc. (See also Brandon's reply for downloads and links)
C/C++
MATLAB
Version control is used by nearly all respondents; bug tracking and other processes are much less common.
The Software Carpentry course is a good way to teach programming and development techniques to scientists.
How to improve things?
Don't force people to follow strict protocols.
Set up an environment yourself, and show the benefits to others. Help them to start working with version control, bug tracking, etc. themselves.
Reviewing other people's code can help, but be aware that not everyone may appreciate that.
What languages/environments have you used for developing scientific software, esp. data analysis? What libraries? (E.g., what do you use for plotting?)
I used to work for Enthought, the primary corporate sponsor of SciPy. We collaborated with scientists from the companies that contracted Enthought for custom software development. Python/SciPy seemed to be a comfortable environment for scientists. It's much less intimidating to get started with than say C++ or Java if you're a scientist without a software background.
The Enthought Python Distribution comes with all the scientific computing libraries including analysis, plotting, 3D visualisation, etc.
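As a hedged illustration of what that buys you (assuming NumPy and SciPy from such a distribution; the exponential model and the numbers are made up for the sketch), a typical fit looks something like this:

    # Minimal sketch of the analysis side, assuming NumPy and SciPy
    # (both shipped with EPD-style distributions); the model is invented.
    import numpy as np
    from scipy.optimize import curve_fit

    def model(x, a, tau):
        """Exponential decay used as the fit model."""
        return a * np.exp(-x / tau)

    x = np.linspace(0, 10, 200)
    y = model(x, 2.5, 3.0) + 0.1 * np.random.randn(x.size)  # synthetic "measurement"

    params, cov = curve_fit(model, x, y, p0=(1.0, 1.0))
    errors = np.sqrt(np.diag(cov))
    print("a   = %.2f +/- %.2f" % (params[0], errors[0]))
    print("tau = %.2f +/- %.2f" % (params[1], errors[1]))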
Was there any training for people without any significant background in programming?
Enthought does offer SciPy training and the SciPy community is pretty good about answering questions on the mailing lists.
Did you have anything like version control, bug tracking?
Yes, and yes (Subversion and Trac). Since we were working collaboratively with the scientists (and typically remotely from them), version control and bug tracking were essential. It took some coaching to get some scientists to internalize the benefits of version control.
How would you go about trying to create a decent environment for programming, without getting too much in the way of the individual scientists (esp. physicists are stubborn people!)
Make sure they are familiarized with the tool chain. It takes an investment up front, but it will make them feel less inclined to reject it in favor of something more familiar (Excel). When the tools fail them (and they will), make sure they have a place to go for help — mailing lists, user groups, other scientists and software developers in the organization. The more help there is to get them back to doing physics the better.
The course Software Carpentry is aimed specifically at people doing scientific computing and aims to teach the basics and lessons of software engineering, and how best to apply them to projects.
It covers topics like version control, debugging, testing, scripting and various other issues.
I've listened to about 8 or 9 of the lectures and would highly recommend it.
Edit: The MP3s of the lectures are available as well.
Nuclear/particle physics here.
Major programming work used to be done mostly in Fortran using CERNLIB (PAW, MINUIT, ...) and GEANT3; more recently it has mostly been done in C++ with ROOT and Geant4. There are a number of other libraries and tools in specialized use, and LabVIEW sees some use here and there.
Data acquisition in my end of this business has often meant fairly low level work. Often in C, sometimes even in assembly, but this is dying out as the hardware gets more capable. On the other hand, many of the boards are now built with FPGAs which need gate twiddling...
One-offs, graphical interfaces, etc. use almost anything (Tcl/Tk used to be big, and I've been seeing more Perl/Tk and Python/Tk lately) including a number of packages that exist mostly inside the particle physics community.
Many people writing code have little or no formal training, and process is transmitted very unevenly by oral tradition, but most of the software group leaders take process seriously and read as much as necessary to make up their deficiencies in this area.
Version control for the main tools is ubiquitous. But many individual programmers neglect it for their smaller tasks. Formal bug tracking tools are less common, as are nightly builds, unit testing, and regression tests.
To improve things:
Get on the good side of the local software leaders
Implement the process you want to use in your own area, and encourage those you let in to use it too.
Wait. Physicists are empirical people. If it helps, they will (eventually!) notice.
One more suggestion for improving things.
Put a little time in to helping anyone you work directly with. Review their code. Tell them about algorithmic complexity/code generation/DRY or whatever basic thing they never learned because some professor threw a Fortran book at them once and said "make it work". Indoctrinate them on process issues. They are smart people, and they will learn if you give them a chance.
This might be slightly tangential, but hopefully relevant.
I used to work for National Instruments, R&D, where I wrote software for NI RF & Communication toolkits. We used LabVIEW quite a bit, and here are the practices we followed:
Source control. NI uses Perforce. We did the regular thing - dev/trunk branches, continuous integration, the works.
We wrote automated test suites.
We had a few people who came in with a background in signal processing and communication. We used to have regular code reviews, and best practices documents to make sure their code was up to the mark.
Despite the code reviews, there were a few occasions when "software guys" like me had to rewrite some of this code for efficiency.
I know exactly what you mean about stubborn people! We had folks who used to think that pointing out a potential performance improvement in their code was a direct personal insult! It goes without saying that this calls for good management. I thought the best way to deal with these folks is to go slowly, not press too hard for changes and, if necessary, be prepared to do the dirty work. [Example: write a test suite for their code.]
I'm not exactly a 'natural' scientist (I study transportation) but am an academic who writes a lot of my own software for data analysis. I try to write as much as I can in Python, but sometimes I'm forced to use other languages when I'm working on extending or customizing an existing software tool. There is very little programming training in my field. Most folks are either self-taught, or learned their programming skills from classes taken previously or outside the discipline.
I'm a big fan of version control. I used Vault running on my home server for all the code for my dissertation. Right now I'm trying to get the department to set up a Subversion server, but my guess is I will be the only one who uses it, at least at first. I've played around a bit with FogBugz, but unlike version control, I don't think that's nearly as useful for a one-man team.
As for encouraging others to use version control and the like, that's really the problem I'm facing now. I'm planning on forcing my grad students to use it on research projects they're doing for me, and encouraging them to use it for their own research. If I teach a class involving programming, I'll probably force the students to use version control there too (grading them on what's in the repository). As far as my colleagues and their grad students go, all I can really do is make a server available and rely on gentle persuasion and setting a good example. Frankly, at this point I think it's more important to get them doing regular backups than get them on source control (some folks are carrying around the only copy of their research data on USB flash drives).
1.) Scripting languages are popular these days for most things due to better hardware. Perl/Python/Lisp are prevalent for lightweight applications (automation, light computation); I see a lot of Perl at my work (computational EM) since we like Unix/Linux. For performance stuff, C/C++/Fortran are typically used. For parallel computing, well, we usually manually parallelize runs in EM as opposed to having a program implicitly do it (ie split up the jobs by look angle when computing radar cross sections).
2.) We just kind of throw people into the mix here. A lot of the code we have is very messy, but scientists are typically a scatterbrained bunch that don't mind that sort of thing. Not ideal, but we have things to deliver and we're severely understaffed. We're slowly getting better.
3.) We use SVN; however, we do not have bug tracking software. About as good as it gets for us is a txt file that tells you where specific bugs are.
4.) My suggestion for implementing best practices for scientists: do it slowly. As scientists, we typically don't ship products. No one in science makes a name for himself by having clean, maintainable code. They get recognition from the results of that code, typically. They need to see justification for spending time on learning software practices. Slowly introduce new concepts and try to get them to follow; they're scientists, so after their own empirical evidence confirms the usefulness of things like version control, they will begin to use it all the time!
I'd highly recommend reading "What Every Computer Scientist Should Know About Floating-Point Arithmetic". A lot of problems I encounter on a regular basis come from issues with floating point programming.
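To give a flavour of what the paper covers, here is a short Python example; the same behaviour shows up in C, Fortran or MATLAB, since it comes from IEEE 754 doubles rather than from the language:

    # Classic IEEE 754 surprises.
    a = 0.1 + 0.2
    print(a == 0.3)              # False: 0.1 has no exact binary representation
    print(repr(a))               # 0.30000000000000004

    # Compare with a tolerance instead of exact equality.
    print(abs(a - 0.3) < 1e-12)  # True

    # Magnitude matters: adding 1.0 to 1e16 is lost entirely,
    # because the spacing between adjacent doubles near 1e16 is 2.0.
    print((1e16 + 1.0) - 1e16)   # 0.0

    # For long sums, compensated summation helps.
    import math
    print(sum([0.1] * 10))       # 0.9999999999999999
    print(math.fsum([0.1] * 10)) # 1.0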
I am a physicist working in the field of condensed matter physics, building classical and quantum models.
Languages:
C++ -- very versatile: can be used for anything, good speed, but it can be a bit inconvenient when it comes to MPI
Octave -- good for some supplementary calculations, very convenient and productive
Libraries:
Armadillo/Blitz++ -- fast array/matrix/cube abstractions for C++
Eigen/Armadillo -- linear algebra
GSL -- to use with C
LAPACK/BLAS/ATLAS -- extremely big and fast, but less convenient (and written in FORTRAN)
Graphics:
GNUPlot -- it has very clean and neat output, but not that productive sometimes
Origin -- very convenient for plotting
Development tools:
Vim + plugins -- it works great for me
GDB -- a great debugging tool when working with C/C++
Code::Blocks -- I used it for some time and found it quite comfortable, but Vim is still better in my opinion.
I work as a physicist in a UK university.
Perhaps I should emphasise that different areas of research have different emphasis on programming. Particle physicists (like dmckee) do computational modelling almost exclusively and may collaborate on large software projects, whereas people in fields like my own (condensed matter) write code relatively infrequently. I suspect most scientists fall into the latter camp. I would say coding skills are usually seen as useful in physics, but not essential, much like physics/maths skills are seen as useful for programmers but not essential. With this in mind...
What languages/environments have you used for developing scientific software, esp. data analysis? What libraries? (E.g., what do you use for plotting?)
Commonly, data analysis and plotting are done using generic data analysis packages such as IGOR Pro, Origin or KaleidaGraph, which can be thought of as 'Excel plus'. These packages typically have a scripting language that can be used to automate. More specialist analysis may have a dedicated utility for the job that will generally have been written a long time ago, that no one has the source for, and that is pretty buggy. Some more techie types might use the languages that have been mentioned (Python, R, MATLAB with Gnuplot for plotting).
Control software is commonly done in LabVIEW, although we actually use Delphi which is somewhat unusual.
Was there any training for people without any significant background in programming?
I've been to seminars on grid computing, 3D visualisation, learning Boost etc. given by both universities I've been at. As an undergraduate we were taught VBA for Excel and MatLab but C/MatLab/LabVIEW is more common.
Did you have anything like version control, bug tracking?
No, although people do have personal development setups. Our code base is in a shared folder on a 'server' which is kept current with a synching tool.
How would you go about trying to create a decent environment for programming, without getting too much in the way of the individual scientists (esp. physicists are stubborn people!)
One step at a time! I am trying to replace the shared folder with something a bit more solid; perhaps finding an SVN client which mimics the current synching tool's behaviour would help.
I'd say though on the whole, for most natural science projects, time is generally better spent doing research!
Ex-academic physicist and now industrial physicist UK here:
What languages/environments have you used for developing scientific software, esp. data analysis? What libraries? (E.g., what do you use for plotting?)
I mainly use MATLAB these days (easy to access visualisation functions and maths). I used to use Fortran a lot and IDL. I have used C (but I'm more a reader than a writer of C), Excel macros (ugly and confusing). I'm currently needing to be able to read Java and C++ (but I can't really program in them) and I've hacked Python as well. For my own entertainment I'm now doing some programming in C# (mainly to get portability / low cost / pretty interfaces). I can write Fortran with pretty much any language I'm presented with ;-)
Was there any training for people without any significant background in programming?
Most (all?) undergraduate physics courses will have a small programming course, usually in C, Fortran or MATLAB, but it's the real basics. I'd really like to have had some training in software engineering at some point (revision control / testing / designing medium-scale systems).
Did you have anything like version control, bug tracking?
I started using Subversion / TortoiseSVN relatively recently. Groups I've worked with in the past have used revision control. I don't know any academic group which uses formal bug tracking software. I still don't use any sort of systematic testing.
How would you go about trying to create a decent environment for programming, without getting too much in the way of the individual scientists (esp. physicists are stubborn people!)
I would try to introduce some software engineering ideas at undergraduate level and then reinforce them by practice at graduate level, also provide pointers to resources like the Software Carpentry course mentioned above.
I'd expect that a significant fraction of academic physicists will be writing software (not necessarily all though) and they are in dire need of at least an introduction to ideas in software engineering.
What languages/environments have you used for developing scientific software, esp. data analysis? What libraries? (E.g., what do you use for plotting?)
Python, NumPy and pylab (plotting).
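For example, a minimal plotting sketch (the filename is a placeholder; assumes NumPy and Matplotlib's pylab interface are installed):

    # Load a two-column text file and plot it with the pylab interface.
    import numpy as np
    import pylab

    data = np.loadtxt("spectrum.dat")   # placeholder file: columns x, y
    x, y = data[:, 0], data[:, 1]

    pylab.plot(x, y, "b.-", label="measurement")
    pylab.xlabel("x")
    pylab.ylabel("signal")
    pylab.legend()
    pylab.savefig("spectrum.png", dpi=150)
    pylab.show()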
Was there any training for people without any significant background in programming?
No, but I was working in a multimedia research lab, so almost everybody had a computer science background.
Did you have anything like version control, bug tracking?
Yes, Subversion for version control, Trac for bug tracking and wiki. You can get free bug tracker/version control hosting from http://www.assembla.com/ if their TOS fits your project.
How would you go about trying to create a decent environment for programming, without getting too much in the way of the individual scientists (esp. physicists are stubborn people!).
Make sure the infrastructure is set up and well maintained and try to sell the benefits of source control.
I'm a statistician at a university in the UK. Generally people here use R for data analysis, it's fairly easy to learn if you know C/Perl. Its real power is in the way you can import and modify data interactively. It's very easy to take a number of say CSV (or Excel) files and merge them, create new columns based on others and then throw that into a GLM, GAM or some other model. Plotting is trivial too and doesn't require knowledge of a whole new language (like PGPLOT or GNUPLOT.) Of course, you also have the advantage of having a bunch of built-in features (from simple things like mean, standard deviation etc all the way to neural networks, splines and GL plotting.)
Having said this, there are a couple of issues. With very large datasets R can become very slow (I've only really seen this with >50,000x30 datasets) and since it's interpreted you don't get the advantage of Fortran/C in this respect. But, you can (very easily) get R to call C and Fortran shared libraries (either from something like netlib or ones you've written yourself.) So, a usual workflow would be to:
Work out what to do.
Prototype the code in R.
Run some preliminary analyses.
Re-write the slow code into C or Fortran and call that from R.
Which works very well for me.
I'm one of the only people in my department (of >100 people) using version control (in my case using git with github.com). This is rather worrying, but they just don't seem to be keen on trying it out and are content with passing zip files around (yuck).
My suggestion would be to continue using LabVIEW for the acquisition (and perhaps trying to get your co-workers to agree on a toolset for acquisition and making it available for all) and then move to exporting the data into a CSV (or similar) and doing the analysis in R. There's really very little point in re-inventing the wheel in this respect.
What languages/environments have you used for developing scientific software, esp. data analysis? What libraries? (E.g., what do you use for plotting?)
My undergraduate physics department taught LabVIEW classes and used it extensively in its research projects.
The other alternative is MATLAB, in which I have no experience. There are camps for either product; each has its own advantages/disadvantages. Depending on what kind of problems you need to solve, one package may be preferable to the other.
Regarding data analysis, you can use whatever kind of number cruncher you want. Ideally, you can do the hard calculations in language X and format the output to plot nicely in Excel, Mathcad, Mathematica, or whatever the flavor du jour plotting system is. Don't expect standardization here.
Did you have anything like version control, bug tracking?
Looking back, we didn't, and it would have been easier for us all if we did. Nothing like breaking everything and struggling for hours to fix it!
Definitely use source control for any common code. Encourage individuals to write their code in a manner that could be made more generic. This is really just coding best practices. Really, you should have them teaching (or taking) a computer science class so they can get the basics.
How would you go about trying to create a decent environment for programming, without getting too much in the way of the individual scientists (esp. physicists are stubborn people!)
There is a clear split between data acquisition (DAQ) and data analysis. Meaning, it's possible to standardize on the DAQ and then allow the scientists to play with the data in the program of their choice.
Another good option is Scilab. It has graphic modules à la LabVIEW, it has its own programming language and you can also embed Fortran and C code, for example. It's being used in public and private sectors, including big industrial companies. And it's free.
About versioning, some prefer Mercurial, as it gives more liberties managing and defining the repositories. I have no experience with it, however.
For plotting I use Matplotlib. I will soon have to make animations, and I've seen good results using MEncoder. Here is an example including an audio track.
Finally, I suggest going modular, that is, trying to keep the main pieces of code in different files, so code revision, understanding, maintenance and improvement will be easier. I have written, for example, a Python module for file integrity testing, another for image processing sequences, etc.
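As a sketch of what such a small, self-contained module might look like (the checksum approach and the names here are my own illustration, not the author's actual code):

    # integrity.py -- hypothetical minimal file-integrity module.
    # Keeping it in its own file makes it easy to reuse, review and test.
    import hashlib

    def file_checksum(path, algorithm="sha256", chunk_size=65536):
        """Return the hex digest of a file, reading it in chunks."""
        digest = hashlib.new(algorithm)
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify(path, expected_digest, algorithm="sha256"):
        """True if the file's checksum matches the stored value."""
        return file_checksum(path, algorithm) == expected_digest

Any analysis script can then just import integrity and call integrity.verify(...) without caring how the checksum is computed.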
You should also consider developing with a debugger that allows you to check variable contents at settable breakpoints in the code, instead of using print lines.
I have used Eclipse for Python and Fortran development (although I once got a false bug when compiling a short Fortran program with it, but it may have been a bad configuration) and I'm starting to use the Eric IDE for Python. It lets you debug, manage versioning with SVN, has an embedded console, can do refactoring with Bicycle Repair Man (it can use another tool, too), has unittest support, etc. A lighter alternative for Python is IDLE, included with Python since version 2.3.
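Even without an IDE, Python's built-in pdb gives you breakpoint-style debugging from the command line; a minimal illustration (the function is just a stand-in):

    # debug_demo.py -- pause at a breakpoint and inspect variables with pdb.
    import pdb

    def accumulate(values):
        total = 0.0
        for i, v in enumerate(values):
            pdb.set_trace()   # pauses here: inspect i, v, total; type 'c' to continue
            total += v
        return total

    print(accumulate([1.0, 2.5, 3.25]))

Running the script as python -m pdb debug_demo.py gives the same stepping and inspection without editing the source.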
As a few hints, I also suggest:
Not using single-character variables. When you want to search for occurrences of a variable, you will get results everywhere. Some argue that a decent IDE makes this easier, but then you will depend on having permanent access to the IDE. Even using ii, jj and kk can be enough, although this choice will depend on your language. (Double vowels would be less useful if code comments are made in Estonian, for instance.)
Commenting the code from the very beginning.
For critical applications sometimes it's better to rely on older language/compiler versions (major releases), more stable and better debugged.
Of course you can have more optimized code in later versions, fixed bugs, etc., but I'm talking about using Fortran 95 instead of 2003, Python 2.5.4 instead of 3.0, and so on. (Especially when a new version breaks backwards compatibility.) Lots of improvements usually introduce lots of bugs. Still, this will depend on specific application cases!
Note that this is a personal choice, many people could argue against this.
Use redundant and automated backup! (With versioning control).
Definitely, use Subversion to keep current, work-in-progress, and stable snapshot copies of source code. This includes C++, Java etc. for homegrown software tools, and quickie scripts for one-off processing.
With the strong leaning in science and applied engineering toward "lone cowboy" development methodology, the usual practice of organizing the repository into trunk, tags, branches and whatever else - don't bother! Scientists and their lab technicians like to twirl knobs, wiggle electrodes and chase vacuum leaks. It's enough of a job to get everyone to agree to, say, Python/NumPy, or to follow some naming convention; forget trying to make them follow arcane software developer practices and conventions.
For source code management, centralized systems such as Subversion are superior for scientific use due to the clear single point of truth (SPOT). Logging of changes and the ability to recall versions of any file, without having to chase down where to find something, has huge record-keeping advantages. Tools like Git and Monotone: oh my gosh, the chaos I can imagine would follow! Having clear-cut records of just what version of hack-job scripts were used while toying with the new sensor when that Higgs boson went by or that supernova blew up will lead to happiness.
What languages/environments have you used for developing scientific software, esp. data analysis? What libraries? (E.g., what do you use for plotting?)
Languages I have used for numerics and scientific-related stuff:
C (slow development, too much debugging, almost impossible to write reusable code)
C++ (and I learned to hate it -- development isn't as slow as C, but it can be a pain. Templates and classes were cool initially, but after a while I realized that I was fighting them all the time and finding workarounds for language design problems)
Common Lisp, which was OK, but not widely used for scientific computing. Not easy to integrate with C (compared to other languages), but it works
Scheme. This one became my personal choice.
My editor is Emacs, although I do use vim for quick stuff like editing configuration files.
For plotting, I usually generate a text file and feed it into gnuplot.
For data analysis, I usually generate a text file and use GNU R.
I see lots of people here using FORTRAN (mostly 77, but some 90), lots of Java and some Python. I don't like those, so I don't use them.
Was there any training for people without any significant background in programming?
I think this doesn't apply to me, since I graduated in CS -- but where I work there is no formal training; people (engineers, physicists, mathematicians) do help each other.
Did you have anything like version control, bug tracking?
Version control is absolutely important! I keep my code and data on three different machines, on two different sides of the world -- in Git repositories. I sync them all the time (so I have version control and backups!). I don't do bug tracking, although I may start doing that.
But my colleagues don't use a bug tracker or version control at all.
How would you go about trying to create a decent environment for programming, without getting too much in the way of the individual scientists (esp. physicists are stubborn people!)
First, I'd give them as much freedom as possible. (At the university where I work I could choose between having someone install Ubuntu or Windows, or installing my own OS -- I chose to install my own. I don't have support from them and I'm responsible for anything that happens to my machine, including security issues, but I do whatever I want with it.)
Second, I'd see what they are used to, and make it work (need FORTRAN? We'll set it up. Need C++? No problem. Mathematica? OK, we'll buy a license). Then see how many of them would like to learn "additional tools" to help them be more productive (don't say "different" tools. Say "additional", so it won't seem like anyone will "lose" or "let go" or whatever). Start with editors, see if there are groups who would like to use VCS to sync their work (hey, you can stay home and send your code through SVN or GIT -- wouldn't that be great?) and so on.
Don't impose -- show examples of how cool these tools are. Do some data analysis using R, and show them how easy it was. Show nice graphics, and explain how you created them (but start with simple examples, so you can quickly explain them).
I would suggest F# as a potential candidate for performing science-related manipulations given its strong semantic ties to mathematical constructs.
Also, its support for units of measure, as written about here, makes a lot of sense for ensuring proper translation between the mathematical model and the implementation source code.
First of all, I would definitely go with a scripting language to avoid having to explain a lot of extra things (for example manual memory management is - mostly - ok if you are writing low-level, performance sensitive stuff, but for somebody who just wants to use a computer as an upgraded scientific calculator it's definitely overkill). Also, look around if there is something specific for your domain (as is R for statistics). This has the advantage of already working with the concepts the users are familiar with and having specialized code for specific situations (for example calculating standard deviations, applying statistical tests, etc in the case of R).
If you wish to use a more generic scripting language, I would go with Python. Two things it has going for it are:
The interactive shell where you can experiment
Its clear (although sometimes lengthy) syntax
As an added advantage, it has libraries for most of the things you would want to do with it.
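To illustrate the first point, a made-up interactive session; this immediate feedback is what makes the shell so handy for experimentation:

    >>> counts = [12, 9, 15, 11, 13]
    >>> sum(counts) / float(len(counts))   # try an idea immediately...
    12.0
    >>> sorted(counts)
    [9, 11, 12, 13, 15]
    >>> max(counts) - min(counts)          # ...and refine it interactively
    6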
I'm no expert in this area, but I've always understood that this is what MATLAB was created for. There is a way to integrate MATLAB with SVN for source control as well.
I know that this question is too abstract. But: how much time do I need to learn LabVIEW to become an average LabVIEW developer? For example, if I buy a good book about LabVIEW and have 8 hours per day (at my work) dedicated to learning LabVIEW, how many days will I spend on it? Could you please provide an example from your own experience. Some more information about me that may be helpful: I'm a developer and know C/C++/Python and a little bit of Java.
Like Swinders said, it might depend a lot on your sensibilities. I have seen people who had a really hard time migrating to the data flow concept. It's a different paradigm from the classic text-based languages and some people can't easily think in these concepts.
If you get past that hurdle, you'll find that the IDE handles a lot of the annoying things you used to take care of for you (things such as syntax and memory allocation). This allows you to become productive very quickly.
It doesn't mean, however, that your level would be high. One potential pit you should try hard to avoid is casting your existing experience onto LV. The most common example is probably local variables. This may be shocking to people coming from a text-based world, but LV does not have variables per se. Unfortunately, it does have elements called variables, and people migrating from C who find them jump on them and use them as they would use variables in C, leading to LV code which looks like C code and is bad code (at least in LV).
If you do manage to work around this, I would guess you would become better than the global average in less than a month and better than most professional developers after creating three projects you would later look at and say "what the hell was I thinking?".
I never took any of the NI courses (although I understand some of the advanced architecture ones are pretty good), but I would suggest you also spend some time in some of the online communities (such as LAVA or the NI forums) and look at some of the examples and discussions there. There's a lot of material about best practices, design patterns, etc., which would allow you to become a more professional developer.
Above all, do not abandon your current professional conduct. If you have a structured process for designing and developing software, you already have a leg up on the majority of LV programmers. Just make sure you adapt and keep using such a process.
I started with no commercial programming experience (I have always programmed for fun) and followed an on-line tutorial to pick up the basics of LabVIEW. Within a week I was able to understand existing code and could develop a small application.
It is hard to give an estimate on how long it would take to become an 'average' LabVIEW developer as this depends on what you mean by 'average'. One thing to consider is how easy you are able to think in terms of data flow rather than procedural languages. If you can pick up new programming languages quickly then this will help.
Would you be the only person using LabVIEW or are there others at your place of work that could mentor you? You may also find that there are user groups operating near you which I would recommend (check the NI website or contact your local NI office).
There is then the experience that you will need to gain to allow you to produce good LabVIEW code. I was lucky to be able to attend the National Instruments training courses a few years ago which I think helped me but only by using it have I become an 'average' LabVIEW developer.
I'd say a few weeks at most if you devote the majority of your work time to it. I had a similar background to yours when I started to develop in LabVIEW. The hardest part was adapting to the lack of variables. There are local variables, but they're not what you're used to at all. Additionally, their functions, called Virtual Instruments (VIs), can have multiple inputs and outputs, similar to how Python can handle n-tuples.
I will warn you, their array handling features are terrible. A lot of general concepts you might be used to are difficult to implement. My mantra when working with the language is that it makes hard things easy and easy things hard. There are also a lot of "gotchas" in the language set, especially with their DAQmx functions. I'm not sure what you're planning on developing, and their Real-Time module has its own issues as well, different from those of the main language set.
I would definitely spend some time on NI's website and read as many whitepapers as you can, especially about good design practices, here and here. Learn their State Machine (here or here) and Producer/Consumer patterns well; they're the backbone of many applications you'll be writing.
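For readers coming from text-based languages, the shape of the producer/consumer pattern is the same one you would build around a thread-safe queue; a rough Python analogue (not LabVIEW code, just the idea):

    # Rough analogue of LabVIEW's producer/consumer pattern: one loop
    # generates data (e.g. acquisition), the other consumes it (e.g. logging),
    # decoupled by a thread-safe queue.
    import queue
    import threading

    q = queue.Queue()

    def producer():
        for sample in range(5):   # stand-in for a DAQ read loop
            q.put(sample)
        q.put(None)               # sentinel: tells the consumer to stop

    def consumer():
        while True:
            item = q.get()
            if item is None:
                break
            print("processing sample", item)

    threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()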
Good luck, it will make your head spin for a while.
There are some excellent resources to help you get started. If your employer can afford training, you can get started pretty quickly by taking a week of training run by National Instruments. The NI website also has an outstanding developer community that is highly responsive to questions even from novice developers. But I would say that the key to being comfortable with the idioms and style of the language is just plain old practice that you get by solving problems using LabVIEW on a regular basis.
You will find eventually that there is the question of hardware and instruments. LabVIEW is really all about data acquisition -- either through NI's DAQ hardware, through traditional GPIB instruments, or through 3rd-party APIs (ActiveX, .NET assemblies). If you're using LabVIEW, you're probably interfacing to hardware of some type. This can get really challenging with complex instruments and measurements. If you're getting started, I would recommend making sure that you have unlimited access to at least some of the hardware you'll be working with. In other words, make sure that your manager understands that you need a lot of access to the hardware in order to get good at developing with it.
We are using LabVIEW to create test software for our factory test systems. In the past years I have trained several beginners in LabVIEW. I would say it depends on how good you are at learning new concepts. I have trained some to be able to produce standalone applications using the queued message handler concept, building dynamic GUIs and using hardware drivers within about 3 weeks. Unfortunately there were others as well who were only able to learn half of that within half a year.
The most important thing in my opinion is the learning source. Having an experienced LabVIEW user who can guide you is the best option. If no one is available, I would recommend YouTube tutorials combined with the shipped LabVIEW examples.
The LabVIEW core tutorials are not very handy in my opinion. Those are quite boring and far from what you really need to get started.
I'm looking to develop a new Forth system, aimed at making game development easier on one or possibly several retro console platforms. I'm something of a Forth beginner, and need your help deciding which Forth codebase to start porting from.
I'm basically looking for the merits / disadvantages of particular Forths vs each other. I've read the source to JonesFORTH as well as both praise and criticism of it, and discussions on ANS forth, and have unfortunately been left feeling rather confused. The Forth community, from what I can tell, seems to be fairly brutally divided along the standards-compliance issue, with very good arguments made by both camps as to why the standard is both a good, and a terrible, thing. However I cannot seem to find good practical advice on what exactly the standard changes, other than a general sense that it makes things more complicated and bloaty than they perhaps need to be.
I'm hoping to make development easier for programmers and hobbyists currently scared off by the prospect of developing in assembly or C, so I'm leaning towards a more simple Forth, but I really don't know enough about Forth yet to make an educated decision.
I'm kind of where you are right now. I've been casually reading up on Forth for a while and also think it has a lot of potential--maybe for scripting too. However, I am nowhere near an authority, but here are my thoughts.
First of all, Forth seems to be the ultimate programmable programming language--even more than lisp. To that end, I'm not sure how much C or assembly you'll need to code. I was leaning more towards developing vocabularies in Forth itself. After all, that's how it's supposed to be used.
Second, of the implementations I've tried, I like Gforth and Pforth the best.
Which ones are you using?
Finally, here is a list of online forth reading I've discovered:
Starting Forth
Thinking Forth
Forth Primer
Forth Language
Forth Tutorials
Pforth Tutorial
Gforth Manual
Begin Forth
WikiForth
OLPC Forth
Lets Build a Compiler
Have you found any others?
A basic Forth system is very small and can quickly be implemented. I think the following page will be very useful for you: "A sometimes minimal Forth compiler and tutorial for Linux / i386 systems". It is a linear tutorial on bootstrapping a Forth system!
Be sure to implement the defining words (words executing at compile time). That is where the power of Forth comes from; making it truly extensible.
I haven't actually tried it, but I once settled for bigFORTH as it is one of the few Forths with an open source license. It is also actively maintained; for instance there was a release earlier this year.
I have collected a set of annotated links about Forth over the past few years. Some of it is hard to find stuff.
Regarding the standards talk: this is a 20-year-old discussion. I think it is more important to actually do something instead of talking: implement software, provide killer examples, write documentation, invite others to participate, etc. Charles Moore has since moved on and done Forth chips, machineForth and ColorFORTH. If you search for "machineForth" and "ColorFORTH" in the annotated list of links it is possible to find some interesting articles and statements.
Charles Moore also has a blog now.
One thing you might want to try as a first step is to write a Forth interpreter in some higher level language like Java, C++, Python, etc. It will lack the main benefits of a machine-level Forth: speed and compactness, but by writing your own virtual machine, you'll gain an understanding of the different types of threading, etc., used in various flavors of Forth.
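To make that concrete, here is a deliberately tiny sketch of such an interpreter in Python; it only handles integers and a handful of words, but it shows the core loop (a data stack plus a dictionary of words) that every Forth is built around:

    # Toy Forth-style interpreter: a data stack plus a dictionary of words.
    # A real Forth adds compilation (":" and ";"), control flow, memory access, etc.
    def interpret(source, stack=None):
        stack = [] if stack is None else stack
        words = {
            "+":    lambda s: s.append(s.pop() + s.pop()),
            "*":    lambda s: s.append(s.pop() * s.pop()),
            "dup":  lambda s: s.append(s[-1]),
            "drop": lambda s: s.pop(),
            "swap": lambda s: s.extend([s.pop(), s.pop()]),
            ".":    lambda s: print(s.pop()),
        }
        for token in source.split():
            if token in words:
                words[token](stack)
            else:
                stack.append(int(token))   # anything unknown must be a number
        return stack

    interpret("2 3 + dup * .")   # prints 25

Swapping the Python dictionary and loop for threaded machine code is essentially the step the rest of this answer describes.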
Most new Forth implementations are basically started by looking at the CPU as a virtual machine anyway -- so making an explicit virtual machine is not knowledge wasted when it comes times to move from an interpreted version to a machine-code version. Also, you'll have a blueprint of your own code to cross-reference when building the machine version.
If, however, you are really, really comfortable with machine code, you can probably skip the above step and just port one of the standards directly.
As far as standards, I'd pick some standard, but stay minimal to start with. If you need to go beyond a standard in terms of things other than user-defined words, you can always grow towards that -- either standard or non-standard -- as implementation needs dictate. One of the coolest things about Forth is that it's so simple you can re-write the entire compiler easier than you can re-write some of the Forth code you'll be running on it, lol.
I recommend hunting up eForth, the one Ting wrote. It's kind of a modern FIG-Forth in some ways. He has a kernel written in C, but eForth kernels/systems have already been ported to many platforms/CPUs, and most have assembly kernels/cores. However, there's also this cool kinda-Forth called Factor. It started out as 'just for games' and ended up in a wild and fun corner of the programming language world.
Gameduino, a game development platform on top of Arduino, has a minimal forth CPU (J1 forth) running inside.
In addition to the Verilog source code of the J1 Forth, there is also a PC simulator and a compiler written in Google Go, here, by Serge Zaitsev.
J1 is already implemented in Xilinx FPGA. It is also possible to "simulate" the J1 CPU on any modern MCU.
Regarding ANSI compatibility, if you're using Forth for your own application you can leave out anything that you don't need, or even change things you don't like. Indirect-threaded Forths are among the most compact on 16-bit processors (no longer on 64-bitters). So this is probably the way to go.
The family of indirect-threaded Forths called FIG-Forth is still educational material. jonesforth is an example of Forths that continue in the same vein. I myself am the author of ciforth, which was one of the inspirations for jonesforth, and of yourforth, which is probably worth your while because it tries to explain the reasons for designing things the way they are.
Ting's eForth (found all over the place) and CamelForth are good examples too. gforth, with its detour via C and its bulk, is less valuable to you.
One piece of general advice: if you need a code word (coded in assembler, not Forth -- the lowest level), you're better off stealing it than reimplementing it. Code for, say, a 32-bit by 16-bit divide on all but the most obscure processors is bound to be found somewhere. So collect a few Forths that are implemented on the processor you're using.
You must select a processor, and that will lock you in pretty much. Then you can just use one of the CamelForths (because it exists for most popular processors) and start on your application. If you're not quite happy, you can modify it to become your own Forth. If you're on the 6809, a Renesas processor, or a 16-bit i86, you can use my ciforth as a starting point.
I know this is an old question, but for the record, another great Forth to start off with is "ff" (Felix Forth).
There is the assembly version which is written in x86 (NASM) and ARM (gas) here.
And the more recent versions, written with an assembler created in the Forth itself, are on the creator's own page.
I have noticed that many big applications (e.g. social websites such as Facebook) are built with many languages in their platform.
They usually start with AJAX browser support, then scale down to PHP scripting, then move towards a powerful OOP technology such as Java or .NET, and finally to a lower-level language such as C to increase performance in crucial operations.
My question is how I should determine the boundaries of the layers between languages. When PHP, when Java, when C and so on. And the other question is whether those languages should integrate in a vertical fashion for simplicity and maintenance, or whether there could be cases where you decide to program one module of your app in Java and another in native C.
What are the context variables that push me to move to a better-performing language? (e.g. concurrency issues due to an increase in users)
Don't tell me that PHP overlaps with .NET and Java technologies. At the starting point it does, but when the network is overloaded you start seeing the differences. I mean, how can I achieve multithreading in PHP with the same performance as in Java? The reason it's hard to answer my question is that there is not much reading material about this. You may find some good books covering PHP, but few telling you how, when and why to integrate different languages.
Each language was created for different purposes, Python is strong with string operations, Perl very powerful in batch scripting, PHP a very reliable application web server, C the mother of most popular languages.
Best,
Demian.
On one end of the scale, you move to a higher performance language whenever your profiling and measurements tell you that you have a bottleneck that can't be fixed with better algorithms, data structures, or other optimisation.
At the other end, you move to a higher level language (ie. more abstraction, better libraries) whenever your management allow you to do so. ;)
I believe most teams simply use what they are best familiar with.
There are also questions of licensing that can influence the decision.
That is, if you're talking about technologies that compare to each other and solve the problem on the same level (for example ASP.NET/JSF/JSP/PHP...). But you can't compare .NET with C++ for example, they are meant to solve different problems on different abstraction levels.
My criterion for any programming language is "does it help me to get the job done or does it just get in the way?" If the latter, then it's time to move on.
From an economical point of view the answer is easy: on a regular basis just look what will be cheaper. Either continue with the current technology and maybe stretch the envelope a bit more. Or switch to something new. When you compare the two alternatives the cost of the investment already done is not important anymore since you've already spent that money/effort. You only have to look ahead: cost of licenses, education, etc.
Of course this is easier said than done, but just sitting down with a few people, thinking about it, and maybe trying to come up with some numbers already helps a lot. I have seen too many projects that continued with technology that really wasn't suited for the job anymore.
Also hard numbers don't tell the whole story. There will be resistance because of unfamiliar technology, experts who are losing their status, etc.
Identify the bottleneck
Solve bottleneck
Go to 1
I'm sure you can imagine that step 2 is the one where decisions like "What programming language do we use" and "where do we put the coffee machine" come into play. That's the basic rule.
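Step 1 is a measurement question before it is a language question; in Python, for instance, the standard-library profiler already tells you where the time goes (the functions below are hypothetical placeholders):

    # Find the bottleneck before arguing about languages.
    import cProfile
    import pstats

    def parse(n):                 # placeholder for your own I/O-ish work
        return [str(i) for i in range(n)]

    def crunch(n):                # placeholder for your own numeric work
        return sum(i * i for i in range(n))

    def main():
        parse(200000)
        crunch(2000000)

    cProfile.run("main()", "profile.out")
    stats = pstats.Stats("profile.out")
    stats.sort_stats("cumulative").print_stats(10)   # top 10 call sites by time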
There are many scripting language communities claiming that the language can be used for everything, but in fact nearly everybody uses it for one specific thing, e.g. web development. If I take a look at Ruby, for example, they tell you it's general-purpose, but actually everybody is using it with Rails for web development only.
Can you list me some uses of popular general-purpose scripting languages for the local PC? (except embedding) Are there any?
Is the fast development usually worth having to bring the whole interpreter with your program? Then there would also be some language-dependent performance and stability problems in most cases.
best regards,
lamas
I tend to use Python for most things that aren't compute bound, i.e. they aren't restricted by how many computations you do per second. Some of the things I've used Python for are:
General scripts to manipulate images etc. with the Python Imaging Library.
GUI frontends for command line applications using the pexpect module.
Mathematical modeling of microbial systems.
Bioinformatics.
Some web programming.
etc...
When the program/algorithm is compute bound, I use C together with Python and Ctypes. Does this fit your definition of general purpose? It's certainly useful for a wide variety of applications, but not suitable if the program needs to crunch numbers fast.
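The C-plus-ctypes combination looks roughly like this; the library and function names are invented for the sketch, and the assumed C source is shown only in the comment:

    # Python side of a ctypes call into a small C library.
    # Assumes a shared library built from C along these lines:
    #   /* libcrunch.c -- build with: gcc -shared -fPIC -o libcrunch.so libcrunch.c */
    #   double dot(const double *a, const double *b, int n) {
    #       double s = 0.0;
    #       for (int i = 0; i < n; i++) s += a[i] * b[i];
    #       return s;
    #   }
    import ctypes

    lib = ctypes.CDLL("./libcrunch.so")          # hypothetical library name
    lib.dot.restype = ctypes.c_double
    lib.dot.argtypes = [ctypes.POINTER(ctypes.c_double),
                        ctypes.POINTER(ctypes.c_double),
                        ctypes.c_int]

    n = 4
    Array = ctypes.c_double * n
    a = Array(1.0, 2.0, 3.0, 4.0)
    b = Array(4.0, 3.0, 2.0, 1.0)
    print(lib.dot(a, b, n))                      # 20.0, computed in C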
Stability: Python 2.5/2.6 is rock solid. Never had a crash that wasn't caused by self-stupidity.
Fast development: It's definitely worth it for me. For the most part, in the field where I work, programmer time is orders of magnitude more valuable than processor time. I'm quite happy to let a program run for hours if I can write it in a few days instead of a few weeks.
I often use PHP for things that I used to use bat files for. Much easier to write. Ironically, the deployment scripts to create installable materials for my web apps from the subversion sources are written in PHP.
Python is popular in the gaming community. EVE Online is written in python.
claiming that they can be used for everything but I often can't find any examples for that
You are basing your question on an incorrect assumption. Although, as pointed out, a Turing complete language will be able to compute what you require ... languages are 'viewed' by most as the sum of their most useful features and productive semantics.
The reality is:
Most scripting languages can do the same things, or support the most common things via libraries.
Some languages make a subset of operations more convenient, take Perl and regular expressions as an example
CPU time is cheap, as is RAM. Simple to understand code is the priority for most people.
The rise of the scripting languages is natural. Trying to assert any one language, approach or level of execution is good for a range of situations is usually fruitless.
What do you want?
What is the best language for that?
Is it fast enough or small enough? Usually the answer is yes.
Imagine trying to use Python where you should be using Erlang, or C instead of Lisp, because you thought all languages are equal. They aren't, even though you can achieve the same things in a problem domain in most languages/platforms, with varying levels of ballache dependent on the task.
I often use Ruby for what other people would create bash/sh files for. I find Ruby syntax intuitive for batch tasks, along with a lot of other sorts of tasks (it's my go-to language).
Perl is extremely popular for general scripting on Unixes, to the point that there are package managers, websites and maintenance scripts written in Perl.
Python is extremely popular for both web and application use.
VBA is popular for being abused to write programs inside of Access, and was also once commonly used in ASP for websites (right?)
Nobody mentioned AppleScript!
Hahah, no seriously, Perl runs everywhere, is installed by default on (almost) any Unix-family OS (and is easy to get on Windows), and is extremely useful for gluing things together. And if you browse a bit at CPAN you'll see that it's extremely general-purpose. "Swiss army chainsaw" was intended as a slur but I think of it fondly. Performance is good too, though it hardly ever actually matters. Larry Wall's goal was "make easy things easy and hard things possible".
OK OK, so I'm a fanboy still, sigh.
I've been in the embedded space for a while now, and it seems that most programmers I talk to seem to be doing things pretty much the same way it was done 15 years or more ago: Waterfall(ish) Development, command line tools and a small group uses lint.
Contrast this with the server/desktop environment, where there seems to be lots of activity related to all sorts of facets of programming:
XP, Scrum, Iterative, Lean/Agile
Continuous Integration
Automated Builds
Automated Unit Testing Frameworks
Refactoring tool support
Is it just that the embedded environment makes it more difficult to implement new practices or tools?
Is it that the mindset of embedded programmers steers them away from new tools/concepts?
Is it that management in the typical embedded industry is behind the curve compared to IT-focused fields?
I do realize that this is a generalization, and some embedded projects do use Scrum, Agile, CI, Automated Builds (in fact I worked at a company that had that in place since the 80s). But my impression is that it is a very small percentage.
We are all used to the fact that our desktop PC crashes once in a while (or at least an application on the desktop suddenly disappears). It's no big deal. The next patch will fix it.
In the embedded space, you are building something which can't be patched. Lives can depend on your device (in a car, an elevator or a medical system). Most devices are installed and then must run unattended for years. So embedded people tend to be very conservative. TCP/IP is often "too modern". They stick to their trusty serial bus with a communication "stack" that is roughly 50 lines of assembler code.
What's worse, you simply don't have an abundance of space on the device, which means you can't use one of the latest programming languages that make TDD and automated builds a breeze.
Next, a lot of embedded development environments are proprietary. If your supplier doesn't support it, you won't get it. Linux has started to weaken this in the past years but a whole lot of devices are not powerful enough to run Linux, yet. And even if they were, the CPU power would be used for something else instead of running a fancy OS which comes with source.
So yes, there are powerful forces working in the background to keep the embedded space where it is.
Are embedded developers more conservative than their desktop brethrens?
Yes, because they are more concerned with the consequences of making errors. It’s a big deal to patch an embedded device. Not so much for a desktop app.
Waterfall-ish development is necessary in the embedded world because you are generally building hardware at the same time as the software. You need to know as soon as possible how much memory, how much processor speed, how big a flash, and what, if any, special hardware is necessary. The hardware design can't be completed until you know these answers. Once you decide, that is pretty much it. The lead time for redoing a board is far too long. If you mess up, then the software is going to have to work around any shortcomings. Not usually an ideal situation.
As for the tools, that is largely based on what the supplier provides and any biases of the developers. On some projects I have used XP Embedded and got pretty much everything that the desktop developer gets.
XP, Scrum, Iterative, Lean/Agile:
Since most of the design is done up front (by necessity), and you usually don’t have working hardware when it is time to code, the quick turn-around processes don’t really provide much benefit.
Continuous Integration/Automated Builds
Nice to have, but not really necessary. What…it takes about 15 seconds to open the IDE and press the compile button.
Automated Unit Testing
No reason why this shouldn't be done, but only part of the code can truly be automatically tested because the other part is either hardware dependent or has some other dependencies like timing. So you can't really tell if the code is working by the automated tests.
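One way to enlarge the automatically testable fraction is to hide the hardware behind a thin interface and run the logic against a fake; the sketch below uses Python for brevity, but the same structure works in C with function pointers (the ADC names and averaging logic are purely illustrative):

    # Isolate the hardware-dependent part so the logic can be unit tested
    # without the real device.
    import unittest

    class FakeAdc(object):
        """Stands in for the real ADC driver during tests."""
        def __init__(self, samples):
            self.samples = list(samples)
        def read(self):
            return self.samples.pop(0)

    def average_reading(adc, count):
        """Pure logic: average 'count' samples from whatever ADC it is given."""
        return sum(adc.read() for _ in range(count)) / float(count)

    class AverageReadingTest(unittest.TestCase):
        def test_average_of_three_samples(self):
            self.assertAlmostEqual(average_reading(FakeAdc([10, 20, 30]), 3), 20.0)

    if __name__ == "__main__":
        unittest.main()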
Refactoring Tool Support
An embedded processor vendor's product is the processor. They provide the IDE support in order to encourage you to purchase their processor. They couldn't possibly afford to pay for a Visual Studio-sized development team in order to add all the bells and whistles to an IDE which isn't even their product.
These are some reasons I can think of:
Embedded teams are usually smaller than desktop/web teams. The code base is smaller.
System testing is much more important than unit testing. The software needs to be tested together with the hardware. Fully automated testing is not possible and can only be applied to a small fraction of the code base.
Embedded engineers have a different skill set than software engineers. They interact with hardware and know how to use an oscilloscope and a logic analyzer. Usually, the difficult part of their job is to find a glitch in the hardware. They do not have the time to adopt modern software methodologies.
Embedded programmers are mostly electrical engineers, not computer scientists or software engineers.
They excel in their field of expertise. They bring a slower more methodical approach than most computer programmers. When it comes to programming firmware, electrical engineers know just enough to be dangerous.
Here are some of the things I've noticed electrical engineers doing in C:
All code in ONE single file
Math like variable names: x, y, z
No or missing indentation
No standard comment headers
No comments at all
Too many comments
In their defense EE's didn't train to be computer programmers, it's not their job. I think software is the hardest part of creating embedded devices. Designing PCBs and choosing components requires skill but pales in comparison to the complexity of 10,000 lines of code.
Embedded programmers also have to deal with IDEs that look and behave like the IDEs of the '90s:
MPLAB
AVR Studio
Is it just that the embedded environment makes it more difficult to implement new practices or tools?
It's partly a matter of scale. Software is NOT the product; the product is the product. However, there are thousands of different types of microcontrollers and microprocessors out there, and the most popular thousand have 3-4 different compilers that aren't completely compatible.
So a given tool is only going to be used by a few hundred or thousand engineers.
In Windows development, however, there are millions of programmers of many levels. The tools produce software directly, and that software is the product, so it's going to get more eyeballs and more money.
Each new product that an engineer puts out might have a different processor.
Is it that the mindset of embedded programmers steers them away from new tools/concepts?
Embedded programmers are generally software or firmware engineers, as opposed to programmers. Engineering implies a certain amount of design, design analysis, and design proof prior to implementation; in other words, a ton of work is done before the first line of code is written, and the documentation, ideally, is specific enough that implementation is merely turning pseudocode-like documentation into compilable code.
New tools and concepts are needed in the design phase, not the implementation phase. An IDE with IntelliSense may be nice, but by the time the code is being written it's useless cruft - they already know what they need.
CAD (computer-aided design) tools are being developed for firmware engineers that are used in the design phase to develop models and simulations that are directly turned into code. MATLAB and Simulink are good examples of this. The system as a whole is designed.
In fact, one might wonder why software developers are still writing code while the engineers are making data/program flow charts and state machine diagrams. Why is UML uptake so slow in the application world? It sounds like application developers can use some of the tools in common use among embedded systems engineers...
Is it that management in the typical embedded industry is behind the curve compared to IT-focused fields?
Actually, it's likely the reverse. When a project starts the engineers have to pick the processor.
The processor manufacturers get less money on older chips, so they pitch the latest and greatest, and they are generally cheaper overall than the chips used in the previous design (either by die shrinks, more integration, etc).
So the design is actually using the latest and greatest chips.
The downside is that the compiler and tools are often immature. They can only build so much on the older tools, and since the target moves with each new processor, they can't focus on a lot of the nice features application programmers might like. Especially since many of those features won't be useful to an embedded engineer.
There are many other factors, some of which are enumerated by other answers, but it's really a different field even though they both involve programming.
-Adam
I would also add a couple of points here:
In general, embedded projects tend to be smaller than desktop projects. This decreases the need for very elaborate software processes.
Requirements for embedded projects are often precise and better defined, so Scrum and agile are not as crucial.
Finally, embedded projects are generally a mix of software and hardware. With the software being only a part of the project, embedded developers invest less time in software processes.
I agree with much that's been written here:
Old tools without the bells and whistles: far fewer refactorings are available, if any at all, largely because of C/C++'s preprocessor directives, and choosing a unit test framework is time-consuming compared to simply using JUnit.
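A small hypothetical fragment shows why: to rename or extract anything safely, a refactoring tool would have to reason about every build configuration at once (the register addresses below are made up):

    #include <stdint.h>

    #if defined(BOARD_REV_A)
      #define UART_BASE 0x40001000u
    #elif defined(BOARD_REV_B)
      #define UART_BASE 0x40011000u
    #else
      #error "define BOARD_REV_A or BOARD_REV_B"
    #endif

    void uart_put(uint8_t byte)
    {
    #if defined(BOARD_REV_A)
        /* Rev A needs a ready-bit poll before writing. */
        volatile uint32_t *status = (volatile uint32_t *)(UART_BASE + 0x04u);
        while ((*status & 0x01u) == 0u) { /* wait for TX ready */ }
    #endif
        volatile uint32_t *data = (volatile uint32_t *)(UART_BASE + 0x00u);
        *data = byte;
    }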
It's true that waterfall feels more efficient. If I'm going to open the hood and get into a hard-to-access place, I'll want to do as much as I can while I'm there, rather than exiting and closing the hood after each task just to open it again. The idea that creating the most important features first allows you the option of shipping when promised instead of going late can also be hard to grasp when you believe nothing is optional, which might be true. IME, though, when the deadline looms something always becomes unnecessary.
Less visibility into the system makes it riskier to revisit existing code to refactor or change functionality. There are often timing issues, which automated tests running on the host using stubs and mocks won't catch. It can be hard for someone who's been bitten by these issues to take a different perspective.
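A classic example of the kind of bug a host-run test with stubbed interrupts won't reveal (the IRQ macros below are placeholders for whatever the real toolchain provides):

    #include <stdint.h>

    static volatile uint16_t ms_ticks;    /* incremented by a timer interrupt */

    void timer_isr(void)                  /* registration is target specific */
    {
        ms_ticks++;
    }

    /* Buggy on an 8-bit target: the 16-bit read takes two bus cycles, and the
     * ISR can fire between them, so the caller may see a torn value. A unit
     * test on a 32-bit host with a stubbed ISR will never catch this. */
    uint16_t get_ticks_buggy(void)
    {
        return ms_ticks;
    }

    #define DISABLE_IRQ()   /* e.g. cli() on AVR -- placeholder */
    #define ENABLE_IRQ()    /* e.g. sei() on AVR -- placeholder */

    uint16_t get_ticks(void)
    {
        uint16_t copy;
        DISABLE_IRQ();                    /* make the two-cycle copy atomic */
        copy = ms_ticks;
        ENABLE_IRQ();
        return copy;
    }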
I'll add one more: the language of agile/scrum is in workstation programmers' terms. To an embedded developer who knows just enough C to get the job done, what is a class, object, or method? When the "user" is typically regarded as a physical person clicking and typing, and the product has no human user interface, it's easy to dismiss the idea as Not Applicable. This may change with James Grenning's forthcoming book about TDD in C. I've been reading the beta ebook and it's quite good.
I would say it's more a lack of good toolsets. It's really frustrating when you want to use C++ for its compile-time features not present in C (templates, namespaces, object-orientedness, etc.) rather than its run-time features (exceptions, virtual functions) -- but the device manufacturers and 3rd parties just give you a C compiler, not C++. This probably results more from market size (hundreds of millions of PCs running Windows, with hundreds of thousands or even millions of developers, vs. hundreds of thousands of Chip X, with hundreds or low thousands of developers) than from device capability.
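As a sketch of what the C-only fallback looks like (all names here are made up): in C++ a small function template would be type checked and would evaluate its arguments exactly once, whereas the macro you end up writing does neither, and namespaces degrade into naming prefixes:

    #include <stdint.h>

    #define MAX(a, b)  (((a) > (b)) ? (a) : (b))    /* macro "generic" max */

    static int16_t next_sample(void) { return 42; } /* stand-in for a driver call */

    int16_t peak_of_next_two(void)
    {
        /* Pitfall: one of the next_sample() calls is evaluated twice. */
        return MAX(next_sample(), next_sample());
    }

    /* "Namespaces" become naming conventions: every symbol gets a prefix. */
    void uartdrv_init(void);
    void uartdrv_send(uint8_t byte);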
edit: w/r/t robustness: there are different markets out there. The car/elevator/aeronautics/medical device market is going to have to be rigorous about getting rid of bugs. Other markets (toys, MP3 players, & other consumer electronics) can be less rigorous, especially if it's possible to upgrade code in the field. ("Oops! We're sorry we deleted your music library! We just fixed that bug, you can grab the latest release at our website at your convenience!")
I'd say different sorts of problem environments.
The biggest problem with the waterfall methodology is that requirements change. In every environment I've been in, there has been at least the likelihood of a requirements change, which means that the successful methodologies are those that keep flexibility as long as possible. Even if the customer has signed off in blood, and stands to forfeit his left hand if he suggests a change, there are changes coming in the future.
In embedded programming, it is possible to nail the requirements down up front. They come from the behavior of the system as a whole, and engineers are good at nailing down system requirements. Nobody's going to come in halfway through and say that the user now wants the pacemaker to deliver syncopated impulses while the recipient is dancing.
Once the requirements are frozen beyond thawing, which never happens in software designed for human use, waterfall is a very efficient methodology. The team proceeds from well-specified requirements to overall design, then detailed design, then coding, verifying all the way that the stages are done correctly. Then it's time to debug the code (since it's never perfect when written), and final tests to make sure the code meets the requirements.
I would also posit that some fields are inherently conservative. Take the transportation industry, for example, where trains and planes may have life spans of 30 years or so. Customers tend to require tried-and-true practices, probably derived from IEEE. Waterfall is what customers know, and waterfall is what customers demand.