LabVIEW + National Instruments hardware or ???

I'm in the process of buying a new data acquisition system for my company to use for various projects. At first, its primary purpose will be to monitor up to 20 thermocouples and control the temperature of a composites oven. However, I also plan on using it to monitor accelerometers and strain gauges, and to act as a signal generator.
I probably won't be the only one to use it, but I have a good bit of programming experience with Atmel microcontrollers (C). I've used LabVIEW before, but that was ~5 years ago. LabVIEW would be good because it is easy for both me and my coworkers to pick up. On the flip side, it's expensive. Right now I have an NI CompactDAQ system with two voltage cards and one thermocouple card plus LabVIEW specced out, and it's going to cost $5779!
I'm going to try to spec out the same I/O capabilities with different NI hardware plus LabVIEW to see if I can get it for less money. I'd also like to hear if anyone has any suggestions other than LabVIEW for me.
Thanks in advance!

Welcome to test and measurement. It's pretty expensive for pre-built stuff, but you trade money for time.
You might check out the somewhat less expensive Agilent 34970A (and associated cards). It's a great workhorse for different kinds of sensing, and, if I recall correctly, it comes with some basic software.
For simple temperature control, you might consider a PID controller (Watlow or Omega used to be the brands, but it's been a few years since I've looked at them).
You also might look into the low-cost USB solutions from NI. The channel count is lower, but they're fairly inexpensive. They do still require software of some type, though.
There are also a fair number of good smaller companies (like Hytek Automation) that produce some types of measurement and control devices or sub-assemblies, but YMMV.

There are a lot of misconceptions about what will and will not work with LabView and what you do and do not need to build a decent system with it.
First off, as others have said, test and measurement is expensive. Regardless of what you end up doing, the system you describe IS going to cost thousands to build.
Second, you don't NEED to use NI hardware with LabView. For thermocouples your best bet is to look into multichannel or multiple single-channel thermocouple units - something that reads from a thermocouple and outputs to something like RS-232, etc. The OMEGABUS Digital Transmitters are an example, but many others exist.
In this way, you need only a breakout card with lots of RS-232 ports, and you can grow the system as your needs grow. You can still use LabView to acquire the data via RS-232 and then display, log, and process it however you like.
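As a rough illustration of that approach, here is a minimal C sketch that opens a COM port on Windows and reads newline-terminated ASCII readings. The port name, baud rate, and the assumption that the transmitter streams plain ASCII lines are placeholders you would adjust for your actual device.

    /* Minimal sketch: read ASCII lines from an RS-232 thermocouple transmitter.
       Assumes the device streams newline-terminated readings at 9600 8N1.
       Port name and framing are placeholders - check your transmitter's manual. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE h = CreateFileA("\\\\.\\COM3", GENERIC_READ | GENERIC_WRITE,
                               0, NULL, OPEN_EXISTING, 0, NULL);
        if (h == INVALID_HANDLE_VALUE) {
            fprintf(stderr, "could not open COM3\n");
            return 1;
        }

        DCB dcb = { .DCBlength = sizeof(DCB) };
        GetCommState(h, &dcb);
        dcb.BaudRate = CBR_9600;
        dcb.ByteSize = 8;
        dcb.Parity   = NOPARITY;
        dcb.StopBits = ONESTOPBIT;
        SetCommState(h, &dcb);

        COMMTIMEOUTS to = { .ReadIntervalTimeout = 50,
                            .ReadTotalTimeoutConstant = 1000 };
        SetCommTimeouts(h, &to);

        char line[64];
        int  n = 0;
        for (;;) {
            char c;
            DWORD got = 0;
            if (ReadFile(h, &c, 1, &got, NULL) && got == 1) {
                if (c == '\n' || n == sizeof(line) - 1) {
                    line[n] = '\0';
                    printf("reading: %s\n", line);   /* log, parse, etc. */
                    n = 0;
                } else if (c != '\r') {
                    line[n++] = c;
                }
            }
        }
        /* not reached */
        CloseHandle(h);
        return 0;
    }

In LabView you would do the equivalent with the serial/VISA functions, but the same device can just as easily feed a small C or C# logger like this.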
Third-party signal generators would also work. You can pick up good ones (with GPIB connections) reasonably cheaply, and with a GPIB board you can integrate them into LabView as well. That's if you want something like a function generator, of course (duty-cycled pulses, standard sine/triangle/ramp functions, etc.). If you're talking about arbitrary signal generation, then that remains a reasonably expensive thing to do (if $5000 is our goalpost for "expensive").
This also hinges on what you need the signal generation for - if it's for control signals then, again, there may be cheaper and more robust options available. For temperature control, for example, separate hardware PID controllers are probably the best bet. That also takes care of your thermocouple problem, since PID controllers will typically accept thermocouple inputs. In this way you only need one interface (RS-232, for example) to the external PID controller, and you have full access in LabView to temperature readings as well as the ability to control setpoints and PID parameters, all in one unit.
Perhaps if you could elaborate on not just the system components as you've planned them at present, but the ultimate system functionality, it would be easier to suggest alternatives - not simply alternative hardware, but alternative system designs altogether.
edit :
Have a look at Omega CNi8C22-C24 and CNiS8C24-C24 units -> these are temperature and strain DIN PID units which will take inputs from your thermocouples and strain gauges, process the inputs into proper measurements, and communicate with LabView (or anything else) via RS-232.

This isn't necessarily a software answer, but if you want low-cost data acquisition, you might want to look at the LabJack. It's basically a microcontroller and USB interface wrapped in a nice box (like an Arduino (Atmel AVR + USB-serial converter), but closed source) with a lot of drivers and functions for various languages, including LabVIEW.
Reading a thermocouple can be tough because microvolts are significant, so you either need a high resolution A/D or an amplifier on the input. I think NI may sell a specialized digitizer for thermocouple readings, but again you'll pay.
As far as the software answer goes, LabVIEW will work nicely with almost any hardware you choose. For example, I built my own temperature controller based on an Arduino (with an AD7780), wrote a little interface using serial commands, and then talked to it from LabVIEW. But if you're willing to pay a premium for a guaranteed-to-work-out-of-the-box solution, you can't go wrong with LabVIEW and an NI part.
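In case it helps to see the shape of such an interface, here is a minimal Arduino-style sketch of the kind of serial command protocol described above. The command names ("T?" to read the temperature, "S<value>" to set a setpoint) and the readTemperature() helper are purely hypothetical stand-ins, not the actual code from that project.

    /* Hypothetical serial command interface for a small temperature controller.
       "T?"    -> reply with the current temperature
       "S25.5" -> set the control setpoint to 25.5
       readTemperature() stands in for whatever ADC/AD7780 code you actually use. */

    float setpoint = 25.0;

    float readTemperature(void) {
      /* placeholder: return the latest converted temperature */
      return 25.0;
    }

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      if (Serial.available()) {
        String cmd = Serial.readStringUntil('\n');
        cmd.trim();
        if (cmd == "T?") {
          Serial.println(readTemperature(), 2);      /* e.g. "25.00" */
        } else if (cmd.startsWith("S")) {
          setpoint = cmd.substring(1).toFloat();
          Serial.println("OK");
        } else {
          Serial.println("ERR");
        }
      }
      /* ...control loop would run here, driving the heater toward setpoint... */
    }

On the LabVIEW side you would then just open the COM port, write "T?\n", and read back the reply string.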

LabWindows/CVI is NI's C IDE, with good integration with their instrument libraries and drivers. If you're willing to write C code, maybe you could get by with the base version of LabWindows/CVI instead of having to buy a higher-end LabVIEW version that has the functionality you need. LabWindows/CVI and LabVIEW are priced identically for the base versions, though, so that may not be much of an advantage.

Given the range of measurement types you plan to make and the fact that you want colleagues to be able to use this, I would suggest LabVIEW is a good choice - it will support everything you want to do and make it straightforward to put a decent GUI on it. Assuming you're on Windows then the base package should be adequate and if you want to build stand-alone applications, either to deploy on other PCs or to make a particular setup as simple as possible for your colleagues, you can buy the application builder separately later.
As for the DAQ hardware, you can certainly save money - e.g. Measurement Computing have a low cost 8-channel USB thermocouple input device - but that may cost you in setup time or be less robust to repeated changes in your hardware configuration for different tests.

I've got a bit of experience with LabView stuff, and if you can afford it, it's awesome (and useful for a lot of different applications).
However, if your applications are simple you might actually be able to hack something together with one or two Arduinos; it's OSS and has some good, cheap hardware boards.
LabView really comes into its own with real time applications or RAD (because GUI dev is super easy), so if all you're doing is running a couple of thermopiles I'd find something cheaper.

A few thousand dollars is not a lot of money for process monitoring and control systems. If you do a cost/benefit analysis, you will very quickly recover your development costs if the scope of the system is right and if it does the job it is intended to do.
Another tool to consider is National Instruments Measurement Studio with VB.NET. This way you can still use the NI hardware if you want and can still build nice GUIs quickly.
Alternatively, as others have said, it is perfectly viable to get industrial serial-based instruments and talk to them with LabVIEW, VB.NET, C#, or whatever you like.
If you go down the route of serial instruments, another piece of hardware that might be useful is a serial terminal server (example). These allow you to connect arbitrary numbers of serial devices to your network. Your computers can then use them as though they were physical COM ports.

Have you looked at MATLAB? It has a Data Acquisition Toolbox, and CompactDAQ is supported hardware.

LabVIEW is a great visual programming environment if you want to drag, drop, and visualize your system. NI hardware also comes with the NI-DAQmx library, which can be accessed from your own code. A feasible solution for you might be to import those libraries into another programming language and write code for all the activities you were otherwise going to perform in LabVIEW. Overheads like code optimization become your responsibility, but you are free to tweak the normal flow and introduce your own improvements at suitable points in the DAQ process.
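To give a feel for what that looks like, here is a minimal C sketch using the NI-DAQmx C API to read a few thermocouple channels. The device/channel names ("cDAQ1Mod1/ai0:3"), the thermocouple type, and the sample counts are assumptions for illustration, and error handling is reduced to a single check.

    /* Minimal sketch: read thermocouple channels through the NI-DAQmx C API.
       Channel names, TC type, and sample counts are placeholders for illustration. */
    #include <stdio.h>
    #include <NIDAQmx.h>

    #define CHK(call) do { int32 e = (call); if (e < 0) { \
        char msg[2048]; DAQmxGetExtendedErrorInfo(msg, sizeof msg); \
        fprintf(stderr, "DAQmx error: %s\n", msg); return 1; } } while (0)

    int main(void)
    {
        TaskHandle task = 0;
        float64 data[4];
        int32 read = 0;

        CHK(DAQmxCreateTask("", &task));
        /* Four J-type thermocouples, built-in cold-junction compensation */
        CHK(DAQmxCreateAIThrmcplChan(task, "cDAQ1Mod1/ai0:3", "",
                                     0.0, 300.0, DAQmx_Val_DegC,
                                     DAQmx_Val_J_Type_TC,
                                     DAQmx_Val_BuiltIn, 25.0, ""));
        CHK(DAQmxStartTask(task));

        /* One sample per channel, 10 s timeout */
        CHK(DAQmxReadAnalogF64(task, 1, 10.0, DAQmx_Val_GroupByChannel,
                               data, 4, &read, NULL));
        for (int i = 0; i < 4; i++)
            printf("ai%d: %.2f degC\n", i, data[i]);

        DAQmxStopTask(task);
        DAQmxClearTask(task);
        return 0;
    }

The same calls are available from C#, VB.NET, or Measurement Studio wrappers, so the hardware choice doesn't lock you into LabVIEW.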

Related

Where do you draw the line between what is "embedded" and what is not?

ASIDE: Yes, this can be considered a subjective question, but I hope to draw conclusions from the statistics of the responses.
There is a broad spectrum of computing devices. They range in physical sizes, computational power and electrical power. I would like to know what embedded developers think is the determining factor(s) that makes a system "embedded." I have my own determination that I will withhold for a week so as to not influence the responses.
I would say "embedded" is any device on which the end user doesn't normally install custom software of their choice. So PCs, laptops and smartphones are out, while XM radios, robot controllers, alarm clocks, pacemakers, hearing aids, the doohickey in your engine that regulates fuel injection etc. are in.
You might just start with wikipedia for a definition
http://en.wikipedia.org/wiki/Embedded_system
"An embedded system is a computer system designed to perform one or a few dedicated functions, often with real-time computing constraints. It is embedded as part of a complete device often including hardware and mechanical parts. "
Coming up with a concrete set of rules for what counts as an embedded system is to a large degree pointless. It's a term that means different things to different people - maybe even different things to the same people at different times.
There are some things that are pretty much never considered an embedded system, for example a Windows Desktop machine. However, there are companies that put their software on a Windows box - even a bog standard PC (maybe a laptop) - set things up so their application loads automatically and hides the desktop. They sell that as a single purposed machine that many people would call an embedded system (but many people wouldn't). Microsoft even sells a set of tools called Embedded Windows that helps enable these kinds of applications, though it's targeted more to OEMs who will customize the system at least somewhat instead of just putting it on a standard PC. Embedded Windows is used for things like ATM machines and many other devices. I think that most people would consider an ATM an embedded system.
But go into a 7-11 with an ATM that has a keyboard (I honestly don't know what the keyboard is for), press the right shift key 5 times and you'll get a nice Windows "StickyKeys" messagebox (I wonder if there's an exploit there - I sure hope not). So there's a Windows system there, just hidden and with some functionality removed - maybe not as much as the manufacturer would like. If you could convince it to open up notepad.exe somehow does the ATM suddenly stop being an embedded system?
Many, many people consider something like the iPhone or the iTouch an embedded system, but they have nearly as much functionality as a desktop system in many ways.
I think most people's definition of an embedded system might be similar to Justice Potter Stewart's definition of hard-core pornography:
I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it...
I consider an embedded system one where the software is rarely developed directly on the target system. This definition includes sophisticated embedded systems like the iPhone, and excludes primitive desktop systems like the Commodore 64. Not having the development tools on the target means you have to add 'reprogram device' to the edit-compile-run cycle. Debugging is also made more complicated. This encompasses most of the embedded "feel."
Software implemented in a device not intended as a general purpose computing device is an "embedded system".
Typically the system is intended for a single purpose, and the software is static.
Often the system interacts with non-human environmental inputs (sensors) and mechanical actuators, or communication with other non-human systems.
That's off the top of my head. Other views can be read at this embedded.com article
Main factors:
Installed in a fixed place somewhere (you can't carry the device itself around, only the thing it's built into)
They run a long time (often years) with little maintenance
They don't get patched often
They are small, use little power
Small or no display
+1 for a great question.
Like many things there is a spectrum.
At the "totally embedded" end you have devices designed for a single purpose. Alarm clocks, radios, cameras. You can't load new software and make it do something else. THere is no support for changing the hardware,
At the "totally non-embedded" end you have your classic PCs where everything, both HW and SW, can be replaced.
There's still a lot in between those extremes. Laptops and netbooks, for example, have minimally expandable HW, typically only memory and hard disk can be upgraded. But, the SW can be whatever you want.
My education was as a computer engineer, so my definition of embedded is hardware oriented. I draw the line at the MMU (memory management unit). If a chip has an MMU, it usually has off-chip RAM and runs an OS. If a chip does NOT have an MMU, it usually has on-board RAM and runs an RTOS, microkernel or custom executive.
This means I usually dismiss anything running linux, which is shortsighted. I admit my answer is biased towards where I tend to work: microcontroller firmware. So I am glad I asked this question and got a full spectrum of responses.
Quoting a paragraph I've written before:
An embedded system for our purposes is a computer system that has a specific and deterministic functionality [LamieReal]. Typically, processors for embedded systems contain elements such as onboard RAM, special-purpose processing elements such as a digital signal processor, and analog-to-digital and digital-to-analog converters. Since the processors have more flexibility than a straightforward CPU, a common term is microcontroller.

What microcontroller (and other components) would I need to create a timer device?

As a hobby project to keep myself out of trouble, I'd like to build a little programmable timer device. It will basically accept a program, which is a list of times, and then count down from each time in turn.
I'd like a microcontroller I can program in C or Java. I used BASIC in the past to make a little autonomous robot, so this time around I'd like something different.
What microcontroller and display would you recommend? I am looking to keep it simple, so the program would be loaded into memory via computer (serial is OK, but USB would make it easier).
Just use a PIC like 16F84 or 16F877 for this. It is more than enough.
For the display, use a 16x2 character LCD. It is easy to use and will give a nice look to your project.
The language doesn't really matter. You can use PIC C, Micro C, or anything you like. The LCD interface is really easy to drive.
For other components you will just need a crystal and two capacitors as the oscillator, plus a pull-up resistor. The rest of the components depend on the input method you are going to use to set the times.
If you are using a computer to load the list, then you will need an additional circuit to convert the signal levels. Use a MAX232 for that. If you want to use USB, go with a PIC with built-in USB support (the 18F series).
sodoityourself.com has a set of nice tutorials you can use, and you can purchase the products from them as well. I purchased from them once.
I would go with the MSP430. An eZ430 is $20 and you can get them at Digi-Key or from TI directly, then sets of 3 microcontroller boards for $10 after that. There is llvm and gcc (and binutils) compiler support. Super simple to program, extremely small, and extremely low power.
There are many ways to do this, and a number of people have already given pretty good suggestions. AVR or PIC are good starting points for a microcontroller that doesn't require too much in the way of complicated setup (hardware & software) or expense (these micros are very cheap). Honestly, I'm somewhat surprised that nobody has mentioned Arduino here yet. It happens to be pretty easy to get started with, provides a USB connection (USB-to-serial, really), and if you don't like the board the ATmega MCU is plugged into, you can later move the chip wherever you might want it. Also, while the provided programming environment gives you some high-level tools to prototype things easily, you're still free to tweak the registers on the device and write any C code you might want to run on it.
As for an LCD display, I would recommend looking for anything that's either based on an HD44780 or emulates the behavior of one. These typically use a set of parallel lines for talking to the display, but there are tons of code examples for interfacing with them. In Arduino's case, you can find examples for this type of display, and many others, on the Arduino Playground here: http://www.arduino.cc/playground/Code/LCD
As far as a clock is concerned, you can use the built-in clock that many 8-bit micros these days provide, although they're not always ideal in terms of precision. You can find an example for Arduino on doing this sort of thing here: http://www.arduino.cc/playground/Code/DateTime. If you want something that might be a little more precise you can get a DS1307 (Arduino example: http://www.arduino.cc/cgi-bin/yabb2/YaBB.pl?num=1191209057/0).
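For the countdown logic itself, something like the following Arduino-style sketch (plain millis() timing, no RTC) covers the basic behaviour described in the question. The hard-coded list of times and the Serial output are just placeholders; you'd swap in LCD output and serial program loading as described above.

    /* Minimal countdown sketch: steps through a list of durations (in seconds),
       counting each one down to zero. The hard-coded list and Serial output are
       placeholders - a real device would load the list over serial and drive an LCD. */

    const unsigned long program[] = { 30, 10, 60 };   /* seconds per step */
    const int steps = sizeof(program) / sizeof(program[0]);

    void setup() {
      Serial.begin(9600);
    }

    void loop() {
      for (int i = 0; i < steps; i++) {
        unsigned long remaining = program[i];
        unsigned long last = millis();
        while (remaining > 0) {
          if (millis() - last >= 1000UL) {   /* one second elapsed */
            last += 1000UL;
            remaining--;
            Serial.print("step ");
            Serial.print(i);
            Serial.print(": ");
            Serial.println(remaining);
          }
        }
        Serial.println("step done");         /* beep, toggle an output, etc. */
      }
      while (1) { }                          /* program finished */
    }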
I don't necessarily mean to push you toward an Arduino, since there are a huge number of ways to do this sort of thing. Lately I've been working with 32-bit ARM micros (don't go that route first - much steeper learning curve, though they have many benefits) and I might use something in that ecosystem these days, but the Arduino is easy to recommend because it's relatively inexpensive, there's a large community of people using it, and chances are you can find a code example for at least part of what you're trying to do. When you need something with more horsepower, configuration options, or RAM, there are options out there.
Here are a few places where you can find some neat hardware (Arduino-related and otherwise) for projects like the one you're describing:
SparkFun Electronics
Adafruit Industries
DigiKey (this is a general electronics supplier, they have a bit of everything)
There are certainly tons more, though :-)
I agree with the other answers about using a PIC.
The PIC16F family does have C compilers available, though it is not ideally suited for C code. If performance is an issue, the 18F family would be better.
Note also that some PICs have internal RC oscillators. These aren't as precise as external crystals, but if that doesn't matter, then it's one less component (or three with its capacitors) to put on your board.
Microchip's ICD PIC programmer (for downloading and debugging your PIC software) plugs into the PC's USB port, and connects to the microcontroller via an RJ-11 connector.
Separately, if you want the software on the microcontroller to send data to the PC (e.g. to print messages in HyperTerminal), you can use a USB to RS232/TTL converter. One end goes into your PC's USB socket, and appears as a normal serial port; the other comes out to 5 V or 3.3 V signals that can be connected directly to your processor's UART, with no level-shifting required.
We've used the TTL-232R-3V3 from FTDI Chip, which works perfectly for this kind of application.
There are several ways to do this, and there is a lot of information on the net. If you are going to use micro controllers then you might need to invest in some programming equipment for them. This won't cost you much though.
The simplest way is to use the sine wave from the power grid. In Europe the AC power has a frequency of 50 Hz, and you can use that as the basis for your clock signal.
I've used Atmel's ATtiny and ATmega, which are great for programming simple and advanced projects. You can program it with C or Assembly, there are lots of great projects for it on the net, and the programmers available are very cheap.
Here is a project I found by Googling AVR 7 segment clock.
A second vote for PIC. Also, I recommend the magazine Circuit Cellar Ink. Some technical bookstores carry it, or you can subscribe: http://www.circellar.com/
The PIC series will be good. Since you are creating a timer, I recommend C or assembly (assembly is good), and MPLAB as the development environment. You can check how accurate your timer is with the 'Stopwatch' feature in MPLAB. The PIC16F877 has a built-in hardware serial port, as does the PIC16F628, but the PIC16F877 has more ports. For more accurate timers, using higher-frequency oscillators is recommended.
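As an illustration of the kind of timer setup this involves, here is a rough C sketch of a 100 ms tick from Timer1 on a PIC16F877-class part, using XC8-style register names and assuming a 4 MHz crystal. The preload value and prescaler are worked out for those assumptions and would need rechecking against the datasheet for your actual clock.

    /* Rough sketch: 100 ms tick from Timer1 on a PIC16F877-class device.
       Assumes a 4 MHz crystal -> 1 MHz instruction clock, Timer1 prescaler 1:8
       -> 125 kHz, so 12,500 counts = 100 ms. Preload = 65536 - 12500 = 53036.
       Register names follow the XC8/MPLAB headers; verify against the datasheet. */
    #include <xc.h>

    #define T1_PRELOAD 53036u

    volatile unsigned char ticks;          /* incremented every 100 ms */

    void timer1_init(void)
    {
        T1CON = 0b00110000;                /* prescaler 1:8, internal clock, timer off */
        TMR1H = T1_PRELOAD >> 8;
        TMR1L = T1_PRELOAD & 0xFF;
        PIR1bits.TMR1IF = 0;
        PIE1bits.TMR1IE = 1;               /* enable Timer1 interrupt */
        INTCONbits.PEIE = 1;
        INTCONbits.GIE  = 1;
        T1CONbits.TMR1ON = 1;              /* start the timer */
    }

    void __interrupt() isr(void)
    {
        if (PIR1bits.TMR1IF) {
            TMR1H = T1_PRELOAD >> 8;       /* reload for the next 100 ms */
            TMR1L = T1_PRELOAD & 0xFF;
            PIR1bits.TMR1IF = 0;
            ticks++;                       /* main loop counts 10 ticks = 1 s */
        }
    }

The MPLAB simulator's Stopwatch is exactly the tool for verifying that this tick really is 100 ms before you ever touch hardware.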

What Skill set should a low level programmer possess?

I am an embedded SW engineer with less than 3 years of experience. I aim to "sharpen the saw" continuously. I was wondering if there is anything specific to low-level programming that C/C++ coders should be proficient with.
What comes to mind is familiarity with the hardware's architecture and instruction set. Knowing how to fiddle with bits is also important. Resource management and performance have been part of my job; is there anything else?
EDIT: I work with an in-house customized RTOS, not embedded Linux.
I see a lot of high-level operating system answers here, but you specifically said low-level.
Some scattered thoughts:
Design for test. As you work through a problem, only change one thing at a time per test.
You need to understand buses and interfaces: SPI, I2C, USB, Ethernet, etc. The number one interface, today, yesterday, and tomorrow: the UART, serial.
The steps involved in programming a flash.
Tricks to avoid making the product easily brickable.
Bootloaders in general.
Bit-banging the above interfaces on various families of parts (different chip vendors have different ideas about I/O pins, pull-ups, direction controls, etc.).
Board and chip bring-up: you certainly never want to boot a program of many tens of thousands of lines of code on the first power-up (think LED on, LED off).
How to debug a product without using too much test equipment (logic analyzers and scopes); at the same time, you have to learn to use a scope for debugging. You are far more valuable if you don't HAVE TO have a tech or engineer in the lab with you.
How would you reprogram the unit in the field? What would you do to minimize human error when allowing the user to field-upgrade the unit? Remember field downgrades as well.
What would you do to discourage hacking (of binaries, etc.)?
Efficient use of the flash/ROM (don't wear out one bank or section; spread the wear around, or check whether the flash is doing it for you).
How and when to use a watchdog timer.
State machines, very useful with byte streams (serial and Ethernet): design packet structures that stream well and are tailored to a state machine, with a header and checksum or other structure that ensures you do not interpret partial packets or random data as a good packet (a minimal parser sketch follows this list).
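As an illustration of that last point, here is a small, self-contained C sketch of a byte-at-a-time packet parser state machine. The frame format (sync byte, length, payload, 8-bit checksum) is made up for the example, not any particular protocol.

    /* Minimal byte-stream packet parser: SYNC, LEN, LEN payload bytes, CHECKSUM.
       Feed it one received byte at a time (e.g. from a UART ISR or polling loop).
       The frame format and handle_packet() are illustrative, not a real protocol. */
    #include <stdint.h>

    #define SYNC_BYTE   0xA5
    #define MAX_PAYLOAD 32

    enum state { WAIT_SYNC, WAIT_LEN, WAIT_PAYLOAD, WAIT_CSUM };

    static enum state st = WAIT_SYNC;
    static uint8_t payload[MAX_PAYLOAD];
    static uint8_t len, idx, csum;

    static void handle_packet(const uint8_t *data, uint8_t n)
    {
        (void)data; (void)n;               /* application-specific handling */
    }

    void parser_feed(uint8_t byte)
    {
        switch (st) {
        case WAIT_SYNC:
            if (byte == SYNC_BYTE) { csum = 0; st = WAIT_LEN; }
            break;
        case WAIT_LEN:
            if (byte == 0 || byte > MAX_PAYLOAD) { st = WAIT_SYNC; break; }
            len = byte; idx = 0; csum += byte; st = WAIT_PAYLOAD;
            break;
        case WAIT_PAYLOAD:
            payload[idx++] = byte; csum += byte;
            if (idx == len) st = WAIT_CSUM;
            break;
        case WAIT_CSUM:
            if (byte == csum) handle_packet(payload, len);   /* else drop it */
            st = WAIT_SYNC;                /* either way, hunt for the next sync */
            break;
        }
    }

The point of structuring the protocol this way is that a dropped or corrupted byte only costs you one frame; the parser simply falls back to hunting for the next sync byte.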
Specific concepts like,
Endianness (this link is to an old but good linuxjournal article)
Effective use of multithreading architectures (the Embedded site is good in general)
Debugging embedded and multithreaded systems
Understand, Learn and Follow good programming techniques (the link is very old and the point very generic and subjective, but think about it)
Other things (this IBM page on embedded linux sums up most of the other points I want to make)
One more thing -- never underestimate testing! or, planning test cases!!
Use the reference links I give as starting points for the concepts; please follow up further for deeper knowledge.
I'd study the electronics of the actual chips. Learn how they work internally (such as architecture), interface with peripherals, electrical and timing characteristics, etc.
Basically, read the data sheet start to finish a few times and dig into anything you've not seen/used before.
By the way, what chips do you work with?
Similar to what Brian said, learn how to create unit tests and automated builds.
These skills are good for software engineers at all levels to be proficient in. They will help improve the quality of your code while also making it easier to refactor and improve the code base.
If you haven't yet I think every Software Engineer should read The Pragmatic Programmer and Code Complete. I know these are not specific to low level programming, but have a large wealth of knowledge in them that applies to all sub disciplines.
Great familiarity with pointers, with the checks these languages don't do for you (buffer overflows and the like), and with digital electronics. Operating system internals might also help.
Get to know how things are represented internally, especially ready-made data structures (assuming you won't build your own).
Above all, practice a lot. Doing it brings much more to you than just reading about it ;)
Bit operations
Processor architectures (caches, etc.)
WCET (worst-case execution time) analysis
Scheduling
Edit: What I forgot to mention is model based development.
Today, the control algorithms are often implemented as some kind of automaton from which C code is generated afterwards.
Commercial available tools are for example MATLAB/Simulink, ASCET or SCADE.
Get yourself a copy of the MISRA-C book. It was originally written by members of the automotive industry, and attempts to make software written in C more robust by applying a number (quite a large number!) of rules and guidelines.
Then, buy PC-Lint (or another static analysis tool) to check your code for MISRA and other rules.
These are particularly relevant to low-level and embedded C, as between them they deal with the causes of a lot of bugs in such software, such as issues relating to pointers, memory leaks, integer promotion (there's a whole chapter on that in the MISRA book), endianness, and undefined behaviour.
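Integer promotion in particular bites low-level C code in non-obvious ways; here is a minimal illustration (a common example, not taken from the MISRA book):

    /* Integer promotion: arithmetic on small unsigned types is done at int width,
       so the same "a + b" behaves differently depending on where the result lands. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint8_t a = 250, b = 10;
        uint8_t c = a + b;            /* computed as int (260), then truncated to 4 */

        if (a + b > 255)              /* true: comparison happens at int width */
            printf("a + b overflows an 8-bit result\n");

        if (c > 255)                  /* never true: c is already truncated */
            printf("unreachable\n");

        printf("c = %u\n", (unsigned)c);   /* prints 4 */
        return 0;
    }

Static analysis tools like PC-Lint will flag exactly this kind of silent width change, which is why they pay for themselves on embedded code.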
Good question. Some that haven't been mentioned...
Learn your various options for achieving low-level multitasking. From basic round-robin (non-preemptive) schedulers, with timing ticks from a hardware timer, up to a preemptive RTOS. Learn why you might need an RTOS, and why you might not. If you use an RTOS, learn that beginners with a PC background probably tend to want to create too many tasks.
Getting visibility into the internals for debugging can be a challenge. There's typically no screen, so no throwing in "printf" calls wherever you want. An emulator or JTAG interface is ideal - you can set breakpoints and step through your program (as long as halting the micro doesn't make the hardware go crazy, like swinging a robot arm around at full speed!). If an emulator/JTAG is not available, learn how to use a spare serial port (or maybe even bit-bang a pin to make a serial port) for a debug channel, with some simple memory peek/poke commands.
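A debug channel like that can be very simple. Here is a rough C sketch of a peek/poke command handler over a serial port; uart_getc()/uart_puts() stand in for whatever your platform's UART routines are, and the single-letter command format is made up for the example.

    /* Rough sketch of a serial debug monitor: "r AAAAAAAA" reads a 32-bit word,
       "w AAAAAAAA DDDDDDDD" writes one. uart_getc()/uart_puts() are placeholders
       for your platform's UART driver; addresses and data are typed in hex. */
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    extern int  uart_getc(void);           /* blocking, returns one character */
    extern void uart_puts(const char *s);

    static void read_line(char *buf, int max)
    {
        int i = 0, c;
        while ((c = uart_getc()) != '\r' && c != '\n')
            if (i < max - 1) buf[i++] = (char)c;
        buf[i] = '\0';
    }

    void debug_monitor(void)
    {
        char line[48], out[48];
        for (;;) {
            uart_puts("> ");
            read_line(line, sizeof line);
            char cmd = line[0];
            uint32_t addr = (uint32_t)strtoul(line + 1, NULL, 16);
            volatile uint32_t *p = (volatile uint32_t *)(uintptr_t)addr;

            if (cmd == 'r') {                       /* peek */
                snprintf(out, sizeof out, "%08lX\r\n", (unsigned long)*p);
                uart_puts(out);
            } else if (cmd == 'w') {                /* poke: value follows the address */
                char *end;
                (void)strtoul(line + 1, &end, 16);  /* skip the address field */
                *p = (uint32_t)strtoul(end, NULL, 16);
                uart_puts("OK\r\n");
            } else {
                uart_puts("?\r\n");
            }
        }
    }

A monitor like this costs a few hundred bytes of flash and lets you poke peripheral registers from a terminal long before the rest of the application exists.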

embedded application

In the last two months I've been working on a simple application using a computer vision library (OpenCV).
I wish to run that application directly off the webcam without the need for an OS. I'm curious to know whether my application can be burned onto a chip so that it doesn't need an OS to run.
Of course the process can be expensive, but I'm just curious. Do you have any links about that?
PS: the application is written in C.
I'd use something bigger than a PIC, for example a small 32 bit ARM processor.
Yes. It is theoretically possible to port your app to PIC chips.
But...
There are C compilers for the PIC chip; however, due to the limitations of a microcontroller, you might find that the compiler and the microcontroller itself are far too limited for computer vision work, especially if your initial implementation of the app was done on a full-blown PC:
You'll only have integer math available to you in most cases, if not all (don't quote me on that, but our devs at work don't have floating-point math for their PIC apps, and it causes many foul words to emanate from their cubes). Either that, or you'll need to hook up an external math coprocessor.
You'll have to figure out how to get the PIC chip to talk USB to the camera. I know this is possible, but it will require additional hardware, and R&D time.
If you need strict timing control, you might even have to program the app in assembler.
You'd have to port portions of OpenCV to the PIC chip, if it hasn't been done already. My guess is that it hasn't.
If you're not already familiar with microcontroller programming, you'll need some time to get up to speed on the differences between desktop PC programming and microcontroller programming, and you'll have to gain some experience with it. This may not be an issue for you.
Basically, it would probably be best to re-write the whole program from scratch given a PIC chip constraint. Good thing is though, you've done a lot of design work already. It would mainly be hardware/porting work.
OR...
You could try using a small embedded x86 single-board PC, perhaps in the PC/104 form factor, with your OS/app on a CF card. It's a real bona fide PC; you just add your software. The good thing is, you probably wouldn't have to rewrite your app unless it has a ridiculous memory footprint. Embedded PC vendors are starting to ship boards based on 1 GHz Intel Atoms, and if you needed more you could perhaps hook a daughterboard onto the PC/104 bus. You'll work around all of the limitations listed above, since you're using an equivalent platform to the PC you developed your app on. And it has USB ports! If you do a thorough cost analysis and you're OK with a larger form factor, you might find it cheaper/quicker to use a system based on an SBC than to roll a solution using PIC chips/microcontrollers.
A quick search of PC-104 on Google would reveal many vendors of SBCs.
OR...
And this would be really cheap - just get an off-the-shelf cheap netbook, overwrite the OEM OS, and run the code on there. Hackish, but cheap and really easy - your hardware issues would be resolved within a week.
Just some ideas.
I think you'll find this might grow into pretty large project.
It's obviously possible to implement a stand-alone hardware solution to do something like this. Off the top of my head, Rabbit's solutions might get you to the finish-line faster. But you might be able to find some home-grown Beagle Board or Gumstix projects as well.
Two Google links I wanted to emphasize:
Rabbit: "Camera Interface Application Kit"
Gumstix: "Connecting a CMOS camera to a Gumstix Connex motherboard"
I would second Nate's recommendation to take a look at Rabbit's core modules.
Also, GHI Electronics has a product called the Embedded Master that runs the .NET Micro Framework and has USB host/device capabilities built in, as well as a rich library that is a subset of the .NET Framework. It runs on an ARM processor and is fairly inexpensive (around $85). Though not nearly as cheap as a single PIC chip, it does come with a lot of glue logic pre-built onto the module.
CMUCam
I think you should have a look at the CMUcam project, which offers affordable hardware and an image processing library which runs on their hardware.

Is low-level / embedded systems programming hard for software developers? [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 8 years ago.
Given my background as a generalist, I can cover much of the area from analog electronics to writing simple applications that interface to a RDBMS backend.
I currently work at a company that develops hardware to solve industry-specific problems. We have an experienced programmer who has written business apps, video games, and a whole bunch of other stuff for PCs. But when I talk to him about doing low-level programming, he expresses interest but also doubt/uncertainty about joining the project.
Even when talking about PCs, he seems more comfortable operating at the language level than with the lower-level stuff (instruction sets, ISRs). Still, he's a smart guy, and I think he'd enjoy the work once he's over the initial learning hump. But maybe that's my own enthusiasm for low-level stuff talking... If he were truly interested, maybe he would already have started learning in that direction?
Do you have experience in making that software-to-hardware (or low-level software) transition? Or, better yet, of taking a software only guy, and transitioning him to the low-level stuff?
Edit:
P.S. I'd love to hear from the responders what their own background is -- EE, CS, both?
At the end of the day, everything is an API.
Need to write code for an SPI peripheral inside a microcontroller? Well, get the datasheet or hardware manual, and look at the SPI peripheral. It's one, big, complex API.
The problem is that you have to understand the hardware and some basic EE fundamentals in order to comprehend what the API means. The datasheet isn't written by and for SW developers, it was written for hardware engineers, and maybe software engineers.
So it's all from the perspective of the hardware (face it - the microcontroller company is a hardware company filled with hardware/asic engineers).
Which means the transition is by no means simple and straightforward.
But it's not difficult - it's just a slightly different domain. If you can implement a study program, start off with Rabbit Semiconductor's kits. There's enough software there so a SW guy can really dig in with little effort, and the HW is easy to deal with because everything is wrapped in nice little libraries. When they want to do something complex they can dig into the direct hardware access and fiddle at the lower level, but at the same time they can do some pretty cool things such as build little webservers or pan/tilt network cameras. There are other companies with similar offerings, but Rabbit is really focused on making hardware easy for software engineers.
Alternately, get them into the Android platform. It looks like a unix system to them, until they want to do something interesting, and then they'll have the desire to attack that little issue and they'll learn about the hardware.
If you really want to jump in the deep end, go with an Arduino kit - cheap, free compilers and libraries, pretty easy to start off with, but you have to hook wires up to do something interesting, which might be too big a hurdle for a reluctant software engineer. But a little help and a few nudges in the right direction and they will be absolutely thrilled to have a little LED display that wibbles* like the Knight Rider lights...
-Adam
*Yes, that's a technical engineering term.
The best embedded programmers I've worked with are EE trained and learned SW on the job. The worst embedded developers are recent CS graduates who think SW is the only way to solve a problem. I like to think of embedded programming as the bottom of the SW pyramid. It's a stable abstraction layer/foundation that makes life easy for the app developers.
"Hard" is an extremely relative term. If you're used to thinking in the tight, sometimes convoluted way you need to for small embedded code (for example, you're a driver developer), then certainly it's not "hard".
Not to "bash" (no pun intended) shell scripters, but if you write perl and shell scripts all day, then it might very well be "hard".
Likewise if you're a UI guy for Windows. It's a different kind of thinking.
Why embedded development is "hard":
1) The context may switch to an interrupt between any two machine instructions. Since high-level language constructs may map to multiple assembly instructions, this might even happen within a single line of code, e.g. long var = 0xAAAA5555. If var is accessed in an interrupt service routine, on a 16-bit processor it might only be half set (see the short sketch after this list).
2) Visibility into the system is limited. You may not even have output to Hyperterm unless you write it yourself. Emulators don't always work that well or consistently (though they are way better than they used to be). You will have to know how to use oscilloscopes and logic analyzers.
3) Operations take time. For example, say your serial transmitter uses an interrupt to signal when it is time to send another byte. You could write 16 bytes to a transmit buffer, then clear interrupts and wonder why your message is never sent. Timing in general is a tricky part of embedded programming.
4) You are subject to subtle race conditions that occur only rarely and are very difficult to debug.
5) You have to read the manual. A lot. You can't make it work by fooling around. Sometimes 20 things have to be set up correctly to get what you are after.
6) The hardware doesn't always work or is easy to damage, and it takes a while to figure out that you broke it.
7) Software repairs in embedded systems are usually very expensive. You can't just update a web page. A recall can erase any profit you made on the device.
There are probably more but I've got this race condition to solve...
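To make point 1 concrete, here is a minimal C sketch of that shared-variable hazard and the usual fix of briefly masking interrupts around the non-atomic access. The cli()/sei() calls are AVR-style (avr-libc), used here purely as an example of a critical section; other parts have their own equivalents.

    /* Point 1 in miniature: on a 16-bit (or 8-bit) CPU, writing a 32-bit variable
       takes several instructions, and an ISR can fire between them and read a
       half-updated value. Wrapping the access in a critical section fixes it.
       cli()/sei() are the avr-libc interrupt mask calls, used here as an example. */
    #include <stdint.h>
    #include <avr/interrupt.h>

    volatile uint32_t shared_count;        /* written in main, read in the ISR */

    ISR(TIMER0_OVF_vect)
    {
        uint32_t snapshot = shared_count;  /* may see a torn value without locking */
        (void)snapshot;
    }

    void update_count(uint32_t new_value)
    {
        /* BAD: shared_count = new_value;  - can be interrupted mid-write */

        cli();                             /* disable interrupts                 */
        shared_count = new_value;          /* now the multi-word write is atomic */
        sei();                             /* re-enable interrupts               */
    }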
This is very subjective I guess, his reasons could be many. But if he's like me, I know where he's coming from. Let me explain.
In my career I've dedicated 6 years to the telecom industry, working a lot with embedding SDK middleware into low-end mobile phones etc.
Most embedded environments I've experienced are like harsh weather for a programmer, you constantly have to overcome limitations in resources etc. Some might find this a challenge and enjoy it for the challenge itself, some might feel close to "the real stuff" - the hardware, some might feel it limits their creativity.
I'm the kind who feels it limits my creativity.
I enjoy being back in Windows desktop environment and flap my wings with elaborate class designs, stretch my legs a few clockcycles extra, use unnecessary amounts of memory for diagnostics etc.
On certain embedded units in the past, I hardly had support for fseek() (an ANSI C standard file function). If lucky, a "watchdog" could give clues to where something crashed. Not to mention the pain of communicating with the user in single-threaded preemptive swamps.
Well, you know what I'm getting at. In my opinion it's not necessarily hard, but it's quite a leap, with potentially little reuse of your current experience.
Regards
Robert
There is a very real difference in mindset from user-level application development (ie, general purpose PC or Web applications) to hard deadline, real-time response application development (ie, the hardware/software interface).
Interrupts, instruction sets, context switching and hard resource constraints are relatively unknown to your average developer. I'm assuming here that your 'average developer' is not an Electrical/Electronic or other Engineer by training.
The transition for this developer you mention may be well outside his comfort zone. Some of us like stretching like that. Others of us may have decided the view isn't worth the climb.
Likewise, folks who've been in the hardware area (ie, Engineers) often have difficulty with the assumptions and language of software development.
These are gross generalities, of course, but hopefully give some insight.
He needs to be comfortable with the low-level stuff, but mostly for debugging and field issues. There is a serious learning curve depending on the architecture, but not impossible. On the other hand, the low-level code takes (in general) more time and debugging than higher-level code. So if you need to be going back to low-level all the time, then perhaps something isn't right in the design. Even for the embedded controls I've built, I spend the vast majority of time in high-level code. Although when you have issues, it is extremely advantageous to have a very good low-level knowledge.
I am an EE turned software engineer, and I prefer programming low level. Most classically trained software developers I know do not want to operate at this level; they want APIs to call. So for me it is a win-win: I create the low-level driver and API for them to use. There is a "new" degree, at least new since I went to college, called Computer Engineering. Hmm, it might be an electrical engineering degree rather than computer science, but it is a nice mix of software and digital hardware basics. The individuals I have worked with from this field are much more comfortable with low-level work.
If the individual is not comfortable or willing, then place them somewhere they are comfortable. Let them do documentation or work on the user interface. If all of the work at the company requires low-level work, then this individual needs to do it or find another job. Don't sugar-coat it.
I also think they will enjoy it once they get over the hump - the freedom you have at that level, not hindered by operating systems, etc. Recently I watched a few co-workers see their software run under simulation for the first time: every net within the processor and the other on-chip peripherals visible. No, you don't have a table in a GUI (debugger) showing the current state of memory; you have to look at the memory bus, find the address you are interested in, and watch for a read or write signal on the data bus. I worry about the day the silicon arrives and they no longer have this level of visibility. It will be like an addict in detox.
Well, I cut my teeth on hardware when I started reading Popular Electronics at age 14 – this was BEFORE personal computers, in case you were wondering and if you weren’t well, you know anyway. lol
I’ve done the low level bit-bang stuff on the 8048/51 microprocessor, done PIC’s and some other single chip variations and of course Rabbit Semiconductor. (great if you're into C). That’s great (and fun) stuff; Yes, there is a different way of looking at things – not harder, but some of that information is a bit harder to come by as it isn’t as discussed as the software issues. (Of course, this depends on the circle of friends with which you associate, eh).
But, having said all of this, I want to remind you of a technology that started to bridge the gap for programmers into the world of hardware and has since become a very MAJOR player and that is the .NET micro framework. You can find information on this technology at the following;
http://msdn.microsoft.com/en-us/embedded/bb267253.aspx
It addresses some of the same issues that .NET web development addressed in that you can use some (quite a bit, actually) of your existing PC based knowledge in the new environments – Some caution, of course, as your target machine doesn’t have 4 GIG of RAM – it may only have 64K (or less)
Starting in version 2.5 of the .NET micro framework, you have access to networking and web services – way kewl, eh? It doesn’t stop there … Want to control the lights in your house? How about a temp recording station? All with the skills you already have. Well, mostly -- Check out the link.
The SDK plugs into your VisualStudio IDE. There are a number of “Development Kits” available for a very reasonable amount of cash – Now, what would normally take a big learning curve in components, building a circuit board and wiring up “stuff” can be done reasonably easy with a dev kit and some pretty simple code – Of course, you may need to do the occasional bit bang operation, but more and more sensor folks are providing .NET micro framework drivers – so, the hardware development may be closer than you think…
Hope it helps...
I like both. Embedded challenges me and really gets me going in a visceral way. Making something that affects the macro physical world is very satisfactory. But I've had to do a lot of catch up on the electrical/electronics end, since my bachelor's is in computer science. I've a pretty generalist background, where I studied ai, graphics, compilers, natural language, etc. Now I'm doing graduate work in embedded systems. The really tough part is adjusting to the lack of runtime facilities like an operating system.
Low-level embedded programming also tends to include low-level debugging. Which (in my experience) usually involves (at least) the use of an oscilloscope. Unless your colleague is going to be happy spending at least some of the time in physical contact with the hardware and thinking in terms of microseconds and volts, I'd be tempted to leave them be.
Agreed on the "hard" term is quite relative.
I would say different, as you would need to employ different development patterns that you won't use in other kind of environment.
The time constraint for instance could requires a learning curve.
However being curious, would be a quality for a developer, wouldn't be?
You are right in that anyone with enough knowledge not to feel completely lost in an area (over the hump?) will enjoy the challenges of learning something new.
I myself would feel quite nervous moving to the level of instruction sets etc., as there is a huge amount of background knowledge needed to feel comfortable in that environment.
It may make a difference if you are able to support the developer in learning how to do this. Having someone there you can ask and talk through issues with is a huge help in that sort of domain change.
It may be worth having the developer assigned to a smaller project with others as a first step and see how that goes. If he expresses enthusiasm to try another project, things should flow on from there.
I would say it is not any harder, it just requires a different knowledge set, different considerations.
I think that it depends on the way that they program in their chosen environment, and the type of embedded work that you're talking about.
Working on an embedded linux platform, say, is a far smaller jump than trying to write code on an 8 bit platform with no operating system at all.
If they are the type of person that has an understanding of what is going on underneath the api and environment that they are used to, then it won't be too much of a stretch to move into embedded development.
However, if their world view stops at the high level api that they've been using, and they have no concept of anything beneath that, they are going to have a really hard time.
As a (very) general statement if they are comfortable working on multithreaded applications they will probably be ok, as that shares some of the same issues of data volatility that you have when working on embedded projects.
With all of that said, I've seen more embedded programmers successfully working in PC development than I have the reverse. (of course I might not have seen a fair cross section)
"But when I talk to him about doing low-level programming, he simultaneously express interest and also doubt/uncertainty about joining the project." -- That means you let him try and you prepare to hire someone else in case he doesn't pass the learning curve.
I began as a SW engineer; I'm now a HW one!
The important thing is to understand how things work and to be motivated!