How to benchmark software on multiple graphics cards with cluster testing

I would like to benchmark my software on multiple graphics cards without having to buy them all!
Do you know any service providers for that?
I wonder if there is something like http://www.keynotedeviceanywhere.com/ for desktops. I guess the game industry might use this kind of service to test their graphics engine...
Thanks

What about http://www.gaikai.com/ they render in the cloud.

Hate to answer your "how do I do this?" question with "don't do that", but...
Don't do that :)
I suspect that most of the people who want to test their graphics software on lots of different hardware can afford to buy all of that hardware, so there isn't much demand for the sort of service you are asking about, particularly not if you want it for free.
If you want to do stuff on the cheap, try:
Aim low. Get a $30 card or two and test on those. This will cover 99% of your potential user base.
Deal with the major brands. For desktops, this means ATI and NVIDIA. There aren't many other brands to worry about here, except Intel if you want to stretch that far.
If you are brave enough, publish your benchmarking software on the web somewhere and ask other kind users to test it and submit results.


How do I control a motor wirelessly?

I am an ME undergrad and am designing an implant device that requires programming knowledge. I honestly have no idea how to get started and am looking for advice. Basically what I need is a way to control a stepper motor. Stepper motors use steps (pulses) to rotate the gear head. The motor I'm using needs 20 steps to revolve once. I need to be able to control the number of steps I want in a day, say. The motor I'm purchasing comes with an encoder, which I'm guessing connects to the circuit board. Now what I want to do is have an external control (like a remote control for a toy) that can set these rates. I don't know anything about radio transmitters, or how to program the circuit board to do this for me. Any help would be appreciated - books I can look into, websites, or tutorials. Thanks.
There are many ways of solving this problem, but it is more of a systems engineering question than a programming question; until you know what the system looks like, there is no way of determining what parts will be implemented in software. More details would be required to provide a specific answer.
For example what are the security/safety considerations?
What wireless technology do you need to use? e.g. RF or IR, if RF then licensing may be an issue, and that may vary from country to country. You could use BlueTooth, ZigBee, or even WiFi, but these technologies are probably more expensive and complex than necessary for such a simple application. If IR then is immunity from interference from TV remotes or PC IrDA ports or similar required?
If the commands/signals from the remote are complex, you will probably need both the remote and the motor driver to incorporate a micro-controller and software. On the other hand, if you just need increase/decrease functions, then it would be entirely possible to implement the remote functionality you describe without any processing at all (depending on the communication technology you choose).
What is the motor encoder for? Stepper motors do not normally need an encoder, since the controller can simply count the steps executed in either direction to determine position. Is the encoder incremental or absolute? If it is incremental, then it is almost certainly not needed; if it is absolute, then it may be useful if you need to know the exact position of the motor on power-up without having to perform an initialisation or requiring end-stop switches.
You mentioned a "circuit board"; what hardware do you already have? What does it do? Do you have documentation for it? If it is commercially available, can you provide a link so we can see the documentation?
As you can see, you have more system-level design issues to solve before you even consider software implementation, so the question is not yet ready to be answered here on SO. I suggest you seek out your university's EE department and team up with someone with electronics expertise to design a complete system, then consider the software aspects.
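That said, to give a flavour of how little code the step-counting itself involves, here is a minimal sketch, assuming an Arduino-class board and a step/direction driver module. The pin numbers, pulse timing, and the "schedule" are invented for illustration; position is tracked simply by counting the pulses we issue, which is why no encoder is needed.

```c
// Minimal Arduino-style sketch, assuming a step/direction driver module
// wired to two pins. Pin numbers and timing are placeholders.

const int STEP_PIN = 3;          // assumption: driver STEP input on pin 3
const int DIR_PIN  = 4;          // assumption: driver DIR input on pin 4
const int STEPS_PER_REV = 20;    // the motor described in the question

long positionSteps = 0;          // running count of issued steps = position

void stepMotor(long steps) {
  digitalWrite(DIR_PIN, steps >= 0 ? HIGH : LOW);
  long n = (steps >= 0) ? steps : -steps;
  for (long i = 0; i < n; i++) {
    digitalWrite(STEP_PIN, HIGH);
    delayMicroseconds(500);      // pulse width/speed depends on the driver
    digitalWrite(STEP_PIN, LOW);
    delayMicroseconds(500);
  }
  positionSteps += steps;        // no encoder: we know where we are by counting
}

void setup() {
  pinMode(STEP_PIN, OUTPUT);
  pinMode(DIR_PIN, OUTPUT);
}

void loop() {
  stepMotor(STEPS_PER_REV);      // one revolution = 20 steps
  delay(60000UL);                // wait, then repeat; replace with whatever
                                 // daily schedule the remote control sets
}
```

The "steps per day" requirement then reduces to deciding how often (and how far) to call something like stepMotor(), which is exactly the sort of value a simple remote could adjust.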
Well worth taking a look at the Microchip site:
http://www.microchip.com/forums/f170.aspx
They produce microcontrollers that can be programmed to do exactly what you require (and a lot more).

How to demo examples of embedded systems?

It seems that a lot of small business people have a need for some customized embedded systems, but don't really know too much about the possibilities and cannot quite envisage them.
I had the same problem when trying to explain what Android could do; I was generally met with glazed eyes - and then I made a few demos. Somehow, being able to see something - to be able to touch it and play around with it – people have that cartoon lightbulb moment.
Even if it is not directly applicable to them, a demo starts them thinking about what could be useful to them.
The sort of person I am talking about may or may not be technical, but is certainly intelligent, having built from scratch a business which turns over millions.
Their needs are varied, from RFID or GPS asset & people tracking, to simple stock control systems, displays, communications, sometimes satellite, sometimes VPN or LAN (wifi or RJ45). A lot of it needs a good back-end database with a web-site to display, query, data-mine …
So, to get to the question, I am looking for a simple project, or projects, which will cause that cartoon lightbulb moment. It need not be too complicated, as those who need complicated solutions are generally tech-savvy - just something straightforward that shows what could be done to streamline their business and make it more profitable.
It would be nice if it could include some wifi/RJ45 comms and communicate across the internet (e.g. not just a micro-controller attached to a single PC - it should then communicate with a server/web-site), an RFID reader would be nice, something actually happening (LEDs, sounds, etc), plus some database and database analysis/data-mining - something end-to-end, preferably in both directions.
A friend was suggesting a Rube Goldberg-like contraption with a Lego Mindstorms attached to a local PC, but also controllable from a remote PC (representing head office) or web site. That would show remote control of devices. Maybe it could pick up some RFID tags and move them around (at random, or on command), representing stock control (or maybe employee/asset movement within a factory or warehouse (Location Based Services/GIS)), which could then be shown on the web site, with some nice charts & graphs etc.
Any other ideas?
How best to implement it? One of those micro-controller starter kits like http://www.nerdkits.com/ ? Maybe some Lego, or a similar robot kit, a few cheap RFID readers … anything else?
And – the $409,600 question – what's a good, representative demo which demonstrates as many functionalities as possible, as impressively as possible, with the least effort? (Keeping it modular and allowing for easy addition of features, since there is such a wide area to cover.)
P.S. A tie-in with an Android slate PC would be welcome too.
Your customers might respond better to a solid looking R/C truck which seeks RFID tags than to a Lego robot. Lego is cool, but it has a bit of a slapped-together 'kiddie' feel.
What if you:
scatter some RFID tags across the conference room.
add a GPS & wifi transmitter to your truck.
drive the truck to the tag (manually - unless you want to invest a lot of time in steering algorithms).
have a PC drawing a real-time track of the truck's path.
every time the truck gets within range of a tag, add it to an inventory list on the screen, showing item id, location, time recorded, and total units so far.
indicate the position of the item on the map.
I'd be impressed.
Is it 'least effort'? I don't know, but I'd hope that if this is the type of solution you are pitching, you already have a good handle on how to read GPS and RFID devices, how to establish a TCP or UDP connection over wifi, and how to send and decode packets. Add some simple graphics and database lookup, and you are set.
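As a rough illustration of the "send and decode packets" part, here is a minimal sketch of the truck reporting one tag sighting to the demo PC over UDP. The server address, port, and one-line packet format are made up for the example, and it assumes POSIX sockets (i.e. a Linux-class board on the truck side):

```c
/* Hypothetical "report a tag sighting" packet sender, plain UDP. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int send_sighting(const char *server_ip, int port,
                  const char *tag_id, double lat, double lon)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return -1; }

    struct sockaddr_in dest;
    memset(&dest, 0, sizeof dest);
    dest.sin_family = AF_INET;
    dest.sin_port   = htons(port);
    inet_pton(AF_INET, server_ip, &dest.sin_addr);

    char packet[128];
    /* made-up packet format: "TAG,<id>,<lat>,<lon>" */
    snprintf(packet, sizeof packet, "TAG,%s,%.6f,%.6f", tag_id, lat, lon);

    ssize_t sent = sendto(sock, packet, strlen(packet), 0,
                          (struct sockaddr *)&dest, sizeof dest);
    close(sock);
    return sent < 0 ? -1 : 0;
}

int main(void)
{
    /* 192.168.1.50:9000 is just a placeholder for the demo PC */
    return send_sighting("192.168.1.50", 9000, "E200341201", 51.5072, -0.1276);
}
```

On the PC side the matching receiver just parses that line, inserts a row into the inventory table, and updates the map - which is where the "simple graphics and database lookup" comes in.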
Regarding hardware, I don't have any first-hand experience with any of these, but the GadgetPC Wi-Fi G Kit + a USB RFID reader + a USB GPS receiver looks like a nice platform for experimenting with this.
Many chip manufactures have off-the-shelf demo boards. Microchip has some great demo boards for TCP/IP communications on an embedded system. I haven't seen one yet for RFID. Showing potential customers some of these demos could get them thinking about what is possible.

Is it feasible to virtualize developer machines? [closed]

It's budgeting time and Corporate is balking at the cost of replacing the machine of a coworker who is due for it, needs it, and deserves it.
Our group is a small ISV/SAAS that exists as a division of a larger media group. We are not a cost center, we make money, even this year. We are owned by a mid-size media group whose business model is quite different, and seems driven only by reducing costs.
Our software stack is Visual Studio 2008, SQL 2008, on Windows Server 2008 (so that multiple root websites can be hosted and debugged on each dev's machine). Our target hardware is 3GHz quad-core workstation, 4GB RAM, and RAID 1 mirrored hard drives so that we are protected against the productivity loss of losing a developer hard drive.
Corporate wants to give us a couple powerful, but hand-me-down, decommissioned servers, and then each developer would have a virtual workstation on that server. The computers sitting on our desktops would be dumb terminals at $400-500 each.
I'm trying to be neutral but I doubt it's hard to discern my bias. I'd like to see real developer reactions to this, and I figure this is the best place to get that.
Please include arguments for or against, evidence if you've seen this tried and how well (or not) it has gone.
This sounds like a well-intentioned idea, but:
In my experience you need multiple cores, lots of memory, and fast disks to be productive in today's modern IDEs. I don't see that happening in a virtual environment with any economy. Individual boxes are still better.
It's also an issue of control. In a virtual environment I can imagine all kinds of restrictions. Will you still be able to install your own tools, for example?
Ultimately, it's misguided. If this idea increases build times by any substantial amount, any savings in hardware will quickly be erased by lost productivity. Conversely, money that is spent on decent individual machines for developers will quickly pay for itself over and over in reduced build times.
Good quality individual machines are an investment, not a cost.
Development is disk-bound: you spend your time waiting for builds, and building is a disk-bound process most of the time. If you're all sharing a machine, build times will get much worse.
Aside from all of the givens (performance, disk space, etc...):
I would be OK with this as long as I still had multiple monitor support.
Without that, it is a no-go.
Basic failure to understand what a developer box is actually doing much of the time:
When building, it's chewing through processor and disk - especially disk.
When testing, you're talking about having one or more instances of Visual Studio running (once you get past two, things start to get interesting), a database server, website/services, plus all the other stuff (browsers with a lot of tabs open, notebook software, and heaven only knows what else), all spread across multiple monitors (at least two). Lots of cores, lots of memory please!
I can quite happily accept that there's an argument for virtualisation - a good dev box should be able to host multiple, concurrent VMs in order to isolate some of the above and to provide "clean" environments for testing. Note that that's the box for ONE developer hosting multiple VMs solely for the benefit of that one developer...
Our team has been developing on a remote server (no GUI stuff, plain old vim) for quite some time without problems. Granted, it requires a rather powerful server, and sometimes things get a bit slow if everyone starts compiling at the same time.
But as a bonus you are very mobile in terms of where you can develop from (we all have laptops), be it the office, home, or a sunny beach (the last one was probably an overstatement).
But yeah, that might not work so well for graphics-heavy apps, of course.
It sounds like your group is not offering the solutions that you have considered in a well documented format, otherwise corporate would not be shoving decisions down your throat. If you have a documented process for development, corporate might want to discuss changing the process with you, but as soon as you say, "this change would break our process and we would have to retool our development workflow", they will see the pain of the $$ in reworking the process and most likely back off. That said, once your process is documented, you should internally be ruthless about trying to make it more efficient and cost effective, and have an open mind about corporate's suggestions.
I assume you have machines already for SVN / TRAC, your Continuous Integration server, product demos, testing, etc. and that the only possible use your team could make of these servers is for personal VMs.
I do many things that peg my processor at 100%. Compiles certainly achieve this. Now imagine having to share that processor with 10 other developers. The loss in productivity will become quite apparent. If you have a multi-core PC, this won't be as painful. Get an Intel i7 and you probably won't even notice it when 8 people are logged in. Most programs (including my compiler) can't use more than 1 processor anyway.
That said, it's a viable solution to reduce costs. I used to work at a company who has since switched to these dumb terminals. It works fine. My university had HP UNIX machines that were dumb terminals. They logged into a server that split up the processor ownership among however many people were logged in. What people would do is log into a server and check the number of people logged in. If there were too many, they'd search for the next one, because build times are noticeably slower. I'd never log into the easy to remember server names. =)
It definitely works, but also reduces productivity due to longer build times, especially when multiple people are building at the same time. Since productivity is such a difficult thing to quantify, it might be hard to argue your point.
Graphics acceleration might also be an issue if you need to do anything with animation, video, or image editing. You can't really test video playback through an RDP session since the framerate and/or color depth isn't high enough.
Regardless of performance, at my company we are moving to laptops as developer machines. The main advantage is that developers can bring their computers to meetings, conferences, etc. Also being able to sit next to a colleague when you're helping him with a problem, and having your own development environment available, is very valuable.

Is a gaming machine better for software development? [closed]

Is a gaming machine better for software development?
NO.
CPU
For software development, you need lots of cores. For gaming, you need fast cores, but not necessarily many of them. This is slowly changing as newer games are written to take advantage of multicore CPUs, but the general case is that most gaming machines focus on raw single-core speed rather than core count. For example, in my case, I'm an RoR developer, and during development I run: my editor, mongrel, solr, postgresql, and memcached. Most of the time I also have an open browser, a PDF editor, and iTunes.
RAM
Most games will be OK with 2-3GB of RAM.
For software development, especially web development - if you will be running multiple servers - you'll want at least 4GB, or even 8GB of RAM.
GPU
Top-of-the-line graphics cards for gaming can cost $500 or more. For software development, you can get away with the cheapest GPU you can get. The only aspect of the video card you'll want to concern yourself with is the capability to handle multiple large monitors.
It will actually be helpful if your development machine is so crippled (gaming-wise) that you can't play the games you like to play on that machine. No distractions! :)
I would say some aspects are the same between gaming machines and development machines, like large disks, a lot of memory, etc. So in that respect yes, a gaming machine would fit better than a low end desktop.
On the other hand, gaming machines tend to be tuned towards raw performance instead of robustness. A development machine often does not need a state-of-the-art graphics card, nor does it want RAID-0 to speed up the disk: if one disk crashes you lose all your work, so RAID-1 would be much better. The same holds for memory: ECC (or whatever it's called nowadays) is a bit slower but adds robustness.
One gotcha with powerful development machines is that they do not represent the non-functional requirements of the target execution environment. If you are not aware enough of this, your software will run slowly on a "normal" machine because it ran great on your supercomputer :-) One take on this is that development machines should always be a tad slower than the target machines, but this cuts into your development time. A better solution is to have slower machines in the test environment and a few slower machines in the development lab.
Some attributes of gaming machines can help developers, like having a good deal of memory, or a quad core processor (so you can, respectively, run VMs without hassle, and compile faster).
But a fast GPU won't do you much good, so there's no point in spending much money on it. Unless you plan on developing or playing games, of course.
Summing up: if you plan on using the PC for fun, get a reasonable GPU. If you don't, skip it and keep the rest just like you would. You won't regret it.
If you want to develop games, sure. I should know - I have experience with both.
Unless you're programming something graphics- or game-related, not necessarily; the video card is going to be underused otherwise. On the other hand, gaming machines tend towards the high end, making them ideal for many programming tasks.
I think so. The performance required for gaming will greatly help developers. The only overkill would be the graphics card, unless you use big rendering software, in which case plenty of RAM and a good graphics card are a must.
Good CPU, Lots of fast RAM, and a fast HD will do you lots of good.
What you'll need for software development is usually a machine with ample RAM, ample HDD space (and a fast HDD or set of HDDs to boot), a fast multi-core processor (very important if you're working with compiled languages, especially the likes of C++ which take a long time to compile compared to Java or C#) and preferably the ability to drive multiple monitors. For the latter, it's a case of the more the merrier as screen real estate is one of those things that you can never have enough of.
While a lot of this does indeed sound like the spec for a gaming machine due to its raw number crunching ability, the main difference is likely to be the graphics hardware. You don't need something that can render x million polygons per second on a single monitor if you're trying to drive 3x 24" monitors as 2D displays. In fact you probably don't want a usually rather noisy gamer spec video card that only shines when rendering 3D; you're more likely to get more out of a "pro" graphics card that can drive 4 monitors instead.
So yes, I'd think the spec is quite similar and there is a lot of overlap between the two but in the end a developer spec machine is not the same as a gaming rig.
A gaming machine without the fancy video card, I think that's more suitable for a programmer. (you can use the video card money to add more RAM for example)
Gaming machines are great for everything except your wallet ;-)
Programming WPF Shader Effects is one of those particular tasks where a gaming machine can actually allow you to do more while not working in game-development. Also, GPGPU work may benefit from fast memory transfer and fast GPU.

Is low-level / embedded systems programming hard for software developers? [closed]

Given my background as a generalist, I can cover much of the area from analog electronics to writing simple applications that interface to an RDBMS backend.
I currently work in a company that develops hardware to solve industry-specific problems. We have an experienced programmer who has written business apps, video games, and a whole bunch of other stuff for PCs. But when I talk to him about doing low-level programming, he simultaneously expresses interest and also doubt/uncertainty about joining the project.
Even when talking about PCs, he seems to be more comfortable operating at the language level than with the lower-level stuff (instruction sets, ISRs). Still, he's a smart guy, and I think he'd enjoy the work once he got over the initial learning hump. But maybe that's my own enthusiasm for low-level stuff talking... If he were truly interested, maybe he would already have started learning things in that direction?
Do you have experience in making that software-to-hardware (or low-level software) transition? Or, better yet, of taking a software only guy, and transitioning him to the low-level stuff?
Edit:
P.S. I'd love to hear from the responders what their own background is -- EE, CS, both?
At the end of the day, everything is an API.
Need to write code for an SPI peripheral inside a microcontroller? Well, get the datasheet or hardware manual, and look at the SPI peripheral. It's one, big, complex API.
The problem is that you have to understand the hardware and some basic EE fundamentals in order to comprehend what the API means. The datasheet isn't written by or for SW developers; it was written for hardware engineers, and maybe firmware engineers.
So it's all from the perspective of the hardware (face it - the microcontroller company is a hardware company filled with hardware/ASIC engineers).
Which means the transition is by no means simple and straightforward.
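To make that concrete, here is a minimal sketch of the sort of "API" a datasheet actually describes: a byte transfer on a hypothetical memory-mapped SPI peripheral. The register names, addresses, and bit positions are invented for illustration - the real ones come out of the part's datasheet.

```c
#include <stdint.h>

/* Hypothetical memory-mapped SPI registers (addresses made up). */
#define SPI_CTRL   (*(volatile uint32_t *)0x40001000u)  /* control register */
#define SPI_STATUS (*(volatile uint32_t *)0x40001004u)  /* status register  */
#define SPI_DATA   (*(volatile uint32_t *)0x40001008u)  /* data register    */

#define SPI_CTRL_ENABLE   (1u << 0)   /* enable the peripheral       */
#define SPI_CTRL_MASTER   (1u << 1)   /* master mode                 */
#define SPI_STATUS_TXRDY  (1u << 0)   /* transmit buffer empty       */
#define SPI_STATUS_RXRDY  (1u << 1)   /* receive buffer full         */

void spi_init(void)
{
    SPI_CTRL = SPI_CTRL_ENABLE | SPI_CTRL_MASTER;
}

uint8_t spi_transfer(uint8_t out)
{
    while (!(SPI_STATUS & SPI_STATUS_TXRDY))   /* wait until we can send  */
        ;
    SPI_DATA = out;                            /* clock the byte out      */
    while (!(SPI_STATUS & SPI_STATUS_RXRDY))   /* wait for the reply byte */
        ;
    return (uint8_t)SPI_DATA;                  /* read the received byte  */
}
```

The code itself is trivial; the work is in knowing why those bits exist, in what order they must be poked, and what the electrical consequences are - which is exactly the hardware-perspective knowledge the datasheet assumes.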
But it's not difficult - it's just a slightly different domain. If you can implement a study program, start off with Rabbit Semiconductor's kits. There's enough software there so a SW guy can really dig in with little effort, and the HW is easy to deal with because everything is wrapped in nice little libraries. When they want to do something complex they can dig into the direct hardware access and fiddle at the lower level, but at the same time they can do some pretty cool things such as build little webservers or pan/tilt network cameras. There are other companies with similar offerings, but Rabbit is really focused on making hardware easy for software engineers.
Alternately, get them into the Android platform. It looks like a unix system to them, until they want to do something interesting, and then they'll have the desire to attack that little issue and they'll learn about the hardware.
If you really want to jump in at the deep end, go with an Arduino kit - cheap, with free compilers and libraries, and pretty easy to start off with, but you have to hook wires up to do something interesting, which might be too big of a hurdle for a reluctant software engineer. But a little help and a few nudges in the right direction and they will be absolutely thrilled to have a little LED display that wibbles* like the Knight Rider lights...
-Adam
*Yes, that's a technical engineering term.
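For what it's worth, that kind of scanner really is only a few lines. Here is a minimal Arduino-style sketch, assuming eight LEDs (with series resistors) on pins 2-9; the pin numbers and timing are made up for illustration:

```c
const int FIRST_PIN = 2;
const int NUM_LEDS  = 8;

void setup() {
  for (int i = 0; i < NUM_LEDS; i++)
    pinMode(FIRST_PIN + i, OUTPUT);
}

// Light exactly one LED, turn the rest off.
void lightOnly(int index) {
  for (int i = 0; i < NUM_LEDS; i++)
    digitalWrite(FIRST_PIN + i, i == index ? HIGH : LOW);
}

void loop() {
  for (int i = 0; i < NUM_LEDS; i++) {        // sweep left to right
    lightOnly(i);
    delay(60);
  }
  for (int i = NUM_LEDS - 2; i > 0; i--) {    // and back again
    lightOnly(i);
    delay(60);
  }
}
```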
The best embedded programmers I've worked with are EE trained and learned SW on the job. The worst embedded developers are recent CS graduates who think SW is the only way to solve a problem. I like to think of embedded programming as the bottom of the SW pyramid. It's a stable abstraction layer/foundation that makes life easy for the app developers.
"Hard" is an extremely relative term. If you're used to thinking in the tight, sometimes convoluted way you need to for small embedded code (for example, you're a driver developer), then certainly it's not "hard".
Not to "bash" (no pun intended) shell scripters, but if you write perl and shell scripts all day, then it might very well be "hard".
Likewise if you're a UI guy for Windows. It's a different kind of thinking.
Why embedded development is "hard":
1) The context may switch to an interrupt between any two machine instructions. Since high-level language constructs may map to multiple assembly instructions, this might even happen within a single line of code, e.g. long var = 0xAAAA5555. If var is accessed in an interrupt service routine, on a 16-bit processor it might only be half set (see the sketch after this list).
2) Visibility into the system is limited. You may not even have output to Hyperterm unless you write it yourself. Emulators don't always work that well or consistently (though they are way better than they used to be). You will have to know how to use oscilloscopes and logic analyzers.
3) Operations take time. For example, say your serial transmitter uses an interrupt to signal when it is time to send another byte. You could write 16 bytes to a transmit buffer, then clear interrupts and wonder why your message is never sent. Timing in general is a tricky part of embedded programming.
4) You are subject to subtle race conditions that occur only rarely and are very difficult to debug.
5) You have to read the manual. A lot. You can't make it work by fooling around. Sometimes 20 things have to be set up correctly to get what you are after.
6) The hardware doesn't always work or is easy to damage, and it takes a while to figure out that you broke it.
7) Software repairs in embedded systems are usually very expensive. You can't just update a web page. A recall can erase any profit you made on the device.
There are probably more but I've got this race condition to solve...
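Here is a minimal sketch of the half-set-variable problem from point 1, and the usual fix of briefly disabling interrupts around the update. The interrupt enable/disable macros are placeholders; every compiler/part combination has its own intrinsics (cli()/sei(), __disable_irq(), and so on).

```c
#include <stdint.h>

/* Placeholders - substitute your toolchain's intrinsics. */
#define DISABLE_INTERRUPTS()  /* e.g. __disable_irq() */
#define ENABLE_INTERRUPTS()   /* e.g. __enable_irq()  */

static volatile uint32_t shared_value;  /* written in main code, read in an ISR */

void some_interrupt_handler(void)
{
    /* Without the guard below, this read could land between the two 16-bit
     * stores and see a half-updated value such as 0xAAAA0000 or 0x00005555. */
    uint32_t snapshot = shared_value;
    (void)snapshot;
}

void update_shared_value(uint32_t new_value)
{
    DISABLE_INTERRUPTS();        /* make the two 16-bit stores look atomic */
    shared_value = new_value;    /* one C statement, two machine stores    */
    ENABLE_INTERRUPTS();
}

int main(void)
{
    update_shared_value(0xAAAA5555u);
    return 0;
}
```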
This is very subjective, I guess; his reasons could be many. But if he's like me, I know where he's coming from. Let me explain.
In my career I've dedicated 6 years to the telecom industry, working a lot with embedding SDK middleware into low-end mobile phones etc.
Most embedded environments I've experienced are like harsh weather for a programmer: you constantly have to overcome limitations in resources, etc. Some might find this a challenge and enjoy it for the challenge itself, some might feel it brings them close to "the real stuff" - the hardware - and some might feel it limits their creativity.
I'm the kind who feels it limits my creativity.
I enjoy being back in a Windows desktop environment, flapping my wings with elaborate class designs, stretching my legs a few clock cycles extra, and using unnecessary amounts of memory for diagnostics, etc.
On certain embedded units in the past, I hardly had support for fseek() (an ANSI C standard file function). If lucky, a "watchdog" could give clues to where something crashed. Not to mention the pain of communicating with the user in single-threaded preemptive swamps.
Well, you know what I'm getting at. In my opinion it's not necessarily hard, but it's quite a leap, with potentially little reuse of your current experience.
Regards
Robert
There is a very real difference in mindset from user-level application development (ie, general purpose PC or Web applications) to hard deadline, real-time response application development (ie, the hardware/software interface).
Interrupts, instruction sets, context switching and hard resource constraints are relatively unknown to your average developer. I'm assuming here that your 'average developer' is not an Electrical/Electronic or other Engineer by training.
The transition for this developer you mention may be well outside his comfort zone. Some of us like stretching like that. Others of us may have decided the view isn't worth the climb.
Likewise, folks who've been in the hardware area (ie, Engineers) often have difficulty with the assumptions and language of software development.
These are gross generalities, of course, but hopefully give some insight.
He needs to be comfortable with the low-level stuff, but mostly for debugging and field issues. There is a serious learning curve depending on the architecture, but not impossible. On the other hand, the low-level code takes (in general) more time and debugging than higher-level code. So if you need to be going back to low-level all the time, then perhaps something isn't right in the design. Even for the embedded controls I've built, I spend the vast majority of time in high-level code. Although when you have issues, it is extremely advantageous to have a very good low-level knowledge.
I am an EE turned software engineer. I prefer programming at the low level. Most classically trained software developers that I know do not want to operate at this level; they want APIs to call. So for me it is a win-win: I create the low-level driver and an API for them to use. There is a "new" degree, at least new since I went to college, called Computer Engineering. Hmm, it might be an electrical engineering degree rather than computer science, but it is a nice mix of software and digital hardware basics. The individuals that I have worked with from this field are much more comfortable with the low level.
If the individual is not comfortable or willing, then place them somewhere where they are comfortable. Let them do documentation or work on the user interface. If all of the work at the company requires low-level work, then this individual needs to do it or find another job. Don't sugar-coat it.
I also think they will enjoy it once they get over the hump - the freedom you have at that level, not hindered by operating systems, etc. Recently I witnessed a few co-workers seeing their software run under simulation for the first time: every net within the processor and the other on-chip peripherals. No, you don't have a table in a GUI (debugger) showing the current state of the memory; you have to look at the memory bus, look for the address you are interested in, and look for a read or write signal and the data bus. I worry about the day the silicon arrives and they no longer have this level of visibility. It will be like an addict in detox.
Well, I cut my teeth on hardware when I started reading Popular Electronics at age 14 – this was BEFORE personal computers, in case you were wondering; and if you weren't, well, you know anyway. lol
I’ve done the low-level bit-bang stuff on the 8048/51 microprocessors, done PICs and some other single-chip variations, and of course Rabbit Semiconductor (great if you're into C). That’s great (and fun) stuff; yes, there is a different way of looking at things – not harder, but some of that information is a bit harder to come by, as it isn’t discussed as much as the software issues. (Of course, this depends on the circle of friends with which you associate, eh.)
But, having said all of this, I want to remind you of a technology that started to bridge the gap for programmers into the world of hardware and has since become a very MAJOR player, and that is the .NET Micro Framework. You can find information on this technology at the following:
http://msdn.microsoft.com/en-us/embedded/bb267253.aspx
It addresses some of the same issues that .NET web development addressed, in that you can use some (quite a bit, actually) of your existing PC-based knowledge in the new environments. Some caution, of course, as your target machine doesn’t have 4 GIG of RAM – it may only have 64K (or less).
Starting in version 2.5 of the .NET micro framework, you have access to networking and web services – way kewl, eh? It doesn’t stop there … Want to control the lights in your house? How about a temp recording station? All with the skills you already have. Well, mostly -- Check out the link.
The SDK plugs into your VisualStudio IDE. There are a number of “Development Kits” available for a very reasonable amount of cash – Now, what would normally take a big learning curve in components, building a circuit board and wiring up “stuff” can be done reasonably easy with a dev kit and some pretty simple code – Of course, you may need to do the occasional bit bang operation, but more and more sensor folks are providing .NET micro framework drivers – so, the hardware development may be closer than you think…
Hope it helps...
I like both. Embedded challenges me and really gets me going in a visceral way. Making something that affects the macro physical world is very satisfying. But I've had to do a lot of catching up on the electrical/electronics end, since my bachelor's is in computer science. I have a pretty generalist background, having studied AI, graphics, compilers, natural language, etc. Now I'm doing graduate work in embedded systems. The really tough part is adjusting to the lack of runtime facilities like an operating system.
Low-level embedded programming also tends to include low-level debugging. Which (in my experience) usually involves (at least) the use of an oscilloscope. Unless your colleague is going to be happy spending at least some of the time in physical contact with the hardware and thinking in terms of microseconds and volts, I'd be tempted to leave them be.
Agreed that the term "hard" is quite relative.
I would say "different", as you need to employ development patterns that you won't use in other kinds of environments.
The timing constraints, for instance, could require a learning curve.
However, being curious is a good quality for a developer, wouldn't you say?
You are right in that anyone with enough knowledge not to feel completely lost in an area (over the hump?) will enjoy the challenges of learning something new.
I myself would feel quite nervous about moving to the level of instruction sets etc., as there is a huge amount of background knowledge needed to feel comfortable in that environment.
It may make a difference if you are able to support the developer in learning how to do this. Having someone there you can ask and talk through issues with is a huge help in that sort of domain change.
It may be worth having the developer assigned to a smaller project with others as a first step and see how that goes. If he expresses enthusiasm to try another project, things should flow on from there.
I would say it is not any harder, it just requires a different knowledge set, different considerations.
I think that it depends on the way that they program in their chosen environment, and the type of embedded work that you're talking about.
Working on an embedded Linux platform, say, is a far smaller jump than trying to write code on an 8-bit platform with no operating system at all.
If they are the type of person that has an understanding of what is going on underneath the api and environment that they are used to, then it won't be too much of a stretch to move into embedded development.
However, if their world view stops at the high level api that they've been using, and they have no concept of anything beneath that, they are going to have a really hard time.
As a (very) general statement if they are comfortable working on multithreaded applications they will probably be ok, as that shares some of the same issues of data volatility that you have when working on embedded projects.
With all of that said, I've seen more embedded programmers successfully working in PC development than I have the reverse. (of course I might not have seen a fair cross section)
"But when I talk to him about doing low-level programming, he simultaneously express interest and also doubt/uncertainty about joining the project." -- That means you let him try and you prepare to hire someone else in case he doesn't pass the learning curve.
I began as a SW engineer; I'm now a HW one!
The important thing is to understand how it works and to be motivated!