Permanent DOS Attacks - Anyone Knowledgeable? - firmware

So, I'm looking into permanent DoS attacks for a class, and I'm having a hard time coming up with concrete examples. There's a lot of information about phlashing (flashing firmware to either brick the device or put malicious firmware in its place, for those of you who don't know the term), but I'd like to have a broader set of examples.
That being said, there has to be a way to write code that will do something like wear out disk arms, right? Something that will have the disk seek to the end of the disk, then back to the front, on and on. Anyone have an example of how that would be accomplished? Is there some way to specify where to seek to on a disk in C (similar to traversing to a certain point in a file, but for the entire HDD!)? If not, I guess there's always trying to force a file's location on the disk... which seems like less fun to accomplish. Again, can you do something like that programmatically?
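(A minimal sketch of the mechanism in question, assuming Linux: a raw block device node such as the hypothetical /dev/sdb can be opened and lseek()'d like an ordinary file, and posix_fadvise() is used so repeated reads actually reach the disk rather than the page cache. It needs root, it only forces head travel back and forth, and it obviously shouldn't be pointed at a disk you care about.)

/* Minimal sketch, assuming Linux: seek back and forth between the first and
 * last sector of a raw block device.  /dev/sdb is a hypothetical device node. */
#define _FILE_OFFSET_BITS 64
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/sdb", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    off_t end = lseek(fd, 0, SEEK_END);      /* device size in bytes */
    char sector[512];

    for (long i = 0; i < 100000; i++) {
        lseek(fd, 0, SEEK_SET);              /* first sector */
        if (read(fd, sector, sizeof sector) < 0) break;

        lseek(fd, end - 512, SEEK_SET);      /* last sector: full-stroke seek */
        if (read(fd, sector, sizeof sector) < 0) break;

        /* drop the page cache so the next pass really goes to the disk */
        posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    }
    close(fd);
    return 0;
}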
If anyone has any insight into these types of attacks, or any good resources for me to check into, I'd appreciate it. Maybe you read a story about it on Slashdot a few years back? Let me know! The more info I can gather, the less likely I'll be forced to kill time during my talk by bricking my router in the class :) I'm not made of money OR routers!

Seems like these would primarily be limited to physical attacks and social engineering ("To enable your computer's hidden turbo function, remove the cover and pry off this part."). But:
Adjust screen refresh rates to insane values to blow older CRTs
Monkey with ACPI fan, charge, or battery controls if possible to cause overheating or battery failure.
Overwrite every rewritable storage device of every kind attached to any bus. Discover and overwrite any IDE, USB, etc... device you know the flash updater details for.
Of course nothing is permanent. You can replace the hard drive, BIOS chips, CPU, motherboard, memory, etc...

Although it is mostly fictional, the Halt and Catch Fire (HCF) instruction would be a very convenient and permanent DoS attack.

Steve Gibson (google his name) has a paper he wrote a few years back about protocol-level vulnerabilities in TCP/IP. Some of it is still pertinent today.

Socially engineer the power company or ISP to turn off service at the location in question.

Many devices in a computer today have their own firmware, including but not limited to the CPU, DVD drive, HDD, video card, and motherboard (BIOS). Most of these devices also have a way of updating their respective firmware, which can also be used to brick them pretty efficiently, although this does require an individual approach to every device, often using privileged instructions and undocumented interfaces.

It's possible for a virus to do this. I seem to recall an actual virus doing this back in the day, but can't find anything to back that up.
I was able to find an article where the author has a conversation with a VP from Western Digital, wherein he states that a program could potentially access a hard drive's firmware to cause such an attack:
There are back doors if you will that allow us to get into places that the operating system can't go through the IDE connector

There used to be a few viruses that could cause old CRT monitors to break. They could send invalid sync signals out the VGA port that were too high in frequency for the video sweep. I also remember a few that would use bad-sector flagging to draw images on the old versions of Scandisk (we are talking early 90's or older). I don't remember any of the names or have any references, but they used to be quite annoying.
Fortunately, better circuits, memory protection, and API abstraction have made such attacks very difficult, if not impossible.

Related

Is there any open hardware microcontroller?

I can't find anything about this.
I mean a microcontroller which I can buy from vendors or somewhere, and for which I can download and see the full schematics, with enough information to emulate it. Something like that.
I think they opened up the code for the Propeller, yes? And you can get an msp430 clone on opencores, or an arm2 on opencores, as well as the or1k and 2k, plus a myriad of other open-source cores there and elsewhere (just google it). The lm32 is open, and the mico8 maybe; it can certainly be used on a Lattice part. You can also find cores like that from each of the fpga/cpld vendors, tuned for and likely free on their platforms. Plus there are free and/or for-purchase cores for, what is it, the 68hc11, probably 8051s, etc. And of course there is the cortex-m1: not open, but an option if you wanted a microcontroller in source form to implement on your platform.
The propeller is probably the closest to what you are looking for.
I am not sure what you mean with "open hardware microcontroller". For professionals it's much better to buy a microcontroller or a microcontroller design (ARM for example). Hobbyists usually don't have access to a fab and the required tooling to create their own ASIC.
If you're interested in implementations for FPGAs on the other hand, you should check out the site http://opencores.org/projects where you can find (among other things) different open-source processors.
For what it's worth, SPARC is fully "open", both in its early conception and then again later in life by Sun. I think short of some big-iron stuff (that's gradually been taken over by x86), it's basically dead. Maybe you could revive it?

Embedded app and wearing out flash disks

I have an embedded app that needs to do a lot of writing to a flash disk (or other). We cannot use a hard disk due to the environment. This is an industrial system subject to vibration and explosive fuel vapour.
The trouble is, flash has a lifetime of around 100,000 write cycles per block. Ample for your digital camera. Wears out after a year in our scenario.
Any alternatives that people have found work for them?
I was thinking of using FRAM but it's been done before here and it's slow and small.
As Nils says, commercial compact flash cards and drive replacements (NAND) have wear levelling.
If you are using cheap onboard (NOR) flash you might have to do this yourself.
The best way is some sort of ring buffer where you are only appending data, wrapping around to overwrite the oldest data once the device is full. Remember flash can only erase a full block (page), but can then append individual bytes to existing data in that page.
Also can you buffer a page in RAM and then write once or do you have to have individual bytes committed at all times?
Most application notes for embedded processors will have examples of this.
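A minimal sketch of that ring-buffer idea over raw NOR flash; flash_erase_sector() and flash_program() are hypothetical HAL functions and the geometry numbers are made up, so treat this as an illustration of the bookkeeping rather than working driver code:

/* Minimal append-only log sketch over raw NOR flash. */
#include <stdint.h>

#define SECTOR_SIZE   4096u
#define NUM_SECTORS   256u                  /* total device = 1 MiB here */
#define RECORD_SIZE   64u                   /* must divide SECTOR_SIZE evenly */

extern void flash_erase_sector(uint32_t sector);                          /* hypothetical HAL */
extern void flash_program(uint32_t addr, const void *data, uint32_t len); /* hypothetical HAL */

static uint32_t write_addr;  /* next free byte; at boot, found by scanning for erased (0xFF) space */

void log_append(const uint8_t record[RECORD_SIZE])
{
    /* entering a fresh sector: erase it first; the erase is the operation
     * that consumes an endurance cycle */
    if (write_addr % SECTOR_SIZE == 0)
        flash_erase_sector(write_addr / SECTOR_SIZE);

    flash_program(write_addr, record, RECORD_SIZE);

    write_addr += RECORD_SIZE;
    if (write_addr >= SECTOR_SIZE * NUM_SECTORS)    /* wrap: oldest data is overwritten next */
        write_addr = 0;
}

The point is that the expensive (and wear-limited) erase happens once per sector, and every sector gets erased equally often as the log wraps around.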
You really need to provide much more information:
how much capacity do you need?
what costs are acceptable?
what physical form factor do you need?
what lifetime do you want?
If your storage needs aren't particularly huge and you can deal with the cost, there are battery-backed SRAM parts (up to at least 2 megabytes per part) that are as fast as RAM (that's what they are) and have no limit on the number of writes. But they cost a lot more than flash.
You could also get a drive with a SATA interface that's populated with DRAM.
This post refers to using embedded Linux. Not sure if this is what you want.
I have a not too different system, but for medical use. We use NOR flash for all parts that have a low update frequency and NAND flash for the rest. I would recommend using UBI/UBIFS as the top layer on the MTD device. UBI/UBIFS takes care of all the underlying problems for you. Then design your system to have a lot more physical flash than you need; for example, if you need 100MB, design your HW with 1GB of flash. The data can then be shuffled around by UBI without any interaction from the layers above.
UBIFS documentation
UBI documentation
As Michael Burr pointed out, we need more info. (Please answer his questions.)
I have an additional question: What kind of interface is this? PATA? SATA? USB?
As others have pointed out, any decent Flash Drive will provide some kind of wear leveling. Look for this in the datasheet for the device. Many vendors will boast about their wear-leveling technique.
You mention 100000 cycles. This seems pretty low to me. Most "industrial grade" flash drives can do a lot more than that (millions). Make sure you aren't using a bargain-basement device. A good flash drive will usually include an equation or calculator tool you can use to figure out the expected lifespan of the device.
(I can say from personal experience that some brands of flash drives hold up a lot better than others, particularly the "industrial" ones. Our drives go through some pretty brutal usage scenarios.)
The other thing that can help a lot is capacity. The higher the capacity of the flash drive, the more room the wear-leveling algorithm has to work with, which means a longer lifespan.
The other thing you can look at doing is software techniques to minimize the wearing of the flash components. Do you have a pagefile/swapfile? Maybe you don't need it. If you are creating/deleting lots of temporary files, move this to a RAM disk. Remember, it is erasure/reprogramming cycles that usually wears out a flash cell, so reducing those operations will usually help.
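As a rough illustration of what those lifespan calculators boil down to (the formula and every number below are simplified guesses, not any vendor's actual model):

/* Back-of-the-envelope flash lifetime estimate; all numbers are illustrative. */
#include <stdio.h>

int main(void)
{
    double capacity_bytes   = 512e6;    /* 512 MB device */
    double endurance_cycles = 100000;   /* rated program/erase cycles per block */
    double wear_factor      = 5.0;      /* write amplification / imperfect levelling */
    double bytes_per_day    = 5e9;      /* application writes ~5 GB per day */

    double total_writable = capacity_bytes * endurance_cycles / wear_factor;
    double days = total_writable / bytes_per_day;

    printf("roughly %.0f days (%.1f years)\n", days, days / 365.0);
    return 0;
}

With these made-up numbers it comes out to a few years, and doubling the capacity roughly doubles it, which is why the capacity point above matters.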
Use SD cards that have a built-in wear leveling controller. That way the write cycles get distributed over all the flash blocks and you get a very long life out of your flash.
I was thinking of using FRAM but it's been done before here and it's slow and small.
Compare with nvSRAM; that may provide the performance you need.
I have used a Compact Flash card in an embedded system with great success. It has an onboard controller that does all the thinking for you. Not all Compact Flash controllers are equal, so get one that is a recent design and was intended to be used as a hard drive replacement, as they have better wear-levelling algorithms.

Where do you draw the line between what is "embedded" and what is not?

ASIDE: Yes, this can be considered a subjective question, but I hope to draw conclusions from the statistics of the responses.
There is a broad spectrum of computing devices. They range in physical sizes, computational power and electrical power. I would like to know what embedded developers think is the determining factor(s) that makes a system "embedded." I have my own determination that I will withhold for a week so as to not influence the responses.
I would say "embedded" is any device on which the end user doesn't normally install custom software of their choice. So PCs, laptops and smartphones are out, while XM radios, robot controllers, alarm clocks, pacemakers, hearing aids, the doohickey in your engine that regulates fuel injection etc. are in.
You might just start with wikipedia for a definition
http://en.wikipedia.org/wiki/Embedded_system
"An embedded system is a computer system designed to perform one or a few dedicated functions, often with real-time computing constraints. It is embedded as part of a complete device often including hardware and mechanical parts. "
Coming up with a concrete set of rules for what an embedded system is is to a large degree pointless. It's a term that means different things to different people - maybe even different things to the same people at different times.
There are some things that are pretty much never considered an embedded system, for example a Windows Desktop machine. However, there are companies that put their software on a Windows box - even a bog standard PC (maybe a laptop) - set things up so their application loads automatically and hides the desktop. They sell that as a single purposed machine that many people would call an embedded system (but many people wouldn't). Microsoft even sells a set of tools called Embedded Windows that helps enable these kinds of applications, though it's targeted more to OEMs who will customize the system at least somewhat instead of just putting it on a standard PC. Embedded Windows is used for things like ATM machines and many other devices. I think that most people would consider an ATM an embedded system.
But go into a 7-11 with an ATM that has a keyboard (I honestly don't know what the keyboard is for), press the right shift key 5 times and you'll get a nice Windows "StickyKeys" messagebox (I wonder if there's an exploit there - I sure hope not). So there's a Windows system there, just hidden and with some functionality removed - maybe not as much as the manufacturer would like. If you could convince it to open up notepad.exe somehow does the ATM suddenly stop being an embedded system?
Many, many people consider something like the iPhone or the iTouch an embedded system, but they have nearly as much functionality as a desktop system in many ways.
I think most people's definition of an embedded system might be similar to Justice Potter Stewart's definition of hard-core pornography:
I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it...
I consider an embedded system one where the software is rarely developed directly on the target system. This definition includes sophisticated embedded systems like the iPhone, and excludes primitive desktop systems like the Commodore 64. Not having the development tools on the target means you have to add 'reprogram device' to the edit-compile-run cycle. Debugging is also made more complicated. This encompasses most of the embedded "feel."
Software implemented in a device not intended as a general purpose computing device is an "embedded system".
Typically the system is intended for a single purpose, and the software is static.
Often the system interacts with non-human environmental inputs (sensors) and mechanical actuators, or communicates with other non-human systems.
That's off the top of my head. Other views can be read at this embedded.com article
Main factors:
Installed in a fixed place somewhere (you can't carry the device itself around, only the thing it's built into)
They run a long time (often years) with little maintenance
They don't get patched often
They are small, use little power
Small or no display
+1 for a great question.
Like many things there is a spectrum.
At the "totally embedded" end you have devices designed for a single purpose. Alarm clocks, radios, cameras. You can't load new software and make it do something else. THere is no support for changing the hardware,
At the "totally non-embedded" end you have your classic PCs where everything, both HW and SW, can be replaced.
There's still a lot in between those extremes. Laptops and netbooks, for example, have minimally expandable HW, typically only memory and hard disk can be upgraded. But, the SW can be whatever you want.
My education was as a computer engineer, so my definition of embedded is hardware oriented. I draw the line at the MMU (memory management unit). If a chip has an MMU, it usually has off-chip RAM and runs an OS. If a chip does NOT have an MMU, it usually has on-board RAM and runs an RTOS, microkernel or custom executive.
This means I usually dismiss anything running linux, which is shortsighted. I admit my answer is biased towards where I tend to work: microcontroller firmware. So I am glad I asked this question and got a full spectrum of responses.
Quoting a paragraph I've written before:
An embedded system for our purposes is a computer system that has a specific and deterministic functionality \cite{LamieReal}. Typically, processors for embedded systems contain elements such as onboard RAM, special-purpose processing elements such as a digital signal processor, and analog-to-digital and digital-to-analog converters. Since the processors have more flexibility than a straightforward CPU, a common term is microcontroller.

Determining failing sectors on portable flash memory

I'm trying to write a program that will detect signs of failure for portable flash memory devices (thumb drives, etc).
I have seen tools in the past that are able to detect failing sectors and other kinds of trouble on conventional mechanical hard drives, but I fear that flash memory does not have the same kind of predictable low-level access to the hardware due to the internal workings of the storage. Things like wear-leveling and other block-remapping techniques (to skip over 'dead' sectors?) lead me to believe that determining if a flash drive is failing will be difficult at best, if not impossible (short of having constant read failures and device unmounts).
Flash drives at their end-of-life should be easy to detect (constant CRC discrepancies during reads and all-out failure). But what about drives that might be failing early? Are there any tell-tale signs like slower throughput speeds that might indicate a flash drive is going to fail much sooner than normal?
Along the lines of detecting potentially bad blocks, I had considered attempting random reads/writes to a file close to or exactly the size of the entire volume, but even then is it possible that the drive might report sizes under its maximum capacity to account for 'dead' blocks?
In short, is there any way to circumvent or at least detect (algorithmically or otherwise) the use of block-remapping or other life extension techniques for flash memory?
Let me end this question by expressing my uncertainty as to whether or not this belongs on serverfault.com. This is definitely a hardware-related question, but I also desire a software solution - preferably one that I can program myself.
If this question is misplaced, I will be happy to migrate it to serverfault - but I do need a programming solution. Please let me know if you need clarification :)
Thanks!
It would be interesting to see whether badblocks can help in this case.
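For what it's worth, the destructive approach described in the question (write a known pattern across the whole volume, read it back, count mismatches) is roughly what badblocks -w does. A minimal sketch, assuming a POSIX system and a hypothetical device node /dev/sdx; it destroys all data on the device:

/* Minimal destructive write/verify pass over a raw device (badblocks-style). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define CHUNK (64 * 1024)

int main(void)
{
    int fd = open("/dev/sdx", O_RDWR);   /* hypothetical device; ALL DATA IS DESTROYED */
    if (fd < 0) { perror("open"); return 1; }

    unsigned char pattern[CHUNK], readback[CHUNK];
    memset(pattern, 0xAA, sizeof pattern);

    long long written = 0, bad = 0;
    ssize_t n;

    /* pass 1: fill the device with the pattern */
    while ((n = write(fd, pattern, sizeof pattern)) > 0)
        written += n;
    fsync(fd);                                      /* push everything out to the media */

    /* pass 2: read it back and count chunks that don't match */
    posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);   /* don't verify against the page cache */
    lseek(fd, 0, SEEK_SET);
    long long verified = 0;
    while (verified < written && (n = read(fd, readback, sizeof readback)) > 0) {
        if (memcmp(pattern, readback, (size_t)n) != 0)
            bad++;
        verified += n;
    }

    printf("%lld bytes checked, %lld mismatching chunks\n", verified, bad);
    close(fd);
    return 0;
}

Even then, as the next answer notes, the drive's controller remaps blocks behind your back, so a clean pass doesn't tell you much about wear the firmware has already hidden.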
AFAIK, wear leveling happens at the firmware level. The layers above do not know about a bad block until the firmware detects one.
And there is no known way to find these bad sectors beforehand. BTW, I guess it is not bad sectors but bad blocks: once a sector goes bad, the whole block is marked as bad...

Is low-level / embedded systems programming hard for software developers? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
Given my background as a generalist, I can cover much of the area from analog electronics to writing simple applications that interface to an RDBMS backend.
I currently work in a company that develops hardware to solve industry-specific problems. We have an experienced programmer who has written business apps, video games, and a whole bunch of other stuff for PCs. But when I talk to him about doing low-level programming, he simultaneously expresses interest and doubt/uncertainty about joining the project.
Even when talking about PCs, he seems to be more comfortable operating at the language level than with the lower-level stuff (instruction sets, ISRs). Still, he's a smart guy, and I think he'd enjoy the work once he gets over the initial learning hump. But maybe that's my own enthusiasm for low-level stuff talking... If he were truly interested, maybe he would already have started learning things in that direction?
Do you have experience in making that software-to-hardware (or low-level software) transition? Or, better yet, of taking a software only guy, and transitioning him to the low-level stuff?
Edit:
P.S. I'd love to hear from the responders what their own background is -- EE, CS, both?
At the end of the day, everything is an API.
Need to write code for an SPI peripheral inside a microcontroller? Well, get the datasheet or hardware manual, and look at the SPI peripheral. It's one, big, complex API.
The problem is that you have to understand the hardware and some basic EE fundamentals in order to comprehend what the API means. The datasheet isn't written by or for SW developers; it was written by hardware engineers, for hardware engineers and maybe software engineers.
So it's all from the perspective of the hardware (face it - the microcontroller company is a hardware company filled with hardware/asic engineers).
Which means the transition is by no means simple and straightforward.
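To give a flavour of what that register-level "API" looks like, here is a minimal sketch of a polled SPI byte transfer. The base address, register layout and bit names are entirely made up; a real part defines its own, and only the datasheet can tell you what they are:

/* Minimal sketch of a memory-mapped SPI peripheral, with invented registers. */
#include <stdint.h>

#define SPI_BASE   0x40003000u                       /* hypothetical base address */
#define SPI_CTRL   (*(volatile uint32_t *)(SPI_BASE + 0x00))
#define SPI_STATUS (*(volatile uint32_t *)(SPI_BASE + 0x04))
#define SPI_DATA   (*(volatile uint32_t *)(SPI_BASE + 0x08))

#define SPI_CTRL_ENABLE   (1u << 0)
#define SPI_CTRL_MASTER   (1u << 1)
#define SPI_STATUS_TXRDY  (1u << 0)
#define SPI_STATUS_RXRDY  (1u << 1)

uint8_t spi_transfer(uint8_t out)
{
    SPI_CTRL = SPI_CTRL_ENABLE | SPI_CTRL_MASTER;    /* configure and enable the block */

    while (!(SPI_STATUS & SPI_STATUS_TXRDY))         /* wait until it can accept a byte */
        ;
    SPI_DATA = out;                                  /* writing the data register starts the clocking */

    while (!(SPI_STATUS & SPI_STATUS_RXRDY))         /* SPI is full duplex: a byte comes back */
        ;
    return (uint8_t)SPI_DATA;
}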
But it's not difficult - it's just a slightly different domain. If you can implement a study program, start off with Rabbit Semiconductor's kits. There's enough software there so a SW guy can really dig in with little effort, and the HW is easy to deal with because everything is wrapped in nice little libraries. When they want to do something complex they can dig into the direct hardware access and fiddle at the lower level, but at the same time they can do some pretty cool things such as build little webservers or pan/tilt network cameras. There are other companies with similar offerings, but Rabbit is really focused on making hardware easy for software engineers.
Alternately, get them into the Android platform. It looks like a unix system to them, until they want to do something interesting, and then they'll have the desire to attack that little issue and they'll learn about the hardware.
If you really want to jump in the deep end, go with an Arduino kit - cheap, free compilers and libraries, pretty easy to start off with, but you have to hook wires up to do something interesting, which might be too big of a hurdle for a reluctant software engineer. But a little help and a few nudges in the right direction and they will be absolutely thrilled to have a little LED display that wibbles* like the Knight Rider lights...
-Adam
*Yes, that's a technical engineering term.
The best embedded programmers I've worked with are EE trained and learned SW on the job. The worst embedded developers are recent CS graduates who think SW is the only way to solve a problem. I like to think of embedded programming as the bottom of the SW pyramid. It's a stable abstraction layer/foundation that makes life easy for the app developers.
"Hard" is an extremely relative term. If you're used to thinking in the tight, sometimes convoluted way you need to for small embedded code (for example, you're a driver developer), then certainly it's not "hard".
Not to "bash" (no pun intended) shell scripters, but if you write perl and shell scripts all day, then it might very well be "hard".
Likewise if you're a UI guy for Windows. It's a different kind of thinking.
Why embedded development is "hard":
1) The context may switch to an interrupt between any two machine instructions. Since high-level language constructs may map to multiple assembly instructions, this might even happen within a single line of code, e.g. long var = 0xAAAA5555. If var is accessed in an interrupt service routine, on a 16-bit processor it might be only half set (see the sketch after this list).
2) Visibility into the system is limited. You may not even have output to Hyperterm unless you write it yourself. Emulators don't always work that well or consistently (though they are way better than they used to be). You will have to know how to use oscilloscopes and logic analyzers.
3) Operations take time. For example, say your serial transmitter uses an interrupt to signal when it is time to send another byte. You could write 16 bytes to a transmit buffer, then clear interrupts and wonder why your message is never sent. Timing in general is a tricky part of embedded programming.
4) You are subject to subtle race conditions that occur only rarely and are very difficult to debug.
5) You have to read the manual. A lot. You can't make it work by fooling around. Sometimes 20 things have to be set up correctly to get what you are after.
6) The hardware doesn't always work or is easy to damage, and it takes a while to figure out that you broke it.
7) Software repairs in embedded systems are usually very expensive. You can't just update a web page. A recall can erase any profit you made on the device.
There are probably more but I've got this race condition to solve...
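A minimal sketch of point 1, assuming a 16-bit core where a 32-bit store takes two machine instructions; disable_interrupts()/enable_interrupts() stand in for whatever intrinsic or macro the real toolchain provides:

/* Torn-write hazard on a 16-bit core, and the usual critical-section fix. */
#include <stdint.h>

extern void disable_interrupts(void);   /* hypothetical toolchain intrinsics */
extern void enable_interrupts(void);

volatile uint32_t shared;                /* written in the main loop, read in an ISR */

void update_unsafe(void)
{
    /* On a 16-bit core this compiles to two separate 16-bit stores, so an
     * interrupt between them sees a value that is half old, half new. */
    shared = 0xAAAA5555;
}

void update_safe(void)
{
    disable_interrupts();                /* make the two-instruction store atomic */
    shared = 0xAAAA5555;
    enable_interrupts();
}

void timer_isr(void)
{
    uint32_t snapshot = shared;          /* consistent only if writers use update_safe() */
    (void)snapshot;
}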
This is very subjective I guess, his reasons could be many. But if he's like me, I know where he's coming from. Let me explain.
In my career I've dedicated 6 years to the telecom industry, working a lot with embedding SDK middleware into low-end mobile phones etc.
Most embedded environments I've experienced are like harsh weather for a programmer: you constantly have to overcome limitations in resources etc. Some might find this a challenge and enjoy it for the challenge itself, some might feel close to "the real stuff" - the hardware - and some might feel it limits their creativity.
I'm the kind who feels it limits my creativity.
I enjoy being back in the Windows desktop environment, flapping my wings with elaborate class designs, stretching my legs a few clock cycles extra, using unnecessary amounts of memory for diagnostics, etc.
On certain embedded units in the past, I hardly had support for fseek() (an ANSI C standard file function). If lucky, a "watchdog" could give clues to where something crashed. Not to mention the pain of communicating with the user in single-threaded preemptive swamps.
Well, you know what I'm getting at. In my opinion it's not necessarily hard, but it's quite a leap, with potentially little reuse of your current experience.
Regards
Robert
There is a very real difference in mindset from user-level application development (ie, general purpose PC or Web applications) to hard deadline, real-time response application development (ie, the hardware/software interface).
Interrupts, instruction sets, context switching and hard resource constraints are relatively unknown to your average developer. I'm assuming here that your 'average developer' is not an Electrical/Electronic or other Engineer by training.
The transition for this developer you mention may be well outside his comfort zone. Some of us like stretching like that. Others of us may have decided the view isn't worth the climb.
Likewise, folks who've been in the hardware area (ie, Engineers) often have difficulty with the assumptions and language of software development.
These are gross generalities, of course, but hopefully give some insight.
He needs to be comfortable with the low-level stuff, but mostly for debugging and field issues. There is a serious learning curve depending on the architecture, but not impossible. On the other hand, the low-level code takes (in general) more time and debugging than higher-level code. So if you need to be going back to low-level all the time, then perhaps something isn't right in the design. Even for the embedded controls I've built, I spend the vast majority of time in high-level code. Although when you have issues, it is extremely advantageous to have a very good low-level knowledge.
I am an EE turned software engineer. I prefer programming low level. Most classically trained software developers that I know do not want to operate at this level; they want APIs to call. So for me it is a win-win: I create the low-level driver and API for them to use. There is a "new" degree, at least new since I went to college, called Computer Engineering. Hmm, it might be an electrical engineering degree rather than computer science, but it is a nice mix of software and digital hardware basics. The individuals that I have worked with from this field are much more comfortable with low level.
If the individual is not comfortable or willing, then place them somewhere where they are comfortable. Let them do documentation or work on the user interface. If all of the work at the company requires low-level work, then this individual needs to do it or find another job. Don't sugar-coat it.
I also think they will enjoy it once they get over the hump: the freedom you have at that level, not hindered by operating systems, etc. Recently I witnessed a few co-workers' first experience of seeing their software run under simulation, with every net within the processor and other on-chip peripherals visible. No, you don't have a table in a GUI (debugger) showing the current state of memory; you have to watch the memory bus, look for the address you are interested in, and look for a read or write signal and the data bus. I worry about the day the silicon arrives and they no longer have this level of visibility. It will be like an addict in detox.
Well, I cut my teeth on hardware when I started reading Popular Electronics at age 14 – this was BEFORE personal computers, in case you were wondering (and if you weren't, well, now you know anyway). lol
I’ve done the low level bit-bang stuff on the 8048/51 microprocessor, done PIC’s and some other single chip variations and of course Rabbit Semiconductor. (great if you're into C). That’s great (and fun) stuff; Yes, there is a different way of looking at things – not harder, but some of that information is a bit harder to come by as it isn’t as discussed as the software issues. (Of course, this depends on the circle of friends with which you associate, eh).
But, having said all of this, I want to remind you of a technology that started to bridge the gap for programmers into the world of hardware and has since become a very MAJOR player and that is the .NET micro framework. You can find information on this technology at the following;
http://msdn.microsoft.com/en-us/embedded/bb267253.aspx
It addresses some of the same issues that .NET web development addressed in that you can use some (quite a bit, actually) of your existing PC based knowledge in the new environments – Some caution, of course, as your target machine doesn’t have 4 GIG of RAM – it may only have 64K (or less)
Starting in version 2.5 of the .NET micro framework, you have access to networking and web services – way kewl, eh? It doesn’t stop there … Want to control the lights in your house? How about a temp recording station? All with the skills you already have. Well, mostly -- Check out the link.
The SDK plugs into your VisualStudio IDE. There are a number of “Development Kits” available for a very reasonable amount of cash – Now, what would normally take a big learning curve in components, building a circuit board and wiring up “stuff” can be done reasonably easy with a dev kit and some pretty simple code – Of course, you may need to do the occasional bit bang operation, but more and more sensor folks are providing .NET micro framework drivers – so, the hardware development may be closer than you think…
Hope it helps...
I like both. Embedded challenges me and really gets me going in a visceral way. Making something that affects the macro physical world is very satisfactory. But I've had to do a lot of catch up on the electrical/electronics end, since my bachelor's is in computer science. I've a pretty generalist background, where I studied ai, graphics, compilers, natural language, etc. Now I'm doing graduate work in embedded systems. The really tough part is adjusting to the lack of runtime facilities like an operating system.
Low-level embedded programming also tends to include low-level debugging. Which (in my experience) usually involves (at least) the use of an oscilloscope. Unless your colleague is going to be happy spending at least some of the time in physical contact with the hardware and thinking in terms of microseconds and volts, I'd be tempted to leave them be.
Agreed that the term "hard" is quite relative.
I would say "different", as you would need to employ development patterns that you won't use in other kinds of environments.
The time constraints, for instance, could require a learning curve.
However, being curious is a good quality for a developer, isn't it?
You are right in that anyone with enough knowledge not to feel completely lost in an area (over the hump?) will enjoy the challenges of learning something new.
I myself would feel quite nervous about moving to the level of instruction sets etc., as there is a huge amount of background knowledge needed to feel comfortable in that environment.
It may make a difference if you are able to support the developer in learning how to do this. Having someone there you can ask and talk through issues with is a huge help in that sort of domain change.
It may be worth having the developer assigned to a smaller project with others as a first step and see how that goes. If he expresses enthusiasm to try another project, things should flow on from there.
I would say it is not any harder, it just requires a different knowledge set, different considerations.
I think that it depends on the way that they program in their chosen environment, and the type of embedded work that you're talking about.
Working on an embedded linux platform, say, is a far smaller jump than trying to write code on an 8 bit platform with no operating system at all.
If they are the type of person that has an understanding of what is going on underneath the api and environment that they are used to, then it won't be too much of a stretch to move into embedded development.
However, if their world view stops at the high level api that they've been using, and they have no concept of anything beneath that, they are going to have a really hard time.
As a (very) general statement if they are comfortable working on multithreaded applications they will probably be ok, as that shares some of the same issues of data volatility that you have when working on embedded projects.
With all of that said, I've seen more embedded programmers successfully working in PC development than I have the reverse. (of course I might not have seen a fair cross section)
"But when I talk to him about doing low-level programming, he simultaneously express interest and also doubt/uncertainty about joining the project." -- That means you let him try and you prepare to hire someone else in case he doesn't pass the learning curve.
I began as a SW engineer; I'm now a HW one!
The important thing is to understand how it works and to be motivated!