It's budgeting time and Corporate is balking at the cost of replacing the machine of a coworker who is due for it, needs it, and deserves it.
Our group is a small ISV/SaaS shop that exists as a division of a larger media group. We are not a cost center; we make money, even this year. The mid-size media group that owns us has quite a different business model and seems driven only by reducing costs.
Our software stack is Visual Studio 2008, SQL Server 2008, on Windows Server 2008 (so that multiple root websites can be hosted and debugged on each dev's machine). Our target hardware is a 3GHz quad-core workstation, 4GB of RAM, and RAID 1 mirrored hard drives so that we are protected against the productivity loss of a failed developer hard drive.
Corporate wants to give us a couple powerful, but hand-me-down, decommissioned servers, and then each developer would have a virtual workstation on that server. The computers sitting on our desktops would be dumb terminals at $400-500 each.
I'm trying to be neutral but I doubt it's hard to discern my bias. I'd like to see real developer reactions to this, and I figure this is the best place to get that.
Please include arguments for or against, evidence if you've seen this tried and how well (or not) it has gone.
This sounds like a well-intentioned idea, but:
In my experience you need multiple cores, lots of memory, and fast disks to be productive in today's modern IDEs. I don't see that happening in a virtual environment with any economy. Individual boxes are still better.
It's also an issue of control. In a virtual environment I can imagine all kinds of restrictions. Will you still be able to install your own tools, for example?
Ultimately, it's misguided. If this idea increases build times by any substantial amount, any savings in hardware will quickly be erased by lost productivity. Conversely, money that is spent on decent individual machines for developers will quickly pay for itself over and over in reduced build times.
Good quality individual machines are an investment, not a cost.
Development is disk-bound, i.e. you spend your time waiting for builds, which are a disk-bound process most of the time. If you're all sharing a machine, build times will become much worse.
Aside from all of the givens (performance, disk space, etc...):
I would be OK with this as long as I still had multiple monitor support.
Without that, it is a no-go.
Basic failure to understand what a developer box is actually doing much of the time:
When building, it's chewing through processor and disk - especially disk.
When testing you're talking about having one or more instances of Visual Studio running (once you get past two things start to get interesting), database server, website/services plus all the other stuff (browsers with a lot of tabs open, notebook software, and heaven only knows what else) all spread across multiple monitors (at least two). Lots of cores, lots of memory please!
I can quite happily accept that there's an argument for virtualisation - a good dev box should be able to host multiple, concurrent VMs in order to isolate some of the above and to provide "clean" environments for testing. Note that that's the box for ONE developer hosting multiple VMs solely for the benefit of that one developer...
Our team has been developing on a remote server (no GUI stuff, plain old vim) for quite some time without problems. Granted, it requires a rather powerful server, and things sometimes get a bit slow if everyone starts compiling at the same time.
But as a bonus you are very mobile in terms of where you can develop from (we all have laptops), be it the office, home, or a sunny beach (that last one is probably an overstatement).
But yeah, that might not work so well for graphics-heavy apps, of course.
It sounds like your group is not offering the solutions that you have considered in a well documented format, otherwise corporate would not be shoving decisions down your throat. If you have a documented process for development, corporate might want to discuss changing the process with you, but as soon as you say, "this change would break our process and we would have to retool our development workflow", they will see the pain of the $$ in reworking the process and most likely back off. That said, once your process is documented, you should internally be ruthless about trying to make it more efficient and cost effective, and have an open mind about corporate's suggestions.
I assume you have machines already for SVN / TRAC, your Continuous Integration server, product demos, testing, etc. and that the only possible use your team could make of these servers is for personal VMs.
I do many things that peg my processor at 100%. Compiles certainly achieve this. Now imagine having to share that processor with 10 other developers. The loss in productivity will become quite apparent. If you have a multi-core PC, this won't be as painful. Get an Intel i7 and you probably won't even notice it when 8 people are logged in. Most programs (including my compiler) can't use more than 1 processor anyway.
That said, it's a viable solution to reduce costs. I used to work at a company that has since switched to these dumb terminals. It works fine. My university had HP UNIX machines that were dumb terminals. They logged into a server that split up processor ownership among however many people were logged in. What people would do is log into a server and check the number of people logged in. If there were too many, they'd search for the next one, because build times got noticeably slower. I'd never log into the easy-to-remember server names. =)
It definitely works, but also reduces productivity due to longer build times, especially when multiple people are building at the same time. Since productivity is such a difficult thing to quantify, it might be hard to argue your point.
Graphics acceleration might also be an issue if you need to do anything with animation, video, or image editing. You can't really test video playback through an RDP session since the framerate and/or color depth isn't high enough.
Regardless of performance, at my company we are moving to laptops as developer machines. The main advantage is that developers can bring their computers to meetings, conferences, etc. Also being able to sit next to a colleague when you're helping him with a problem, and having your own development environment available, is very valuable.
I'm curious to understand what could be the motivation behind the fine-grained detail of each virtual processor that the Windows 8 task manager seems to be focusing on.
Here's a screenshot (from here):
I know this setup could only exist in a non-standard, costly, important server environment (1TB RAM!), but what is the use of a heatmap? Or, setting processor affinity:
What I'm asking is: under what circumstances would a developer care whether specific processor X is being used more than processor Y (instead of just knowing that a single non-multithreaded process is maxing out a core, which would be better shown as a process heatmap rather than a processor heatmap), or care whether a process will use this or that processor (which I can't expect a human to guess better than an auto-balancing algorithm)?
In most cases, it doesn't matter, and the heatmap does nothing more than look cool.
Big servers, though, are different. Some systems have a NUMA (Non-Uniform Memory Access) architecture. In these cases, some processor cores are able to access some chunks of memory faster than other cores, and adjusting the process affinity to keep the process on the cores with faster memory access might prove useful. Also, if a processor has per-core caches (as many do), there may be a performance cost when a thread jumps from one core to another. The Windows scheduler should do a good job of avoiding switches like these, but I could imagine some strange workloads where you might need to force it.
These settings could also be useful if you want to limit the number of cores an application is using (say to keep some other cores free for another dedicated task.) It might also be useful if you're running a stress test and you are trying to determine if you have a bad CPU core. It also could work around BIOS/firmware bugs such as the bugs related to high-performance timers that plagued many multi-core CPUs from a few years back.
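To make the affinity idea concrete, here is a minimal sketch in C of what the same operation looks like programmatically on Windows, using the Win32 SetProcessAffinityMask call. The choice of mask (cores 0 and 1) is arbitrary and purely for illustration; it is not taken from the screenshots above.

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Bits 0 and 1 set: restrict this process to the first two logical CPUs.
       The value is an arbitrary example; a NUMA-aware choice would pick the
       cores closest to the memory the process actually uses. */
    DWORD_PTR mask = 0x3;

    if (!SetProcessAffinityMask(GetCurrentProcess(), mask)) {
        fprintf(stderr, "SetProcessAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }

    printf("Process is now restricted to cores 0 and 1.\n");
    return 0;
}
```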
I can't give you a good use case for this heat map (except that it looks super awesome), but I can tell you a sad story about how we used CPU affinity to fix something.
We were automating some older version of MS Office to do batch processing of Word documents, and Word was occasionally crashing. After a while of troubleshooting and desperation, we tried setting the Word process's affinity to just one CPU to reduce concurrency and hence reduce the likelihood of race conditions. It worked. Word stopped crashing.
One possible scenario would be a server that is running multiple VMs where each client is paying to have access to their VM.
The administrator may set the processor affinities so that each VM has guaranteed access to X number of cores (and would charge the client appropriately).
Now, suppose that the administrator notices that the cores assigned to ABC Company Inc.'s VMs are registering highly on the heatmap. This would be a perfect opportunity to upsell ABC Company Inc and get them to pay for more cores.
Both the administrator and ABC Company Inc win - the administrator makes more money, and ABC Company Inc experience better performance.
In this way, the heatmap can function as a decision support system which helps ABC Company Inc decide whether their needs merit more cores, and helps the administrator target their advertising at the customers that can benefit from it.
Is a gaming machine better for software development?
NO.
CPU
For software development, you need lots of cores. For gaming, you need fast but not necessarily many cores. This is slowly changing as newer games are being written to take advantage of multicore CPUs, but the general case is that most gaming machines focus on raw CPU power. For example, in my case, I'm an RoR developer, and during development I run: my editor, mongrel, solr, postgresql, and memcached. Most of the time I also have an open browser, a PDF editor, and iTunes.
RAM
Most games will be OK with 2-3GB of RAM.
For software development, especially web development - if you will be running multiple servers - you'll want at least 4GB, or even 8GB of RAM.
GPU
Top-of-the-line graphics cards for gaming can cost $500 or more. For software development, you can get away with the cheapest GPU you can get. The only aspect of the video card you'll want to concern yourself with is the capability to handle multiple large monitors.
It will actually be helpful if your development machine is so crippled (gaming-wise) that you can't play the games you like to play on that machine. No distractions! :)
I would say some aspects are the same between gaming machines and development machines, like large disks, a lot of memory, etc. So in that respect yes, a gaming machine would fit better than a low end desktop.
On the other hand, gaming machines tend to be tuned for raw performance rather than robustness. A development machine often does not need a state-of-the-art graphics card, nor does it want RAID 0 to speed up the disk. If one disk crashes, you lose all your work, so RAID 1 would be much better. The same holds for memory: ECC (or whatever it's called nowadays) is a bit slower but adds robustness.
One gotcha with powerful development machines is that they do not represent the non-functional requirements of the target execution environment. If you are not sufficiently aware of this, your software will run slow on a "normal" machine because it ran great on your supercomputer :-) One take on this is that development machines should always be a tad slower than the target machines, but that cuts into your development time. A better solution is to have slower machines in the test environment and a few slower machines in the development lab.
Some attributes of gaming machines can help developers, like having a good deal of memory, or a quad core processor (so you can, respectively, run VMs without hassle, and compile faster).
But a fast GPU won't do you much good, so there's no point in spending much money on it. Unless you plan on developing or playing games, of course.
Summing up: if you plan on using the PC for fun, get a reasonable GPU. If you don't, skip it and keep the rest just like you would. You won't regret it.
If you want to develop games, sure. I should know - I have experience with both.
Unless you're programming something graphics- or game-related, not necessarily; the video card is going to be underused otherwise. On the other hand, gaming machines tend toward the high end, making them ideal for many programming tasks.
I think so. I think the performance required for gaming will greatly help developers. The only overkill would be the graphics, unless you use big rendering software, in which case plenty of RAM and a strong GPU are a must.
A good CPU, lots of fast RAM, and a fast HD will do you lots of good.
What you'll need for software development is usually a machine with ample RAM, ample HDD space (and a fast HDD or set of HDDs to boot), a fast multi-core processor (very important if you're working with compiled languages, especially the likes of C++ which take a long time to compile compared to Java or C#) and preferably the ability to drive multiple monitors. For the latter, it's a case of the more the merrier as screen real estate is one of those things that you can never have enough of.
While a lot of this does indeed sound like the spec for a gaming machine due to its raw number crunching ability, the main difference is likely to be the graphics hardware. You don't need something that can render x million polygons per second on a single monitor if you're trying to drive 3x 24" monitors as 2D displays. In fact you probably don't want a usually rather noisy gamer spec video card that only shines when rendering 3D; you're more likely to get more out of a "pro" graphics card that can drive 4 monitors instead.
So yes, I'd think the spec is quite similar and there is a lot of overlap between the two but in the end a developer spec machine is not the same as a gaming rig.
A gaming machine without the fancy video card - I think that's more suitable for a programmer. (You can use the video card money to add more RAM, for example.)
Gaming machines are great for everything except your wallet ;-)
Programming WPF Shader Effects is one of those particular tasks where a gaming machine can actually allow you to do more while not working in game-development. Also, GPGPU work may benefit from fast memory transfer and fast GPU.
Our network team is thinking of setting up a virtual desktop environment (via Windows 2008 virtual host) for each developer.
So we are going to have dumb terminals/laptops and should be using the virtual desktops for all of our work.
Ours is a Microsoft shop and we work with all versions of .net framework. Not having the development environments on the laptops is making the team uncomfortable.
Are there any potential problems with that kind of setup? Is there any reason to be worried about this setup?
Unless there's a very good development-oriented reason for doing this, I'd say don't.
Your developers are going to work best in an environment they want to work in. Unless your developers are the ones suggesting it and pushing for it, you shouldn't be instituting radical changes in their work environments without very good reasons.
I personally am not at all a fan of remote virtualized instances for development work, either. They're often slower, you have to deal with network issues and latency, you often don't have as much control as you would on your own machine. The list goes on and on, and little things add up to create major annoyances.
What happens when the network goes down? Are your devs just supposed to sit on their hands? Or maybe they could bring cards and play real solitaire...
Seriously, though: unless you have virtually 100% network uptime, and your devs never work off-site (say, from home), I'm on the "this is a Bad Idea" side.
One option is to get rid of your network team.
Seriously though, I have worked with this same type of setup through VMWare and it wasn't much fun. The only reason why I did it was because my boss thought it might be worth a try. Since I was newly hired, I didn't object. However, after several months of programming this way, I told him that I preferred to have my development studio on my machine and he agreed.
First, the graphical interface isn't really clear with a virtual workstation since it's sending images over the network rather than having your video card's graphical driver render the image. Constant viewing of this gave me a headache.
Secondly, any install of components or tools required the network administrator's help which meant I had to hurry up and wait.
Third, your computer is going to process one application faster than your server is going to process many apps and besides that, it has to send the rendered image over the network. It doesn't sound like it slows you down but it does. Again, hurry up and wait.
Fourth, this may be specific to VMware, but the virtual disk size was fixed at 4GB, which my network guy seemed to think was enough. It filled up rather quickly. In order for me to expand the drive, I had to wait for the network admin to run Partition Magic on it, which screwed it up, and I had to have him rebuild my installation.
There are several more reasons, but I would strongly encourage you to protest if you can. Your company is probably trying to implement this because it's a new fad and it can be a way for them to save money. However, your productivity time will be wasted, and that needs to be considered as a cost.
Bad Idea. You're taking the most critical tool in your developers' arsenal and making it run much, much, much slower than it needs to, and introducing several critical dependencies along the way.
It's good if you ever have to develop on-site, you can move your dev environment to a laptop and hit the road.
I could see it being required for some highly confidential multi-client work - it gives you proof that you didn't leak any test data or debug files from one customer to another.
Down sides:
Few VMs support multiple monitors - without multiple monitors you can't be a productive developer.
Only VirtualBox 3 gets close to being able to develop for OpenGL/ActiveX on a VM.
In my experience Virtual environments are ideal for test environments (for testing deployments) and not development environments. They are great as a blank slate / clean sheet for testing. I think the risk of alienating your developers is high if you pursue this route. Developers should have all the best tools at their disposal, i.e. high spec laptop / desktop, this keeps morale and productivity high.
Going down this route precludes any home-working which may or may not be an issue. Virtual environments are by their nature slower than dedicated environments, you may also have issues with multiple monitor setups on a VM.
If you go that route, make sure you bench the system aggressively before any serious commitment.
My experience of remote desktops is that it's ok for occasional use, but seldom sufficient for intensive computations and compilation typical of development work, especially at crunch time when everyone needs resources at the same time.
Not sure if this will affect you, but both VMware and Virtual PC work very slowly when viewed via Remote Desktop. For some reason Radmin (http://www.radmin.com/) does a much better job.
I regularly work with remote development environments and it is OK (although it takes some time to get used to keeping track of which system you're working on at the moment ;)) - but most of the time I'm alone on the system.
As described before, I work in IT consultancy and move through various customer environments. It is natural to encounter a variety of security policies, and in most environments we have had to go through a security checklist before our laptops - our mobile development workstations - were authorized for connection to their network (most of the time just the development network).
There is one customer who does not allow external computers to connect to their network, so our laptops are.... expensive communication devices with mobile GSM modems. We are forced to use their desktop PCs for development, and those workstations are pretty old models with low RAM, single-core Pentium 4 CPUs, and cranky disks. Needless to say, development work is sub-optimal, especially when working with Visual Studio solutions that can range from 100 to 400 projects.
For small cases that can be isolated, we develop and test on our own laptops. But for the bigger cases, given that certain development servers like SeeBeyond and mainframe DB2 databases are only available on their network, and the prospect of copying hundreds of projects back and forth between machines is just ghastly, that does not seem like a technically sound idea.
I am not asking for tricks that violate the customer's policies (e.g. plugging the laptop in while masquerading as the desktop's MAC address). I'd just like to know what others have tried in order to retain some of the advantage and efficiency of their own hardware when working in such environments. Whenever I can, I try to duplicate the environment with virtual servers on my own laptop, but that only goes so far with Microsoft-only server solutions. Virtualizing non-Microsoft servers and software is a challenge.
That's tough. The root cause here is management that doesn't understand that there are real cost implications to their choice of environments.
Your problem is that while you may be billing by the hour, you probably aren't getting paid that way, so your customers' wasted time goes into the pockets of your boss and not to you. A lot of times, this presents a mild conflict of interest. Your company has about zero incentive to speed up your work, and your client doesn't want to make an infrastructure investment in what they see as a temporary engagement.
All I can say is that you have to run this up the flagpole with management. You have to show them that this is taking real time from the projects which could put your deliverable dates at risk, or worse, the reliability of these machines is such that it puts the delivery of the end product at risk as well. The onus is on you to make your management into a believer.
A gig of RAM at Crucial is thirty bucks. If nobody is willing to shell out 90 big ones for 3GB of RAM for your box, you have management that's actively working against you or does not respect you. If it comes to that, you've got bigger problems and need to look for your next employer.
One of the things that I did when I upgraded my current development environment was find links to productivity studies that showed how much productivity increased when the development environment was enhanced. In my particular case it was going from 2 to 3 monitors on my desktop. I was able to find 3-4 articles that described how much was gained by having the extra monitor. It seems self-evident to me that you'd want a newer, well-configured system for developers, especially since the cost of the hardware relative to the cost of the people is so small these days, but the bean counters often think differently. If you can go in armed with some industry studies that show productivity gains, I think it will be harder to dismiss your concerns as just complaints about the environment.
FWIW, I was disappointed to have to do the research for an upgrade that cost less than what the department would spend on paper in a month, but sometimes you have to do things that make no sense to you because it makes sense to someone else.
Write a decent proposal to your manager, that's about all you can do to rectify the solution. If he is unwilling or unable to fix the problem, or unwilling/unable to pass the proposal up to someone who can, then I'd say the current situation is what they've decided to use.
In that case, either live with it, or don't, ie. move on.
The proposal should contain:
A proposal for what you want done
Why it should be done
The consequences of doing it
And most importantly, the consequences of not doing it
List things like longer development time, or less testing, or less time to write quality code. Basically, a minor upgrade that doesn't cost much will improve the quality of the product tremendously.
I just went through this and found a pretty good solution: get a different job.
Just synchronize incrementally. You're not typing so much code per second that a GSM connection can't keep up with it. Make sure your projects are set up to use mocks/stubs wherever possible (see the sketch at the end of this answer).
Setting this up probably is beyond the capability of the systems administrators of your customer.
The dependency on the big databases should be reduced so you only need to run daily regression tests.
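As a hedged illustration of the mock/stub suggestion (every name here is invented; this is not the actual SeeBeyond or DB2 interface): put the remote data access behind a function pointer, so day-to-day work on the laptop runs against canned data and only the daily regression run needs the real database.

```c
#include <stdio.h>
#include <string.h>

/* Signature of a customer lookup; the real one would query the remote DB2
   database, the stub returns canned data for offline development. */
typedef int (*customer_lookup_fn)(const char *id, char *name, size_t name_len);

/* Real implementation, only linked in for on-site/regression builds. */
int db2_lookup_customer(const char *id, char *name, size_t name_len);

/* Stub implementation used on the development laptop. */
static int stub_lookup_customer(const char *id, char *name, size_t name_len)
{
    (void)id;                                  /* ignore the id, return a fixture */
    strncpy(name, "Test Customer", name_len - 1);
    name[name_len - 1] = '\0';
    return 0;
}

int main(void)
{
    customer_lookup_fn lookup = stub_lookup_customer;  /* swap for db2_lookup_customer on-site */

    char name[64];
    if (lookup("42", name, sizeof name) == 0)
        printf("Customer 42: %s\n", name);
    return 0;
}
```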
Given my background as a generalist, I can cover much of the area from analog electronics to writing simple applications that interface to a RDBMS backend.
I currently work in a company that develops hardware to solve industry-specific problems. We have an experienced programmer who has written business apps, video games, and a whole bunch of other stuff for PCs. But when I talk to him about doing low-level programming, he simultaneously expresses interest and doubt/uncertainty about joining the project.
Even when talking about PCs, he seems more comfortable operating at the language level than with the lower-level stuff (instruction sets, ISRs). Still, he's a smart guy, and I think he'd enjoy the work once he is over the initial learning hump. But maybe that's my own enthusiasm for low-level stuff talking... If he were truly interested, maybe he would already have started learning in that direction?
Do you have experience in making that software-to-hardware (or low-level software) transition? Or, better yet, of taking a software only guy, and transitioning him to the low-level stuff?
Edit:
P.S. I'd love to hear from the responders what their own background is -- EE, CS, both?
At the end of the day, everything is an API.
Need to write code for an SPI peripheral inside a microcontroller? Well, get the datasheet or hardware manual, and look at the SPI peripheral. It's one, big, complex API.
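As a hedged illustration of what that "API" looks like in C (every register name, address, and bit position below is made up; a real part's datasheet defines its own), sending a byte over SPI is just reads and writes of memory-mapped registers:

```c
#include <stdint.h>

/* Hypothetical memory-mapped SPI registers; the real addresses and bit
   layouts come from the chip's datasheet, not from any software manual. */
#define SPI_BASE   0x40013000u
#define SPI_CR     (*(volatile uint32_t *)(SPI_BASE + 0x00))  /* control register */
#define SPI_SR     (*(volatile uint32_t *)(SPI_BASE + 0x04))  /* status register  */
#define SPI_DR     (*(volatile uint32_t *)(SPI_BASE + 0x08))  /* data register    */

#define SPI_ENABLE (1u << 0)   /* hypothetical "peripheral enable" bit        */
#define SPI_TXE    (1u << 1)   /* hypothetical "transmit buffer empty" flag   */

void spi_send_byte(uint8_t byte)
{
    SPI_CR |= SPI_ENABLE;          /* turn the peripheral on                     */
    while (!(SPI_SR & SPI_TXE))    /* spin until the transmit buffer is free     */
        ;
    SPI_DR = byte;                 /* writing the data register starts the shift */
}
```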
The problem is that you have to understand the hardware and some basic EE fundamentals in order to comprehend what the API means. The datasheet isn't written by or for SW developers; it was written by hardware engineers, primarily for hardware engineers, and only maybe for software engineers.
So it's all from the perspective of the hardware (face it - the microcontroller company is a hardware company filled with hardware/asic engineers).
Which means the transition is by no means simple and straightforward.
But it's not difficult - it's just a slightly different domain. If you can implement a study program, start off with Rabbit Semiconductor's kits. There's enough software there so a SW guy can really dig in with little effort, and the HW is easy to deal with because everything is wrapped in nice little libraries. When they want to do something complex they can dig into the direct hardware access and fiddle at the lower level, but at the same time they can do some pretty cool things such as build little webservers or pan/tilt network cameras. There are other companies with similar offerings, but Rabbit is really focused on making hardware easy for software engineers.
Alternately, get them into the Android platform. It looks like a unix system to them, until they want to do something interesting, and then they'll have the desire to attack that little issue and they'll learn about the hardware.
If you really want to jump in at the deep end, go with an Arduino kit - cheap, free compilers and libraries, pretty easy to start off with, but you have to hook wires up to do anything interesting, which might be too big a hurdle for a reluctant software engineer. But a little help and a few nudges in the right direction and they will be absolutely thrilled to have a little LED display that wibbles* like the Knight Rider lights...
-Adam
*Yes, that's a technical engineering term.
The best embedded programmers I've worked with are EE trained and learned SW on the job. The worst embedded developers are recent CS graduates who think SW is the only way to solve a problem. I like to think of embedded programming as the bottom of the SW pyramid. It's a stable abstraction layer/foundation that makes life easy for the app developers.
"Hard" is an extremely relative term. If you're used to thinking in the tight, sometimes convoluted way you need to for small embedded code (for example, you're a driver developer), then certainly it's not "hard".
Not to "bash" (no pun intended) shell scripters, but if you write perl and shell scripts all day, then it might very well be "hard".
Likewise if you're a UI guy for Windows. It's a different kind of thinking.
Why embedded development is "hard":
1) The context may switch to an interrupt between any two machine instructions. Since high-level language constructs may map to multiple assembly instructions, this might even happen within a single line of code, e.g. long var = 0xAAAA5555. If var is accessed in an interrupt service routine, on a 16-bit processor it might only be half set (see the sketch at the end of this answer).
2) Visibility into the system is limited. You may not even have output to Hyperterm unless you write it yourself. Emulators don't always work that well or consistently (though they are way better than they used to be). You will have to know how to use oscilloscopes and logic analyzers.
3) Operations take time. For example, say your serial transmitter uses an interrupt to signal when it is time to send another byte. You could write 16 bytes to a transmit buffer, then clear interrupts and wonder why your message is never sent. Timing in general is a tricky part of embedded programming.
4) You are subject to subtle race conditions that occur only rarely and are very difficult to debug.
5) You have to read the manual. A lot. You can't make it work by fooling around. Sometimes 20 things have to be set up correctly to get what you are after.
6) The hardware doesn't always work or is easy to damage, and it takes a while to figure out that you broke it.
7) Software repairs in embedded systems are usually very expensive. You can't just update a web page. A recall can erase any profit you made on the device.
There are probably more but I've got this race condition to solve...
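Here is a minimal sketch of point 1, assuming a 16-bit target where a 32-bit store takes two instructions. The interrupt-masking calls are placeholders for whatever intrinsics your toolchain actually provides:

```c
#include <stdint.h>

extern void disable_interrupts(void);   /* placeholder, e.g. __disable_irq() */
extern void enable_interrupts(void);    /* placeholder, e.g. __enable_irq()  */

volatile uint32_t shared_flags;         /* read by main-line code and by an ISR */

void isr_uart_rx(void)                  /* hypothetical interrupt service routine */
{
    /* On a 16-bit CPU the assignment in main_loop is two 16-bit stores, so
       this ISR can run between them and see a half-set value such as
       0xAAAA0000 instead of 0xAAAA5555. */
    if (shared_flags == 0xAAAA5555u) {
        /* ... act on the fully updated value ... */
    }
}

void main_loop(void)
{
    /* Not atomic on a 16-bit target:
       shared_flags = 0xAAAA5555u;                                    */

    /* Make the two half-word stores appear atomic to the ISR: */
    disable_interrupts();
    shared_flags = 0xAAAA5555u;
    enable_interrupts();
}
```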
This is very subjective I guess, his reasons could be many. But if he's like me, I know where he's coming from. Let me explain.
In my career I've dedicated 6 years to the telecom industry, working a lot with embedding SDK middleware into low-end mobile phones etc.
Most embedded environments I've experienced are like harsh weather for a programmer, you constantly have to overcome limitations in resources etc. Some might find this a challenge and enjoy it for the challenge itself, some might feel close to "the real stuff" - the hardware, some might feel it limits their creativity.
I'm the kind who feels it limits my creativity.
I enjoy being back in the Windows desktop environment, flapping my wings with elaborate class designs, stretching my legs a few clock cycles extra, and using unnecessary amounts of memory for diagnostics, etc.
On certain embedded units in the past, I hardly had support for fseek() (an ANSI C standard file function). If lucky, a "watchdog" could give clues to where something crashed. Not to mention the pain of communicating with the user in single-threaded preemptive swamps.
Well, you know what I'm getting at. In my opinion it's not necessarily hard, but it's quite a leap, with potentially little reuse of your current experience.
Regards
Robert
There is a very real difference in mindset from user-level application development (ie, general purpose PC or Web applications) to hard deadline, real-time response application development (ie, the hardware/software interface).
Interrupts, instruction sets, context switching and hard resource constraints are relatively unknown to your average developer. I'm assuming here that your 'average developer' is not an Electrical/Electronic or other Engineer by training.
The transition for this developer you mention may be well outside his comfort zone. Some of us like stretching like that. Others of us may have decided the view isn't worth the climb.
Likewise, folks who've been in the hardware area (ie, Engineers) often have difficulty with the assumptions and language of software development.
These are gross generalities, of course, but hopefully give some insight.
He needs to be comfortable with the low-level stuff, but mostly for debugging and field issues. There is a serious learning curve depending on the architecture, but not impossible. On the other hand, the low-level code takes (in general) more time and debugging than higher-level code. So if you need to be going back to low-level all the time, then perhaps something isn't right in the design. Even for the embedded controls I've built, I spend the vast majority of time in high-level code. Although when you have issues, it is extremely advantageous to have a very good low-level knowledge.
I am an EE turned software engineer. I prefer programming low-level. Most classically trained software developers I know do not want to operate at this level; they want APIs to call. So for me it is a win-win: I create the low-level driver and API for them to use. There is a "new" degree, at least new since I went to college, called Computer Engineering. Hmm, it might be an electrical engineering degree rather than computer science, but it is a nice mix of software and digital hardware basics. The individuals I have worked with from this field are much more comfortable with low-level work.
If the individual is not comfortable or willing, then place them somewhere they are comfortable. Let them do documentation or work on the user interface. If all of the work at the company requires low-level work, then this individual needs to do it or find another job. Don't sugar-coat it.
I also think they will enjoy it once they get over the hump - the freedom you have at that level, not hindered by operating systems, etc. Recently I witnessed a few co-workers seeing, for the first time, their software run under simulation: every net within the processor and the other on-chip peripherals. No, you don't have a table in a GUI (debugger) showing the current state of memory; you have to look at the memory bus, look for the address you are interested in, look for a read or write signal, and then at the data bus. I worry about the day the silicon arrives and they no longer have this level of visibility. It will be like an addict in detox.
Well, I cut my teeth on hardware when I started reading Popular Electronics at age 14 – this was BEFORE personal computers, in case you were wondering (and if you weren't, well, now you know anyway). lol
I’ve done the low level bit-bang stuff on the 8048/51 microprocessor, done PIC’s and some other single chip variations and of course Rabbit Semiconductor. (great if you're into C). That’s great (and fun) stuff; Yes, there is a different way of looking at things – not harder, but some of that information is a bit harder to come by as it isn’t as discussed as the software issues. (Of course, this depends on the circle of friends with which you associate, eh).
But, having said all of this, I want to remind you of a technology that started to bridge the gap for programmers into the world of hardware and has since become a very MAJOR player and that is the .NET micro framework. You can find information on this technology at the following;
http://msdn.microsoft.com/en-us/embedded/bb267253.aspx
It addresses some of the same issues that .NET web development addressed in that you can use some (quite a bit, actually) of your existing PC based knowledge in the new environments – Some caution, of course, as your target machine doesn’t have 4 GIG of RAM – it may only have 64K (or less)
Starting in version 2.5 of the .NET micro framework, you have access to networking and web services – way kewl, eh? It doesn’t stop there … Want to control the lights in your house? How about a temp recording station? All with the skills you already have. Well, mostly -- Check out the link.
The SDK plugs into your VisualStudio IDE. There are a number of “Development Kits” available for a very reasonable amount of cash – Now, what would normally take a big learning curve in components, building a circuit board and wiring up “stuff” can be done reasonably easy with a dev kit and some pretty simple code – Of course, you may need to do the occasional bit bang operation, but more and more sensor folks are providing .NET micro framework drivers – so, the hardware development may be closer than you think…
Hope it helps...
I like both. Embedded challenges me and really gets me going in a visceral way. Making something that affects the macro physical world is very satisfactory. But I've had to do a lot of catch up on the electrical/electronics end, since my bachelor's is in computer science. I've a pretty generalist background, where I studied ai, graphics, compilers, natural language, etc. Now I'm doing graduate work in embedded systems. The really tough part is adjusting to the lack of runtime facilities like an operating system.
Low-level embedded programming also tends to include low-level debugging. Which (in my experience) usually involves (at least) the use of an oscilloscope. Unless your colleague is going to be happy spending at least some of the time in physical contact with the hardware and thinking in terms of microseconds and volts, I'd be tempted to leave them be.
Agreed, the term "hard" is quite relative.
I would say "different", as you would need to employ development patterns that you won't use in other kinds of environments.
The time constraints, for instance, could require a learning curve.
However, being curious is a good quality for a developer, isn't it?
You are right in that anyone with enough knowledge not to feel completely lost in an area (over the hump?) will enjoy the challenges of learning something new.
I myself would feel quite nervous moving down to the level of instruction sets, etc., as there is a huge amount of background knowledge needed to feel comfortable in that environment.
It may make a difference if you are able to support the developer in learning how to do this. Having someone there you can ask and talk through issues with is a huge help in that sort of domain change.
It may be worth having the developer assigned to a smaller project with others as a first step and see how that goes. If he expresses enthusiasm to try another project, things should flow on from there.
I would say it is not any harder, it just requires a different knowledge set, different considerations.
I think that it depends on the way that they program in their chosen environment, and the type of embedded work that you're talking about.
Working on an embedded linux platform, say, is a far smaller jump than trying to write code on an 8 bit platform with no operating system at all.
If they are the type of person that has an understanding of what is going on underneath the api and environment that they are used to, then it won't be too much of a stretch to move into embedded development.
However, if their world view stops at the high level api that they've been using, and they have no concept of anything beneath that, they are going to have a really hard time.
As a (very) general statement if they are comfortable working on multithreaded applications they will probably be ok, as that shares some of the same issues of data volatility that you have when working on embedded projects.
With all of that said, I've seen more embedded programmers successfully working in PC development than I have the reverse. (of course I might not have seen a fair cross section)
"But when I talk to him about doing low-level programming, he simultaneously express interest and also doubt/uncertainty about joining the project." -- That means you let him try and you prepare to hire someone else in case he doesn't pass the learning curve.
I began as a SW engineer; I'm now a HW one!
The important thing is to understand how it works and to be motivated!