Virtual desktop environment for development

Our network team is thinking of setting up a virtual desktop environment (via a Windows 2008 virtual host) for each developer.
So we would have dumb terminals/laptops and would be using the virtual desktops for all of our work.
Ours is a Microsoft shop and we work with all versions of the .NET Framework. Not having the development environments on the laptops is making the team uncomfortable.
Are there any potential problems with this kind of setup? Is there any reason to be worried?

Unless there's a very good development-oriented reason for doing this, I'd say don't.
Your developers are going to work best in an environment they want to work in. Unless your developers are the ones suggesting it and pushing for it, you shouldn't be instituting radical changes in their work environments without very good reasons.
I personally am not at all a fan of remote virtualized instances for development work, either. They're often slower, you have to deal with network issues and latency, you often don't have as much control as you would on your own machine. The list goes on and on, and little things add up to create major annoyances.

What happens when the network goes down? Are your devs just supposed to sit on their hands? Or maybe they could bring cards and play real solitaire...
Seriously, though: unless you have virtually 100% network uptime, and your devs never work off-site (say, from home), I'm on the "this is a Bad Idea" side.

One option is to get rid of your network team.
Seriously though, I have worked with this same type of setup through VMWare and it wasn't much fun. The only reason why I did it was because my boss thought it might be worth a try. Since I was newly hired, I didn't object. However, after several months of programming this way, I told him that I preferred to have my development studio on my machine and he agreed.
First, the graphical interface isn't really crisp with a virtual workstation, since it's sending images over the network rather than having your video card's driver render them. Constant viewing of this gave me a headache.
Second, any install of components or tools required the network administrator's help, which meant I had to hurry up and wait.
Third, your computer will process one application faster than your server will process many apps, and on top of that it has to send the rendered image over the network. It doesn't sound like it slows you down, but it does. Again, hurry up and wait.
Fourth, this may be specific to VMWare, but the virtual disk size was fixed at 4GB, which my network guy seemed to think was enough. It filled up rather quickly. For me to expand the drive, I had to wait for the network admin to run Partition Magic on it, which screwed it up, and I had to have him rebuild my installation.
There are several more reasons, but I would strongly encourage you to protest if you can. Your company is probably trying to implement this because it's a new fad and it can be a way to save money. However, your wasted productivity needs to be counted as a cost.

Bad Idea. You're taking the most critical tool in your developers' arsenal and making it run much, much, much slower than it needs to, and introducing several critical dependencies along the way.

One upside: if you ever have to develop on-site, you can move your dev environment to a laptop and hit the road.
I could also see it being required for some highly confidential multi-client work - it gives you proof that you didn't leak any test data or debug files from one customer to another.
Down sides:
Few VMs support multiple monitors - and without multiple monitors you can't be a productive developer.
Only VirtualBox 3 gets close to supporting OpenGL/ActiveX development in a VM.

In my experience virtual environments are ideal as test environments (for testing deployments), not development environments. They are great as a blank slate / clean sheet for testing. I think the risk of alienating your developers is high if you pursue this route. Developers should have all the best tools at their disposal, i.e. a high-spec laptop/desktop; this keeps morale and productivity high.
Going down this route also precludes any home-working, which may or may not be an issue. Virtual environments are by their nature slower than dedicated environments, and you may also have issues with multiple-monitor setups on a VM.

If you go that route, make sure you benchmark the system aggressively before any serious commitment.
My experience of remote desktops is that they're OK for occasional use, but seldom sufficient for the intensive computation and compilation typical of development work, especially at crunch time when everyone needs resources at the same time.
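A minimal benchmark sketch along those lines, to run on both the candidate virtual desktop and a developer's current workstation (the file count and size are arbitrary placeholders; it's the comparison between the two machines that matters). Run it from several sessions at once to simulate crunch time:

    # Crude disk benchmark for a candidate (virtual) dev machine.
    # Times a burst of small fsync'd writes - a rough stand-in for what a
    # compiler does - so the numbers can be compared against a physical box.
    import os
    import tempfile
    import time

    def bench_small_writes(n_files: int = 2000, size: int = 16 * 1024) -> float:
        payload = os.urandom(size)
        with tempfile.TemporaryDirectory() as d:
            start = time.perf_counter()
            for i in range(n_files):
                with open(os.path.join(d, f"obj_{i}.tmp"), "wb") as f:
                    f.write(payload)
                    f.flush()
                    os.fsync(f.fileno())  # force the write to hit the (virtual) disk
            return time.perf_counter() - start

    elapsed = bench_small_writes()
    print(f"2000 x 16 KB fsync'd writes: {elapsed:.2f}s ({2000 / elapsed:.0f} files/s)")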

Not sure whether this will affect you, but both VMWare and Virtual PC work very slowly when viewed via Remote Desktop. For some reason Radmin (http://www.radmin.com/) does a much better job.
I regularly work with remote development environments and it is OK (although it takes some time to get used to keeping track of which system you're working in at the moment ;) ) - but most of the time I'm alone on the system.

Related

Multiseat setup for fun and profit: hypervisors and other choices

I am a grad student, and I am considering setting up my dream home workstation/art tool/entertainment device/all-purpose everything. I'm wondering whether what I want to do is possible (and practical), and if so, to get some suggestions and warnings from people who know more about virtualization and hypervisors than I do:
Aim: Set up a 2-4 headed computing station that is optimized for using different OSes for the different tasks I do. I want to keep my work/play streams separated, and have control over the resources each one is allowed. For example, one head would be Windows 10 for audiovisual work, media playing, and maybe some gaming. Another head would use Linux and be used mainly for data science (mostly R and Python), and some hosting for purely local use (such as running an instance of the Galaxy bioinformatics server, which I only plan to access locally). Finally, I want a VM that is purely devoted to web browsing, probably some lightweight Linux distro.
I want each OS to have its own keyboard and monitor(s), but ideally I want to copy-paste between OSes. The idea is to just swivel my chair to move between operating systems, or even to have one person using each.
What I think I need:
A hypervisor with PCI, USB, and network controller pass-through.
Two video cards, one each for my Windows and Linux workstations (with the web-browsing VM using the on-chip CPU graphics). Obviously, a mobo and CPU that support full virtualization.
A USB card with multiple separate controllers, so that I can use a different controller for each OS. Something similar for network interface cards.
Separate SSDs for each OS and its apps.
Some sort of storage pool (probably ZFS-based) to hold the bulk of my files, shared so I can access them from either guest. Ideally, I'd like it to be in a separate enclosure, but I don't trust eSATA cables (they seem to fail frequently) and I care about speed of database access, so I'll probably put the drives inside the main case, even though that will make future migration more annoying.
Something like SPICE for KVM, so that I can copy and paste freely between OS's.
Is there anything I am overlooking?
What hypervisor or similar solution is best for what I want to do? I am leaning towards KVM, but am far from committed. I will consider paid solutions if there is a compelling reason to use them.
What are some pitfalls I should be wary of?
KVM will work ideally here; there are a lot of tutorials, and a lot of Intel-based configurations work like a charm.
ZFS by itself won't share your data with the guests; you need an NFS or Samba share on the host machine.
For copy-and-paste between the OSes, Synergy is the software for you.
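One pitfall to check before buying anything: PCI passthrough only works cleanly if the card's IOMMU group doesn't drag unrelated devices along with it. A minimal sketch for inspecting the groups (Linux-only; assumes the kernel was booted with the IOMMU enabled, e.g. intel_iommu=on or amd_iommu=on):

    # List IOMMU groups and the PCI devices in each. A device can only be
    # passed through to a VM cleanly if everything else in its group can
    # go with it (or the group contains just that device).
    from pathlib import Path

    groups = Path("/sys/kernel/iommu_groups")
    if not groups.is_dir():
        raise SystemExit("No IOMMU groups found - is the IOMMU enabled in BIOS/kernel?")

    for group in sorted(groups.iterdir(), key=lambda p: int(p.name)):
        print(f"IOMMU group {group.name}:")
        for dev in sorted((group / "devices").iterdir()):
            print(f"  {dev.name}")  # a PCI address like 0000:01:00.0

If the GPU you want to pass through shares a group with, say, a USB controller the host needs, you'll want a different slot or a different motherboard.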

Forming a web application cluster with 3 VMs running in the same physical box

Are there any advantages whatsoever to forming a cluster if all the nodes are virtual machines running inside the same physical host? Our small company just purchased a server with 16GB of RAM. I propose just setting up IIS on the box to handle outside requests, but our 'Network Engineer' argues that it will be better to create 3 VMs on the box and form a cluster with the VMs for load balancing. But since they are all in the same box, are there actual benefits to taking the VM approach rather than no VMs?
Thanks.
No; the overhead of running four operating systems would take a toll on performance. Besides, I believe all modern web servers (including IIS) are multithreaded, so they are optimised for performance anyway.
Maybe the Network Engineer knows something that you don't. Just ask. Use common sense to analyze the answer.
That said, running VMs always needs resources - but you might not notice. Doesn't make sense? Well, even if you attach the computer with a Gigabit link to the Internet, you still won't be able to process more data than the ISP gives you. If your uplink is 1MB/s, that's the best you can get. Any VM today is able to process that little trickle of data while being bored 99.999% of the time.
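A back-of-envelope illustration of that ceiling (the uplink speed and response size are made-up figures):

    # Rough ceiling on request throughput imposed by the uplink alone.
    uplink_bytes_per_sec = 1 * 1024 * 1024   # hypothetical 1 MB/s uplink
    avg_response_bytes = 50 * 1024           # hypothetical 50 KB average response

    max_rps = uplink_bytes_per_sec / avg_response_bytes
    print(f"Uplink-limited ceiling: ~{max_rps:.0f} requests/sec")
    # ~20 requests/sec - a trickle that one IIS instance, VM or not,
    # serves without breaking a sweat; the network is the bottleneck.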
Running the servers in VMs does have other advantages, though. First of all, you can take them down individually for maintenance. If the load surges because your company is extremely successful, you can easily add more VMs on other physical boxes and move virtual servers around with a mouse click. If the main server dies, you can set up a replacement machine and migrate the VMs without having to reinstall everything.
I'd certainly question this decision myself, as from a hardware perspective you obviously still have a single point of failure, so there is no benefit there.
From an application perspective it could be somewhat tenuously suggested that this would allow zero downtime deployments by taking VMs out of the "farm" one at a time but you won't get any additional application redundancy or performance from virtualisation in this instance. What you will get is considerably more management overhead in terms of infrastructure and deployment for little gain.
If there's a plan to deploy to a "proper" load balanced environment in the near future this might be a good starting point to ensure your application works correctly in a farm (sticky sessions etc). Although this makes your apparently live environment also a QA server, which is far from ideal.
From a performance perspective, 3 VMs on the same hardware are slower.
From an availability perspective, 2 VMs will give higher availability: you're better protected from application and OS failures, and you can perform maintenance on one node while the other is up.
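A quick sketch of that availability argument (the per-node figure is an assumption for illustration, and it only models failures inside the VMs; the shared hardware remains a single point of failure):

    # Availability of N redundant nodes where only one must be up.
    node_availability = 0.99  # hypothetical: each VM's software stack is up 99% of the time

    def cluster_availability(n: int, a: float = node_availability) -> float:
        return 1 - (1 - a) ** n  # down only if every node is down at once

    for n in (1, 2, 3):
        print(f"{n} node(s): {cluster_availability(n):.4%}")
    # 1 node: 99%, 2 nodes: 99.99%, 3 nodes: 99.9999% - against software
    # failures only; a host hardware failure still takes out all of them.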

Is it feasible to virtualize developer machines? [closed]

It's budgeting time and Corporate is balking at the cost of replacing the machine of a coworker who is due for it, needs it, and deserves it.
Our group is a small ISV/SAAS that exists as a division of a larger media group. We are not a cost center, we make money, even this year. We are owned by a mid-size media group whose business model is quite different, and seems driven only by reducing costs.
Our software stack is Visual Studio 2008, SQL 2008, on Windows Server 2008 (so that multiple root websites can be hosted and debugged on each dev's machine). Our target hardware is 3GHz quad-core workstation, 4GB RAM, and RAID 1 mirrored hard drives so that we are protected against the productivity loss of losing a developer hard drive.
Corporate wants to give us a couple powerful, but hand-me-down, decommissioned servers, and then each developer would have a virtual workstation on that server. The computers sitting on our desktops would be dumb terminals at $400-500 each.
I'm trying to be neutral but I doubt it's hard to discern my bias. I'd like to see real developer reactions to this, and I figure this is the best place to get that.
Please include arguments for or against, evidence if you've seen this tried and how well (or not) it has gone.
This sounds like a well-intentioned idea, but:
In my experience you need multiple cores, lots of memory, and fast disks to be productive in today's modern IDEs. I don't see that happening in a virtual environment with any economy. Individual boxes are still better.
It's also an issue of control. In a virtual environment I can imagine all kinds of restrictions. Will you still be able to install your own tools, for example?
Ultimately, it's misguided. If this idea increases build times by any substantial amount, any savings in hardware will quickly be erased by lost productivity. Conversely, money that is spent on decent individual machines for developers will quickly pay for itself over and over in reduced build times.
Good quality individual machines are an investment, not a cost.
Development is disk-bound, i.e. you spend your time waiting for builds, which are a disk-bound process most of the time. If you're all sharing a machine, build times will become much worse.
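A rough sketch of what that slowdown costs (every figure is a made-up assumption; plug in your own):

    # Does saving on hardware actually save money once builds slow down?
    devs = 10
    builds_per_day = 30           # hypothetical builds per developer per day
    extra_secs_per_build = 45     # hypothetical slowdown on shared virtual hardware
    cost_per_hour = 75.0          # hypothetical fully loaded developer cost, $/hour
    working_days = 220

    wasted_hours = devs * builds_per_day * extra_secs_per_build * working_days / 3600
    print(f"~{wasted_hours:,.0f} developer-hours/year, "
          f"~${wasted_hours * cost_per_hour:,.0f}/year in lost time")
    # ~825 hours and ~$61,875/year - usually far more than the hardware saved.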
Aside from all of the givens (performance, disk space, etc...):
I would be OK with this as long as I still had multiple monitor support.
Without that, it is a no-go.
Basic failure to understand what a developer box is actually doing much of the time:
When building, it's chewing through processor and disk - especially disk.
When testing, you're talking about having one or more instances of Visual Studio running (once you get past two, things start to get interesting), a database server, website/services, plus all the other stuff (browsers with a lot of tabs open, notebook software, and heaven only knows what else), all spread across multiple monitors (at least two). Lots of cores and lots of memory, please!
I can quite happily accept that there's an argument for virtualisation - a good dev box should be able to host multiple, concurrent VMs in order to isolate some of the above and to provide "clean" environments for testing. Note that that's the box for ONE developer hosting multiple VMs solely for the benefit of that one developer...
Our team has been developing on a remote server (no GUI stuff, plain old vim) for quite some time without problems. Granted, it requires a rather powerful server, and sometimes it starts to be a bit on the slow side if everyone starts to compile at the same time.
But as a bonus you are very mobile in terms of where you can develop from (we all have laptops), be it the office, home, or a sunny beach (that last one was probably an overstatement).
But yeah, that might not work so well for graphics-heavy apps, of course.
It sounds like your group is not offering the solutions that you have considered in a well documented format, otherwise corporate would not be shoving decisions down your throat. If you have a documented process for development, corporate might want to discuss changing the process with you, but as soon as you say, "this change would break our process and we would have to retool our development workflow", they will see the pain of the $$ in reworking the process and most likely back off. That said, once your process is documented, you should internally be ruthless about trying to make it more efficient and cost effective, and have an open mind about corporate's suggestions.
I assume you have machines already for SVN / TRAC, your Continuous Integration server, product demos, testing, etc. and that the only possible use your team could make of these servers is for personal VMs.
I do many things that peg my processor at 100%. Compiles certainly achieve this. Now imagine having to share that processor with 10 other developers. The loss in productivity will become quite apparent. If you have a multi-core PC, this won't be as painful. Get an Intel i7 and you probably won't even notice it when 8 people are logged in. Most programs (including my compiler) can't use more than 1 processor anyway.
That said, it's a viable solution to reduce costs. I used to work at a company that has since switched to these dumb terminals. It works fine. My university had HP UNIX machines that were dumb terminals. They logged into a server that split up processor ownership among however many people were logged in. What people would do is log into a server and check the number of people logged in. If there were too many, they'd search for the next one, because build times were noticeably slower. I'd never log into the easy-to-remember server names. =)
It definitely works, but also reduces productivity due to longer build times, especially when multiple people are building at the same time. Since productivity is such a difficult thing to quantify, it might be hard to argue your point.
Graphics acceleration might also be an issue if you need to do anything with animation, video, or image editing. You can't really test video playback through an RDP session since the framerate and/or color depth isn't high enough.
Regardless of performance, at my company we are moving to laptops as developer machines. The main advantage is that developers can bring their computers to meetings, conferences, etc. Also being able to sit next to a colleague when you're helping him with a problem, and having your own development environment available, is very valuable.

Working around development constraints in customer policy

As described before, I work in IT consultancy and move through various customer environments. It is natural to encounter a variety of security policies, and in most environments we have had to go through a security checklist before authorizing our laptops - our mobile development workstations - for connection to their network (most of the time just the development network).
There is one customer who does not allow external computers to connect to their network, so our laptops are... expensive communication computers with mobile GSM modems. We are forced to use their desktop PCs for development, and those workstations are pretty old models with low RAM, single-core Pentium 4 CPUs, and cranky disks. Needless to say, development work is sub-optimal, especially when working with Visual Studio solutions that can range from 100 to 400 projects.
For small cases that can be isolated, we develop and test on our own laptops. But for the bigger cases, given that certain development servers like SeeBeyond and mainframe DB2 databases are only on their network, and the prospect of copying hundreds of projects to and fro between machines is just ghastly, that does not seem like a technically sound idea.
I am not asking for tricks that violate the customer's policies (e.g. plugging in a laptop masquerading with a desktop's MAC address). I'd just like to know what others have tried to retain some of their advantage and efficiency with their own hardware when working in such environments. Whenever I can, I try to duplicate the environment with virtual servers on my own laptop, but that only goes so far with Microsoft-only server solutions. Virtualizing non-Microsoft servers and software is a challenge.
That's tough. The root cause here is management that doesn't understand that there are real cost implications to their choice of environments.
Your problem is that while you may be billing by the hour, you probably aren't getting paid that way, so your customers' wasted time goes into the pockets of your boss and not to you. A lot of times, this presents a mild conflict of interest. Your company has about zero incentive to speed up your work, and your client doesn't want to make an infrastructure investment in what they see as a temporary engagement.
All I can say is that you have to run this up the flagpole with management. You have to show them that this is taking real time from the projects which could put your deliverable dates at risk, or worse, the reliability of these machines is such that it puts the delivery of the end product at risk as well. The onus is on you to make your management into a believer.
A gig of RAM at Crucial is thirty bucks. If nobody is willing to shell out 90 big ones for 3GB of RAM for your box, you have management that's actively working against you or does not respect you. If it comes to that, you've got bigger problems and need to look for your next employer.
One of the things that I did when I upgraded my current development environment was find links to productivity studies that showed how much productivity increased when the development environment was enhanced. In my particular case it was going from 2 to 3 monitors on my desktop. I was able to find 3-4 articles that described how much was gained by having the extra monitor. It seems self-evident to me that you'd want a newer, well-configured system for developers, especially since the cost of the hardware relative to the cost of the people is so small these days, but the bean counters often think differently. If you can go in armed with some industry studies that show productivity gains, I think it will be harder to dismiss your concerns as just complaints about the environment.
FWIW, I was disappointed to have to do the research for an upgrade that cost less than what the department would spend on paper in a month, but sometimes you have to do things that make no sense to you because it makes sense to someone else.
Write a decent proposal to your manager; that's about all you can do to rectify the situation. If he is unwilling or unable to fix the problem, or unwilling/unable to pass the proposal up to someone who can, then I'd say the current situation is what they've decided to use.
In that case, either live with it, or don't, i.e. move on.
The proposal should contain:
A proposal for what you want done
Why it should be done
The consequences of doing it
And most importantly, the consequences of not doing it
List things like longer development time, or less testing, or less time to write quality code. Basically, a minor upgrade that doesn't cost much will improve the quality of the product tremendously.
I just went through this and found a pretty good solution: get a different job.
Just synchronize incrementally. You're not typing so much code per second that a GSM connection cannot keep up with it. Make sure your projects are set up to use mocks/stubs wherever possible.
Setting this up is probably beyond the capability of your customer's systems administrators.
The dependency on the big databases should be reduced so you only need to run daily regression tests.
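A minimal sketch of that incremental synchronization, assuming a plain one-way copy of changed files is enough (the paths are hypothetical; in practice rsync or source control would do this job better):

    # One-way incremental sync: copy only files whose content changed.
    import hashlib
    import shutil
    from pathlib import Path

    SRC = Path("C:/work/solution")       # hypothetical local working copy
    DST = Path("Z:/customer/solution")   # hypothetical share on the customer network

    def digest(path: Path) -> str:
        return hashlib.sha256(path.read_bytes()).hexdigest()

    copied = 0
    for src_file in SRC.rglob("*"):
        if not src_file.is_file():
            continue
        dst_file = DST / src_file.relative_to(SRC)
        # Copy only when missing or different - the "incremental" part.
        if not dst_file.exists() or digest(src_file) != digest(dst_file):
            dst_file.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src_file, dst_file)
            copied += 1
    print(f"Synchronized {copied} changed file(s)")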

Best Dual HD Setup for Development

I've got a machine I'm going to be using for development, and it has two 7200 RPM 160 GB SATA HDs in it.
The information I've found on the net so far seems to be a bit conflicted about which things (OS, Swap files, Programs, Solution/Source code/Other data) I should be installing on how many partitions on which drives to get the most benefit from this situation.
Some people suggest having a separate partition for the OS and/or Swap, some don't bother. Some people say the programs should be on the same physical drive as the OS with the data on the other, some the other way around. Same with the Swap and the OS.
I'm going to be installing Vista 64 bit as my OS and regularly using Visual Studio 2008, VMWare Workstation, SQL Server management studio, etc (pretty standard dev tools).
So I'm asking you--how would you do it?
If the drives support RAID configurations in your BIOS, you should do one of the following:
RAID 1 (Mirror) - Since this is a dev machine this will give you the fault tolerance and peace of mind that your code is safe (and the environment since they are such a pain to put together). You get better performance on reads because it can read from both/either drive. You don't get any performance boost on writes though.
RAID 0 - No fault tolerance here, but this is the fastest configuration because you read and write off both drives. Great if you just want as fast as possible performance and you know your code is safe elsewhere (source control) anyway.
Don't worry about multiple partitions or OS/data configs, because on a dev machine you sort of need it all anyway, and you shouldn't be running heavy multi-user databases or anything like a server anyway.
If your BIOS doesn't support RAID configurations, however, then you might consider doing the OS/Data split over the two drives just to balance out their use (but as you mentioned, keep the programs on the system drive because it will help with caching). Up to you where to put the swap file (OS will give you dump files, but the data drive is probably less utilized).
If they're both going through the same disk controller, there's not going to be much difference performance-wise no matter which way you do it; if you're going to be running lots of VMs, I would split one drive for OS and swap / programs and data, then keep all the VMs on the other drive.
Having all the VMs on an independent drive lets you move that drive to another machine seamlessly if the host fails, or if you upgrade.
Mark one drive as your warehouse; put all of your source code, data, assets, etc. on there and back it up regularly. You'll want this to be stable and easy to recover. You can even switch My Documents to live here if you want.
The other drive should contain the OS, drivers, and all applications. This makes it easy and secure to wipe the drive and reinstall the OS every 18-24 months as you tend to have to do with Windows.
If you want to improve performance, some say put the swap on the warehouse drive. This will increase OS performance, but will decrease the life of the drive.
In reality it all depends on your goals. If you need more performance then you even out the activity level. If you need more security then you use RAID and mirror it. My mix provides for easy maintenance with a reasonable level of data security and minimal bit rot problems.
Your most active files will be the registry, page file, and running applications. If you're doing lots of data crunching then those files will be very active as well.
I would suggest that if 160GB total capacity will cover your needs (plenty of space for the OS, applications, and source code; it just depends on what else you plan to put on it), then you should mirror the drives in a RAID 1, unless you will have a server that data is backed up to, an external hard drive, an online backup solution, or some other means of keeping a copy of the data on more than one physical drive.
If you need to use all of the drive capacity, I would suggest using the first drive for the OS and applications and the second drive for data. Purely for the fact that, if you change computers at some point, the OS on the first drive doesn't do you much good and most applications would have to be reinstalled, but you could take the entire data drive with you.
As for dividing off the OS, a big downfall of this is not giving the partition enough space, and eventually you may need to use partitioning software to steal some space from the other partition on the drive. It never seems to fail: you allocate a certain amount of space for the OS partition, right after install you have several gigs of free space so you think you are fine, but as time goes by things build up on that partition and you run out of space.
With that in mind, I still typically do use an OS partition, as it is useful when reloading a system: you can format that partition, blowing away the OS but keeping the rest of your data. One way to keep the space build-up from happening too fast is to change the location of your My Documents folder and change environment variables for items such as TEMP and TMP. However, some things just refuse to put their data anywhere besides the system partition. I used to use 10GB; these days I go for 20GB.
Dividing off your swap space can be useful for keeping drive fragmentation down when letting your swap file grow and shrink as needed. Again, though, this is a matter of guessing how much swap you need, which will depend a lot on the amount of memory you have and how much stuff you will be running at one time.
For the posters suggesting RAID - it's probably OK at 160GB, but I'd hesitate for anything larger. Soft errors in the drives reduce the overall reliability of the RAID. See these articles for the details:
http://alumnit.ca/~apenwarr/log/?m=200809#08
http://permabit.wordpress.com/2008/08/20/are-fibre-channel-and-scsi-drives-more-reliable/
You can't believe everything you read on the internet, but the reasoning makes sense to me.
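To make the reasoning concrete, here is a sketch of the usual unrecoverable-read-error argument (the error rate is the spec commonly quoted for consumer drives; treat it as an assumption):

    # Chance of hitting at least one unrecoverable read error (URE)
    # while reading a whole drive, e.g. during a RAID rebuild.
    ure_per_bit = 1e-14  # commonly quoted consumer-drive spec: 1 error per 1e14 bits read

    def p_error(capacity_gb: float) -> float:
        bits = capacity_gb * 1e9 * 8
        return 1 - (1 - ure_per_bit) ** bits

    for gb in (160, 1000, 2000):
        print(f"{gb:>5} GB: {p_error(gb):.1%} chance of a URE on a full read")
    # 160 GB: ~1.3%; 1 TB: ~7.7%; 2 TB: ~14.8% - which is why bigger
    # consumer drives make the RAID reliability math worse.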
Sorry I wasn't actually able to answer your question.
I usually run a box with two drives. One for the OS, swap, typical programs and applications, and one for VMs, "big" apps (e.g., Adobe CS suite, anything that hits the disk a lot on startup, basically).
But I also run a cheap fileserver (just an old machine with a coupla hundred gigs of disk space in RAID1), that I use to store anything related to my various projects. I find this is a much nicer solution than storing everything on my main dev box, doesn't cost much, gives me somewhere to run a webserver, my personal version control, etc.
Although I admit, it really isn't doing much I couldn't do on my machine. I find it's a nice solution as it helps prevent me from spreading stuff around my workstation's filesystem at random by forcing me to keep all my work in one place where it can be easily backed up, copied elsewhere, etc. I can leave it on all night without huge power bills (it uses <50W under load) so it can back itself up to a remote site with a little script, I can connect to it from outside via SSH (so I can always SCP anything I need).
But really the most important benefit is that I store nothing of any value on my workstation box (at least nothing that isn't also on the server). That means if it breaks, or if I want to use my laptop, etc. everything is always accessible.
I would put the OS and all the applications on the first disk (1 partition). Then, put the data from the SQL server (and any other overflow data) on the second disk (1 partition). This is how I'd set up a machine without any other details about what you're building. Also make sure you have a backup so you don't lose work. It might even be worth it to mirror the two drives (if you have RAID capability) so you don't lose any progress if/when one of them fails. Also, backup to an external disk daily. The RAID won't save you when you accidentally delete the wrong thing.
In general I'd try to split up things that are going to be doing a lot of I/O (such as if you have autosave in VS going off fairly frequently). Think of it as a sort of I/O multithreading.
I've observed significant speedups by putting my virtual machines on a separate disk. Whenever Windows is doing something stupid in the VM (e.g., indexing yet again), it doesn't thrash my Mac's disk quite so badly.
Another issue is that many tools (Visual Studio comes to mind) break in frustrating ways when bits of them are on the non-primary disk.
Use your second disk for big random things.