OK, not that kind of hostile. I'm curious to hear how people deal with developing on big corporate networks that mandate all kinds of developer-unfriendly services and policies on desktops (think ProQuota, over-zealous virus scanners, no local admin, no access to SO). I've previously seen virtual LANs used effectively, or completely separated parallel networks, but these aren't always practical. Any other tips?
The most important thing (if possible) is to recruit support from your boss.
Unless he's a PHB, he will often understand the impact of these restrictions on you, your team, and indirectly on his own success. If your requests are reasonable, he can provide a buffer if you do have to go up against IT. In addition, if the entire team or other developers seek the same policies, this "group bargaining power" can be used to create special policies.
Generally speaking, large corporations are over-zealous about legal issues and information security. However, IT departments generally hate dealing with numerous support requests from the same person. If you show a clear harm to productivity from a policy (e.g., you use a lot of temp files and the antivirus hits them), or that your program has to be installable from administrator mode, they will sometimes reach a compromise. You may have to sign something stating you won't use administrative access on your machine to install illegal software, but you'd still get admin.
In the few cases where I have gone for job interviews (I'm mostly in academia but have worked some in industry), one of my greatest concerns was the amount of control I had over my computing environment, from hardware, to software, to administrative rights. If I cannot be trusted as a developer to manage my own Windows box, I don't feel I should be trusted with a mission-critical system.
I haven't tried this myself, but I once saw someone say that central IT gave in and let him administer his own workstation, after he complied with the policies by submitting a change request form listing the first 300 things he wanted changed on his workstation.
Anything that interferes with you doing your job is good to bring up in meetings.
Ex:
The virus scanner runs 4 times a day while I am at work. During each run my compiles take 5 times as long, and my other development tools slow to a crawl.
The web filters are overzealous. I have attempted to access sites x, y, and z for extra development information, and have been unable to. The time it took to find a good resource was doubled because of this.
And so on.
Work within the (hostile) rules and give up, quit and find somewhere more enlightened, or try to change the organization: your choice.
If you decide to try to change things, don't go up against IT alone; that will just make you the "troublemaker" and you will never get anywhere. Try to get support from your boss and other developers. If you can't get support, you may be better off looking for a new job.
I would explain your issues to your boss and/or sysadmin. If they are receptive and agree it's a good idea to let you have control over your workstation(s), then problem solved; if not, I would walk from the project/job before your probationary period is over.
I was in a similar situation once at a large government corporation, and it turned out that management not being willing to unlock developers' boxes was just the tip of the iceberg of a massive bureaucracy. The project ended up being a huge failure, and by the time I left, half of the IT department (not just the project team) had quit.
Just my 2 cents
Yeah. Leave. If your organization is not willing to give you the normal tools that any normal professional programmer should be able to use, then it's time to up your networking skills and update your resume.
Bringing your own laptop with the necessary tools is always a good way to overcome these man-made hurdles.
Bring your own laptop but DON'T connect it to the network (and make it obvious that you do not intend to).
Copy stuff (e.g., Visio diagrams) over via a USB drive.
If they don't allow USB, you can access the internet from outside and email the files. Using OWA via the browser sometimes gives you more rights to send files.
Sounds like they're doing you a favor. Your code is guaranteed to run as a normal user, doesn't try to write to program files or other sensitive directories, is aware of what issues virus scanners bring to the table, and can handle other issues you wouldn't have normally encountered until installing your apps on a client machine.
As for no access to SO, I'd quit.
Our workplace required a full virus scan every day, so in the morning, when I hooked my laptop up, there was a 2-hour wait before I could do any work.
I finally found a solution. MSVC 6 has a built-in debugger. I went into Task Manager, picked the McAfee scanner process, and told it to debug. This fired up MSVC 6, and the scanner froze at a breakpoint. I hit reset, and the problem was gone. About 6 months later they removed the policy and all was good.
I'm having an issue with a web application I am responsible for maintaining.
The system experiences regular bugs, and our support vendors are always asking us to see if we can "replicate the error in UAT". This is obviously a reasonable request. A lot of the time, for various reasons (some of which are clear, some of which are not), these errors are not present in UAT. This lack of bug reproducibility in a testing environment is adding huge amounts of friction to the bug resolution process.
There are 3 key pieces of our system architecture where these bugs are flaring up (the CMS, the API layer, and the database). I am proposing we set up a system job that perpetually clones these 3 parts of the system into a sandboxed test environment. This cloning would happen periodically (e.g., once every 24 hours) and automatically.
Is there a technical term for this sort of environment? Is this an established method of helping diagnose system issues? Is there somewhere I can read up on the industry best practices for establishing something like this? Thanks.
The technical term for this kind of process is replication. It is often done for some systems, like databases, but normally not for testing purposes; rather, it is done to increase availability, with the replica used as a failover spare.
An exact copy of a production system, with all the data, is not something you'll find often, due to the high demand on resources. Also, at some point the two systems have to differ: most systems (that I know of) have tons of interfaces, and you just can't copy a complete system.
Also: you only need the copy of the production system when you are actually debugging an issue. And if you are in the middle of that, you probably don't want everything to go away and get replaced by a new copy.
So instead I would recommend setting up scripts that allow you to obtain a copy of the relevant parts on demand.
You might also want to consider how you could modify your system to make it easier to set up a copy.
For example, once you have all the setup automated (with Chef/Docker or similar), you should be able to stand up the same system anywhere you want, and then you just have to get the production data over.
Which brings up an interesting point. Production data often contains secret information (because it is vital to the business, or because it is personal data). You don't want this kind of stuff hanging around in a test system everybody can access.
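To make that concrete, here is a minimal sketch of such an on-demand clone script, in Python. Everything in it is hypothetical: the connection strings, the `customers.email` column being masked, and the choice of PostgreSQL (it assumes the standard `pg_dump`/`psql` tools are installed); the point is just the shape of the clone-then-anonymize flow.

```python
#!/usr/bin/env python3
"""On-demand clone of selected production parts into a sandbox.

Hypothetical sketch: hosts, database names, and the masked column
are placeholders. Assumes PostgreSQL's pg_dump/psql are installed.
"""
import subprocess

PROD = "postgresql://readonly@prod-db.example.com/appdb"
SANDBOX = "postgresql://admin@sandbox-db.example.com/appdb_test"

def clone_database():
    # Stream a dump of production straight into the sandbox.
    dump = subprocess.Popen(["pg_dump", "--no-owner", PROD],
                            stdout=subprocess.PIPE)
    subprocess.run(["psql", SANDBOX], stdin=dump.stdout, check=True)
    dump.stdout.close()
    dump.wait()

def anonymize():
    # Mask personal data so secrets don't hang around in a test
    # system everybody can access (hypothetical table/column).
    subprocess.run(
        ["psql", SANDBOX, "-c",
         "UPDATE customers SET email = 'user' || id || '@example.com';"],
        check=True)

if __name__ == "__main__":
    clone_database()
    anonymize()
```

Run on demand (rather than from a perpetual job), this gives each debugging session a fresh, sanitized copy without yanking away the one you're in the middle of using.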
I have the following dilemma: my clients (mom-n-pop pawnshops) have been using my mgmt. system, developed with ISQL, for over 20 years. Throughout these two decades, I have customized the app to each client's desires, or when changes in laws/regulations have required it. Most clients are single-user sites. Some have multiple stores, but have never wanted a distributed db; they don't trust the reliability or security of the internet or any other type of networking. So, they all use Standard Engines. I've been able to work around some SE limitations and done some clever tricks with ISQL and SE, but sooner or later, new laws may require images of pawnshop customers, merchandise, electronic transmission, etc., and then it will be time to upgrade to IDS, re-write the app in 4GL, or change to another RDBMS. The logical and easiest route would be IDS/4GL; however, when I mentioned Linux or Unix-like platforms to my clients, they reacted negatively and demanded a Windows platform, so the easiest solution could be 4Js, Querix, etc., or Access, Visual FoxPro, or something else. Anyone have suggestions?
This whole thing probably comes down to a couple of issues that you'll have to deal with.
The first thing is: what application programming and development language are you willing to learn and work with?
The other thing is: what kind of Internet capabilities do you want?
So for example while looking at a report do you want to be able to click on a button and have the report converted to a PDF document, and then launch the e-mail client with that PDF attached?
What about after they enter all the information into the system? Perhaps each store would like its own miniature web site where people in town could check what's in stock instead of having to phone up the store and ask if they have a $3 used lighter (the labor of answering the phone and checking on these cheap items costs MORE than the profit from selling the item, so the web is really great for this type of scenario).
The other issue is what kind of interface you want. I assume you currently have some type of green-screen or text-based interface? Or perhaps over the years you converted over to a GUI (graphical user interface).
If you're still on green screens (text-based), you now have to put a considerable amount of time and effort into the layout and into how your screens will work in a graphical system. I can remember when going from green screens to color: all of a sudden, the choices and effort of picking the correct colors and layouts for a screen actually increased the workload by quite a bit. And when I went from color text screens to a graphical interface, again all of a sudden we were presented with a large number of new controls and colors, plus large choices in terms of different fonts and sizes.
And now with the web, not only do you deal with different kinds of button styles (round, oval, shading, shadows, glow effects), but on top of all those hover effects and shading effects, you have to get into some pretty serious decisions about what kind of colors (theme) your software will adopt for the whole web site.
This really comes down to how much learning time you are willing to invest in new tools, and how much software you can and will produce for a given amount of time and effort.
I'm quite partial to RAD tools when you get down into the smaller business marketplace. Most smaller businesses cannot afford the rates for a .NET developer (it's not so much the rate as the time it takes to build an application). So, using MS Access is a good choice in the smaller business marketplace. Access is still a good 3 to 5 times more productive than many of the other tools in the marketplace. So a quote from a .NET developer to build something might be $12,000, and the same thing in Access might be $3,000. A small business simply cannot afford to pay you to write unit testing code; that type of extra cost is just not going to happen on smaller-scale projects.
The other big issue you have to deal with is what kind of report writing system you are going to build into the system. This is another reason I like Access for smaller business applications: the report writer is really fantastic. Access reports have a whole bunch of abilities to bake in connections from forms and queries, and to pass filters and parameters into those reports. Often the forms and queries you've already spent time building can talk to reports with parameters and pass in values in a way that, again, really reduces the workload (development costs).
I think the number one issue you'll have to address here, however, is your web-based strategy. You absolutely have to have one. Even if you build the front-end part in Access, you might still want to use a free edition of SQL Server for the back end. There are several reasons for this, but one is that it then makes it easy to connect multiple stores up over the Internet.
Another advantage of putting your data in some type of server-based system is that you can then set up a web server for all the stores to use, and build a tiny little customized system that allows each store to put its products and listings online (but they use YOUR web server, or one that you pay $15 per month to host all of those customers). This web part could be an optional component that perhaps not all customers want. It would work off the data they have to enter into the system anyway.
One great advantage of adopting these web-based systems is that not only do they allow these stores to serve their customers far better, they also open the door for you to convert your software into a monthly-fee-based system, or at least some part of it, such as the optional web hosting you offer.
When I converted some of my longtime applications from green-screen mainframe-type software into Windows desktop applications, it opened up large markets for me. With remote desktop, downloadable software, and updates issued from a web site, these new systems make all of the nuts-and-bolts parts of delivering software very easy, especially for supporting customers in different cities whom you've never met face to face.
So, if you're still talking primarily single-user and one location, Access will reduce your development costs by a lot. It really depends on how complex and rich an application you are talking about. If the size and scope of the project are beyond one developer, then you're talking more about developer scaling (source code control, object development methodology, unit testing, the cost and time of setting up a server-based database system like SQL Server, etc.). There is certainly a tipping point here; when you go beyond that tipping point of cost, time, and complexity, I actually don't recommend Access. So this all comes down to the right horse for the right course.
Perhaps, at the end of the day, it really comes down to which application development system you are willing to invest the time to learn.
Look at Aubit4GL - that is, I believe, available on (or can be compiled for) Windows.
Yes, IDS is verging on overkill for a single-user system, but if SE doesn't provide all the features you need, or anticipate needing in the near future, it is a perfectly sensible choice. However, with a modicum of care, it can be set up to be (essentially) completely invisible to the user. And for a non-stressful application like this, the configuration is not complicated. You, as the supplier, would need to be fairly savvy about it. But there are features like silent install such that you could have your own installer run the IDS installer to get the software onto the customer's machine without extra ado. The total size of the system would go up - IDS is a lot bigger on disk than SE is (but you get a lot more functionality). There are also mechanisms to strip out the bigger chunks of code that you won't be using - in all probability. For example, you'd probably use ON-Tape for the backups; you would therefore omit ON-Bar and ISM from what you ship to customers.
IDS is used in embedded systems where there are no users and no managers working with the system. The hardware sits in the cupboard (closet) and works, communicating over the network.
It's good to see folks still getting value out of "old school" Informix tools. I was never adept at Perform, but the ACE report writer always suited me. We skipped Perform and went straight for FourGen, and I lament that I've never been as productive as I was with FourGen. It had its own kind of elegance, from its code generators to its funky, but actually quite powerful, stand-alone menu system.
I appreciate the modern UI dynamics, but, damn, is it hard to write applications today. It's not just the tools, but simply industry requirements and the like (such as you may be experiencing in your domain). And the Web is just flat-out murder.
I guess part of it is that since most "green screen" apps look the same, it's hard to make one that looks bad! With GUIs and the Web etc., you can't simply get away with a good field order and the labels lining up.
But, alas, such as it is, that is what we have.
I have not used it in, what now, 15 years, but you may also want to look at Alpha 5. It was a pretty powerful, but not overly complicated, database development package, and (apparently) still going strong.
I wouldn't be too afraid of IDS. It runs pretty simply. Out of the box, with zero or little tweaking, the DB works and is efficient, and it used to be pretty trivial to install. It was no SE, in that SE's access was tied to the application (using a library) whereas IDS is an independent server. But operationally it's really straightforward, especially for an app like what you're talking about. I appreciate that it might be overkill, but even today the resource requirements won't necessarily be insane. There's a lot of functionality, of course, and flexibility that you won't use. But frankly, beyond "flat file" DBase-style databases, pretty much ALL of the server-based SQL databases are very powerful, capable, and potentially complicated. But they don't have to be. They can still be used "simply" and easily (well, save for Oracle; Oracle can't do anything "simply").
As far as exploring other solutions, don't be too afraid of the "OOP" stuff; most applications, while they leverage OOP libraries, aren't really OOP themselves (they can be, they just typically aren't, because they simply don't need to be). The biggest issue with many of the OOP systems is that they're simply too finely structured, dealing with events at far too low a level. While many programs need access to that fine level of control, most applications, particularly ones much like yours, do not. So the extra flexibility simply gets in the way or creates more boilerplate.
That said, you shouldn't be frightened away from them per se, citing lack of expertise. They can be picked up reasonably quickly. But I would certainly exhaust the more specialized tools (like Alpha 5, or Access, etc.) first to see if they don't offer what you want.
As for Visual FoxPro: it was and remains a peerless tool (despite flak from people who know little about it). It has a fast, native database engine, built-in SQL, a powerful report designer, and so on. But you also have to consider that Microsoft support for it will be dropped in 2014, there will never be a 64-bit version, and so on. And the file-locking method it uses will be increasingly flaky on future versions of Windows, IMO.
I am in the planning stages of writing a new program, and there's a feature I'd like to include, but I was wondering if it's too much for non-technical users to handle, or if it invites potential problems.
My program is a C# app with a SQL db. I'd like to use a version of SQL that would allow the db to be accessed from multiple computers (btw, I'd definitely build it so that only one computer at a time could have the db open.) The user would be able to install the program on multiple computers, but if they tried to open it while it's open on another computer, they'd get a message that it can't be opened at this computer until it's closed on the other one (and I don't feel bad about that restriction.)
For non-technical users, even a standard next-next-next setup can be confusing and intimidating. I was wondering if including this ability might result in the install being too complicated or if there are too many other things that could go wrong, making the feature not worth the potential down side. (I want to keep support and troubleshooting as low as possible.) I appreciate your opinions.
I really can't envision a scenario where you would have complete, singular access to a database unless your users were very technical and performing surgical-like techniques with the database.
If this is a regular application for multiple users, then the application should be coded to handle multiple users without the expense of queuing them serially.
If you have no problems with this restriction of one-user-at-a-time, then a database is really a bad idea, as almost all modern database systems are meant for access by multiple users at one time.
If I understand correctly, to get the multiple-computer feature you need to add a "next-next-next" setup, to install the database?
I don't think that's going to be an issue for non-technical users; most programs do that anyway. However, those same non-technical users might freak out when their application firewall tells them that "application SQL Server is trying to accept connections from the outside".
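If the one-computer-at-a-time restriction stays, one common way to enforce it is an advisory lock row that each instance must claim atomically when it opens the database. The asker's app is C#, but the pattern is the same in any language; here is a minimal sketch in Python against SQLite (the table and file names are hypothetical, and the same idea carries over to a server database):

```python
import socket
import sqlite3  # stand-in here for the shared database server

def try_acquire(conn):
    """Atomically claim the single application lock.

    Returns False if another computer already holds it.
    The 'app_lock' table name is hypothetical."""
    conn.execute("""CREATE TABLE IF NOT EXISTS app_lock (
                        id INTEGER PRIMARY KEY CHECK (id = 1),
                        holder TEXT)""")
    cur = conn.execute(
        "INSERT INTO app_lock (id, holder) VALUES (1, ?) "
        "ON CONFLICT (id) DO NOTHING",
        (socket.gethostname(),))
    conn.commit()
    return cur.rowcount == 1  # a row was inserted, so we won the lock

def release(conn):
    conn.execute("DELETE FROM app_lock WHERE holder = ?",
                 (socket.gethostname(),))
    conn.commit()

conn = sqlite3.connect("shared.db")
if try_acquire(conn):
    try:
        print("Database opened; normal work happens here.")
    finally:
        release(conn)
else:
    print("Can't open: the database is in use on another computer.")
```

One caveat worth designing for: if the app crashes without releasing the lock, the row sticks around, so a real version would store a timestamp or heartbeat alongside the holder and treat stale locks as expired.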
Should developers be limited to certain applications for development use?
For most, the answer would be: as long as the development team agrees, it shouldn't matter.
For a company that is audited for security certifications, is there a method that balances the risk to the company against the flexibility and performance of the developers?
Scope
coding/development software
build system software
3rd party software included with distribution (libraries, utilities)
(Additional) Remaining software on workstation
Possible solutions
Create a white-list of approved software, where a developer must ask for approval before using a new piece of software. Approval would be based on business purpose/security risk.
Create a black-list for software. Developers list all software used; a review board periodically goes over the list.
Has anyone had to work at a company that restricted developer tools beyond the team setting? How did they handle the situation?
Edit
Cleaned up question. Attempted to make less argumentative.
Limiting the software that developers can use on their work machines is a fantastic idea. This way, all the developers will quit, and then the company won't have to spend as much money on salaries and equipment, resulting in higher profits.
Real answer: NO!!!
No, developers should not be limited in the software they use, because it prevents them from successfully doing their jobs. Think about how much you are paying your team of developers: do you really want all that money to go spiraling down the drain because you've artificially prevented them from solving problems?
1) The company locks down the PC and treats the developer as being no more competent than a secretary
What happens when the developer needs to do something with administrative permissions, e.g. register a COM object, restart IIS, or install the product they're building? You've just shut them down.
2) Create a white-list of approved software...
This is also impractical due to the sheer amount of software. As a .NET developer I regularly (at least once per week) use upwards of 50 distinct applications, and am constantly evaluating newer upgrades/alternatives for many of them. If everything must go through a whitelist, your "approval" staff are going to be utterly swamped by just one or two developers, let alone a team of them.
If you take either of these actions, you'll achieve the following:
You'll burn giant piles of time and money as the developers sit on their thumbs waiting for your approval team, or do things the long, slow, tedious way because they weren't allowed to install a helpful tool.
You'll make yourself the enemy of the development department (not good if you want your devs to actually do what you ask them to do)
You'll depress team morale substantially. Nobody enjoys feeling like they're locked in a cage, and every time they think "This would be finished 5 hours ago if only I could install grep", they'll be unhappy.
A more acceptable answer is to create a blacklist for "problem" software (and websites) such as Pidgin, MSN Messenger, etc., if you have problems with developers slacking off. Some developers will still rail against this, but many will be OK with it, provided you are sensible in what you blacklist and don't go overboard.
I think developers should have total control over the applications they use, as long as they can do their job with them. Developers' productivity is directly related to their working environment; no one likes being restricted, and everyone prefers to use the software they like.
Of course there should be some standards in terms of version control, document formats, etc., but generally developers should have the right to use any programs they want.
And security should be the admins' concern rather than the developer's: company admins should take care of setting up a proper firewall to protect against all kinds of attacks.
A better solution would be to create a secure, independent environment for the developers: an environment that, if compromised, won't put the rest of the company at risk.
The very nature of development is to create crafty, ingenious, pithy solutions. To achieve this, failures must happen.
Whatever they do, don't take away the Internet in general. Google = Coding Help 101 :)
Or maybe just leave www.stackoverflow.com allowed haha.
I'd say this depends on quite a list of factors.
One is team size. If you have a team of half a dozen developers, this can be negotiated whenever a need for some application pops up. If you have a team of 100 developers, some policy is probably in order.
Another factor is what those developers do. If they compile C code using a proprietary compiler for an embedded platform, things are very different from a team producing distributed web or PC software in a constantly shifting environment.
The software you produce and the target customers are important, too. If you're porting the Linux kernel to some new platform, whether code leaks probably doesn't matter all that much. OTOH, there are a lot of cases where this is very different.
There are more factors, but in the end it all boils down to two conflicting goals:
You want to give your developers as much freedom as possible, because that stimulates their creativity.
You want to restrict them as much as possible, as this reduces risks. (I'm talking of security risks as well as the risk to ship non-functioning software etc.)
You'll have to find a middle ground that doesn't hurt creativity while providing enough guarantees not to hurt the company.
Of course! If you want a repeatable build process, you don't want it contaminated by whatever random bit of junk a programmer happens to use as a tool to generate part of the code. Since whatever application you are building lasts much longer than anyone expects, you also want to ensure that the tools you use to build it are available for roughly the same duration; random tools from the internet don't provide any such guarantee.
Your team should say "The following tools are allowed for build steps and nothing else" and attempt to make that list short.
Obviously, it shouldn't matter what a programmer looks at to decide what to do, so the entire Internet is just fine as long as it's look-only. Nor does it matter if he produces code by magic (or a random tool), as long as your team doesn't mind accepting that tool's output as though it were written by hand.
Does anybody have any success stories about having a team working via Remote Desktop?
In many workplaces, we put end users via Citrix and the applications on a central, powerful server. Sometimes the clients are in the same building as the server, but often, they are remote.
There could be some huge benefits for me to put my developers on Windows XP or Vista instances running on a couple servers with Hyper-V.
I'm worried that RDP/RDC via the internet would be too slow for somebody to be able to develop efficiently.
I'm sure I can hear plenty of bad things about it... are there any people out there that have had success?
I have seen a situation where the attempt was made to do this with a satellite office. It was done for a Java development team using various Java IDE tools. The result wasn't regarded as a success, and the company brought the team back into a central London office at considerable expense.
For someone doing this on a day-in, day-out basis with interactive software, the result isn't really very pleasant. For work that mainly uses text-based tools such as vim and unix command-line tools, it works somewhat better. At one point I had XVNC going over a 128 Kbit DSL link (of a type that was prevalent in New Zealand at the time) and could quite readily work on an Oracle-based data warehouse at a remote location. The low level of interactivity required by the tooling made it much less sensitive to the slow link than a Windows-based IDE.
So, I'll invoke the 'it depends' argument with some qualifications:
I would not recommend it for a modern IDE, and certainly not for something heavily graphical like Dreamweaver, BI Development Studio or Informatica.
For a textual environment like traditional unix development tools it could probably be made to work quite well. These user interfaces are much less sensitive to latency than a direct-manipulation user interface.
I'm something of a believer in the 'best tools' principle. Going out of your way to give a second-rate user interface to a development team will give off negative signals. The cost saving from doing this is likely to be minimal and it will annoy some of your team members. Even if it can be made to work reasonably well you are still making a value statement by doing this. Weigh the cost saving against the cost of replacing one or more of your key development staff.
We connect to our development environments using RDP and locally the performance is great. It slows a bit over VPN, but is still acceptably responsive.
Turn off all the windows animation functionality, desktop background, etc. and that will help considerably.
If you're not worried about the latency on audio and fast-moving imagery and you're not developing anything dependent on 3D hardware, you'll likely be fine.
I've never used it in a team environment, but I use my laptop RDP'd into my workstation all day and love it.
I've worked in an environment where we would occasionally edit some existing code via remote desktop. There were no significant challenges to this. As a developer I positively hated doing that work. Everything felt slow and unresponsive. However, we got the work done.
Thankfully these were often short 3-4 hour jobs, mostly fixes to existing systems on remote customer sites. I don't think I could recommend it as a normal way of doing work, but it's certainly possible.
I've used both VNC and RDP over a DSL connection, running through an SSH tunnel, and have had no real issues.
There are definitely some lags, particularly if you're redrawing large parts of a screen. But most development involves small edits, and both of these protocols handle that very well.
I use Remote Desktop to control my Windows machine at work. I use a Parallels VM on a Mac and my connection is 2.5M down, 256k up.
This works really really well. I've been doing this for 2 years for 1-3 days a week. The slow upspeed isn't an issue - I can't type that fast.
I have 3 screens at work but still find a 20" Mac screen to be superior. The colours are much cleaner and I can work longer at the Mac than my work screens!
The thing that is a killer is Flash in a browser. If I accidentally open a browser on my remote machine with Flash, it kills the connection. The solution is to use FlashBlock (a Firefox add-on).
I use Eclipse and Visual Studio with no issues whatsoever.
I've used it to work from home (remote login to my in-office PC via VPN).
The performance depends on your ISPs, of course.
It's slightly less reliable, because as well as having downtime whenever the office LAN is down, there's now the additional risk of downtime while either of the internet connections is down.
I have a remote server on a 1Mbps upstream pipe which I RDP to (over a VPN) and it works just fine. I even use large screen resolutions (1600x1200) with no performance problems. Of course, I'm not sure how such a setup would fare for multiple concurrent users, however.
A benefit of developing over RDP that I hadn't anticipated is that you can save your sessions: after you get done developing for the day, you quit your RDP client and power down your computer, and when you log back in the following day your session is right where you left it.
As an added bonus, RDP clients are available for Linux and OS X.
I use RDP daily for development, I leave my laptop on at home with my work environment open and ready to go. When I get to work and everybody is loading up their projects and opening their programs I just RDP in and I'm ready to go. You have to keep in mind certain keyboard shortcuts that change though (CTRL+ALT+DEL for example), it is annoying at first but you get used to it.
To keep the latency to a minimum, I recommend the following (a sketch of a matching .rdp file appears after the list):
Turn the colors down to 256 (after all, you only need to see text)
Leave the wallpaper at the other computer
Leave sounds at the other computer
Leave any themes at the other computer
Choose a lower connection speed, even if yours is higher; Windows will minimize the data sent
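Most of those knobs can be baked into the .rdp connection file itself, so you don't have to re-tick them every time. As a sketch, here's a small Python helper that writes such a file; the host name is a placeholder, and the option lines use the standard .rdp settings corresponding to the list above (8-bit color, wallpaper/themes/animations off, audio left at the remote machine, modem-speed connection profile):

```python
# Write a low-latency .rdp connection file matching the tips above.
# The host name is a placeholder; the option names are standard
# .rdp file settings.
LOW_LATENCY_RDP = """\
full address:s:workstation.example.com
session bpp:i:8
disable wallpaper:i:1
disable themes:i:1
disable menu anims:i:1
disable full window drag:i:1
audiomode:i:1
connection type:i:1
"""

with open("low-latency.rdp", "w", encoding="ascii") as f:
    f.write(LOW_LATENCY_RDP)
```

Double-clicking the generated file then opens the connection with all of these savings applied.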
One of the advantages you might also consider is processing power. If your machine at home has far better specifications than your workstation on the job, compilation time is improved a fair bit. Since your local machine only needs to update the image from the remote machine, your local computer is not under load.
Using this option also allows me to keep on track. While others log in and browse the internet and waste time, I'm set up and ready to go. Being more productive helps you get paid the big bucks (if your employer notices), while others are still stuck in their junior programming roles.
Pre-2000, I did it every day for 3 years, several hours a day. This was when bandwidth sucked, too.
Nowadays it's much much better.
And if you use NxMachine life gets even better :)
I did not, however, use the machine with multiple users. My concern with that would be that developers are a finicky bunch (myself included) and we tend to push machines really hard as it is.
Can't imagine several folks on one box all deciding to compile :)
G-Man
We do it with Citrix and it is very fast.
I wonder what the reason for this would be. Does the central server(s) have access to some resources that the individual developer machines could not access?
I'm using RDP to connect from my home computer to my work computer from time to time. I have to say it's possible to code, but it's way more comfortable when the IDE is on your own machine. Even on a 100 Mbit LAN there is some noticeable lag. Not enough to hinder work, but annoying nevertheless.
If the people have to work from remote places on a regular basis, I'd rather prefer a setup where the central source control is available through some secure protocol (HTTPS, VPN, etc.), but the development can happen locally on the developer's machines. If using something like SVN, which works well even with offline development, then it should be way more comfortable for the programmers themselves.
What is important for a development workstation is sheer processing power. At our place the developers have the most high-end workstations in terms of cpu, memory, disk, etc and not in terms of audio and graphics. It's the latter that are most affected by RDP.
As long as the server that your developers are RDP-ing into is fast enough to handle multiple simultaneous compiles and builds, you should be fine.
As with all things, the answer to your question is "Your Mileage May Vary", or YMMV. It depends on what the developers are doing. Do they spend most of their time writing code, or do they do a lot of large compiles? Do they need direct hardware access?
Do they need debugging rights? Once you grant them debugging rights, they basically own the machine and can interfere with other users.
It's typically much better to let the users develop on their own computers and use a VPN to allow them to access the version control system. Then they can check out the files to their local computers and do whatever they want, then check in the changes.
But RDP has its advantages too. You really need to weigh the pros and cons and decide which list is longer or more "weighty".
I use NoMachine NX Client to remote desktop onto a headless server that runs FreeNX. It is great because I can login to my session from anywhere and my last session is still there for me. Speed has never been a problem, except when the DSL line is down.
Anyway, my point is that if you are running a Linux server and use 'vi' then there is a nicer alternative than 'screen'.