As a team, should we develop locally and merge into the dev server, or develop on the dev server?

Recently I was tasked with writing up formal procedures for a team-based development environment. We have several projects with multiple modules each. Right now there are only two programmers, but there are plans to expand to 4-6 programmers. Each programmer will be working on the same project, and possibly on the same pages, which could lead to overwriting or conflict issues.
So far the ideal solution I have thought up is:
Local development (WAMP/VM or some virtual server instance on their own machine). Once developers have finished their work, they check it into the CVS repository and merge it with other fixes, etc.
The CVS version is then deployed to the primary dev server for testing by the devs.
The MySQL databases are kept on the primary dev server, and users may connect to it remotely. Any schema or data alterations are run through a DB admin, who will notify all devs of any DB changes (which should be rare).
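For concreteness, a developer's day-to-day cycle might look roughly like this (the CVS server path and module name below are hypothetical):

```bash
# One-time: check out the module from the repository (hypothetical CVSROOT/module)
cvs -d :pserver:dev@cvs.example.com:/cvsroot checkout myproject

# ... develop and test locally on the WAMP/VM instance ...

# Pull in everyone else's changes and resolve any conflicts locally
cvs update -d

# Commit the merged result back to the repository
cvs commit -m "Finish order-export module"
```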
Does anyone see an issue with this or have a better solution?

Looks good. Just wanted to highlight this one very important point:
Make sure you have nightly builds deployed to the dev server. This will help catch problems at a very early stage.
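One minimal way to wire that up, assuming a Unix dev server and an existing one-step build script (both hypothetical here), is a cron job that pulls the latest source and rebuilds every night:

```bash
# /etc/cron.d/nightly-build -- rebuild at 02:00 every night
# (paths, user, and build.sh are placeholders for your own setup)
0 2 * * * builduser  cd /var/builds/myproject && cvs update -d && ./build.sh >> /var/log/nightly-build.log 2>&1
```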
By the way, while we are at it and in case you aren't aware, Joel Spolsky has a very good 12-point test for the quality of a software team.
Excerpt
1. Do you use source control?
2. Can you make a build in one step?
3. Do you make daily builds?
4. Do you have a bug database?
5. Do you fix bugs before writing new code?
6. Do you have an up-to-date schedule?
7. Do you have a spec?
8. Do programmers have quiet working conditions?
9. Do you use the best tools money can buy?
10. Do you have testers?
11. Do new candidates write code during their interview?
12. Do you do hallway usability testing?

The model you describe is the one I've seen and used most often. I think each developer having their own local copy is more efficient and less risky. If all the code only exists on the dev server, an outage of the dev server stops all development. You also get less network traffic with a distributed model.

I think you should use a local copy of your database, not the live version. This will make testing easier, and the developers won't have to worry about another developer editing the data.
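One simple way to do that, assuming MySQL on both ends (the host, user, and database names below are hypothetical), is to snapshot the shared database and load it locally:

```bash
# Dump schema plus data from the shared dev server
mysqldump -h devserver -u devuser -p myapp > myapp_snapshot.sql

# Load the snapshot into the developer's local MySQL instance
mysql -u root -p -e "CREATE DATABASE myapp"
mysql -u root -p myapp < myapp_snapshot.sql
```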

The idea of testing locally, merging changes and then deploying sounds good. The alternative (everyone developing on one server) sounds like a disaster waiting to happen.
You might want to try upgrading from CVS, though. Subversion is similar to CVS but more modern. If you want to completely change your way of thinking, you could try something like Git.
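Both moves can preserve your history. A rough sketch, with hypothetical repository paths (note that git cvsimport also needs the cvsps helper installed):

```bash
# CVS -> Subversion, keeping history, using the cvs2svn tool
cvs2svn -s /var/svn/myproject /var/cvs/myproject

# CVS -> Git, keeping history, using git cvsimport
git cvsimport -d :pserver:dev@cvs.example.com:/cvsroot -C myproject-git myproject
```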

Related

Development / Release Challenges

Background
1 x Dev SQL Server
1 x UAT SQL Server
1 x Prod SQL Server
Developers use SSMS to view SQL Server objects and code and make changes directly to these objects in SQL Server itself.
Challenge
We have multiple developers potentially making changes to the same database object (say, a stored procedure or a view). The challenge arises when different pieces of work happen on the same object but the delivery timescales for each piece are different. We end up with someone having completed their changes on the dev object, but releasing those changes into the next environment along may fail, because the view (for example) may contain another developer's changes too, and those changes may themselves require other objects. The business may not be expecting that other developer's work to be released anyway, as there may be days or weeks of effort still to put in before release. But that doesn't help the developer who's ready to move into the next environment.
How do we get round that?
How should each developer have started off, before they started making changes, to avoid dependency issues when releasing?
How can a developer “jump the queue” and release their bits of work, without scuppering anyone else's in-progress changes?
This is not a perfect answer, nor is it the only potential answer, but it's a good start. It's based on my experience within a relatively small shop where tasks are re-prioritised frequently and changes are often required after testing.
Firstly, it's about process. You need to make sure you have a decent process and that people follow it. Software can help, but it won't stop people making process errors. There are a lot of products out there to help with this, but I find making small steps is often a good start.
In our shop, we use Git for managing code and releases. The repository holds scripts for the entire database structure and views, etc., and those scripts are used to manage any changes to the database.
In general, we have a 'release' branch, then 'feature' branches for updates we're working on, and 'hotfix' branches for when we make changes to live on the fly (e.g., urgent fixes).
When working on a specific branch, you check out that branch and work on it. Any change to the database has to go into an appropriate branch.
When ready to go live, you merge the feature/hotfix branch into the release branch. This way the 'release' branch always exactly matches what is on the production database.
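As a rough sketch of that flow in plain git commands (the file name is hypothetical; the 'release'/'feature' branch naming follows the description above):

```bash
# Start a piece of work on its own feature branch, cut from release
git checkout -b feature/customer-report release

# ... edit the scripted database objects and commit on the branch ...
git add views/customer_report.sql
git commit -m "Add customer report view"

# When (and only when) this work actually ships, fold it into release
git checkout release
git merge --no-ff feature/customer-report
```

The `--no-ff` merge keeps each released feature visible as a unit in the history, which helps when you need to trace what went out in a given release.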
For software, we use Redgate Source Control integrated with SSMS, but there are definitely others available (e.g., ApexSQL Source Control). You can also do it manually, but I wouldn't suggest it.
You don't have to, but you can also use a git GUI (e.g., SourceTree) to manage your branching and merging etc.
There are additional software products that can help to manage releases/etc (including scripting etc) but the source control aspect should be the biggest help with the main issue (being able to work on different things and helping ensure no clashes).
Regarding Git and how to use it (or SVN, etc.): if you haven't used them before, they're a bit weird and take some getting used to. We had a few restarts with a few different processes before we came up with an approach we liked. It will also take some time to run into the different issues that can arise, so you can't expect this to just work out of the box.
1. Source control
Any source control system (Git, TFS, etc.) to manage your code and control changes.
2. Branching/release strategy
Git Flow! For example: a main branch with the current working source code (main, develop, whatever you call it); each developer works on their own feature branch; when their work is done, they test it by deploying to the DEV environment and running tests. After that, it can be merged into a release branch that will go live on PROD.
You also need to consider a merge vs. rebase strategy (some link).
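The difference in a nutshell, as a sketch with hypothetical branch names:

```bash
# Merge: keeps the feature branch's history and adds a merge commit
git checkout develop
git merge feature/my-task

# Rebase: replays the feature commits on top of develop for a linear history
git checkout feature/my-task
git rebase develop
```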
3. And some SCRUM
The most basic version: two-week sprints. At the end of a sprint, you create a new release branch and deploy it to UAT for testing. During the next sprint, that release is tested on UAT while the developers work on tasks from the new sprint. Then the tested release is deployed to PROD while the developers are on their third sprint, and UAT is ready for the next release to be deployed. And so on.
4. More than one DEV environment
Depending on the number of developers, you may need more than one DEV environment.

How should release management be structured for an agile Professional Services department?

Background
Professional Services departments provide add-on services to customers of a product.
A lot of these projects are small (4-10 hours) and need to be turned around quickly. Additionally, these are important projects as they are enhancements that customers rely on for their business.
Some challenges are:
There is a good amount of rework or feature changes, as customers often change their minds or make tiny additional requests. Aside from the obvious point that this is a management issue (managing scope creep, etc.), the fact remains that there are often minor tweaks that need to be implemented after the project is "live".
Sometimes something breaks for whatever reason, and the issue needs to be handled quickly. Again, these are in-production processes that customers rely on.
Currently, our release management is very ad hoc:
1. Engineers manage the projects from soup to nuts, including scoping, customer relationships, code development, production deployment, and project support (for any subsequent issues).
2. We have dev servers and we have production servers. The servers exist on-site in a server farm. They are not backed up ever, and they have no redundancy because they are not in the colo; they kind of get second-class service from operations.
3. The engineers have full root (Linux) / admin (Windows) access to the dev and prod servers. They develop on the dev servers and, when the project is ready, deploy to prod (basically, just copy the files up). When issues come up, they just work directly on the servers.
4. We use svn for source control, but it's basically just check out to dev, work on the project, check in as necessary, and deploy to prod by copying files up to the server.
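For illustration only, the ad hoc flow in item 4 amounts to something like this (hypothetical paths and URLs; note there is no tag or rollback point):

```bash
# On the dev server: work directly in a checkout and commit as you go
svn checkout https://svn.example.com/repos/client-project /var/www/dev/client-project
svn commit -m "Client tweak" /var/www/dev/client-project

# "Deploy" by copying files straight up to prod
scp -r /var/www/dev/client-project/* prod:/var/www/client-project/
```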
The problem:
The problem is basically number 2 above. The servers are not treated with the same reverence by operations that our product servers (in the colo) are treated. We need the servers to be first class citizens for operations. However their proposal is to put them in the colo, which makes them untouchable. If we do that, we will need to go through operations to get projects deployed. Basically it will be the same arduous and painful process that the product engineers go through when releasing an update to our software product.
This would take away all our agility in responding to these tiny projects and the issues that arise that need immediate attention.
The question
How should we solve this problem?
Should we put the servers in colo and just work with the formal release process?
How should this situation be handled?
Any help making this question better is welcome!
The servers exist on-site in a server farm. They are not backed up ever, and they have no redundancy because they are not in the colo; they kind of get second-class service from operations.
So you want these servers to be self-serviceable by your PS engineers, yet to have good redundancy, backups, etc. without having to go through formal ops processes. Can't you move them from the on-site server farm to the cloud (EC2 or other)? By the way, #3 and #4 are accidents waiting to happen, but that is not material to the main question here.
This is an old question, but it sounds very similar to our company, in that the production team requires a lot of small changes.
I'm having a hard time understanding the question but I'll attempt an answer.
You should not place development servers in the colo, because it will slow down your development process. If operations is not able to give you the support you need in development, could you designate a developer, or bring on someone who can support your team's needs when it comes to server management and requirements? Ideally a build engineer, release manager, or even, say, a QA resource. Unfortunately, it sounds like a political management issue. In that case you need to clearly lay out your issues and address them with management. If I completely missed the mark, let me know.

Procedure to keep texts in sync between production and development

We have a production database with texts that show on our web site.
We also have development servers with multiple branches (several copies of the production database).
The problem we have is that during development we add and change texts in each branch. And we also change texts in our production environment.
If development and production have changed the same text, it's hard to work out how to merge these changes.
We were thinking that we could only change existing texts on the production database and only add new texts on the development databases. But that would leave us with many, many texts with different keys that have pretty much the same data.
How do you handle text changes between environments?
Thanks!
This is a fairly common problem; Martin Fowler wrote about it a while ago (http://martinfowler.com/articles/evodb.html).
There's no nice, simple, painless solution, but http://www.amazon.com/Recipes-Continuous-Database-Integration-ebook/dp/B000RH0EI4 is probably the best book on the topic.
It's a fairly major undertaking, and requires a lot of discipline from your development team - but it's worth it if you're running into the problems you describe.
It boils down to scripting your database creation/modification tasks, and committing those scripts to source code control. You use a naming convention to determine the order in which to run the scripts, and then have an automated process to run them when setting up an environment, or deploying a new version to that environment.
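A minimal sketch of such a runner, assuming numbered SQL scripts kept in source control and a MySQL target (all names below are hypothetical; a real runner would also record which scripts have already been applied):

```bash
# migrations/ is named so that lexical order == execution order:
#   001_create_texts_table.sql
#   002_add_locale_column.sql
for script in migrations/*.sql; do
    echo "Applying $script"
    mysql -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASS" myapp < "$script"
done
```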

How do you simulate frequent database activity for realistic testing?

I'm building a web app against a database where a small number of records (about 5000) are active at the same time. Each active working record probably experiences 50-300 changes by 30 users over a 4-hour period ... which is thousands of changes per minute.
Because our testing environment is so static, testing is not realistic, and some issues do not arise until we hit the production database.
I had the idea to run Profiler, collect the DML statements, then replay them on the test server while debugging the app, assuming I can replay them in the same time intervals as the original run. But even this wouldn't be a valid test, since the tester's changes could corrupt future DML statements being replayed.
Does anybody know how to simulate real time database changes for realistic testing?
Thanks.
BTW-Our problems are not concurrency issues.
Maybe this Selenium-based service is what you need: browsermob.
A few people have recommended it.
And yes, this is not an ad :)
There are a few commercial packages that do this. I work for Quest Software, makers of one of the packages, but I'm going to list three, because I've worked with all of 'em before I came to work for Quest:
Microsoft Visual Studio Test Edition - it has load testing tools added on. It lets you design tests inside Visual Studio like simulating browsers hitting your web app. Recording the initial macros is kind of a pain, but when you've done it, it's easy to replay. It also has agents that you can deploy across multiple desktops to drive more load. For example, we installed it on several developers' desktops, and when we needed to do load testing after hours, we could throw a ton of computing power at the servers. The downside is that the setup and ongoing maintenance is kinda painful.
HP Quality Center (used to be Mercury Test Director and some other software) - also has load testing tools, but it's designed from the ground up for testers. If your testers don't have Visual Studio experience, this is an easier choice.
Quest Benchmark Factory - this tool focuses exclusively on the database server, not the web and app servers. It captures load on your production server and then can replay it on your dev/test servers, or it can generate synthetic transactions too. There's a freeware version you can use to get started.
If you know and love Visual Studio, and if you want to test your web servers and app servers, then go with Visual Studio Test Edition. If you just want to focus on the database, then go with Benchmark Factory.
Perhaps use something along the lines of a database stress-testing tool like the mysqlslap load-emulator. Here's a link explaining use-cases and specific examples.
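For example, a hedged mysqlslap invocation that replays a file of captured statements under concurrency (the host, schema, and file names are hypothetical):

```bash
# Replay captured DML with 30 concurrent clients, repeated 10 times
mysqlslap -h devserver -u devuser -p \
  --create-schema=myapp_test \
  --query=captured_dml.sql \
  --delimiter=";" \
  --concurrency=30 \
  --iterations=10
```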

Developing via Remote Desktop [closed]

Does anybody have any success stories about a team working via Remote Desktop?
In many workplaces, we put end users on Citrix, with the applications on a central, powerful server. Sometimes the clients are in the same building as the server, but often they are remote.
There could be some huge benefits for me in putting my developers on Windows XP or Vista instances running on a couple of servers with Hyper-V.
I'm worried that RDP/RDC over the internet would be too slow for somebody to be able to develop efficiently.
I'm sure I can hear plenty of bad things about it... are there any people out there that have had success?
I have seen a situation where the attempt was made to do this with a satellite office. It was done for a Java development team using various Java IDE tools. The result wasn't regarded as a success, and the company brought the team back into a central London office at considerable expense.
For someone doing this on a day-in, day-out basis with interactive software, the result isn't really very pleasant. For work that mainly uses text-based tools such as vim and Unix command-line tools, it works somewhat better. At one point I had XVNC going over a 128 Kbit DSL link (of a type that was prevalent in New Zealand at the time) and could work on an Oracle-based data warehouse at a remote location quite readily. The lower level of interactivity required by the tooling made it much less sensitive to the slow link than a Windows-based IDE.
So, I'll invoke the 'it depends' argument with some qualifications:
I would not recommend it for a modern IDE, and certainly not for something heavily graphical like Dreamweaver, BI Development Studio or Informatica.
For a textual environment like traditional Unix development tools, it could probably be made to work quite well. These user interfaces are much less sensitive to latency than a direct-manipulation user interface.
I'm something of a believer in the 'best tools' principle. Going out of your way to give a second-rate user interface to a development team will give off negative signals. The cost saving from doing this is likely to be minimal and it will annoy some of your team members. Even if it can be made to work reasonably well you are still making a value statement by doing this. Weigh the cost saving against the cost of replacing one or more of your key development staff.
We connect to our development environments using RDP, and locally the performance is great. It slows a bit over VPN, but is still acceptably responsive.
Turn off all the Windows animation functionality, desktop background, etc., and that will help considerably.
If you're not worried about the latency on audio and fast-moving imagery and you're not developing anything dependent on 3D hardware, you'll likely be fine.
I've never used it in a team environment, but I use my laptop RDP'd into my workstation all day and love it.
I've worked in an environment where we would occasionally edit some existing code via remote desktop. There were no significant challenges to this. As a developer I positively hated doing that work. Everything felt slow and unresponsive. However, we got the work done.
Thankfully these were often short 3-4 hour jobs... mostly fixes to existing systems on remote customer sites. I don't think I could recommend it as a normal way of working, but it's certainly possible.
I've used both VNC and RDP over a DSL connection, running through an SSH tunnel, and have had no real issues.
There are definitely some lags, particularly if you're redrawing large parts of a screen. But most development involves small edits, and both of these protocols handle that very well.
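For reference, the tunnel itself is one command; a sketch assuming an SSH-reachable gateway (hostnames and ports below are hypothetical):

```bash
# Forward local port 3390 to the workstation's RDP port via the gateway
ssh -N -L 3390:workstation.internal:3389 user@gateway.example.com
# Then point the RDP client at localhost:3390
# (for VNC, forward to port 5901 or wherever the display listens instead)
```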
I use Remote Desktop to control my Windows machine at work. I use a Parallels VM on a Mac and my connection is 2.5M down, 256k up.
This works really really well. I've been doing this for 2 years for 1-3 days a week. The slow upspeed isn't an issue - I can't type that fast.
I have 3 screens at work, but I still find the 20" Mac screen to be superior. The colours are much cleaner, and I can work longer at the Mac than at my work screens!
The thing that is a killer is Flash in a browser. If I accidentally open a browser with Flash on my remote machine, it kills the connection. The solution is to use FlashBlock (a Firefox add-on).
I use Eclipse and Visual Studio with no issues whatsoever.
I've used it to work from home (remote login to my in-office PC via VPN).
The performance depends on your ISPs, of course.
It's slightly less reliable: as well as having downtime whenever the office LAN is down, there's now the additional risk of downtime while either of the internet connections is down.
I have a remote server on a 1Mbps upstream pipe which I RDP to (over a VPN) and it works just fine. I even use large screen resolutions (1600x1200) with no performance problems. Of course, I'm not sure how such a setup would fare for multiple concurrent users, however.
A benefit of developing over RDP that I hadn't anticipated is that you can save your sessions--so after you get done developing for the day, you quit your RDP client and power down your computer, and when you log back in the following day your session is right where you left it.
As an added bonus, RDP clients are available for Linux and OS X.
I use RDP daily for development. I leave my laptop on at home with my work environment open and ready to go. When I get to work, while everybody else is loading up their projects and opening their programs, I just RDP in and I'm ready to go. You have to keep in mind that certain keyboard shortcuts change, though (CTRL+ALT+DEL, for example); it's annoying at first, but you get used to it.
To keep the latency to a minimum, I recommend the following (a client-side sketch follows the list):
Turn the colors down to 256 (after all, you only need to see text).
Leave the wallpaper on the other computer.
Leave sounds on the other computer.
Leave any themes on the other computer.
Choose a lower connection speed, even if yours is higher. Windows will minimize the data sent.
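As a sketch of those settings with the cross-platform FreeRDP client (the flags below follow FreeRDP's newer syntax; your client and version may differ):

```bash
# Low-bandwidth RDP session: 8-bit color, no wallpaper/themes, modem profile
xfreerdp /v:workstation.example.com /u:developer \
  /bpp:8 /network:modem -wallpaper -themes
```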
One of the advantages you might also consider is processing power. If your machine at home has far better specifications than your workstation on the job, compilation time is improved a fair bit. Since your local machine only needs to update the image from the remote machine, your local computer is not under load.
Using this option also allows me to keep on track. While others log in and browse the internet and waste time, I'm set up and ready to go. Being more productive helps you get paid the big bucks (if your employer notices), while others are still stuck in their junior programming roles.
Pre-2000 I did it for 3 years every day several hours a day. This was when bandwidth sucked too.
Nowadays it's much much better.
And if you use NoMachine NX, life gets even better :)
I did not, however, use the machine with multiple users. My concern with that would be that developers are a finicky bunch (myself included) and we tend to push machines really hard as it is.
Can't imagine several folks on one box all deciding to compile :)
G-Man
We do it with Citrix and it is very fast.
I wonder what the reason for this would be. Does the central server have access to some resources that the individual developer machines could not access?
I'm using RDP to connect from my home computer to my work computer from time to time. I have to say: it's possible to code, but it's far more comfortable when the IDE is on your own machine. Even on a 100 MBit LAN there is some noticeable lag. Not enough to hinder work, but annoying nevertheless.
If people have to work from remote places on a regular basis, I'd prefer a setup where the central source control is available through some secure protocol (HTTPS, VPN, etc.), but development happens locally on the developers' machines. Using something like SVN, which works well even with offline development, should be much more comfortable for the programmers themselves.
What is important for a development workstation is sheer processing power. At our place, the developers have the most high-end workstations in terms of CPU, memory, disk, etc., and not in terms of audio and graphics. It's the latter that are most affected by RDP.
As long as the server your developers are RDP-ing into is fast enough to handle multiple compiles and builds at the same time, you should be fine.
As with all things, the answer to your question is "Your Mileage May Vary," or YMMV. It depends on what the developers are doing. Do they spend most of their time writing code, or do they do a lot of large compiles? Do they need direct hardware access?
Do they need debugging rights? Once you grant them debugging rights, they basically own the machine and can interfere with other users.
It's typically much better to let the users develop on their own computers and use a VPN to allow them to access the version control system. Then they can check out the files to their local computers, do whatever they want, and check in the changes.
But RDP has its advantages too. You really need to weigh the pros and cons and decide which list is longer or more "weighty".
I use the NoMachine NX Client to remote desktop onto a headless server running FreeNX. It is great because I can log in to my session from anywhere and my last session is still there for me. Speed has never been a problem, except when the DSL line is down.
Anyway, my point is that if you are running a Linux server and use 'vi' then there is a nicer alternative than 'screen'.