How do you simulate frequent database activity for realistic testing? - sql-server-2000

I'm building a web app against a database where a small number of records (about 5,000) are active at the same time. Each active record probably sees 50-300 changes by 30 users over a 4-hour period, which works out to thousands of changes per minute.
Because our testing environment is so static, testing is not realistic, and some issues do not arise until we hit the production database.
I had the idea of running Profiler, collecting the DML statements, and then replaying them on the test server while debugging the app, assuming I can replay them at the same intervals as the original run. But even that wouldn't be a fully valid test, since a tester's changes could invalidate later DML statements in the replay.
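Roughly what I have in mind (just a sketch; it assumes the trace has been exported to a CSV of offset/statement pairs and that something like pyodbc can reach the test server - the file name and connection string are placeholders):

```python
import csv
import time

import pyodbc  # assumed driver; any DB-API connection would work


def replay_trace(trace_csv, conn_str):
    """Replay captured DML against a test server, preserving the original
    gaps between statements. Assumes the Profiler trace was exported to a
    CSV of (seconds-from-start, statement-text) rows."""
    conn = pyodbc.connect(conn_str)
    cursor = conn.cursor()
    started = time.time()
    with open(trace_csv, newline="") as f:
        for offset, statement in csv.reader(f):
            # Wait until this statement's original offset from the trace start.
            delay = float(offset) - (time.time() - started)
            if delay > 0:
                time.sleep(delay)
            cursor.execute(statement)
            conn.commit()


# Hypothetical usage:
# replay_trace("captured_dml.csv", "DSN=TestServer;UID=qa;PWD=secret")
```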
Does anybody know how to simulate real time database changes for realistic testing?
Thanks.
BTW, our problems are not concurrency issues.

Maybe this Selenium-based service is what you need: browsermob.
A few people have recommended it.
And no, this is not an ad :)

There are a few commercial packages that do this. I work for Quest Software, maker of one of the packages, but I'm going to list three because I've worked with all of them before I came to work for Quest:
Microsoft Visual Studio Test Edition - it has load-testing tools added on. It lets you design tests inside Visual Studio, like simulating browsers hitting your web app. Recording the initial macros is kind of a pain, but once you've done it, it's easy to replay. It also has agents that you can deploy across multiple desktops to drive more load. For example, we installed it on several developers' desktops, and when we needed to do load testing after hours, we could throw a ton of computing power at the servers. The downside is that the setup and ongoing maintenance are somewhat painful.
HP Quality Center (used to be Mercury Test Director and some other software) - also has load testing tools, but it's designed from the ground up for testers. If your testers don't have Visual Studio experience, this is an easier choice.
Quest Benchmark Factory - this tool focuses exclusively on the database server, not the web and app servers. It captures load on your production server and then can replay it on your dev/test servers, or it can generate synthetic transactions too. There's a freeware version you can use to get started.
If you know and love Visual Studio, and if you want to test your web servers and app servers, then go with Visual Studio Test Edition. If you just want to focus on the database, then go with Benchmark Factory.

Perhaps use something along the lines of a database stress-testing tool like the mysqlslap load-emulator. Here's a link explaining use-cases and specific examples.
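As a rough illustration of what a run might look like (the host, user, schema and query file below are placeholders; the options shown are mysqlslap's standard concurrency/iterations/query flags):

```python
import subprocess

# Sketch: emulate ~30 concurrent clients replaying a file of captured DML
# against a throwaway schema on a test server. Host, user, schema and file
# name are placeholders.
subprocess.run(
    [
        "mysqlslap",
        "--host=test-db",
        "--user=loadtest",
        "--password",                 # prompts for the password in the terminal
        "--concurrency=30",           # roughly one client per active user
        "--iterations=5",             # repeat the run to smooth out noise
        "--create-schema=app_load_test",
        "--delimiter=;",
        "--query=captured_dml.sql",   # semicolon-delimited statements
    ],
    check=True,
)
```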


How should release management be structured for an agile Professional Services department?

Background
Professional Services departments provide add-on services to customers of a product.
A lot of these projects are small (4-10 hours) and need to be turned around quickly. Additionally, these are important projects as they are enhancements that customers rely on for their business.
Some challenges are:
There is a good amount of rework or feature changes, as customers often change their minds or make tiny additional requests. Aside from the obvious point that this is a management issue (managing scope creep, etc.), the fact remains that there are often minor tweaks that need to be implemented after the project is "live".
Sometimes something breaks for whatever reason and issues need to be handled quickly. Again, these are in-production processes that customers rely on.
Currently, our release management is very ad hoc:
1. Engineers manage the projects from soup to nuts, including scoping, customer relationships, code development, production deployment, and project support (for any subsequent issues).
2. We have dev servers and we have production servers. The servers exist on-site in a server farm. They are never backed up, and they have no redundancy because they are not in the colo - they kind of get second-class service from operations.
3. The engineers have full root (Linux)/admin (Windows) access to the dev and prod servers. They develop on the dev servers, and when the project is ready, deploy to prod (basically, just copy the files up). When issues come up, they just work directly on the servers.
4. We use svn for source control, but it's basically just check out to dev, work on the project, check in as necessary, and deploy to prod by copying files up to the server.
The problem:
The problem is basically number 2 above. The servers are not treated with the same reverence by operations as our product servers (in the colo) are. We need these servers to be first-class citizens for operations. However, their proposal is to put them in the colo, which makes them untouchable. If we do that, we will need to go through operations to get projects deployed. Basically, it will be the same arduous and painful process that the product engineers go through when releasing an update to our software product.
This would take away all our agility in responding to these tiny projects and the issues that arise that need immediate attention.
The question
How should we solve this problem?
Should we put the servers in colo and just work with the formal release process?
How should this situation be handled?
Any help making this question better is welcome!
The servers exist on-site in a server farm. They are never backed up, and they have no redundancy because they are not in the colo - they kind of get second-class service from operations.
So you want these servers to be self-serviceable by your PS engineers, yet have good redundancy, backups, etc., without having to go through formal ops processes. Can't you move them from the on-site server farm to the cloud (EC2 or other)? BTW, #3 and #4 are accidents waiting to happen, but that is not material to the main question here.
This is an old question, but it sounds very similar to our company in that the production team requires a lot of small changes.
I'm having a hard time understanding the question but I'll attempt an answer.
You should not place development servers in the colo because it will slow down your development process. If operations is not able to give you the support you need in development, could you designate a developer, or bring on someone who can support your team's needs when it comes to server management/requirements? Ideally a build engineer, release manager, or even a QA resource. Unfortunately, it sounds like a political management issue. In that case you need to clearly lay out your issues and address them with management. If I completely missed the mark, let me know.

A free test management tool (not web-based, but a downloadable tool for Windows 7)

Is there any free, downloadable test management tool for Windows 7?
I do not want to use any web-based tool wherein I have to sign up.
All of the test management tools and bug trackers I've used (SpiraTest, Quality Center, Jira) have been web-based. My personal opinion on test management tools, for a lone/single tester, is that they are often too complex and too restrictive to be useful. Sure, you can link requirements to tests, but you have to input everything into a requirements section, then link it to each test set. It seems much easier to just create a matrix or two in Excel and track your coverage.
All test management tools also seem to think it's possible to record your tests in a simple step-by-step manner with expected and actual results. I personally find this type of system too restrictive. A good test design may not lend itself to simple step-by-step instructions.
Having a good bug tracker that you and your development team can use is an excellent idea. There are lots of free/cheap options; however, they are all also web-based. Bugzilla is free and takes a bit of time to set up, though you can probably find a free VM somewhere which will get you up and running quickly (Google around). Trac is a free wiki of sorts with some bug reporting abilities. SpiraTest is cheap - like $50 for a single user - and has test management and bug tracking capabilities, although if you have a few developers you'll want a larger license that may cost upwards of $200-300.
I personally like Jira and Atlassian's software. I'm playing around with Jira, Confluence and Bonfire for my testing. I can create wikis, etc. for my test ideas in Confluence, report bugs in Jira and use Bonfire for exploratory testing. If you want to host the software yourself (install it on VMs on your machine), Atlassian has a really good deal called Starter for just $10 each.
My experience has taught me that a web-based bug tracking tool is a must-have, but test management tools tend to be a waste. However, I encourage you to look around and explore. Most, if not all, of these tools have trial programs. You'll have to sign up/register, but that's the cost of doing business. Wikipedia has a list of test management tools, so have at it.

As a team should we develop locally and merge into the dev server, or develop on the dev server?

Recently I was tasked with writing up formal procedures for a team-based development environment. We have several projects with multiple modules each. Right now there are only two programmers; however, there are plans to expand to 4-6 programmers. Each programmer will be working on the same project, and possibly the same pages, which may cause overwriting or conflict issues.
So far the ideal solution I have thought up is:
Local development (WAMP/VM or some virtual server instance on their own machine). Once a developer has finished their work, they check it into the CVS repository and merge it with other fixes, etc.
The CVS version is then deployed to the primary dev server for testing by the devs (a rough sketch of one way to script that step follows below).
The MySQL databases are kept on the primary dev server, and users may connect to them remotely. Any schema/data alterations are run through a DB admin, who will notify all devs of any DB changes (which should be rare).
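To make the deploy step concrete, here is one possible sketch (the module name, tag, target path, and the choice of cvs export plus rsync are all assumptions, not a prescription):

```python
import subprocess

# Sketch of a deploy step: export a tagged build from CVS and push it to the
# primary dev server. Module, tag, and destination are placeholders.
MODULE = "webapp"
TAG = "DEV_CANDIDATE"
DEV_SERVER = "dev.example.internal:/var/www/webapp"

# Export a clean copy (no CVS metadata) of the tagged revision into ./build.
subprocess.run(["cvs", "export", "-r", TAG, "-d", "build", MODULE], check=True)

# Copy the exported tree to the dev server; --delete keeps the target in sync.
subprocess.run(["rsync", "-av", "--delete", "build/", DEV_SERVER], check=True)
```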
Does anyone see an issue with this or have a better solution?
Looks good. Just wanted to highlight this one very important point:
Make sure you have nightly builds onto the dev server. This will help catch problems at a very early stage.
By the way, while we are at it and in case you aren't aware, Joel has a very good 12-point test for the quality of a software team.
Excerpt
1. Do you use source control?
2. Can you make a build in one step?
3. Do you make daily builds?
4. Do you have a bug database?
5. Do you fix bugs before writing new code?
6. Do you have an up-to-date schedule?
7. Do you have a spec?
8. Do programmers have quiet working conditions?
9. Do you use the best tools money can buy?
10. Do you have testers?
11. Do new candidates write code during their interview?
12. Do you do hallway usability testing?
The model you describe is the one I've seen and used most often. I think each developer having their own local copy is more efficient and less risky. If all the code only exists on the dev server, an outage of the dev server stops all development. You also get less network traffic with a distributed model.
I think you should use a local version of your database, not a live version. This will make testing easier, and the developers won't have to worry that another developer is editing the data.
The idea of testing locally, merging changes and then deploying sounds good. The alternative (everyone developing on one server) sounds like a disaster waiting to happen.
You might want to try upgrading from CVS, though. Subversion is similar to CVS but more modern. If you want to completely change your way of thinking, you could try something like Git.

How are integration tests performed at your company/job/project?

I want to improve integration testing methods where I work, and I would like to know how this process works in other places.
Things like:
- When test plan writing begins
- The proportion of testers to developers and the amount of stuff (entire applications or modifications) to be tested
- What kinds of methods are used for integration testing.
Currently, I test web apps, and test plans are managed with TestLink. Bugs found are reported in Bugzilla. I am trying to automate tests with Selenium RC, but it takes some time to write the plans and the code to run in Selenium, and time is something that I don't have, because I am testing 3 or more applications.
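For reference, a single automated check in Selenium RC ends up looking roughly like this (a rough sketch; the URL and locators are made up), and writing dozens of these is where the time goes:

```python
from selenium import selenium  # Selenium RC Python client

# Sketch of one automated check; the base URL and locators are placeholders.
sel = selenium("localhost", 4444, "*firefox", "http://test.example.local/")
sel.start()
try:
    sel.open("/login")
    sel.type("id=username", "qa_user")
    sel.type("id=password", "secret")
    sel.click("id=submit")
    sel.wait_for_page_to_load("30000")
    # Very coarse assertion that the login flow worked.
    assert sel.is_text_present("Welcome"), "login did not reach the welcome page"
finally:
    sel.stop()
```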
Most of my problems are caused by differences between the test environment and the production environment. But tests also take too long to begin: if someone finishes a modification today, it will take about 3 weeks for me to begin testing it, and the test queue keeps growing.
It would be really good if anyone could suggest something that would improve the testing process (like more people testing, etc.). But mostly, I would like to hear how the testing process works in other places.
Thanks.
For us, integration testing is generally performed by the developer before a commit - just a simple surface test to see that nothing obvious is broken.
Then we deploy the code from trunk on a development server connected to a test database that is a complete copy of the production database and have the users responsible for the new functionality do acceptance test and further integration tests on that server.
We have a concept of "super user" to organize this. Super users are responsible for educating other users in their area of expertise and answering helpdesk questions related to the usage of the system. The super users are also the people who are involved in feature requests and requirement discussions for all features related to their work.
So when a new feature is developed, the super user is the one who first validates the design suggestion and then performs the final stages of testing before deployment.
This setup is good because it ensures that domain experts are the ones who validate the system functionality and removes some responsibilities from the IT-department.
The bad thing is that they are not usually very technical or good testers. As users, they tend to see the system for what it is rather than what it could be. The fact that they also have their ordinary functions in the organization as full-time employees means that they are a very limited resource in terms of testing.
I'll assume you mean integration testing as in checking to see if the parts of the application work together (for example, getting the database and the website to work together after the DBA and web developer respectively say they're done), and I'll use an example from my current project.
I code-generate several configuration files so I can observe the application with certain modules on or off, namely error reporting, authentication, debug-mode compilation, and with/without SSL. Development environments are likely to have "friendly error pages" turned off, no authentication, no SSL, etc.
I also use a build script to create a copy of the application for each variant of the config file.
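A rough sketch of what that generation step can look like (the variant names, config keys, and directory layout here are placeholders, not the actual build script):

```python
import json
import shutil
from pathlib import Path

# Sketch: generate one config file per variant and one copy of the app per
# variant. Assumes the application source lives in ./app; names are placeholders.
BASE = {"error_reporting": True, "authentication": True, "debug": False, "ssl": True}
VARIANTS = {
    "dev":     {"error_reporting": False, "authentication": False, "ssl": False, "debug": True},
    "staging": {"debug": True},
    "prod":    {},
}

for name, overrides in VARIANTS.items():
    target = Path("builds") / name
    # Copy the application tree for this variant.
    shutil.copytree("app", target, dirs_exist_ok=True)
    # Write the variant-specific config into the copy.
    config = {**BASE, **overrides}
    (target / "config.json").write_text(json.dumps(config, indent=2))
```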
It is helpful to pedantically reproduce the characteristics of production in staging and development as much as you can; use virtual machines if you lack the hardware.
I also wrote into the production code base a few pages that test the sorts of things that break when code moves from one machine to another (does the DB connection work, do emails send, is the temp folder writable) and made that page the home page for the server operator.
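In the same spirit, a stand-alone sketch of those checks (hostnames and ports are placeholders; in practice these live as pages inside the web app rather than as a separate script):

```python
import smtplib
import socket
import tempfile


def environment_checks(db_host="db.example.local", smtp_host="mail.example.local"):
    """Smoke checks for things that typically break when code moves between
    machines: database reachability, mail relay, and temp-folder writes."""
    results = {}

    # Can we open a TCP connection to the database port? (host/port assumed)
    try:
        socket.create_connection((db_host, 3306), timeout=5).close()
        results["database"] = "ok"
    except OSError as exc:
        results["database"] = f"failed: {exc}"

    # Does the mail relay answer on port 25?
    try:
        smtplib.SMTP(smtp_host, 25, timeout=5).quit()
        results["smtp"] = "ok"
    except OSError as exc:
        results["smtp"] = f"failed: {exc}"

    # Is the temp folder writable?
    try:
        with tempfile.NamedTemporaryFile(dir=tempfile.gettempdir()) as f:
            f.write(b"ok")
        results["temp_dir"] = "ok"
    except OSError as exc:
        results["temp_dir"] = f"failed: {exc}"

    return results
```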
The key is automating as much as you can. Frequent integration testing catches issues earlier.
From check-in to packaging code for deployment, it takes me 8 minutes of automated work and half an hour of manual clicking for smoke tests.

Developing via Remote Desktop [closed]

Has anybody any successful remarks about having a team working via Remote Desktop?
In many workplaces, we put end users on Citrix, with the applications on a central, powerful server. Sometimes the clients are in the same building as the server, but often they are remote.
There could be some huge benefits for me in putting my developers on Windows XP or Vista instances running on a couple of servers with Hyper-V.
I'm worried that RDP/RDC via the internet would be too slow for somebody to be able to develop efficiently.
I'm sure I can hear plenty of bad things about it... are there any people out there that have had success?
I have seen a situation where the attempt was made to do this with a satellite office. It was done for a Java development team using various Java IDE tools. The result wasn't regarded as a success, and the company brought the team back into a central London office at considerable expense.
For someone doing this on a day-in, day-out basis with interactive software, the result isn't really very pleasant. For something that mainly uses text-based tools such as vim and Unix command-line tools, it works somewhat better. At one point I had XVNC going over a 128 Kbit DSL link (of a type that was prevalent in New Zealand at the time) and could work on an Oracle-based data warehouse at a remote location quite readily. The low level of interactivity required by the tooling made it much less sensitive to the slow link than a Windows-based IDE would be.
So, I'll invoke the 'it depends' argument with some qualifications:
I would not recommend it for a modern IDE, and certainly not for something heavily graphical like Dreamweaver, BI Development Studio or Informatica.
For a textual environment like traditional Unix development tools, it could probably be made to work quite well. These user interfaces are much less sensitive to latency than a direct-manipulation user interface.
I'm something of a believer in the 'best tools' principle. Going out of your way to give a second-rate user interface to a development team will give off negative signals. The cost saving from doing this is likely to be minimal and it will annoy some of your team members. Even if it can be made to work reasonably well you are still making a value statement by doing this. Weigh the cost saving against the cost of replacing one or more of your key development staff.
We connect to our development environments using RDP and locally the performance is great. It slows a bit over VPN, but is still acceptably responsive.
Turn off all the Windows animation functionality, desktop background, etc., and that will help considerably.
If you're not worried about the latency on audio and fast-moving imagery and you're not developing anything dependent on 3D hardware, you'll likely be fine.
I've never used it in a team environment, but I use my laptop RDP'd into my workstation all day and love it.
I've worked in an environment where we would occasionally edit some existing code via remote desktop. There were no significant challenges to this. As a developer I positively hated doing that work. Everything felt slow and unresponsive. However, we got the work done.
Thankfully these were often short 3-4 hour jobs... mostly fixes to existing systems on remote customer sites. I don't think I could recommend it as a normal way of doing work, but it's certainly possible.
I've used both VNC and RDP over a DSL connection, running through an SSH tunnel, and have had no real issues.
There are definitely some lags, particularly if you're redrawing large parts of a screen. But most development involves small edits, and both of these protocols handle that very well.
I use Remote Desktop to control my Windows machine at work. I use a Parallels VM on a Mac and my connection is 2.5M down, 256k up.
This works really really well. I've been doing this for 2 years for 1-3 days a week. The slow upspeed isn't an issue - I can't type that fast.
I have 3 screens at work but still find a 20" Mac screen to be superior. The colours are much cleaner and I can work longer at the Mac than my work screens!
The thing that is a killer is Flash in a browser. If I accidentally open a browser on my remote machine with Flash, it kills the connection. The solution is to use FlashBlock (a Firefox add-on).
I use Eclipse and Visual Studio with no issues whatsoever.
I've used it to work from home (remote login to my in-office PC via VPN).
The performance depends on your ISPs, of course.
It's slightly less reliable: as well as having downtime whenever the office LAN is down, there's now the additional risk of downtime while either of the internet connections is down.
I have a remote server on a 1Mbps upstream pipe which I RDP to (over a VPN) and it works just fine. I even use large screen resolutions (1600x1200) with no performance problems. Of course, I'm not sure how such a setup would fare for multiple concurrent users, however.
A benefit of developing over RDP that I hadn't anticipated is that you can save your sessions: after you get done developing for the day, you quit your RDP client and power down your computer, and when you log back in the following day your session is right where you left it.
As an added bonus, RDP clients are available for linux, and OS X.
I use RDP daily for development. I leave my laptop on at home with my work environment open and ready to go; when I get to work and everybody is loading up their projects and opening their programs, I just RDP in and I'm ready to go. You have to keep in mind certain keyboard shortcuts that change, though (Ctrl+Alt+Del, for example); it is annoying at first, but you get used to it.
To keep the latency to a minimum, I recommend the following (a sketch of the corresponding .rdp settings follows the list):
Turn the colors down to 256 (after all, you only need to see text)
Leave the wallpaper on the other computer
Leave sounds on the other computer
Leave any themes on the other computer
Choose a lower connection speed, even if yours is higher. Windows will minimize the data sent.
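As a rough illustration, those recommendations map onto .rdp file settings roughly like this (the key names follow the standard Windows RDP client's saved-connection format but may vary between client versions; the host name is a placeholder):

```python
# Sketch: write a .rdp file that encodes the latency-saving settings above.
settings = [
    "full address:s:dev-box.example.internal",  # placeholder host
    "session bpp:i:8",               # 8-bit color = 256 colors
    "disable wallpaper:i:1",         # leave the wallpaper on the remote side
    "audiomode:i:1",                 # play sounds on the remote computer, not locally
    "disable themes:i:1",            # leave themes on the remote side
    "disable menu anims:i:1",
    "disable full window drag:i:1",
    "connection type:i:1",           # low-speed ("modem") profile, minimizes data sent
]

with open("dev-box.rdp", "w") as f:
    f.write("\n".join(settings) + "\n")
```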
One of the advantages you might also consider is processing power. If your machine at home has far better specifications than your workstation on the job, compilation time is improved a fair bit. Since your local machine only needs to update the image from the remote machine, your local computer is not under load.
Using this option also allows me to keep on track. While others log in and browse the internet and waste time, I'm set up and ready to go. Being more productive helps you get paid the big bucks (if your employer notices), while others are still stuck in their junior programming roles.
Pre-2000 I did it for 3 years every day several hours a day. This was when bandwidth sucked too.
Nowadays it's much much better.
And if you use NxMachine life gets even better :)
I did not, however, use the machine with multiple users. My concern with that would be that developers are a finicky bunch (myself included) and we tend to push machines really hard as it is.
Can't imagine several folks on one box all deciding to compile :)
G-Man
We do it with Citrix and it is very fast.
I wonder what the reason for this would be. Does the central server(s) have access to some resources that the individual developer machines could not access?
I use RDP to connect from my home computer to my work computer from time to time. I have to say, it's possible to code this way, but it's far more comfortable when the IDE is on your own machine. Even on a 100 Mbit LAN there is some noticeable lag - not enough to hinder work, but annoying nevertheless.
If people have to work from remote places on a regular basis, I'd prefer a setup where the central source control is available through some secure protocol (HTTPS, VPN, etc.), but development happens locally on the developers' machines. If you use something like SVN, which works well even with offline development, it should be much more comfortable for the programmers themselves.
What is important for a development workstation is sheer processing power. At our place, the developers have the most high-end workstations in terms of CPU, memory, disk, etc., and not in terms of audio and graphics. It's the latter that are most affected by RDP.
As long as the server that your developers are RDP-ing into is fast enough to handle multiple compiles and builds at the same time, you should be fine.
As with all things, the answer to your question is "Your Mileage May Vary", or YMMV. It depends on what the developers are doing. Do they spend most of their time writing code, or do they do a lot of large compiles? Do they need direct hardware access?
Do they need debugging rights? Once you grant them debugging rights, they basically own the machine and can interfere with other users.
It's typically much better to allow the users to develop on their own computers and use a VPN to let them access the version control system. Then they can check out the files to their local computers, do whatever they want, and check in the changes.
But RDP has its advantages too. You really need to weigh the pros and cons and decide which list is longer or more "weighty".
I use NoMachine NX Client to remote desktop onto a headless server that runs FreeNX. It is great because I can login to my session from anywhere and my last session is still there for me. Speed has never been a problem, except when the DSL line is down.
Anyway, my point is that if you are running a Linux server and use 'vi', then there is a nicer alternative to 'screen'.