Let's say I have a bunch of unit tests, integration tests, and e2e tests that cover my app. Does it make sense to have these continuously running against prod, e.g. every 10 mins?
I'm thinking no, here's why:
My tests are already run after every prod deploy. If they passed and no code has changed after that, they should continue to pass. So re-running them thereafter doesn't make sense.
What I really want to test continuously is my infrastructure -- is it still running? In this case, running an API integration test every 10 mins to check if my API is still working makes sense. So I'm dealing with a subset of my test suites -- the ones that test my infrastructure availability (integration + e2e) rather than single bits of code (unit tests). So in practice, would I have separate test suites for checking prod uptime from the suites used to test pre/post deploy?
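A minimal sketch of what such an uptime-focused check might look like, assuming Python with the requests library and a hypothetical /health endpoint (the URL and thresholds are placeholders, not anything from your actual suite):

    import sys
    import time
    import requests

    PROD_URL = "https://api.example.com/health"   # hypothetical endpoint
    TIMEOUT_SECONDS = 5

    def check_api_up():
        """Single synthetic check: is the API reachable and answering sensibly?"""
        start = time.monotonic()
        response = requests.get(PROD_URL, timeout=TIMEOUT_SECONDS)
        elapsed = time.monotonic() - start
        assert response.status_code == 200, f"unexpected status {response.status_code}"
        return elapsed

    if __name__ == "__main__":
        try:
            print(f"API healthy, responded in {check_api_up():.2f}s")
        except Exception as exc:   # scheduler/alerting reacts to the non-zero exit
            print(f"API check failed: {exc}")
            sys.exit(1)

Run something like this from cron or your monitoring system every 10 minutes; it is the "infrastructure availability" subset, kept separate from the full pre/post-deploy suites.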
Such "redundant" verifications (they can include building as well, BTW, not only testing) offer additional data points, increasing the monitoring precision for your actual production process.
Depending on the complexity of your production environment, even the simple "is it up/running?" question might not have a simple answer, and subset/shortcut versions of the verifications might not cut it - you'd only cover those versions, not the actual production ones.
For example, just because a build server is up doesn't mean it's also capable of building the product successfully; you'd need to check every aspect of the build itself: availability of every tool, storage, dependencies, OS resources, etc. For complex builds it's probably simpler to just perform the build itself than to maintain code reliably checking whether the build would be feasible ;)
There are two production process attributes that would benefit from more precise monitoring (and for which subset/shortcut verifications won't be suitable either):
reliability/stability - the types, occurrence rates and root causes of intermittent failures (yes, those nasty surprises which can make the difference between meeting the release date or not)
performance - the avg/min/max durations of various verifications; especially important if verifications are expensive in terms of duration/resources involved; trending could be desired for planning, budgeting, production ETAs, etc. (a rough sketch of how such data points could be recorded follows below)
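As a rough sketch of the idea, assuming Python, a hypothetical build command and a plain CSV log (all placeholders): run the actual production verification on a schedule and record its duration and outcome, so both the intermittent-failure rate and the duration trend fall out of the same data.

    import csv
    import subprocess
    import time
    from datetime import datetime, timezone

    BUILD_COMMAND = ["make", "release"]      # hypothetical: your real production build/verification
    LOG_FILE = "verification_history.csv"

    def run_and_record():
        """Run the actual verification and log when it ran, how long it took and whether it passed."""
        started = datetime.now(timezone.utc).isoformat()
        t0 = time.monotonic()
        result = subprocess.run(BUILD_COMMAND, capture_output=True)
        duration = time.monotonic() - t0
        with open(LOG_FILE, "a", newline="") as f:
            csv.writer(f).writerow([started, f"{duration:.1f}", result.returncode])
        return result.returncode == 0

    if __name__ == "__main__":
        ok = run_and_record()
        print("verification", "passed" if ok else "FAILED")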
I don't know if any of these are applicable to, or have acceptable cost/benefit ratios for, your context, but they are definitely important for most very large/complex software projects.
I'm going to work on the software testing process for a company which has several projects (using different technologies), and I'm planning to improve and automate the software testing process. I know some of the concepts such as black box and white box testing, and some of their techniques, but I do not have much experience in the field. I'm going to have access to the projects' documentation, and I expect to be involved more with functional testing rather than white-box testing (although I'm not entirely sure).
What's the "right way" to start? I know that it depends on several factors, so I don't expect to get a perfect answer, but if I could read how others start, it would be great for me.
What sort of guidelines do you follow from the start? Where do the CMMI and IEEE 829 standards come in? Are there any other standards/guidelines worth noting?
What's the best way to make a correct assessment of the current efficiency/productivity level of the software testing process inside the company?
Different Phases of Testing Life Cycle
The testing life cycle intersects the software development life cycle. When testing starts varies from one company to another. In some companies testing starts simultaneously with development, while in others it starts after the software has been built. Both of these methods have their own advantages and disadvantages. Whatever method is adopted for testing the software, the steps followed are more or less as described below.
Planning Phase
The software testing life cycle starts with the test planning stage. It is recommended that one spend a lot of time in this phase, to minimize headaches in the other software testing phases. It is in this phase that the 'Test Plan' is created: a document listing the items and features to be tested, the pass/fail criteria for a test, the exit criteria, the environment to be created, and the risks and contingencies. This gives the testing team refined specifications.
Analysis Phase
An analysis of the requirements is carried out, so that the testing team can be well versed in the software that has been developed. It is in this phase that the types of testing to be carried out in the different phases of the testing life cycle are decided upon. In some cases the tests may have to be automated, and in others manual tests will have to be carried out. A functional validation matrix, based on the business requirements, is created; each requirement normally maps to one or more test cases. This matrix helps in analyzing which of the test cases will have to be automated and which will have to be tested manually.
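One possible shape for such a matrix, sketched in Python with purely hypothetical requirement and test-case names, just to make the idea concrete:

    # Illustrative data only: each business requirement maps to one or more
    # test cases, plus a decision on whether that case will be automated.
    functional_validation_matrix = {
        "REQ-001 User can log in": [
            {"test_case": "TC-01 valid credentials", "automated": True},
            {"test_case": "TC-02 invalid password", "automated": True},
        ],
        "REQ-002 Monthly report is generated": [
            {"test_case": "TC-10 report layout matches spec", "automated": False},  # visual check
        ],
    }

    # Quick summary used when planning the automation effort.
    for requirement, cases in functional_validation_matrix.items():
        manual = [c["test_case"] for c in cases if not c["automated"]]
        print(requirement, "-> manual cases:", manual or "none")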
Designing Phase
In the software testing life cycle, this phase has an important role to play. Here the test plan, functional validation matrix, test cases, etc. are all revised. This ensures that no problems exist in any of them. If the test cases have to be automated, suitable scripts for them are designed at this stage. Test data for both manual and automated test cases is also generated.
Development Phase
Based on the test plan and test cases, all of the test scripting takes place in this phase. If the testing activity starts along with the development activity of the software, the unit tests will also have been implemented in the development phase. Often, along with the unit tests, stress and performance test plans are generated in this phase.
Execution Phase
After the test scripts have been written, they are executed. Initially, unit tests are executed, followed by functionality tests. In the initial phase, testing is carried out at a superficial level, i.e. at the top level. This helps in identifying bugs at the top level, which are then reported to the development team. Then the software is tested in depth. The test reports are created and bugs are reported.
Retest and Regression Testing Phase
Once the bugs have been identified, they are sent to the development team. Depending on the nature of the bug, the bug may be rejected, deferred or fixed. If the bug has been accepted and fixed immediately, the software has to be retested to check that the bug has indeed been fixed. Regression testing is carried out to ensure that no new bugs have been introduced into the software while the bug was being fixed.
Implementation
After the system has been checked, final testing on the developers' side is carried out. It is here that load, stress, performance and recovery testing is carried out. Then the software is deployed at the customer's end. The end users test the software, and any bugs found are reported. The necessary documents for this are generated.
The testing life cycle does not end with the implementation phase. This is when the bugs found are studied, so as to rule out such problems in the future. This analysis helps in improving the software development process for the next piece of software.
I am working on a project which is quite complex in terms of size (it's to make a web app). The first problem is that nobody is interested in any products which could really solve the problems surrounding the project (lack of time, no adjustments in timescales in response to ever-changing requirements). Bear in mind these products are not expensive (< $500 for a company making millions) and not products which require a lot of configuration (though the project needs products like that, such as build automation tools, to free up time).
Anyway, this means that testing is all done manually, as documentation is a deliverable - which means the actual technical design, implementation and testing of the site suffer (are we developers or document writers? What are we trying to do here? are questions which come to mind). The site is quite large and complex (not on the scale of Facebook or anything like that), but doing manual tests as instructed (despite my warnings) tells me this is not high-quality testing, and therefore not a high-quality product will come out of it.
What benefits can I suggest to the relevant people to encourage automated testing (which they know I can implement)? I know it is possible to change resolution via cmd with a 3rd-party app for Windows, so this could all be part of an automated build. Instead, I will probably have to run through all these permutations of browsers, screen resolutions, and window sizes manually. Also, where do recorded tests fall down? Do they break when windows are minimised? The big problem with this is that I am doing the work of monitoring the test, and the PC is not doing ALL of the work, which is my job (make the PC do all the work). And given a lack of resources, this clogs up a dev box - yes, used for development and then by me for testing. Much better to automate this for a night run when the box is free.
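For what it's worth, a rough sketch of what the permutation run could look like with Python and Selenium WebDriver, using window sizes as an approximation of screen resolutions (the URL, sizes and sanity check are placeholders for whatever your app actually needs):

    from selenium import webdriver

    TARGET_URL = "https://intranet.example.com/app"   # placeholder
    WINDOW_SIZES = [(1024, 768), (1366, 768), (1920, 1080)]

    def smoke_check(driver):
        """Very small sanity check; replace with real page assertions."""
        assert "error" not in driver.title.lower()

    def run_permutations():
        for width, height in WINDOW_SIZES:
            driver = webdriver.Firefox()   # swap in Chrome/Edge for browser permutations
            try:
                driver.set_window_size(width, height)
                driver.get(TARGET_URL)
                smoke_check(driver)
                print(f"OK at {width}x{height}")
            finally:
                driver.quit()

    if __name__ == "__main__":
        run_permutations()

Scheduled as a nightly job, the box does the grinding through the permutations while you sleep, which is exactly the argument to put in front of management.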
Thanks
Talking about money is usually the best way to get management attention, so here are a few suggestions:
Estimate how long it takes you to do your current manual testing.
Get a list of critical bugs that were found by customers - ideally with an idea of the impact cost (fixing a bug after release is always much more expensive than before), but it's usually good enough just to describe one or two particularly bad bugs. Your manual testing didn't catch these customer bugs, so this is a good way to demonstrate that your manual testing is inadequate.
Come up with a pilot project where you automate testing a certain area of the product where bugs were found in production. Estimate the cost of the pilot project - doing a restricted pilot has the advantage of being easier to scope and estimate. Then compare the ongoing cost of repeatedly running the automation versus testing every release manually; after a few releases you should break even on the cost of the automation tool plus the test development (a rough back-of-envelope sketch follows). Be careful picking the automation area - try to avoid areas like a complex UI that might change significantly between releases and thus require a lot of time to be spent on updating the automated tests.
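A back-of-envelope version of that break-even comparison, in Python; every figure here is a placeholder to be replaced by your own estimates:

    # All figures are placeholders; substitute your own estimates.
    manual_cost_per_release = 40 * 50        # 40 tester-hours at $50/hour
    automation_tool_cost = 500               # one-off licence
    automation_build_cost = 80 * 50          # 80 hours to write the pilot suite
    automated_run_cost_per_release = 2 * 50  # maintenance/triage per release

    cumulative_manual = 0
    cumulative_automated = automation_tool_cost + automation_build_cost
    for release in range(1, 9):
        cumulative_manual += manual_cost_per_release
        cumulative_automated += automated_run_cost_per_release
        marker = "<-- break-even" if cumulative_automated <= cumulative_manual else ""
        print(f"release {release}: manual ${cumulative_manual}, automated ${cumulative_automated} {marker}")

With these made-up numbers the automation pays for itself by the third release; the point is that the argument is easy to put in a spreadsheet for management.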
Good luck to you. I screamed for all of this and I work for a billion+ company. We still perform manual testing (including regression testing). Automated tests are finally being instituted because some of the developers went out and got demos of some of the software you're describing and began configuring a framework.
Your best bet is to come up with an actual dollars-and-cents documented comparison between working with a product and working without a product, to prove unequivocally to the management figures in charge of spending the money and designing the processes that the ROI is not only there, but that the people who need to perform testing and/or change their existing processes will actually find their jobs a little bit easier.
Go grassroots. Talk to your team, get them on board. Talk to your business analysts, get them on board. Talk to any QA people you have and get them on board. When the villagers attack the castle with pitchforks and torches, you can bet that the wallets will open up and you'll be performing automated testing.
I would just try to automate as much as you can, whenever you can. I don't think you need to necessarily ask for permission to do things like this. Maybe your management doesn't think of these things, and often they won't see the benefit until you show them a great example.
Is it just that capital expenditures are difficult? I've seen places where the time of existing employees is treated as already spent, and therefore essentially free in comparison to new purchases.
As for convincing managers: cost of manual regression tests versus cost to automate. If you are running lots of manual tests, this should be an easy win. If you aren't running the tests often, try the cost of a bug. However, in many companies the cost of a bug isn't attributed to the development department, so quality and the cost of a bug may not be a strong motivation (in other words, quality is just about pride and ego, not actually what it costs).
Convincing developers... if they aren't already on board... electro-shock therapy? If they aren't there, it's going to be an uphill battle.
I have been trying to do something similar on my current project... I can say there's another factor - time. There's a learning curve on automated tools and automated test development. The first release that is tested with automated tools will not be tested as quickly as it was manually, because the testers are learning the tools in addition to exercising tests. The second release will be much faster, and every release after that will be faster still - but the first one will be a schedule hit, if not a cost hit.
The financial case is not too hard - over time, the project saves lots of money, as resources for repetitive testing are vastly reduced.
But the hard part is finding a strategy that lets you get the tool into use with a minimum of schedule drag on the first release that uses the test tool. Testing is always squashed at the end of the schedule, so it's the thing most sensitive to schedule stress. Anything you can do to show management how to reduce or remove the learning curve and the automated test setup and installation time is likely to increase your chances of using the tool.
Is there a rule on how often an application should be stress or load tested? I normally do it before putting a new version into production, when the hardware changes, or when the expected number of users is known to change.
But today I was asked if this should be a standard practice for an application that is in production even if no changes are introduced. If so, how often?
It really depends on how you want to address it for your company's needs. Personally, we load test our integration (test) builds daily - just like the builds go out. After the build runs at approximately 1 a.m., we have it scripted to be load tested as well. Our goal is specifically looking for build-over-build changes in performance. Even if we do not introduce changes into the code, the servers that the code is load tested on still receive updates/patches/hotfixes/service packs/etc. At worst, once automated, it provides additional historical data.
We are going this route (build relativity) because it is cost-prohibitive to try to replicate our production hardware environment. In the event that we see a sudden change (or gradual changes) to key performance monitors, we can look into what changesets were introduced at that time and isolate potential code changes that adversely impacted performance.
From the sound of it, you are testing against a lab that replicates production? That is a different approach than we had, because we are going under the assumption that most of our bottlenecks would be code-induced and not directly dependent on hardware. We use VMs to approximate, but not duplicate, our production environment.
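To make the build-over-build idea concrete, here is a small sketch of the nightly scripted step in Python; the test-environment URL is hypothetical and the build number is assumed to be passed in by the CI job:

    import statistics
    import sys
    import time
    import requests

    TEST_ENV_URL = "https://test-env.example.com/api/ping"   # hypothetical nightly deployment
    REQUESTS_PER_RUN = 200

    def measure(build_id):
        """Fire a fixed batch of requests and append one summary row per build."""
        latencies, errors = [], 0
        for _ in range(REQUESTS_PER_RUN):
            t0 = time.monotonic()
            try:
                requests.get(TEST_ENV_URL, timeout=10).raise_for_status()
                latencies.append(time.monotonic() - t0)
            except requests.RequestException:
                errors += 1
        median = statistics.median(latencies) if latencies else float("nan")
        with open("load_history.csv", "a") as f:
            f.write(f"{build_id},{median:.4f},{errors}\n")

    if __name__ == "__main__":
        measure(sys.argv[1])   # e.g. invoked by the CI job with the build number

One row per build is enough to spot a sudden jump between two builds and then go digging in that build's changesets.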
One thing that affects system performance, even though the code is unchanged, is data.
An example might be performance of a database query. As data is added to a table the cost of maintaining indexes goes up. Page splits in the index can degrade performance. As indexes grow, the number of 'levels' in the index will every so often have to be increased. When that happens you see a sudden, apparently inexplicable change in performance.
Running stress tests in a production environment is not always possible - it affects your day-to-day business. More often, systems are instrumented to provide ongoing feedback about performance, maybe using something like Ganglia. The data are used to detect issues and for capacity planning.
I think that whenever you change something in the application - code, data files, use cases - and that includes but is not limited to the expected number of users, you should test it.
My two cents: sometimes you won't have changed anything and your site's performance could suffer. It could be from the app handling too much data (i.e. caches overflowing). It could be from third-party advertisements on the site slowing down. Heck, it could be because of faulty RAM!
The main thing is that while it's generally advised you do testing after any known change, it's also not a bad idea to do occasional performance testing to check for possible unknown changes that could affect performance.
My company, BrowserMob, provides a free website monitoring service that runs real Selenium scripts every X minutes against your site. While it's not a load test, it definitely can help you identify trends and bottlenecks in your production site.
This depends on how mission-critical your system is... if it is just a small tool that people can do without, once, when you put it into production; if your life depends on it, after every single build.
As far as I'm concerned, that is.
Depends on how much effort the testing requires. If it is easy enough, more testing never hurts. However, I see no reason to test if there are no changes expected. If that would lead to the software not being tested for a long time, then it might be appropriate to run the tests from time to time in case there are unexpected changes.
Needless to say, it also depends on whether your software is running a nuclear reactor or a bulletin board.
If no changes are introduced in your product and the load testing simply repeats the same process over and over for some interval, I don't see the benefit of re-running.
I used to have stress tests as part of my ant file, so I could run those tests every night if I wanted. I would also run them when I was making any changes, or testing a possible new change that would in one way or another impact what I was testing, to see if there was any improvement.
I think how often would depend on your environment.
If you can't really stress test in development then you may have to wait until you get to QA, where you are testing just before you go to production.
I think the sooner you do it the better, as you can find problems and fix them faster: you know it worked two nights ago and failed last night's test, so there was some change during the day that caused it.
I used junitperf for many of my stress tests.
I think we don't want to stress test Notepad. It's a joke. Never mind =).
Stress testing is not for all applications. :-)
If your software can kill someone, I think you should.
Imagine someone dying because the blood didn't arrive because of some timeout in the system.
"Timeout.exception: your heart are not coming anymore. Try again later"
I've recently become quite interested in identifying patterns for software scalability testing. Due to the variable nature of different software solutions, it seems like there are as many good solutions to the problem of scalability testing software as there are to designing and implementing software. To me, that means that we can probably distill some patterns for this type of testing that are widely used.
For the purposes of eliminating ambiguity, I'll say in advance that I'm using the wikipedia definition of scalability testing.
I'm most interested in answers proposing specific pattern names with thorough descriptions.
All the testing scenarios I am aware of use the same basic structure for the test, which involves generating a number of requests on one or more requesters targeted at the processing agent to be tested. Kurt's answer is an excellent example of this process. Generally you will run the tests to find some thresholds and also run some alternative configurations (fewer nodes, different hardware, etc...) to build up accurate averaged data.
A requester can be a machine, network card, specific software or thread in software that generates the requests. All it does is generate a request that can be processed in some way.
A processing agent is the software, network card, machine that actually processes the request and returns a result.
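To make the structure concrete, here is a minimal sketch in Python, with threads standing in for requester machines and a hypothetical HTTP endpoint standing in for the processing agent; a real test would of course use dedicated load tools and hardware:

    import time
    from concurrent.futures import ThreadPoolExecutor
    import requests

    PROCESSING_AGENT = "https://service.example.com/work"   # hypothetical

    def one_request(_):
        """One synthetic request against the processing agent."""
        t0 = time.monotonic()
        try:
            ok = requests.get(PROCESSING_AGENT, timeout=30).status_code == 200
        except requests.RequestException:
            ok = False
        return ok, time.monotonic() - t0

    def run_level(concurrent_requesters, total_requests):
        """One load level: N requesters firing requests at the processing agent."""
        with ThreadPoolExecutor(max_workers=concurrent_requesters) as pool:
            results = list(pool.map(one_request, range(total_requests)))
        ok_count = sum(1 for ok, _ in results if ok)
        avg_latency = sum(t for _, t in results) / len(results)
        return ok_count, avg_latency

    if __name__ == "__main__":
        # Ramp through several levels to find thresholds, as described above.
        for level in (1, 5, 25, 100):
            ok, avg = run_level(level, total_requests=level * 20)
            print(f"{level} requesters: {ok} ok, avg latency {avg:.3f}s")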
However, what you do with the results determines the type of test you are doing. The types are:
Load/Performance Testing: This is the most common one in use. The results are processed to see how much is processed at various levels or in various configurations. Again, what Kurt is looking for above is an example of this.
Balance Testing: A common practice in scaling is to use a load-balancing agent which directs requests to a processing agent. The setup is the same as for load testing, but the goal is to check the distribution of requests. In some scenarios you need to make sure that an even (or as close to even as is acceptable) balance of requests across processing agents is achieved, and in other scenarios you need to make sure that the processing agent that handled the first request from a specific requester handles all subsequent requests (web farms are commonly set up like this).
Data Safety: With this test the results are collected and the data is compared. What you are looking for here are locking issues (such as a SQL deadlock) which prevent writes, and whether data changes are replicated to the various nodes or repositories you have in use within an acceptable time.
Boundary Testing: This is similar to load testing, except the goal is not processing performance but how the amount stored affects performance. For example, if you have a database, how many rows/tables/columns can you have before I/O performance drops below acceptable levels?
I would also recommend The Art of Capacity Planning as an excellent book on the subject.
I can add one more type of testing to Robert's list: soak testing. You pick a suitably heavy test load, and then run it for an extended period of time - if your performance tests usually last for an hour, run it overnight, all day, or all week. You monitor both correctness and performance. The idea is to detect any kind of problem which builds up slowly over time: things like memory leaks, packratting, occasional deadlocks, indices needing rebuilding, etc.
This is a different kind of scalability, but it's important. When your system leaves the development shop and goes live, it doesn't just get bigger 'horizontally', by adding more load and more resources, but in the time dimension too: it's going to be running non-stop on the production machines for weeks, months or years, which it hasn't done in development.
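A minimal sketch of a soak-test loop in Python, just to illustrate the shape of it; the target URL and process ID are placeholders, and psutil is an assumption used here to watch the server process's memory when it runs on the same machine:

    import time
    import requests
    import psutil   # assumption: only works if the watched process is local

    TARGET_URL = "http://localhost:8080/api/ping"   # hypothetical app under soak
    SERVER_PID = 12345                              # placeholder: the process to watch
    SOAK_HOURS = 12

    def soak():
        proc = psutil.Process(SERVER_PID)
        deadline = time.monotonic() + SOAK_HOURS * 3600
        while time.monotonic() < deadline:
            t0 = time.monotonic()
            try:
                status = requests.get(TARGET_URL, timeout=10).status_code
            except requests.RequestException as exc:
                status = f"error:{exc.__class__.__name__}"
            latency = time.monotonic() - t0
            rss_mb = proc.memory_info().rss / 1024 / 1024
            # Log both correctness and the slow-building signals (latency drift, memory growth).
            print(f"{time.strftime('%H:%M:%S')} status={status} "
                  f"latency={latency:.3f}s rss={rss_mb:.1f}MB")
            time.sleep(1)

    if __name__ == "__main__":
        soak()

The interesting output is not any single sample but the trend over the whole run: steadily climbing RSS or latency is exactly the slow-building problem a soak test is meant to catch.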
Our product has earned a bad reputation in terms of performance. Well, it's a big enterprise application, 13 years old, that needs a refresh, and specifically a boost in its performance.
We decided to address the performance problem strategically in this version. We are evaluating a few options on how to do that.
We do have experienced load test engineers equipped with the best tools on the market, but usually they get a stable release late in the version development life cycle, so in the last versions developers didn't have enough time to fix all their findings. (Yes, I know we need to deliver stable versions earlier; we are working on that process as well, but it's not in my area.)
One of the directions I am pushing is to set up a lab environment installed with the nightly build so developers can test the performance impact of their code.
I'd like this environment to be constantly loaded by scripts simulating real users' experience. In this loaded environment, each developer will have to write a specific script that tests his code (i.e. a single user's experience in a real-world environment). I'd like to generate a report that shows each iteration's impact on existing features, as well as the performance of new features.
I am a bit worried that I'm aiming too high and that it will turn out to be too complicated.
What do you think of such an idea?
Does anyone have an experience with setting up such an environment?
Can you share your experience?
It sounds like a good idea, but in all honesty, if your organisation can't get a build to the expensive load test team it has employed just for this purpose, then it will never get your idea working.
Go for the low hanging fruit first. Get a nightly build available to the performance testing team earlier in the process.
In fact, if this version is all about performance, why not have the team just take this version to address all the performance issues that came late in the iteration for the last version.
EDIT: "Don't developers have a responsibility to performance test code?" was a comment. Yes, true. I personally would have every developer have a copy of the YourKit Java profiler (it's cheap and effective) and know how to use it. However, unfortunately performance tuning is a really, really fun technical activity, and it is possible to spend a lot of time doing it when you would be better off developing features.
If your developer team are repeatedly developing noticeably slow code then education on performance or better programmers is the only answer, not more expensive process.
One of the biggest boosts in productivity is an automated build system which runs overnight (this is called Continuous Integration). Errors made yesterday are caught today, early in the morning, when I'm still fresh and when I might still remember what I did yesterday (instead of several weeks/months later).
So I suggest to make this happen first because it's the very foundation for anything else. If you can't reliably build your product, you will find it very hard to stabilize the development process.
After you have done this, you will have all the knowledge necessary to create performance tests.
One piece of advice though: Don't try to achieve everything at once. Work one step at a time, fix one issue after the other. If someone comes up with "we must do this, too", you must do the same triage as you do with any other feature request: How important is this? How dangerous? How long will it take to implement? How much will we gain?
Postpone hard but important tasks until you have sorted out the basics.
Nightly builds are the right approach to performance testing. I suggest you require scripts that run automatically each night. Then record the results in a database and provide regular reports. You really need two sorts of reports:
A graph of each metric over time. This will help you see your trends.
A comparison of each metric against a baseline. You need to know when something drops dramatically in a day or when it crosses a performance threshold (a small sketch of this baseline check follows the suggestions below).
A few other suggestions:
Make sure your machines vary in the same way your intended environment does. Have low-end and high-end machines in the pool.
Once you start measuring, never change the machines. You need to compare like to like. You can add new machines, but you can't modify any existing ones.
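A minimal sketch of the baseline-comparison report mentioned above, in Python, assuming a hypothetical CSV history file with date, metric name and value columns; the metric names, baseline figures and tolerance are all placeholders:

    import csv

    HISTORY_FILE = "perf_history.csv"        # assumed columns: date, metric_name, value
    BASELINES = {"login_p95_ms": 250.0, "report_generation_s": 30.0}   # placeholder baselines
    TOLERANCE = 0.10                          # flag anything more than 10% over baseline

    def latest_values():
        latest = {}
        with open(HISTORY_FILE, newline="") as f:
            for date, metric, value in csv.reader(f):
                latest[metric] = (date, float(value))   # later rows overwrite earlier ones
        return latest

    def report():
        for metric, (date, value) in latest_values().items():
            baseline = BASELINES.get(metric)
            if baseline is None:
                continue
            if value > baseline * (1 + TOLERANCE):
                print(f"REGRESSION {metric}: {value} on {date} (baseline {baseline})")
            else:
                print(f"ok         {metric}: {value} (baseline {baseline})")

    if __name__ == "__main__":
        report()

Run as the last step of the nightly job, this gives you the "did we cross a threshold today?" answer, while the stored history feeds the over-time graphs.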
We built a small test bed to do sanity testing - i.e. did the app fire up and work as expected when the buttons were pushed, did the validation work, etc. Ours was a web app and we used Watir, a Ruby-based toolkit, to drive the browser. The output from those runs is created as XML documents, and our CI tool (CruiseControl) could output the results, errors and performance as part of each build log. The whole thing worked well, and could have been scaled onto multiple PCs for proper load testing.
However, we did all that because we had more bodies than tools. There are some high-end stress test harnesses that will do everything you need. They cost, but that will be less than the time spent to hand-roll your own. Another issue we had was getting our devs to write Ruby/Watir tests; in the end that fell to one person, and the testing effort was pretty much a bottleneck because of that.
Nightly builds are excellent, lab environments are excellent, but you're in danger of muddling performance testing with straight up bug testing I think.
Ensure your lab conditions are isolated and stable (i.e. you vary only one factor at a time, whether that's your application or a Windows update) and the hardware is reflective of your target. Remember that your benchmark comparisons will only be bulletproof internally to the lab.
Test scripts written by the developers who wrote the code tend to be a toxic thing. They don't help you drive out misunderstandings at implementation (since the same misunderstanding will be in the test script), and there is limited motivation to actually find problems. Far better is to take a TDD approach and write the tests first as a group (or a separate group), but failing that you can still improve the process by writing the scripts collaboratively. Hopefully you have some user stories from your design stage, and it may be possible to replay logs for real-world experience (with the app varying).