What is the difference between load tests and performance tests? Are load tests just a special type of performance test? If so, could you provide an example of a performance test that is not a load test?
Terminology questions are always difficult because many definitions float around. Still, most of the time "performance test" is a broad category of tests in which we look at how the Software Under Test (SUT) behaves from a technical point of view: time to perform some computation, response time of an API or UI, memory used on the machine, disk footprint, etc. "Load test" is the special case where you check your SUT under heavy load (lots of simultaneous connections to your server, for example).
An example of a performance test that is not a load test? A "longevity test": check how your SUT behaves when it runs under normal load for a long time (several days or weeks). This test might highlight a memory or thread leak, or you could discover that a given log file grows huge, or that after some time, for some reason, the system becomes unstable.
I read that the --runInBand flag cuts Jest test duration by 50% on CI servers. I can't really find an explanation online of what that flag does, other than that it makes tests run sequentially in the same thread.
Why does running the tests sequentially in the same thread make them faster? Intuitively, shouldn't that make them slower?
Reading your linked page and some other related sources (like this GitHub issue), some users have found that:
...using the --runInBand helps in an environment with limited resources.
and
... --runInBand took our tests from >1.5 hours (actually I don't know how long because Jenkins timed out at 1.5 hours) to around 4 minutes. (Note: we have really poor resources for our build server)
As we can see, those users saw performance improvements on their machines even though those machines had limited resources. The docs describe what the --runInBand flag does as follows:
Alias: -i. Run all tests serially in the current process, rather than creating a worker pool of child processes that run tests. This can be useful for debugging.
Therefore, taking these comments and the docs into consideration, I believe the improvement in performance comes from everything running in a single process. This greatly helps a resource-limited machine because it does not have to spend memory and time creating and coordinating a pool of worker processes, a task that can prove too expensive for its limited resources.
However, I believe this is the case only if the machine you are using also has limited resources. On a more "powerful" machine (e.g., several cores, decent RAM, an SSD), running the tests in parallel workers will probably beat running them serially.
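For reference, the flag goes straight on the command line (this invocation is illustrative, not from the original question):
jest --runInBand
or you can wire it into an npm script in package.json, such as "test:ci": "jest --runInBand".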
When you run tests in multiple workers, Jest creates a cache for every worker. When you run with --runInBand, Jest uses one cache store for all tests.
I found this after running 20 identical test files. With the --runInBand flag, the first test file takes 25 seconds and each subsequent identical file takes 2-3 seconds.
When I run the tests without --runInBand, every identical test file takes 25 seconds.
I would like to ask a general question.
I am doing automation testing using the Robotium tool on a single-processor tablet. While performing some actions, my test case fails with INSTRUMENTATION TEST RUN FAILED due to a java.lang.OutOfMemoryError.
What I need to know is whether the out-of-memory error also depends on the device's processor speed, or whether it depends purely on the app and test code.
Any solutions would help me a lot.
The OutOfMemoryError indicates that you've probably run out of heap space in the application. The device's kernel may set limits on the heap, but your problem is probably in your application and test code.
Does your test run out of memory while executing large tests?
You may want to profile your application's memory usage and start by resolving memory leaks.
It can also help to keep your Robotium tests from running for extended periods of time, but that is only a band-aid if your application has memory leaks.
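As a starting point for the profiling suggested above, you can snapshot the app's heap usage from the command line with a stock Android tool (the package name is a placeholder):
adb shell dumpsys meminfo com.example.yourapp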
We're undertaking a large project that is focused on delivering automated testing of the software that we produce.
We have a lot of "events" that trigger certain behavior at specific times. Ideally, we would be able to exercise these tests in an automated fashion without the need to move the system clock in intervals to specific points in time.
To that end, I'm wondering if there is a way (with VMWare, or any other virtualization software) to increase the speed of the system clock of the guest operating system. I'm not interested in measuring performance in these tests, only functionality.
Is there anything out there that would allow for this behavior?
This works for VirtualBox:
VBoxManage setextradata "VM name" "VBoxInternal/TM/WarpDrivePercentage" x
where x is the percentage of normal speed you want (for instance, 200 doubles the clock rate and 50 halves it).
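For example, to run the guest clock at double speed on a VM named TestVM (the name is a placeholder):
VBoxManage setextradata "TestVM" "VBoxInternal/TM/WarpDrivePercentage" 200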
You can also find more information here, in the section "Accelerate or slow down the guest clock".
I was able to work around this using the Win32 API call SetSystemTimeAdjustment().
This allows you to increase the amount of time added to the system clock for each OS tick interval. It's meant generally for addressing clock skew, but can be used outside of that particular context.
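A minimal sketch of that call (the factor of 10 is illustrative, and the privilege-elevation code is omitted for brevity):

    /* Speed up the wall clock by adding more time per OS tick.
     * Requires the SE_SYSTEMTIME_NAME privilege to be enabled
     * for the process. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        DWORD adjustment, increment;
        BOOL disabled;

        /* Query the current per-tick increment (100-nanosecond units). */
        if (!GetSystemTimeAdjustment(&adjustment, &increment, &disabled)) {
            printf("GetSystemTimeAdjustment failed: %lu\n", GetLastError());
            return 1;
        }

        /* Add ten times the normal increment on each tick, so the
         * clock runs roughly ten times faster. */
        if (!SetSystemTimeAdjustment(increment * 10, FALSE)) {
            printf("SetSystemTimeAdjustment failed: %lu\n", GetLastError());
            return 1;
        }
        return 0;
    }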
I don't see what the benefits are of testing this in a fast-forwarding VM instead of unit testing the event trigger using a mock implementation of the date/time dependency.
The only thing you "gain" by testing this in a fast-forwarding VM is that you test both the system's and the programming language's date/time implementation, which I think you are save to trust because it is used, developed and tested by so many for such a long time.
I'm a software engineer who will/may be hired as a firmware test engineer. I just want to get an idea of some software tools available on the market that are used in testing firmware. Can you name them and explain a little about what type of testing they provide for firmware? Thanks in advance.
Testing comes in a number of forms and can be performed at different stages. Apart from design validation before code is even written, code testing may be divided into unit testing, integration testing, system testing, and acceptance testing (though the exact terms and number of stages may vary). In the V model, these correspond horizontally with stages in requirements and design development. Also, in development and maintenance you might perform regression testing: ensuring that fixed bugs remain fixed when other changes are applied.
As far as tools are concerned, these can be divided into static analysis and dynamic analysis. Static tools analyse the source code without executing it, whereas dynamic analysis is concerned with the behaviour of the code during execution. Some (expensive) tools perform "abstract execution", a static analysis technique that determines how the code may fail during execution without actually executing it. This approach is computationally expensive, but it can process far more execution paths and variable states than traditional dynamic analysis.
The simplest form of static analysis is code review: getting a human to read your code. There are tools to assist even with this ostensibly manual process, such as SmartBear's Code Collaborator. Likewise, the simplest form of dynamic analysis is to step through your code in your debugger, or to just run your code with various test scenarios. The former may be done by a programmer during unit development and debugging, while the latter is more suited to acceptance or integration testing.
While code review done well can remove a large number of errors, especially design errors, it is perhaps not so effective at finding certain types of errors caused by subtle or arcane semantics of programming languages. This kind of error lends itself to automatic detection using static analysis tools such as Gimpel's PC-Lint and FlexeLint tools or Programming Research's QA tools, though lower-cost approaches such as setting your compiler's warning level high and compiling with more than one compiler are also useful.
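For example, with GCC the low-cost approach is just a matter of flags (the file name is a placeholder, and the exact flags vary by compiler):
gcc -Wall -Wextra -Werror -c module.c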
Dynamic analysis tools come in a number of forms such as code coverage analysis, code performance profiling, memory management analysis, and bounds checking.
Higher-end tools/vendors include the likes of Coverity, PolySpace (an abstract execution tool), Cantata, LDRA, and Klocwork. At the lower end (in price, not necessarily effectiveness) are tools such as PC-Lint and Tessy, the open-source splint (C only), and a large number of unit testing tools.
Here are some firmware testing techniques I've found useful...
1. Unit test on the PC; i.e., extract a function from the firmware, then compile and test it on a faster platform. This lets you, for example, exhaustively test a function when doing so in situ would be prohibitively time-consuming. (See the first sketch after this list.)
2. Instrument the firmware's interrupt handlers using a free-running hardware timer: record ticks at entry and exit, and count the interrupts. Keep track of the minimum and maximum frequency and period of each interrupt handler. This data can be used for Rate Monotonic Analysis or Deadline Monotonic Analysis. (See the second sketch after this list.)
3. Use a standard protocol, such as Modbus RTU, to make an array of status data available on demand. This can be used for configuration and verification data.
4. Build the firmware version number into the code using an automated build process, e.g., by getting the version info from the source code repository. Make the version number available using #3.
5. Use lint or another static analysis tool. Demand zero warnings from lint and from the compiler with -Wall.
6. Augment your build tools with a means to embed the firmware's CRC into the code, and check it at runtime. (See the third sketch after this list.)
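To illustrate #1, here is a minimal host-side sketch (the function under test and the stride are illustrative): a saturating-add routine lifted out of firmware and swept across its input space on a PC.

    #include <assert.h>
    #include <limits.h>

    /* Example function under test: saturating 16-bit addition as it
     * might appear in the firmware source. */
    static short sat_add16(short a, short b)
    {
        long sum = (long)a + (long)b;
        if (sum > SHRT_MAX) return SHRT_MAX;
        if (sum < SHRT_MIN) return SHRT_MIN;
        return (short)sum;
    }

    int main(void)
    {
        long a, b;
        /* The stride keeps the demo quick; use a stride of 1 for a
         * truly exhaustive run over all 2^32 input pairs. */
        for (a = SHRT_MIN; a <= SHRT_MAX; a += 19)
            for (b = SHRT_MIN; b <= SHRT_MAX; b += 19) {
                long expect = a + b;
                if (expect > SHRT_MAX) expect = SHRT_MAX;
                if (expect < SHRT_MIN) expect = SHRT_MIN;
                assert(sat_add16((short)a, (short)b) == expect);
            }
        return 0;
    }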
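For #2, a sketch of the instrumentation (the timer register and the handler name are illustrative placeholders for target-specific definitions):

    #include <stdint.h>

    extern volatile uint32_t FREE_RUNNING_TIMER;  /* assumed 32-bit up-counter */

    static uint32_t isr_count;
    static uint32_t last_entry;
    static uint32_t min_period = UINT32_MAX;
    static uint32_t max_period;
    static uint32_t max_exec;

    void uart_rx_isr(void)                        /* example interrupt handler */
    {
        uint32_t entry = FREE_RUNNING_TIMER;

        if (isr_count > 0) {
            /* Unsigned subtraction handles timer wrap-around correctly. */
            uint32_t period = entry - last_entry;
            if (period < min_period) min_period = period;
            if (period > max_period) max_period = period;
        }
        last_entry = entry;
        isr_count++;

        /* ... actual handler work goes here ... */

        /* Worst-case execution time for the monotonic analysis. */
        uint32_t exec = FREE_RUNNING_TIMER - entry;
        if (exec > max_exec) max_exec = exec;
    }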
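And for #6, a sketch of the runtime check (the linker symbols are illustrative, assuming a build step computes the image CRC and patches it in after linking):

    #include <stdint.h>

    extern const uint8_t  __image_start[];   /* set by the linker script */
    extern const uint8_t  __image_end[];
    extern const uint32_t __image_crc;       /* patched in by the build tool */

    /* Standard reflected CRC-32 (polynomial 0xEDB88320). */
    static uint32_t crc32(const uint8_t *p, const uint8_t *end)
    {
        uint32_t crc = 0xFFFFFFFFu;
        while (p < end) {
            crc ^= *p++;
            for (int i = 0; i < 8; i++)
                crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
        }
        return ~crc;
    }

    int image_is_valid(void)   /* call from startup code; nonzero means OK */
    {
        return crc32(__image_start, __image_end) == __image_crc;
    }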
I have found stress tests useful. This usually means giving the system a lot of input in a short time and seeing how it handles it. Input could be:
A file with a lot of data to process. An example would be a file of wave data that needs to be analyzed by an alarm device.
Data received from an application running on another machine. For example, a program that generates random touch-screen press/release data and sends it to the device over a debug port (see the example below).
These types of tests can shake out a lot of bugs (particularly in systems where performance is critical as well as limited). A good logging system is also worth having to track down the causes of the errors raised by a stress test.
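For the random touch-input case mentioned above, Android's stock monkey tool is one ready-made example (the package name and event count are placeholders):
adb shell monkey -p com.example.yourapp -v 10000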
I use TFS 2008. We run unit tests as part of our continuous integration build and integration tests nightly.
What other types of testing do you automate and include in your build process? What technologies do you use to do so?
I'm thinking about smoke tests, performance tests, load tests but don't know how realistic it is to integrate these with Team Build.
First, we have check-in (smoke) tests that must run before code can be checked in. This is done automatically by a job that runs the tests and then makes the check-in to source control upon successful test completion.
Second, CruiseControl kicks off build and regression tests. The product is built, then several sets of integration tests are run. The number of tests varies by where we are in the release cycle; more testing is added late in the cycle during ramp-down. CruiseControl takes all submissions within a certain time window (12 minutes), so your changes may be built and tested with a small number of others.
Third, there is an automated nightly build with tests that are quite extensive. We have load or milestone points every 2 or 3 weeks. At a load point, all automated tests are run and manual testing is done as well. Performance testing is also done for each milestone. Performance tests can be kicked off on request, but the available hardware is limited, so people have to queue up for performance tests. Usually people rely on the load performance tests unless they are making changes specifically to improve performance.
Finally, stress tests are also done for each load. These tests are focused on making sure the product has no memory leaks or anything else that prevents 24/7 running of the product, as opposed to performance.
All of this is done with Ant, CruiseControl, and Python scripts.
Integrating load testing into your build process is a bad idea; just run your normal unit tests to make sure all your code works as expected. Load and performance testing should be done separately.