Our organization needs to run performance tests at both the protocol level (HTTP requests) and the UI level (browser actions).
After some exploration, I found that eggPlant has an option to execute functional scripts in eggPlant Performance. Does anyone know which real-browser metrics eggPlant Performance provides?
Looking forward to some useful help.
EggPlant Performance doesn't provide a fixed list of metrics. A better mental model is this: while a test suite is running, EggPlant records event log entries. Those event log entries are consumed by a collection of analysis tools to derive whatever charts and metrics are deemed relevant. In addition to the default events captured during a test run, EggPlant supports logging custom events.
This is useful because it allows you as the user to choose what metrics are relevant to your specific use case.
Instead of asking "What metrics are there?", the better question is "What metrics do I care about?".
I realize that this might not be the best platform to ask this, but I think it is the most unbiased one to put my question to.
How would you compare OpenMDAO vs. modeFrontier with regard to their optimization capabilities, application scaling, and overall software development? Which one would you pick, and why?
If you know of any resources or links, please share them.
The most fundamental technical difference is OpenMDAO can pass data + derivative information between components. This means that if you want to use gradient based optimization and have access to at least some tools that provide derivative information, OpenMDAO will have far more effective overall capabilities. This is especially important when doing optimization with high-cost analysis tools (e.g. partial differential equation solvers --- CFD, FEA). In those situations making use of derivatives offers between a 100x and 10000x speedup.
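To make the derivative-passing point concrete, here is a minimal sketch using OpenMDAO's documented ExplicitComponent API; the paraboloid component is just a toy stand-in for an expensive analysis, and the variable names and bounds are illustrative only.

```python
import openmdao.api as om

class Paraboloid(om.ExplicitComponent):
    """Toy analysis with analytic derivatives, standing in for an expensive solver."""

    def setup(self):
        self.add_input('x', val=0.0)
        self.add_input('y', val=0.0)
        self.add_output('f', val=0.0)
        self.declare_partials('f', ['x', 'y'])

    def compute(self, inputs, outputs):
        x, y = inputs['x'], inputs['y']
        outputs['f'] = (x - 3.0) ** 2 + x * y + (y + 4.0) ** 2 - 3.0

    def compute_partials(self, inputs, partials):
        x, y = inputs['x'], inputs['y']
        # Analytic partials: the optimizer gets exact gradients instead of
        # having to finite-difference the (potentially expensive) analysis.
        partials['f', 'x'] = 2.0 * (x - 3.0) + y
        partials['f', 'y'] = x + 2.0 * (y + 4.0)

prob = om.Problem()
prob.model.add_subsystem('parab', Paraboloid(), promotes=['*'])

prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'

prob.model.add_design_var('x', lower=-50.0, upper=50.0)
prob.model.add_design_var('y', lower=-50.0, upper=50.0)
prob.model.add_objective('f')

prob.setup()
prob.run_driver()
print(prob.get_val('x'), prob.get_val('y'), prob.get_val('f'))
```

Because compute_partials supplies derivatives alongside the data, a gradient-based driver can use them directly, which is where the speedups mentioned above come from when the analysis itself is expensive.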
One other difference is that OpenMDAO is designed to run natively on a distributed-memory compute cluster. Industrial frameworks can submit jobs to remote clusters and query for the results, but OpenMDAO itself can run on the cluster and has a direct, internal MPI-based distributed-memory capability. This is critical to its ability to efficiently handle derivatives of those expensive PDE solvers. To the best of my knowledge, OpenMDAO is unique in this regard. This is a low-level technical detail that most users never need to understand directly, but the consequence is that if you want to do any kind of high-fidelity coupled optimization (aero-structural, aero-propulsive, aero-thermal) with more than one PDE solver in the loop, then OpenMDAO's architecture is going to be by far the most effective.
However, OpenMDAO does not offer a GUI, and it does not have the same level of data-tracking and visualization tools. Also, I know that modeFrontier offers the ability to split a single model up across multiple computers distributed across an organization. modeFrontier, along with other tools like ModelCenter and Isight, offers this kind of smooth user experience and code-free interaction that many find valuable.
Honestly, I'm not sure a direct comparison is really warranted. If your organization invests in a commercial integration tool like modeFrontier, you can still use OpenMDAO to create tightly coupled, integrated optimizations which you then include as boxes inside your overall integration framework.
You certainly can use OpenMDAO as a complete integration framework, and it has some advantages in that area related to derivatives and execution in distributed memory environments. But you don't have to, and it certainly does not have to be an exclusive decision.
I want to start writing performance tests for my project (a JMeter + Selenium bundle). The following questions arose:
Where do I start? What questions should I answer before I start writing tests?
Which metrics should I concentrate on first, if I'm interested in stability, some aspects of performance and, for example, page load speed?
Is it possible to integrate with our existing test automation framework, to avoid writing tests specifically for JMeter from scratch?
I will be glad of any other advice, tips, links, etc.
You need to determine your goal prior to starting any testing; for example, you can ask around for any SLAs or NFRs you can stick to. If they don't exist, you can mimic real-life usage of your application and increase the load until errors start occurring or response time exceeds reasonable values (it is highly unlikely that a user will wait more than, say, 20 seconds for a page to load).
It depends on the nature of the application under test; check out Performance Metrics for Websites for a high-level overview. The key points are listed below (a minimal sketch of computing the first two follows the list):
response time
throughput (requests per second)
application under test health (how much headroom in terms of CPU, RAM, network/disk IO is left)
how the KPIs correlate (i.e. when you add more users, does throughput increase, remain the same, or decrease)
how (if) your application scales
etc.
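As a rough sketch of the first two points, here is how response-time percentiles and throughput could be derived from a handful of JMeter-style samples; the timestamps and elapsed times below are made up for illustration.

```python
# Hypothetical samples: (timestamp in ms, elapsed time in ms) per request.
samples = [
    (1_700_000_000_000, 310),
    (1_700_000_000_450, 275),
    (1_700_000_001_000, 940),
]

# Response time: sort the elapsed times and pick a percentile.
elapsed = sorted(ms for _, ms in samples)
p90 = elapsed[int(0.9 * (len(elapsed) - 1))]

# Throughput: requests per second over the measured window.
duration_s = (samples[-1][0] - samples[0][0]) / 1000 or 1
throughput = len(samples) / duration_s

print(f"90th percentile response time: {p90} ms")
print(f"throughput: {throughput:.2f} requests/second")
```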
For JMeter there is one main rule: the JMeter test should represent a real user using a real browser as closely as possible (mind cookies, headers, cache, embedded resources, think times, etc.). You can build a test "skeleton" fairly quickly and easily using the JMeter Proxy Server. For Selenium it is enough to stick to the main OOP principles plus the Page Object pattern.
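As a sketch of the Page Object idea using Selenium's Python bindings (the URL and locators here are hypothetical placeholders, not taken from any real application):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """Page Object for a hypothetical login screen."""

    URL = "https://example.com/login"  # placeholder URL

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def login(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
        return self

# The test talks only to the page object, so when the markup changes
# only the locators above need updating, not every test.
driver = webdriver.Chrome()
try:
    LoginPage(driver).open().login("demo_user", "demo_password")
finally:
    driver.quit()
```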
If someone has a webpage, the usual way of testing the web site for user-interaction bugs is to create each test case by hand and use Selenium.
Is there a tool to create these test cases automatically? So that if I have a webpage that gets altered, new test cases get created automatically?
You can look at a paid product. That type of technology is not being developed as open source and will probably cost a bit. Some of the major test tools get closer to this, but fully automatic generation is something I have not heard of.
If this were the case, the roles of QA Engineer and especially Automation Engineer would not be as important, and those jobs would drop off pretty quickly. I would imagine that if such a tool were out there, it would be breaking news to the entire industry and known worldwide.
If you go down the artificial intelligence path, this is possible in theory and concept; however, artificial intelligence development efforts usually cost more than the app being developed that needs the testing, so... that's not going to happen.
The best thing to do at this point is to separate as much of the maintenance as possible into a single section, so that you limit the maintenance headache when modifying and keep a core that stays the same. I usually keep the control manipulation generic and let the workflow, the specific maps and the data change. That allows it to function against any website... but you still have to write/update the tests and maintain the maps.
I think Growing Test Cases Automatically is more what you're asking about. To be more specific, I'll introduce the basics, and if you're interested you can take a closer look at Evolutionary Testing.
Usually there is a standard set of constraints we meet, like changing functionality of the system under test (SUT), a limited timeframe, lack of appropriate test tools, and the list goes on… Yet there is another type of challenge which arises as technological solutions progress further: the increase of system complexity.
While the typical constraints are solvable through different technical and management approaches, in the case of system complexity we are facing the limit of our capability to define a straightforward analytical method for assessing and validating system behavior. Complex systems consist of multiple, often heterogeneous components which, when working together, amplify each other's statistical and behavioral deviations, resulting in a system which acts in ways that were not part of its initial design. To make matters worse, through the same mechanism complex systems also become more sensitive to their environment.
Options for testing complex systems
How can we test a system which behaves differently each time we run a test scenario? How can we reproduce a problem which costs days and millions to recover from, but happens only from time to time under conditions which are known just approximately?
One possible solution which I want to focus on is to embrace our lack of knowledge and work with whatever we have by using evolutionary testing. In this context, evolutionary testing can be viewed as a variant of black-box testing, because we are feeding input into and evaluating output from a SUT without focusing on its internal structure. The fine line here is that we organize this process of automatic test case generation and execution on a massive scale, as an iterative optimization process which mimics natural evolution.
Evolutionary testing
Elements:
• Population – the set of test case executions participating in the optimization process
• Generation – the part of the Population involved in a given iteration
• Individual – a single test case execution and its results; an element of the Population
• Genome – the unified definition of all test cases; the model describing the Population
• Genotype – a single test case instance; a model describing an Individual; an instance of the Genome
• Recombination – transformation of one or more Genotypes into a new Genotype
• Mutation – a random change in a Genotype
• Fitness Function – a formalized criterion expressing the suitability of an Individual against the goal of the optimization
How do we create these elements?
• Definition of the experiment goal (selection criteria) – sets the direction of the optimization process and is related to the behavior of the SUT. It involves certain characteristics of the SUT's state or environment during the performed test case experiments. Examples:
o “SUT should complete the test case execution with an error code”
o “The test case should drive the SUT through the largest number of branches in SUT’s logical structure”
o “Ambient temperature in the room where SUT is situated should not exceed 40 ºC during test case execution”
o “CPU utilization on the system, where SUT runs should exceed 80% during test case execution”
Any measurable parameter of the SUT or its environment can be used in a goal statement. Knowledge of the relation between the test input and the goal itself is not obligatory. This makes it possible to cover goals derived directly from requirements, rather than from some later requirement derivative like a business, architectural or technical model.
• Definition of the relevant inputs and outputs of the tested system – identification of SUT inputs and outputs, as well as environment parameters, relevant to the experiment goal.
• Formal definition of the experiment genome – encoding the summarized set of test cases into a parameterized model (usually a data structure) expressing the relevant SUT input data, environment parameters and action sequences. This definition also needs to support the two major operations applied to genome instances – recombination and mutation. The mechanism for those two operations can be predefined for the type of data or action present in the genome, or have custom definitions.
• Formal definition of the selection criteria (fitness function) – an evaluation mechanism which takes the SUT output or environment parameters resulting from a test case execution (an Individual) and calculates a number (Fitness) signifying how close this particular Individual is to the experiment goal.
How does the process work?
1. We use the Genome to create a Generation of random Genotypes (test case instances).
2. We execute the test cases (Genotypes), generating results (Individuals).
3. We evaluate each execution result (Individual) against our goal using the Fitness Function.
4. We select only those Individuals from the given Generation whose Fitness is above a given threshold (the top 10 %, above the average, etc.).
5. We use the selected Individuals to produce a new, full Generation by applying Recombination and Mutation.
6. We repeat the process, returning to step 2.
The iteration process usually stops by setting a condition on the evaluated Fitness of a Generation. For example:
• if the top Fitness hasn't changed by more than 0.1% since the last iteration, or
• if the difference between the top and the bottom Fitness in a Generation is less than 0.3%,
then it is probably time to stop.
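As a rough illustration of the loop above, here is a minimal sketch in Python; it is not tied to any particular tool, and the genome layout, the run_test_case stub and the fitness function are hypothetical placeholders you would replace with real SUT execution and goal evaluation.

```python
import random

GENOME_LENGTH = 5       # illustrative: a Genotype is a list of numeric SUT inputs
GENERATION_SIZE = 50
SELECT_TOP = 0.1        # keep the top 10 % of each Generation
MUTATION_RATE = 0.1

def random_genotype():
    return [random.uniform(0.0, 1.0) for _ in range(GENOME_LENGTH)]

def run_test_case(genotype):
    """Stub: execute the SUT with these inputs and return the observed outputs."""
    return sum(genotype)

def fitness(result):
    """Stub Fitness Function: how close this Individual is to the experiment goal."""
    return result

def recombine(a, b):
    cut = random.randrange(1, GENOME_LENGTH)
    return a[:cut] + b[cut:]

def mutate(genotype):
    return [g + random.gauss(0.0, 0.05) if random.random() < MUTATION_RATE else g
            for g in genotype]

# Step 1: a Generation of random Genotypes.
generation = [random_genotype() for _ in range(GENERATION_SIZE)]
previous_best = None
for iteration in range(100):
    # Steps 2-3: execute every Genotype and evaluate it with the Fitness Function.
    individuals = sorted(((g, fitness(run_test_case(g))) for g in generation),
                         key=lambda pair: pair[1], reverse=True)
    best = individuals[0][1]
    # Stopping condition: top Fitness changed by less than 0.1 % since last iteration.
    if previous_best is not None and abs(best - previous_best) <= 0.001 * abs(previous_best):
        break
    previous_best = best
    # Step 4: keep only the top slice of Individuals.
    survivors = [g for g, _ in individuals[:max(1, int(GENERATION_SIZE * SELECT_TOP))]]
    # Step 5: produce a new, full Generation via Recombination and Mutation.
    generation = [mutate(recombine(random.choice(survivors), random.choice(survivors)))
                  for _ in range(GENERATION_SIZE)]

print("best fitness reached:", previous_best)
```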
Upsides and downsides
Upsides:
• We can work with limited knowledge of the SUT and with goal-oriented test definitions
• We use a test case model (Genome) which allows us to mass-produce a large number of test cases (Genotypes) with little effort
• We can “seed” test cases (Genotypes) in the first iteration instead of generating them at random in order to speed up the optimization process.
• We could run test cases in parallel in order to speed up the process
• We could find multiple solutions which meet our test goal
• If the optimization process is convergent, we have a guarantee that each following Generation is a better approximate solution to our test goal. This means that even if we need to stop before we have reached optimal Fitness, we will still have better test cases than the ones we started with.
• We can achieve replay of very complex, hard to reproduce test scenarios which mimic real life and which are far beyond the reach of any other automated or manual testing technique.
Downsides:
• The process of defining the necessary elements for an evolutionary test implementation is non-trivial and requires specific knowledge.
• Implementing such an automation approach is time- and resource-consuming and should be employed only when it is justifiable.
• The convergence of the optimization process depends on the smoothness of the Fitness Function. If its definition results in zones of discontinuity or of small/no gradient, then we can expect slow or no convergence.
Update:
I also recommend looking at Genetic algorithms, and this article about Test data generation can give you approaches and guidelines.
I happen to develop ecFeed, an open-source tool that may assist in test design. It's in the pre-release phase and we are going to add better integration with Selenium, but you can have a look at the current snapshot: https://github.com/testify-no/ecFeed/wiki . The next version should arrive in October and will have major usability improvements. Anyway, I am looking forward to constructive criticism.
In the Microsoft development world there is Visual Studio's Coded UI Test framework. It will record your actions in a web browser and generate test cases to replicate that use case. It won't update test cases with changes to the code, though; you would need to update them manually or re-generate them.
Can someone point out the major differences between LoadRunner and Performance Center? My little research shows that both can be used for load testing and performance monitoring. What additional features are provided by Performance Center? Is VUGen a part of Performance Center?
Let me try to give a little less sales-oriented and more technical slant (without the rant).
Performance Center runs multiple copies of LoadRunner.
Performance Center adds a web app to schedule time on the various LoadRunner machines. Allocating time also implies allocating the number of "vusers" that have been purchased.
For more technical information (including an architecture map and tips on using it), see my http://wilsonmar.com/lr_perf_center.htm
For more sales information, see Joe or an HP Software salesperson.
As indicated by the previous post, LoadRunner is for performance and load testing. Performance Center (which includes LoadRunner) is supposed to be a complete performance management solution.
Performance Center is HORRIFICALLY expensive, and from what I can see a number of the so-called features simply address some of the licensing restrictions in LoadRunner. It also appears that not a lot of organisations use Performance Center, so HP doesn't have much expertise with it.
If you just want performance and load testing without all the hassle, then Borland SilkPerformer is a better option.
It should also be noted that significant license differences exist between Performance Center and the LoadRunner Controller related to remote access. By default Performance Center allows it; the LoadRunner Controller does not. Please do not take my word for it; examine the license text for your copy of LoadRunner. These constraints have been in place for the better part of a decade.
Performance Center is an advanced, web-based performance test management tool; however, it is costlier than LoadRunner in terms of licensing and deployment. In return it makes managing performance testing tasks easier: you can schedule multiple tests through the scheduling window and, according to the scheduled timing, assign different scenarios to run at that point in time.
HP LoadRunner lets you run and schedule a single scenario at any point in time. However, we cannot ignore the good things about LoadRunner: it is easy to manage and schedule a run and monitor the results, whereas in Performance Center it may be difficult for new users to schedule and run a test and collate the results afterwards.
I would suggest that both tools are good in use.
I've recently become quite interested in identifying patterns for software scalability testing. Due to the variable nature of different software solutions, it seems like there are as many good solutions to the problem of scalability testing as there are to designing and implementing software. To me, that means we can probably distill some widely applicable patterns for this type of testing.
For the purposes of eliminating ambiguity, I'll say in advance that I'm using the wikipedia definition of scalability testing.
I'm most interested in answers proposing specific pattern names with thorough descriptions.
All the testing scenarios I am aware of use the same basic structure for the test, which involves generating a number of requests on one or more requesters targeted at the processing agent to be tested. Kurt's answer is an excellent example of this process. Generally you will run the tests to find some thresholds, and also run some alternative configurations (fewer nodes, different hardware, etc...) to build up accurate averaged data.
A requester can be a machine, network card, specific software or thread in software that generates the requests. All it does is generate a request that can be processed in some way.
A processing agent is the software, network card, machine that actually processes the request and returns a result.
However what you do with the results determines the type of test you are doing and they are:
Load/Performance Testing: This is the most common one in use. The results are processed to see how much is handled at various levels or in various configurations. Again, what Kurt is looking for above is an example of this.
Balance Testing: A common practice in scaling is to use a load-balancing agent which directs requests to a processing agent. The setup is the same as for load testing, but the goal is to check the distribution of requests. In some scenarios you need to make sure that an even (or as close to even as is acceptable) balance of requests across processing agents is achieved, and in other scenarios you need to make sure that the processing agent that handled the first request from a specific requester handles all subsequent requests (web farms commonly need this); a small sketch of checking both appears after this list.
Data Safety: With this test the results are collected and the data is compared. What you are looking for here are locking issues (such as a SQL deadlock) which prevent writes, and confirmation that data changes are replicated to the various nodes or repositories you have in use within an acceptable time.
Boundary Testing: This is similar to load testing, except the goal is not processing performance but how the amount of data stored affects performance. For example, if you have a database, how many rows/tables/columns can you have before I/O performance drops below acceptable levels?
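As a small sketch of evaluating balance-test results (the log format, node names and thresholds here are made-up assumptions, not tied to any specific tool):

```python
from collections import Counter, defaultdict

# Hypothetical balance-test log: (requester_id, processing agent that served it).
results = [
    ("r1", "node-a"), ("r2", "node-b"), ("r1", "node-a"),
    ("r3", "node-c"), ("r2", "node-b"), ("r3", "node-c"),
]

# Evenness: how far each processing agent's share deviates from a perfect split.
per_node = Counter(node for _, node in results)
ideal = len(results) / len(per_node)
max_deviation = max(abs(count - ideal) / ideal for count in per_node.values())
print(f"max deviation from an even distribution: {max_deviation:.1%}")

# Stickiness: every requester should have been served by exactly one agent.
per_requester = defaultdict(set)
for requester, node in results:
    per_requester[requester].add(node)
print("session affinity held:", all(len(nodes) == 1 for nodes in per_requester.values()))
```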
I would also recommend The Art of Capacity Planning as an excellent book on the subject.
I can add one more type of testing to Robert's list: soak testing. You pick a suitably heavy test load, and then run it for an extended period of time - if your performance tests usually last for an hour, run it overnight, all day, or all week. You monitor both correctness and performance. The idea is to detect any kind of problem which builds up slowly over time: things like memory leaks, packratting, occasional deadlocks, indices needing rebuilding, etc.
This is a different kind of scalability, but it's important. When your system leaves the development shop and goes live, it doesn't just get bigger 'horizontally', by adding more load and more resources, but in the time dimension too: it's going to be running non-stop on the production machines for weeks, months or years, which it hasn't done in development.