I am new to LoadRunner, using LoadRunner 9.1.
I can create scripts and scenarios, but I am not able to analyze the results (graphs).
Is there any way to analyze the graphs?
How can we determine the impact of load on an application by using the graphs?
Please help.
"how can we determine impact of load on application by using graph?"
What you ask is simple, but the answer not so. To answer you would take volumes, with large depedencies upoin your application architecture.
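That said, the most common starting point is to put load and response time side by side and see how one moves with the other. Here is a rough, tool-agnostic sketch in Python; the CSV layout and column names are invented for illustration, not LoadRunner's actual export format:

```python
# Hypothetical sketch: assumes the scenario results were exported to a CSV
# with columns timestamp, running_vusers, avg_response_time_s.
# These names are assumptions, not LoadRunner's real export schema.
import pandas as pd

df = pd.read_csv("scenario_results.csv")

# Average response time observed at each load level.
by_load = df.groupby("running_vusers")["avg_response_time_s"].mean()
print(by_load)

# A crude single-number summary: how strongly response time tracks load.
print("correlation:", df["running_vusers"].corr(df["avg_response_time_s"]))
```

If response time stays flat as vusers climb, the application is absorbing the load; if it climbs along with the load, that climb is the impact you are looking for.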
As you are new to LoadRunner, may we ask:
* Have you been through training?
* Are you involved in an internship program for your first several projects?
James Pulley
Moderator
-SQAForums WinRunner, LoadRunner
-YahooGroups LoadRunner, Advanced LoadRunner
-GoogleGroups lr-loadrunner
-LinkedIn LoadRunner (owner), LoadRunnerByTheHour (Owner)
Mercury Alum (1996-2000)
CTO, Newcoe Performance Engineering | LoadRunnerByTheHour.com
I was able to develop a couple of algorithms for my recommendation system that I want to apply to an e-commerce website.
My goal is to perform a live A/B test to check which system performs better; I would not rely only on offline metrics.
Does Google Optimize support this type of test? I have been investigating, but so far I have had no luck finding any documentation about it.
I would appreciate any additional insights or tips on how to apply the A/B test to this type of problem.
I have been doing all my development in Python.
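For context, here is a minimal, tool-agnostic sketch of the kind of split and significance check I have in mind (all names and numbers are illustrative; this is not Google Optimize's API):

```python
# Toy server-side A/B split plus a two-proportion z-test.
# Everything here is illustrative; it is independent of any testing platform.
import hashlib
import math

def assign_variant(user_id: str) -> str:
    """Deterministically bucket a user into 'A' (current recommender) or 'B' (new one)."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference in conversion rates between the two arms."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

print(assign_variant("user-123"))
# Made-up counts: 5,000 users per arm, 4.0% vs 4.6% conversion.
# Compare |z| against 1.96 for significance at the 5% level.
print(two_proportion_z(200, 5000, 230, 5000))
```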
I have Essbase as the BI solution (for predictive analytics and data mining) in my current workplace. It's a really clunky tool, hard to configure and slow to use. We're looking at alternatives. Any pointers as to where I can start?
Is Microsoft Analysis Services an option I can look at? SAS, or any others?
Essbase's focus and strength are in the information management space, not in predictive analytics and data mining.
The top players (and the expensive ones) in this space are SAS (with the Enterprise Miner and Enterprise Guide combination) and IBM with SPSS.
Microsoft SSAS (Analysis Services) is a lot less expensive (it's included with some SQL Server versions) and has good data mining capabilities, but it is more limited in the OR (operations research) and econometrics/statistics space.
Also, you could use R, an open-source alternative that is growing in popularity and capability over time; for example, some strong BI players (SAP, MicroStrategy, Tableau, etc.) are developing R integration for predictive analytics and data mining.
Check www.kpionline.com, a cloud-based product built on Artus.
It has many prebuilt dashboards, scenarios, and functions for doing analysis.
Another tool you could check is MicroStrategy; it has many functions for analysis.
Can someone point out the major differences between LoadRunner and Performance Center? The little research I have done shows that both can be used for load testing and performance monitoring. What additional features does Performance Center provide? Is VuGen a part of Performance Center?
Let me try to give a less sales-oriented and more technical slant (without the rant).
Performance Center runs multiple copies of LoadRunner.
Performance Center adds a web app to schedule time on the various LoadRunner machines. Allocating time also implies allocating the number of "vusers" that have been purchased.
For more technical information (including an architecture map and tips on using it), see my page at http://wilsonmar.com/lr_perf_center.htm
For sales information, see Joe or an HP Software salesperson.
As indicated by the previous post, LoadRunner is for performance and load testing. Performance Center (which includes LoadRunner) is supposed to be a complete performance management solution.
Performance Center is HORRIFICALLY expensive, and from what I can see a number of the so-called features simply address some of the licensing restrictions in LoadRunner. It also appears that not a lot of organisations use Performance Center, so HP doesn't have much expertise with it.
If you just want performance and load testing without all the hassles, then Borland SilkPerformer is a better option.
It should also be noted that significant license differences exist between Performance Center and the LoadRunner Controller related to remote access. By default Performance Center allows it; the LoadRunner Controller does not. Please do not take my word for it; examine the license text for your copy of LoadRunner. These constraints have been in place for the better part of a decade.
Well, Performance Center is an advanced, web-based performance test management tool; however, it is costlier than LoadRunner in terms of licensing and deployment. In return, it provides ease of managing performance testing tasks: you can schedule multiple tests through the scheduling window, and according to the scheduler timing you can assign different scenarios to run at that point in time.
HP LoadRunner provides the facility to run and schedule a single scenario at any point in time. However, we cannot ignore the good things in LoadRunner, i.e. it is easy to manage and schedule a run and monitor the results, whereas in Performance Center it may be difficult for new users to schedule and run a test and collate the results afterwards.
I would suggest that both tools are good in their own right.
Does anyone have any suggestions on which model-based testing tools to use? Is Spec Explorer/Spec# worth its weight in tester training?
What I have traditionally done is create a Visio model where I call out the states and the associated variables, outputs, and expected results for each state. Then, in a completely disconnected way, I data-drive my test scripts with those variables based on that model. But they are not connected. I want a way to create a model and associate the variables in a business-friendly way, which will then build the data parameters for the scripts.
I can't be the first person to need this. Is there a tool out there that will do basically that, short of developing it myself?
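To make it concrete, here is a rough Python sketch of the "connected" workflow I am after: the states, variables, and expected results live in one model, and the data parameters for the scripts are generated from it. All model content below is invented for the example:

```python
# One in-code model of states, inputs, and expected results, from which the
# data rows that drive the test scripts are generated. Content is made up.
import csv

MODEL = {
    "LoggedOut": {"inputs": {"username": "alice", "password": "secret"},
                  "action": "login", "next": "LoggedIn", "expect": "dashboard shown"},
    "LoggedIn":  {"inputs": {"item_id": "42"},
                  "action": "add_to_cart", "next": "CartUpdated", "expect": "cart count = 1"},
}

with open("test_parameters.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["state", "action", "inputs", "next_state", "expected_result"])
    for state, t in MODEL.items():
        inputs = ";".join(f"{k}={v}" for k, v in t["inputs"].items())
        writer.writerow([state, t["action"], inputs, t["next"], t["expect"]])
```

Changing the model then regenerates the parameter file, so the scripts and the model can no longer drift apart.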
You might find the following answer to a similar question helpful:
http://testing.stackexchange.com/questions/92/how-to-get-started-with-model-based-testing
In it, I mention:
UML Pad http://web.tiscali.it/ggbhome/umlpad/umlpad.htm
A list of free UML Tools: http://en.wikipedia.org/wiki/Category:Free_UML_tools
Our pairwise and combinatorial test case generator (which generates tests for you automatically based on a model you create, even if you don't create a UML model): http://hexawise.com
Incidentally, as explained in the answer I link to above, I focus my energies (research, tool development, passion, etc.) on the second part of your question: generating efficient and effective sets of tests that maximize coverage in a minimum number of test cases.
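To make the pairwise idea concrete, here is a toy greedy all-pairs sketch in Python. It only illustrates the technique, and is not Hexawise's actual algorithm; the parameters are invented:

```python
# Greedy all-pairs selection: repeatedly pick the candidate test that covers
# the most parameter-value pairs not yet covered by the suite.
from itertools import product, combinations

params = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os": ["Windows", "macOS"],
    "payment": ["card", "paypal"],
}
names = list(params)

def pairs(test):
    """All (parameter, value) pairs exercised together by one test."""
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

all_tests = [dict(zip(names, combo)) for combo in product(*params.values())]
uncovered = set().union(*(pairs(t) for t in all_tests))

suite = []
while uncovered:
    best = max(all_tests, key=lambda t: len(pairs(t) & uncovered))
    suite.append(best)
    uncovered -= pairs(best)

print(f"{len(suite)} tests cover all pairs (full factorial would be {len(all_tests)})")
for t in suite:
    print(t)
```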
Justin (Founder of Hexawise)
I think an updated version of the "Spec Explorer for Visual Studio" power tool is supposed to be released soon; it's much easier to ramp up on than the current version, but it still takes some time to learn.
If you want to start smaller, NModel (also from Microsoft) is a good place to start.
Check out TestOptimal. It offers full-cycle model-based testing with built-in data-driven testing and combinatorial testing right within the model. It has graphical modeling and debugging: you can play the model, and it graphically animates the model execution. You can link states/transitions to requirements. Models can be repurposed for load testing with no changes. It can even create fully automated MBT for web applications without any coding/scripting. Check out this short slide presentation: http://TestOptimal.com/tutorials/Overview.htm
You should try the "MaTeLo" tool from All4Tec: www.all4tec.net
"MaTeLo is a test case generator for black-box functional and system testing. Following the Model-Based Testing approach, MaTeLo uses Markov chains to model the test. This statistical approach allows product validation in a systematic way. Efficiency is achieved through a reduction in the human resources needed, increased model reuse, and a more relevant test strategy (due to the reliability target). MaTeLo is independent and user-friendly, and lets validation activities move from test scripting to real test engineering and focus on the real added value of testing: the test plans."
You can ask for an evaluation licence and try it yourself.
You can find some examples here: http://www.all4tec.net/wiki/index.php?title=Tutorials
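To give a feel for what "Markov chains for modeling the test" means, here is a toy usage-model walk in Python: states with weighted transitions, sampled into test sequences. The model below is invented; MaTeLo itself works from much richer models:

```python
# Toy Markov-chain usage model: each state lists (next_state, probability).
# Sampling walks through the chain yields test sequences weighted toward
# the paths real users take most often.
import random

USAGE_MODEL = {
    "Start":    [("Browse", 0.7), ("Search", 0.3)],
    "Browse":   [("Search", 0.4), ("Checkout", 0.3), ("End", 0.3)],
    "Search":   [("Browse", 0.5), ("Checkout", 0.2), ("End", 0.3)],
    "Checkout": [("End", 1.0)],
}

def generate_test(model, start="Start", end="End"):
    """Random walk from start to end; the visited states form one test case."""
    state, path = start, [start]
    while state != end:
        nexts, weights = zip(*model[state])
        state = random.choices(nexts, weights=weights)[0]
        path.append(state)
    return path

for _ in range(3):
    print(" -> ".join(generate_test(USAGE_MODEL)))
```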
A colleague of mine made this tool, http://mbt.tigris.org/, and it has been used in large-scale testing environments for years. It's open source and all.
Update:
Here is a short whitepaper: http://www.prolore.se/filer/whitepaper/MBT-Agile.pdf
This tool works great for MBT together with yEd, a free modeling application.
I can tell you that the 2010 version of Spec Explorer, which requires the Professional version of Visual Studio, is a great tool, assuming you already have Visual Studio. The older version of Spec Explorer was good, but the limitation was that if you ended up modeling a system that was non-finite, you were out of luck.
The new version has improved techniques for looking at 'slices' of the model to the point where you have finite states. Once you have the finite states, you can generate the test cases.
The great thing is that as you change the model and re-slice your model, it's straightforward to re-generate tests and re-run them. This certainly beats the manual process any day.
I can't compare this tool to other toolsets, but the integration with Visual Studio is invaluable. If you don't use Visual Studio, you may have limited success.
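Outside any particular tool, the underlying idea is simple: once the model has been sliced down to a finite state machine, test cases fall out as paths through it. A toy Python sketch, with an invented machine:

```python
# Enumerate bounded paths through a finite state machine; each action
# sequence is one generated test case. The machine here is made up.
from collections import deque

FSM = {
    "Init":   {"open": "Opened"},
    "Opened": {"write": "Dirty", "close": "Init"},
    "Dirty":  {"save": "Opened", "close": "Init"},
}

def test_paths(fsm, start="Init", max_len=4):
    """Breadth-first enumeration of action sequences up to max_len steps."""
    cases, queue = [], deque([(start, [])])
    while queue:
        state, actions = queue.popleft()
        if actions:
            cases.append(actions)
        if len(actions) < max_len:
            for action, nxt in fsm.get(state, {}).items():
                queue.append((nxt, actions + [action]))
    return cases

for case in test_paths(FSM):
    print(" -> ".join(case))
```

Re-slicing the model just means changing the bound or the machine and regenerating, which is exactly why regenerating and re-running beats maintaining the cases by hand.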
I've been reading a bit about the Personal Software Process (PSP) recently, but it looks to be a bit heavy. Does anybody have real-world experience using it?
Are there any lightweight alternatives?
The Personal Software Process is a personal improvement process. The full-blown PSP is quite heavy and there are several forms, templates, and documents associated with it. However, a key point is that you are supposed to tailor the PSP to your specific needs.
Typically, when you are learning about the PSP (especially if you are learning it in a course), you will use the full PSP with all of its forms. However, as Watts S. Humphrey says in "PSP: A Self-Improvement Process for Software Engineers", it's important to "use a process that both works for you and produces the desired results". Even for an individual, multiple projects will probably require variations on the process in order to achieve the results you want.
In the book I mentioned above, "PSP: A Self-Improvement Process for Software Engineers", the steps you should follow when defining your own process are:
Determine needs and priorities
Define objectives, goals, and quality criteria
Characterize the current process
Characterize the target process
Establish a strategy to develop the process
Validate the process
Enhance the process
If you are familiar with several process models, it should be fairly easy to take pieces from all of them and create a process or workflow that works on your particular project. If you want more advice, I would suggest picking up the book. There's an entire chapter dedicated to extending and modifying the PSP as well as creating your own process.
The Personal Software Process itself is a subset of the Capability Maturity Model (CMM) processes. There are no lightweight alternatives available as of now.