I've been playing with both for a couple of hours.
You use a Coded UI test to record some actions and verify them through assertions.
You use a Web Performance test to record some actions and verify them through validation/extraction rules... basically the same thing... and then you can optionally convert it to code, just like a Coded UI test.
But it seems you can only add a WEB PERFORMANCE TEST to a load test.
Aren't they pretty much the same thing? What am I not understanding? Why not allow a Coded UI test inside a load test?
Coded UI tests are for automated functional testing. These tests will simulate user interaction against the UI, such as button clicks and entering text. Coded UI tests require an interactive desktop environment, because they actually interact with the windows and objects of your application. Coded UI Tests in VS2010 are the equivalent of using something like HP QuickTest Pro or Selenium to drive your automated functional regression tests.
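To make the distinction concrete, here is a minimal Coded UI test sketch; the page URL, control ids, and expected text are hypothetical, and the exact control classes depend on your application:

```csharp
using System;
using Microsoft.VisualStudio.TestTools.UITesting;
using Microsoft.VisualStudio.TestTools.UITesting.HtmlControls;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[CodedUITest]
public class LoginUITest
{
    [TestMethod]
    public void LoginShowsWelcomeMessage()
    {
        // Launch a real browser and navigate to the (hypothetical) login page.
        BrowserWindow browser = BrowserWindow.Launch(new Uri("http://localhost/myapp/login"));

        // Locate the user name box by its HTML id and type into it, like a user would.
        HtmlEdit userName = new HtmlEdit(browser);
        userName.SearchProperties[HtmlControl.PropertyNames.Id] = "userName";
        Keyboard.SendKeys(userName, "testuser");

        // Click the login button.
        HtmlButton login = new HtmlButton(browser);
        login.SearchProperties[HtmlControl.PropertyNames.Id] = "loginButton";
        Mouse.Click(login);

        // Assert against the rendered UI, not the raw HTTP response.
        HtmlDiv welcome = new HtmlDiv(browser);
        welcome.SearchProperties[HtmlControl.PropertyNames.Id] = "welcome";
        Assert.IsTrue(welcome.InnerText.Contains("Welcome"), "Welcome message was not shown.");
    }
}
```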
Load tests record and drive your application at the HTTP level. These tests simulate headless user interaction against your app server by sending HTTP requests directly, without a UI. Load tests typically assume that your application works correctly for 1 user, but aim to see if it functions under a heavy user load. Load tests are headless because simulating thousands of users with an interactive UI is not practical. By being headless, a single load agent machine can simulate hundreds or thousands of users. VS load tests are the equivalent of using HP LoadRunner or JMeter to drive virtual user load.
Functional and performance testing are two distinct types, with different strategies and processes. On a given project, you might have hundreds of automated functional tests (coded ui, for example), but only dozens of automated performance tests. You have so many more functional tests because you are testing your app in many different scenarios relative to your business requirements. Whereas with performance tests, you take your top dozen most commonly used transactions and run them under load.
I think this article adds a lot of value to this discussion, particularly its description of Web Performance Tests:
Web Performance Tests -
Web testing involves much more than GUI testing. Web Performance Tests are used to test the functionality and performance of web pages, web applications, web sites, web services, or any combination of these. They can be created by recording the HTTP requests and events during user interaction with the web application. The recording also captures page redirects, validations, view state information, authentication, and all other activity. Web Performance Tests come in two forms: Simple Web Performance Tests and Coded Web Performance Tests.
Simple Web Performance Tests generate and execute the test exactly as recorded, as a series of valid flows of events. Once the test is started there is no intervention, and execution is not conditional.
Coded Web Performance Tests are more complex but provide a lot of flexibility; they allow conditional execution based on values. Coded web tests can be written manually or generated from a Web Performance Test recording.
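As an illustration of what a coded web performance test looks like, here is a minimal sketch (the URL and the text being validated are hypothetical):

```csharp
using System.Collections.Generic;
using Microsoft.VisualStudio.TestTools.WebTesting;
using Microsoft.VisualStudio.TestTools.WebTesting.Rules;

public class ProductPageWebTest : WebTest
{
    public override IEnumerator<WebTestRequest> GetRequestEnumerator()
    {
        // Issue a plain HTTP request - no browser and no UI are involved.
        WebTestRequest request = new WebTestRequest("http://localhost/myapp/products");

        // Attach a validation rule: the response body must contain the expected text.
        ValidationRuleFindText findText = new ValidationRuleFindText();
        findText.FindText = "Product Catalog";
        findText.IgnoreCase = true;
        request.ValidateResponse += findText.Validate;

        yield return request;

        // Conditional logic, loops, and extraction rules can be added here,
        // which is what makes the coded form more flexible than the recorded one.
    }
}
```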
Coded UI tests are new in 2010. They validate against the actual UI of the application (placement in the DOM, visibility, etc.), where the other does not; the Web Performance Test validates the HTTP/HTTPS traffic exchanged with the server.
This talks about functional UI testing and shows the use of the Coded UI test.
http://channel9.msdn.com/shows/10-4/10-4-Episode-18-Functional-UI-Testing/
Good news: as of VS2012 you can add a Coded UI test to a Load Test.
http://msdn.microsoft.com/en-us/library/ff468125.aspx
Related
I would like to get input on how to run automated smoke tests based on what developers check in. Currently, when developers commit, a Jenkins job builds the code and the smoke tests run against the app. But the smoke suite contains more than 50 tests. How would you design your automation so that when there is a check-in, it only runs against the app features that could be affected by that check-in? Here is our flow: a dev checks in to the Git repo, the Jenkins job gets triggered through a webhook and builds the app, and once the build is done a downstream job runs the smoke tests. I would like to limit the smoke tests to only the features affected by the new check-in.
You can estimate which areas of your product might be affected, but you cannot be 100% sure, so don't rely on that. You don't want regressions with an unknown source; they are extremely hard to triage, and one of the best things about continuous integration is that each change (or small batch of changes) is tested separately, so at any moment you know what is wrong with your app without spending much time on investigation. 10 minutes for a set of 50 tests is actually very good. If the only problem with running the tests is the time they take, why not run them in parallel instead of reducing the test suite? I would prefer to speed up the test execution phase rather than shrink the suite.
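The question doesn't say which test framework the smoke tests use, but as one illustration, if they were MSTest v2 tests then parallel execution is a one-line, assembly-level change:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Run up to four test methods at a time instead of shrinking the smoke suite.
// (Assumes MSTest v2 and that the smoke tests are independent and safe to parallelise.)
[assembly: Parallelize(Workers = 4, Scope = ExecutionScope.MethodLevel)]
```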
I am trying to run load tests on my existing Selenium web tests and my API (unit) tests. The tests run in Visual Studio using the load test editor, but it does not collect all the metrics, like response time and requests per second. Are there any additional parameters I need to add to collect all the metrics?
Load testing: how many Selenium clients are you running? One or two will not generate much load. That is the first issue to think about; you need load generators, and Selenium is a poor way to go about this (unless you are running a headless grid, but even then).
So the target server is what, Windows Server 2012? Google "Create a Data Collector Set to Monitor Performance Counters".
Data collection, and analysis of that data, is the second issue to think about. People pay a lot of money for tools like LoadRunner because they provide load generators plus sophisticated data collection for servers, databases, WANs and LANs, and analysis reports to pinpoint bottlenecks. Doing this manually is hard and not easily repeatable. Most folks who start down your path eventually abandon it. Look into the various load/performance tools to see what works best for you and what you can afford.
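If you do carry on down the manual route, the same Windows performance counters that a Data Collector Set records can also be sampled from code; here is a rough sketch (the category and counter names are the standard Windows/ASP.NET ones and may differ on your server):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class CounterSampler
{
    static void Main()
    {
        // Sample CPU usage and ASP.NET throughput once a second for a minute.
        var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total");
        var requests = new PerformanceCounter("ASP.NET Applications", "Requests/Sec", "__Total__");

        for (int i = 0; i < 60; i++)
        {
            Thread.Sleep(1000);
            Console.WriteLine("{0:T}  CPU: {1,6:F1}%  Req/s: {2,6:F1}",
                DateTime.Now, cpu.NextValue(), requests.NextValue());
        }
    }
}
```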
I have two systems:
1. a REST web application which returns data in XML
2. a Windows service which daily gets data from the first web app and syncs it with its own database.
Question: how can I do integration testing for these applications (check whether the data is synchronised correctly)? Is it possible to automate such testing?
If I were you, I would send a request from system 2 and validate the database data at system 2. This forms a whole end-to-end (E2E) journey, interacting with as many of the systems involved as possible. You may also need to consider different scenarios/paths so that as much of the interaction as possible is covered.
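As a rough sketch of how that journey could be automated (the URL, connection string, and table/element names below are made up for illustration):

```csharp
using System.Data.SqlClient;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using System.Xml.Linq;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class SyncIntegrationTests
{
    [TestMethod]
    public async Task DailySyncCopiesAllCustomersToSystem2Database()
    {
        // 1. Ask the REST application (system 1) what it currently exposes.
        using (var client = new HttpClient())
        {
            string xml = await client.GetStringAsync("http://system1.example/api/customers");
            int expected = XDocument.Parse(xml).Descendants("customer").Count();

            // 2. Trigger the sync in system 2 here (e.g. run the service's sync job once).

            // 3. Validate system 2's database against what system 1 returned.
            using (var conn = new SqlConnection(@"Server=.;Database=System2Db;Integrated Security=true"))
            {
                conn.Open();
                var cmd = new SqlCommand("SELECT COUNT(*) FROM Customers", conn);
                int actual = (int)cmd.ExecuteScalar();
                Assert.AreEqual(expected, actual, "System 2's database is out of sync with system 1.");
            }
        }
    }
}
```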
I want to know how I can test my website's (web application's) performance, in terms of speed and response time, when using MS SQL Server and ASP.NET.
In particular, I want to know how the speed and performance change when my user count grows to 1,000,000 and more.
Thank you.
There are a number of tools to run load tests against web sites; I like JMeter (http://jmeter.apache.org/) - open source, free, easy to use - but there are lots of others - google "web performance testing" and take your pick.
All those tools allow you to specify a number of concurrent users, wait times between page requests, and then specify one or more user journeys through the site. They will give you a report showing response times as the number of users changes.
You can install the load testing applications on any machine; most have the concept of a "controller" and "load agents". The controller orchestrates the load test, while the load agents execute the tests. Generating the equivalent load of 1 million visitors is likely to require significant horsepower - you may need to use one of the cloud providers of load testing solutions. Again, Google is your friend here.
I want to analyze the performance (and hence the weak points) of a SharePoint site by doing a stress test. What needs to be done is to call some methods exposed via a web service that do the following things inside the SharePoint site:
- create a new group
- add a content item to the group
- add an attachment to the content item
- delete the content item
- delete the previously created group
What is required is a simulation of 4500 users trying to do these operations concurrently (at the same time, or more realistically within a short timespan, for example within 5 seconds).
We also want to record the execution time of each operation (each web method, for example "create new group"). I thought I could simulate these operations via a console application using threads and stopwatches. Has anyone encountered a similar problem and can point me to existing solutions, or give hints on doing it "the right way"? For example, how can I ensure that all threads start at the same instant? Thanks in advance.
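A minimal sketch of that console-app idea: a Barrier releases all the threads at the same instant, and a Stopwatch times each operation (the web-service call itself is just a placeholder here):

```csharp
using System;
using System.Collections.Concurrent;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

class StressRunner
{
    static void Main()
    {
        const int users = 4500;                       // concurrent virtual users
        var barrier = new Barrier(users);             // releases all participants at once
        var timings = new ConcurrentBag<long>();      // per-operation elapsed milliseconds

        var tasks = new Task[users];
        for (int i = 0; i < users; i++)
        {
            tasks[i] = Task.Factory.StartNew(() =>
            {
                barrier.SignalAndWait();              // every thread starts "now"

                var sw = Stopwatch.StartNew();
                CreateGroup();                        // placeholder for the web-service call
                sw.Stop();

                timings.Add(sw.ElapsedMilliseconds);
            }, TaskCreationOptions.LongRunning);      // one dedicated thread per virtual user
        }

        Task.WaitAll(tasks);
        Console.WriteLine("Samples collected: {0}", timings.Count);
    }

    static void CreateGroup()
    {
        // Call the SharePoint "create new group" web method here.
        Thread.Sleep(100);                            // stand-in for the real call
    }
}
```

Bear in mind that a single machine will struggle to drive 4500 truly concurrent threads, which is one reason the answer below recommends a proper load test rig instead.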
I have been a user of Visual Studio Load Testing for 2 years, and I find it very powerful and easy to use. You can run integration tests, navigate through a web site, simulate database load... in fact, everything. Because it is an MS application, it is also fully compatible with all MS products like SharePoint: it's easier to call a WCF service from a unit test than from another technology (how would you test a netTcpBinding otherwise?). You can also use the Visual Studio Profiler to instrument your code (and see which line of code is expensive, or even the ADO.NET interactions). You can also easily extend the load testing through its many extensibility points.
One important thing is that VS load testing is "intrusive". It will not only collect response times, request lengths, ... but also all performance counters, database queries, ... All these metrics are saved in a dedicated database (such as SQL Express) for reporting, and there is an add-in for Excel.
Just one important note (which applies to all load testing solutions!):
You can run load tests from a developer machine or even a single dedicated machine, but you usually can't generate enough traffic to really see how the application responds (one machine cannot simulate 500 concurrent users because of limited CPU/memory/network). In order to simulate a lot of users, you set up what is known as a load test rig.
A test rig is made up of a test controller machine and one or more test agent machines. The controller manages and coordinates the agent machines, and the agents generate load against the application. The test controller is also responsible for collecting performance monitor data from the servers under test and, optionally, from the test rig machines.
Here are some links :
MSDN
Dave's introduction
I'm not saying Visual Studio Load Testing is not a great tool, but there are other tools, like Tsung and Eventlet (and many others), that can support many thousands of concurrent users.
Good luck.