Automating LoadRunner execution

I have a requirement to automate LoadRunner execution with some pre-checks: the automation should detect a new application build, and LoadRunner/Performance Center should then start the load test.

See the HP documentation on Jenkins and the web services automation framework.
See the command-line options for mdrv and/or wlrun.
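As a rough sketch of the command-line route, a wrapper script can launch a Controller scenario and fail the build on a bad exit code. The Wlrun flags below (-Run, -TestPath, -ResultName, -InvokeAnalysis) are documented Controller options, but the install and scenario paths are invented, so adjust for your environment:

    # Sketch: launch a LoadRunner Controller scenario from a wrapper script.
    # All paths are invented; check the Wlrun flags against your version's docs.
    import subprocess

    result = subprocess.run(
        [
            r"C:\Program Files (x86)\HP\LoadRunner\bin\Wlrun.exe",
            "-Run",                                # start the scenario immediately
            "-TestPath", r"C:\perf\scenario.lrs",  # scenario to execute
            "-ResultName", r"C:\perf\results\build_1234",
            "-InvokeAnalysis",                     # run Analysis when done
        ],
        check=False,
    )
    print("Wlrun exited with", result.returncode)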
Ask the difficult questions. What is the end goal of automating the execution? How will developers be "controlled" so they do not introduce structural changes to the client/server conversation that would require business-process script updates for each build? How will you add actionable intelligence to the analysis of the tests?
You may be better off coupling small performance unit tests to the dev stage, to ensure that code is performant at the unit and component-assembly stages where the cost to fix is very small (a minimal sketch of such a check follows). Combine this with performance checks at the functional stage (see developer tools and rules related to performance), with tracking response times for a single user on all business processes, and then with a daily/nightly execution of a business-level performance test.
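As a hedged illustration of such a dev-stage check (the function and the 50 ms budget are invented), a unit-level performance test can simply assert a timing budget so CI fails on a regression:

    # Sketch: a tiny performance "unit check" runnable by pytest in CI.
    import time

    def critical_lookup(n):
        # stand-in for the real code that has a performance budget
        return sorted(range(n))[-1]

    def test_critical_lookup_stays_under_budget():
        start = time.perf_counter()
        critical_lookup(100_000)
        elapsed_ms = (time.perf_counter() - start) * 1000
        assert elapsed_ms < 50, f"took {elapsed_ms:.1f} ms, budget is 50 ms"

Wall-clock assertions like this are noisy on shared CI hardware, so keep the budgets generous and treat a failure as a signal to measure properly, not as proof.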
With performance comes ownership. If you have a working test, and a developer changes something structural in their code (removes/adds web-page elements, form fields, dynamic elements), and this information is not communicated forward to test so the changes can be addressed in the test definition before the test fails, then that developer should be called out for breaking the build. The common retort of "Well, it didn't change the screen, so it shouldn't matter..." illustrates naïveté about the OSI layer at which performance-testing tools operate.

After installing LoadRunner, you will have to create a new script by selecting the correct protocol for the communication between the server and the client, and then start recording. The next step would be to run the scripts after parameterizing the required data. Please find more details at http://performancetestworld.com/JSPFiles/LoadRunnerFirstScript.jsp

LoadRunner Enterprise supports integrations with a few popular CI systems that can fulfill your requirement, such as Jenkins and Azure DevOps. Learn more from the online help.
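As a sketch of what such a CI-triggered run could look like: the endpoint paths and payload below are assumptions modelled on the Performance Center / LoadRunner Enterprise REST API, and the host names are invented, so verify everything against the online help for your version:

    # Sketch: detect a new build via the CI server, then start an LRE test run.
    import requests

    LRE = "https://lre.example.com/LoadTest/rest"   # invented host

    session = requests.Session()
    # Authenticate against the LRE authentication point (basic auth).
    session.get(LRE + "/authentication-point/authenticate",
                auth=("user", "password")).raise_for_status()

    # Pre-check: ask the CI server (invented URL) for the latest build id.
    build_id = requests.get("https://ci.example.com/api/latest-build").json()["id"]

    # Start the load test for that build (test id and payload are assumptions).
    resp = session.post(
        LRE + "/domains/MYDOMAIN/projects/MYPROJECT/Runs",
        json={"TestId": 123, "PostRunAction": "Collate And Analyze"},
    )
    resp.raise_for_status()
    print("started run for build", build_id)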

Related

Can you write unit tests if you don't actually write the code?

For instance, I assume this is what SDETs do?
They don't actually write the functional code, but they're able to write integration/unit tests, am I correct?
But can someone learn to read code and then start writing tests?
This is actually a good question. I was in the same place when I was working only on manual testing. Here is how I experienced things when I transitioned to automation. To answer your question: yes, someone can learn to read code and start writing tests against it, but you need to understand the code that you are going to test.
There are different types of testing methodologies that are used when testing an application. These tests are done in layers so that the application is properly tested. Here is what the layering looks like:
1) Unit testing: This part is usually written by developers, because they wrote the code, know what it does, and so find these tests easiest to write. I am an SDET and I have written unit tests; the one opportunity that presented itself was when we were refactoring our code and there was a lot of room to write unit tests. In a unit test you exercise a function in isolation by giving it some values and verifying an expected value (see the sketch after this list). This is not something an SDET usually does, but it is something an SDET should be able to do if given the chance.
2) Integration testing: This part is also usually written by developers, although the definition of integration testing is a bit vague; it means testing multiple modules together, in isolation from the rest of the system. These can be modules in the backend or modules in the frontend, but not both together. The frameworks that help here are the code-level integration-test tools for the technology you are using. For an Angular application, for example, there are deep integration tests that exercise a component's HTML and CSS, and shallow integration tests that just exercise two components' logic together. This can be written by an SDET but is usually written by the developer.
3) API testing (contract-based testing): Pact helps us achieve this; other tools such as REST Assured, Postman and JMeter also help with testing API endpoints. Pact tests the frontend's integration with an API and verifies that contract against the backend. This is very popular with microservices, and it can be written either by the developer or by the SDET.
4) End-to-end testing: This is the sole responsibility of the SDET. It covers testing of user flows derived from user stories, exercising the entire stack together, backend and frontend, which lets SDETs automate how a user would actually use the application. This is also called black-box testing. Different frameworks help achieve it: Selenium, Protractor, Cypress, Detox, etc.
5) Load testing: This is again something that an SDET does, using tools like hey, JMeter, LoadRunner, etc. These tests let the SDET put a heavy load on the system and find its breaking points.
6) Performance testing: Testing the performance of a web page for the end user, based on page load time, SEO optimisation and the weight of the page's elements. Google's Lighthouse achieves this and is an amazing tool; I am not aware of anything comparable, because it gives us a lot of data we can use to improve our website. This is a primary job of an SDET.
7) CI/CD: Continuous integration and continuous deployment require architectural knowledge of the system, so this is something you take on as an SDET3 or a lead QA engineer. On platforms like AWS and GCP, using CI build tools like Jenkins and CircleCI, one can set up a pipeline that runs all of the above tests whenever a branch is merged into master or a pull request is created. Building the pipeline requires knowledge of Docker, Kubernetes, Jenkins and your test frameworks: first you dockerize your tests, then you build the image and push it to a registry in the cloud, then you use the image to create a Kubernetes job that runs every time a change lands in your code.
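To make point 1 concrete, here is a minimal unit-test sketch in Python (pytest style); the function and values are invented for illustration:

    # A pure function and two unit tests that exercise it in isolation.
    import pytest

    def apply_discount(price, percent):
        """Return price reduced by percent; reject invalid percentages."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    def test_apply_discount_happy_path():
        assert apply_discount(100.0, 25) == 75.0

    def test_apply_discount_rejects_bad_percent():
        with pytest.raises(ValueError):
            apply_discount(100.0, 150)

Run with pytest; each test feeds the function known inputs and verifies one expected outcome, which is exactly the "in isolation" idea above.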
These are the levels of work that an SDET does. It takes time and hard work to understand all the testing frameworks and how everything fits together. An SDET should have knowledge of servers, HTTP, frontend, backend, browsers, caching, pipeline management and test orchestration.
Yes, sure. You can write unit tests to increase the test coverage of the codebase. That is highly qualified software-test-engineering work, since you need to be aware of what is going on in the code. This skill is definitely worth having!
I advise you to take a look at so-called "mutation coverage", which is a better metric than simple unit-test coverage. Mutation testing changes logical operators in different parts of the codebase (creating so-called "mutants") and then runs the unit tests to find out how many of them fail, which reveals their effectiveness: if the results with the injected mutants are the same as without them, the unit tests are of low quality and will not catch newly introduced issues.
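A hand-made illustration of the idea (real tools such as mutmut for Python or PIT for Java generate the mutants automatically):

    # One "mutant": the >= operator is flipped to > in a copy of the function.
    def is_adult(age):
        return age >= 18        # original

    def is_adult_mutant(age):
        return age > 18         # mutant

    def weak_test(fn):
        return fn(30) is True   # passes for BOTH -> the mutant survives

    def strong_test(fn):
        return fn(18) is True   # fails for the mutant -> the mutant is killed

    if __name__ == "__main__":
        print("weak test, original/mutant:", weak_test(is_adult), weak_test(is_adult_mutant))
        print("strong test, original/mutant:", strong_test(is_adult), strong_test(is_adult_mutant))

A surviving mutant means the tests never exercised that boundary, which is the gap mutation coverage exposes and plain line coverage hides.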

Testing of interactive web app

I'm writing my thesis and my supervisor told me I have to include a chapter about testing, but I have no idea what it should cover. Can you give me some hints?
Thank you
In my experience the foundations of testing, in application development, rest mainly on a good architecture design.
Your application must be modular, structured in layers (n-tier architecture) and/or in services.
In order to test the whole application you have to test each module with some kind of unit-testing approach. To do this you have to isolate each module and treat it as a black box.
Your unit tests for a specific module should then run against the module's interface, and all of the module's interactions with other application components must be replaced with mock components that simulate the real behaviour. This is important because it lets you control what to expect in terms of successful results or failures.
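A minimal sketch of that isolation in Python, using unittest.mock (all class and method names are invented):

    # The module under test depends on a payment gateway; the mock replaces it.
    from unittest.mock import Mock

    class OrderService:
        def __init__(self, gateway):
            self.gateway = gateway

        def place_order(self, amount):
            return "confirmed" if self.gateway.charge(amount) else "rejected"

    def test_order_confirmed_when_charge_succeeds():
        gateway = Mock()
        gateway.charge.return_value = True    # simulate the success path
        assert OrderService(gateway).place_order(42) == "confirmed"
        gateway.charge.assert_called_once_with(42)

    def test_order_rejected_when_charge_fails():
        gateway = Mock()
        gateway.charge.return_value = False   # simulate a failure, no real gateway
        assert OrderService(gateway).place_order(42) == "rejected"

Because the mock controls what the dependency returns, both the success and the failure branch of the module can be verified deterministically.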
After unit testing each component you can go on to design integration tests, which verify the interactions between modules.
Then there are the functional tests, which check that the application's user experience and use cases meet the client's requirements.
As a final step, the application testing must be configured as a step in your DevOps routines, such as continuous integration, in order to keep your development workflow clean and protected from regressions and bugs.
Although application testing is a HUGE topic, I hope this helps you find more information for your thesis. Good luck!

Usage of STAF/STAX in Automation

I have been exploring STAX/STAF for the past week.
It is used for test-automation execution and is somewhat similar to Hudson.
I would like to know which types of tests it can be used for, i.e. functional tests, load tests, etc. Functional automation tests basically depend on their framework, i.e. how they run and how they report pass or fail status through the framework. How can we integrate such tests with a test-automation framework like STAF?
I've been using STAF/STAX for over 4 years.
PROs:
Open Source
Cross-platform
Concurrent execution
Extensible (i.e. you can write your own services)
Decent support from IBM through the STAF website
CONs:
Sometimes buggy
Difficult to diagnose problems
Programming STAX scripts is awkward and ugly (i.e. scripting via XML tags and embedded Jython)
I've found that STAF/STAX is useful for systems test. It enables you, for example, to launch a server on one system and a client on another, then test their interaction. It's also helpful if you need to test cross-platform, or for multiple language bindings. I also like the fact that it can be used both in large, networked systems, as well as on an individual's desktop.
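For flavour, here is a sketch of that "launch something on another system" use via STAF's Python binding. I believe PySTAF exposes a STAFHandle with a submit() call roughly like this, but treat the exact names and the PROCESS request string as assumptions to check against the STAF documentation (the host and command are invented):

    # Sketch: ask a remote STAF endpoint to start a process and return its output.
    from PySTAF import STAFHandle

    handle = STAFHandle("demo-launcher")       # register with the local STAFProc
    result = handle.submit(
        "server1.example.com",                 # remote machine running STAF
        "PROCESS",                             # standard STAF service
        "START SHELL COMMAND myserver WAIT RETURNSTDOUT",
    )
    if result.rc != 0:
        print("STAF error, RC =", result.rc, result.result)
    else:
        print("remote output:", result.result)
    handle.unregister()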
On the other hand, I would probably avoid using it for unit testing, or tests that are relatively simple and can be run on a single system. I'd probably use a language specific unit framework for that.
STAF is not comparable to Hudson.
When I look at something like Hudson/Jenkins or Buildbot, I see a GUI with an emphasis on scheduling, and on viewing what's going on, what was done, and how it went.
STAF, on the other hand, is more like the plumbing for a QA framework over a distributed environment. It helps with launching processes, collecting logs, locking resources, etc.

What testing advice will you give a beginner for testing websites

I'm just starting out working on test scripts. I'm going to be given a web application created in .NET for testing. I have no idea what kind of testing such applications need.
My suggestion is that you should have a healthy mix of automated and manual testing.
AUTOMATED TESTING
Unit Testing
Use NUnit to test your classes, functions and the interactions between them.
http://www.nunit.org/index.php
Automated Functional Testing
If possible, you should automate a lot of the functional testing. Some frameworks have functional testing built into them; otherwise you have to use a separate tool. If you are developing web sites/applications you might want to look at Selenium.
http://www.peterkrantz.com/2005/selenium-for-aspnet/
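For a taste of what that looks like (the linked article targets ASP.NET, but the idea is the same with the modern Python bindings; the URL and element names are invented):

    # Sketch: drive a login page end-to-end with Selenium WebDriver.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()                # needs a local ChromeDriver
    try:
        driver.get("https://app.example.com/login")
        driver.find_element(By.NAME, "username").send_keys("testuser")
        driver.find_element(By.NAME, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()
        assert "Dashboard" in driver.title, "login did not reach the dashboard"
    finally:
        driver.quit()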
Continuous Integration
Use CI to make sure all your automated tests run every time someone in your team makes a commit to the project.
http://martinfowler.com/articles/continuousIntegration.html
MANUAL TESTING
As much as I love automated testing, it is, IMHO, not a substitute for manual testing. The main reason is that an automated test can only do what it is told and only verify what it has been told to treat as pass/fail. A human can use their intelligence to find faults and raise questions that come up while testing something else.
Exploratory Testing
ET is a very low-cost and effective way to find defects in a project. It takes advantage of the intelligence of a human being and teaches the testers/developers more about the project than any other testing technique I know of. Doing an ET session aimed at every feature deployed in the test environment is not only an effective way to find problems fast, but also a good way to learn, and fun!
http://www.satisfice.com/articles/et-article.pdf
There are a lot of small things that you may need to check (assuming you're doing manual testing):
Check for the exact location/alignment of items
Check whether all hyperlinks are working as expected
Check by clicking a button (submitting the form) multiple times
Check for security aspects (google for XSS, or cross-site scripting; a rough probe is sketched after this list)
Check for fonts etc. (if they're different from standards)
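For the security bullet, here is a very rough reflected-XSS probe (the URL and parameter are invented, and a clean result proves very little; real security testing goes much deeper):

    # Sketch: submit a script payload and check it is not echoed back verbatim.
    import requests

    payload = "<script>alert(1)</script>"
    resp = requests.get("https://app.example.com/search", params={"q": payload})

    if payload in resp.text:
        print("FAIL: payload reflected unescaped (possible XSS)")
    else:
        print("OK: payload not reflected verbatim")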
Hope this helps.
A web application should go through the tests below:
1) Functionality testing
2) Usability testing
3) Interface testing
4) Compatibility testing
5) Performance testing
6) Security testing
For a complete guide, read this article.

How integration tests are performed on your company/job/project?

I want to improve integration tests methods where I work and I would like to know how this process happens in other places.
Things like:
- When test-plan writing begins
- The proportion between testers, developers and the stuff to be tested (entire applications or modifications)
- What kinds of methods are used for integration testing.
Currently I test web apps, and test plans are managed with TestLink. Bugs found are reported in Bugzilla. I am trying to automate tests with Selenium RC, but it takes some time to write the plans and the code to execute in Selenium. And time is something that I don't have, because I am testing 3 or more applications.
Most of my problems are caused by differences between the test environment and the production environment. But tests also take too long to begin: if someone finishes a modification today, it will take about 3 weeks for me to start testing it, and the test-process queue keeps growing.
It would be really good if anyone could suggest something that would improve the testing process (like more people testing, etc.). But mostly, I would like to hear how the testing process works in other places.
Thanks.
For us, integration testing is generally performed by the developer before a commit: just a simple surface test to see that nothing obvious is broken.
Then we deploy the code from trunk to a development server connected to a test database that is a complete copy of the production database, and have the users responsible for the new functionality do acceptance tests and further integration tests on that server.
We have a concept of "super user" to organize this. Super users are responsible for educating other users in their area of expertise and answering helpdesk questions related to the usage of the system. The super users are also the people who are involved in feature requests and requirement discussions for all features related to their work.
So when a new feature is developed, the super user is the one who first validates the design suggestion and then performs the final stages of testing before deployment.
This setup is good because it ensures that domain experts are the ones who validate the system's functionality, and it removes some responsibilities from the IT department.
The bad thing is that they are usually not very technical or good testers. As users, they tend to see the system for what it is rather than what it could be. The fact that they also hold their ordinary full-time positions in the organization also means that they are a very limited resource in terms of testing.
I'll assume you mean integration testing as in checking that the parts of the application work together (for example, getting the database and the website to work together after the DBA and the web developer, respectively, say they're done). I'll use an example from my current project.
I code-generate several configuration files so I can observe the application with certain modules on/off, namely error reporting, authentication, debug-mode compilation, and with/without SSL. Development environments are likely to have "friendly error pages" turned off, no authentication, no SSL, etc.
I also use a build script to create a copy of the application for each variant of the config file.
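A sketch of that generation step (the option names and file layout are invented):

    # Generate one config file per on/off combination of the toggled modules.
    import itertools, json, pathlib

    options = {
        "error_reporting": [True, False],
        "authentication":  [True, False],
        "ssl":             [True, False],
    }

    out = pathlib.Path("build/configs")
    out.mkdir(parents=True, exist_ok=True)

    for combo in itertools.product(*options.values()):
        cfg = dict(zip(options, combo))
        name = "_".join(f"{k}-{'on' if v else 'off'}" for k, v in cfg.items())
        (out / f"web.{name}.json").write_text(json.dumps(cfg, indent=2))
        # the build script then copies the application once per generated file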
It is helpful to pedantically reproduce the characteristics of production in staging and development as much as you can; use virtual machines if you lack the hardware.
I also wrote into the production code base a few pages that test the sort of things that break when code moves from one machine to another (does the DB connection work, do emails send, is the temp folder writable?) and made that page the home page for the server operator.
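A sketch of those checks (sqlite3 stands in for the real database driver, and the mail host is invented):

    # Smoke checks for "does this deployment actually work on this machine?"
    import smtplib, sqlite3, tempfile

    def check_temp_writable():
        with tempfile.NamedTemporaryFile() as f:   # raises if temp isn't writable
            f.write(b"ok")

    def check_database():
        conn = sqlite3.connect("app.db")           # stand-in for the real DB
        conn.execute("SELECT 1")
        conn.close()

    def check_smtp():
        with smtplib.SMTP("mail.example.com", 25, timeout=5) as smtp:
            smtp.noop()                            # cheap "are you alive?" probe

    for name, check in [("temp folder", check_temp_writable),
                        ("database", check_database),
                        ("smtp", check_smtp)]:
        try:
            check()
            print(name, "OK")
        except Exception as exc:
            print(name, "FAILED:", exc)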
The key is automating as much as you can. Frequent integration testing catches issues earlier.
From check in to packaging code for deployment, it takes me 8 minutes of automated work and 1/2 hour of manual clicking for smoke tests.