I know it is possible to separate Integration and Unit test code coverage in Sonar, but is it possible to retrieve separate pass/fail results for Integration and Unit tests? Thanks
No, unfortunately, this is currently not possible.
Feel free to start a discussion on this topic on Sonar user mailing-list.
For instance, I assume this is what SDETs do?
They don't actually write the functional code, but they're able to write integration/unit tests, am I correct?
But can someone learn to read code and then start writing tests?
This is actually a good question. I was in the same place when I was working only on manual testing. Here is how I experienced things when I transitioned to automation. To answer your question: yes, someone can read code and start writing tests for it, but you need to understand the code that you are going to test.
There are different types of testing methodologies used when testing an application. These tests are done in layers so that the application is properly tested. Here is what the layering looks like:
1) Unit testing: This is usually written by developers, because they wrote the code, know what it does, and find these tests easiest to write. I am an SDET and I have written unit tests; the only opportunity that presented itself was when we were refactoring our code and there was a lot of room to write unit tests. In a unit test you exercise a function in isolation by giving it some values and verifying an expected value (see the short pytest sketch after this list). This is not something an SDET normally does, but should be able to do if given the chance.
2) Integration testing: This is also usually written by developers, but the definition of integration testing is a bit vague; it means testing multiple modules together, in isolation from the rest of the system. These can be modules in the backend or modules in the frontend, but not both together. The frameworks that help here are the code-level integration test tools for the technology you are using. For an Angular application, for example, there are deep integration tests that exercise a component's HTML and CSS, and shallow integration tests that only test two components' logic together. This can be written by an SDET but is usually written by the developer.
3) API testing (contract-based testing): Pact helps us achieve this, and there are other tools like REST Assured, Postman and JMeter that help test API endpoints. Pact tests the integration with APIs on the frontend and verifies that integration against the backend, which makes it very popular with microservices. This can be written either by the developer or by the SDET.
4) End-to-end testing: This is the sole responsibility of the SDET. It covers testing of user flows based on user stories, exercising the entire stack, backend and frontend, together. It allows SDETs to automate how a user would actually use the application and is also called black-box testing. Frameworks that help achieve this include Selenium, Protractor, Cypress and Detox.
5) Load testing: This is again something an SDET does, using tools like hey, JMeter and LoadRunner. These tests let the SDET put a heavy load on the system and find its breaking points.
6) Performance testing: Testing the performance of a web page for the end user, based on page load time, SEO optimisation and the weight of the elements on the page. Google's Lighthouse is an excellent tool for this; I am not aware of anything comparable, because it gives us a lot of data we can use to improve the website. This is a primary job of an SDET.
7) CI/CD: Continuous Integration and Continuous Deployment requires architectural knowledge of the system, and it is something you typically take on as an SDET 3 or lead QA engineer. On platforms like AWS and GCP, using CI tools like Jenkins and CircleCI, you can set up a pipeline that runs all the above tests whenever a branch is merged into master or a pull request is created. Creating the pipeline requires knowledge of Docker, Kubernetes, Jenkins and your test frameworks: first you dockerize your tests, then you build the image and push it to a registry in the cloud, then you use the image to create a Kubernetes job that runs every time your code changes.
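Here is the kind of isolated unit test referred to in item 1, as a minimal pytest sketch; the discount() function and its values are made up for illustration, not taken from any real codebase.

```python
# Minimal unit-test sketch (hypothetical function, illustrative values only).
# Run with: pytest test_discount.py

def discount(price, percent):
    """Return the price after applying a percentage discount."""
    return price - (price * percent / 100)


def test_discount_applies_percentage():
    # The function is exercised in isolation: known input, expected output.
    assert discount(200, 10) == 180


def test_zero_percent_returns_original_price():
    assert discount(99.99, 0) == 99.99
```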
These are the levels of work that an SDET does. It takes time and hard work to understand all the testing frameworks and how everything fits together. An SDET should have knowledge of the server, HTTP protocols, frontend, backend, browsers, caching, pipeline management and test orchestration.
Yes, sure. You can write unit tests to increase the test coverage of the codebase. That is very qualified software test engineering work, since you need to be aware of what is going on in the code. This skill is definitely great to have!
I advise you to take a look at so-called "mutation coverage", which is a better metric than simple unit test coverage. Mutation testing changes logical operators in different parts of the codebase (creating so-called "mutants") and then runs the unit tests to find out how many of them fail, which shows how effective they are: if the results are the same after the mutants are injected, the unit tests are of low quality and will not catch new issues introduced into the codebase.
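As a rough illustration of the idea (the function, the hand-written mutant and the test below are hypothetical; real mutation tools such as mutmut for Python or PIT for Java generate and run the mutants automatically):

```python
# Hand-made illustration of mutation testing: a "mutant" flips a relational
# operator, and a good unit test suite should fail against it ("kill" it).

def is_adult(age):
    return age >= 18          # original code


def is_adult_mutant(age):
    return age > 18           # mutant: ">=" changed to ">"


def test_is_adult_boundary():
    # This test kills the mutant: it passes against the original function
    # but would fail if the mutated version were used instead.
    assert is_adult(18) is True
```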
I am working on research into test case prioritization, and I need some sample programs with sets of test cases. I found some programs here.
But I need some more. If anyone has resources like that, please share them with me. It would be a big help!
Thanks in advance
Incremental Scenario Testing Tool
Description
ISTT supports test teams in managing complexity by adaptively selecting and prioritizing test cases according to past test results. ISTT guides testers through a test session with high-level test scenarios generated on the fly.
Features
ISTT is an innovative hybrid of scripted and exploratory testing that automates test planning for flexible, adaptable and efficient test sessions.
Find as many bugs as possible by generating prioritized scenarios based on the analysis of previous test runs.
Involving developers allows focusing test resources on areas of the software which are naturally error-prone.
Install it on your own server or try it for free on the demo server at https://istt.symphonyteleca.com/ ! (request trial access through the contact mailing list, which is linked from the login page)
For downloads or more information, please visit http://sourceforge.net/projects/istt/
TestCube
Description:
Jataka TestCube is a web-based test case management tool designed to integrate and track enterprise-wide test cases.
For downloads or more information, please visit http://www.jatakasource.org/testcube/#1
My company is at the beginning of building a test automation architecture.
There are different types of apps: windows desktop, web, mobile.
What would you experienced folks recommend starting from? I mean resources.
Should we build the whole system up front, or construct something basic and enhance it in the future?
Thanks a lot!
Start small. If you don't know what you need, build the smallest thing you can that adds value.
It's very likely that the first thing you build will not be what you need, and that you will need to scrap it and do something else.
Finally, don't try to test EVERYTHING. This is what I see fail over and over: most automated test suites die under their own weight. Someone decides that EVERYTHING must be tested, so you build 10,000 tests around every CSS change. This then costs a fortune to update when the requirements change. And then you get the requirement to make the bar blue instead of red...
One of two things happens: either the tests get ignored and the suite dies, or the business compromises on what it wants because the tests cost so much to update. In the first case the investment in tests was a complete waste; the second case is even more dangerous, because it implies that the test suite is actually impeding progress, not assisting it.
Automate the most important tests. Find the most important workflows. The analysis of what to test should take more time than writing the tests themselves.
Finally, embrace the Pyramid of Tests.
Just as Rob Conklin said,
Start small
Identify the most important tests
Build your test automation architecture around these tests
Ensure your architecture allows for reusability and manageability
Build easily understandable reports and error logs
Add Test Data Management to your architecture
Once you have all of this in place, you can enhance the architecture later as you add new tests
In addition to what was already mentioned:
Make sure you have fast feedback from your automated tests. Ideally they should be executed after each commit to master branch.
Identify in which areas of your system test automation brings the biggest value.
Start with integration tests and leave end-to-end tests for later
Try to keep every automated test very small, checking only one function
Prefer low-level test interfaces such as API or CLI over the GUI
I'm curious about which path you chose. We run automated UI tests for mobile, desktop applications, and the web.
Always start small, but building a framework is what I recommend as the first step when facing this problem.
The approach we took is:
create mono repo
installed Selenium WebDriver for web
installed WinAppDriver for desktop
installed Appium for mobile
created an API for each system
DesktopApi
WebApi
MobileApi
These APIs contain business functions that we share across teams.
This builds our framework, so we can now write tests that go across the different systems (a rough sketch follows the list below), such as:
create a user on a mobile device
enter a case for them in our desktop application
log in on the web as the user and check their balance
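As a rough sketch of what such a shared business-function layer might look like (in Python with Selenium; the WebApi class, element IDs and URL below are hypothetical, not the poster's actual framework):

```python
# Hypothetical "business API" layer wrapping a UI driver, so that tests read as
# business steps rather than raw Selenium calls.
from selenium import webdriver
from selenium.webdriver.common.by import By


class WebApi:
    """Business-level actions for the web application (illustrative only)."""

    def __init__(self, base_url):
        self.base_url = base_url
        self.driver = webdriver.Chrome()

    def login_as(self, username, password):
        self.driver.get(f"{self.base_url}/login")
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

    def read_balance(self):
        return self.driver.find_element(By.ID, "balance").text

    def quit(self):
        self.driver.quit()


def test_user_can_see_balance_on_web():
    # A DesktopApi or MobileApi class could expose similar business functions
    # on top of WinAppDriver / Appium for the cross-system flow described above.
    web = WebApi("https://example.test")
    try:
        web.login_as("new_user", "secret")
        assert web.read_balance() != ""
    finally:
        web.quit()
```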
Before getting started on the framework, it is always best to learn from others' test automation mistakes.
Start by prioritizing which tests should be automated, such as business-critical features, repetitive tests that must be executed for every build or release (smoke tests, sanity tests, regression tests), data-driven tests, and stress and load testing. If your application supports different operating systems and browsers, it's highly useful to automate tests early that verify stability and proper page rendering.
In the initial stages of building your automation framework, keep the tests simple and gradually include more complex ones. In all cases, the tests should be easy to maintain, and you need to consider how you will debug errors, report on test results, schedule tests, and run tests in bulk.
I don't have a clear idea of the difference between smoke testing and sanity testing. Some books say they are the same, but testers on one project call it smoke testing while testers on another call it sanity testing, so please give me a clear-cut answer to my question.
Sorry, but there is no clear cut. As you explain in your question, there is no consensus on the definitions, or at least on the difference between sanity and smoke.
Now, about smoke tests (or sanity tests!): those are the tests that you can run quickly to get a general idea of how your System Under Test (SUT) behaves. For software testing, this will typically include some kind of installation, setup, playing around with the features, and shutdown. If nothing goes wrong, then you know you can go on with your testing. This provides quick feedback to the team and avoids starting a longer test campaign only to realise that some major features are broken and the SUT is not really usable.
This definition holds for both manual and automated tests. For example, if you use Jenkins (for CI) and Robot Framework (for test automation), you could create two jobs in Jenkins: smoke tests and full tests (using tags, this is straightforward). The smoke test job could last a couple of minutes (or at most 15 minutes, say) and the full test job could take as long as needed. The smoke test job thus gives you quick feedback on the SUT build (if your smoke test job is a child project of the SUT build, of course).
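As an analogous sketch in pytest (a different tool than the Robot Framework tags mentioned above; the tests themselves are placeholders, not a real suite):

```python
# Splitting smoke vs. full runs with pytest markers (illustrative only).
import pytest


@pytest.mark.smoke
def test_homepage_loads():
    # quick check, part of the fast "smoke" Jenkins job
    assert True  # replace with a real assertion against the SUT


def test_full_checkout_flow():
    # longer scenario, only run in the "full tests" Jenkins job
    assert True  # replace with a real end-to-end assertion
```

The smoke job could then run `pytest -m smoke` while the full job simply runs `pytest`; the `smoke` marker would also need to be registered in pytest.ini to avoid warnings.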
Smoke testing is also known as build verification testing.
Smoke testing is the initial testing process exercised to check whether the software under test is ready/stable for further testing.
Sanity testing is a type of testing that checks whether a new software version performs well enough to accept it for a major testing effort.
Think of the analogy of testing a new electronic device. The first thing you do is turn it on to see if it starts smoking. If it does, there's something fundamentally wrong, so no additional testing either can be done or is worth doing.
For a website, the simplest smoke test is to go to the website and see if the http response is 200. If not, there's no point in testing further. A more useful smoke test might be to hit every page once.
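A minimal sketch of that first check, assuming Python with the requests library and a placeholder URL:

```python
# Simplest website smoke check: is the site up and returning HTTP 200?
import requests


def test_homepage_responds_with_200():
    response = requests.get("https://example.com", timeout=10)
    assert response.status_code == 200
```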
Smoke tests should run as fast as possible. The goal is quick feedback so that you can make a decision.
As for the difference between smoke tests and sanity tests... There is no significant difference. What you call them is irrelevant, as long as everyone in your organization has the same basic understanding. What's important is a quick verification that the system under test is running and has no blatantly obvious flaws.
The smoke test is designed to see if the device seems to work at all. This determines whether we can go on with more extensive testing or whether something fundamental is broken.
The sanity tests are designed to cover the most frequent use cases.
Example:
You are testing a cellphone.
Smoke test - Does it start up without crashing or starting to smoke, etc.? Does it seem to work well enough to perform more extensive testing?
Sanity test - Can you place/receive calls and messages, i.e. the most basic and most used features?
These are both done often and should be quick to run through; they are NOT extensive tests.
Smoke Testing is testing the basic and critical features of an application, before going ahead and doing thorough testing of that application.
Note: Only if the smoke testing passes can we carry on with the other stages of testing; otherwise the product is not fit to be tested and should be sent back to the development team.
Sanity testing: There is no clear definition as such, but this is one I picked up from the Internet:
Check the entire application at a basic level, focusing on breadth rather than depth.
I am writing an application that uses third-party libraries to instantiate virtual machines and perform some operations on them.
At first I was writing integration tests for every feature of the application, but then I found that these tests were not really helping, since my environment had to be in a particular state, which made the tests more and more difficult to write. So I decided to write only the unit and acceptance tests.
So, my question: is there a method or a clue for noticing when integration tests should not be used? (Or am I wrong, and they should be written in all cases?)
When you don't plan on actually hooking your application up to anything "real": no real containers, databases, resources or actual services. That is what an integration test is supposed to verify: that everything works properly together.
Integration tests are good for testing a full system that has well-defined inputs and outputs that are unlikely to change. If your expected inputs and outputs change often, maintaining the tests may become a challenge, or, worse, you may decide against improving an interface because of the amount of work required to update the integration tests.
The easy and short rule is: test in integration tests what breaks due to integration, and test the rest in unit tests, in isolation.
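A small sketch of that rule, with hypothetical names: the pricing logic is unit tested in isolation against a fake dependency, and only the wiring to the real external service is left for an integration test.

```python
# Sketch of the rule above (names are hypothetical): unit test the logic in
# isolation with a fake dependency; reserve integration tests for the real wiring.


class PriceService:
    def __init__(self, tax_client):
        self.tax_client = tax_client  # dependency is injected, so it can be faked

    def gross_price(self, net_price, country):
        rate = self.tax_client.tax_rate(country)
        return round(net_price * (1 + rate), 2)


class FakeTaxClient:
    def tax_rate(self, country):
        return 0.20  # fixed rate, no network involved


def test_gross_price_adds_tax():
    # Unit test: the calculation is verified in isolation.
    service = PriceService(FakeTaxClient())
    assert service.gross_price(100.0, "GB") == 120.0


# An integration test would construct PriceService with the real tax client and
# assert only that the end-to-end call succeeds, since that is what can
# "break due to integration".
```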
You can even come to hate integration tests. Writing a unit test for a function that takes only one integer parameter is hard enough; all possible combinations of state (internal and external: time, other systems) and input can make integration testing practically impossible for a decent application.