What test artifacts are needed for automated testing?

I want to know about artifacts like test suites and test cases.
Which ones are needed for automated testing?

You can read about test suites here, and about test cases here.
Test suites and test cases are related, and both are needed for automated testing.
I hope that after reading those two links you will understand them well.

Related

Regarding regression suite and smoke test suite implementation in the NUnit framework: C# Selenium

We have been using the NUnit framework for our test automation project.
Programming language: C#
Automation IDE: Visual Studio, with the Selenium libraries
Currently we are running all tests in a single namespace and class file.
We have a requirement to divide the test cases as below:
Implement a test suite concept, like a smoke suite and a regression suite.
Divide the test cases by functionality and keep them in the regression suite.
For example: Smoke suite: all general test cases.
Regression suite: Functionality1, Functionality2, ... Functionality n test cases, like we see in HP ALM or Microsoft Test Manager.
For instance, under the regression suite: login test cases, book-ticket test cases, ticket-cancellation test cases, and so on.
Could you please have a look and let me know the attributes and implementation ideas in the NUnit framework?
Regards,
Khaja Shaik
Very general question, so a very general answer. :-)
There are a number of approaches. I will list three...
Divide your tests into separate assemblies. Run each assembly separately when you need it... like SmokeTests.dll, RegressionSuite.dll. I prefer to use this method to divide unit tests from functional or integration tests. It's particularly useful if different people are responsible for each, like programmers for unit tests and testers for final functional tests.
Use categories to flag fixtures as belonging to each group, like [Category("SmokeTest")], etc. (see the sketch after this list). When you run the tests, you need to specify a filter to select the category; if you don't, you'll run all of them. This has the advantage that the same fixture can be run in more than one category.
Use namespaces to separate your tests, like namespace SmokeTests, etc. Personally, I don't like this, because namespaces are more useful for dividing tests that need common setup, so I prefer to reserve the namespace for dividing tests according to the nature of what they are testing. However, I've seen some people do it this way.
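To make the category approach concrete, here is a minimal sketch of what categorised fixtures might look like (the namespace, class, and test names are invented for illustration):

using NUnit.Framework;

namespace TicketApp.Tests
{
    // Runs as part of the smoke suite.
    [TestFixture, Category("SmokeTest")]
    public class LoginSmokeTests
    {
        [Test]
        public void LoginPageLoads()
        {
            // Drive Selenium here; a trivial pass keeps the sketch runnable.
            Assert.Pass();
        }
    }

    // Runs as part of the regression suite, grouped by functionality.
    [TestFixture, Category("RegressionSuite"), Category("Login")]
    public class LoginRegressionTests
    {
        [Test]
        public void InvalidPasswordIsRejected()
        {
            // Drive Selenium here and assert on the result.
            Assert.Pass();
        }
    }
}

With the NUnit 3 console runner you can then select a suite with a filter, for example: nunit3-console YourTests.dll --where "cat == SmokeTest".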

Are there any advantages to using TestNG with Cucumber?

When creating automated tests with Selenium, I thought one would use either Cucumber with Selenium, TestNG with Selenium, or just JUnit with Selenium (although using only JUnit is not very popular). I have recently found out that you can use Cucumber with TestNG, but I don't see what the gain of doing this is. If someone is using both of them together, can you tell me why?
EDIT:
Using TestNG over JUnit has many advantages. My question is whether it still makes a difference if I use Cucumber.
P.S. I am not trying to start a tool-vs-tool war.
The answer you seem to be looking for is what Cucumber, as a tool, adds to existing test frameworks.
The answer:
Cucumber adds an extra level of communication between you (the development team) and the management team. You are able to link test cases to scenarios that are now understandable by the business, which means that everybody is on the same page. You can even use the BDD tool to start talking about behaviours of the feature:
What things should be included?
Do we need more information?
Let's add that to the file, so that we can test that use case later.
Any new functionality added to the feature later?
Need to understand quickly which section has gone wrong, without having to decipher code written by the intern who was in for two months over the summer?
Cucumber helps with all of this, and that's just scraping the surface.
TestNG, JUnit, Selenium? You imagine it, you can do it. With Cucumber as your helpful neighbourhood BDD tool, you can pull together your test suite and bolt an abstraction layer on top. The business will now be able to look at the test results. Where tests have failed, they will be able to describe exactly what section has gone wrong to other members of management, without having to go too far into technical details.
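As a small illustration of that linkage (sketched here with SpecFlow, Cucumber's C# sibling, to keep one language across this page; the step texts and class name are invented), a business-readable line like When I book a ticket to "London" maps directly onto a step definition:

using TechTalk.SpecFlow;

[Binding]
public class BookingSteps
{
    // Each attribute's pattern is the exact business-readable text from the .feature file.
    [Given(@"I am logged in")]
    public void GivenIAmLoggedIn()
    {
        // Drive Selenium (or call the app directly) here.
    }

    [When(@"I book a ticket to ""(.*)""")]
    public void WhenIBookATicketTo(string destination)
    {
        // The quoted value from the scenario arrives as a parameter.
    }

    [Then(@"I should see a booking confirmation")]
    public void ThenIShouldSeeABookingConfirmation()
    {
        // Assert on the result here.
    }
}

When a step fails, the report names the failing business step, which is exactly what lets non-technical readers see which behaviour broke.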
If you're wondering whether to use JUnit or TestNG for this, it is mostly a matter of choice. If you have an existing suite, bolting Cucumber on top of whatever test tool you currently use is the best option.
Also, make sure you are using the right language for your team. For instance:
Are you introducing a team of manual testers to developing test automation?
Maybe you should use Ruby or JavaScript, as they are easier to pick up as a first language.
Are you a development team, using cucumber to add an abstraction layer to your unit tests?
Use the language that you are using for development, with the unit test tool that you are using.
Are you developers in test, using cucumber for automating tests for your website?
Use the language that you and your team are most comfortable with, taking the language being used for development over any others that tie with this (based on a team vote).
I think it depends on what your other tests are (unit tests, for example) and how you run them.
If your current tests already use TestNG, then it will be easier to run the Cucumber tests with the TestNG engine.
Conversely, if you already have JUnit tests, it may be easier to use JUnit for the Cucumber run (but TestNG is able to run JUnit tests, so you can use TestNG in that case too).
And if you have no other tests, the choice of test runner will come down to your own taste.
Yes, I understand your question. I had the same doubt, along these lines:
We use Selenium for automated testing. Since it doesn't provide proper reports, we add TestNG to it (and also for its other features). But now we have Cucumber, which gives proper reports. So why do we need TestNG?
I realized that, though we get proper reports with Cucumber, TestNG provides many other features that Cucumber does not, like setting priorities, method dependencies, timeouts, grouping, etc.
Though Cucumber provides a tag feature, it does not offer everything TestNG does. Maybe when Cucumber incorporates all those features, we can eliminate TestNG.

SpecFlow scenarios with NUnit test parameters

When using NUnit, you can pass in parameters to your tests using TestCaseSourceAttribute.
[Test, TestCaseSource(typeof(WebDriverFactory), "Drivers")]
What would be the best approach to doing the same for tests generated by SpecFlow? Those tests do not use the 'Test' attribute; they use 'Given', 'And', 'Then', etc.
I'm trying to pass in different web drivers (Selenium) so I don't have to change them manually to test across different browsers.
SpecFlow generates the test fixtures automatically, so you cannot use [TestCaseSource]. You can try a custom test class generator to drive automated web UI tests with Selenium and SpecFlow.
However, you should ask yourself whether executing SpecFlow scenarios in different browsers really benefits your project, as the execution time of your acceptance tests will double or triple. In my experience, cross-browser testing identifies UI differences and only very rarely functional ones (to be honest, I've never encountered any). On our team, testers perform it manually.
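If you still want cross-browser runs, one common workaround (a sketch, not an official SpecFlow mechanism; the TEST_BROWSER variable and class name are my own invention) is to choose the driver at runtime in a SpecFlow hook, then let the CI server run the whole suite once per browser:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;
using OpenQA.Selenium.Firefox;
using TechTalk.SpecFlow;

[Binding]
public class WebDriverHooks
{
    public static IWebDriver Driver { get; private set; }

    [BeforeScenario]
    public void CreateDriver()
    {
        // The CI job sets TEST_BROWSER; default to Chrome for local runs.
        var browser = Environment.GetEnvironmentVariable("TEST_BROWSER") ?? "chrome";
        Driver = browser.ToLowerInvariant() == "firefox"
            ? (IWebDriver)new FirefoxDriver()
            : new ChromeDriver();
    }

    [AfterScenario]
    public void DisposeDriver()
    {
        Driver?.Quit();
    }
}

Each browser then costs one extra full run, which is exactly the execution-time trade-off described above.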

Testing phases: where is the smoke testing?

So I was looking at a book and I don't really understand its classification:
Unit tests
Integration tests
Smoke and Sanity tests
System tests
Acceptance tests
I thought the smoke test would come right after the integration tests? I also thought that "sanity" means a quick check of the application when a new part is deployed.
So the question: is this order correct, or should the smoke and sanity tests come in a different order? If so, why?
Smoke tests should be performed before sanity tests - that is correct. The purpose of smoke tests is just to check quickly whether the SUT is runnable and whether its interfaces and main components respond to the user's actions. There is no deep insight into the app during these tests.
Sanity tests can be a subset of regression tests. Their main goal is to quickly test the application's logic against the requirements provided. They should be run after each major change in the way some part of the system works, and if the results are negative there is simply no point in going through more detailed tests. They should tell us whether the tested parts of the system match the requirements and specification.
And now the thing is that sanity tests can sit at the unit test level as well as the system test level. You can run a few unit tests specifically designed to check only basic functionality, and these can then be called sanity tests. The same applies to the system test level. So there is no strict definition of where sanity tests belong. I believe you should not take any placement for granted, because every case is different, and the context of the tests and the application should be the main consideration.
A smoke test is a quick-and-dirty test of the most important features, usually done by someone other than the developer after unit and integration testing, to see if there's a point in doing more specific/rigorous testing.
Basic test of key features.
Only test happy path.
Major features.
For example, if you're smoke testing an API:
Check responses are correct.
Test login with valid details.
Test main endpoints.
Check the correct response is returned.
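As a hedged sketch of what such an API smoke test could look like in C# with NUnit (the base URL, endpoints, and credentials are placeholders, not a real service):

using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture, Category("SmokeTest")]
public class ApiSmokeTests
{
    // Placeholder base address; point this at your own service.
    private const string BaseUrl = "https://example.com/api";
    private static readonly HttpClient Client = new HttpClient();

    [Test]
    public async Task HealthEndpointResponds()
    {
        // Main endpoint is up and returns the correct response code.
        var response = await Client.GetAsync(BaseUrl + "/health");
        Assert.That(response.StatusCode, Is.EqualTo(HttpStatusCode.OK));
    }

    [Test]
    public async Task LoginSucceedsWithValidDetails()
    {
        // Happy path only: valid credentials should log in.
        var body = new StringContent("{\"user\":\"demo\",\"password\":\"demo\"}",
                                     Encoding.UTF8, "application/json");
        var response = await Client.PostAsync(BaseUrl + "/login", body);
        Assert.That(response.IsSuccessStatusCode, "Expected the happy-path login to succeed.");
    }
}

Note how every test stays on the happy path: a smoke suite asserts that the main doors open, not what is behind them.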
Smoke testing is the first and foremost testing done by QA personnel, once unit testing is completed by the developer.
The main agenda of smoke testing is to confirm that the application can at least handle the positive flow. Once this is done, QA gradually proceeds with the following:
1. Functional testing
2. Link and download options
3. UI
4. System testing
5. Regression testing against previous builds
Happy testing :)

Is automated testing still referred to as smoke testing?

If not, is smoke testing still used?
It's sort of a Venn diagram. Some automated tests are smoke tests, and some smoke tests are automated (insofar as they are run by a computer program). The term "smoke test" is (if I recall correctly) a play on the saying "Where there's smoke, there's usually fire." It's a set of preliminary tests that the program must pass to be considered for 'real' (viz. fire) testing.
A smoke test can be manual, insofar as a tester has a list of steps to follow that aren't automated with a computer program.
Smoke testing is still used -- in places I've worked, it's usually automated.
Automated testing can do smoke testing (shallow, wide), but it can also do other testing like regression testing and unit testing. Basically, automated testing can cover any repeatable test.
Yes, smoke testing is still being used. I've generally seen two scenarios. The first is to determine whether the software is ready for more in-depth testing. The second, and IMO more common, is to skimp on fully testing functionality that should not have been affected by the changes in the new build.
I don't think smoke tests are usually automated. The smoke test, in my experience, is really just a basic sanity test to make sure that subsequent tests can actually be run and that nothing basic got broken, like startup code or menu entries. This would usually be done manually by a person. I suppose it could be automated, but a new build usually involves the addition of new features, so the automated tests would have to be changed as well, and you'd still have the same problem: you'd need a person to verify that the automated tests were modified to test the new features properly. In contrast, automated tests (like unit tests) represent a regression test suite and are created to test well-established functionality that should not change much from release to release, although of course you would add unit tests to cover new functionality as well.
Probably more so in companies with a hardware background, where the smoke test was taken literally. Few people call them that anymore. It's usually just a small yet broad subset of a larger acceptance or system test suite. These tests are automated and are run automatically against code before it is submitted, or on submission, to source control.
I am not sure we can compare smoke testing and automated testing. Smoke testing is a way to run a set of basic tests on a build, covering all the basic features but not going deep into any of them. The purpose is to determine whether a build can be used for more detailed testing. It is also a set of steps that can be run quickly, even on a developer build, to determine whether there are any issues caused by significant or core changes about to go into a build. We consider the smoke test one of our 'test plans', but one that is run on every build.
Automated testing is not specific to smoke tests but can be applied there as well. It is done to automate redundant or repetitive steps that a tester performs over and over, in order to save time; that is the primary purpose of automation. It allows a tester to spend more time on other tests.
Automation can never replace testing by a real brain, nor can everything be automated. It is an activity that supplements the testing process in place, not a replacement for it.
Since the smoke test is potentially run on every build, there is good value in automating it. If a smoke test run manually takes 4 hours, and after automation it takes 1 hour, you have saved 3 man-hours times the number of builds.
There are several tools on the market for test automation - AutoIt and SilkTest, to name a few.
In very simple words: smoke testing can be automated, but automated testing is not always smoke testing.
Yes, smoke testing is a popular way of testing any application/software.
My understanding of "smoke testing" is different than the wikipedia article. I understand smoke testing to be the developer opening the app and testing the basic functionality to verify that the app looks right & is doing the basics. So I always thought it was a manual process, not an automated one.
A test automation suite contains various levels, like smoke tests, acceptance tests, the nightly build, and so on. It is up to the tester to decide which test cases need to run at each level. Each test case is numbered according to the level at which it should run. Say two test cases are automated, numbered 1 and 2 respectively to indicate their levels; if you define the test level as 2 in the configuration file, only the second test case will run and report its result. A smoke test generally has fewer test cases than an acceptance test.
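NUnit has no built-in notion of numbered levels, but as a sketch of how such a convention could be wired up (the TEST_LEVEL variable, level constants, and test names are assumptions of this example, not a standard feature):

using System;
using NUnit.Framework;

public static class TestLevels
{
    public const string Smoke = "1";
    public const string Acceptance = "2";

    // Skip (rather than fail) any test whose level does not match the configured one.
    public static void Require(string level)
    {
        var configured = Environment.GetEnvironmentVariable("TEST_LEVEL");
        if (configured != null)
            Assume.That(level, Is.EqualTo(configured),
                "Skipped: not part of the configured test level.");
    }
}

[TestFixture]
public class BookingTests
{
    [Test]
    public void BookingPageLoads()          // test case 1
    {
        TestLevels.Require(TestLevels.Smoke);
        // ... quick happy-path check ...
    }

    [Test]
    public void BookingHonoursSeatLimits()  // test case 2
    {
        TestLevels.Require(TestLevels.Acceptance);
        // ... deeper acceptance check ...
    }
}

With TEST_LEVEL=2 set, only the second test runs, matching the behaviour described above; in practice, NUnit's [Category] attribute (shown earlier on this page) achieves the same thing more idiomatically.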
A smoke test can be automated, but not all automated tests are smoke tests.