SpecFlow - How to use data driven tests like NUnit's TestCaseSource property?

I'm a QA who decided to use SpecFlow for my test automation after some consideration. I think it's brilliant, but it's missing one feature I used often with other test runners such as NUnit: something similar to NUnit's TestCaseSource property, which lets you specify a potentially dynamic set of data for tests to be run against at run time.
I often have different data in each environment the tests should run in, so I cannot specify hardcoded values for test parameters. A trivial example is checking that each type of user account is able to log in; in NUnit, the user account credentials can be retrieved with a DB query to populate each test case dynamically:
public static List<User> GetTestData()
{
    // NUnit 3 requires TestCaseSource members to be static.
    return MyDatabase.GetAllUsersInfo().ToList();
}

[Test, TestCaseSource("GetTestData")]
public void CallLoginService(User user)
{
    var response = LoginController.TryLogin(user.UserName, user.Password);
    if (response.Error != null)
    {
        Assert.Fail("Failed to Login: {0}", response.Error);
    }
    Assert.AreEqual("Logged in ok", response.Message, "Login message not as expected");
}
Obviously this is a simple example of the feature, but I think it describes it well enough. I know SpecFlow gives us Scenario Outlines with a table of test input data, but that table is still static, so it doesn't fit the bill.
I've been looking for a while and have not found anything like this in SpecFlow yet. Does anybody know of anything similar to the above which can be used (or planned, if anyone who works on the project reads this)?
Thanks :)

I have no idea if anything like this is planned, but for now the problem is that there is a background code generation step when you edit your feature file via Visual Studio.
When the feature file is saved in Visual Studio it is parsed and converted into a feature.cs file, and that is what is compiled and used for testing.
So your process would become:
edit your data source
export to a feature file (see the sketch after this list)
get SpecFlow's VS plugin to convert it to feature.cs
run msbuild
run the tests via NUnit or similar
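For example, the export step might be a small generator along these lines. This is only a rough sketch: MyDatabase and User are the types from your question, and the scenario text is illustrative.

using System.IO;
using System.Linq;
using System.Text;

public static class LoginFeatureGenerator
{
    // Regenerates the Examples table of a Scenario Outline from the data source,
    // so the .feature file reflects whatever is in the database at generation time.
    public static void WriteFeatureFile(string path)
    {
        var sb = new StringBuilder();
        sb.AppendLine("Feature: Login");
        sb.AppendLine("Scenario Outline: Each account type can log in");
        sb.AppendLine("    When I log in as '<username>' with '<password>'");
        sb.AppendLine("    Then I am logged in successfully");
        sb.AppendLine("Examples:");
        sb.AppendLine("    | username | password |");

        foreach (var user in MyDatabase.GetAllUsersInfo().ToList())
        {
            sb.AppendLine($"    | {user.UserName} | {user.Password} |");
        }

        File.WriteAllText(path, sb.ToString());
    }
}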
I wouldn't do this, though. Instead I'd focus on making my tests better examples. It sounds like you are trying to exhaustively cover every possibility. Don't come up with examples to cover every possible case; instead cover as much logic as possible with fewer tests.


Running only selected tests with dynamic input [duplicate]

I have tried a few approaches to solve my problem but with no success (I do need to improve my Java :)), so I am hoping that I am missing something or that someone can point me in the right direction.
I have multiple microservices that I need to test. I should be able to test all of them at once or only the ones I want. Each service has its own DB and different feature files. Note that these services may not all be up and running.
I can run the tests by manually setting the config for each service. Ideally I would like to pass a variable with the service name on the command line and have the tests start.
In the current setup I use callSingle to run DBInit.feature, which runs SQL scripts to populate my DB. I have also set global variables that are used in the feature files. And this works fine.
Problems start when I add more feature files that test a service that is not running, and when I have to use callSingle for a specified service to populate its DB.
The first idea was to use different envs, but I could need 5 envs to be executed in a single run and with one report. Then I was thinking of implementing a runner for each service, but I am not sure whether these runners run in parallel, and I am not sure how I could populate the DB in this case.
Is it possible to use a custom variable that is passed to the main test class?
public class DemoTestSelected {

    @BeforeClass
    public static void beforeClass() throws Exception {
        TestBase.beforeClass();
    }

    @Test
    public void testSelected() {
        List<String> tags = Arrays.asList("~@ignore");
        List<String> features = Arrays.asList("classpath:demo/cats");
        String karateOutputPath = "target/surefire-reports";
        Results results = Runner.path(features)
                .tags(tags)
                .outputCucumberJson(true)
                .reportDir(karateOutputPath)
                .parallel(5);
        DemoTestParallel.generateReport(karateOutputPath);
        assertTrue(results.getErrorMessages(), results.getFailCount() == 0);
    }
}
For example, could the tags and features be set in the config?
I re-read your question a few times and gave up trying to understand it. But I'll lay down a couple of principles:
you should use tags to decide which features to run / not run. Try to fit everything you need into this model and don't complicate things
for more control, you can set some "system property" on the command line, and before you use the Runner you can write some Java logic along the lines of "if karate.env (or some other system property) is foo, then select tags one, two and three", etc.
yes, the Karate 1.0 series can technically run multiple Runner instances in parallel, but that is left to you and we don't have an example; it would require you to manage threads or a Java Executor manually

How to restrict the test data method call to its respective test method when using the TestCaseSource attribute in NUnit

I am using NUnit for a Selenium C# project in which I have many test methods. For getting data (from Excel) I am using a public static method that returns IEnumerable<TestCaseData>, which I reference at the test method level as a TestCaseSource. I am facing a challenge now: as soon as I start executing one test method, it invokes all the TestCaseSource methods in the project.
Code looks like this:
public static IEnumerable<TestCaseData> BasicSearch()
{
    BaseEntity.TestDataPath = PMTestConstants.PMTestDataFolder
        + ConfigurationManager.AppSettings.Get("Environment").ToString()
        + PMTestConstants.PMTestDataBook;
    return ExcelTestDataHelper.ReadFromExcel(
        BaseEntity.TestDataPath,
        ExcelQueryCreator.GetCommand(PMTestConstants.QueryCommand, PMTestConstants.PMPolicySheet, "999580"));
}

[Test, TestCaseSource("BasicSearch"), Category("Smoke")]
public void SampleCase(Dictionary<string, string> data)
{
    // do something with data
}
Can someone help me restrict each data source method to its respective test method?
Your TestCaseSource is not actually called by the test method when you run it, but as part of test discovery. While it's possible to select a single test to execute, it's not possible to discover tests selectively. NUnit must examine the assembly and find all the tests before it's possible to run any of them.
To make matters worse, if you are running under Visual Studio, the discovery process takes place multiple times, first before the tests are initially displayed and then again each time the tests are run. This is made necessary by the architecture of the VS Test Window, which runs separate processes for the initial discovery and the execution of the tests.
That makes it particularly important to minimize the amount of work done in test discovery, especially when running under Visual Studio. Ideally, you should structure the code so that the variable parameters are recorded during discovery. The actual data access should take place at execution time. This can be done in a OneTimeSetUp method, a SetUp method or at the start of the test itself.
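A minimal sketch of that shape, reusing names from your snippet where possible; LoadRowsKeyedByPolicy is a made-up helper standing in for your ExcelTestDataHelper call, and the Excel read itself is stubbed out here:

using System.Collections.Generic;
using NUnit.Framework;

public class PolicySearchTests
{
    private Dictionary<string, Dictionary<string, string>> _rowsByKey;

    // Discovery only ever sees these lightweight keys; no Excel access happens here.
    public static IEnumerable<TestCaseData> BasicSearchKeys()
    {
        yield return new TestCaseData("999580");
    }

    [OneTimeSetUp]
    public void LoadTestData()
    {
        // The expensive Excel read runs once, at execution time.
        _rowsByKey = LoadRowsKeyedByPolicy();
    }

    [Test, TestCaseSource(nameof(BasicSearchKeys)), Category("Smoke")]
    public void SampleCase(string policyKey)
    {
        Dictionary<string, string> data = _rowsByKey[policyKey];
        // do something with data, as in your original test
        Assert.That(data, Is.Not.Empty);
    }

    private static Dictionary<string, Dictionary<string, string>> LoadRowsKeyedByPolicy()
    {
        // Stand-in for the real Excel read; returns a single hard-coded row for the sketch.
        return new Dictionary<string, Dictionary<string, string>>
        {
            ["999580"] = new Dictionary<string, string> { ["PolicyNumber"] = "999580" }
        };
    }
}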
Finally, I'd say that your instinct is correct: it should be possible to set up a TestCaseSource, which only runs if the test you select is about to be executed. Unfortunately, that's a feature that NUnit doesn't yet have.

Elasticsearch testing(unit/integration) best practices in C# using Nest

I've been searching for a while for how I should test my data access layer, without a lot of success. Let me list my concerns:
Unit tests
This guy (here: Using InMemoryConnection to test ElasticSearch) says that:
"Although asserting the serialized form of a request matches your expectations in the SUT may be sufficient."
Is it really worth asserting the serialized form of requests? Do these kinds of tests have any value? It doesn't seem likely that a function that should not change the serialized request will be changed.
If it is worth it, what is the correct way to assert these requests?
Unit tests once again
Another guy (here: ElasticSearch 2.0 Nest Unit Testing with MOQ) shows a good-looking example:
void Main()
{
    var people = new List<Person>
    {
        new Person { Id = 1 },
        new Person { Id = 2 },
    };

    var mockSearchResponse = new Mock<ISearchResponse<Person>>();
    mockSearchResponse.Setup(x => x.Documents).Returns(people);

    var mockElasticClient = new Mock<IElasticClient>();
    mockElasticClient.Setup(x => x
        .Search(It.IsAny<Func<SearchDescriptor<Person>, ISearchRequest>>()))
        .Returns(mockSearchResponse.Object);

    var result = mockElasticClient.Object.Search<Person>(s => s);

    Assert.AreEqual(2, result.Documents.Count()); // the original LINQPad snippet chained .Dump() here
}

public class Person
{
    public int Id { get; set; }
}
Probably I'm missing something, but I can't see the point of this code snippet. First he mocks an ISearchResponse to always return the people list. Then he mocks an IElasticClient to return that search response for any search request he makes.
Well, it doesn't really surprise me that the assertion is true after that. What is the point of these kinds of tests?
Integration tests
Integration tests make more sense to me for testing a data access layer. So after a little searching I found this package (https://www.nuget.org/packages/elasticsearch-inside/). If I'm not mistaken, it is just an embedded JVM and ES. Is it good practice to use it? Shouldn't I use my already running instance?
If anyone has good experience with testing approaches that I didn't include, I would happily hear about those as well.
Each of the approaches that you have listed may be a reasonable one to take, depending on exactly what it is you are trying to achieve with your tests. You haven't specified this in your question :)
Let's go over the options that you have listed:
1. Asserting the serialized form of the request to Elasticsearch may be a sufficient approach if you build a request to Elasticsearch based on a varying number of inputs. You may have tests that provide different input instances and assert the form of the query that will be sent to Elasticsearch for each. These kinds of tests are fast to execute, but they assume that the query being generated, whose form you are asserting, will return the results that you expect.
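For instance, a rough sketch of option 1 using NEST's InMemoryConnection might look like this (assuming a NEST 6.x/7.x client and reusing the Person class from your snippet; the index name and query are illustrative):

using System;
using System.Text;
using Elasticsearch.Net;
using Nest;
using NUnit.Framework;

public class SerializedRequestTests
{
    [Test]
    public void SearchById_SendsATermQuery()
    {
        var pool = new SingleNodeConnectionPool(new Uri("http://localhost:9200"));
        var settings = new ConnectionSettings(pool, new InMemoryConnection())
            .DefaultIndex("people")
            .DisableDirectStreaming(); // keep the request bytes around for inspection
        var client = new ElasticClient(settings);

        // Nothing is actually sent anywhere; InMemoryConnection short-circuits the call.
        var response = client.Search<Person>(s => s
            .Query(q => q.Term(t => t.Field(f => f.Id).Value(1))));

        var requestJson = Encoding.UTF8.GetString(response.ApiCall.RequestBodyInBytes);
        StringAssert.Contains("\"term\"", requestJson);
    }
}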
2. This is another form of unit test that stubs out the interaction with the Elasticsearch client. The system under test (SUT) in this example is not the client but another component that internally uses the client, so the interaction with the client is controlled through the stub object to return an expected response. The example is contrived in that, in a real test, you wouldn't assert on the results of the client call, as you point out, but rather on the output of the SUT.
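Applied to that snippet, a version where the assertion is on the output of a made-up component that uses the client internally might look roughly like this:

using System;
using System.Collections.Generic;
using System.Linq;
using Moq;
using Nest;
using NUnit.Framework;

// Made-up SUT: a small component that uses IElasticClient internally.
public class PersonSearcher
{
    private readonly IElasticClient _client;

    public PersonSearcher(IElasticClient client) => _client = client;

    public IReadOnlyList<int> FindAllIds() =>
        _client.Search<Person>(s => s.MatchAll())
               .Documents.Select(p => p.Id).ToList();
}

public class PersonSearcherTests
{
    [Test]
    public void FindAllIds_ReturnsTheIdOfEveryDocument()
    {
        var people = new List<Person> { new Person { Id = 1 }, new Person { Id = 2 } };

        var searchResponse = new Mock<ISearchResponse<Person>>();
        searchResponse.Setup(r => r.Documents).Returns(people);

        var client = new Mock<IElasticClient>();
        client.Setup(c => c.Search(It.IsAny<Func<SearchDescriptor<Person>, ISearchRequest>>()))
              .Returns(searchResponse.Object);

        // The assertion is on what the SUT produces, not on the stubbed client call itself.
        var searcher = new PersonSearcher(client.Object);
        CollectionAssert.AreEqual(new[] { 1, 2 }, searcher.FindAllIds());
    }
}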
3. Integration/behavioural tests against a known data set within an Elasticsearch cluster may provide the most value and go beyond points 1 and 2, as they not only incidentally test the generated queries sent to Elasticsearch for a given input, but also test the interaction and produce an expected result. No doubt these types of tests are harder to set up than 1 and 2, but the investment in setup may be outweighed by their benefit for your project.
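As a rough sketch of option 3 against an already running local node (NEST 7.x style API; the index name is arbitrary and Person is again the class from your snippet):

using System;
using Nest;
using NUnit.Framework;

public class PersonIntegrationTests
{
    private const string Index = "people-tests";
    private IElasticClient _client;

    [OneTimeSetUp]
    public void SeedKnownData()
    {
        var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
            .DefaultIndex(Index);
        _client = new ElasticClient(settings);

        _client.Indices.Create(Index);
        _client.IndexMany(new[] { new Person { Id = 1 }, new Person { Id = 2 } });
        _client.Indices.Refresh(Index); // make the seeded documents searchable immediately
    }

    [Test]
    public void MatchAll_ReturnsTheSeededDocuments()
    {
        var response = _client.Search<Person>(s => s.MatchAll());
        Assert.AreEqual(2, response.Documents.Count);
    }

    [OneTimeTearDown]
    public void CleanUp() => _client.Indices.Delete(Index);
}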
So, you need to ask yourself what kinds of tests are sufficient to achieve the level of assurance that you require to assert that your system is doing what you expect it to do; it may be a combination of all three different approaches for different elements of the system.
You may want to check out how the .NET client itself is tested; there are components within the Tests project that spin up an Elasticsearch cluster with different plugins installed, seed it with known generated data and make assertions on the results. The source is open and licensed under the Apache 2.0 license, so feel free to use elements of it within your project :)

How to apply TDD to web API development

I'm designing a web service running on Google App Engine that scrapes a number of websites and presents their data via a RESTful interface. Based on some background reading, I think I'd like to attempt Test Driven Development (TDD) and develop my tests before I write any business code.
My problem is caused by the fact that my list of scraped elements includes timetables and other records that change quite frequently. The limit of my knowledge on TDD is that you write tests that examine the results of code execution and compare these results to a hardcoded result set. Seeing as the data set changes frequently, this method seems impossible. Assuming that this is true, what would be the best approach to test such an API? How would a large-scale web API be tested (Twitter, Google, Netflix etc.)?
You have to choose the type of test:
Unit tests just test the proper operation of your modules (units). You provide input data and test that the code outputs proper results. If there are system-dependent classes, you try to mock them, or in the case of GAE services you use the Google-provided local services. Unit tests can be run locally on your machine or on CI servers. There are two popular unit test libs for Java: JUnit & TestNG.
Integration tests check that various modules (internal & external) work together - they basically check that the APIs between modules are working. They are usually run on real servers and call real external services. They are technology specific and are harder to run.
In your case, I'd go with unit tests and provide sets of different input data which your logic should parse and act upon. Since your flow is pretty simple (load data from a fixed URL, parse it), you could also embed loading of real data into the unit tests (we do this when we parse external sources).
From what you are describing, you could easily find yourself writing integration tests. If your aim is to test the logic for processing the scraped data (e.g. you know that you are going to get a timetable in a specific format and you have logic to process that data), you will need to create a SEAM between your web service logic and your processing logic. Once you have done this, you should be able to mock the data that is returned from the web service call to always return the same table data, and then you can write consistent unit tests against it.
public class ScrapingService : IScrapingService
{
    public string Scrape(string url)
    {
        // real scraping logic goes here
        throw new NotImplementedException();
    }
}

public interface IScrapingService
{
    string Scrape(string url);
}

public class ScrapingProcessor
{
    private IScrapingService _scrapingService;

    // inject the dependency
    public ScrapingProcessor(IScrapingService scrapingService)
    {
        _scrapingService = scrapingService;
    }

    public void Process(string url)
    {
        var scrapedData = _scrapingService.Scrape(url);
        // now process the scrapedData
    }
}
To test, you can now create a FakeScrapingService that implements the IScrapingService interface and return whatever data you like from the Scrape method. There are some very good mocking frameworks out there that make this type of thing easy. My personal favorite is NSubstitute.
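A minimal sketch of that idea with a hand-rolled fake (NSubstitute would work just as well); the timetable HTML is obviously made up, and the types are the ones from the code above:

using NUnit.Framework;

// Hand-rolled fake: always returns the same fixed data and records the url it was asked for.
public class FakeScrapingService : IScrapingService
{
    public string LastRequestedUrl { get; private set; }

    public string Scrape(string url)
    {
        LastRequestedUrl = url;
        return "<table><tr><td>09:00</td><td>Route 42</td></tr></table>";
    }
}

public class ScrapingProcessorTests
{
    [Test]
    public void Process_ScrapesTheGivenUrl()
    {
        var fake = new FakeScrapingService();
        var processor = new ScrapingProcessor(fake);

        processor.Process("http://example.com/timetable");

        // Because the processor always sees the same fixed HTML, any assertions you add
        // about the parsed output will stay stable between runs.
        Assert.AreEqual("http://example.com/timetable", fake.LastRequestedUrl);
    }
}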
I hope this explanation helps.

NAnt, MbUnit, CruiseControl, Selenium - passing settings to the test assembly

I am putting together some ideas for our automated testing platform and have been looking at Selenium for the test runner.
I am wrapping the recorded Selenium C# scripts in an MbUnit test, which is being triggered via the MbUnit NAnt task. The Selenium test client is created as follows:
selenium = new DefaultSelenium("host", 4444, "*iexplore", "http://[url]/");
How can I pass the host, port and url settings into the test so their values can be controlled via the NAnt task?
For example, I may have multiple Selenium RC servers listening and I want to use the same test code passing in each server address instead of embedding the settings within the tests themselves.
I have an approach mocked up using a custom NAnt task I have written but it is not the most elegant solution at present and I wondered if there was an easier way to accomplish what I want to do.
Many thanks if anyone can help.
Thanks for the responses so far.
Environment variables could work; however, we could be running parallel tests via a single test assembly, so I wouldn't want settings to be overwritten during execution, which could break another test. Interesting line of thought though, thanks; I reckon I could use that in other areas.
My current solution involves a custom NAnt task built on top of the MbUnit task, which allows me to specify the additional host, port and url settings as attributes. These are then saved to a config file within the build directory and read in by the test assemblies. This feels a bit "clunky" to me, as my tests need to inherit from a specific class. Not too bad, but I'd like to have fewer dependencies and concentrate on the testing.
Maybe I am worrying too much!!
I have a base class for all test fixtures which has the following setup code:
[FixtureSetUp]
public virtual void TestFixtureSetup()
{
    // All settings come from the test assembly's config file (see below).
    BrowserType = (BrowserType) Enum.Parse(typeof(BrowserType),
        System.Configuration.ConfigurationManager.AppSettings["BrowserType"],
        true);
    testMachine = System.Configuration.ConfigurationManager.AppSettings["TestMachine"];
    seleniumPort = int.Parse(System.Configuration.ConfigurationManager.AppSettings["SeleniumPort"],
        System.Globalization.CultureInfo.InvariantCulture);
    seleniumSpeed = System.Configuration.ConfigurationManager.AppSettings["SeleniumSpeed"];
    browserUrl = System.Configuration.ConfigurationManager.AppSettings["BrowserUrl"];
    targetUrl = new Uri(System.Configuration.ConfigurationManager.AppSettings["TargetUrl"]);

    string browserExe;
    switch (BrowserType)
    {
        case BrowserType.InternetExplorer:
            browserExe = "*iexplore";
            break;
        case BrowserType.Firefox:
            browserExe = "*firefox";
            break;
        default:
            throw new NotSupportedException();
    }

    selenium = new DefaultSelenium(testMachine, seleniumPort, browserExe, browserUrl);
    selenium.Start();
    System.Console.WriteLine("Started Selenium session (browser type={0})", BrowserType);

    // sets the speed of execution of GUI commands
    if (false == String.IsNullOrEmpty(seleniumSpeed))
        selenium.SetSpeed(seleniumSpeed);
}
I then simply supply the test runner with a config file:
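Something along these lines, matching the AppSettings keys read above (the values are examples only):

<configuration>
  <appSettings>
    <add key="BrowserType" value="InternetExplorer" />
    <add key="TestMachine" value="selenium-rc-01" />
    <add key="SeleniumPort" value="4444" />
    <add key="SeleniumSpeed" value="500" />
    <add key="BrowserUrl" value="http://testserver/" />
    <add key="TargetUrl" value="http://testserver/app/" />
  </appSettings>
</configuration>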
For MSBuild I use environment variables; I create those in my CC.NET config and then they are available in the script. I think this would work for you too.
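For example (a rough sketch only; the variable names are whatever you choose to set in CC.NET):

using System;
using Selenium; // Selenium RC .NET client

public static class SeleniumFactory
{
    // Builds the Selenium client from environment variables set by the CC.NET/MSBuild config,
    // so nothing is hard-coded in the tests themselves.
    public static ISelenium FromEnvironment()
    {
        var host = Environment.GetEnvironmentVariable("SELENIUM_HOST") ?? "localhost";
        var port = int.Parse(Environment.GetEnvironmentVariable("SELENIUM_PORT") ?? "4444");
        var url  = Environment.GetEnvironmentVariable("TARGET_URL") ?? "http://localhost/";
        return new DefaultSelenium(host, port, "*iexplore", url);
    }
}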
Anytime I need to integrate with an external entity using NAnt, I either end up using the exec task or writing a custom task. Given the information you posted, it would seem that writing your own would indeed be a good solution. However, you state you're not happy with it. Can you elaborate a bit on why you don't think your current solution is an elegant one?
Update
Not knowing internal details it seems like you've solved it pretty well with a custom task. From what I've heard, that's how I would have done it.
Maybe a new solution will show itself in time, but for now be light on yourself!