How to test void methods in JUnit in order to increase code coverage - testing

I want to increase code coverage in JUnit testing. I have too many methods, and nearly all of them are void. Therefore, I could not make assertions or apply any other JUnit checks to the void methods. I read in the forums that testing a variable inside the void method can increase code coverage, but it still does not work. What do I need to do to increase code coverage in JUnit?
For instance, in the following code I want to test the transform method, not createAngularLines. How can I do that?
Regards
Alper
public void transform(File fileToTransform, LogGenerator logGenerator) {
    createAngularLines(menu, transform); // menu and transform come from fields not shown in this snippet
}

private StringBuilder createAngularLines(Menu menu, Transform transform) {
    StringBuilder angularLines = createInitialAngularLines(transform);
    createBodyOfTheAngularPage(angularLines, transform);
    createNextMethod(angularLines, transform);
    createAngularLines(menu);
    return angularLines;
}
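In general, a void method is covered by asserting on its observable side effects or by verifying calls on its collaborators rather than on a return value. A minimal sketch of the collaborator approach, assuming Mockito is on the classpath; every class and method name here other than transform is hypothetical:
import java.io.File;
import org.junit.jupiter.api.Test;
import org.mockito.Mockito;

class TransformTest {
    @Test
    void transform_invokesLogGenerator() {
        // hypothetical collaborator mock; verifying the interaction exercises transform()
        LogGenerator logGenerator = Mockito.mock(LogGenerator.class);
        MenuTransformer transformer = new MenuTransformer(); // hypothetical class under test
        transformer.transform(new File("menu.xml"), logGenerator);
        // assert the side effect on the collaborator instead of a return value
        Mockito.verify(logGenerator, Mockito.atLeastOnce()).log(Mockito.anyString()); // log() is a hypothetical method
    }
}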

Related

I need help writing a JUnit 5 test for a method

I am very new to Java, and I have been tasked with creating JUnit 5 tests for already written code. To start, I have the method below, which I need to write a test for. I am unsure how to approach a test for this method.
public static Double getFormattedDoubleValue(Number value){ return getFormattedDoubleValue(value, -1); }
I tried the below test, and it passes, but I feel like I am testing the wrong thing here.
@Test
public void testDoubleString() {
    Double num = 41.1212121212;
    String expected = "41.12";
    String actual = String.format("%.2f", num);
    assertEquals(expected, actual, "Should return 41.12");
}
Writing tests is simpler than it is sometimes made out to be: all it amounts to is calling your code from a test class instead of a business-logic class and making sure that you get the right output for what you put in.
Here is an excellent article that will take you from the very beginning of the process: Baeldung: JUnit 5
Possible Sample Test
As I'm not quite certain what the expectations of your method are, I am just going to pretend that your method should take a Double, and return that number minus one:
@Test
void getFormattedDoubleValue_Test() {
    Double expected = 5.0;
    Double actual = getFormattedDoubleValue(expected + 1.0);
    assertEquals(expected, actual);
}
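Since the one-argument overload in the question simply delegates to getFormattedDoubleValue(value, -1), one low-assumption test (a sketch; it presumes both overloads are visible to the test class) is to check that the two agree:
@Test
void getFormattedDoubleValue_DelegatesToTwoArgOverload() {
    Number input = 41.1212121212;
    // the one-arg overload should produce exactly what the two-arg overload produces with -1
    assertEquals(getFormattedDoubleValue(input, -1), getFormattedDoubleValue(input));
}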

A proper way to conditionally ignore tests in JUnit5?

I've been spotting a lot of these in my project's tests:
@Test
void someTest() throws IOException {
    if (checkIfTestIsDisabled(SOME_FLAG)) return;
    //... the test starts now
}
Is there an alternative to adding a line at the beginning of each test? For example, in JUnit 4 there is an old project that provides an annotation @RunIf(somecondition), and I was wondering if there is something similar in JUnit 5?
Thank you for your attention.
Tests can be disabled with @DisabledIf and a custom condition.
@Test
@DisabledIf("customCondition")
void disabled() {
    // ...
}

boolean customCondition() {
    return true;
}
See also the user guide about custom conditions.
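The condition can also live outside the test class, which avoids repeating the flag check in every class. A sketch, assuming JUnit Jupiter 5.7+ and that the flag is exposed as a system property; the package, class, and property names are hypothetical:
package com.example; // assumption: both classes live in this package

import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.condition.DisabledIf;

class SomeTests {
    @Test
    @DisabledIf(value = "com.example.TestConditions#someFlagIsSet",
                disabledReason = "disabled while SOME_FLAG is set")
    void someTest() {
        // ...
    }
}

// shared condition class; the method must be static when referenced by its fully qualified name
class TestConditions {
    static boolean someFlagIsSet() {
        return Boolean.getBoolean("SOME_FLAG"); // assumption: the flag is passed as -DSOME_FLAG=true
    }
}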

Maintain context of Selenium WebDriver while running parallel tests in NUnit?

Using: C#, NUnit 3.9, Selenium WebDriver 3.11.0, Chrome WebDriver 2.35.0
How do I maintain the context of my WebDriver while running parallel tests in NUnit?
When I run my tests with the ParallelScope.All attribute, my tests reuse the driver and fail.
The Test property in my tests does not persist across [SetUp] - [Test] - [TearDown] without the Test being given a higher scope.
Test.cs
public class Test {
    public IWebDriver Driver;
    //public Pages pages;
    //anything else I need in a test

    public Test() {
        Driver = new ChromeDriver();
    }

    //helper functions and reusable functions
}
SimpleTest.cs
[TestFixture]
[Parallelizable(ParallelScope.All)]
class MyTests {
    Test Test;

    [SetUp]
    public void Setup()
    {
        Test = new Test();
    }

    [Test]
    public void Test_001() {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_002() {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_003() {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [Test]
    public void Test_004() {
        Test.Driver.Goto("https://www.google.com/");
        IWebElement googleInput = Test.Driver.FindElement(By.Id("lst-ib"));
        googleInput.SendKeys("Nunit passing context");
        googleInput.SendKeys(Keys.Return);
    }

    [TearDown]
    public void TearDown()
    {
        string outcome = TestContext.CurrentContext.Result.Outcome.ToString();
        TestContext.Out.WriteLine("#RESULT: " + outcome);
        if (outcome.ToLower().Contains("fail"))
        {
            //Do something like take a screenshot which requires the WebDriver
        }
        Test.Driver.Quit();
        Test.Driver.Dispose();
    }
}
The docs state: "SetUpAttribute is now used exclusively for per-test setup."
Setting the Test property in the [SetUp] does not seem to work.
If this is a timing issue because I'm re-using the Test property, how do I arrange my fixtures so the Driver is unique for each test?
One solution is to put the driver inside the [Test]. But then I cannot utilize the TearDown method, which I need to keep my tests organized and cleaned up.
I've read quite a few posts/websites, but nothing solves the problem. [Parallelizable(ParallelScope.Self)] seems to be the only real solution and that slows down the tests.
Thank you in advance!
The ParallelizableAttribute makes a promise to NUnit that it's safe to run certain tests in parallel, but it doesn't do anything to actually make it safe. That's up to you, the programmer.
Your tests (test methods) have shared state, i.e. the field Test. Not only that, but each test changes the shared state, because the SetUp method is called for each test. That means your tests may not safely be run in parallel, so you shouldn't tell NUnit to run them that way.
You have two ways to go... either use a lesser degree of parallelism or make the tests safe to run in parallel.
Using a lesser degree of parallelism is the easiest. Try using ParallelScope.Fixtures on the assembly or ParallelScope.Self (the default) on each fixture. If you have a large number of independent fixtures, this may give you as good a throughput as you will get doing something more complicated.
Alternatively, to run tests in parallel, each test must have a separate driver. You will have to create it and dispose of it in the test method itself.
In the future, NUnit may add a feature that will make this easier, by isolating each test method in a separate object. But with the current software, the above is the best you can do.

Is it possible to skip a scenario with Cucumber-JVM at run-time

I want to add a tag @skiponchrome to a scenario; this should skip the scenario when running a Selenium test with the Chrome browser. The reason to do this is that some scenarios work in some environments and not in others. This might not even be specific to browser testing and could apply in other situations, for example OS platforms.
Example hook:
#Before("#skiponchrome") // this works
public void beforeScenario() {
if(currentBrowser == 'chrome') { // this works
// Skip scenario code here
}
}
I know it is possible to define ~@skiponchrome in the cucumber tags to exclude the tag, but I would like to skip a scenario at run-time. This way I don't have to think about which scenarios to skip in advance when I start a test run in a certain environment.
I would like to create a hook that catches the tag and skips the scenario without reporting a fail/error. Is this possible?
I realized that this is a late update to an already answered question, but I want to add one more option directly supported by cucumber-jvm:
@Before // (the Cucumber one)
public void setup() {
    Assume.assumeTrue(weAreInPreProductionEnvironment);
}
"and the scenario will be marked as ignored (but the test will pass) if weAreInPreProductionEnvironment is false."
You will need to add
import org.junit.Assume;
The major difference from the accepted answer is that JUnit assumption failures behave just like pending steps.
Important: because of a bug fix you will need cucumber-jvm release 1.2.5, which as of this writing is the latest. For example, the above will generate a failure instead of a pending result in cucumber-java8-1.2.3.jar.
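Tying this back to the original question, the assumption can live in a tagged hook so that only scenarios marked @skiponchrome are affected. A sketch, assuming the browser name is available at run-time (here read from a hypothetical system property):
import cucumber.api.java.Before;
import org.junit.Assume;

public class Hooks {
    @Before("@skiponchrome")
    public void skipOnChrome() {
        String currentBrowser = System.getProperty("browser", ""); // assumption: browser passed as -Dbrowser=chrome
        // the scenario is reported as ignored/pending, not failed, when the assumption does not hold
        Assume.assumeFalse("Not supported on Chrome", "chrome".equalsIgnoreCase(currentBrowser));
    }
}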
I really prefer to be explicit about which tests are being run, by having separate run configurations defined for each environment. I also like to keep the number of tags I use to a minimum, to keep the number of configurations manageable.
I don't think it's possible to achieve what you want with tags alone. You would need to write a custom JUnit test runner to use in place of @RunWith(Cucumber.class). Take a look at the Cucumber implementation to see how things work. You would need to alter the RuntimeOptions created by the RuntimeOptionsFactory to include/exclude tags depending on the browser, or other runtime condition.
Alternatively, you could consider writing a small script which invokes your test suite, building up a list of tags to include/exclude dynamically, depending on the environment you're running in. I would consider this to be a more maintainable, cleaner solution.
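For the run-configuration approach, one runner class per environment keeps the tag selection explicit. A sketch, assuming cucumber-jvm 1.2.x and its old-style tag negation; the feature path and class name are assumptions:
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(features = "classpath:features",   // assumption: where the .feature files live
                 tags = {"~@skiponchrome"})          // exclude Chrome-incompatible scenarios for the Chrome run
public class ChromeTestRunner {
}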
It's actually really easy. If you dig through the Cucumber-JVM and JUnit 4 source code, you'll find that JUnit makes skipping during runtime very easy (just undocumented).
Take a look at the following source code from JUnit 4's ParentRunner, which Cucumber-JVM's FeatureRunner (used by Cucumber, the default Cucumber runner) extends:
@Override
public void run(final RunNotifier notifier) {
    EachTestNotifier testNotifier = new EachTestNotifier(notifier,
            getDescription());
    try {
        Statement statement = classBlock(notifier);
        statement.evaluate();
    } catch (AssumptionViolatedException e) {
        testNotifier.fireTestIgnored();
    } catch (StoppedByUserException e) {
        throw e;
    } catch (Throwable e) {
        testNotifier.addFailure(e);
    }
}
This is how JUnit decides what result to show. If the test is successful it will show a pass, but it's possible to @Ignore in JUnit, so what happens in that case? Well, when an AssumptionViolatedException is thrown, the runner (here Cucumber's FeatureRunner, via ParentRunner) reports the test as ignored instead of failed.
So your example becomes:
#Before("#skiponchrome") // this works
public void beforeScenario() {
if(currentBrowser == 'chrome') { // this works
throw new AssumptionViolatedException("Not supported on Chrome")
}
}
If you've used vanilla JUnit 4 before, you'd remember that @Ignore takes an optional message that is displayed when a test is ignored by the runner. AssumptionViolatedException carries the message, so you should see it in your test output after a test is skipped this way, without having to write your own custom runner.
I too had the same challenge: I needed to skip a scenario based on a flag that I obtain dynamically from the application at run-time, which tells me whether the feature to be tested is enabled on the application or not.
So this is how I wrote my logic in the glue-code file that holds the step definitions.
I have used a unique tag '@Feature-01AXX' to mark my scenarios that need to run only when that feature (code) is available on the application.
So for every scenario, the tag '@Feature-01AXX' is checked first; if it is present, the check for the availability of the feature is made, and only then is the scenario picked to run. Otherwise it is simply skipped, and JUnit will not mark this as a failure; instead it will be marked as passed. So the final result, if these tests did not run due to the unavailability of the feature, will be a pass. That's cool...
@Before
public void before(final Scenario scenario) throws Exception {
    /*
     my other pre-setup tasks for each scenario.
    */

    // get all the scenario tags from the scenario head.
    final ArrayList<String> scenarioTags = new ArrayList<>();
    scenarioTags.addAll(scenario.getSourceTagNames());

    // check if the feature is enabled on the appliance, so that the tests can be run.
    if (checkForSkipScenario(scenarioTags)) {
        throw new AssumptionViolatedException("The feature 'Feature-01AXX' is not enabled on this appliance, so skipping");
    }
}

private boolean checkForSkipScenario(final ArrayList<String> scenarioTags) {
    // I use a tag "@Feature-01AXX" on the scenarios which need to be run when the feature is enabled on the appliance/application
    if (scenarioTags.contains("@Feature-01AXX") && !isTheFeatureEnabled()) { // if the feature is not enabled, then we need to skip the scenario.
        return true;
    }
    return false;
}

private boolean isTheFeatureEnabled() {
    /*
     my logic to check if the feature is available/enabled on the application.
     in my case it's a REST api call; I parse the JSON and check if the feature is enabled.
     if it is enabled return 'true', else return 'false'
    */
}
I've implemented a customized JUnit runner as below. The idea is to add tags at runtime.
Say that for a scenario we need new users, so we tag those scenarios "@requires_new_user". Then if we run our tests in an environment that does not easily allow registering a new user (say production), we will figure out at startup that we cannot get a new user, and "not @requires_new_user" will be added to the cucumber options to skip those scenarios.
This is the cleanest solution I can think of right now.
public class WebuiCucumberRunner extends ParentRunner<FeatureRunner> {

    private final JUnitReporter jUnitReporter;
    private final List<FeatureRunner> children = new ArrayList<FeatureRunner>();
    private final Runtime runtime;
    private final Formatter formatter;

    /**
     * Constructor called by JUnit.
     *
     * @param clazz the class with the @RunWith annotation.
     * @throws java.io.IOException if there is a problem
     * @throws org.junit.runners.model.InitializationError if there is another problem
     */
    public WebuiCucumberRunner(Class clazz) throws InitializationError, IOException {
        super(clazz);
        ClassLoader classLoader = clazz.getClassLoader();
        Assertions.assertNoCucumberAnnotatedMethods(clazz);

        RuntimeOptionsFactory runtimeOptionsFactory = new RuntimeOptionsFactory(clazz);
        RuntimeOptions runtimeOptions = runtimeOptionsFactory.create();
        addTagFiltersAsPerTestRuntimeEnvironment(runtimeOptions);

        ResourceLoader resourceLoader = new MultiLoader(classLoader);
        runtime = createRuntime(resourceLoader, classLoader, runtimeOptions);
        formatter = runtimeOptions.formatter(classLoader);
        final JUnitOptions junitOptions = new JUnitOptions(runtimeOptions.getJunitOptions());
        final List<CucumberFeature> cucumberFeatures = runtimeOptions.cucumberFeatures(resourceLoader, runtime.getEventBus());
        jUnitReporter = new JUnitReporter(runtime.getEventBus(), runtimeOptions.isStrict(), junitOptions);
        addChildren(cucumberFeatures);
    }

    private void addTagFiltersAsPerTestRuntimeEnvironment(RuntimeOptions runtimeOptions)
    {
        String channel = Configuration.TENANT_NAME.getValue().toLowerCase();
        runtimeOptions.getTagFilters().add("@" + channel);
        if (!TestEnvironment.getEnvironment().isNewUserAvailable()) {
            runtimeOptions.getTagFilters().add("not @requires_new_user");
        }
    }

    ...
}
Or you can extend the official Cucumber JUnit test runner cucumber.api.junit.Cucumber and override the method
/**
 * Create the Runtime. Can be overridden to customize the runtime or backend.
 *
 * @param resourceLoader used to load resources
 * @param classLoader used to load classes
 * @param runtimeOptions configuration
 * @return a new runtime
 * @throws InitializationError if a JUnit error occurred
 * @throws IOException if a class or resource could not be loaded
 * @deprecated Neither the runtime nor the backend or any of the classes involved in their construction are part of
 * the public API. As such they should not be exposed. The recommended way to observe the cucumber process is to
 * listen to events by using a plugin. For example the JSONFormatter.
 */
@Deprecated
protected Runtime createRuntime(ResourceLoader resourceLoader, ClassLoader classLoader,
        RuntimeOptions runtimeOptions) throws InitializationError, IOException {
    ClassFinder classFinder = new ResourceLoaderClassFinder(resourceLoader, classLoader);
    return new Runtime(resourceLoader, classFinder, classLoader, runtimeOptions);
}
You can manipulate runtimeOptions here as you wish. But the method is marked as deprecated, so use it with caution.
If you're using Maven, you could use a browser profile and then set the appropriate ~ exclude tags there.
Unless you're asking how to run this from the command line, in which case you tag the scenario with @skiponchrome and then, when you run cucumber, set the cucumber options to tags = {"~@skiponchrome"}.
If you wish simply to temporarily skip a scenario (for example, while writing the scenarios), you can comment it out (Ctrl+/ in Eclipse or IntelliJ).

How do I sense if my unit test is a member of an ordered test and, if it is, which position in that ordered test it is at?

Environment:
I have a program - named CSIS - which I need to run a lot of automated tests on in Visual Studio 2010 using C#. I have a series of functions which need to be run in many different orders but which all start and end at the same 'home screen' of CSIS. The tests will either be run on their own as a single CodedUITest (.cs filetype) or as an ordered test (.orderedtest filetype).
Goal:
The goal is to open the CSIS home screen once, no matter which of the unit tests runs first, and then, after all CodedUITests are finished, no matter which unit test is last, have the automated test close CSIS. I don't want to create a separate unit test to open CSIS to the homepage and another to close CSIS, as this is very inconvenient for testers to use.
Current Solution Development:
UPDATE: The new big question is how do I get '[ClassInitialize]' to work?
Additional Thoughts:
UPDATE: I now just need ClassInitialize to execute code at the beginning and ClassCleanup to execute code at the end of a test set.
If you would like the actual code let me know.
Research Update:
Because of Izcd's answer I was able to more accurately research the answer to my own question. I've found an answer online to my problem.
Unfortunately, I don't understand how to implement it in my code. I pasted the code as shown below in the 'Code' section of this question and the test runs fine except that it executes the OpenWindow() and CloseWindow() functions after each test instead of around the whole test set. So ultimately the code does nothing new. How do I fix this?
static private UIMap sharedTest = new UIMap();

[ClassInitialize]
static public void ClassInit(TestContext context)
{
    Playback.Initialize();
    try
    {
        sharedTest.OpenCustomerKeeper();
    }
    finally
    {
        Playback.Cleanup();
    }
}
=====================================================================================
Code
namespace CSIS_TEST
{
    //a ton of 'using' statements are here

    public partial class UIMap
    {
        #region Class Initialization and Cleanup

        static private UIMap sharedTest = new UIMap();

        [ClassInitialize]
        static public void ClassInit(TestContext context)
        {
            Playback.Initialize();
            try
            {
                sharedTest.OpenWindow();
            }
            finally
            {
                Playback.Cleanup();
            }
        }

        [ClassCleanup]
        static public void ClassCleanup()
        {
            Playback.Initialize();
            try
            {
                sharedTest.CloseWindow();
            }
            finally
            {
                Playback.Cleanup();
            }
        }

        #endregion
Microsoft's unit testing framework includes ClassInitialize and ClassCleanup attributes, which can be used to indicate methods that execute functionality before and after a test run.
( http://msdn.microsoft.com/en-us/library/ms182517.aspx )
Rather than trying to make the unit tests aware of their position, I would suggest it might be better to embed the opening and closing logic of the home screen within methods marked with the aforementioned ClassInitialize and ClassCleanup attributes.
I figured out the answer after a very long process of asking questions on StackOverflow, Googling, and just screwing around with the code.
The answer is to use AssemblyInitialize and AssemblyCleanup and to write the code for them inside the DatabaseSetup.cs file, which should be auto-generated in your project. You should find that there already is an AssemblyInitialize function in there, but it is very basic and there is no AssemblyCleanup after it. All you need to do is create a static copy of your UIMap and use it inside the AssemblyInitialize to run your OpenWindow() code.
Copy the format of the AssemblyInitialize function to create an AssemblyCleanup function and add your CloseWindow() function.
Make sure your Open/CloseWindow functions only contain basic code (such as Process.Start/Kill), as any complex variables such as forms have been cleaned up already and won't work.
Here is the code in my DatabaseSetup.cs:
using System.Data;
using System.Data.Common;
using System.Configuration;
using Microsoft.VisualStudio.TestTools.UnitTesting;
using Microsoft.Data.Schema.UnitTesting;
using System.Windows.Input;
using Keyboard = Microsoft.VisualStudio.TestTools.UITesting.Keyboard;
using Mouse = Microsoft.VisualStudio.TestTools.UITesting.Mouse;
using MouseButtons = System.Windows.Forms.MouseButtons;

namespace CSIS_TEST
{
    [TestClass()]
    public class DatabaseSetup
    {
        static private UIMap uIMap = new UIMap();
        static int count = 0;

        [AssemblyInitialize()]
        public static void InitializeAssembly(TestContext ctx)
        {
            DatabaseTestClass.TestService.DeployDatabaseProject();
            DatabaseTestClass.TestService.GenerateData();
            if (count < 1)
                uIMap.OpenWindow();
            count++;
        }

        [AssemblyCleanup()]
        public static void CleanupAssembly()
        {
            uIMap.CloseWindow();
        }
    }
}