Using But keyword in BDD throws Step not implemented exception while running scenario in Quantum framework - selenium

I am trying to run a BDD scenario in the Quantum framework. During execution, the step with the But keyword fails with the error "Step not yet implemented":
// Auto-generated code snippet by QMetry Automation Framework.
// TODO: remove NotYetImplementedException and call test steps
throw new NotYetImplementedException();
I do not see this issue with any other BDD keyword; only steps starting with the "But" keyword fail with the above exception. Is there anything I am missing?
Here is the scenario we are using:
Scenario: Validate help me log in link
Given user have the "XXX" app in mobile
But user open the app by the name "XXX"
Step implementation:
import cucumber.api.java.en.But;
...
@But("^user open the app by the name \"([^\"]*)\"$")
public void user_open_the_app_by_the_name(String arg1) throws Throwable {
    try {
        AppiumUtils.stopApp(arg1);
    } catch (Exception e) {
        // exception is intentionally swallowed
    }
}

QAF uses the following BDD keywords:
Given
When
Then
And
Using
Having
With
Since But is not an inbuilt keyword, it gives you this error. That does not mean you can't use But as a keyword: there is a feature to support any custom keyword(s) by defining synonyms.
You can try adding the following property to your application.properties file:
And=But;And
This will allow you to use But as a keyword and should resolve your issue.


BlockHound allowBlocking not working with junit5

I am using BlockHound to detect blocking calls. I would like to allow the blocking calls inside my own methods, but it is not working and still throws an error.
I am using JUnit 5 with blockhound-junit-platform.
@BeforeAll
static void configureBlockHound() {
    System.out.println("called");
    BlockHound.builder()
            .allowBlockingCallsInside(JWTHelper.class.getName(), "toToken")
            .install();
}
When I run the test I can see "called" printed on the console, but the exception is still thrown.
Try running BlockHound in "log-only" mode first to understand the root cause:
BlockHound.install(builder -> builder
        .blockingMethodCallback(method ->
                log.error("[{}] Blocking call: {}", Thread.currentThread(), method))
);
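If the log-only output confirms that the blocking call really does happen inside JWTHelper.toToken, a minimal sketch of doing everything in a single builder might look like the following. This is only an assumption-laden example: the class and method names come from the question, the package name is a placeholder, and it relies on BlockHound effectively being installed only once per JVM, so an install() issued from @BeforeAll may be ignored if blockhound-junit-platform has already installed it earlier.
import reactor.blockhound.BlockHound;

public class BlockHoundConfig {

    // Run this before the first test executes (for example from a custom
    // BlockHoundIntegration registered via ServiceLoader), not after BlockHound
    // has already been installed by the JUnit platform integration.
    public static void configure() {
        BlockHound.install(builder -> builder
                // fully qualified class name and method taken from the question;
                // the package "com.example" is a placeholder
                .allowBlockingCallsInside("com.example.JWTHelper", "toToken")
                // log-only style callback: report blocking calls instead of failing
                .blockingMethodCallback(method ->
                        System.err.println("Blocking call: " + method
                                + " on thread " + Thread.currentThread().getName())));
    }
}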

Logging Messages from Java Class back to the Karate Report

We have a scenario where we have to post a JSON request and then validate a few actions in the UI. So we have a Karate feature file which sends the request, and after that we invoke a Java class from within the feature file. The Java class runs the Selenium WebDriver test for us. In the Java methods we have a few assertions/info messages which we would like to log back to the Karate reports.
Is there a way in Karate to write these messages from my Java class to the Karate test reports?
If you use the Karate parallel runner, it is designed to collect anything logged to the package com.intuit.karate, which will then appear in the cucumber-html-report.
So you can try using the same slf4j Logger - and it might work.
A better way might be to just return a Map back to Karate from your Java code, with all the information you need to log. Then all you need to do is use the print keyword. I would actually recommend this approach; it should be less complicated and it does not assume that you are using the Cucumber HTML reporting. You could even do Karate asserts with the match keyword on the JSON that you get; refer to this example: dogs.feature.
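A minimal sketch of that Map-returning approach (the class and method names here are placeholders, not from the question):
import java.util.LinkedHashMap;
import java.util.Map;

public class UiValidator {

    // Runs the Selenium checks and returns everything worth reporting,
    // instead of logging it from inside the Java code.
    public static Map<String, Object> runUiChecks() {
        Map<String, Object> result = new LinkedHashMap<>();
        result.put("step", "perform search in google");
        result.put("searchTerm", "Selenium");
        result.put("status", "passed");
        return result;
    }
}
In the feature file you would then obtain the class with Java.type, call runUiChecks(), print the returned map with the print keyword, and optionally assert on individual keys with match.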
We did try logging back to "com.intuit.karate" as shown below, and it is logging to the 'overview-features.html' report.
Logger in Java class :
private static final Logger logger = LoggerFactory.getLogger("com.intuit.karate");
However, we observed that if an exception is thrown from the Java class, the exception appears first, followed by all the information that is logged; the two look like separate pieces in the report.
For example:
This is a sample method with the logger in the Java class that I am invoking from the feature file:
public void myTestMethod() {
    logger.info("Starting Test");
    logger.info("Setting path of chrome driver");
    System.setProperty("webdriver.chrome.driver", "chromedriver.exe"); // driver doesn't exist in this path
    logger.info("invoking chrome"); // we would expect the exception to be thrown after this line
    driver = new ChromeDriver();
    logger.info("perform search in google");
    driver.get("http://www.google.com");
    driver.findElement(By.id("lst-ib")).sendKeys("Selenium");
    driver.findElement(By.id("lst-ib")).submit();
    driver.quit();
}
But in the reports the exception appears first and then all the information logged from the Java class; the two look like different pieces in the report. Please see the report here: https://www.screencast.com/t/bBhAIj7WKj
In the above case, would it be possible to have the exception thrown right after the line where we log "invoking chrome"? I think this would make it easier to identify test failures from the reports. Please let us know if this would be possible.
It worked for me by passing the karate object from the JavaScript file to the Java code. With this I was able to call the karate.log function from the Java code, and that way I could see the messages in the report.

How can I get the JUnit tearDown method to run if an exception is thrown on startup?

I'm running Selenium tests through JUnit.
In my system the setUp method of our AbstractSeleniumTestCase class sets up the Selenium WebDriver and Firefox profile, and the tearDown method logs out of the system and closes Selenium.
Some tests override the setUp and tearDown methods to do custom test setup and teardown.
The problem I'm having is that if an error occurs in the setUp method of a test (like an unexpected popup or a Selenium exception), then the web browser is never closed and the test-specific tearDown operations are never done.
You could use a try block in the setUp() method to run tearDown() after encountering an error, and move the "meat" of the test setup into another method:
public void setUp() throws Exception {
    try {
        mySetUp();
    } catch (Exception e) {
        tearDown();
        throw e;
    }
}
Then, in your subclasses, override mySetUp() instead of setUp().
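For example, a subclass could then look something like this (the class name and the body of mySetUp() are placeholders, and driver is assumed to be the field initialized by the abstract base class):
public class LoginPageTest extends AbstractSeleniumTestCase {

    @Override
    protected void mySetUp() throws Exception {
        // test-specific setup; if this throws, the base setUp() still runs tearDown()
        driver.get("http://example.com/login");
    }
}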
You should implement a TestWatcher and override the finished method which, according to the docs, is:
Invoked when a test method finishes (whether passing or failing)
I have not used JUnit for some time now, but as far as I remember, rules are applied before @Before.
You can also initialize the driver by overriding the starting method, and take any relevant action by overriding the failed method, etc. That way it is possible to get rid of the repetitive stuff in @Before and @After. Check the docs for specifics.
Then you can annotate your test classes with either @Rule or @ClassRule; google to understand which better suits your needs. For more specific needs, it is also possible to create a rule chain.
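A minimal sketch of that idea with JUnit 4 (class and field names are placeholders; the Firefox driver follows the setup mentioned in the question):
import org.junit.Rule;
import org.junit.rules.TestWatcher;
import org.junit.runner.Description;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public abstract class AbstractSeleniumTestCase {

    protected WebDriver driver;

    @Rule
    public TestWatcher seleniumWatcher = new TestWatcher() {
        @Override
        protected void starting(Description description) {
            // replaces the old setUp(): create the driver before each test
            driver = new FirefoxDriver();
        }

        @Override
        protected void failed(Throwable e, Description description) {
            // a good place for screenshots or extra logging on failure
        }

        @Override
        protected void finished(Description description) {
            // invoked whether the test passed, failed, or never got going,
            // so the browser is always closed
            if (driver != null) {
                driver.quit();
            }
        }
    };
}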

Autoloading in Laravel Tests

I'm trying to start writing tests within Laravel.
I think it is good practice to write my own base TestCase that runs the test setup (for example, migrations).
<?php
class TestCase extends PHPUnit_Framework_TestCase {

    public function setUp()
    {
        parent::setUp();
        // just for example:
        DB::table('categories')->first();
    }
}
After that, I want all my test cases to inherit from the one created above:
<?php
class TestExample extends TestCase {

    public function testSomethingIsTrue()
    {
        $this->assertTrue(true);
    }
}
So now I have three problems:
My TestCase throws an error: class DB not found. Why the heck is the test not autoloading the Laravel framework classes?
PHPUnit tells me (with a warning) that my TestCase does not contain any tests, but that is the expected behaviour for this class. How can I avoid this message?
My TestExample cannot find the class TestCase.
Can you help me understand this? How can I configure test-specific autoloading?
UPDATE:
Answer to 1: Because I run the tests in the NetBeans IDE, which needs to be configured! Setting up the right phpunit.xml helped.
As revealed in our discussion, when you run your PHPUnit tests from your IDE (NetBeans, PhpStorm and so on), you have to make sure your IDE is correctly configured to pick up phpunit.xml.

TypeMock in an integration/regression test suite

I need to run an integration/regression test suite for our application, and the application is supposed to behave differently at different times of the day. I cannot change the system time since other apps depend on it, so I would like to mock DateTime.Now for this purpose. However, when I put the mocking in the main method, exceptions were thrown. When I use mocking in an NUnit test for the same application, it works fine. Can Typemock be used only in the context of a unit test? Is there any way I can run the solution with mocking enabled?
I ran the solution through TMockRunner.exe as well but had the same issue.
Thanks!
I see this error when I run using the method that Travis mentioned.
@Travis Illig, the code for the wrapper is:
class Program
{
    static void Main(string[] args)
    {
        ExpectationsSetup();
        ConsoleApplication2.Program.Main(args);
    }

    public static void ExpectationsSetup()
    {
        Isolate.WhenCalled(() => DateTime.Now).WillReturn(new DateTime(2010, 1, 2));
    }
}
I see the following error:
Unhandled Exception: TypeMock.TypeMockException:
*** No method calls found in recording block. Please check:
  * Are you trying to fake a field instead of a property?
  * Are you trying to fake an unsupported mscorlib type? See supported types here: http://www.typemock.com/mscorlib-types
   at gt.a(c0 A_0, Boolean A_1)
   at bg.a(Boolean A_0)
   at dt.b(Boolean A_0)
   at i2.b(Boolean A_0)
   at i2.a(Object A_0, Boolean A_1, Func`1 A_2, Action A_3, Action A_4, Action A_5, Boolean A_6)
   at i2.b(Object A_0)
   at TypeMock.ArrangeActAssert.ExpectationEngine`1.a(TResult A_0)
   at ConsoleApplication2Mock.Program.ExpectationsSetup() in C:\Users\shvenkat\Documents\Visual Studio 2010\Projects\ConsoleApplication2\ConsoleApplication2Mock\Program.cs:line 22
   at ConsoleApplication2Mock.Program.Main(String[] args) in C:\Users\shvenkat\Documents\Visual Studio 2010\Projects\ConsoleApplication2\ConsoleApplication2Mock\Program.cs:line 14
Any help will be appreciated
Thanks!
The typical use of Typemock Isolator is within the context of a unit test or a small testing environment. There is a non-zero level of overhead associated with running Isolator (or any other profiler-based product like NCover) in a process, so you generally don't want to do that.
However, there are some edge cases when you really do want to run Isolator on a regular process, and that is possible.
Here's an article I wrote a while back explaining how you can do this for a Windows Service, for example.
The basic algorithm is:
Enable Typemock Isolator (either by setting up the profiling flags on the process or by running things through TMockRunner.exe).
Set up your expectations (this is where you mock DateTime.Now or whatever else you want mocked out).
Let the application finish starting up and run as normal.
The first step is easy enough - it's just as if you were running it in a unit test environment. It's the second step that can be difficult. It means you need some sort of "wrapper" that runs before the rest of your application is allowed to start and sets up your expectations. This normally happens in a test setup method or in the "arrange" part of your "arrange-act-assert" unit test. You'll see an example of this in my article.
Again, I'll warn you about performance. It's probably fine to do something like this in a test environment like you mention you're doing, but I don't think I'd do it in production.
A specific note about your program and the error you're seeing:
I tried to set up a reproduction of it and while I was able to mock other things, I wasn't able to get DateTime.Now mocking to work either. I got the same exception you're seeing.
For example, try this in your wrapper:
class Program
{
    static void Main(string[] args)
    {
        ExpectationsSetup();
        ConsoleApplication2.Program.Main(args);
    }

    public static void ExpectationsSetup()
    {
        // Mock something OTHER than DateTime.Now - this mocks
        // the call to your wrapped application.
        Isolate
            .WhenCalled(() => ConsoleApplication2.Program.Main(null))
            .DoInstead(ctx => Console.WriteLine("faked!"));
    }
}
Running that wrapper through TMockRunner.exe, you'll actually get the mocking to work. However, switching it back to DateTime.Now, you'll get the exception again.
I did verify that mocking DateTime.Now in a unit test environment does work. So there must be something special about the unit test environment, though I don't know what.
Figuring out that difference is a little more in-depth than something that can be handled here. You should contact Typemock support with this one and explain the situation. They are pretty good about helping out. Be sure to send them a reproduction (e.g., a simple console app showing the issue) and you'll get a faster/better response.