Is there a way to check that SetUp code has actually worked in GTest fixtures, so that the whole fixture or test application can be marked as failed, rather than getting weird test results or having to check this explicitly in each test?
If you put your fixture setup code into a SetUp method and it issues a fatal failure (via the ASSERT_XXX or FAIL macros), Google Test will not run your test body. So all you have to write is:
class MyTestCase : public testing::Test {
 protected:
  bool InitMyTestData() { ... }

  virtual void SetUp() {
    ASSERT_TRUE(InitMyTestData());
  }
};

TEST_F(MyTestCase, Foo) { ... }
Then MyTestCase.Foo will not execute if InitMyTestData() returns false. If you already have nonfatal assertions in your setup code (i.e., EXPECT_XXX or ADD_FAILURE), you can turn them into a fatal failure by writing ASSERT_FALSE(HasFailure()); afterwards. You can find more information on failure detection in the Google Test Advanced Guide wiki page.
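A minimal sketch of that pattern (the CheckMyTestData helper and the data_ member are hypothetical):
class MyOtherTestCase : public testing::Test {
 protected:
  void CheckMyTestData() {
    // Nonfatal assertion: on its own it would not stop the test body from running.
    EXPECT_FALSE(data_.empty());
  }

  virtual void SetUp() {
    // ... populate data_ ...
    CheckMyTestData();
    // Promote any nonfatal failures recorded above into a fatal one,
    // so the test body is skipped when setup did not succeed.
    ASSERT_FALSE(HasFailure());
  }

  std::vector<int> data_;
};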
I don't know if I am missing something, but I could not find anything that explains how to create a test suite like in JUnit. Can someone help me? I saw that the documentation offers grouping of tests, but when I run from Gradle the logs are really large and not very useful.
You can group your tests using tags; see https://kotest.io/docs/framework/tags.html.
For example, to group tests by operating system you could define the following tags:
object Linux : Tag()
object Windows : Tag()
Test cases can then be marked with tags using the config function:
import io.kotest.core.spec.style.StringSpec

class MyTest : StringSpec() {
    init {
        "should run on Windows".config(tags = setOf(Windows)) {
            // ...
        }
        "should run on Linux".config(tags = setOf(Linux)) {
            // ...
        }
        "should run on Windows and Linux".config(tags = setOf(Windows, Linux)) {
            // ...
        }
    }
}
Then you can tell Gradle to run only tests with specific tags; see https://kotest.io/docs/framework/tags.html#running-with-tags.
Example: to run only tests tagged with Linux, but not tagged with Database, you would invoke Gradle like this:
gradle test -Dkotest.tags="Linux & !Database"
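Depending on your Gradle setup, you may need to forward that system property from the Gradle invocation to the forked test JVM; a minimal sketch in build.gradle.kts (assuming the JUnit Platform runner):
tasks.test {
    useJUnitPlatform()
    // Pass the kotest.tags expression from the Gradle command line through
    // to the test JVM, where Kotest evaluates it.
    System.getProperty("kotest.tags")?.let { systemProperty("kotest.tags", it) }
}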
Tags can also be included or excluded at runtime (for example, if you're running from a run configuration instead of passing system properties) through the RuntimeTagExpressionExtension:
RuntimeTagExpressionExtension.expression = "Linux & !Database"
I have a project with Quarkus and the Lambda extension. I have multiple integration tests and I need to restart the lambda between each test.
Apparently the lambda is created by MockEventServer, but I didn't manage to get access to the created object.
By analyzing the code, I found that you can inject a MockEventServer into the following method:
DevServicesLambdaProcessor.startEventServer(LaunchModeBuildItem launchMode,
LambdaConfig config,
Optional<EventServerOverrideBuildItem> override,
BuildProducer<DevServicesResultBuildItem> devServicePropertiesProducer,
BuildProducer<RuntimeApplicationShutdownBuildItem> runtimeApplicationShutdownBuildItemBuildProducer)
So I tried to create a build step:
public class AwsMockServ {

    public static MockEventServer mockEventServer = new MockEventServer();

    @BuildStep
    public EventServerOverrideBuildItem overrideEventServer() {
        return new EventServerOverrideBuildItem(() -> mockEventServer);
    }
}
But the build step is ignored...
Has anyone succeeded in restarting the MockEventServer between integration tests?
Thanks for your help.
How can I make only failed scenarios run again automatically on failure?
Here is what I am doing:
I pass the TestRunner class from the command line through the cucumber-testng.xml file at run time.
I can see the rerun.txt file after a scenario fails, containing feature/GM/TK/payment.feature:71 (pointing to the failed scenario), but the failed scenario is not automatically re-run.
The "TestRunner" java file
@RunWith(Cucumber.class)
@CucumberOptions(strict = true,
        features = { "src/test/resources/" }, // feature file location
        glue = { "com/test/stepdefs", "com.test.cucumber.hooks" }, // hooks and stepdef location
        plugin = { "json:target/cucumber-report-composite.json", "pretty", "rerun:target/rerun.txt" }
)
public class CucumberTestRunner extends AbstractTestNGCucumberTests {
}
The "RunFailedTest" Class to re-run from rerun.txt file
@RunWith(Cucumber.class)
@CucumberOptions(
        strict = false,
        features = { "@target/rerun.txt" }, // rerun location
        glue = { "com/test/stepdefs", "com.test.cucumber.hooks" }, // hooks and stepdef location
        plugin = { "pretty", "html:target/site/cucumber-pretty", "json:target/cucumber.json" }
)
class RunFailedTest {
}
You can achieve this by using Gherkin with QAF, which generates a TestNG XML configuration for failed scenarios that you can use for a rerun. It also supports scenario rerun on failure by setting the retry.count property.
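As a sketch, the retry behaviour would be enabled through that property in your QAF project properties (treat the exact properties file location as an assumption for your setup):
# rerun a failed scenario once before reporting it as failed
retry.count=1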
Using Cucumber + Maven + TestNG:
First, you don't need @RunWith(Cucumber.class) as mentioned in your question; if you are using TestNG, only @CucumberOptions is required.
When you start your test execution, all scenario failures will be written to the file target/rerun.txt, as per the configuration in your Runner file.
Now, you need to create one more Runner file (for example, "FailureRunner") and in this file provide the path "@target/rerun.txt" (which already has the details of the failed scenarios): features = { "@target/rerun.txt" }
Now, you need to update your testng.xml file and include the "FailureRunner" as below:
<class name="Class path of Your First Runner Class name" />
<class name="class path of FailureRunner Class" />
Once you do all the above steps and run your execution, the first runner will write the failed scenarios to target/rerun.txt; after that, the "FailureRunner" class will be executed, pick up the "@target/rerun.txt" file, and hence the failed scenarios will be re-run.
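For illustration, a minimal testng.xml along these lines might look like the following (the package name com.test.runners is an assumption; use the actual package of your runner classes):
<suite name="Cucumber suite">
  <test name="All scenarios">
    <classes>
      <!-- first runner, which writes target/rerun.txt -->
      <class name="com.test.runners.CucumberTestRunner" />
    </classes>
  </test>
  <test name="Rerun failed scenarios">
    <classes>
      <!-- failure runner, which reads @target/rerun.txt -->
      <class name="com.test.runners.RunFailedTest" />
    </classes>
  </test>
</suite>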
I have executed it in the same way and it works fine; let me know if it helps!
In an Objective-C project, we started writing our new unit tests in Swift. I'm now trying to create our first unit test, which verifies that the results of parsed JSON are saved successfully. However, the test already fails during setUp() due to the following error:
[ProjectTests.Project testInitializingOverlayCollectionCreatesAppropriateRealmObjects] : failed: caught "NSInvalidArgumentException", "+[RLMObjectBase ignoredProperties]: unrecognized selector sent to class 0x759b70
So apparently it tries to execute ignoredProperties on the RLMObjectBase class, and that method isn't implemented. I'm not sure how this happens, because I have yet to initialise anything beyond creating an RLMRealm object with a random in-memory identifier.
ProjectTests.swift
import XCTest

class ProjectOverlayCollectionTests: XCTestCase {
    var realm: RLMRealm!

    override func setUp() {
        super.setUp()
        // Put setup code here. This method is called before the invocation of each test method in the class.
        let realmConfig = RLMRealmConfiguration()
        realmConfig.inMemoryIdentifier = NSUUID().UUIDString
        do {
            realm = try RLMRealm(configuration: realmConfig) // <-- Crashes here.
        }
        catch _ as NSError {
            XCTFail()
        }
    }

    override func tearDown() {
        // Put teardown code here. This method is called after the invocation of each test method in the class.
        super.tearDown()
    }

    func testInitializingOverlayCollectionCreatesAppropriateRealmObjects() {
        XCTAssertTrue(true)
    }
}
Project-Bridging-Header.h
#import <Realm/Realm.h>
Podfile
source 'https://github.com/CocoaPods/Specs.git'
platform :ios, '7.1'

def shared_pods
  pod 'Realm', '0.95.0'
end

target 'Project' do
  shared_pods
end

target 'ProjectTests' do
  shared_pods
end
As mentioned in the Realm documentation:
Avoid Linking Realm and Tested Code in Test Target
Remove the Realm pod from the ProjectTests target and all is right with the world.
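In terms of the Podfile from the question, that roughly means dropping shared_pods from the test target, for example:
target 'Project' do
  shared_pods
end

# The test target no longer links Realm directly; it uses the copy
# linked into the app target.
target 'ProjectTests' do
end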
Update: This answer is outdated. As @rommex mentions in a comment, following the current Realm installation documentation should link it to both your module and test targets without problems. However, I have not checked this.
My application is based on Play Framework and contains multiple modules.
The database interaction is handled through JPA (<provider>org.eclipse.persistence.jpa.PersistenceProvider</provider>).
My task is to cover one of these modules with unit tests.
Unfortunately, running the "play test" command with unit tests provided at module level results in the following exception:
javax.persistence.PersistenceException: No Persistence provider for EntityManager named defaultPersistenceUnit
The persistence provider is defined globally (outside of the module) in conf/META-INF/persistence.xml.
Copying the global persistence.xml into the module doesn't fix the issue.
Placing the tests outside of the module (in the global test directory) and executing them works flawlessly, provided that there are no other tests within modules.
Can someone explain to me why this error comes up? Is there any way to have working JPA-capable tests at module level?
Thanks in advance
Urs
I had the same problem running JUnit tests from my Play application in Eclipse.
To solve this issue you need to make the conf folder available to the whole project:
Project Properties -> Java Build Path -> Source
Add Folder and choose the conf folder.
I checked your code. I think you don't need another persistence.xml. Can you try these solutions in the Play module?
@Test()
public void empty() {
    running(fakeApplication(), new Runnable() {
        public void run() {
            JPA.withTransaction(new play.libs.F.Callback0() {
                public void invoke() {
                    // "SomeEntity" and "someAttrib" are placeholders for one of
                    // your own model classes and its constructor argument.
                    SomeEntity o = new SomeEntity(someAttrib);
                    o.save();
                    assertThat(o).isNotNull();
                }
            });
        }
    });
}
Or:
@Before
public void setUpIntegrationTest() {
    FakeApplication app = fakeApplication();
    start(app);
    em = app.getWrappedApplication().plugin(JPAPlugin.class).get().em("default");
    JPA.bindForCurrentThread(em);
}
This code is from this page; I haven't tested it!
Please modify your code to:
@BeforeClass
public static void setUp() {
    FakeApplication app = fakeApplication(inMemoryDatabase());
    start(app);
    em = app.getWrappedApplication().plugin(JPAPlugin.class).get().em("default");
    JPA.bindForCurrentThread(em);
}
Please try it!