How to call test helper function from within integration tests? - testing

I'm trying to figure out how to best organize my tests in Rust, and I'm running into the following problem. I have a test utility (test_util) that I define in a module and I would like to be able to use it from my unit tests as well as from my integration tests.
Definition of test_util in src/lib.rs:
#[cfg(test)]
pub mod test_util {
    pub fn test_helper() {}
}
I can access my helper function from my unit tests in another module, src/some_module.rs:
#[cfg(test)]
pub mod test {
    use crate::test_util::test_helper;

    #[test]
    fn test_test_helper() {
        test_helper();
    }
}
However, when I try to use the utility from my integration test, as in tests/integration_test.rs:
use my_project::test_util::test_helper;

#[test]
fn integration_test_test_helper() {
    test_helper();
}
I get the following compiler message:
8 | use my_project::test_util::test_helper;
  |                 ^^^^^^^^^ could not find `test_util` in `my_project`
Is there a good reason why it is not allowed to access test code of a project from within an integration test belonging to that same project? I understand that integration tests can only access the public parts of the code, but I think it would make sense to also allow access to the public parts of the unit test code. What would be a workaround for this?

The test feature is only enabled while tests are being run on that crate itself. Integration tests are compiled externally to the crate, so you cannot access anything that is gated on test.
In my company we have a convention to put shared test utilities in a public test_utils module at the top level of a crate. You could gate this module behind your own feature, say integration_test, which you always enable when running those tests, but we don't currently bother with that.
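A minimal sketch of that convention, assuming the feature is called integration_test (both the feature name and the helper body are illustrative):
In Cargo.toml:
[features]
integration_test = []
In src/lib.rs:
// Compiled for this crate's own unit tests (via `test`) and for
// any build that switches the feature on.
#[cfg(any(test, feature = "integration_test"))]
pub mod test_utils {
    pub fn test_helper() {}
}
Integration tests can then use my_project::test_utils::test_helper, as long as they are run with cargo test --features integration_test.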

Related

What is an idiomatic way to implement a trait only for tests across multiple crates? [duplicate]

This question already has answers here: What is an idiomatic way to have shared utility functions for integration tests and benchmarks?
I work in a workspace with dozens of crates. One of those crates exposes a trait. As a mock, I implement that trait for () with unimplemented! in every function (they're not actually used). I'd like that implementation to be available from the other crates, but only during the tests: what is the idiomatic way (the handiest) to do so?
For now, the implementation is behind a mock feature, and I add this crate with the mock feature as a dev dependency in a random crate. That forces the compiler to take that implementation into account during the tests. It's an ugly hack, so I'd rather have another way.
Items gated with test are not exported from a crate, even for crates that use it as a dev dependency.
As of Rust 1.51.0, you can work around that by using a custom feature.
In Cargo.toml:
[features]
test-traits = []
In the code:
#[cfg(feature = "test-traits")]
impl MyTrait for MyStruct {}
In crates that depend on it, you can enable the new resolver:
[package]
resolver = "2"
And add a dev dependency that enables the feature:
[dev-dependencies]
your_crate = { version = "1.0", features = ["test-traits"] }
Without the new resolver enabled, all features are additive across targets, so enabling the feature in dev-dependencies would enable it for non-test code too. With the new resolver this is handled more like you would expect: the feature is only enabled for targets that actually need the dev-dependencies, such as tests.
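As a sketch, this is how the mock becomes visible from a dependent crate's tests (names taken from the snippets above; the test file is illustrative):
// tests/uses_mock.rs in the dependent crate. This target is built
// with `test-traits` enabled because dev-dependencies requested it.
use your_crate::{MyStruct, MyTrait};

#[test]
fn mock_impl_is_available() {
    // Compiles only because `test-traits` brought the
    // `impl MyTrait for MyStruct` into existence.
    fn assert_impls<T: MyTrait>() {}
    assert_impls::<MyStruct>();
}
A normal cargo build of the dependent crate does not enable the feature, so the mock never leaks into production code.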

How to disable a parallel run for a specific SpecFlow scenario?

Is it possible to exclude a specflow scenario from parallel run?
I set up a parallel run for the whole assembly by doing this:
[assembly: Parallelize(Workers = 10, Scope = ExecutionScope.ClassLevel)]
in AssemblyInfo.cs file.
But now I need to exclude one specific scenario from parallel run. How can I do it?
One way to solve this is to use the NonParallelizable attribute provided by NUnit.
Example:
namespace Tests
{
    [SetUpFixture]
    public class TestsSetUpFixture
    {
        // set up the tests
    }

    [TestFixture]
    [NonParallelizable]
    public class TestFixture1
    {
        [Test]
        public void TestFixture1_Test()
        {
            // do stuff in your test
        }
    }
}
NUnit provides this documentation:
This attribute is used to indicate that the test on which it appears may not be run in parallel with any other tests. The attribute takes no arguments and may be used at the assembly, class or method level.
When used at the assembly level, its only effect is that execution begins on the non-parallel queue. Test suites, fixtures and test cases will continue to run on the same thread unless a fixture or method is marked with the Parallelizable attribute.
When used on a test fixture or method, that test will be queued on the non-parallel queue and will not run while other tests marked as Parallelizable are being run.
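If only a single test needs to be excluded, the attribute can also be applied at the method level; a minimal hand-written NUnit sketch (fixture and method names are made up):
using NUnit.Framework;

namespace Tests
{
    [TestFixture]
    public class MixedFixture
    {
        [Test]
        [Parallelizable]
        public void RunsInParallel()
        {
            // fine to run alongside other tests
        }

        [Test]
        [NonParallelizable]
        public void RunsAlone()
        {
            // queued on the non-parallel queue; will not run while
            // Parallelizable tests are executing
        }
    }
}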
Hope this helps.

Mule- Behaviour-Driven Development using JBehave

Is it possible to use JBehave for BDD testing in a Mule application? Any working example would be very helpful.
Thank you :)
It should be possible. What do you want to test? It's easy to test a single Java transformer with JBehave, but it gets much worse once you start writing integration tests with JBehave. Seriously, I wouldn't do that.
It could work if you use MUnit with Java, but I would never mix Java JBehave code with XML MUnit tests, because it would become unmaintainable.
I always test without a BDD tool as a wrapper and use a simple Given-When-Then-like syntax for the names of my tests. For example, "should-be-irrelevant-when-purchaser-is-zero" is the name of one of my tests. That way you can always see which test fails and why.
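For illustration, that naming convention in plain JUnit could look like this (the Discount class is made up and inlined to keep the example self-contained):
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class DiscountTest {

    // Hypothetical class under test, inlined for the example.
    static class Discount {
        int applyTo(int purchaserCount) {
            return purchaserCount == 0 ? 0 : purchaserCount * 5;
        }
    }

    // Given-When-Then lives in the test name rather than in a BDD framework.
    @Test
    public void should_be_irrelevant_when_purchaser_is_zero() {
        assertEquals(0, new Discount().applyTo(0));
    }
}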
In case you want to test a custom Java transformer like this one:
import org.mule.api.transformer.TransformerException;
import org.mule.transformer.AbstractTransformer;

public class MyCustomTransformer extends AbstractTransformer {

    @Override
    protected Object doTransform(Object src, String enc) throws TransformerException {
        return null;
    }
}
It's definitely possible, but I don't see what the benefit would be. I would use Mockito with a Given/When/Then syntax instead.
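For illustration, a Given/When/Then-structured JUnit test for the transformer above might look like this (it calls the protected doTransform directly, so it assumes the test class sits in the same package as MyCustomTransformer; any collaborators would be stubbed with Mockito in the "given" block):
import static org.junit.Assert.assertNull;
import org.junit.Test;

public class MyCustomTransformerTest {

    @Test
    public void should_return_null_when_any_source_is_transformed() throws Exception {
        // given
        MyCustomTransformer transformer = new MyCustomTransformer();

        // when
        Object result = transformer.doTransform("some payload", "UTF-8");

        // then: the stub above always returns null
        assertNull(result);
    }
}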

Use different structure for grails unit tests

I have a very simple package structure, only one level deep for all my Grails artifacts: just "estra", the name of the application, because the Grails application structure already provides the separating folders. But when writing unit tests, all the classes end up in the same estra.* package, and I want to keep them separated, like estra.domain, estra.controllers, etc.
Right now everything works fine, but the tests are pretty simple. Will I face any problem in the future with dependency injection or something?
No, the package name doesn't influence your tests, since in your test class you "say" which class is being tested using the @TestFor annotation. But remember that in unit tests you need to set your dependencies manually.
class ServiceOne {
    def serviceTwo
}

@TestFor(ServiceOne)
class ServiceOneTests {

    @Before
    public void setup() {
        service.serviceTwo = new ServiceTwo() // or a mocked instance...
    }
}

Run Cucumber JVM tests manually

I have a bit of a special situation. Basically I have a unit test, annotated with @Test, and inside that test I need to execute a Cucumber JVM test class.
Why? Long story. Something to do with classloaders and RoboGuice, it's not very important but it does impose limits on what I can and cannot do.
Here's the test method:
@Test
public void runCucumberFeature() throws Exception {
    Cucumber cucumber = new Cucumber(MyCucumberTest.class);
    cucumber.run(new RunNotifier());
}
MyCucumberTest is a class I have created, and annotated like this:
//@RunWith(Cucumber.class)
@Cucumber.Options(format = {"pretty", "html:target/cucumber"}, strict = true)
public class MyCucumberTest {
    // Empty, as required by Cucumber JVM
}
Why have I commented out the @RunWith annotation? Because if I don't, the Cucumber test runner will pick up the test and run it, which I don't want because I am running the test manually.
The problem is that the above doesn't work. It looks like Cucumber is finding the feature files; it verifies that MyCucumberTest contains the @Given methods and so on, and it even prints out the test as if it were running it.
But it doesn't. No code is executed inside the @Given, @When and @Then methods. I'm not sure why this is, but I have a vague idea that the Cucumber JVM test runner doesn't want to execute the code because the class isn't annotated with @RunWith.
Can anyone help?
I can't provide the solution you're looking for, but....
... have you considered tagging the test that you want to run manually (e.g. with @Manual)?
Then you could uncomment your @RunWith annotation and exclude the manual test by adding --tags ~@Manual to your Cucumber-JVM invocation.
In your manual JUnit invocation you could add --tags @Manual instead.
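A sketch of that tag setup with two runner classes (class names, tags and option values are illustrative; the Cucumber import depends on your Cucumber-JVM version):
import org.junit.runner.RunWith;

// Regular suite, picked up by JUnit: runs everything except @Manual scenarios.
@RunWith(Cucumber.class)
@Cucumber.Options(format = {"pretty", "html:target/cucumber"},
        tags = {"~@Manual"}, strict = true)
class RegularCucumberTest {
}

// Manual suite: only @Manual scenarios; instantiated programmatically
// as in the question, so it deliberately has no @RunWith.
@Cucumber.Options(format = {"pretty", "html:target/cucumber"},
        tags = {"@Manual"}, strict = true)
class ManualCucumberTest {
}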