Share managed resources between tests (ZIO Test)

I'm binding an HttpRoute and testing it.
The binding happens inside a ZManaged context, yet I need the route for every test, so the resource is re-acquired each time, which is very resource inefficient.
Is there a way to run many labeled tests inside a single ZManaged context?


Implementing Custom Cache Store

I just implemented a custom cache store for a different backing store, based on the official RocksDB one.
That leads me to a number of concerns/questions:
I found out the hard way that PersistenceContextInitializerImpl is auto-generated, and had added an import from Eclipse to resolve the issue. Now I have to leave it non-imported and showing as an error in Eclipse. Is there a best-practice way to handle this?
Why is RocksDBDbStoreTest#testSegmentsRemovedAndAdded called when segmented is false, since it calls removeSegments, which contractually should not be called if the store is not segmented?
In the same class, why is numSegments in buildConfig set to a value larger than 1 for non-segmented test cases?
Is there any example of a store implementing the NonBlockingStore transactional methods? Mostly I'm wondering whether all calls are made from the same thread.
I wanted to disable the compatibility test, since prior versions are not supported. I changed the group to unstable or manual, but it would still always get run, which doesn't seem to match the documentation. What is the right way to exclude it from the build-time run?
Are there any performance/stress tests for persistence stores that can be executed or adapted?
I found out the hard way that PersistenceContextInitializerImpl is auto-generated, and had added an import from Eclipse to resolve the issue. Now I have to leave it non-imported and showing as an error in Eclipse. Is there a best-practice way to handle this?
There should be a way to have Eclipse run the annotation processor. We use IntelliJ and it works fine out of the box.
Why is RocksDBDbStoreTest#testSegmentsRemovedAndAdded called when segmented is false, since it calls removeSegments, which contractually should not be called if the store is not segmented?
This is just a side effect of having a parameterized test like that. At actual runtime it won't be invoked. You can just have the test ignore it if it is segmented:
if (segmented) return;
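For illustration, a hedged sketch of where that guard could live; the class layout and the segmented flag are assumptions about the parameterized base test, not actual Infinispan code (invert the check if the opposite parameterization should be skipped):

import org.testng.annotations.Test;

// Hypothetical parameterized store test; 'segmented' stands in for the
// flag the real base class sets per parameterization.
public class CustomStoreSegmentTest {
    private final boolean segmented = false; // would come from the parameterization

    @Test
    public void testSegmentsRemovedAndAdded() {
        if (segmented) return; // guard from the answer above
        // ... exercise removeSegments / addSegments ...
    }
}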
In the same class, why is numSegments in buildConfig set to a value larger than 1 for non-segmented test cases?
The Infinispan data container is always segmented when running any of the clustered modes. The store, however, is not required to be segmented in those cases. If the store is not segmented, you can ignore any segment parameters, as documented.
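As a hedged illustration of ignoring the segment parameters (a minimal stand-in class, not the real NonBlockingStore interface; the method shape is simplified):

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ConcurrentHashMap;

// Simplified stand-in for a non-segmented store: the SPI still passes a
// segment argument, but the store deliberately ignores it.
public class NonSegmentedBackend<K, V> {
    private final Map<K, V> data = new ConcurrentHashMap<>();

    public CompletionStage<V> load(int segment, K key) {
        // 'segment' ignored: a non-segmented store keys only on 'key'
        return CompletableFuture.completedFuture(data.get(key));
    }
}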
Is there any example of a store implementing the NonBlockingStore transactional methods? Mostly I'm wondering whether all calls are made from the same thread.
You can see some at https://github.com/infinispan/infinispan/blob/main/persistence/jdbc/src/main/java/org/infinispan/persistence/jdbc/stringbased/JdbcStringBasedStore.java#L724. The methods can and will be invoked from different threads, which is why it stores a Map keyed by the Transaction.
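As a hedged sketch of that pattern (signatures simplified, not the exact NonBlockingStore SPI; Modification is a placeholder type):

import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ConcurrentHashMap;
import javax.transaction.Transaction;

// Per-transaction bookkeeping: prepare/commit/rollback for one transaction
// may each arrive on a different thread, so pending work lives in a
// concurrent map keyed by the Transaction, not in thread-local state.
public class TransactionalStoreSketch {
    interface Modification {} // placeholder for buffered writes/removes

    private final Map<Transaction, List<Modification>> pending = new ConcurrentHashMap<>();

    public CompletionStage<Void> prepare(Transaction tx, List<Modification> mods) {
        pending.put(tx, mods); // buffer until commit or rollback
        return CompletableFuture.completedFuture(null);
    }

    public CompletionStage<Void> commit(Transaction tx) {
        List<Modification> mods = pending.remove(tx);
        // ... apply 'mods' to the backing store here ...
        return CompletableFuture.completedFuture(null);
    }

    public CompletionStage<Void> rollback(Transaction tx) {
        pending.remove(tx); // discard the buffered work
        return CompletableFuture.completedFuture(null);
    }
}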
I wanted to disable the compatibility test, since prior versions are not supported. I changed the group to unstable or manual, but it would still always get run, which doesn't seem to match the documentation. What is the right way to exclude it from the build-time run?
Not sure I understand the question. For your store you shouldn't need any compatibility test, so just don't copy that test file.
Are there any performance/stress tests for persistence stores that can be executed or adapted?
We have https://github.com/infinispan/infinispan-benchmarks/blob/main/cachestores/src/main/java/org/infinispan/jmhbenchmarks/InfinispanHolder.java that should work.

Location leaking between vue-test-utils tests

I am trying to create everything fresh for each test, for example creating localVue in each test; however, the location seems to leak between unit tests. I am using vue-test-utils and jest with vue-router. The way I work around it is to explicitly navigate to '/' at the beginning of each test. Is this inevitable, or is there a way to isolate the tests from each other?
Yes: because you are running tests in an environment with window on the global object, any changes to window or its properties affect future tests running in the same scope. The best approach is to reset any properties that your source code alters before each test.

TestNG & Selenium: Separate tests into "groups", run ordered inside each group

We use TestNG and Selenium WebDriver to test our web application.
Now our problem is that we often have several tests that need to run in a certain order, e.g.:
login to application
enter some data
edit the data
check that it's displayed correctly
Now obviously these tests need to run in that precise order.
At the same time, we have many other tests which are totally independent from the list of tests above.
So we'd like to be able to somehow put tests into "groups" (not necessarily groups in the TestNG sense), and then run them such that:
tests inside one "group" always run together and in the same order
but different test "groups" as a whole can run in any order
The second point is important, because we want to avoid dependencies between tests in different groups (so different test "groups" can be used and developed independently).
Is there a way to achieve this using TestNG?
Solutions we tried
At first we just put tests that belong together into one class, and used dependsOnMethods to make them run in the right order. This used to work in TestNG V5, but in V6 TestNG will sometimes interleave tests from different classes (while respecting the ordering imposed by dependsOnMethods). There does not seem to be a way to tell TestNG "Always run tests from one class together".
We considered writing a method interceptor. However, this has the disadvantage that running tests from inside an IDE becomes more difficult (because directly invoking a test on a class would not use the interceptor). Also, tests using dependsOnMethods cannot be ordered by the interceptor, so we'd have to stop using that. We'd probably have to create our own annotation to specify ordering, and we'd like to use standard TestNG features as far as possible.
The TestNG docs propose using preserve-order to order tests. That looks promising, but only works if you list every test method separately, which seems redundant and hard to maintain.
Is there a better way to achieve this?
I am also open for any other suggestions on how to handle tests that build on each other, without having to impose a total order on all tests.
PS
alanning's answer points out that we could simply keep all tests independent by doing the necessary setup inside each test. That is in principle a good idea (and some tests do this), however sometimes we need to test a complete workflow, with each step depending on all previous steps (as in my example). To do that with "independent" tests would mean running the same multi-step setup over and over, and that would make our already slow tests even slower. Instead of three tests doing:
Test 1: login to application
Test 2: enter some data
Test 3: edit the data
we would get
Test 1: login to application
Test 2: login to application, enter some data
Test 3: login to application, enter some data, edit the data
etc.
In addition to needlessly increasing testing time, this also feels unnatural - it should be possible to model a workflow as a series of tests.
If there's no other way, this is probably how we'll do it, but we are looking for a better solution, without repeating the same setup calls.
You are mixing "functionality" and "test". Separating them will solve your problem.
For example, create a helper class/method that executes the steps to log in, then call that class/method in your Login test and all other tests that require the user to be logged in.
Your other tests do not actually need to rely on your Login "Test", just the login class/method.
If later back-end modifications introduce a bug in the login process, all of the tests which rely on the Login helper class/method will still fail as expected.
Update:
Turns out this already has a name, the Page Object pattern. Here is a page with Java examples of using this pattern:
http://code.google.com/p/selenium/wiki/PageObjects
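For instance, a minimal sketch of a page object for the login step (the locators and the HomePage class are assumptions about the application under test):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Encapsulates the login steps so any test can reuse them without
// depending on a separate "login test".
public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public HomePage loginAs(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login")).click();
        return new HomePage(driver); // the page the workflow lands on next
    }
}

// Stub for the page reached after login; a real suite would expose that
// page's actions in the same style.
class HomePage {
    HomePage(WebDriver driver) {
    }
}

Because each action returns the next page object, a workflow test reads as a chain of method calls instead of a chain of dependent tests.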
Try dependsOnGroups along with dependsOnMethods, and put all methods of the same class into one group.
For example:
@Test(groups = {"cls1", "other"})
public void cls1test1() {
}

@Test(groups = {"cls1", "other"}, dependsOnMethods = "cls1test1", alwaysRun = true)
public void cls1test2() {
}
In class 2
@Test(groups = {"cls2", "other"}, dependsOnGroups = "cls1", alwaysRun = true)
public void cls2test1() {
}

@Test(groups = {"cls2", "other"}, dependsOnMethods = "cls2test1", dependsOnGroups = "cls1", alwaysRun = true)
public void cls2test2() {
}
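(Note: alwaysRun = true makes a dependent method run even when the methods or groups it depends on have failed; without it, TestNG marks the dependent tests as skipped.)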
There is an easy (whilst hacky) workaround for this if you are comfortable with your first approach:
At first we just put tests that belong together into one class, and used dependsOnMethods to make them run in the right order. This used to work in TestNG V5, but in V6 TestNG will sometimes interleave tests from different classes (while respecting the ordering imposed by dependsOnMethods). There does not seem to be a way to tell TestNG "Always run tests from one class together".
We had a similar problem: we needed our tests to run class-wise because we couldn't guarantee that the test classes wouldn't interfere with each other.
This is what we did:
Put a
@Test( dependsOnGroups = { "dummyGroupToMakeTestNGTreatThisAsDependentClass" } )
annotation on an abstract test class or interface that all your tests inherit from.
This will put all your methods in the "first group" (a group in the sense described in this paragraph, not a TestNG group). Inside each group the ordering is class-wise.
Thanks to Cedric Beust, who provided a very quick answer for this.
Edit:
The group dummyGroupToMakeTestNGTreatThisAsDependentClass actually has to exist, but you can just add a dummy test case for that purpose.
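A minimal sketch of that dummy test case (class and method names are arbitrary):

import org.testng.annotations.Test;

// Exists only so the dummy group referenced by the base class's
// dependsOnGroups actually contains a test.
public class DummyGroupHolder {
    @Test(groups = "dummyGroupToMakeTestNGTreatThisAsDependentClass")
    public void noop() {
        // intentionally empty
    }
}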

Running GWT 2.4 Tests With JUnitCore

I have several GWTTestCases in my test suite, and I'm currently using a homegrown testing script written in Java that runs the tests as follows:
for (Class<?> testClass : allTestClasses) {
    final JUnitCore core = new JUnitCore();
    final Result result = core.run(testClass);
}
Now, the first GWT test will pass and all subsequent tests will fail. It doesn't matter which test runs first, and I can run the tests successfully from the command line.
Looking through the logs, the specific error is typically like:
java.lang.RuntimeException: deepthought.test.JUnit:package.GwtTestCaseClass.testMethod: could not instantiate the requested class
I think it has something to do with GWTTestCase static state, but am unsure. If I do one run where I pass all the testClasses to the core, they all pass, and then subsequently any individual test will pass.
My guess is that GWT compiles and caches the tests you are running, storing them based on the module. But in this case the compiler misses my other test cases, because it doesn't see a dependency on them. Then for the next test it comes back to the cache, hits it, and fails to find the test I want.
Any thoughts on a workaround, other than just passing all the tests in at once?
The workaround I discovered is to first add all the GWTTestCase classes to a GWTTestSuite, which you can then throw away. You don't incur the cost of compilation at this point, but it somehow makes GWT aware of all the test cases, and so when you compile the first one...they all get compiled.
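A rough sketch of that warm-up step, assuming the homegrown runner from the question (GWTTestSuite comes from com.google.gwt.junit.tools; the primeGwt name is made up):

import com.google.gwt.junit.tools.GWTTestSuite;
import junit.framework.TestCase;

public class GwtWarmup {
    // Registering every test class with a throwaway GWTTestSuite makes GWT
    // aware of all GWTTestCases, so the first compilation covers them all.
    static void primeGwt(Iterable<Class<?>> allTestClasses) {
        GWTTestSuite warmup = new GWTTestSuite();
        for (Class<?> testClass : allTestClasses) {
            if (TestCase.class.isAssignableFrom(testClass)) {
                warmup.addTestSuite(testClass.asSubclass(TestCase.class));
            }
        }
        // 'warmup' is never run; the per-class JUnitCore loop now passes.
    }
}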
If you ask me, this is a GWT bug.

Do tests belong in-tree alongside the code, or in a separate test tree?

When designing a new testing system, should I keep the tests in-tree with the source code, for example in a test sub-directory of the application root, so that the tests for a given branch of the code run only against that branch?
OR
Should I keep a separate source tree for the tests, and run the latest tests against all branches of the project?
Our team, and most teams where I work put unit tests in the same tree as the code, but put the rest of the tests (aka the tests the test team writes) in a separate tree. Organizationally, the test tree mirrors the code tree (i.e. the directory structure is the same (mostly)).
But we write a lot of test code.
I do it both ways. It depends:
If the system under test is distributed as binary, I keep tests next to the code. I prefer this because it makes it easier to switch between tests and source. It also increases the visibility of tests to other developers, especially those who aren't yet writing their own tests.
However, if the system under test is distributed as source code, I keep tests in a parallel tree so that people can choose to get the working system only.
I think keeping them with the source code is better. It's often the case that expected behavior for a given piece of code changes, and the tests then change as well. Also, as you add functionality, you will get new tests that would fail on older code.
That said, it makes good sense to have tests in a separate directory of your source repository, so you can easily get rid of them when releasing. But they should be versioned together with the code they're testing, in any case.