Go hangs when running tests

I'm writing a web application in Go and it runs just fine. However, when running the tests for a package, the go test command just hangs (it does nothing, not even terminate).
I have a function for testing which starts the server:
func mkroutes(t *testing.T, f func()) {
    handlerRegistry = handlerList([]handler{})
    middlewareRegistry = []middleware{}
    if testListener == nil {
        _testListener, err := net.Listen("tcp", ":8081")
        testListener = _testListener
        if err != nil {
            fmt.Printf("[Fail] could not start tcp server:\n%s\n", err)
        }
    }
    f()
    go func() {
        if err := serve(testListener, nil); err != nil {
            fmt.Printf("[Fail] the server failed to start:\n%s\n", err)
            t.FailNow()
        }
    }()
}
If I change the port that it listens on, everything runs fine (all the tests fail, though, since they can't connect to the server). This shows that the code is indeed running, but if I log something in the function, or even in an init function, while the port is correct, it hangs again.
After I force the go test command to terminate manually, it prints whatever I logged and then exits. This leads me to believe that something else is blocking the main thread before execution reaches the log, but that seems impossible, since changing the port makes a difference.
The package doesn't have any init functions and the only code that runs on startup is var sessionStore = sessions.NewCookieStore([]byte("test-key")) which is using the package github.com/gorilla/sessions. When I run the program normally, this causes no problems, and I don't see anything in the package's source that would cause it to behave differently in testing.
That's the only package outside the standard library which is imported.
I can provide any other code in the package, but I have no idea what's relevant.

First: note that go test creates, compiles and runs a test program which intercepts output from each test, so you will not see output from your tests until a test finishes (at that moment its output is printed).
Two issues with your code:
If you cannot start the TCP server, t is not notified; you only do a Printf here and continue as if everything were fine.
You call t.FailNow from a different goroutine than the one your test runs in. You must not do this. See http://golang.org/pkg/testing/#T.FailNow
Fixing those might at least show what else goes wrong. Also: take a look at how package net/http does its testing.
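To make that concrete, here is a minimal sketch of the helper with those two fixes applied (same signature as above; the error channel and the short start-up wait are my assumptions, not part of the original code, and it additionally needs the time package imported):

func mkroutes(t *testing.T, f func()) {
    handlerRegistry = handlerList([]handler{})
    middlewareRegistry = []middleware{}
    if testListener == nil {
        l, err := net.Listen("tcp", ":8081")
        if err != nil {
            // Fail the test immediately; we are still on the test goroutine here.
            t.Fatalf("could not start tcp server: %s", err)
        }
        testListener = l
    }
    f()
    errc := make(chan error, 1)
    go func() {
        // Report the result back instead of calling t.FailNow from this goroutine.
        errc <- serve(testListener, nil)
    }()
    select {
    case err := <-errc:
        if err != nil {
            t.Fatalf("the server failed to start: %s", err)
        }
    case <-time.After(100 * time.Millisecond):
        // Assume the server is up if serve hasn't returned an error yet.
    }
}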

Related

Why am I getting "java.lang.NoClassDefFoundError: Could not initialize class io.mockk.impl.JvmMockKGateway" when using quarkusDev task in IntelliJ?

I am using Gradle 7.5, Quarkus 2.12.3 and MockK 1.13.3. When I run the quarkusDev task from the command line and then start continuous testing (by pressing r), all tests pass OK.
However, when I do the same from IntelliJ (as a Gradle run configuration), all tests fail with this error:
java.lang.NoClassDefFoundError: Could not initialize class io.mockk.impl.JvmMockKGateway
How can I fix that?
Masked thrown exception
After much debugging I found the problem. The thrown exception actually originates in HotSpotVirtualMachine.java and is thrown during attachment of ByteBuddy as a Java agent. Here is the relevant code:
// The tool should be a different VM to the target. This check will
// eventually be enforced by the target VM.
if (!ALLOW_ATTACH_SELF && (pid == 0 || pid == CURRENT_PID)) {
    throw new IOException("Can not attach to current VM");
}
Turning check off
So the check can be turned off by setting the ALLOW_ATTACH_SELF constant to true. The constant is set from a system property named jdk.attach.allowAttachSelf:
String s = VM.getSavedProperty("jdk.attach.allowAttachSelf");
ALLOW_ATTACH_SELF = "".equals(s) || Boolean.parseBoolean(s);
So, in my case, I simply added the following JVM argument to my Gradle file and the tests started to pass (note from the snippet above that an empty property value counts as enabled, so no =true is needed):
tasks.quarkusDev {
    jvmArgs += "-Djdk.attach.allowAttachSelf"
}
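If the tests are instead run through the plain Gradle test task (an assumption beyond what the answer above covers), the equivalent flag would presumably be set like this:

tasks.test {
    jvmArgs("-Djdk.attach.allowAttachSelf")
}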

How to increase Kotlin coroutines when running a test?

I've implemented an integration test. It runs some stuff, including two suspend functions which are run inside a launch {}. Now, for some reason, when I run more than four of my integration tests (I have six), the fifth job gets cancelled and the test fails.
This is an excerpt of the code I'm testing:
io.launch {
    temporaryStorage.storeFiles(businessProcess)
        .publishEvent(businessProcess, expectedDocumentType)
        .tapLeft { orchestrationFailure -> orchestrationFailure.handleFailure() }
}
Now the test is actually testing an endpoint. When the endpoint is called, the code I'm testing is called. The specific part that fails is the verification that a function inside the .publishEvent(...) method was called:
verify(exactly = 1) { eventPublisherMock.publish(any()) }
In the logs I see the first couple of tests run smoothly, but before it runs the test from above I see that the job got cancelled (JobImpl{Cancelled}#23edf317) and that the job is not active.
I have a producer function to produce my CoroutineDispatcher. When I raise .maxAsync() and .maxQueued() to 6 and 8 respectively, for example, it still cancels for some reason. This is the producer:
@Produces
@Singleton
@Named("IO")
fun ioDispatcher(coroutinesDispatcherConfig: CoroutinesDispatcherConfig): CoroutineDispatcher =
    SmallRyeManagedExecutor.builder()
        .withNewExecutorService()
        .maxAsync(coroutinesDispatcherConfig.ioMaxAsync())
        .maxQueued(coroutinesDispatcherConfig.ioMaxWaiting())
        .build()
        .asCoroutineDispatcher()
Does anyone know how I should handle this?

Elrond mandos test: elrond_wasm_debug::mandos_rs passes, however erdpy contract test fails

I'm writing test cases for my NFT smart contract (SC). When I check the state of the SC after creating my NFT, I expect to see a variable (next_index_to_mint: u64, which I increase by 1 for every new NFT) updated.
So I'm running the test using the command:
$ erdpy contract test
INFO:projects.core:run_tests.project: /Users/<user>/sc_nft
INFO:myprocess:run_process: ['/Users/<user>/elrondsdk/vmtools/mandos-test', '/Users/<user>/sc_nft/mandos'], in folder: None
CRITICAL:cli:External process error:
Command line: ['/Users/<user>/elrondsdk/vmtools/mandos-test', '/Users/<user>/sc_nft/mandos']
Output: Scenario: buy_nft.scen.json ... FAIL: wrong account storage for account "sc:nft-minter":
for key 0x6e657874496e646578546f4d696e74 (str:nextIndexToMint): Want: "0x02". Have: ""
Scenario: create_nft.scen.json ... FAIL: wrong account storage for account "sc:nft-minter":
for key 0x6e657874496e646578546f4d696e74 (str:nextIndexToMint): Want: "0x02". Have: ""
Scenario: init.scen.json ... ok
Done. Passed: 1. Failed: 2. Skipped: 0.
ERROR: some tests failed
However, when I run the test using the elrond_wasm_debug::mandos_rs function with the create_nft.scen.json file, it passes.
use elrond_wasm_debug::*;

fn world() -> BlockchainMock {
    let mut blockchain = BlockchainMock::new();
    blockchain.set_current_dir_from_workspace("");
    blockchain.register_contract_builder("file:output/test.wasm", nft_auth_card::ContractBuilder);
    blockchain
}

#[test]
fn create_nft() {
    elrond_wasm_debug::mandos_rs("mandos/create_nft.scen.json", world());
}
BTW, if you want to add this to the NFT SC example in the tests/ folder, that would be great.
I tried to put an incorrect value, and it failed as expected. So my question is: how could it be possible that it works using mandos_rs (elrond_wasm_debug) but not erdpy?
running 1 test
thread 'create_nft' panicked at 'bad storage value. Address: sc:nft-minter. Key: str:nextIndexToMint. Want: "0x04". Have: 0x02', /Users/<user>/elrondsdk/vendor-rust/registry/src/github.com-1ecc6299db9ec823/elrond-wasm-debug-0.28.0/src/mandos_step/check_state.rs:56:21
Here is the code (I use the default NFT SC example):
const NFT_INDEX: u64 = 0;

fn create_nft_with_attributes<T: TopEncode>(...) -> u64 {
    ...
    self.next_index_to_mint().set_if_empty(&NFT_INDEX);
    let next_index_to_mint = self.next_index_to_mint().get();
    self.next_index_to_mint().set(next_index_to_mint + 1);
    ...
}

#[storage_mapper("nextIndexToMint")]
fn next_index_to_mint(&self) -> SingleValueMapper<u64>;
Short answer: most likely you haven't re-built your contract before testing it with erdpy.
Long answer: currently there are two ways mandos tests are executed, as you've exemplified in your case:
Run tests directly from rust through mandos_rs
Run tests through erdpy (which in turn uses mandos_go)
These two frameworks (mandos_rs and mandos_go) work in different ways:
mandos_rs: this framework runs on your Rust code directly, testing it against a mocked VM and a mocked blockchain in the background. Therefore, it's not necessary to build your contract when using mandos_rs.
mandos_go: this framework tests your compiled contract against a REAL VM with a mocked blockchain in the background, so it's necessary to build your latest changes into .wasm bytecode (e.g. erdpy contract build) before running the tests via mandos_go, as this compiled file will be loaded by the VM like in a real use scenario.
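In other words, the usual flow for the erdpy path would be to rebuild first and then test, along these lines (both commands appear above; exact paths and flags depend on your project):

$ erdpy contract build
$ erdpy contract test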

Static Hangfire RecurringJob methods in LINQPad are not behaving

I have a script in LINQPad that looks like this:
var serverMode = EnvironmentType.EWPROD;
var jobToSchedule = JobType.ABC;
var hangfireCs = GetConnectionString(serverMode);

JobStorage.Current = new SqlServerStorage(hangfireCs);

Action<string, string, XElement> createOrReplaceJob =
    (jobName, cronExpression, inputPackage) =>
    {
        RecurringJob.RemoveIfExists(jobName);
        RecurringJob.AddOrUpdate(
            jobName,
            () => new BTR.Evolution.Hangfire.Schedulers.JobInvoker().Invoke(
                jobName,
                inputPackage,
                null,
                JobCancellationToken.Null),
            cronExpression, TimeZoneInfo.Local);
    };

// pseudo code to prepare inputPackage for client ABC...
createOrReplaceJob("ABC.CustomReport.SurveyResults", "0 2 * * *", inputPackage);

JobStorage.Current.GetConnection().GetRecurringJobs().Where(j => j.Id.StartsWith(jobToSchedule.ToString())).Dump("Scheduled Jobs");
I have to schedule in both QA and PROD. To do that, I toggle the serverMode variable and run the script once for EWPROD and once for EWQA. This all worked fine until recently; unfortunately, I don't know exactly when it changed, because I don't always have to run in both environments.
I did purchase/install LINQPad 7 two days ago to look at some C# 10 features and I'm not sure if that affected it.
But here is the problem/flow:
Run it for EWQA and everything works.
Run it for EWPROD and the script (Hangfire components) seem to run in a mix of QA and PROD.
When I run it the 'second time' for EWPROD, I've confirmed:
The hangfireCs (connection string) is right (pointing to PROD) and it is assigned to JobStorage.Current
The query at the end of the script, JobStorage.Current.GetConnection().GetRecurringJobs() uses the right connection.
The RecurringJob.* methods inside the createOrReplaceJob Action use the connection from the previous run (i.e. EWQA). If I monitor my QA Hangfire db, I see the job removed and added.
Temporary workaround:
Run it for EWQA and everything works.
Restart LINQPad or use 'Cancel and Reset All Queries' method
Run it for EWPROD and now everything works.
So I'm at a loss as to where the issue might lie. I feel like my upgrade/install of LINQPad 7 might be causing problems, but I'm not sure if there is a different way to make the RecurringJob.* static methods use the 'updated' connection string.
Any ideas on why the restart or reset is now needed?
LINQPad - 5.44.02
Hangfire.Core - 1.7.17
Hangfire.SqlServer - 1.7.17
This is caused by your script (or a library that you call) caching something statically, and not cleaning up between executions.
Either clear/dispose objects when you're done (e.g., JobStorage.Current?) or tell LINQPad not to re-use the process between executions, by adding Util.NewProcess=true; to your script.
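For the second option, a minimal sketch (assuming the script from the question; Util.NewProcess is LINQPad's built-in switch for this):

// At the top of the LINQPad script: force a fresh process for each execution,
// so static state such as Hangfire's JobStorage cannot leak between runs.
Util.NewProcess = true;

var serverMode = EnvironmentType.EWPROD;
// ... rest of the script unchanged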

Time out for test cases in googletest

Is there a way in gtest to set a timeout for individual test cases, or even whole tests?
For example I would like to do something like:
EXPECT_TIMEOUT(5 seconds, myFunction());
I found this googletest issue, filed as 'Type: Enhancement' on Dec 09 2010:
https://code.google.com/p/googletest/issues/detail?id=348
It looks like there is no gtest way, judging from that post.
I am probably not the first trying to figure out a way to do this. The only way I can think of is to run the function in a child thread; if it does not return by the time limit, the parent thread kills it and reports a timeout error.
Is there any way to do this without using threads? Or any other way?
I just came across this situation. I wanted to add a failing test for my reactor, which never finishes (it has to fail first), but I don't want the test to run forever. I followed your link, but still no joy there. So I decided to use some of the C++14 features, which make it relatively simple. I implemented the timeout like this:
TEST(Init, run)
{
    // Step 1: set up my code to run.
    ThorsAnvil::Async::Reactor reactor;
    std::unique_ptr<ThorsAnvil::Async::Handler> handler(new TestHandler("test/data/input"));
    ThorsAnvil::Async::HandlerId id = reactor.registerHandler(std::move(handler));

    // Step 2: run the code async.
    auto asyncFuture = std::async(
        std::launch::async, [&reactor]() {
            reactor.run(); // The TestHandler should call reactor.shutDown()
                           // when it is finished. If it does not, then
                           // the test failed.
        });

    // Step 3: do your timeout test.
    EXPECT_TRUE(asyncFuture.wait_for(std::chrono::milliseconds(5000)) != std::future_status::timeout);

    // Step 4: clean up your resources.
    reactor.shutDown(); // This will allow run() to exit and the thread to die.
}
Now that I have my failing test I can write the code that fixes the test.