I am trying to add test coverage verification to my Kotlin Gradle project. I set a very high minimum (0.99) so that the build would fail, but the task is not being executed.
tasks.jacocoTestCoverageVerification {
    violationRules {
        rule {
            limit {
                minimum = "0.99".toBigDecimal()
            }
        }
    }
}
The test coverage report is generated successfully by the coverageReport task (its definition is omitted from the post):
tasks.withType<Test> {
    finalizedBy(coverageReport) // the report is always generated after the tests run
}
According to the official JaCoCo violation rules documentation:
Any violation of the declared rules would automatically result in a failed build when executing the check task.
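My reading of that is that check should run (and fail on) the verification task, i.e. something equivalent to this explicit wiring, which I have not added myself (a sketch in the Kotlin DSL):
tasks.check {
    // Sketch only: the dependency I would expect the JaCoCo plugin to set up for me
    dependsOn(tasks.jacocoTestCoverageVerification)
}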
So I am under the assumption that the coverage verification should be triggered automatically, i.e. that jacocoTestCoverageVerification would execute without me having to call it explicitly. I also added the following rule to the jacocoTestCoverageVerification task, but the build still does not fail, so an incomplete rule definition is unlikely to be the issue.
rule {
    isEnabled = true
    element = "CLASS"
    includes = listOf("org.gradle.*")
    limit {
        counter = "LINE"
        value = "TOTALCOUNT"
        maximum = "0.99".toBigDecimal()
    }
}
I also tried:
tasks.jacocoTestCoverageVerification {
    // filter the class directories the verification runs against
    classDirectories.setFrom(sourceSets.main.get().output.asFileTree.matching {
    })
    violationRules {
        rule {
            isEnabled = true
            limit {
                minimum = "0.99".toBigDecimal()
            }
        }
    }
}
Can anyone help me spot what I am missing?
EDIT:
Gradle version
bin/gradle --version
------------------------------------------------------------
Gradle 7.6
------------------------------------------------------------
Kotlin: 1.7.10
Groovy: 3.0.13
Ant: Apache Ant(TM) version 1.10.11 compiled on July 10 2021
JVM: 17.0.5 (Eclipse Adoptium 17.0.5+8)
OS: Mac OS X 13.2 aarch64
The Gradle build command:
bin/gradle build
Build logs
Execution optimizations have been disabled for task ':codeCoverageReport' to ensure correctness due to the following reasons:
- Gradle detected a problem with the following location: '/Users/Development/myrepo/build/reports/jacoco/codeCoverageReport/codeCoverageReport.xml'. Reason: Task ':validateDependenciesKtFile' uses this output of task ':codeCoverageReport' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.6/userguide/validation_problems.html#implicit_dependency for more details about this problem.
- Gradle detected a problem with the following location: '/Users/Development/myrepo/build/reports/jacoco/codeCoverageReport/html'. Reason: Task ':validateDependenciesKtFile' uses this output of task ':codeCoverageReport' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.6/userguide/validation_problems.html#implicit_dependency for more details about this problem.
These only indicate that execution optimizations have been disabled, so they do not seem like a red flag?
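If those warnings did need addressing, I believe the usual fix is to declare the dependency explicitly (a sketch using the task names from the log; not something I have tried):
tasks.named("validateDependenciesKtFile") {
    // make the implicit relationship explicit so Gradle can keep its execution optimizations
    dependsOn("codeCoverageReport") // or mustRunAfter, if only ordering is required
}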
I am using Gradle 7.5, Quarkus 2.12.3 and MockK 1.13.3. When I run the quarkusDev task from the command line and then start continuous testing (by pressing r), all tests pass.
However, when I do the same from IntelliJ (as a Gradle run configuration), all tests fail with this error:
java.lang.NoClassDefFoundError: Could not initialize class io.mockk.impl.JvmMockKGateway
How can I fix that?
Masked thrown exception
After much debugging I found the problem. The thrown exception actually originates in HotSpotVirtualMachine.java and is thrown during attachment of ByteBuddy as a Java agent. Here is the relevant code:
// The tool should be a different VM to the target. This check will
// eventually be enforced by the target VM.
if (!ALLOW_ATTACH_SELF && (pid == 0 || pid == CURRENT_PID)) {
    throw new IOException("Can not attach to current VM");
}
Turning the check off
So the check can be turned off by setting the ALLOW_ATTACH_SELF constant to true. The constant is initialised from a system property named jdk.attach.allowAttachSelf:
String s = VM.getSavedProperty("jdk.attach.allowAttachSelf");
ALLOW_ATTACH_SELF = "".equals(s) || Boolean.parseBoolean(s);
So, in my case, I simply added the following JVM argument to my Gradle build file and the tests started to pass:
tasks.quarkusDev {
    jvmArgs += "-Djdk.attach.allowAttachSelf"
}
I'm writing test cases for my NFT smart contract (SC). When I check the state of the SC after creating my NFT, I expect to see a variable (next_index_to_mint: u64, which I increase by 1 for every new NFT) updated.
So I'm running the test using the command:
$ erdpy contract test
INFO:projects.core:run_tests.project: /Users/<user>/sc_nft
INFO:myprocess:run_process: ['/Users/<user>/elrondsdk/vmtools/mandos-test', '/Users/<user>/sc_nft/mandos'], in folder: None
CRITICAL:cli:External process error:
Command line: ['/Users/<user>/elrondsdk/vmtools/mandos-test', '/Users/<user>/sc_nft/mandos']
Output: Scenario: buy_nft.scen.json ... FAIL: wrong account storage for account "sc:nft-minter":
for key 0x6e657874496e646578546f4d696e74 (str:nextIndexToMint): Want: "0x02". Have: ""
Scenario: create_nft.scen.json ... FAIL: wrong account storage for account "sc:nft-minter":
for key 0x6e657874496e646578546f4d696e74 (str:nextIndexToMint): Want: "0x02". Have: ""
Scenario: init.scen.json ... ok
Done. Passed: 1. Failed: 2. Skipped: 0.
ERROR: some tests failed
However, when I run the test using the elrond_wasm_debug::mandos_rs function with the create_nft.scen.json file, it passes.
use elrond_wasm_debug::*;

fn world() -> BlockchainMock {
    let mut blockchain = BlockchainMock::new();
    blockchain.set_current_dir_from_workspace("");
    blockchain.register_contract_builder("file:output/test.wasm", nft_auth_card::ContractBuilder);
    blockchain
}

#[test]
fn create_nft() {
    elrond_wasm_debug::mandos_rs("mandos/create_nft.scen.json", world());
}
By the way, if you want to add this to the NFT SC example in the tests/ folder, that would be great.
I tried putting an incorrect value, and it failed as expected (output below). So my question is: how can it work with mandos via elrond_wasm_debug but not with erdpy?
running 1 test
thread 'create_nft' panicked at 'bad storage value. Address: sc:nft-minter. Key: str:nextIndexToMint. Want: "0x04". Have: 0x02', /Users/<user>/elrondsdk/vendor-rust/registry/src/github.com-1ecc6299db9ec823/elrond-wasm-debug-0.28.0/src/mandos_step/check_state.rs:56:21
Here is the code (I use the default NFT SC example):
const NFT_INDEX: u64 = 0;

fn create_nft_with_attributes<T: TopEncode>(...) -> u64 {
    ...
    self.next_index_to_mint().set_if_empty(&NFT_INDEX);
    let next_index_to_mint = self.next_index_to_mint().get();
    self.next_index_to_mint().set(next_index_to_mint + 1);
    ...
}

#[storage_mapper("nextIndexToMint")]
fn next_index_to_mint(&self) -> SingleValueMapper<u64>;
Short answer: most likely you haven't rebuilt your contract before testing it with erdpy.
Long answer: there are currently two ways mandos tests are executed, as you've shown in your case:
Run the tests directly from Rust through mandos_rs
Run the tests through erdpy (which in turn uses mandos_go)
These two frameworks (mandos_rs and mandos_go) work in different ways:
mandos_rs: this framework runs on your Rust code directly and tests it against a mocked VM and a mocked blockchain in the background. Therefore, it is not necessary to build your contract when using mandos_rs.
mandos_go: this framework tests your compiled contract against a real VM with a mocked blockchain in the background, so you need to build your latest changes into .wasm bytecode (e.g. erdpy contract build) before running the tests via mandos_go, as the compiled file is loaded by the VM just like in a real use scenario.
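In practice, that simply means rebuilding the contract right before running the erdpy tests, along these lines:
erdpy contract build   # compile the latest changes to .wasm, which the real VM will load
erdpy contract test    # mandos_go now runs against the freshly built bytecode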
When I run the runAsyncWithMock test on its own, it waits for 3 seconds until the mock's execution finishes, rather than terminating immediately like the other two tests.
I was not able to figure out why.
It is interesting that:
When multiple Runnables are executed by CompletableFuture.runAsync in a row in the runAsyncWithMock test, only the first one waits; the others do not.
When there are multiple duplicated runAsyncWithMock tests, every one of them runs for 3 s when the whole specification is executed.
When using a class instance rather than a mock, the test finishes immediately.
Any idea what I got wrong?
My configuration:
macOS Mojave 10.14.6
Spock 1.3-groovy-2.4
Groovy 2.4.15
JDK 1.8.0_201
The repo containing the whole Gradle project for reproduction:
https://github.com/lobodpav/CompletableFutureMisbehavingTestInSpock
The problematic test's code:
import spock.lang.Specification
import spock.lang.Stepwise

import java.util.concurrent.CompletableFuture

@Stepwise
class SpockCompletableFutureTest extends Specification {

    def runnable = Stub(Runnable) {
        run() >> {
            println "${Date.newInstance()} BEGIN1 in thread ${Thread.currentThread()}"
            sleep(3000)
            println "${Date.newInstance()} END1 in thread ${Thread.currentThread()}"
        }
    }

    def "runAsyncWithMock"() {
        when:
        CompletableFuture.runAsync(runnable)

        then:
        true
    }

    def "runAsyncWithMockAndClosure"() {
        when:
        CompletableFuture.runAsync({ runnable.run() })

        then:
        true
    }

    def "runAsyncWithClass"() {
        when:
        CompletableFuture.runAsync(new Runnable() {
            void run() {
                println "${Date.newInstance()} BEGIN2 in thread ${Thread.currentThread()}"
                sleep(3000)
                println "${Date.newInstance()} END2 in thread ${Thread.currentThread()}"
            }
        })

        then:
        true
    }
}
This is caused by the synchronized methods in https://github.com/spockframework/spock/blob/master/spock-core/src/main/java/org/spockframework/mock/runtime/MockController.java: when a mock is invoked, the call delegates through the handle method. The Specification also uses these synchronized methods, in this case probably leaveScope, and is thus blocked by the sleeping stub method.
Since this is a thread-interleaving problem, I guess the additional closure in runAsyncWithMockAndClosure moves the execution of the stub method behind leaveScope and thus changes the ordering/blocking.
Oh, just now after writing my last comment I saw a difference:
You use @Stepwise (I didn't when I tried at first), an annotation I almost never use because it creates dependencies between feature methods (bad, bad testing practice). While I cannot say why this has the effect you describe only when running the first method, I can tell you that removing the annotation fixes it.
P.S.: With @Stepwise you cannot even execute the second or third method separately, because the runner will always run the preceding one(s) first, because - well, the specification is said to be executed step-wise. ;-)
Update: I could briefly reproduce the problem with @Stepwise, but after recompilation it no longer happens, with or without that annotation.
I just started a new Gradle project. In my previous build.gradle files I had been putting this:
compile 'org.codehaus.groovy:groovy-all:2.4.15'
testCompile 'org.spockframework:spock-core:1.1-groovy-2.4'
... and also these dependencies:
testCompile 'net.bytebuddy:byte-buddy:1.6.11'
testCompile 'org.objenesis:objenesis:2.6'
By a process of trial and error I had found that Groovy 2.4.15 with these ByteBuddy and Objenesis dependencies enabled me to mock BufferedReader. This proved useful in a console application where I wanted to mock user input to the console. The "console handler" class thus has the following field/property:
def br = new BufferedReader( new InputStreamReader(System.in, 'UTF-8' ))
used as follows in the app class to get user console input:
String response = br.readLine().trim()
... meaning that Spock tests can do this sort of thing:
def 'prompt should show help on entering H'() {
    given:
    consoleHandler.br = Mock(BufferedReader)
    consoleHandler.br.readLine() >> 'h'
i.e. simulate the entry of the letter h at the console.
... but it doesn't work with Groovy 2.5.3 and its matching Spock dependency: for this new project I put:
compile 'org.codehaus.groovy:groovy-all:2.5.3'
testCompile 'org.spockframework:spock-core:1.2-groovy-2.5'
... with the same ByteBuddy and Objenesis dependencies. I get the following test failure:
java.lang.IllegalArgumentException: Could not create type
    at net.bytebuddy.TypeCache.findOrInsert(TypeCache.java:140)
    at net.bytebuddy.TypeCache$WithInlineExpunction.findOrInsert(TypeCache.java:346)
    at net.bytebuddy.TypeCache.findOrInsert(TypeCache.java:161)
    at net.bytebuddy.TypeCache$WithInlineExpunction.findOrInsert(TypeCache.java:355)
    at org.spockframework.mock.runtime.ByteBuddyMockFactory.createMock(ByteBuddyMockFactory.java:41)
    at org.spockframework.mock.runtime.ProxyBasedMockFactory.create(ProxyBasedMockFactory.java:42)
    at org.spockframework.mock.runtime.JavaMockFactory.createInternal(JavaMockFactory.java:58)
    at org.spockframework.mock.runtime.JavaMockFactory.create(JavaMockFactory.java:38)
    at org.spockframework.mock.runtime.CompositeMockFactory.create(CompositeMockFactory.java:42)
    at org.spockframework.lang.SpecInternals.createMock(SpecInternals.java:46)
    at org.spockframework.lang.SpecInternals.createMockImpl(SpecInternals.java:294)
    at org.spockframework.lang.SpecInternals.createMockImpl(SpecInternals.java:284)
    at org.spockframework.lang.SpecInternals.MockImpl(SpecInternals.java:100)
    at core.FirstSpec.setup(first_tests.groovy:20)
Caused by: java.lang.NoSuchMethodError: net.bytebuddy.dynamic.loading.ClassInjector$UsingLookup.isAvailable()Z
    at org.spockframework.mock.runtime.ByteBuddyMockFactory.determineBestClassLoadingStrategy(ByteBuddyMockFactory.java:103)
    at org.spockframework.mock.runtime.ByteBuddyMockFactory.access$300(ByteBuddyMockFactory.java:27)
    at org.spockframework.mock.runtime.ByteBuddyMockFactory$1.call(ByteBuddyMockFactory.java:54)
    at org.spockframework.mock.runtime.ByteBuddyMockFactory$1.call(ByteBuddyMockFactory.java:43)
    at net.bytebuddy.TypeCache.findOrInsert(TypeCache.java:138)
Any Groovy über-geeks out there?
You have to upgrade byte-buddy:
testCompile 'net.bytebuddy:byte-buddy:1.8.21'
Spock version 1.1-groovy-2.4 depended on byte-buddy:1.6.5 - https://mvnrepository.com/artifact/org.spockframework/spock-core/1.1-groovy-2.4
Spock version 1.2-groovy-2.5 depends on byte-buddy:1.8.21 - https://mvnrepository.com/artifact/org.spockframework/spock-core/1.2-groovy-2.5
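Putting it together, a consistent set of dependencies would look like this (versions taken from the question and the links above):
compile 'org.codehaus.groovy:groovy-all:2.5.3'
testCompile 'org.spockframework:spock-core:1.2-groovy-2.5'
testCompile 'net.bytebuddy:byte-buddy:1.8.21'
testCompile 'org.objenesis:objenesis:2.6'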
I am using the mongoid_grid gem to store my files. It works fine in development, but while running the Cucumber tests I get this error:
Database command 'filemd5' failed: {"errmsg"=>"exception: best guess plan requested, but scan and order required: query: { files_id: ObjectId('4d8728605835068603000024') } order: { files_id: 1, n: 1 } choices: { $natural: 1 } ", "code"=>13284, "ok"=>0.0}
I also tried:
db.fs.chunks.ensureIndex({files_id:1, n:1}, {unique: true});
but it does not seem to help. When I run one scenario at a time the tests pass, but when I run all of them at once they fail with the above error. Am I missing something here?