cucumber test for mongoid_grid - ruby-on-rails-3

I am using the mongoid_grid gem to store my files. It works fine in development, but while running the Cucumber tests I get this error:
Database command 'filemd5' failed: {"errmsg"=>"exception: best guess plan requested, but scan and order required: query: { files_id: ObjectId('4d8728605835068603000024') } order: { files_id: 1, n: 1 } choices: { $natural: 1 } ", "code"=>13284, "ok"=>0.0}
I also tried
db.fs.chunks.ensureIndex({files_id:1, n:1}, {unique: true});
but it does not seem to help. When I run one scenario at a time the tests pass, but when I run all of them at once they fail with the above error. Am I missing something here?


Jacoco Test Coverage Verification not getting executed

I am trying to add test coverage verification to my Kotlin Gradle project. I set a very high minimum value (0.99) so the build would fail, but this task is never executed.
tasks.jacocoTestCoverageVerification {
    violationRules {
        rule {
            limit {
                minimum = "0.99".toBigDecimal()
            }
        }
    }
}
The test coverage report is generated successfully by the coverageReport task (its definition is not included here).
tasks.withType<Test> {
    finalizedBy(coverageReport) // report is always generated after tests run
}
According to the official JaCoCo violation rules documentation:
Any violation of the declared rules would automatically result in a failed build when executing the check task.
So I am under the assumption that the test coverage verification should be triggered automatically?
I expected jacocoTestCoverageVerification to execute without me having to call it. I also added the following rule to the jacocoTestCoverageVerification task, but the build still doesn't fail, so not writing all the rules is unlikely to be the issue.
rule {
    isEnabled = true
    element = "CLASS"
    includes = listOf("org.gradle.*")
    limit {
        counter = "LINE"
        value = "TOTALCOUNT"
        maximum = "0.99".toBigDecimal()
    }
}
I also tried:
tasks.jacocoTestCoverageVerification {
    violationRules {
        rule {
            classDirectories.setFrom(sourceSets.main.get().output.asFileTree.matching {
            })
            isEnabled = true
            limit {
                minimum = "0.99".toBigDecimal()
            }
        }
    }
}
Can anyone please help me spot what I am missing?
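For reference, a minimal sketch of what wiring the verification task into check explicitly would look like (assuming the default task names created by the jacoco plugin); the documentation quote above made me think this should not be necessary:
tasks.check {
    // sketch: attach the coverage verification to check (and therefore build)
    dependsOn(tasks.jacocoTestCoverageVerification)
}
With that wiring, bin/gradle check (and therefore build) should fail once the 0.99 minimum is violated.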
EDIT:
Gradle version
bin/gradle --version
------------------------------------------------------------
Gradle 7.6
------------------------------------------------------------
Kotlin: 1.7.10
Groovy: 3.0.13
Ant: Apache Ant(TM) version 1.10.11 compiled on July 10 2021
JVM: 17.0.5 (Eclipse Adoptium 17.0.5+8)
OS: Mac OS X 13.2 aarch64
The gradle build command:
bin/gradle build
Build logs
Execution optimizations have been disabled for task ':codeCoverageReport' to ensure correctness due to the following reasons:
- Gradle detected a problem with the following location: '/Users/Development/myrepo/build/reports/jacoco/codeCoverageReport/codeCoverageReport.xml'. Reason: Task ':validateDependenciesKtFile' uses this output of task ':codeCoverageReport' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.6/userguide/validation_problems.html#implicit_dependency for more details about this problem.
- Gradle detected a problem with the following location: '/Users/Development/myrepo/build/reports/jacoco/codeCoverageReport/html'. Reason: Task ':validateDependenciesKtFile' uses this output of task ':codeCoverageReport' without declaring an explicit or implicit dependency. This can lead to incorrect results being produced, depending on what order the tasks are executed. Please refer to https://docs.gradle.org/7.6/userguide/validation_problems.html#implicit_dependency for more details about this problem.
These indicate that execution optimizations are disabled, so it doesn't seem like a red flag?
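For completeness, a minimal sketch of declaring the dependency the warning asks for (task names taken from the log above; validateDependenciesKtFile appears to be a task specific to this build):
tasks.named("validateDependenciesKtFile") {
    // sketch: declare that this task consumes the output of codeCoverageReport
    dependsOn("codeCoverageReport")
}
That should only address the ordering warning, though; it seems separate from jacocoTestCoverageVerification never running.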

Codeception test returning [Facebook\WebDriver\Exception\UnknownErrorException] InternalError: too much recursion

The test fails with [Facebook\WebDriver\Exception\UnknownErrorException] InternalError: too much recursion when trying to execute $I->see() on a specific piece of text.
When analyzing the page source I see there are 250k lines of HTML, so it's causing some sort of bottleneck. Is there a way to identify this issue without having the whole test fail?
I tried wrapping it in try/catch, but it didn't help.
UPDATE
Browser: Geckodriver
Version: Codeception PHP Testing Framework v2.5.6
Code
try {
    $this->see($text);
    $isFound = true;
} catch (\PHPUnit_Framework_ExpectationFailedException $e) {
    $isFound = false;
}
Updating Codeception (2.5.6 is three years old) fixed the issue. The markup was over 10 MB per page, which is abnormally high.

Elrond mandos test: elrond_wasm_debug::mandos_rs passes, however erdpy contract test fails

I'm writing test cases for my NFT smart contract (SC). When I check the state of the SC after creating my NFT, I expect to see a variable (next_index_to_mint: u64, which I increase by 1 with every new NFT) being updated.
So I'm running the test using the command:
$ erdpy contract test
INFO:projects.core:run_tests.project: /Users/<user>/sc_nft
INFO:myprocess:run_process: ['/Users/<user>/elrondsdk/vmtools/mandos-test', '/Users/<user>/sc_nft/mandos'], in folder: None
CRITICAL:cli:External process error:
Command line: ['/Users/<user>/elrondsdk/vmtools/mandos-test', '/Users/<user>/sc_nft/mandos']
Output: Scenario: buy_nft.scen.json ... FAIL: wrong account storage for account "sc:nft-minter":
for key 0x6e657874496e646578546f4d696e74 (str:nextIndexToMint): Want: "0x02". Have: ""
Scenario: create_nft.scen.json ... FAIL: wrong account storage for account "sc:nft-minter":
for key 0x6e657874496e646578546f4d696e74 (str:nextIndexToMint): Want: "0x02". Have: ""
Scenario: init.scen.json ... ok
Done. Passed: 1. Failed: 2. Skipped: 0.
ERROR: some tests failed
However, when I run the test using the elrond_wasm_debug::mandos_rs function with the create_nft.scen.json file, it passes.
use elrond_wasm_debug::*;

fn world() -> BlockchainMock {
    let mut blockchain = BlockchainMock::new();
    blockchain.set_current_dir_from_workspace("");
    blockchain.register_contract_builder("file:output/test.wasm", nft_auth_card::ContractBuilder);
    blockchain
}

#[test]
fn create_nft() {
    elrond_wasm_debug::mandos_rs("mandos/create_nft.scen.json", world());
}
BTW, if you want to add this to the NFT SC example in the tests/ folder, that would be great.
I tried putting an incorrect value, and it failed as expected. So my question is: how can it work using mandos through elrond_wasm_debug but not through erdpy?
running 1 test
thread 'create_nft' panicked at 'bad storage value. Address: sc:nft-minter. Key: str:nextIndexToMint. Want: "0x04". Have: 0x02', /Users/<user>/elrondsdk/vendor-rust/registry/src/github.com-1ecc6299db9ec823/elrond-wasm-debug-0.28.0/src/mandos_step/check_state.rs:56:21
Here is the code (I use the default NFT SC example):
const NFT_INDEX: u64 = 0;

fn create_nft_with_attributes<T: TopEncode>(...) -> u64 {
    ...
    self.next_index_to_mint().set_if_empty(&NFT_INDEX);
    let next_index_to_mint = self.next_index_to_mint().get();
    self.next_index_to_mint().set(next_index_to_mint + 1);
    ...
}

#[storage_mapper("nextIndexToMint")]
fn next_index_to_mint(&self) -> SingleValueMapper<u64>;
Short answer: most likely you haven't re-built your contract before testing it with erdpy.
Long answer: there are currently two ways mandos tests are executed, as you've shown in your case:
Run the tests directly from Rust through mandos_rs
Run the tests through erdpy (which in turn uses mandos_go)
These two frameworks (mandos_rs and mandos_go) work in different ways:
mandos_rs: this framework runs on your Rust code directly and tests it against a mocked VM and mocked blockchain in the background. Therefore, it's not necessary to build your contract when using mandos_rs.
mandos_go: this framework tests your compiled contract against a real VM with a mocked blockchain in the background, so it's necessary to build your latest changes into .wasm bytecode (e.g. erdpy contract build) before running the tests via mandos_go, as this compiled file will be loaded by the VM just like in a real scenario.

PHPUnit giving error at second test function

I am trying to write tests for my database with PHPUnit, and I migrate the database to an in-memory database.
The first test runs just fine:
/** @test */
public function it_fetches_a_single_ano_letivo()
{
    $this->makeAnoLetivo();
    $this->getJson('/v1/anos-letivos');
    $this->assertResponseOk();
}
but the second test fails, and it's exactly the same as the first one:
/** @test */
public function it_fetches_anos_letivos()
{
    $this->makeAnoLetivo();
    $this->getJson('/v1/anos-letivos');
    $this->assertResponseOk();
}
Here is the makeAnoLetivo function:
private function makeAnoLetivo($anoLetivoFields = [])
{
    while ($this->times--)
    {
        $ano1 = $this->fake->year;
        $anoLetivo = array_merge([
            'ano1' => $ano1 + 0,
            'ano2' => $ano1 + 1
        ], $anoLetivoFields);
        AnoLetivo::create($anoLetivo);
    }
}
and here is the PHPUnit output:
Configuration read from {{PATH_TO_PROJECT}}/phpunit.xml
..E
Time: 2.62 seconds, Memory: 23.25Mb
There was 1 error:
1) AnosLetivosTest::it_fetches_anos_letivos
Illuminate\Database\QueryException: SQLSTATE[23000]: Integrity constraint violation: 19 anos_letivos.id may not be NULL (SQL: insert into "anos_letivos" ("ano1", "ano2", "updated_at", "created_at") values (2009, 2010, 2015-03-27 18:41:59, 2015-03-27 18:41:59))
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Connection.php:620
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Connection.php:576
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Connection.php:359
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Connection.php:316
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Query/Builder.php:1702
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Builder.php:933
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php:1603
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php:1603
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php:1501
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php:544
{{PATH_TO_PROJECT}}/tests/AnosLetivosTest.php:50
{{PATH_TO_PROJECT}}/tests/AnosLetivosTest.php:32
phar:///usr/local/bin/phpunit/phpunit/TextUI/Command.php:152
phar:///usr/local/bin/phpunit/phpunit/TextUI/Command.php:104
Caused by
PDOException: SQLSTATE[23000]: Integrity constraint violation: 19 anos_letivos.id may not be NULL
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Connection.php:358
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Connection.php:612
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Connection.php:576
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Connection.php:359
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Connection.php:316
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Query/Builder.php:1702
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Builder.php:933
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php:1603
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php:1603
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php:1501
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php:544
{{PATH_TO_PROJECT}}/tests/AnosLetivosTest.php:50
{{PATH_TO_PROJECT}}/tests/AnosLetivosTest.php:32
phar:///usr/local/bin/phpunit/phpunit/TextUI/Command.php:152
phar:///usr/local/bin/phpunit/phpunit/TextUI/Command.php:104
FAILURES!
Tests: 3, Assertions: 5, Errors: 1.
So the first function runs just fine, but the second one, which is identical, fails...
Also, if I create a third (identical) test, only the first one will pass.
EDIT 1:
So it inserts fine in the first test, then the DB is rolled back and migrated again for the next test, and the insert now says the ID may not be NULL. It seems the create method no longer knows how to insert into the database after the first test... I still don't know what causes this; the migration is correct and the rollback works fine too...
EDIT 2:
I tried running the tests against the production database and it works just fine, so the problem must be with the in-memory database or its configuration. But I don't know what the problem is, because the first test is green and inserts the data without problems; I can even insert 10 items in the first test and it does what it should. But the second test shows the error above.
It looks like the database insert command is failing after the first test, which could happen for a number of reasons.
I think you should consider using https://github.com/laracasts/TestDummy - it is designed to let you have fake data for all your tests. It will also automatically reset your database between each test (using transactions).
It's a wonderful tool - give it a go.
So the solution was to write these lines in the setUp() method:
AnoLetivo::flushEventListeners();
AnoLetivo::boot();
The problem may be in the Laravel framework.

Why doesn't my test run tearDownAfterTestClass when it fails

In a test I am writing, setUpBeforeTests creates a new customer in the database who is then used to perform the tests, so naturally, when the tests finish, I should get rid of this test customer in tearDownAfterTestClass so that I can create them anew when I rerun the tests and not get any false positives.
When the tests all run fine I have no problem, but if a test fails and I go to rerun it, my setUpBeforeTests fails, because I check for MySQL errors in it like this:
try
{
    if (!mysqli_query($connection, $query))
    {
        $this->assertTrue(false);
    }
}
catch (Exception $exc)
{
    $msg = '[tearDownAfterTestClass] Exception Error' . PHP_EOL . PHP_EOL;
    $msg .= 'Could not run query - ' . mysqli_error($connection) . PHP_EOL;
    $this->fail($msg);
}
The error I get is a primary key violation, which is expected, because I'm trying to create a new customer using the same data (the primary key is on email, which is also used to log in). However, that means that when the test failed, it didn't run tearDownAfterTestClass.
Now I could just move everything from tearDownAfterTestClass to the start of setUpBeforeTests, but to me that seems like bad programming, since it defeats the purpose of even having tearDownAfterTestClass.
So I am wondering: why isn't my tearDownAfterTestClass running when a test fails?
NOTE: The database is a fundamental part of the system I'm testing, and the database and system are on a separate development environment, not the live one. The backup files for the database are almost 2 GB and take almost half an hour to restore. The purpose of the teardown is to remove any data we have added because of the tests, so that we don't have to restore the database every time we run them.