Why doesn't my test run tearDownAfterTestClass when it fails - Selenium

In a test I am writing, setUpBeforeTests creates a new customer in the database, who is then used to perform the tests. Naturally, when the tests finish I should get rid of this test customer in tearDownAfterTestClass, so that I can create them anew when I rerun the tests and not get any false positives.
When the tests all run fine I have no problem, but if a test fails and I go to rerun it, my setUpBeforeTests fails, because I check for MySQL errors in it like this:
try
{
    if(!mysqli_query($connection, $query))
    {
        $this->assertTrue(false);
    }
}
catch (Exception $exc)
{
    $msg = '[tearDownAfterTestClass] Exception Error' . PHP_EOL . PHP_EOL;
    $msg .= 'Could not run query - ' . mysqli_error($connection) . PHP_EOL;
    $this->fail($msg);
}
The error I get is a primary key violation, which is expected, because I'm trying to create a new customer using the same data (the primary key is on email, which is also used to log in). That means that when the test failed, it didn't run tearDownAfterTestClass.
Now I could just move everything from tearDownAfterTestClass to the start of setUpBeforeTests, but to me that seems like bad programming, since it defeats the purpose of even having tearDownAfterTestClass.
So I am wondering: why isn't my tearDownAfterTestClass running when a test fails?
NOTE: the database is a fundamental part of the system I'm testing, and the database and system are on a separate development environment, not the live one. The backup files for the database are almost 2 GB and take almost half an hour to restore; the purpose of the teardown is to remove any data we added during the test, so that we don't have to restore the database every time we run the tests.
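For reference, a minimal sketch of the setup/teardown structure described above, assuming stock PHPUnit class-level hooks (setUpBeforeClass() / tearDownAfterClass()); the base class and the two helper methods are illustrative stand-ins, not the actual project code:
<?php

class CustomerTest extends PHPUnit_Framework_TestCase
{
    // Runs once, before the first test of this class: create the test customer.
    public static function setUpBeforeClass()
    {
        self::createTestCustomer();
    }

    // Runs once, after the last test of this class: remove the test customer
    // so the next run can recreate it with the same e-mail address.
    public static function tearDownAfterClass()
    {
        self::removeTestCustomer();
    }

    // Hypothetical helpers standing in for the real INSERT / DELETE queries.
    private static function createTestCustomer() { /* mysqli INSERT ... */ }
    private static function removeTestCustomer() { /* mysqli DELETE ... */ }
}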

Related

Static Hangfire RecurringJob methods in LINQPad are not behaving

I have a script in LINQPad that looks like this:
var serverMode = EnvironmentType.EWPROD;
var jobToSchedule = JobType.ABC;
var hangfireCs = GetConnectionString(serverMode);
JobStorage.Current = new SqlServerStorage(hangfireCs);

Action<string, string, XElement> createOrReplaceJob =
    (jobName, cronExpression, inputPackage) =>
    {
        RecurringJob.RemoveIfExists(jobName);
        RecurringJob.AddOrUpdate(
            jobName,
            () => new BTR.Evolution.Hangfire.Schedulers.JobInvoker().Invoke(
                jobName,
                inputPackage,
                null,
                JobCancellationToken.Null),
            cronExpression, TimeZoneInfo.Local);
    };

// pseudo code to prepare inputPackage for client ABC...
createOrReplaceJob("ABC.CustomReport.SurveyResults", "0 2 * * *", inputPackage);

JobStorage.Current.GetConnection().GetRecurringJobs()
    .Where(j => j.Id.StartsWith(jobToSchedule.ToString()))
    .Dump("Scheduled Jobs");
I have to schedule in both QA and PROD. To do that, I toggle the serverMode variable and run it once for EWPROD and once for EWQA. This all worked fine until recently; unfortunately, I don't know exactly when it changed, because I don't always have to run in both environments.
I did purchase/install LINQPad 7 two days ago to look at some C# 10 features and I'm not sure if that affected it.
But here is the problem/flow:
Run it for EWQA and everything works.
Run it for EWPROD and the script (Hangfire components) seem to run in a mix of QA and PROD.
When I'm running it the 'second time' in EWPROD I've confirmed:
The hangfireCs (connection string) is right (pointing to PROD) and it is assigned to JobStorage.Current
The query at the end of the script, JobStorage.Current.GetConnection().GetRecurringJobs() uses the right connection.
The RecurringJob.* methods inside the createOrReplaceJob Action use the connection from the previous run (i.e. EWQA). If I monitor my QA Hangfire db, I see the job removed and added.
Temporary workaround:
Run it for EWQA and everything works.
Restart LINQPad or use 'Cancel and Reset All Queries' method
Run it for EWPROD and now everything works.
So I'm at a loss as to where the issue might lie. I feel like my upgrade/install of LINQPad 7 might be causing problems, but I'm not sure if there is a different way to make the RecurringJob.* static methods use the 'updated' connection string.
Any ideas on why the restart or reset is now needed?
LINQPad - 5.44.02
Hangfire.Core - 1.7.17
Hangfire.SqlServer - 1.7.17
This is caused by your script (or a library that you call) caching something statically and not cleaning it up between executions.
Either clear/dispose those objects when you're done (e.g., JobStorage.Current?) or tell LINQPad not to re-use the process between executions, by adding Util.NewProcess = true; to your script.
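A minimal sketch of the second option, placed at the top of the existing script (EnvironmentType and GetConnectionString are the helpers from the question, not part of LINQPad):
// Force LINQPad to start a fresh process for every execution, so no static
// Hangfire state (such as JobStorage.Current) survives from the previous run.
Util.NewProcess = true;

var serverMode = EnvironmentType.EWPROD;
var hangfireCs = GetConnectionString(serverMode);
JobStorage.Current = new SqlServerStorage(hangfireCs);
// ... define createOrReplaceJob and schedule jobs as before ...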

Robot Framework: Re Run Failed Test Cases

In Robot Framework automation, how can I re-run a failed test case immediately after it fails, before moving on to the next test case?
For instance,
*** Test Cases ***
Login User And Create Another User
    Login User ....
    Create Another User ...

Login With New User
    Login User..

Test Function ABC
    .....
    .....
Since one test has a dependency on another test, I need to re-run the failed case immediately after it fails, before executing another test.
In one word, you can't, and you shouldn't; a case is a case, with a binary outcome. And if you have dependencies between tests, that's a smelly design; try to change it into a pre-condition (environment setup) for the second case, so it is atomic (see the sketch after the disclaimer below).
Disclaimer: this rant is about automatic re-execution within a single run. After a run has finished, RF has baked-in functionality to re-execute just the failed tests (so flaky tests are given the chance to succeed); but as I understood your question, you are not asking for the latter.
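For the pre-condition idea, a minimal sketch (the keyword names are taken from the question and may need adapting):
*** Settings ***
# Create the second user once, before any test case in this suite runs
Suite Setup       Prepare New User

*** Keywords ***
Prepare New User
    Login User
    Create Another User

*** Test Cases ***
Login With New User
    # No longer depends on the outcome of another test case
    Login User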
In two words, if you really need to do it, you can; extract the whole test case into a keyword and call it inside Wait Until Keyword Succeeds, giving it 2 (or more?) attempts:
*** Test Cases ***
Test Function ABC
    Wait Until Keyword Succeeds    2 times    100ms    The Actual Test For Function ABC

*** Keywords ***
The Actual Test For Function ABC
    .....
    .....

Redis how to make EVAL script behave like MULTI / EXEC?

One thing I noticed when playing around with Lua scripts is that, in a script containing multiple operations, if an error is thrown halfway through the execution of the script, the operations that completed before the error will actually be reflected in the database. This is in contrast to MULTI / EXEC, where either all operations succeed or fail.
For example, if I have a script like the following:
redis.call("hset", "mykey", "myfield", "val")
local expiry = someFunctionThatMightThrow()
redis.call("expire", "mykey", expiry)
I tested this and the results of the first hset call were reflected in Redis. Is there any way to make the Lua script behave so that, if any error is thrown during the script, all actions performed during that script execution are reverted?
Sample script for my comment above: on error, manually roll back. Note: syntax is not verified.
redis.call("hset", "mykey", "myfield", "val")
local expiry,error = pcall(someFunctionThatMightThrow())
if expiry ~= nil then
redis.call("expire", "mykey", expiry)
else
redis.call("hdel", "mykey", "myfield")
end

Pentaho Data Integration: Error Handling

I'm building out an ETL process with Pentaho Data Integration (CE) and I'm trying to operationalize my transformations and jobs so that they can be monitored. Specifically, I want to be able to catch any errors and then send them to an error reporting service like Honeybadger or New Relic. I understand how to do row-level error reporting, but I don't see a way to do job- or transformation-level failure reporting.
Here is an example job.
The down path is where the transformation succeeds but has row errors; there we can just filter the results and log them.
The path to the right is the case where the transformation fails altogether (e.g. the DB credentials are wrong). This is where I'm having trouble: I can't figure out how to get the error info to be sent.
How do I capture transformation failures so they can be logged?
You cannot capture job-level error details inside the job itself.
However, there are other options for monitoring.
The first option is to use database logging for transformations or jobs (see the "Log" tab in the job/transformation settings dialog). This way you always have up-to-date information about execution status, so you can, say, write a job that periodically scans the logging database and sends error reports wherever you need.
However, this option seems pretty heavyweight to develop and support, and not very flexible for further modification. So in our company we ended up monitoring at the job-execution level: when you run a job with kitchen.bat and it fails for any reason, kitchen exits with an error status, which you can easily examine and act on with whatever tools you like - .bat commands, PowerShell or (in our case) Jenkins CI.
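A minimal sketch of that approach, assuming a Windows batch wrapper (the job path and the notification step are placeholders):
@echo off
rem Run the job and capture kitchen's exit status
call kitchen.bat /file:C:\etl\my_job.kjb /level:Basic
if %ERRORLEVEL% neq 0 (
    echo Job failed with exit code %ERRORLEVEL%
    rem e.g. call a script here that posts the failure to Honeybadger / New Relic
    exit /b %ERRORLEVEL%
)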
You could use the writeToLog("e", "Message") function in the Modified Java Script step.
Documentation:
// Writes a string to the defined Kettle Log.
//
// Usage:
// writeToLog(var);
// 1: String - The Message which should be written to
// the Kettle Debug Log
//
// writeToLog(var,var);
// 1: String - The Type of the Log
// d - Debug
// l - Detailed
// e - Error
// m - Minimal
// r - RowLevel
//
// 2: String - The Message which should be written to
// the Kettle Log
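So, for example, a failure could be written to the Kettle log at error level from that step like this (the message text is just an illustration):
// Write an error-level entry to the Kettle log
writeToLog("e", "Row-level error encountered in transformation");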

PhpUnit giving error at second test function

I am trying to write tests against my database with PHPUnit, and I migrate the database to an in-memory database.
The first test runs just fine:
/** @test */
public function it_fetches_a_single_ano_letivo()
{
    $this->makeAnoLetivo();
    $this->getJson('/v1/anos-letivos');
    $this->assertResponseOk();
}
But the second test fails, and it is exactly the same as the first one:
/** @test */
public function it_fetches_anos_letivos()
{
    $this->makeAnoLetivo();
    $this->getJson('/v1/anos-letivos');
    $this->assertResponseOk();
}
Here is the makeAnoLetivo function:
private function makeAnoLetivo($anoLetivoFields = [])
{
    while($this->times--)
    {
        $ano1 = $this->fake->year;
        $anoLetivo = array_merge([
            'ano1' => $ano1 + 0,
            'ano2' => $ano1 + 1
        ], $anoLetivoFields);
        AnoLetivo::create($anoLetivo);
    }
}
And here is the PHPUnit output:
Configuration read from {{PATH_TO_PROJECT}}/phpunit.xml
..E
Time: 2.62 seconds, Memory: 23.25Mb
There was 1 error:
1) AnosLetivosTest::it_fetches_anos_letivos
Illuminate\Database\QueryException: SQLSTATE[23000]: Integrity constraint violation: 19 anos_letivos.id may not be NULL (SQL: insert into "anos_letivos" ("ano1", "ano2", "updated_at", "created_at") values (2009, 2010, 2015-03-27 18:41:59, 2015-03-27 18:41:59))
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Connection.php:620
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Connection.php:576
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Connection.php:359
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Connection.php:316
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Query/Builder.php:1702
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Builder.php:933
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php:1603
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php:1603
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php:1501
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php:544
{{PATH_TO_PROJECT}}/tests/AnosLetivosTest.php:50
{{PATH_TO_PROJECT}}/tests/AnosLetivosTest.php:32
phar:///usr/local/bin/phpunit/phpunit/TextUI/Command.php:152
phar:///usr/local/bin/phpunit/phpunit/TextUI/Command.php:104
Caused by
PDOException: SQLSTATE[23000]: Integrity constraint violation: 19 anos_letivos.id may not be NULL
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Connection.php:358
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Connection.php:612
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Connection.php:576
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Connection.php:359
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Connection.php:316
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Query/Builder.php:1702
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Builder.php:933
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php:1603
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php:1603
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php:1501
{{PATH_TO_PROJECT}}/vendor/laravel/framework/src/Illuminate/Database/Eloquent/Model.php:544
{{PATH_TO_PROJECT}}/tests/AnosLetivosTest.php:50
{{PATH_TO_PROJECT}}/tests/AnosLetivosTest.php:32
phar:///usr/local/bin/phpunit/phpunit/TextUI/Command.php:152
phar:///usr/local/bin/phpunit/phpunit/TextUI/Command.php:104
FAILURES!
Tests: 3, Assertions: 5, Errors: 1.
So the first function runs just fine, but the second one is the same and fails...
Also, if I create a third (identical) test, only the first one will pass.
EDIT 1:
So it inserts correctly in the first test, then rolls back the DB and migrates it again for the next test, and the insert into the database says the ID may not be NULL. It seems the create method no longer knows how to insert into the database after the first test... I still don't know what causes this; the migration is correct and it is rolling back correctly too...
EDIT 2:
I tried to run the tests against the production database and it works just fine, so the problem must be in the in-memory database or in some configuration of that in-memory database. But I don't know what the problem is, because in the first test I get green and it inserts the data without problems; I can even insert 10 items in the first test and it does what it should. But the second test shows the error above.
It looks like the database insert command is failing after the first test. That could be for a number of reasons.
I think you should consider using https://github.com/laracasts/TestDummy - it is designed to allow you to have fake data for all your tests. It will also automatically reset your database between each test (using transactions).
It's a wonderful tool - give it a go.
So the solution was to write these lines in the setUp() method:
AnoLetivo::flushEventListeners();
AnoLetivo::boot();
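In context, a sketch of what that setUp() might look like (assuming the usual Laravel TestCase base class):
public function setUp()
{
    parent::setUp();

    // Re-register the model's event listeners, which appear to get flushed
    // between tests when running against the in-memory database
    AnoLetivo::flushEventListeners();
    AnoLetivo::boot();
}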
The problem may be in the Laravel framework.