Can we actually send out mails during semi-automatic testing? - shopware6

We are using unit / integration tests during Shopware 6 development.
One technique we use is to disable the database transaction behaviour so that we can see the results of, for example, fixtures in the admin panel, for easier debugging and understanding:
trait IntegrationTestBehaviour
{
    use KernelTestBehaviour;
    // use DatabaseTransactionBehaviour;
    use FilesystemBehaviour;
    use CacheTestBehaviour;
    use BasicTestDataBehaviour;
    use SessionTestBehaviour;
    use RequestStackTestBehaviour;
}
Similar to this, it would be helpful to send out actual emails during some tests (only for development, not in CI and so on).
It is already possible to automatically test emails like this:
$eventDidRun = false;
$listenerClosure = function (MailSentEvent $event) use (&$eventDidRun): void {
    $eventDidRun = true;
};

$this->addEventListener($dispatcher, MailSentEvent::class, $listenerClosure);

// do something that sends an email

static::assertTrue($eventDidRun, 'The mail.sent Event did not run');
But sometimes we want to manually see the actual email.
The .env.test already contains a valid mailer URL:
MAILER_URL=smtp://x:y@smtp.mailtrap.io:2525?encryption=tls&auth_mode=login
But still no mails get sent during the tests.
While I guess that this is fully intentional, is there some method to work around the blocking of mails being sent during testing?

The reason is that the MAILER_URL variable is pre-set to null://localhost in the phpunit.xml.dist of the platform repository:
<server name="MAILER_URL" value="null://localhost"/>
You could set the MAILER_URL environment variable yourself before the tests of the class are executed:
/**
 * @beforeClass
 */
public static function setMailerUrl(): void
{
    $_SERVER['MAILER_URL'] = 'smtp://x:y@smtp.mailtrap.io:2525?encryption=tls&auth_mode=login';
}
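Note that the variable stays set for the rest of the PHPUnit process, so you may want to restore the null transport once the class is done. A minimal sketch (the hook method name is illustrative):

/**
 * @afterClass
 */
public static function resetMailerUrl(): void
{
    // Restore the null transport so later test classes do not send real mails.
    $_SERVER['MAILER_URL'] = 'null://localhost';
}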

Related

Laravel Reset Database after Test

I have just started using Laravel Dusk to test my project and need some guidance. After I run all the tests available, I want to be able to reset my database back to how it was before I ran the tests. (If there were any entries in my database before I ran the tests, I would still like to see them after I run the tests. However, any entries created during the tests, I would not like to see after the tests finish running.) Any pointers on how I would achieve this? Thank you!
Update:
<?php

namespace Tests\Browser;

use Tests\DuskTestCase;
use Laravel\Dusk\Browser;
use Illuminate\Foundation\Testing\DatabaseTransactions;

class UserRegisterTest extends DuskTestCase
{
    use DatabaseTransactions;

    /**
     * A test for user registration.
     * @group register
     * @return void
     */
    public function testRegisterUser()
    {
        // Register with all info filled out correctly
        $this->browse(function ($browser) {
            $browser->visit('/register')
                ->type('firstName', 'JenLogin')
                ->type('lastName', 'Zhou')
                ->type('email', 'testLogin@gmail.com')
                ->type('bio', 'Hello, this user is for testing login purposes!')
                ->type('location_zip', '11111')
                ->type('password', '123456')
                ->type('password_confirmation', '123456')
                ->click('.btn-primary')
                ->assertPathIs('/home')
                ->click('.dropdown-toggle')
                ->click('.dropdown-menu li:last-child');
        });
        $this->assertDatabaseHas('users', ['firstName' => 'JenLogin', 'lastName' => 'Zhou', 'email' => 'testLogin@gmail.com']);
    }

    /**
     * Register with duplicate user
     * @group register
     * @return void
     */
    public function testRegisterDuplicateUser()
    {
        $this->browse(function ($browser) {
            $browser->visit('/register')
                ->type('firstName', 'JenLoginDup')
                ->type('lastName', 'Zhou')
                ->type('email', 'testLogin@gmail.com')
                ->type('bio', 'Hello, this user is for testing login purposes!')
                ->type('location_zip', '11111')
                ->type('password', '123456')
                ->type('password_confirmation', '123456')
                ->click('.btn-primary')
                ->assertPathIs('/register')
                ->assertSee('The email has already been taken.');
        });
        $this->assertDatabaseMissing('users', ['firstName' => 'JenLoginDup', 'lastName' => 'Zhou', 'email' => 'testLogin@gmail.com']);
    }

    /**
     * Register with incorrect password confirmation
     * @group register
     * @return void
     */
    public function testRegisterUserNoPassConfirm()
    {
        $this->browse(function ($browser) {
            $browser->visit('/register')
                ->type('firstName', 'JenLoginPass')
                ->type('lastName', 'Zhou')
                ->type('email', 'testLoginPass@gmail.com')
                ->type('bio', 'Hello, this user is for testing login purposes!')
                ->type('location_zip', '11111')
                ->type('password', '123456')
                ->type('password_confirmation', '888888')
                ->click('.btn-primary')
                ->assertPathIs('/register')
                ->assertSee('The password confirmation does not match.');
        });
        $this->assertDatabaseMissing('users', ['firstName' => 'JenLoginPass', 'lastName' => 'Zhou', 'email' => 'testLoginPass@gmail.com']);
    }
}
You are looking for the DatabaseTransactions trait. Use it in your test class like this and it will automatically roll back all database transactions made during your tests.
use Illuminate\Foundation\Testing\DatabaseTransactions;

class ExampleTest extends TestCase
{
    use DatabaseTransactions;

    // test methods here
}
This will keep track of all transactions made during your tests and undo them upon completion.
Note: this trait only works on the default database connection (for multiple connections see the $connectionsToTransact answer further down).
First of all, when you are running tests you should use a completely different database than your live (or dev) database. For this you should create a .env.dusk file and point it at a database used for tests only:
DB_CONNECTION=mysql
DB_HOST=db
DB_PORT=3306
DB_DATABASE=testing_database
DB_USERNAME=root
DB_PASSWORD=pass
The second thing is that for Laravel Dusk you cannot just use DatabaseTransactions. You should in fact use DatabaseMigrations for Dusk tests, otherwise you will get unexpected results.
There is no sane workflow for running tests on a live/dev database with data and then reverting the changes made by the tests.
Therefore your approach fails here; instead you should:
1. Create a separate test schema/db for tests.
2. Switch to the test db before running tests - this can be automated depending on your phpunit and .env.dusk configuration and your local setup.
3. In your tests, create everything from scratch on the clean db (run migrations, seeds, factories).
4. Run the tests against this test db.
5. For development, switch back to your base db with its current data, which will not be affected by the tests.
The next time you run your tests everything starts again from point zero with a clean database; in your tests this is done by:
use CreatesApplication;
use DatabaseMigrations;
// and in setUp(): parent::setUp(); etc.
Read more about these methods...
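A minimal Dusk test case using DatabaseMigrations could look like this (a sketch; the seeder name is illustrative):

<?php

namespace Tests\Browser;

use Illuminate\Foundation\Testing\DatabaseMigrations;
use Tests\DuskTestCase;

class CleanDatabaseTest extends DuskTestCase
{
    // Re-runs all migrations before each test, so every test starts
    // from an empty, freshly migrated schema.
    use DatabaseMigrations;

    protected function setUp(): void
    {
        parent::setUp();

        // Provide the baseline data via seeds/factories (illustrative seeder name).
        $this->artisan('db:seed', ['--class' => 'TestDataSeeder']);
    }
}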
Side Notes:
With this approach it will also be easy to test your app in CI environments.
Never write tests that depend on data in your dev/live db. For tests, all required data should be provided by seeds or, where needed, factories!
You can use the RefreshDatabase trait in your test classes. After each test the database will be in the same state as before the test.
In fact it will drop all tables and migrate again.
If you do not want to lose your data, you can use a separate schema for tests.
use Illuminate\Foundation\Testing\RefreshDatabase;
use Illuminate\Foundation\Testing\WithoutMiddleware;
use Tests\TestCase;

class ExampleTest extends TestCase
{
    use RefreshDatabase;
}
For multiple databases, this helped me:

use Illuminate\Foundation\Testing\DatabaseTransactions;

class MyTest extends TestCase
{
    // Reset the DB between tests
    use DatabaseTransactions;

    // Setting this allows both DB connections to be reset between tests
    protected $connectionsToTransact = ['mysql', 'myOtherConnection'];
}
I think this is a great question. I found an Artisan tool that may be what you are looking for. You can use it to take a snapshot of the database before you run the tests and then load that snapshot afterwards, restoring your database to its previous state. I gave it a run (using MySQL) and it worked great. Hope this is what you are looking for. Here is a link:
https://github.com/spatie/laravel-db-snapshots
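Usage is roughly as follows (command names as documented in the package's README; treat this as a sketch and check them against the version you install):

php artisan snapshot:create before-tests   # dump the current database
php artisan dusk                           # run the test suite
php artisan snapshot:load before-tests     # restore the pre-test state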
The phpunit.xml file is your solution there; you can set .env variables in this file like so:
<env name="DB_CONNECTION" value="testing_mysql"/>
<env name="DB_DATABASE_TEST" value="test"/>
Now you can run your tests on a separate database.
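For this to work, a connection named testing_mysql has to exist in config/database.php. A sketch of what that entry might look like (the connection and env variable names are taken from the snippet above; the rest is illustrative):

// config/database.php (excerpt)
'connections' => [
    'testing_mysql' => [
        'driver'   => 'mysql',
        'host'     => env('DB_HOST', '127.0.0.1'),
        'port'     => env('DB_PORT', '3306'),
        'database' => env('DB_DATABASE_TEST', 'test'),
        'username' => env('DB_USERNAME', 'root'),
        'password' => env('DB_PASSWORD', ''),
    ],
],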
Plus, you can run a .php file every time before the tests in automation; you just need to point PHPUnit at it as a bootstrap file:
<phpunit
...
bootstrap="tests/autoload.php"
>
You can put any cleaners or seeders there, for example something like:
echo 'Migration -begin-' . "\n";
echo shell_exec('php artisan migrate:fresh --seed');
echo 'Migration -end-' . "\n";

Activiti BPMN - How to pass the username of the user who completed a task in variables/expressions?

I am very new to Activiti BPMN. I am creating a flow diagram in Activiti and am looking for how the username of the user who has completed a task can be passed into shell task arguments, so that I can fetch it and save in the db which user completed that task.
Any help would be highly appreciated.
Thanks in advance...
Here's something I prepared for Java developers, based on, I think, a blog post I saw.
edit: https://community.alfresco.com/thread/224336-result-variable-in-javadelegate
RESULT VARIABLE
Option (1) – use expression language (EL) in the XML
<serviceTask id="serviceTask"
    activiti:expression="#{myService.toUpperCase(myVar)}"
    activiti:resultVariable="myVar" />
Java
public class MyService {
    public String toUpperCase(String val) {
        return val.toUpperCase();
    }
}
The returned String is assigned to activiti:resultVariable
HACKING THE DATA MODEL DIRECTLY
Option (2) – use the execution environment
Java
public class MyService implements JavaDelegate {
    public void execute(DelegateExecution execution) throws Exception {
        String myVar = (String) execution.getVariable("myVar");
        execution.setVariable("myVar", myVar.toUpperCase());
    }
}
By contrast, here we are being passed an 'execution', and we are pulling values out of it, twiddling them, and putting them back.
This is somewhat analogous to a servlet taking values passed in the HttpServletRequest and then, based on them, doing different things in the response. (A stronger analogy would be a servlet Filter.)
So in your particular instance (depending on how you are invoking the shell script), using the Expression Language (EL) might be simplest and easiest.
Of course the value you want to pass has to be one that the process knows about (otherwise how can it pass a value it doesn't have a variable for?)
Hope that helps. :D
Usually in BPM engines you have a way to hook a listener to these kinds of events. In Activiti, if you are embedding it inside your service, you can add an extra EventListener and then record the taskCompleted events, which will contain the currently logged in user.
https://www.activiti.org/userguide/#eventDispatcher
Hope this helps.
I have used activiti:taskListener from the Activiti app. You need to configure the properties below:
1. I changed the properties in the task listener.
2. I used a JavaScript variable to hold the task.assignee value.
Code Snip:-

Generate custom steps with behat

I am trying to write a custom step that generates steps.
My code looks like:
/**
 * @Then /^Check_raoul$/
 */
public function checkRaoul()
{
    // grab the content ...
    // get players ...
    $to_return = array();
    foreach ($players as $player) {
        $player = $player->textContent;
        if (preg_match('/^.*video=([^&]*)&.*$/', $player, $matches)) {
            array_push($to_return, new Step\Then('I check the video of id "'.$matches[1].'"'));
        }
    }
    return $to_return;
}
/**
 * @Then /^I check the video of id "([^"]*)"$/
 */
public function iCheckTheVideoOfId($id)
{
    // ...
}
It works fine, but when integrating with Jenkins or on the CLI, if many executions of iCheckTheVideoOfId fail, I see just one error. I wish to generate a number of steps equal to the number of iCheckTheVideoOfId calls.
What am I doing wrong?
We abandoned using Jenkins to do BDD checks due to the differences in how test feedback is presented and what Jenkins is capable of. We found that just running our suites locally, plus a full check before pushing code to the repo, produced better results and helped everyone get better at using the framework.
To answer your question directly, I would suggest configuring your Jenkins job to not fail when a test fails.
This can be accomplished by not outputting results at all. You can modify your command-line options to not output failures and just log results to an output file. You could then run a script at the end to check for failures.
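A minimal sketch of such a check script (the log file name and the failure marker are assumptions; adjust them to the Behat formatter and output path you use):

<?php

// check_behat_log.php - scan the Behat output file for failures
// and exit non-zero so a CI step can react to it.
$log = file_get_contents('build/behat.log');

if ($log === false) {
    fwrite(STDERR, "Could not read build/behat.log\n");
    exit(2);
}

if (strpos($log, 'failed') !== false) {
    fwrite(STDERR, "Behat reported failures\n");
    exit(1);
}

echo "No failures found\n";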

How do you mock your repositories?

I've used Moq to mock my repositories. However, someone recently said that they prefer to create hard-coded test implementations of their repository interfaces.
What are the pros and cons of each approach?
Edit: clarified meaning of repository with link to Fowler.
I generally see two scenarios with repositories. I ask for something, and I get it, or I ask for something, and it isn't there.
If you are mocking your repository, that means your system under test (SUT) is something that uses your repository. So you generally want to test that your SUT behaves correctly when it is given an object from the repository. And you also want to test that it handles the situation properly when you expect to get something back and don't, or aren't sure whether you are going to get something back.
Hard-coded test doubles are ok if you are doing integration testing. Say, you want to save an object and then get it back. But this is testing the interaction of two objects together, not just the behavior of the SUT. They are two different things. If you start coding fake repositories, you need unit tests for those as well; otherwise you end up basing the success and failure of your code on untested code.
That's my opinion on Mocking vs. Test Doubles.
SCNR:
"You call yourself a repository? I've seen matchboxes with more capacity!"
I assume that by "repository" you mean a DAO; if not then this answer won't apply.
Lately I've been making "in memory" "mock" (or test) implementations of my DAOs, which basically operate on data (a List, Map, etc.) passed into the mock's constructor. This way the unit test class is free to throw in whatever data it needs for the test, can change it, etc., without forcing all unit tests operating on the "in memory" DAO to be coded to use the same test data.
One plus that I see in this approach is that if I have a dozen unit tests that need to use the same DAO for their test (to inject into the class under test, for example), I don't need to remember all of the details of the test data each time (as you would if the "mock" was hardcoded) - the unit test creates the test data itself. On the downside, this means each unit test has to spend a few lines creating and wiring up its test data; but that's a small downside to me.
A code example:
public interface UserDao {
    User getUser(int userid);
    User getUser(String login);
}

public class InMemoryUserDao implements UserDao {
    private final List<User> users;

    public InMemoryUserDao(List<User> users) {
        this.users = users;
    }

    public User getUser(int userid) {
        for (User user : users) {
            if (userid == user.getId()) {
                return user;
            }
        }
        return null;
    }

    public User getUser(String login) {
        for (User user : users) {
            if (login.equals(user.getLogin())) {
                return user;
            }
        }
        return null;
    }
}

Unable to turn on SecureSocketLayer with DirectoryServices.Protocols.LdapConnection

I am trying to fix a bug with SSL in a product and noticed that although the code sets SSL to true, on the next line in the code SSL is still false. I wrote a unit test for this and the unit test confirms my suspicions.
[TestMethod]
public void SecureSocketLayerSetToTrue()
{
    var ldapConnection = new LdapConnection(
        new LdapDirectoryIdentifier("ldap.test.com", 636));
    ldapConnection.SessionOptions.SecureSocketLayer = true;
    Assert.IsTrue(ldapConnection.SessionOptions.SecureSocketLayer);
}
The test fails. Is there something here that I am missing?
It turns out that the way DirectoryServices.Protocols implements its LDAP calls is by passing them through to a low-level LDAP API. This LDAP API is what is queried when a get is done on the property.
The low-level API is only updated when the methods are executed. You can think of this as building command-line arguments for an executable that hasn't been launched yet.
When a call like Bind() is made, the executable is launched and the properties will report the correct value.
So even though the property said the value was false, the true value is used when it is actually needed.