Test fixtures or equivalent for test data with Smalltalk Seaside?

I've been using Test Driven Development in a Seaside app I've been playing with, and all of my data is stored as objects in the image (as opposed to a database).
So when I run my tests I've had to be careful to store away the real data before it gets trashed with test data, like this:
ToDoTest >> setUp
    savedTasks := Task tasklist.
    Task deleteAllTasks.
    savedProjects := ToDoProject projectlist.
    ToDoProject deleteAllProjects.
    savedPeople := Person peoplelist.
    Person deleteAllPeople
And:
ToDoTest >> tearDown
    Task tasklist: savedTasks.
    ToDoProject projectlist: savedProjects.
    Person peoplelist: savedPeople
The problem comes when my tests fail, which of course they do. This pops up the debugger, and I can then fix away, but tearDown doesn't always get called, so I can lose my real data.
I do save the data out to files, so it's not a huge problem, but it is not as smooth and automated as I'd like it to be.
Is there any way I can improve this?

I'm not sure there is an approach that will fix the problem completely. The real problem is that the model is global. That is convenient and nice, but it fails easily in a scenario like this. So I would consider changing the model from something global to a more localized variant, so you can create your model solely for testing purposes without interfering with production data.
To fix it within your current setup you need to add an ensure: block somewhere. An ensure: block "ensures" that something is executed regardless of whether everything went OK or an error happened. The problem is that you need to do it before and after a test.
In this case I would override TestCase>>#runCase in your own test class with something like
runCase
    [ self saveRealModel.
    super runCase ]
        ensure: [ self restoreRealModel ]
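Here saveRealModel and restoreRealModel are selector names I'm assuming for the sake of the example; they could simply take over the bodies of the setUp and tearDown from the question, so restoration is guaranteed even when a failing test pops up the debugger:

saveRealModel
    savedTasks := Task tasklist.
    Task deleteAllTasks.
    savedProjects := ToDoProject projectlist.
    ToDoProject deleteAllProjects.
    savedPeople := Person peoplelist.
    Person deleteAllPeople

restoreRealModel
    Task tasklist: savedTasks.
    ToDoProject projectlist: savedProjects.
    Person peoplelist: savedPeople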

Ah, that's a nice test smell. Norbert is right in pointing out that your tested model should probably not be global. Most tests should be on the interaction between individual objects.
In StoryBoard we have users
DEUser subclass: #SBUser
    instanceVariableNames: 'email initials projects invitations'
    classVariableNames: ''
    poolDictionaries: ''
    category: 'StoryBoard-Data'
with a class-side instance variable users as the entry point. Projects are only reachable through the users.
users
    ^ users ifNil: [
        users := OrderedCollection with: (SBAdministrator new
            userid: 'admin';
            password: 'admin';
            yourself) ]
and a way to clear them
resetUsers
    " SBUser resetUsers "
    users := nil
Often we can pass in dependencies on creation for domain objects
Iteration class >> on: aProject
    ^ self new
        project: aProject;
        yourself
This allows a test case to pass in itself or a separate (mock) object.
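For instance, a test could construct an Iteration against a stand-in object (MockProject and the project accessor are illustrative names, not StoryBoard code):

testIterationUsesInjectedProject
    | project iteration |
    project := MockProject new.
    iteration := Iteration on: project.
    self assert: iteration project == project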

Related

SoapUI - Set a node value in all test steps' requests of all test cases in a test suite

I'm trying to set a node value in the request XML of every test step of all test cases in a test suite.
The Groovy script is in the first test case, and I get an error (XmlException: Unexpected Element: CDATA) as soon as the script tries to edit the same tag in the second test case.
def groovyUtils = new com.eviware.soapui.support.GroovyUtils(context)
def AlltestCases = testRunner.testCase.testSuite.project.testSuites[testRunner.testCase.testSuite.name]
0.upto(AlltestCases.getTestCaseCount()) {
    AlltestCases.getTestCaseList().each {
        it.getTestStepList().each {
            if (it.getClass() == com.eviware.soapui.impl.wsdl.teststeps.WsdlTestRequestStep) {
                if (it.getName().toLowerCase().contains("verify")) {
                    step = groovyUtils.getXmlHolder("${it.getName()}" + "#Request")
                    step.setNodeValue("//*:Name/text()", "\$" + "{#TestSuite#NAME_ID}")
                    step.updateProperty()
                }
            }
        }
    }
}
If I understand your question correctly, you want to "inject" a value into a number of requests?
I would advise against that. I would rather set some project property, and then let each of the requests simply use that particular variable.
The most important reason for me to prefer this approach is that it makes it more transparent what is happening in your test case, should someone else at some point (say, if you get a different job) need to take over your SoapUI projects. Currently you have requests which hold values that appear to come out of nowhere. I would advise making it clear that the request contains some sort of variable, and where that variable comes from.
Besides, you will then also get more flexibility. If a few requests at some point change the path or name of the entity you want to modify, you will need to make your code above handle that kind of situation. Not so if you are merely using a variable in each of your requests.
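For example, instead of injecting the value from Groovy, each request body could reference a property directly via SoapUI's property expansion (the element name here is illustrative):

<ns:Name>${#Project#NAME_ID}</ns:Name>

Changing NAME_ID once, in the project's custom properties, then updates every request that expands it.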

Running a main-like in a non-main package

We have a package with a fair number of complex tests. As part of the test suite, they run on builds etc.
func TestFunc(t *testing.T) {
	// lots of setup stuff and defining success conditions
	result := SystemModel.Run()
}
Now, for one of these tests, I want to introduce some kind of frontend which will make it possible for me to debug a few things. It's not really a test, but a debug tool. For this, I want to just run the same test but with a Builder pattern:
func TestFuncWithFrontend(t *testing.T) {
	// lots of setup stuff and defining success conditions
	result := SystemModel.Run().WithHTTPFrontend(":9999")
}
The test then would only start once I send a signal via HTTP from the frontend. Basically WithHTTPFrontend() just waits, via a channel, for an HTTP call from the frontend.
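Presumably something along these lines; this sketch is only my reading of the description above, not the actual code (imports net/http):

func (r Result) WithHTTPFrontend(addr string) Result {
	proceed := make(chan struct{})
	go http.ListenAndServe(addr, http.HandlerFunc(
		func(w http.ResponseWriter, req *http.Request) {
			select {
			case proceed <- struct{}{}: // first call releases the test
			default: // already signalled; ignore further calls
			}
		}))
	<-proceed // block until the frontend sends the signal
	return r
}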
This of course would make the automated tests fail, because no such signal will be sent and execution will hang.
I can't just rename the package to main because the package has 15 files and they are used elsewhere in the system.
Likewise, I haven't found a way to run a test only on demand while excluding it from the test suite, so that TestFuncWithFrontend would only run from the command line; I don't care whether via go run or go test or whatever.
I've also thought of ExampleTestFunc(), but there's so much output produced by the test that it's useless, and without defining Output: ..., the Example won't run.
Unfortunately, there's also a lot of initialization code at (private, i.e. lower case) package level that the test needs. So I can't just create a sub-package main, as a lot of that stuff wouldn't be accessible.
It seems I have three choices:
Export all these initialization variables and this code (upper-case names) so that I could use them from a sub-main package.
Duplicate the whole code.
Move the test into a sub-package main and then have a func main() for the test with Frontend and a _test.go for the normal test, which would have to import a few things from the parent package.
I'd really rather avoid the second option... And the first is better, but it isn't great either, IMHO. I think I'll go for the third, but...
am I missing some other option?
You can pass a custom command line argument to go test and start the debug port based on that. Something like this:
package hello_test

import (
	"flag"
	"log"
	"testing"
)

var debugTest bool

func init() {
	flag.BoolVar(&debugTest, "debug-test", false, "Setup debugging for tests")
}

func TestHelloWorld(t *testing.T) {
	if debugTest {
		log.Println("Starting debug port for test...")
		// Start the server here
	}
	log.Println("Done")
}
Then if you want to run just that specific test, go test -debug-test -run '^TestHelloWorld$' ./.
Alternatively it's also possible to set a custom environment variable that you check in the test function to change behaviour.
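The environment-variable variant could look like this inside the test function (DEBUG_TEST is just an example name):

if os.Getenv("DEBUG_TEST") != "" {
	log.Println("Starting debug port for test...")
	// Start the server here
}

invoked as DEBUG_TEST=1 go test -run '^TestHelloWorld$' .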
I finally found an acceptable option. This answer:
Skip some tests with go test
put me on the right track.
Essentially: use build tags that are not present in normal builds but that I can provide when executing manually.
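A sketch of that approach (the tag, file, and package names are mine, not from the linked answer): move the frontend variant into its own file guarded by a build constraint, e.g. frontend_test.go:

//go:build debugfrontend

package mypackage

import "testing"

// Compiled only when the debugfrontend tag is given, so normal
// `go test` runs never see this test.
func TestFuncWithFrontend(t *testing.T) {
	// lots of setup stuff and defining success conditions
	_ = SystemModel.Run().WithHTTPFrontend(":9999")
}

and run it manually with go test -tags debugfrontend -run '^TestFuncWithFrontend$' .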

Lumen - seeder in Unit tests

I'm trying to implement unit tests in my company's project, and I'm running into some weird trouble trying to use a separate set of data in my database.
As I want tests to be performed in a confined environment, I'm looking for the easiest way to load data into a dedicated database. Long story short, to this end, I decided to use a MySQL dump of inserted data.
This is basically my seeder code:
public function run()
{
    \Illuminate\Support\Facades\DB::unprepared(file_get_contents(__DIR__ . '/data1.sql'));
}
Now here's the problem.
In my unit test, I can call the seeder, but:
If I call the seeder in setUpBeforeClass(), it works. This doesn't fit my needs, though, as I want to be able to invoke different sets of data for different tests.
If I call the seeder within a test, the data is never inserted into the database (either with or without the transaction trait).
If I use DB::insert instead of ::raw, ::unprepared, or ::statement, without using a raw SQL file, it works. But my inserts are too complicated for that.
Here are a few things I tried, with the same results:
DB::raw(file_get_contents(__DIR__ . '/database/data1.sql'));

DB::statement(file_get_contents(__DIR__ . '/database/data1.sql'));

$seeder = new CheckTestSeeder();
$seeder->run();

\Illuminate\Support\Facades\Artisan::call('db:seed', ['--class' => 'CheckTestSeeder']);

$this->seeInDatabase('jackpot.progressive', [
    'name_progressive' => 'aaa'
]);
Any pointers on how to proceed and why I have different behaviors if I do that in the setUpBeforeClass() and within the test would be appreciated!
You may use the Illuminate\Foundation\Testing\RefreshDatabase trait, as explained in the Laravel database testing documentation. If you need something more, you can override the refreshTestDatabase method provided by the RefreshDatabase trait.
protected function refreshTestDatabase()
{
    parent::refreshTestDatabase();
    \Illuminate\Support\Facades\Artisan::call('db:seed', ['--class' => 'CheckTestSeeder']);
}
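For context, a sketch of how that override might sit in a concrete test class (assuming the base TestCase uses the trait, so parent:: resolves to the trait's method; the class name is illustrative, while table and seeder names follow the question):

use Illuminate\Support\Facades\Artisan;

class ProgressiveTest extends TestCase
{
    // Hook invoked by the RefreshDatabase trait; extend it to seed
    // right after the fresh schema is in place.
    protected function refreshTestDatabase()
    {
        parent::refreshTestDatabase();
        Artisan::call('db:seed', ['--class' => 'CheckTestSeeder']);
    }

    public function testSeededProgressiveRowExists()
    {
        $this->seeInDatabase('jackpot.progressive', [
            'name_progressive' => 'aaa',
        ]);
    }
}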

How does R3 use the Needs field of a script header? Is there any impact on namespaces?

I'd like to know the behaviour of R3 when processing the Needs field of a script header and what implications for word binding it has.
Background. I'm currently trying to port some R2 scripts to R3 in order to learn R3. In R2 the Needs field of a script header was essentially just documentation, though I made use of it with a custom function to reference scripts that are required to make my script run.
R3 appears to call the Needs-referenced scripts itself, but the binding seems different from DOing the other scripts.
For example when %test-parent.r is:
REBOL [
    title: {test parent}
    needs: [%test-child.r]
]
parent: now
?? parent
?? child
and %test-child is:
REBOL [
    title: {test child}
]
child: now
?? child
R3 Alpha (Saphiron build 22-Feb-2013/11:09:25) returns:
>> do %test-parent.r
Script: "test parent" Version: none Date: none
child: 9-May-2013/22:51:52+10:00
parent: 9-May-2013/22:51:52+10:00
** Script error: child has no value
** Where: get ajoin case ?? catch either either -apply- do
** Near: get :name
I don't understand why test-parent cannot access child, which was set by %test-child.r.
If I remove the Needs field from test-parent.r header and instead insert a line to just DO %test-child.r then there is no error and the script performs as expected.
Ah, you've run into Rebol 3's policy of "do what you say, it can't read your mind". R3's Needs header is part of its module system, so anything you load with Needs is actually imported as a module, even if it isn't declared as such.
Loading scripts with Needs is a quick way to get them treated as modules even if the original author didn't declare them as such. Modules get their own contexts where their words are defined. Loading a script as a module is a great way to use a script that isn't that tidy, one that leaks words into the shared script context. Your %test-child.r script, for instance, leaks the word child into the script context; what if you didn't want that to happen? Load it with Needs or import and that will clean that right up.
If you want a script treated as a script, use do to run it. Regular scripts use a (mostly) shared context, so when you do a script it has effect on the same context as the script you called it from. That is why the child: now statement affected child in the parent script. Sometimes that's what you want to do, which is why we worked so hard to make scripts work that way in R3.
If you are going to use Needs or import to load your own scripts, you might as well make them modules and export what you want, like this:
REBOL [
    type: module
    title: {test child}
    exports: [child]
]
child: now
?? child
As before, you don't even have to include the type: module if you are going to be using Needs or import anyway, but it helps just in case you run your module with do. R3 assumes that if you declare your module to be a module, you wrote it to be a module and depend on it working that way even if it's called with do. At the least, declaring a type header is a stronger statement than not declaring one at all, so it takes precedence in the conflicting "do what you say" situation.
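With that export in place, the parent script from the question should work unchanged, and you can also pull the word in manually from the console (illustrative session):

>> import %test-child.r
>> ?? child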
Look here for more details about how the module system works: How are words bound within a Rebol module?

Getting started with BDD / TDD (Rails / Rspec)

I'm just starting to learn the practice of BDD/TDD (world rejoices, I know). One of the things that I struggle with at this point is which tests are actually worth writing. Let's take this set of tests, which I started for a model called Sport:
Factory.define :sport do |f|
  f.name 'baseball'
end

require 'spec_helper'

describe Sport do
  before(:each) do
    @sport_unsaved = Factory.build(:sport)  # returns an unsaved object
    @sport_saved = Factory.create(:sport)   # returns a saved object
  end

  # Schema testing.
  it { should have_db_column(:name).of_type(:string) }
  it { should have_db_column(:created_at).of_type(:datetime) }
  it { should have_db_column(:updated_at).of_type(:datetime) }

  # Index testing.

  # Associations testing.
  it { should have_many(:leagues) }

  # Validations testing.
  it 'should only accept unique names' do
    @sport_unsaved.should validate_uniqueness_of(:name)
  end

  it { should validate_presence_of(:name) }

  it 'should allow valid values for name' do
    Sport::NAMES.each do |v|
      should allow_value(v).for(:name)
    end
  end

  it 'should not allow invalid values for name' do
    %w(swimming Hockey).each do |v|
      should_not allow_value(v).for(:name)
    end
  end

  # Methods testing.
end
A few specific questions that I have:
Is it worth testing that the association sport.leagues returns a non-blank value?
How about a test that ensures the model is invalid if a name is not specified?
How about a test to make sure that a valid record is created and doesn't have any validation errors?
I could go on. Ideally, there would be some hard-and-fast rules I could follow to guide my testing effort. But I am guessing this comes with experience and good ole' pragmatism. I've thought about reading through the source code of several gems, such as the Rails core, to gain a better understanding of what's worth testing and what isn't.
Any advice you experienced testers out there could offer?
1. Not if you're only re-testing Rails behavior.
2. Yes: it's part of model validation and a requirement, so why not make sure the requirement is met?
3. Testing your assumptions regarding the save process is a good idea, and if there are any lifecycle listeners/observers, they may not be fired until the save.
The Rails core tests won't help you decide what's a good idea to test in an application.
What should you test? Anything you would not want to be broken.
When to stop writing tests? When fear transforms into boredom.
So if items 1, 2, and 3 would be defects were the specified behavior not exhibited, then you should have tests for all three.
From the code snippets, I'd personally refrain from testing the DB implementation (which columns exist and their details). Reason: I'd want to be able to change that over time without breaking a bunch of tests and having to fix them all. Tests should only break if the behavior is broken; if the way (implementation) in which you satisfy them changes, the tests should not break or need modification.
Focus on the What over the How.
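To make that concrete, the name rule could be specified purely through behavior, with no schema matchers (a sketch reusing the question's factory):

it 'rejects names outside the approved list' do
  sport = Factory.build(:sport, :name => 'swimming')
  sport.should_not be_valid
end

it 'accepts approved names' do
  sport = Factory.build(:sport, :name => Sport::NAMES.first)
  sport.should be_valid
end

These keep passing through schema refactorings, as long as the visible behavior holds.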
HTH