I'm just starting to learn the practice of BDD / TDD (world rejoices, I know). One of the things that I struggle with at this point is which tests are actually worth writing. Take this set of tests, which I started for a model called Sport:
Factory.define :sport do |f|
  f.name 'baseball'
end
require 'spec_helper'

describe Sport do
  before(:each) do
    @sport_unsaved = Factory.build(:sport)  # returns an unsaved object
    @sport_saved = Factory.create(:sport)   # returns a saved object
  end

  # Schema testing.
  it { should have_db_column(:name).of_type(:string) }
  it { should have_db_column(:created_at).of_type(:datetime) }
  it { should have_db_column(:updated_at).of_type(:datetime) }

  # Index testing.

  # Associations testing.
  it { should have_many(:leagues) }

  # Validations testing.
  it 'should only accept unique names' do
    @sport_unsaved.should validate_uniqueness_of(:name)
  end

  it { should validate_presence_of(:name) }

  it 'should allow valid values for name' do
    Sport::NAMES.each do |v|
      should allow_value(v).for(:name)
    end
  end

  it 'should not allow invalid values for name' do
    %w(swimming Hockey).each do |v|
      should_not allow_value(v).for(:name)
    end
  end

  # Methods testing.
end
A few specific questions that I have:
1. Is it worth testing that the association sport.leagues returns a non-blank value?
2. How about a test that ensures the model is invalid if a name is not specified?
3. How about a test to make sure that a valid record is created and doesn't have any validation errors?
I could go on. Ideally, there would be some hard and fast rules I could follow to guide my testing effort. But I am guessing this comes with experience and good ole' pragmatism. I've thought about reading through the source code of several gems, such as the Rails core, to gain a better understanding of what's worth testing and what isn't.
Any advice you experienced testers out there could offer?
1. Not if you're only re-testing Rails behavior.
2. Yes: it's part of model validation and a requirement, so why not make sure the requirement is met?
3. Testing your assumptions about the save process is a good idea, and if there are any lifecycle listeners/observers, they may not fire until the save.
The Rails core tests won't help you decide what's a good idea to test in an application.
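For points 2 and 3, here is a minimal sketch of what such specs might look like, reusing the question's Sport factory (the exact expectations are my assumptions, not taken from the original post):

it 'is invalid without a name' do
  sport = Factory.build(:sport, :name => nil)
  sport.should_not be_valid
  sport.errors[:name].should_not be_empty  # the error is reported on :name
end

it 'saves a valid record without validation errors' do
  sport = Factory.build(:sport)
  sport.save.should be_true     # the save succeeds...
  sport.errors.should be_empty  # ...and no validation errors remain
end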
What should you test? Anything you would not want to be broken.
When to stop writing tests? When fear transforms into boredom.
So if items 1, 2, and 3 would be defects if the specified behavior were not exhibited, then you should have tests for all three.
From the code snippets, personally I'd refrain from checking DB implementation (which columns exist and their details). Reason: I'd want to be able to change that over time without having to break a bunch of tests and fix all of them. Tests should only break if the behavior is broken. If the way (implementation) in which you satisfy them changes, the tests should not break/need modifications.
Focus on the What over the How.
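To illustrate (my sketch, not part of the original answer), contrast a schema check with a check on the behavior it exists to support:

# Implementation-focused: breaks if the schema changes, even when behavior doesn't.
it { should have_db_column(:name).of_type(:string) }

# Behavior-focused: survives schema refactoring as long as the behavior holds.
it 'remembers its name' do
  sport = Factory.build(:sport, :name => 'baseball')
  sport.name.should == 'baseball'
end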
HTH
I'm testing Rails 5 helpers in rspec. We have a block in a helper which is wrapped in a conditional:
if request.xhr?
  # do stuff
end
In the specs, I'm unable to force that request.xhr? test to return true. I've tried allow_any_instance_of(ActionController::Request).to receive(:xhr?).and_return(true) but that says Undefined Constant ActionController::Request (which sort of makes sense). I also tried allow(request).to receive(:xhr?).and_return(true) but that just failed silently.
How can I test this - both that the #do stuff code is executed when we want it to be, and that it does what we expect?
The answer turned out to be allow_any_instance_of(ActionDispatch::Request).to receive(:xhr?).and_return(true). I had missed that the Request object had moved out of the ActionController module and into ActionDispatch.
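A sketch of how the spec might look with that stub in place; the helper name, method name, and expected results are hypothetical, only the stubbing pattern comes from the answer above:

require 'rails_helper'

RSpec.describe WidgetsHelper, type: :helper do   # hypothetical helper
  describe '#widget_markup' do                   # hypothetical method guarded by request.xhr?
    it 'does the xhr-only work for xhr requests' do
      allow_any_instance_of(ActionDispatch::Request).to receive(:xhr?).and_return(true)
      expect(helper.widget_markup).to include('widget')  # assumed result of "do stuff"
    end

    it 'skips the xhr-only work for regular requests' do
      allow_any_instance_of(ActionDispatch::Request).to receive(:xhr?).and_return(false)
      expect(helper.widget_markup).to be_nil
    end
  end
end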
Parts of my system are specced out really well, but when I change one of the predicates to something obviously wrong, all my tests still pass and I don't get the usual blowup from spec that I've come to rely on.
I can't figure out why this is happening, and I certainly can't reproduce it starting from lein new test.
Is there a way I can get spec.test to give me a warning when it can't find a spec, for debug purposes, rather than assuming I didn't want to spec out this part of my system? Can it perhaps help me in some other way with debugging this situation?
spec should error if you try to use a spec that's not defined.
There is no way currently to have it tell you about things that aren't spec'ed. To do so would require instrumenting (replacing) all vars and adding that check.
For your particular problem, if you have a spec that you're changing, I would search for who is using that predicate and then try testing each thing that uses those specs or the original predicate.
One thing that trips people up sometimes is that stest/instrument only checks the :args specs of functions, not the :ret or :fn specs (which are only used by stest/check).
Here is a minimal reproduction:
(ns test.core
  (:require [clojure.spec :as s]))

(defn my-specced-fn [x]
  x)

(s/fdef my-specced-fn
  :args (s/cat :arg int?))

(ns test.core-test
  (:require [clojure.test :refer :all]
            [test.core :as core]
            [clojure.spec.test :as spec-test]))

(spec-test/instrument)

(deftest my-specced-fn-test
  (is (= 1 (core/my-specced-fn 1))))
This test passes initially. I would then edit test.core, change the spec, and re-evaluate test.core. After changing the spec to a predicate like string?, the test should fail, but it keeps passing. To solve the problem, re-evaluate the test namespace (specifically the call to instrument): re-evaluating the defn rebinds the var and wipes out the instrumented wrapper, so instrument has to be run again.
So my team is driving out our rails app (a survey and lite electronic record system) using Cucumber, Rspec and all the gems necessary to use these testing frameworks. I am in the process of setting up a Jenkins CI server and wanted to standardize our databases across our testing, development, and staging environments (we ended up choosing MySQL).
Upon switching the testing environment from SQLite to MySQL we discovered a couple of caching-related testing bugs, which we resolved by using relative ids instead of hard-coded ones. Example:
describe "#instance_method" do
before(:each) do
#survey = FactoryGirl.create(:survey)
#question_1 = FactoryGirl.create(:question)
#question_2 = FactoryGirl.create(:question)
....
end
context "question has no requirements" do
it "should match" do
# below breaks under MySQL (and postgres), but not SQlite
expect { #survey.present_question?(2) }.to be_true
# always works
expect { #survey.present_question?(#question_2.id) }.to be_true
end
end
end
After resolving this one failing spec, I addressed some unrelated AJAX issues that had forever plagued our test suite. Thinking that I had finally mastered the art of testing, I confidently ran rake. I was met with an unfriendly sight: the failing scenario below. Now of course cucumber features/presenting_default_questions.feature runs green when run in isolation.
Failing Scenario:
@javascript
Feature: Dynamic Presentation of Questions
  In order to only answer questions that are relevant given previously answered questions
  As a patient
  I want to not be presented questions that are illogical given my previous answers on the survey

  Scenario: Answering a question that does not depend on any other questions
    Given I have the following questions:
      | prompt             | datatype | options | parent_id | requirement |
      | Do you like cars?  | bool     |         |           |             |
      | Do you like fruit? | bool     |         |           |             |
    When I visit the patient sign in page
    And I fill out the form with the name "Jim Dog", date of birth "1978-03-30", and gender "male"
    And I accept the waiver
    Then I should see the question "Do you like cars"
    When I respond to the boolean question with "Yes"
    Then I should see the question "Do you like fruit?"
    When I respond to the boolean question with "No"
    Given I wait for the ajax request to finish
    Then I should be on the results page
Relevant step, in its failing and passing versions:
Then(/^I should be on the results page$/) do
  # fails under rake, passes when run in isolation
  current_path.should == results_survey_path(1)
end

Then(/^I should be on the results page$/) do
  # passes both in isolation and under rake
  current_path.should == results_survey_path(Survey.last.id)
end
Aside from setting Capybara.javascript_driver = :webkit, the Cucumber / database_cleaner config is unmodified from rails g cucumber:install.
It seems like both the failing RSpec and Cucumber tests suffer from the same sort of indexing problem. While the solution proposed above works, it's super janky, and it raises the question of why a simple absolute id doesn't work (after all, the database is cleaned between each scenario and feature). Is there something wrong with my tests? With database_cleaner?
Please let me know if more code would be helpful!
Relevant gem versions:
activemodel (3.2.14)
cucumber (1.3.6)
cucumber-rails (1.3.1)
database_cleaner (1.0.1)
capybara (2.1.0)
capybara-webkit (1.0.0)
mysql2 (0.3.13)
rspec (2.13.0)
The problem seems to be that you are missing the database_cleaner code for your Cucumber scenarios.
You mentioned that you have the database_cleaner code for your RSpec examples, but it seems that you're missing something like this in Cucumber's env.rb:
begin
  require 'database_cleaner'
  require 'database_cleaner/cucumber'
  DatabaseCleaner.strategy = :truncation
rescue LoadError, NameError
  # a missing gem raises LoadError, not NameError
  raise "You need to add database_cleaner to your Gemfile (in the :test group) if you wish to use it."
end

Around do |scenario, block|
  DatabaseCleaner.cleaning(&block)
end
Do you have this code in there?
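One possible refinement, not part of the original answer: keep the fast :transaction strategy for ordinary scenarios and switch to :truncation only for @javascript ones, since capybara-webkit drives the app from a separate thread that cannot see uncommitted transactional data. This split may also explain the id mystery: InnoDB's AUTO_INCREMENT counter is not rolled back with the enclosing transaction, whereas SQLite's counter is, so ids only reliably restart at 1 under SQLite.

# Tag-based strategy switch, also in features/support/env.rb:
Before('~@javascript') do
  # non-javascript scenarios: transactions are fast and roll back cleanly
  DatabaseCleaner.strategy = :transaction
end

Before('@javascript') do
  # capybara-webkit scenarios: truncate so the app server sees committed data
  DatabaseCleaner.strategy = :truncation
end

Around do |scenario, block|
  DatabaseCleaner.cleaning(&block)  # clean with whichever strategy is set
end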
I've been using Test Driven Development in a Seaside app I've been playing with, and all of my data is stored as objects in the image (as opposed to a database).
So when I run my tests I've had to be careful to store away the real data before it gets trashed with test data, like this:
ToDoTest>>setUp
    savedTasks := Task tasklist.
    Task deleteAllTasks.
    savedProjects := ToDoProject projectlist.
    ToDoProject deleteAllProjects.
    savedPeople := Person peoplelist.
    Person deleteAllPeople.
And:
ToDoTest>>tearDown
    Task tasklist: savedTasks.
    ToDoProject projectlist: savedProjects.
    Person peoplelist: savedPeople
The problem comes when my tests fail, which of course they do. This pops up the debugger, and I can then fix away, but tearDown doesn't always get called, so I can lose my real data.
I do save the data out to files, so it's not a huge problem, but it is not as smooth and automated as I'd like it to be.
Any way I can improve this?
I'm not sure there is anything that will fix the problem completely. The real problem is that the model is global. That is convenient and nice, but it fails easily in a scenario like this. So I would consider changing the model from something global to a more localized variant, so you can create a model solely for testing purposes without interfering with production data.
To fix it within your current setup, you need to add an ensure: block somewhere. An ensure: block guarantees that something is executed regardless of whether everything went OK or an error happened. The problem is that you need to do something both before and after a test.
In this case I would override TestCase>>#runCase in your own test class with something like:
runCase
    [ self saveRealModel.
      super runCase ]
        ensure: [ self restoreRealModel ]
Ah, that's a nice test smell. Norbert is right in pointing out that your tested model should probably not be global. Most tests should be on the interaction between individual objects.
In StoryBoard we have users
DEUser subclass: #SBUser
    instanceVariableNames: 'email initials projects invitations'
    classVariableNames: ''
    poolDictionaries: ''
    category: 'StoryBoard-Data'
with a class-side instance variable users as an entry point. Projects are only reachable through the users.
users
    ^ users ifNil: [
        users := OrderedCollection with: (SBAdministrator new
            userid: 'admin';
            password: 'admin';
            yourself) ]
and a way to clear them
resetUsers
    "SBUser resetUsers"
    users := nil
Often we can pass in dependencies when creating domain objects:
Iteration class>>on: aProject
    ^ self new
        project: aProject;
        yourself
This allows a test case to pass in itself or a separate (mock) object.
How do I change the Rails environment in testing?
You could do
Rails.stub(env: ActiveSupport::StringInquirer.new("production"))
Then Rails.env, Rails.env.development?, etc. will work as expected.
With RSpec 3 or later you may want to use the new "zero monkeypatching" syntax (as mentioned by @AnkitG in another answer) to avoid deprecation warnings:
allow(Rails).to receive(:env).and_return(ActiveSupport::StringInquirer.new("production"))
I usually define a stub_env method in a spec helper so I don't have to put all that stuff inline in my tests.
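For example, a minimal sketch of such a helper; the stub_env name and the spec/support location are my choices, not from the original answer:

# spec/support/stub_env.rb
module StubEnv
  def stub_env(name)
    allow(Rails).to receive(:env).and_return(ActiveSupport::StringInquirer.new(name))
  end
end

RSpec.configure do |config|
  config.include StubEnv
end

A spec can then just call stub_env('production') before exercising the environment-dependent code.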
An option to consider (as suggested in a comment here) is to instead rely on some more targeted configuration that you can set in your environment files and change in tests.
From RSpec 3 onwards you can do:
it "should do something specific for production" do
  allow(Rails).to receive(:env).and_return(ActiveSupport::StringInquirer.new("production"))
  # other assertions
end
Sometimes returning a different environment can be a headache (required production environment variables, warning messages, etc.).
Depending on your case, as an alternative you may be able to simply return the value you need for your test to think it's in another environment. For example, if you want Rails to believe it is in production for code that checks Rails.env.production?, you could do something like this:
it "does something specific when in production" do
allow(Rails.env).to receive(:production?).and_return(true)
##other assertions
end
You could do the same for other environments, such as :development?, :staging?, etc. If you don't need the full capability of returning a complete environment, this could be another option.
As a simpler variation on several answers above, this is working for me:
allow(Rails).to receive(:env).and_return('production')
Or, as I'm doing in shared_examples, pass that in as a variable:
allow(Rails).to receive(:env).and_return(target_env)
I suspect this falls short of the ...StringInquirer... solution if your app uses additional methods to inspect the environment (e.g. env.production?), but if your code just asks for Rails.env, this is a lot more readable. YMMV.
If you're using something like rspec, you can stub Rails.env to return a different value for the specific test example you're running:
it "should log something in production" do
Rails.stub(:env).and_return('production')
Rails.logger.should_receive(:warning).with("message")
run_your_code
end