I have used Thinking Sphinx in my project and have RSpec test cases for my search functionality, using a setup similar to the one here.
I also make extensive use of delta indexing. How can I test my delta-indexed results with RSpec?
The issue was due to https://github.com/pat/thinking-sphinx/issues/148 (delta indexing is disabled by default in RSpec test cases).
I had to set the following flags to true to run the test cases:
ThinkingSphinx.deltas_enabled = true
ThinkingSphinx.updates_enabled = true
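A minimal sketch of where I set these, assuming the usual RSpec.configure block in spec/spec_helper.rb (adjust to your own setup):

# spec/spec_helper.rb
RSpec.configure do |config|
  config.before(:each) do
    # Thinking Sphinx disables deltas and index updates in the test
    # environment by default; re-enable them for the search specs.
    ThinkingSphinx.deltas_enabled  = true
    ThinkingSphinx.updates_enabled = true
  end
end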
I'm using MongoDB to store some data, and I have a function that fetches the object with the latest timestamp and the one with the oldest. I haven't experienced any issues with this method in development or production, but when I try to write a test for it, the test fails roughly 20% of the time. I'm using RSpec to test this method, and I'm not using Mongoid or MongoMapper. I create three objects with different timestamps but get a nil response because the dataset contains 0 objects. I've read a lot of articles about write_concern and "unsafe writes" possibly being the problem, and I've tried almost all the combinations of those parameters (w, fsync, j, wtimeout) without any success. Does anyone have any idea how to solve this? Perhaps I've focused too much on the write_concern track and the problem lies somewhere else.
This is the method that fetches the latest and oldest timestamp.
def first_and_last_timestamp(customer_id, system_id)
  last = collection(customer_id).
    find({ sid: system_id }).
    sort(["t", Mongo::DESCENDING]).
    limit(1).next()

  first = collection(customer_id).
    find({ sid: system_id }).
    sort(["t", Mongo::ASCENDING]).
    limit(1).next()

  { min: first["t"], max: last["t"] }
end
I'm inserting data using this method, where data is a JSON object.
def insert(customer_id, data)
  collection(customer_id).insert(data)
end
I have reverted to the default way of setting up my connection:
Mongo::MongoClient.new(mongo_host, mongo_port)
I'm using the mongo gem (1.10.2). I'm not using any fancy setup for my MongoDB; I just installed it with brew on my Mac and started it. The database version is v2.6.1.
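For reference, this is roughly how explicit acknowledged writes look with the mongo 1.x driver, both at the client level and per insert (these are among the write-concern combinations I tried rather than a confirmed fix; mongo_host, mongo_port, and collection are the same helpers as above):

require 'mongo'

# w: 1 asks the server to acknowledge each write before the call returns
client = Mongo::MongoClient.new(mongo_host, mongo_port, w: 1)

def insert(customer_id, data)
  # per-operation write concern, instead of (or in addition to) the client-level one
  collection(customer_id).insert(data, w: 1)
end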
So my team is driving out our Rails app (a survey and lite electronic record system) using Cucumber, RSpec, and all the gems necessary for these testing frameworks. I'm in the process of setting up a Jenkins CI server and wanted to standardize our databases across the testing, development, and staging environments (we ended up choosing MySQL).
Upon switching the test environment from SQLite to MySQL, we discovered a couple of caching-related test bugs that we resolved by using relative ids instead of hard-coded ones. Example:
describe "#instance_method" do
before(:each) do
#survey = FactoryGirl.create(:survey)
#question_1 = FactoryGirl.create(:question)
#question_2 = FactoryGirl.create(:question)
....
end
context "question has no requirements" do
it "should match" do
# below breaks under MySQL (and postgres), but not SQlite
expect { #survey.present_question?(2) }.to be_true
# always works
expect { #survey.present_question?(#question_2.id) }.to be_true
end
end
end
After resolving this one failing spec, I addressed some unrelated AJAX issues that have forever been plaguing our test suite. Thinking that I had finally mastered the art of testing, I confidently ran rake. I was met with this unfriendly sight:
Now of course cucumber features/presenting_default_questions.feature runs green when run in isolation.
Failing Scenario:

@javascript
Feature: Dynamic Presentation of Questions
  In order to only answer questions that are relevant given previously answered questions
  As a patient
  I want to not be presented questions that are illogical given my previously answered questions on the survey

  Scenario: Answering a question that does not depend on any other questions
    Given I have the following questions:
      | prompt             | datatype | options | parent_id | requirement |
      | Do you like cars?  | bool     |         |           |             |
      | Do you like fruit? | bool     |         |           |             |
    When I visit the patient sign in page
    And I fill out the form with the name "Jim Dog", date of birth "1978-03-30", and gender "male"
    And I accept the waiver
    Then I should see the question "Do you like cars"
    When I respond to the boolean question with "Yes"
    Then I should see the question "Do you like fruit?"
    When I respond to the boolean question with "No"
    Given I wait for the ajax request to finish
    Then I should be on the results page
Relevant step:
Then(/^I should be on the results page$/) do
  # fails under `rake`, passes when run in isolation
  current_path.should == results_survey_path(1)
end

Then(/^I should be on the results page$/) do
  # passes in isolation and under `rake`
  current_path.should == results_survey_path(Survey.last.id)
end
Aside from setting Capybara.javascript_driver = :webkit, the Cucumber / DatabaseCleaner config is unmodified from rails g cucumber:install.
It seems like both the failing RSpec and Cucumber tests suffer from the same sort of id-indexing problem. While the solution proposed above works, it's super janky and begs the question of why a simple hard-coded id doesn't work (after all, the database is cleaned between each scenario and feature). Is there something wrong with my tests? With database_cleaner?
Please let me know if more code would be helpful!
Relevant gem versions:
activemodel (3.2.14)
cucumber (1.3.6)
cucumber-rails (1.3.1)
database_cleaner (1.0.1)
capybara (2.1.0)
capybara-webkit (1.0.0)
mysql2 (0.3.13)
rspec (2.13.0)
The problem seems to be that you are missing the database_cleaner code for your Cucumber scenarios.
You mentioned that you have the database_cleaner code for your RSpec scenarios, but it seems that you're missing something like this in Cucumber's env.rb:
begin
  require 'database_cleaner'
  require 'database_cleaner/cucumber'
  DatabaseCleaner.strategy = :truncation
rescue NameError
  raise "You need to add database_cleaner to your Gemfile (in the :test group) if you wish to use it."
end

Around do |scenario, block|
  DatabaseCleaner.cleaning(&block)
end
Do you have this code in there?
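(If your database_cleaner version doesn't have DatabaseCleaner.cleaning, the older start/clean hook style from the gem's README is roughly equivalent:)

Before do
  DatabaseCleaner.start
end

After do |scenario|
  DatabaseCleaner.clean
end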
I'm just starting to learn the practice of BDD / TDD (the world rejoices, I know). One of the things I struggle with at this point is which tests are actually worth writing. Let's take this set of tests, which I started for a model called Sport:
Factory.define :sport do |f|
  f.name 'baseball'
end
require 'spec_helper'

describe Sport do
  before(:each) do
    @sport_unsaved = Factory.build(:sport)  # returns an unsaved object
    @sport_saved   = Factory.create(:sport) # returns a saved object
  end

  # Schema testing.
  it { should have_db_column(:name).of_type(:string) }
  it { should have_db_column(:created_at).of_type(:datetime) }
  it { should have_db_column(:updated_at).of_type(:datetime) }

  # Index testing.

  # Associations testing.
  it { should have_many(:leagues) }

  # Validations testing.
  it 'should only accept unique names' do
    @sport_unsaved.should validate_uniqueness_of(:name)
  end

  it { should validate_presence_of(:name) }

  it 'should allow valid values for name' do
    Sport::NAMES.each do |v|
      should allow_value(v).for(:name)
    end
  end

  it 'should not allow invalid values for name' do
    %w(swimming Hockey).each do |v|
      should_not allow_value(v).for(:name)
    end
  end

  # Methods testing.
end
A few specific questions I have:
Is it worth testing that the association sport.leagues returns a non-blank value?
How about a test that ensures the model is invalid if a name is not specified?
How about a test to make sure that a valid record is created and doesn't have any validation errors?
I could go on. Ideally, there would be some hard-and-fast rules I could follow to guide my testing effort, but I'm guessing this comes with experience and good ol' pragmatism. I've thought about reading through the source code of several gems, such as Rails core, to gain a better understanding of what's worth testing and what isn't.
Any advice you experienced testers out there could offer?
Not if you're only re-testing Rails behavior.
Yes; it's part of model validation and a requirement, so why not make sure the requirement is met?
Testing your assumptions regarding the save process is a good idea, and if there are any lifecycle listeners/observers they may not be fired until the save.
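As a sketch of that kind of test (names mirror the Sport model above; this is just an illustration, not a prescribed set of assertions):

it 'saves a valid record without validation errors' do
  sport = Factory.build(:sport)
  sport.save.should be_true   # exercises callbacks/observers, not just valid?
  sport.errors.should be_empty
  sport.should be_persisted
end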
The Rails core tests won't help you decide what's a good idea to test in an application.
What should you test? Anything you would not want to be broken.
When should you stop writing tests? When fear transforms into boredom.
So if items 1, 2, and 3 would be defects were the specified behavior not exhibited, then you should have tests for all three.
From the code snippets, personally I'd refrain from checking DB implementation (which columns exist and their details). Reason: I'd want to be able to change that over time without having to break a bunch of tests and fix all of them. Tests should only break if the behavior is broken. If the way (implementation) in which you satisfy them changes, the tests should not break/need modifications.
Focus on the What over the How.
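As a hedged illustration of "What over How", something like the following exercises the uniqueness and presence behavior without pinning the spec to specific DB columns (names mirror the Sport factory above):

describe Sport do
  it 'rejects a duplicate name' do
    Factory.create(:sport, name: 'baseball')
    Factory.build(:sport, name: 'baseball').should_not be_valid
  end

  it 'requires a name' do
    Factory.build(:sport, name: nil).should_not be_valid
  end
end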
HTH
I'm working through the railstutorial.org site and seem to have a problem with an integration test. It's supposed to check that a user gets properly created after a form post, which works, but subsequent test runs fail because the test database is not getting rolled back; this causes errors because of the validation that users can't have the same email. Any explanation for why the record would persist? If relevant, the code in question is from this listing.
Note: I am the author of the Rails Tutorial. The config.cache_classes = false line got added after Peter Cooper reported that it was necessary to get RSpec and Spork to work together on his system. Since I have not found it necessary, and since it seemed to introduce lots of problems (such as those identified in this thread), that line has since been removed. If you use the latest version of the book you shouldn't run into this problem.
Look into using database_cleaner. Your spec helper will contain something like this:
config.before(:suite) do
  DatabaseCleaner.strategy = :transaction
  DatabaseCleaner.clean_with :truncation
end

config.before(:each) do
  ActionMailer::Base.deliveries = []
  DatabaseCleaner.start
end

config.after(:each) do
  DatabaseCleaner.clean
end

config.after(:all) do
  DatabaseCleaner.clean_with :truncation
end
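A minimal sketch of where these hooks live, assuming the standard spec/spec_helper.rb layout (the require may already be handled by your Gemfile):

# spec/spec_helper.rb
require 'database_cleaner'

RSpec.configure do |config|
  # ... the before(:suite) / before(:each) / after(:each) / after(:all)
  #     hooks shown above go inside this block ...
end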
It seems the issue was with the line
config.cache_classes = false
I had set this to false assuming it would make sure stale class data wasn't used, but it seems to have had the opposite effect, among other things. Changing it to true fixed all the weirdness I was having, but I'm still confused as to why. I think it may have something to do with the OS: the tutorial says that for OS X (which I am running) having that line set to true works fine, while other OSes need it set to false.
While Hartl's tutorial struck me as flawless, perhaps the issue you raise here can be classified as an important omission.
These:
RailsTutorial - chapter 8.4.3 - Test database not clearing after adding user in integration test
Rails 3 Tutorial Chapter 11 "Validation failed: Email has already been taken" error
config.cache_classes = false messing up rspec tests?
... are all variations on the same problem.
Mike Hartl, if you are out there, you seem a mere one issue away from RoR Tutorial perfection.
Best regards,
Perry
How can I change the Rails environment in tests?
You could do
Rails.stub(env: ActiveSupport::StringInquirer.new("production"))
Then Rails.env, Rails.env.production?, etc. will work as expected.
With RSpec 3 or later you may want to use the new "zero monkeypatching" syntax (as mentioned by @AnkitG in another answer) to avoid deprecation warnings:
allow(Rails).to receive(:env).and_return(ActiveSupport::StringInquirer.new("production"))
I usually define a stub_env method in a spec helper so I don't have to put all that stuff inline in my tests.
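A sketch of what that helper could look like (stub_env is my own name, not an RSpec or Rails API; it would live in a file under spec/support/ that gets required by spec_helper):

module EnvHelpers
  # Stub Rails.env so env.production?, env.test?, etc. keep working.
  def stub_env(name)
    allow(Rails).to receive(:env).and_return(ActiveSupport::StringInquirer.new(name))
  end
end

RSpec.configure do |config|
  config.include EnvHelpers
end

# In a spec:
#   stub_env("production")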
An option to consider (as suggested in a comment here) is to instead rely on some more targeted configuration that you can set in your environment files and change in tests.
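A sketch of that idea, assuming Rails 4.2+'s config.x custom-configuration namespace and a made-up verbose_logging flag:

# config/environments/production.rb
Rails.application.configure do
  config.x.verbose_logging = true
end

# config/environments/test.rb
Rails.application.configure do
  config.x.verbose_logging = false
end

# In a spec, flip just that flag instead of faking the whole environment:
it "logs verbosely when the flag is on" do
  Rails.configuration.x.verbose_logging = true
  # ... assertions ...
  Rails.configuration.x.verbose_logging = false # reset so other examples see the test default
end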
From RSpec 3 onwards you can do:
it "should do something specific for production" do
allow(Rails).to receive(:env).and_return(ActiveSupport::StringInquirer.new("production"))
#other assertions
end
Sometimes returning a different environment can be a headache (required production environment variables, warning messages, etc.).
Depending on your case, an alternative is to simply return the value you need for your test to think it's in another environment. For example, if you want Rails to believe it is in production for code that checks Rails.env.production?, you could do something like this:
it "does something specific when in production" do
allow(Rails.env).to receive(:production?).and_return(true)
##other assertions
end
You could do the same for other environments, such as :development?, :staging?, etc. If you don't need the full capability of returning a complete environment, this could be another option.
As a simpler variation on several answers above, this is working for me:
allow(Rails).to receive(:env).and_return('production')
Or, as I'm doing in shared_examples, pass it in as a variable:
allow(Rails).to receive(:env).and_return(target_env)
I suspect this falls short of the ...StringInquirer... solution if your app uses additional methods to inspect the environment (e.g. Rails.env.production?), but if your code just asks for Rails.env, this is a lot more readable. YMMV.
If you're using something like RSpec, you can stub Rails.env to return a different value for the specific test example you're running:
it "should log something in production" do
Rails.stub(:env).and_return('production')
Rails.logger.should_receive(:warning).with("message")
run_your_code
end