RSpec: testing a method that's cached in a constant

In the code below I'm trying to write a test to confirm that Sidekiq::Limiter receives :concurrent, but because the limiter is cached in the constant LIMITER my test fails. Is there a way to test this?
class Processier
  include Sidekiq::Worker

  LIMITER = Sidekiq::Limiter.concurrent('analytics', 1, wait_timeout: 5, lock_timeout: 120)

  def perform
    uploaded_submissions.each do |submission|
      # this is where LIMITER is called
      LIMITER.within_limit do
        ProcessDownloadedFiles.perform_async(*submission)
      end
    end
  end
end
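One way to make this testable (an assumption on my part; no approach is given above) is to swap the constant out for the duration of the example — rspec-mocks provides stub_const('Processier::LIMITER', limiter_double) for exactly this. A minimal plain-Ruby sketch of the underlying mechanism, with the real Sidekiq::Limiter objects replaced by placeholder symbols:

```ruby
# Stand-in class: the symbols replace the real Sidekiq::Limiter objects.
class Processier
  LIMITER = :real_limiter # would be Sidekiq::Limiter.concurrent(...)

  def perform
    LIMITER # the constant is looked up at call time, so a stub is visible here
  end
end

# Replace a constant for the duration of a block, then restore it.
# This is essentially what RSpec's stub_const does for you.
def with_stubbed_const(mod, name, value)
  original = mod.const_get(name)
  mod.send(:remove_const, name)
  mod.const_set(name, value)
  yield
ensure
  mod.send(:remove_const, name)
  mod.const_set(name, original)
end

result = with_stubbed_const(Processier, :LIMITER, :fake_limiter) do
  Processier.new.perform
end
# result is :fake_limiter inside the block; LIMITER is restored afterwards
```

In an actual spec you would stub the constant with a double and then assert that within_limit is received on the double.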


JMeter Pass, fail, Warning?

So I am running a few tests in JMeter, and I have assertions set up for pass/fail. The issue is that I need to set up a "warning" or "caution" result as well.
For example -
Latency < 500 ms = Pass
Latency >= 1000 ms = Fail
501 <= Latency <= 999 = Caution
The above is just an example; the gap between the pass and caution thresholds would be much smaller in practice.
Does anyone know how to set something like this up in Jmeter?
For the moment JMeter does not support a caution result: a sampler can either be successful or not. You can set a custom response status code or message, print something to jmeter.log, send an email, etc., but you cannot get anything except Success: true|false without core JMeter changes.
You could try using a JSR223 Assertion to implement your pass/fail criteria logic. The relevant code, which sets the sampler response code to 599 and the message to CAUTION, would be something like:
def latency = prev.getLatency() as int
def range = new IntRange(501, 999)

if (latency >= 1000) {
    AssertionResult.setFailure(true)
    AssertionResult.setFailureMessage('Latency exceeds 1000 (was ' + latency + ')')
}

if (range.contains(latency)) {
    prev.setResponseCode('599')
    prev.setResponseMessage('CAUTION! High latency: ' + latency)
}
If the latency falls between 501 and 999 inclusive, the sampler result carries response code 599 and the CAUTION message; a latency of 1000 or more is reported as an ordinary assertion failure.
More information:
- prev is an instance of the SampleResult class; see its JavaDoc for available methods and fields
- the same goes for AssertionResult
- also check out Scripting JMeter Assertions in Groovy - A Tutorial for comprehensive information on using Groovy to set custom failure conditions for JMeter samplers

Access the last_error in failure method of Delayed Job Rails

I am using Delayed Job in a Rails application. I want to notify Airbrake whenever a delayed job fails. I checked on GitHub and learnt about the failure method.
I want to send the last_error attribute of the failed delayed job to Airbrake. Something like this:
class ParanoidNewsletterJob < NewsletterJob
  def perform
  end

  def failure
    Airbrake.notify(:message => self.last_error, :error_class => self.handler)
  end
end
But it gives me the following runtime error:
undefined method `last_error' for #<struct ParanoidNewsletterJob>
Please help me figure out how I can notify Airbrake the last_error of a failed delayed_job.
Many Thanks!!
You should be able to pass the job to the failure method and then extract last_error from the job, i.e.
def failure(job)
  Airbrake.notify(:message => job.last_error, :error_class => job.handler)
end
This should work fine:
def failure(job)
  Airbrake.notify(:message => job.error, :error_class => job.error.class, :backtrace => job.error.backtrace)
end
There are two ways you can achieve what you want:
1. A job-specific method, which applies only to the job type that implements the failure method with the job as its parameter. The job will contain error and last_error. This is what the other answers describe.
2. A global option, where a plugin is developed and applied to every job type. This is desirable if all jobs need to be monitored. The plugin is registered once and can perform actions around various events in a job's lifecycle. For example, below is a plugin that updates last_error, useful if we want to process it before it is stored to the database:
require 'delayed_job'

class ErrorDelayedJobPlugin < Delayed::Plugin
  def self.update_last_error(event, job)
    unless job.last_error.nil?
      job.last_error = job.last_error.gsub("\u0000", '') # Strip null bytes
      job.last_error = job.last_error.encode('UTF-8', invalid: :replace, undef: :replace, replace: '')
    end
  rescue
    # Never let error post-processing break the failure callback itself
  end

  callbacks do |lifecycle|
    lifecycle.around(:failure) do |worker, job, *args, &block|
      update_last_error(:around_failure, job)
      block.call(worker, job)
    end
  end
end
Basically it will be called whenever a failure occurs for any job. For details on how these lifecycle callbacks work, you can refer to A plugin to update last_error in Delayed Job.
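For the plugin to take effect it must also be registered with the worker. A minimal sketch, assuming a standard Rails setup (the initializer path is my assumption, not from the thread):

```ruby
# config/initializers/delayed_job.rb (hypothetical location)
require 'delayed_job'

# Register the plugin so its :failure callback wraps every job type
Delayed::Worker.plugins << ErrorDelayedJobPlugin
```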

Retry on timeout?

I have a Cucumber scenario for testing UI features. Sometimes, due to one of several issues, the web page takes a long time to respond and Capybara times out with the following error:
ruby-1.9.3-p327/lib/ruby/1.9.1/net/protocol.rb:146:in `rescue in rbuf_fill'
ruby-1.9.3-p327/lib/ruby/1.9.1/net/protocol.rb:140:in `rbuf_fill'
ruby-1.9.3-p327/lib/ruby/1.9.1/net/protocol.rb:122:in `readuntil'
ruby-1.9.3-p327/lib/ruby/1.9.1/net/protocol.rb:132:in `readline'
ruby-1.9.3-p327/lib/ruby/1.9.1/net/http.rb:2562:in `read_status_line'
ruby-1.9.3-p327/lib/ruby/1.9.1/net/http.rb:2551:in `read_new'
My question is:
Can I somehow force the Cucumber scenario or Capybara to retry (a fixed number of times) the whole scenario or the step, respectively, on a timeout error?
Maybe you can do it like this:
Around do |scenario, block|
  for i in 1..5
    begin
      block.call
      break
    rescue Timeout::Error
      next
    end
  end
end
But I can't confirm whether this code works, because of a known bug: it's not possible to call block several times in an Around hook.
From The Cucumber Book:
Add an eventually method that keeps trying to run a block of code until it either stops raising an error or reaches a time limit.
Here is the code for that method:
module AsyncSupport
  def eventually
    timeout = 2
    polling_interval = 0.1
    time_limit = Time.now + timeout
    loop do
      error = nil # reset each attempt, so a success after an earlier failure returns
      begin
        yield
      rescue Exception => error
      end
      return if error.nil?
      raise error if Time.now >= time_limit
      sleep polling_interval
    end
  end
end

World(AsyncSupport)
The method can then be called as follows from a step definition:
Then /^the balance of my account should be (#{CAPTURE_CASH_AMOUNT})$/ do |amount|
  eventually { my_account.balance.should eq(amount) }
end
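The same retry idea can be sketched without Cucumber or Capybara. A minimal plain-Ruby version (the with_retries helper and its attempt budget are my own illustration, not from the thread) that retries a block on Timeout::Error a fixed number of times:

```ruby
require 'timeout'

# Retry a block up to `attempts` times when it raises Timeout::Error,
# re-raising the error once the attempt budget is exhausted.
def with_retries(attempts = 3)
  tries = 0
  begin
    yield
  rescue Timeout::Error
    tries += 1
    retry if tries < attempts
    raise
  end
end

calls = 0
result = with_retries(5) do
  calls += 1
  raise Timeout::Error, 'page too slow' if calls < 3
  :ok
end
# the block fails twice, then succeeds on the third attempt
```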

Rspec false positive because failure exception is rescued in code being tested

I have an RSpec test that I expect to fail, but it passes because the code it tests rescues the exception that RSpec raises. Here's an example of the situation:
class Thing
  def self.method_being_tested( object )
    # ... do some stuff
    begin
      object.save!
    rescue Exception => e
      # Swallow the exception and log it
    end
  end
end
In the rspec file:
describe "method_being_tested" do
  it "should not call 'save!' on the object passed in" do
    # ... set up the test conditions
    mock_object.should_not_receive( :save! )
    Thing.method_being_tested( mock_object )
  end
end
I know that execution reaches the object.save! line of the method being tested, so the test should be failing, yet it passes. Using the debugger in the rescue block, I find the following:
(rdb:1) p e # print the exception object "e"
#<RSpec::Mocks::MockExpectationError: (Mock "TestObject_1001").save!
    expected: 0 times
    received: 1 time>
So the test is effectively failing, but the failure is suppressed by the very code it is trying to test. I cannot figure out a viable way to stop this code from swallowing RSpec's exceptions without compromising the code. I don't want the code to explicitly check whether an exception is an RSpec exception, because that is bad design (tests should be written for code; code should never be written for tests). But I also can't check that the exception is any particular type that I DO want caught, because I want it to catch ANYTHING that could be raised in a normal production environment.
Someone must have had this problem before me! Please help me find a solution.
Assuming the code is correct as-is:
describe "method_being_tested" do
  it "should not call 'save!' on the object passed in" do
    # ... set up the test conditions
    calls = 0
    mock_object.stub(:save!) { calls += 1 }
    expect { Thing.method_being_tested(mock_object) }.to_not change { calls }
  end
end
If there's no need to catch absolutely all exceptions, including SystemExit, NoMemoryError, SignalException etc. (input from #vito-botta):

begin
  object.save!
rescue StandardError => e
  # Swallow "normal" exceptions and log them
end

StandardError is the default exception class caught by rescue.
From rspec-mocks:

module RSpec
  module Mocks
    class MockExpectationError < Exception
    end

    class AmbiguousReturnError < StandardError
    end
  end
end
Do you really need to catch Exception? Could you catch StandardError instead?
Catching all exceptions is generally a bad thing.
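That class hierarchy is exactly why rescuing StandardError lets the mock error escape: MockExpectationError subclasses Exception directly. A short sketch (StrictError is a stand-in of my own, not part of RSpec):

```ruby
# An error class that subclasses Exception directly, like
# RSpec::Mocks::MockExpectationError, slips past `rescue StandardError`.
class StrictError < Exception; end

def swallow_standard_errors
  yield
  :ok
rescue StandardError
  :swallowed
end

# An ordinary error (ArgumentError < StandardError) is caught and swallowed.
ordinary = swallow_standard_errors { raise ArgumentError, 'normal failure' }

# The Exception-based error propagates out of the rescue untouched.
strict = begin
  swallow_standard_errors { raise StrictError, 'mock expectation' }
rescue StrictError
  :escaped
end
```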
I would refactor it like so:
class Thing
  def self.method_being_tested!( object )
    # ... do some stuff
    return object.save
  end
end
If you want to ignore the exception thrown by save!, there is no point in calling save! in the first place. Just call save and let the return value inform the calling code accordingly.

Stack overflow in Cucumber step definition when re-enqueueing delayed job in .perform

I've got a job that is supposed to re-enqueue itself:
class TestJob
  def perform
    Delayed::Job.enqueue(TestJob.new, {priority: 0, run_at: 5.minutes.from_now})
    true
  end
end
I'd like to call its perform method in a Cucumber step definition:
Then /^the job should run successfully/ do
  TestJob.new.perform.should == true
end
However, I get a stack overflow in this step. What's causing this?
I'm sure there's a 'better' answer out there, but the last time I tried to use the enqueue method I couldn't get it to work.
I do something similar to what you're doing, except that I call:
TestJob.new.delay(:run_at => 10.seconds.from_now).perform
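As for what actually causes the overflow: a likely explanation (my assumption, not confirmed in the thread) is that the test environment runs Delayed Job inline (Delayed::Worker.delay_jobs = false), so enqueue executes the job immediately and perform re-enters itself without bound. A self-contained sketch of that failure mode, with the queue replaced by a hypothetical FakeInlineQueue:

```ruby
# When jobs run inline (as with Delayed::Worker.delay_jobs = false in tests),
# "enqueueing" a job executes it immediately, so a self-enqueueing job recurses.
class FakeInlineQueue
  def self.enqueue(job)
    job.perform # inline mode: no queue, immediate execution
  end
end

class TestJob
  def perform
    FakeInlineQueue.enqueue(TestJob.new) # re-enqueues itself, recursing forever
    true
  end
end

outcome = begin
  TestJob.new.perform
rescue SystemStackError
  :stack_overflow
end
```

Checking how delay_jobs is set in the test environment (or stubbing Delayed::Job.enqueue in the step) would confirm whether this is the cause.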