I'm currently doing pessimistic locking with Rails 3 + PostgreSQL, but there seems to be no way to confirm that the lock is working without going through the hassle of writing a concurrent test. Is there no way to test this via the console?
Example
User.transaction do
  u1 = User.find(1, :lock => true)
  u2 = User.find(1)
  ## u2 should not be able to do anything right?
end
Within a single transaction both finds run on the same connection, so u2 is never blocked there; and in PostgreSQL a plain SELECT does not wait on a FOR UPDATE lock anyway, only a competing write (or another FOR UPDATE) does. To see the lock in action, open 2 consoles.
Console 1:
User.transaction do
  u = User.find(1, :lock => true)
  sleep(30)
end
While console 1 is sleeping, switch to console 2 and do this
Console 2:
u = User.find(1)
u.name = "new name"
u.save!
You will then see that console 2 will not commit its update until the 30-second sleep on console 1 has finished.
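As a side note, newer ActiveRecord (around Rails 3.2 and later) also provides with_lock, which wraps this reload-with-FOR UPDATE pattern in a transaction for you. A minimal sketch, assuming the same User model:
u = User.find(1)
u.with_lock do
  # Here the row has been reloaded with SELECT ... FOR UPDATE, so a
  # competing session blocks on its UPDATE until this block commits.
  u.name = "new name"
  u.save!
end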
Related
I have a service where I am doing something like the below:
1. users = await user.read();              // reads all users
2. task = await task.create('taskId');     // creates a new task
3. tasks = await task.create(users, task); // add the above task for all users
Problem
If code line 3 fails, i.e. adding the task to all users, then code line 2 should roll back.
Edit 1
What I have tried.
As pointed out by delerik, deleting is an option, but I have a feeling there is a better way to do this. Imagine n tables; it would quickly become a mess.
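For what it's worth, in the Rails context of the rest of this page, the usual better way is a single database transaction rather than manual deletes. A sketch with hypothetical Task, User and UserTask models (external_id is an assumed column name):
# All-or-nothing: if adding the task to any user raises, the Task
# created in step 2 is rolled back automatically, no matter how
# many tables are involved.
ActiveRecord::Base.transaction do
  task = Task.create!(external_id: 'taskId')                      # step 2
  users.each { |user| UserTask.create!(user: user, task: task) }  # step 3
end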
I am trying to figure out how I can access the status of a background job using Sidekiq.
To check the status of the job I check the size of the queue.
class PagesWorker
  include Sidekiq::Worker
  include Sidekiq::Status::Worker
  sidekiq_options queue: "high"

  def perform(ifile)
    system "rake db:..."
    # What do the following do? In what scenario would I need to set these?
    total 100
    at 5, "Almost done"
    store vino: 'veritas'
    vino = retrieve :vino
  end
end
class PagesController
  def submit
    job_id = PagesWorker.perform_async(file)
    flash[:notice] = "#{job_id} is being processed."
  end

  def finished
    queue = Sidekiq::Queue.new('high')
    if queue.size == 0
      flash[:success] = "Job is finished."
    else
      flash[:notice] = "Job is being processed."
    end
  end
end
It prints "Job is finished" from the start (while the job is still running) because the queue is always zero. Specifically, on the myrailsapp/sidekiq page I see the following:
The number of processed jobs increments by one every time I submit a job, which is correct.
The 'high' queue is there but its size is always 0, which isn't correct.
And there are no jobs listed under the queue, which again isn't correct.
However, the job is being processed and finishes successfully. Could the reason for the job not appearing on the sidekiq page be that the job finishes in less than a minute? Or maybe that the worker runs a rake process?
I have also checked the following in the terminal, while the process is running:
Sidekiq::Status::complete? job_id #returns false
Sidekiq::Status::status(job_id) #returns nothing
Sidekiq::Status::get_all job_id #returns {"update_time"=>"1458063970", "total"=>"100", "at"=>"5", "message"=>"Almost done"}, and after a while returns {}
To sum up, my queue is empty, my jobs list is empty, but my job is running. How is this possible? What can I do to track the status of my job?
EDIT
I use the job status to check whether it is finished or not.
Controller:
def finished
  job_id = params[:job_id]
  if Sidekiq::Status::complete? job_id == true
    flash[:notice] = "Job is finished."
  elsif Sidekiq::Status::complete? job_id == false
    flash[:notice] = "Job is being processed."
  end
end
Form:
= form_tag({action: :submit}, multipart: true) do # submit is my controller method as defined above
  = file_field_tag 'file'
  = submit_tag 'submit'
= flash[:notice] # defined above as "#{job_id} is being processed."
- job_id = flash[:notice].split(" ")[0]
= form_tag({action: :finished}) do
  = hidden_field_tag 'job_id', job_id
  = submit_tag 'check'
Routes:
resources :pages do
  collection do
    post 'submit'
    post 'finished'
  end
end
But Sidekiq::Status::complete? job_id never becomes true. I am tailing the log file to see when the process finishes, then click the check button, but I still get that it is being processed. Why?
As far as I understand, you are using the sidekiq-status gem, so you can retrieve the status as described here:
https://github.com/utgarda/sidekiq-status#retrieving-status
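For reference, the helpers used in the worker are all sidekiq-status additions: total declares the job's total units of work, at reports current progress with a message, and store/retrieve stash arbitrary data in the job's Redis hash. Note also that sidekiq-status expires status data after a while (30 minutes by default), which is why get_all eventually returns {}. When reading the status back, watch the parentheses; a sketch:
# `Sidekiq::Status::complete? job_id == true` parses in Ruby as
# `complete?(job_id == true)`, which checks the wrong thing entirely.
# Pass the job id directly instead:
if Sidekiq::Status::complete?(job_id)
  flash[:notice] = "Job is finished."
else
  flash[:notice] = "Job is being processed."
end

# Other readers provided by the gem:
Sidekiq::Status::at(job_id)       # current progress, e.g. 5
Sidekiq::Status::total(job_id)    # total declared in the worker, e.g. 100
Sidekiq::Status::message(job_id)  # e.g. "Almost done"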
If you just want to know whether the job is queued, you can stop the running sidekiq process, push the job with the sidekiq client, and then list the queues in redis-cli like so:
$ redis-cli
> select 0 # (or whichever namespace Sidekiq is using)
> keys * # (just to get an idea of what you're working with)
> smembers queues
> lrange queues:app_queue 0 -1
> lrem queues:app_queue -1 "payload"
This shows whether the job is queued. Once you start the sidekiq process again, you will see the job being processed; you can also puts from the perform method, and that output will be printed in sidekiq's logs.
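As an aside, the same inspection can be done from a Rails console through Sidekiq's own Ruby API, which also explains the empty 'high' queue above: a job that is currently being executed has already been popped off the queue. A sketch, assuming the 'high' queue from the question:
require 'sidekiq/api'

queue = Sidekiq::Queue.new('high')
queue.size                                    # jobs waiting, not jobs running
queue.each { |job| puts job.klass, job.args.inspect }

# Jobs being executed right now show up under the busy workers instead:
Sidekiq::Workers.new.each do |process_id, thread_id, work|
  puts work['payload']                        # the running job's payload
end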
I have a controller action that creates 2 background jobs to be run at a future date.
I am trying to test that the background jobs get run.
# spec/controllers/job_controller_spec.rb
# setup
post :create, {:job => valid_attributes}

Delayed::Job.count.should == 2
Delayed::Worker.logger = Rails.logger
# Delayed::Worker.new.work_off.should == [2,0]
Delayed::Worker.new.work_off
Delayed::Job.count.should == 0 # this is where it fails
This is the error:
1) JobsController POST create with valid params queues up delayed job and fires
Failure/Error: Delayed::Job.count.should == 0
expected: 0
got: 2 (using ==)
For some reason it seems like it is not firing.
You can try to use
Delayed::Worker.new(quiet: false).work_off
to debug the result of your background jobs; this can help you find out whether the fact that they're scheduled to run in the future is interfering with the assertion itself.
Don't forget to remove the quiet: false when you're done, otherwise your tests will always print the output of the background jobs.
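If the future run date is indeed the culprit, one workaround in the spec is to make the jobs due before calling work_off, since work_off only reserves jobs whose run_at has already passed. A sketch (update_all is plain ActiveRecord; the assertions are the same ones from the question):
post :create, {:job => valid_attributes}
Delayed::Job.count.should == 2

# Jobs scheduled for a future date are skipped by work_off,
# so force them to be due now:
Delayed::Job.update_all(run_at: Time.now)

Delayed::Worker.new.work_off
Delayed::Job.count.should == 0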
I would like to change the logging levels of a running Rails 3.2.x application without restarting the application. My intent is to use it to do short-time debugging and information gathering before reverting it to the usual logging level.
I also understand that the levels in ascending order are debug, info, warn, error, and fatal, and that production servers log info and higher, while development logs debug and higher.
I understand that I can run
Rails.logger.level = :debug # or :info, :warn, :error, :fatal
Will this change the logging level immediately?
If so, can I do this by writing a Rake task to adjust the logging level, or do I need to support this by adding a route? For example in config/routes.rb:
match "/set_logging_level/:level/:secret" => "logcontroller#setlevel"
and then setting the levels in the logcontroller. (:level is the logging level, and :secret, shared between client and server, is there to prevent random users from tweaking the log levels.)
Which is more appropriate, the rake task or the /set_logging_level route?
Why don't you use operating system signals for that? For example, on UNIX the USR1 and USR2 signals are free for your application to use:
config/initializers/signals.rb:
trap('USR1') do
  Rails.logger.level = Logger::DEBUG
end

trap('USR2') do
  Rails.logger.level = Logger::WARN
end
Then just do this:
kill -SIGUSR1 pid
kill -SIGUSR2 pid
Just make sure you don't override signals your server itself relies on; each server uses various signals for things like log rotation, child-process killing, terminating, and so on.
In Rails console, you can simply do:
Rails.logger.level = :debug
Now all executed code will run with this log level
Since you have to change the level in the running Rails instance, a simple rake task will not work.
I would go with the dedicated route.
Instead of a shared secret, I would use the app's standard user authentication (if your app has users) and restrict access to admin/superuser.
In your LogController, try this:
def setlevel
  Rails.logger.level = Logger.const_get(params[:level].upcase)
rescue NameError
  logger.info("Logging level #{params[:level]} not supported")
end
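Wired together with the admin-only suggestion above, it could look something like this (the require_admin! filter is hypothetical; use whatever your authentication provides):
# config/routes.rb
match "/set_logging_level/:level" => "log#setlevel"

# app/controllers/log_controller.rb
class LogController < ApplicationController
  before_filter :require_admin!  # hypothetical admin-only check

  def setlevel
    Rails.logger.level = Logger.const_get(params[:level].upcase)
    render text: "log level set to #{params[:level]}"
  rescue NameError
    logger.info("Logging level #{params[:level]} not supported")
    head :unprocessable_entity
  end
end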
You can also use gdb to attach to the running process, set the Rails.logger to debug level, and then detach. I have created the following one-liner to do this for my puma process:
gdb attach $(pidof puma) -ex 'call(rb_eval_string("Rails.logger.level = Logger::DEBUG"))' -ex detach -ex quit
NOTE: pidof will return multiple pids, in descending order, so if you have multiple processes with the same name this will only run on the first one returned by pidof. The others are discarded by the "gdb attach" command with the message "Excess command line arguments ignored. (26762)", which you can safely ignore if you only care about that first process.
Using rufus-scheduler, I created this schedule:
require 'rufus-scheduler'

scheduler = Rufus::Scheduler.new

scheduler.every 1.second do
  file_path = "#{Rails.root}/tmp/change_log_level.#{Process.pid}"
  if File.exist? file_path
    log_level = File.read(file_path).strip
    case log_level
    when "INFO"
      Rails.logger.level = Logger::INFO
      Rails.logger.info "Changed log_level to INFO"
    when "DEBUG"
      Rails.logger.level = Logger::DEBUG
      Rails.logger.info "Changed log_level to DEBUG"
    end
    File.delete file_path
  end
end
Then, the log level can be changed by creating a file under tmp/change_log_level.PID, where PID is the process id of the Rails process. You can create a rake/capistrano task to create these files, allowing you to quickly switch the log level of your running production server.
Just remember to start rufus in the worker processes if you are using unicorn or similar.
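A hypothetical rake task to go with it, assuming your server writes pid files to tmp/pids/ (adjust the glob for your setup):
# lib/tasks/log_level.rake
desc "Ask every running Rails process to switch its log level"
task :set_log_level, [:level] => :environment do |_t, args|
  Dir.glob(Rails.root.join('tmp', 'pids', '*.pid')).each do |pid_file|
    pid = File.read(pid_file).strip
    File.write(Rails.root.join('tmp', "change_log_level.#{pid}"), args[:level])
  end
end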
Here's what I tried to do:
irb(main):008:0> c.title = "Another Test"
=> "Another Test"
irb(main):009:0> c.save
(0.7ms) BEGIN
FriendlyId::Slug Load (1.0ms) SELECT "friendly_id_slugs".* FROM "friendly_id_slugs" WHERE "friendly_id_slugs"."sluggable_type" = 'Contest' AND (slug = 'another-test-challenge' OR slug LIKE 'another-test-challenge--%') AND (sluggable_id <> 64) ORDER BY LENGTH(slug) DESC, slug DESC LIMIT 1
(0.5ms) ROLLBACK
=> false
When I try to do this in my app (i.e. using an edit form), I get this issue:
2013-01-10T17:53:47+00:00 app[web.2]: cache: [POST /mycontroller/this-is-the-old-title] invalidate, pass
I can't seem to edit the title of my object, which also determines the friendly id associated with my object.
c.save will run the validations on that object; if any validation fails, it will roll back the SQL transaction.
When your transaction gets ROLLBACK, you can ask the object for c.errors.full_messages to see which validations failed, or use c.save! instead of c.save, which raises an exception when validations fail.
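A minimal console sketch of that (the example message is just an illustration of what a failed slug validation might produce):
c.title = "Another Test"
if c.save
  puts "saved"
else
  # Lists exactly which validations failed and why,
  # e.g. something like ["Slug has already been taken"]
  puts c.errors.full_messages.inspect
end

# Or let the failure raise instead of returning false:
c.save!  # raises ActiveRecord::RecordInvalid with the validation messages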