I'm new to ActiveRecord transactions. In the code below, the first update_attributes! causes a WARNING: Can't mass-assign protected attributes: account_type_cdx, and that is OK. But I was surprised that the next line, self.update_attributes!(:purchased => true), is executed and stored in the DB. I was expecting it to ROLLBACK because the first one failed.
I must be missing something...
Any hints?
def complete_purchase(current_user_id, plan_name)
  Rails.logger.debug "COMPLETE PURCHASE"
  user = User.find(current_user_id)
  ActiveRecord::Base.transaction do
    user.update_attributes!(:account_type_cdx => plan_name.to_sym)
    self.update_attributes!(:purchased => true)
  end
end
I followed the advice from this post: http://markdaggett.com/blog/2011/12/01/transactions-in-rails/
Thanks.
Rails ignores attributes that are not explicitly listed in the attr_accessible class call (hence the first update warning). That only logs a warning rather than raising an error, so it doesn't fail the transaction, which is why you're reaching (and finishing) the second update_attributes! normally.
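If you want that first update to raise, and therefore roll the transaction back, one option in Rails 3.x is the strict mass-assignment sanitizer. A minimal sketch, assuming a Rails 3.x app:

```ruby
# config/environments/development.rb (and test.rb) -- Rails 3.x sketch.
# With the :strict sanitizer, assigning a protected attribute raises
# ActiveModel::MassAssignmentSecurity::Error instead of logging a warning,
# so update_attributes! aborts and the surrounding transaction rolls back.
config.active_record.mass_assignment_sanitizer = :strict
```

Alternatively, add :account_type_cdx to attr_accessible on User so the assignment is allowed in the first place.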
Related
I'm attempting to build a table which will act as a queue of batched syncs to a third party service.
The following method should speak for itself; but to be clear, its intention is to add a new updatable object (a polymorphic relationship) with status: :queued to the delayed_syncs table.
There is a uniqueness constraint on the polymorphic relationship + status (updatable_id, updatable_type, status) which should cause updatable objects already in the queue with the :queued status to fail here and fall into the rescue block.
The issue I am seeing is that whenever the SELECT generated by find_by is fired, this entire method fails with a:
ActiveRecord::StatementInvalid
error.
Information I've found around this suggests that a ROLLBACK or RELEASE SAVEPOINT is needed after the failed INSERT, but I'm not sure how I would accomplish that here.
The aforementioned method:
def self.enqueue(updatable:, action:)
  DelayedSync.create(updatable: updatable, status: :queued, action: action)
rescue ActiveRecord::RecordNotUnique
  queued_update = DelayedSync.find_by(updatable: updatable, status: :queued, action: :sync_update)
  if action == :sync_delete && queued_update.present?
    queued_update.sync_delete!
  else
    Rails.logger.debug "#{updatable.class.name} #{updatable.id} already queued for sync, skipping."
  end
end
Rather than rely on exception handling for logic, you can use ActiveRecord transactions to ensure all-or-nothing updates.
Like this:
ActiveRecord::Base.transaction do
  DelayedSync.create!(updatable: updatable, status: :queued, action: action)
end
You can still safely use rescue to handle logging and cleanup.
Docs that have much more detail about this can be found here.
After further digging, I uncovered that the problem was due to how I was invoking this method from an after_save callback. Rails invokes after_save and after_destroy callbacks before the transaction has been closed. Rescuing the ActiveRecord::RecordNotUnique error raised from this callback and then attempting to execute more queries is impossible with Postgres, since the failed statement invalidates the entire transaction. My solution was to switch to the after_commit callback, which provides the same control as after_save and after_destroy via the on: [:create, :destroy] parameter, with the benefit of being executed after the transaction (invalid or not) has closed.
This blog post is a bit dated, but the information near the bottom was immensely helpful and still holds true: http://markdaggett.com/blog/2011/12/01/transactions-in-rails/
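A minimal sketch of that change, assuming a hypothetical model whose saves and destroys should enqueue a sync (the Article model and enqueue_sync callback name here are illustrative, not from the original code):

```ruby
class Article < ActiveRecord::Base
  # after_commit fires once the surrounding transaction has closed, so a
  # rescued RecordNotUnique inside DelayedSync.enqueue no longer poisons
  # an open Postgres transaction the way after_save/after_destroy can.
  after_commit :enqueue_sync, on: [:create, :destroy]

  private

  def enqueue_sync
    # destroyed? is true when the commit came from a destroy.
    action = destroyed? ? :sync_delete : :sync_update
    DelayedSync.enqueue(updatable: self, action: action)
  end
end
```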
I was pulling my hair out trying to figure out why my call to Warden::Manager.before_logout was throwing a NoMethodError for NilClass when I tried to call 'user.my_method'. Then I added a debug puts call to the before_logout block and discovered that it was being called TWICE on each logout - the first time with the user being nil, and then immediately after, with the user object supplied. So I was able to inelegantly get around the exception by changing the call to 'user.my_method if user', but I'm still not comfortable with not knowing why before_logout is getting called twice. Has anyone else seen this? Is it perhaps one of those happens-in-development-only environment anomalies?
Devise.setup do |config|
  Warden::Manager.after_authentication do |user, auth, opts|
    user.update_attributes(is_active: true)
  end

  Warden::Manager.before_logout do |user, auth, opts|
    user.update_attributes(is_active: false) if user
  end
end
This is old, but you probably have two models (scopes in Warden's case): one for User and another for AdminUser. You can use an if statement for just the model/scope you want:
Warden::Manager.before_logout do |user, auth, opts|
  if opts[:scope] == :user
    user.current_sign_in_at = nil
    user.save!
  end
end
Solved it by adding this in config/initializers/devise.rb. Add code for whatever you want to do on logout.
Warden::Manager.before_logout do |user, auth, opts|
  # store whatever you want on logout
  # If not in the initializer it will generate two records (strange)
end
According to the documentation, the Rails has_many association has a clear method. It looks like it executes an SQL delete immediately when called. Is there a canonical way to delete all the child objects and update the association only at the moment of the save call? For example:
@cart.container_items.delete_all_example # looks like `clear` executes SQL at this line
if @cart.save
  # do smth
else
  # do smth
end
This is necessary because there are many changes to the parent object, and they must be committed all together or not at all.
You don't want to delete_all, you want to destroy_all.
Calling delete_all executes a simple SQL delete, ignoring any callbacks and dependent records.
Using destroy_all invokes the destroy method on each object, allowing :dependent => :destroy to work as expected, cleaning up child records.
This does not destroy all objects at the point of save, and there is no canonical way to do that as you're not saving the record. Rails persists destroys at the point of the method call, not at a later save. If you need many destroys to be transactional, wrap them in a transaction:
Cart.transaction do
  @cart.container_items.destroy_all
end
Try this:
Cart.transaction do
  @cart.container_items.delete_all_example # looks like `clear` executes SQL at this line
  if @cart.save
    # success
  else
    # error
    raise ActiveRecord::Rollback
  end
end
ActiveRecord::Rollback is not propagated outside the transaction block. It simply terminates the transaction.
Looks like I'm trying to do a transaction. Some articles to learn more about it:
Transactions in Rails
Active record transactions
Is there a way to retrieve failed validations without checking the error message?
If I have a model with validates :name, :presence => true, :uniqueness => true, how can I determine which validation failed (was it uniqueness or presence?) without doing something like:
if error_message == "can't be blank"
  # handle presence validation
elsif error_message == "has already been taken"
  # handle uniqueness validation
end
There's a relatively new method that lets you do just that; it's not documented anywhere as far as I know, and I just stumbled on it while reading the source code. It's the #added? method:
person.errors.added? :name, :blank
Here's the original pull request: https://github.com/rails/rails/pull/3369
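A sketch of how this distinguishes the two validations, assuming a Person model with the presence and uniqueness validations from the question:

```ruby
person = Person.new(name: nil)
person.valid?

# errors.added? checks the symbolic error type (:blank, :taken), not the
# human-readable message text, so it keeps working if the i18n strings change.
if person.errors.added?(:name, :blank)
  # handle presence validation
elsif person.errors.added?(:name, :taken)
  # handle uniqueness validation
end
```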
ActiveModel::Errors is nothing more than a dumb hash, mapping attribute names to human-readable error messages. The validations (e.g. the presence one) directly add their messages to the errors object without specifying where they came from.
In short, there doesn't seem to be an official way of doing this.
You can see all your errors via the errors method. Try this on an invalid record after a failed save:
record.errors.map {|a| "#{a.first} => #{a.last}"}
Even though I'm pretty sure I know why this error gets raised, I don't seem to know why or how my session is exceeding the 4KB limit...
My app was working fine, but once I deliberately started adding bugs to see if my transactions were rolling back I started getting this error.
To give some background, I'm busy coding a tournament application that (in this section) will create the tournament, add some tournament legs based on the number of teams, and then populate the tournament with some 'ghost fixtures' once the legs have been created.
The flash[:tournament] was working correctly before; using a tournament object, I have access to any AR validation errors as well as data that has been entered on the previous page to create the tournament.
TournamentController.rb
begin
  <other code>
  Tournament.transaction do
    tournament.save!
    Tournament.generate_legs tournament
    Tournament.generate_ghost_fixtures tournament
  end
  flash[:notice] = "Tournament created!"
  redirect_to :action => :index
rescue Exception => e
  flash[:tournament] = tournament
  redirect_to :action => :new, :notice => "There was an error!"
end
Tournament.rb
def self.generate_ghost_fixtures(tournament)
  <other code>
  # Generate the ghost fixtures
  # tournament_legs is a has_many association
  tournament_legs_array = tournament.tournament_legs
  tournament_legs_array.each do |leg|
    number_of_fixtures = matches[leg.leg_code]
    # For the first round of a 32-team tournament, this block will run 16 times to create the matches
    number_of_fixtures.times do |n|
      Fixture.create!(:tournament_leg_id => leg.id, :match_code => "#{leg.leg_code}-#{n+1}")
    end
  end
end
I can do nothing but speculate as to why my session variable is exceeding 4KB.
Is it possible that the tournament object I pass through the flash variable contains all the associations as well?
Here is the dump of my session once I get the error.
Hope this is enough info to help me out :)
Thanks
Session Dump
_csrf_token: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
flash: {:tournament=>#<Tournament id: nil, tournament_name: "asd", tournament_description: "asdasd", game_id: 1, number_of_teams: 16, start_date: "2011-04-30 00:00:00", tournament_style: "single elimination", tournament_status: "Drafting", active: true, created_at: "2011-04-30 10:07:28", updated_at: "2011-04-30 10:07:28">}
player_id: 1
session_id: "4e5119cbaee3d5d09111f49cf47aa8fa"
About the associations: yes, that is possible. Also, saving an ActiveRecord instance in the session is not a recommended approach. You should save only the id. If you need the record in all your requests, use a before filter to retrieve it.
You can read more why is a bad idea at: http://asciicasts.com/episodes/13-dangers-of-model-in-session
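A minimal sketch of that approach for an already-persisted record (the controller and filter names here are illustrative):

```ruby
class TournamentsController < ApplicationController
  before_filter :load_tournament

  private

  # Store only the id in the session and re-fetch the record on each
  # request; a full serialized AR object (especially with associations
  # loaded) can easily blow the 4KB cookie-store limit, a plain integer
  # id cannot.
  def load_tournament
    @tournament = Tournament.find(session[:tournament_id]) if session[:tournament_id]
  end
end
```

Note this only works for saved records; for an unsaved object with validation errors, re-rendering the form (as described below in another answer) is the usual fix.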
The generally accepted and recommended approach is to not use a redirect on error, but a direct render instead. The standard "controller formula" is this:
def create
  @tournament = Tournament.new(params[:tournament])
  if @tournament.save
    redirect ...
  else
    render 'new' # which will have access to the errors on the @tournament object and any other instance variables you may define
  end
end
class Tournament < ActiveRecord::Base
  before_create :set_up_legs
end
On successful saving, you can drop all instance variables (thereby wiping the in-memory state) and redirect to another page. On failure (or exception) you keep the object in memory and render a view template instead (typically the 'new' or 'edit' form page). If you're using standard Rails validation and error handling, then the object will have an errors array that you can just display.
I'd also recommend you use ActiveRecord associations which automatically give you transactions. If you push all this into the model, e.g. a "set_up_legs" method or something, then you can use ActiveRecord error handling. This is part of the "skinny controller, fat model" paradigm.
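A sketch of what pushing this into the model might look like; set_up_legs is the hypothetical callback named above, and the leg_code scheme is invented for illustration (only number_of_teams comes from the original model):

```ruby
class Tournament < ActiveRecord::Base
  has_many :tournament_legs

  # before_create runs inside the same transaction as the save, so if
  # building any leg fails, the tournament INSERT rolls back with it.
  before_create :set_up_legs

  private

  def set_up_legs
    # A single-elimination bracket has log2(teams) rounds, e.g. 4 for 16 teams.
    Math.log2(number_of_teams).to_i.times do |n|
      tournament_legs.build(:leg_code => "R#{n + 1}")
    end
  end
end
```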
In config/initializers/session_store.rb, uncomment the last line with :active_record_store; this stores sessions in the database instead of the 4KB-limited cookie store.
Now restart the server.
I would convert the exception to a string with to_s before assigning it to flash[:tournament].
I had the same error, and it seems that assigning an exception object to a session variable like flash means it takes the whole stack trace with it into the session. Try it; it worked for me.