I'm developing a Rails application that has a background process that updates some user information. To do so, the process has to delete all the existing information (stored in another table) and fetch new information from the Internet. The problem is that if something goes wrong in the meantime, users have no information until the process runs again.
Is there something I can do like:
transaction = EntityUser.delete_all(:user_id => @current_user.id)
# instructions that add the new entities
# ...
# ...
transaction.commit
Can anyone suggest something I can do to avoid this kind of problem?
Thank you.
Read about ActiveRecord::Transactions. You can wrap everything in a transaction block:
ActiveRecord::Base.transaction do
  EntityUser.delete_all(:user_id => @current_user.id)
  # ...
end
If something goes wrong, you can raise ActiveRecord::Rollback within that block to revert all the changes made to the database inside it.
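For example, a minimal sketch, where fetch_entities_for is a hypothetical helper standing in for your download step:

ActiveRecord::Base.transaction do
  EntityUser.delete_all(:user_id => @current_user.id)

  # Hypothetical helper that fetches the fresh data from the Internet;
  # if it raises on a network error, the transaction also rolls back.
  new_entities = fetch_entities_for(@current_user)

  # Undo the delete_all and keep the old rows if nothing came back.
  raise ActiveRecord::Rollback if new_entities.empty?

  new_entities.each { |attrs| EntityUser.create!(attrs.merge(:user_id => @current_user.id)) }
end

Note that ActiveRecord::Rollback is swallowed by the transaction block, so execution continues normally after the block without the exception propagating.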
Another thought is to soft-delete the EntityUser instance(s) for a User instead of actually removing them from the database. There are gems, such as acts_as_paranoid, to help you with this.
You would:
1. Soft-delete the EntityUser instance(s)
2. Fetch and build the new EntityUser instance(s)
3. (Optionally) flush the soft-deleted instance(s) from the database if everything went well
If something goes wrong using this method, it's easy to just un-delete the soft-deleted records.
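Here is a rough hand-rolled sketch of that flow, assuming a deleted_at datetime column on entity_users and a hypothetical fetch_new_entities helper (the query syntax is Rails 4+; acts_as_paranoid wraps the same idea in model callbacks for you):

user_scope = EntityUser.where(:user_id => current_user.id)

# 1. Soft-delete the existing records instead of removing them.
user_scope.where(:deleted_at => nil).update_all(:deleted_at => Time.current)

begin
  # 2. Fetch and build the new EntityUser instance(s).
  fetch_new_entities(current_user).each do |attrs|
    EntityUser.create!(attrs.merge(:user_id => current_user.id))
  end

  # 3. (Optionally) flush the soft-deleted rows now that everything went well.
  user_scope.where.not(:deleted_at => nil).delete_all
rescue
  # Something went wrong: un-delete the old records so users still have data.
  user_scope.where.not(:deleted_at => nil).update_all(:deleted_at => nil)
  raise
end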
I have set up a very simple Rails 5 project to narrow down my problem:
https://github.com/benedikt-voigt/capybara_js_demo
In this project, the data mutation done by the Capybara JS test is not deleted, neither by Rails nor by the DatabaseCleaner I added.
The following great blog argues that no DatabaseCleaner is needed:
http://brandonhilkert.com/blog/7-reasons-why-im-sticking-with-minitest-and-fixtures-in-rails
but this works only for fixtures, not for the mutations done by an out-of-thread Capybara test.
I added DatabaseCleaner, but that did not work either.
Does anybody have a sample setup?
From a quick look at your test, I would say it's leaving data behind because the data is actually being added after DatabaseCleaner cleans. The click_on call occurs asynchronously, so when your assert_no_content call happens there's no guarantee that the app has handled the request yet or that the page has changed. Since the current page doesn't have the text 'Name has already been taken' on it, the assertion passes and the database gets cleaned. While that is happening, the click gets processed by the app, and the new data is created after the cleaning has occurred. You need to check/wait for content that will appear on the page after the click - something like
page.assert_text('New Foo Created')
You should only assert that content is absent once you already know the page has changed, or when you're expecting something to disappear from the current page.
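For example, a sketch assuming a hypothetical new_foo_path route and a hypothetical 'New Foo Created' flash message on success:

test 'creating a foo' do
  visit new_foo_path
  fill_in 'Name', with: 'Bar'
  click_on 'Create Foo'

  # assert_text retries until the text appears or Capybara times out,
  # which guarantees the request has been processed before we move on.
  page.assert_text('New Foo Created')

  # Only now is it safe to assert that something is absent.
  page.assert_no_text('Name has already been taken')
end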
I have now solved the problem by forcing everything onto a single shared DB connection:
class ActiveRecord::Base
  mattr_accessor :shared_connection
  @@shared_connection = nil

  def self.connection
    @@shared_connection || ConnectionPool::Wrapper.new(:size => 1) { retrieve_connection }
  end
end
ActiveRecord::Base.shared_connection = ActiveRecord::Base.connection
as described here:
https://mattbrictson.com/minitest-and-rails
I uploaded the working repo here:
https://github.com/benedikt-voigt/capybara_js_demo_working
In our web application, we observed the following:
GetUser/CreatMembershipEntities/ExplicitLoadFromAssembly seems quite expensive.
We also noticed that CreateEntityConnection is being called - Entity Framework?
I'm not entirely convinced that EF was configured correctly for this application. If it was, and was in use, I wouldn't expect new connections to be initiated for every call - yes/no?
Is there a way to streamline this and avoid a major code refactoring?
The use of System.Web.Security.Membership.GetUser() seems like a biggie here. Instead of using the MembershipProvider to create users, what about just executing a stored procedure that does the same thing?
I have found the following code, which is ridiculous as far as I am concerned: it causes a new call for each user until a unique username can be generated:
For i As Integer = 1 To Integer.MaxValue
    ' Generate unique username
    If System.Web.Security.Membership.GetUser(userName & i) Is Nothing Then
        ' Increment value until no duplicate username found
        userName = userName & i
        Exit For
    End If
Next
-- UPDATE --
I have modified the question slightly...
We were able to run up to 20 users; then the IIS server would tank. Does the GetUser() method create a brand-new connection every time? It looks like it, based on the results. How can I ensure that GetUser() actually uses the db context rather than spinning up its own connections?
I am using notifications in Rails 3 (as described in this Railscast: http://asciicasts.com/episodes/249-notifications-in-rails-3) to log slow SQL queries produced by my app. But these include queries that run against my read-only database, so I would like to know whether there is a way to tell which database an SQL query is executed on. Any help is much appreciated. My code is below.
ActiveSupport::Notifications.subscribe 'sql.active_record' do |*args|
  begin
    event = ActiveSupport::Notifications::Event.new(*args)
    if event.duration > 250
      Rails.logger.error "[SLOW QUERY] | #{event.duration}ms | #{event.payload[:sql]}"
    end
  rescue
    # swallow errors so logging can never break a request
  end
end
The key point you may be missing is that the payload contains a connection_id element, which is just a Ruby object_id. So, you should be able to do something like this:
def get_connection_host(payload)
  connection_object_id = payload[:connection_id]
  return nil unless connection_object_id.present?

  connection_object = ObjectSpace._id2ref(connection_object_id)
  return nil unless connection_object

  # If the connection object descends from ActiveRecord::Base, it'll have this.
  if connection_object.respond_to?(:connection_config)
    return connection_object.connection_config[:host]
  end

  # If we're dealing with straight SQL, the AR connection adapter will be
  # the connection object and will have this instance variable.
  if connection_object.instance_variable_defined?('@connection_options')
    return connection_object.instance_variable_get('@connection_options').try(:[], :host)
  end
rescue => e
  # log the error, re-raise, or do whatever is appropriate in your context
end
This code isn't super future-proof, since it relies on instance variables, and I'm not 100% sure what would happen if the connection object had already been garbage collected. However, it should give you some ideas.
The suggestion to use ActiveRecord::Base.connection makes sense, but I don't think it's reliable if you're working in an application that connects to multiple databases, which is exactly where you might really want to do some type of filtering by the host or database name.
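Tying it together, here is a sketch of the original subscriber tagging each slow query with the host returned by get_connection_host, falling back to 'localhost' when none is reported:

ActiveSupport::Notifications.subscribe 'sql.active_record' do |*args|
  event = ActiveSupport::Notifications::Event.new(*args)
  if event.duration > 250
    host = get_connection_host(event.payload) || 'localhost'
    Rails.logger.error "[SLOW QUERY] | #{host} | #{event.duration}ms | #{event.payload[:sql]}"
  end
end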
You can access this through ActiveRecord::Base.connection.raw_connection.host. If this returns nil, you're connecting to a local database, probably via a socket; in that case, you can assume the host is localhost.
I have added new fields to my Devise user, but now I can't update the student record, although I get the message:
You updated your account successfully.
But the DB is never updated!
The new fields have attr_accessible and attr_accessor.
Is it because there is a foreign key in the new fields? I have added country_id to associate the user with his country; is this a reason for not updating?
How can I debug the DB error that occurred? I tried using update_attributes! in the Devise method update_with_password, but no luck: no errors, just 'You updated your account successfully.'
I've noticed that no SQLite UPDATE command is issued in the development server log. Why?
Any help please?
I found the solution: I should not use attr_accessor, as it is for attributes that are not stored directly in the DB.
I hope this will help someone.
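To illustrate the distinction, a minimal sketch where country_id stands in for one of the new columns:

class User < ActiveRecord::Base
  # DB-backed column: whitelist it for mass assignment (Rails 3) and nothing more.
  attr_accessible :country_id

  # attr_accessor would define plain in-memory getters/setters that shadow
  # the column accessors, so assigned values would never reach the database:
  # attr_accessor :country_id   # <-- this was the bug
end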
Try raising an exception or logging by adding an after_filter to your update action on the UserController. Try overriding the controller action (and calling super) if you don't have a hook into that code.
In my ASP.NET application, I open and close/flush the NHibernate session at the beginning and end of each request.
With this setup, I thought it would result in things like:
Entity e = EntityDao.GetById(1);
e.Property1 = "blah";
EntityDao.MakePersistant(e);

e = EntityDao.GetById(1);
e.Property1; // this won't be "blah", it will be the old value, since the request hasn't flushed
But I noticed that the value returned was the most recently updated value.
Someone responded that it's because of the way I have my identity set up?
Can someone explain this behaviour? Does it mean I don't need to call Flush to ensure changes are persisted to the db?
I believe (but could be mistaken) that NHibernate's caching and change-tracking are the reasons for this.
Since the Session instance is still active, NHibernate is tracking the changes you made to 'e'. When you ask for the entity again, it includes those changes in the object it returns. What happened when you called MakePersistant (which I assume calls Session.SaveOrUpdate(e)) is that you told that Session instance you are going to save those changes, so it knows about them and will still show them when you call Session.Get(id) again.
If you instead started another Session and then called Session.Get(id), you wouldn't see those changes unless you had called Flush (or committed the Transaction - and you should be using a Transaction here), since the new Session knows nothing of the changes made in the other one.
To answer your other question: yes, you still need to call Flush or commit a Transaction to ensure changes are written to the database. One neat thing is that you don't actually need to call SaveOrUpdate(e). For example, this code will cause an UPDATE to be issued to the database:
using (var session = SessionFactory.OpenSession())
using (var trans = session.BeginTransaction())
{
    var e = session.Get<Entity>(id);
    e.Name = "New Name";
    trans.Commit();
}
NHibernate knows to update 'e' because it was tracking the changes made to it during that Session; when the transaction is committed, they are written. Note that this is the default behavior, and I believe it can be changed if you want to require that .SaveOrUpdate() be called.