Rails 3 - Devise with acts_as_audited possible?

I'd like to use Devise with acts_as_audited.
I have googled it, but the results weren't very clear.
What are its pros and cons?

I use PaperTrail here, which is newer but much the same thing, and the top of my Devise User model looks like this:
class User < ActiveRecord::Base
  has_paper_trail
end
And now I have a growing versions table in my DB with a row for every CRUD action on the User model.
The benefit is that all previous versions of your model's data are saved, stored as YAML, allowing you to roll back/undo.
The cons? Only database size and perhaps a small performance hit at write/update time.
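For a concrete feel of the undo side, here's a minimal sketch; the record id is illustrative, and reify is PaperTrail's method for rebuilding the object as it was before a given version:

user = User.find(1)                  # any versioned record
user.versions.size                   # grows with every create/update/destroy
previous = user.versions.last.reify  # the object as it stood before the last change
previous.save                        # rolls the record back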

Related

How do I add a (SQL) database to my Sinatra application?

When I started coding my Sinatra application I had never used it before. Note that I had, and still have, no experience with RoR. I had one .rb file and one .haml file and was happy. Now I have had to split the .rb file into about 10 'library' files as the whole application gets more and more complex.
I store some application logs/info in CSV files, and now I am getting conflicts when accessing the CSV file. So I think I need to introduce a 'proper' database solution. I want it to be part of my Ruby (Sinatra) application.
How can I introduce a 'light' SQL database into my Sinatra application?
I am on ruby 1.8.7 (2010-08-16 patchlevel 302) [i386-mingw32] soon upgrading to 1.9 (hopefully)
I'd recommend looking at Sequel. It's very flexible and powerful, and works well with SQLite, MySQL, Postgres, Oracle, and other DBMSes. It's not opinionated about how you talk to the database; you can use it as an ORM or with simple datasets, and it allows embedded SQL or more programmatic approaches.
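To give a taste, here's a minimal self-contained Sequel sketch against SQLite; the logs table and its columns are made up for illustration:

require 'sequel'

DB = Sequel.sqlite('app.db')   # file-backed SQLite database

DB.create_table? :logs do      # create_table? is a no-op if the table exists
  primary_key :id
  String   :message
  DateTime :created_at
end

logs = DB[:logs]               # a plain dataset; no ORM layer required
logs.insert(:message => 'hello', :created_at => Time.now)
logs.where('created_at > ?', Time.now - 3600).each { |row| p row }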
For an ORM, both ActiveRecord and Sequel are recommended. As for the database, I guess SQLite3 will be good enough for your needs. You could also choose MySQL or PostgreSQL.
If you want to use active_record, you'll find this article very useful.
And if Sequel is the choice, just read Sequel documents here.
After the gems are installed, you can start writing some code to connect to the DB, then maybe some migration tasks to build the database tables (and don't forget to build some corresponding models). Both gems have similar syntax for migrations. After that, import your CSV data and you're done.
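For example, a Sequel migration file plus a one-off CSV import might look roughly like this (file, table, and column names are all assumptions; you'd run the migration with Sequel's sequel -m command):

# migrations/001_create_events.rb
Sequel.migration do
  up do
    create_table(:events) do
      primary_key :id
      String :name
      String :message
    end
  end
  down do
    drop_table(:events)
  end
end

# one-off import of the existing CSV logs, assuming DB is a connected Sequel database
require 'csv'
CSV.foreach('logs.csv') do |row|
  DB[:events].insert(:name => row[0], :message => row[1])
end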
I have had no trouble using either Active Record or DataMapper to add object persistence to my Sinatra apps. People also tell me Sequel is very good, but philosophically it is not worlds apart from Active Record imho.
Active Record and Sequel favour a more database-centric approach, whereby you spell out your tables as a set of database and table definitions in a collection of migration files and merge them into a schema which is then used to build or update your database tables. If you really care about the underlying SQL database then one of these is for you. I find them to be six of one, half a dozen of the other.
DataMapper is more object-centric and lets you define the properties and object relationships you need in your object's own class definition; then, when your app launches, you make sure you call DataMapper.auto_upgrade! and it upgrades the database to suit your object graph. The upside is that you only have one place to look to find what properties your object might have. The downside is you have less control over the specifics of the underlying database, though it's not impossible to tightly define the mappings. DataMapper works well when you care about object graphs more than database tables.
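A minimal DataMapper sketch of that flow might look like this (the connection string and model are illustrative):

require 'data_mapper'  # meta-gem pulling in dm-core and friends

DataMapper.setup(:default, "sqlite3://#{Dir.pwd}/app.db")

class Post
  include DataMapper::Resource
  property :id,    Serial
  property :title, String
  property :body,  Text
end

DataMapper.finalize       # sanity-check the model definitions
DataMapper.auto_upgrade!  # issue whatever DDL the models now require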
The good news is they pretty much all work in the same way once you have your mappings from object graph to SQL database tables defined. All support lazy or pre-emptive loading of related collections of objects, many_to_many relationships, polymorphism, etc, and tend to vary only in configuration and seeding details.
I often start projects using DataMapper just for its speed of throwing up and tearing down database schemas, as the app's object graph is still in flux; I refactor quickly to use Active Record when the schema has settled down. Next project I think I'll give Sequel a go though, as people do seem to rave about it.
I have had success using DataMapper with Sinatra, but like the other posts say, you can also use Sequel and Active Record. One advantage to using Active Record, though, is that if you ever want to use/learn RoR, Active Record is the default ORM, so that might be something you want to consider.
If you don't want to go the ORM route you can always use the sqlite-ruby gem, which will let you create and run SQL queries directly. Here is some example code from the website http://sqlite-ruby.rubyforge.org/
require 'sqlite'

db = SQLite::Database.new("data.db")  # open (or create) the database file
db.execute("select * from table") do |row|
  p row                               # each result row is yielded to the block
end
db.close

Retrieving data across 5 tables in has_many :through

Issue: Is there a better way to model the following, or to build a basic recommendation system, than the database diagram below? If an extremely lengthy answer is necessary, you could instead just point me in the right direction and suggest things to research further.
I'm building a rudimentary event recommendation system by allowing users to answer questions and storing the user-answer relationship in the responses model. Each question's answer relates to a tag. Each event is also tagged. Thus, I should be able to provide users with recommended events by matching tags across this 5-table relationship (sketched below). However, it seems like I would have to chain several has_many :through relationships to accomplish this, which I don't believe is a preferred approach in Rails.
Would it be better to instead create a relationship from users to events via a background rake task or something, computing the relationship after questions are answered? Am I missing the concepts completely here and looking at this from the wrong angle? Eventually this system would be replaced with a more robust algorithm, perhaps using Mahout or something, but for now I'm just trying to get a simple proof of concept working.
Here's a link to the database diagram: Database Diagram
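For reference, the chain being described might be wired up roughly like this; the model names are inferred from the question, so treat it purely as a sketch:

class User < ActiveRecord::Base
  has_many :responses
  has_many :answers, :through => :responses
end

class Response < ActiveRecord::Base
  belongs_to :user
  belongs_to :answer
end

class Answer < ActiveRecord::Base
  belongs_to :question
  belongs_to :tag
end

# events are assumed to be tagged through a taggings join table
class Event < ActiveRecord::Base
  has_many :taggings
  has_many :tags, :through => :taggings
end

# given some user, candidate events sharing that user's answer tags:
Event.joins(:taggings).where(:taggings => { :tag_id => user.answers.map(&:tag_id) })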

ActiveRecord::Base.send(:descendants) doesn't return all models unless touched

Update: This problem no longer exists in Rails 3.2
I am trying to get an array containing all the models in my rails 3 application. I am trying:
ActiveRecord::Base.send(:descendants)
for the same. A similar discussion happened in the question: Is there a way to get a collection of all the Models in your Rails app?. As pointed out in one of the answers, we need to touch the models for the models to show up. That is precisely the problem I am facing.
There are more than a dozen models in my rails app, but
ActiveRecord::Base.send(:descendants)
returns an array of size two. The array has just the User and ActiveRecord::SessionStore::Session models. I don't get the other models until I touch them by, say, invoking Comment.new.
How can I get all the models listed without touching all the models?
An additional piece of information that might be useful: I am using Devise for authentication. Maybe Devise is doing something with the User model that I am not doing with the other models.
Thanks a lot in advance.
If cache_classes is off (by default it's off in development and on in production), run this first:
Rails.application.eager_load!
You need to load the models first:
# manually require every model file so that descendants can see them
Dir[Rails.root + "app/models/**/*.rb"].each do |path|
  require path
end
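Putting the two together, a quick console sanity check might look like this (the model names in the result are examples):

Rails.application.eager_load!                     # force-load everything under app/
ActiveRecord::Base.send(:descendants).map(&:name) # => ["User", "Comment", ...]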

Rails: mixing NoSQL & SQL Databases

I'm looking for the best way (aka architecture) to have different kinds of DBs (MySQL + MongoDB) backing the same Rails app.
I was speculating on a main Rails 3.1 app, mounting Rails 3.1 engines, each linking to a different kind of DB ...
... or having a main Rails 3.0.x app routing to a Sinatra endpoint for each MySQL/MongoDB instance ...
Do you think it's possible? Any ideas or suggestions?
I notice some other similar questions here, but I think that "mounting apps" is moving fast in Rails 3.1 / Rack / Sinatra and we all need to adjust our paradigms.
Thanks in advance
Luca G. Soave
There's no need to completely over-complicate things by running two apps just to have two types of database. It sounds like you need DataMapper. It'll do exactly what you need out of the box. Get the dm-rails gem to integrate it with Rails.
In DataMapper, unlike ActiveRecord, you have to provide all the details about your underlying data store: what fields it has, how they map to the attributes in your models, what the table names are (if it's in a database), what backend it uses, etc.
Read the documentation... there's a bucket-load of code to give you an idea.
Each model is just a plain old Ruby object. The class definition just mixes in DataMapper::Resource, which gives you access to all of the DataMapper functionality:
class User
  include DataMapper::Resource

  property :id,            Serial
  property :username,      String
  property :password_hash, String
  property :created_at,    DateTime
end
You have a lot of control, however. For example, I can specify that this model is not stored in my default data store (repository) and that it's stored in one of the other configured data stores (which can be a NoSQL store, if you like).
class User
  include DataMapper::Resource

  storage_names[:some_other_repo] = 'whatever'

  # ... SNIP ...
end
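The repositories themselves are configured with DataMapper.setup; a Mongo-backed one would come from an extra adapter gem such as dm-mongo-adapter (the connection URIs here are examples):

DataMapper.setup(:default,         'mysql://localhost/myapp')
DataMapper.setup(:some_other_repo, 'mongo://localhost/myapp')

# run a block of work against the non-default repository
DataMapper.repository(:some_other_repo) { User.all }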
Mostly DM behaves like ActiveRecord on steroids. You get all the basics, like finding records (except you never have to use the original field names if your model abstracts them away):
new_users = User.all(:created_at.gte => 1.week.ago)
You get validations, you get observers, you get aggregate handling... and you get a bunch of other stuff, like strategic eager-loading (which solves the n+1 query problem), lazy loading of large text/blob fields, and multiple-repository support. The query logic is much nicer than AR's, in my opinion. Just have a read of the docs. They're human-friendly, not just an API reference.
What's the downside? Well, many gems don't take into account that you might not be using ActiveRecord, so there's a bit more searching to do when you need a gem for something. This will get better over time though, since before Rails 3.x seamlessly integrating DM with Rails wasn't so easy.
I don't fully understand your question, i.e.:
what is the problem you are facing right now using Mongo and MySQL in the same app, and
what's the reason for going with multiple Rails apps dealing with different DBs?
Though I am not an expert in Ruby & Rails (I picked them up a few months ago), I'd like to add something here.
I am currently building a Rails app utilizing both Mongo and MySQL in the back end. Mongoid & ActiveRecord are the drivers: MySQL for transactions and Mongo for all other kinds of data (geospatial, mainly). It's quite straightforward. You can create different models inheriting from Mongoid and ActiveRecord:
class Item
  include Mongoid::Document
  field :name,     :type => String
  field :category, :type => String
end

and

class User < ActiveRecord::Base
end
And you can query both in much the same way (except for complex SQL joins; Mongoid also has some additional query patterns for geospatial queries):
Item.where(:category => 'car').skip(0).limit(10)
User.where(:name => 'ram')
It's a breeze. But there are some important points you need to know:
Create your ActiveRecord models before the Mongoid models. Once Mongoid is activated (rails g mongoid:config adds mongoid.yml), all the scaffolding and generators work against MongoDB. Otherwise, you'll have to delete mongoid.yml every time before creating ActiveRecord models.
And don't use Mongoid in a relational way. I know Mongoid provides lots of options to define relations; belongs_to relations, for example, store the reference ids in child documents, which is quite the opposite of Mongo's DBRef. It gets greatly confusing when you leave the Mongo idioms in favour of an ActiveRecord feel, so try to stick with the document nature of it. Use embedding and DBRefs whenever necessary (maybe someone can correct me if I'm wrong). A sketch of the embedded style is below.
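As an illustration, with made-up model and field names, embedding looks like this:

class Item
  include Mongoid::Document
  field :name, :type => String
  embeds_many :reviews   # children are stored inside the item document itself
end

class Review
  include Mongoid::Document
  field :body, :type => String
  embedded_in :item, :inverse_of => :reviews
end

Item.first.reviews.create(:body => 'nice')  # no separate collection involved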
Still, Mongoid is a great piece of work. It's fully loaded with features.

Why use database migrations instead of a version controlled schema

Migrations are undoubtedly better than just firing up phpMyAdmin and changing the schema willy-nilly (as I did during my PHP days), but after using them for a while, I think they're fatally flawed.
Version control is a solved problem. The main function of migrations is to keep a history of changes to your database. But storing a different file for each change is a clumsy way to track them. You don't create a new version of post.rb (or a file representing the delta) when you want to add a new virtual attribute -- why should you create a new migration when you want to add a new non-virtual attribute?
Put another way, just as you check post.rb into version control, why not check schema.rb into version control and make the changes to the file directly?
This is functionally the same as keeping a file for each delta, but it's much easier to work with. My mental model is "I want table X to have such and such columns (or really, I want model X to have such and such properties)" -- why should you have to infer from this how to get there from the existing schema; just open up schema.rb and give table X the right columns!
But even the idea that classes wrap tables is an implementation detail! Why can't I just open up post.rb and say:
class Post
  t.string :title
  t.text   :body
end
If you went with a model like this, you'd have to make a decision about what to do with existing data. But even then, migrations are overkill -- when you migrate data, you're going to lose fidelity when you use a migration's down method.
Anyway, my question is, even if you can't think of a better way, aren't migrations kind of gross?
why not check schema.rb into version control and make the changes to the file directly?
Because the database itself is not in sync with version control.
For instance, you could be using the head of the source tree. But you're connecting to a database that was defined as some past version, not the version you have checked out. The migrations allow you to upgrade or downgrade the database schema from any version and to any version, incrementally.
But to answer your last question, yes, migrations are kind of gross. They implement a redundant revision control system on top of another revision control system. However, neither of these revision control systems is really in sync with the database.
Just to paraphrase what others have said: migrations allow you to protect the data as your schema evolves. The notion of maintaining a single schema.rb file is attractive only until your app goes into production. Thereafter, you'll need a way to migrate your existing users' data as your schema changes.
There are also data-related issues that are important to consider, which migrations solve.
Say an old version of my schema has a feet and inches column. For efficiency purposes, I want to combine that into just an inches column to make sorting and searching easier.
My migration can combine all of the feet and inches data into the inches column (feet * 12 + inches) while it's updating the database (i.e. just before it removes the feet column)
Obviously this being in a migration makes it automatically work when you later apply the changes to your production database.
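Such a migration might look roughly like this; the people table and its columns are assumptions for illustration:

class CombineFeetAndInches < ActiveRecord::Migration
  def self.up
    # fold feet into inches while both columns still exist
    execute "UPDATE people SET inches = feet * 12 + inches"
    remove_column :people, :feet
  end

  def self.down
    add_column :people, :feet, :integer, :default => 0
    execute "UPDATE people SET feet = inches / 12, inches = inches % 12"
  end
end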
As it stands, they're annoying and inadequate but quite possibly the best option we have available to us at present. Quite a few smart people have spent quite a lot of time working on the problem and this, so far, is about the best they've been able to come up with. After about 20 years of mostly hand-coding database version updates, I came very rapidly to appreciate migrations as a major improvement when I found ActiveRecord.
As you say, version control is a solved problem. Up to a point I'd agree: it's very solved for text files in particular, less so for other file types and not really very much at all for resources such as databases.
How do migrations look if you view them as version control deltas for databases? They're the sum of the deltas you have to apply to get a schema from one version to another. I'm not aware that even git, for all its super-powerfulness, can take two schema files and generate the necessary DDL to do that.
As far as declaring table content in the model, I believe that's what DataMapper does (no personal experience). I think there may be some DDL inference capabilities there as well.
"even if you can't think of a better way, aren't migrations kind of gross?"
Yes. But they're less gross than anything else we have. Do please let us know when you've completed the non-gross alternative.
I suppose given "even if you can't think of a better way", then yes, in the grand scheme of things, migrations are kind of gross. So are Ruby, Rails, ORMs, SQL, web apps, ...
Migrations have the (not insignificant) advantage that they exist. Gross-but-exists tends to win out over Pleasant-but-nonexistent. I'm sure there probably are pleasant and nonexistent ways to migrate your data, but I'm not sure what that means. :-)
OK, I'm going to take a wild guess here and say that you're probably working all by yourself. In a group development project, the ability of each individual to take responsibility for just his/her own changes to the database (the ones required by the code that developer is writing) is much, much more important.
The alternative is that larger groups of programmers (e.g. 10-15 Java developers where I work) end up relying on a couple of dedicated full-time database administrators to do that, along with their other maintenance, optimization, etc. duties.