I'm new to Rails and trying to understand the relationship between migrations and models. As far as I can tell, a migration only affects the datastore, so after I use scaffolding to create a resource, am I then responsible for keeping the model and migrations in sync? Are there any tools to assist with this?
Sorry if this is an obvious question, I'm still working my way through the docs.
All migrations do is modify the database. Rails handles maintaining the sync between the model and the database.
You can have a users table with id and first_name columns, and your model class might look like this:
class User < ActiveRecord::Base
end
As you can see the model class is empty and you can still pretty much access methods on that class like this:
user = User.new
user.first_name = "Leo"
user.save!
and it will know what to do with it.
Migrations are just files that allow you to modify the database in incremental steps while keeping a sane versioning on the database schema.
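As a toy illustration of that versioning idea (an assumed simplification: real Rails tracks applied versions in a schema_migrations table, and the version numbers below are made up):

```ruby
# Toy sketch of migration versioning: Rails records which timestamped
# migrations have already run and applies only the pending ones, in order.
migrations = {
  20230101120000 => "create users table",
  20230215093000 => "add first_name to users",
}

applied = [20230101120000]                  # versions already run
pending = migrations.keys.sort - applied    # what `rake db:migrate` would run

pending.each { |v| puts "applying #{v}: #{migrations[v]}" }
```

Because each migration is recorded once applied, running `rake db:migrate` repeatedly is safe: already-applied versions are skipped.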
Of course, Rails will complain if you try to call things from your model that don't exist in the database or the ActiveRecord::Base parent class.
user = User.new
user.awesome
# => undefined method `awesome' for #<User:some_object_id>
As for the migrations, you can have multiple migrations that affect one table. Your job is only to know what attributes you've added to a model. Rails will do the rest for you.
A general rule of thumb is that migrations are best suited for data definition: the columns in a table, their types, constraints, and so on. So no, you do not need to keep a migration in sync with your data.
If, in the longer run, your data definition itself changes (a new column or a change in a column's type), just add a new migration specifying the change.
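For example, a hypothetical migration for such a change might look like this (the file name, class name, and column are all illustrative):

```ruby
# db/migrate/20120101120000_add_phone_to_users.rb (illustrative)
class AddPhoneToUsers < ActiveRecord::Migration
  def up
    add_column :users, :phone, :string
  end

  def down
    remove_column :users, :phone
  end
end
```

After `rake db:migrate`, the model picks up the new `phone` attribute automatically; no change to the model class is needed.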
ActiveRecord models are driven, in general, from the database. Any fields defined in the database will (generally) appear automatically as properties in the activerecord model bound to that table.
Altering the model will not change the schema (generally). To alter the model, one would typically define a migration and run it into the database.
Note that nothing stops you defining additional properties on the model with attr_accessor etc, but these will not be persisted by ActiveRecord if there is no column in the schema to which they are bound.
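Since attr_accessor is plain Ruby, the mechanics can be sketched without ActiveRecord at all (in a real model the class would also inherit from ActiveRecord::Base; the nickname attribute here is made up):

```ruby
# A "virtual" attribute: readable and writable in memory, but with no
# backing column, so ActiveRecord would never persist it on save.
class User
  attr_accessor :nickname
end

u = User.new
u.nickname = "Leo"
puts u.nickname
```

In an ActiveRecord model, calling save on such an object would write the column-backed attributes to the database but silently skip nickname.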
I've accepted Leo's answer since he helped me understand things better, and if I had just got to the bottom of the page on migrations I might not have needed to ask: http://guides.rubyonrails.org/migrations.html#what-are-schema-files-for
The mentioned annotate_models gem also sounds useful for improving awareness of the current structure of a model class without having to refer to the schema.
If you want to invert the attr_accessible behavior and, instead of giving a whitelist, give a blacklist, you can do it per model like this:
attr_accessible *(attribute_names - %w[blacklisted attribute names])
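The subtraction there is plain Array arithmetic; with literal lists standing in for the model's attribute_names (the attribute names below are made up for illustration), it behaves like this:

```ruby
# All attributes minus a blacklist yields the whitelist that would be
# splatted into attr_accessible (illustrative names, not a real schema).
attribute_names = %w[id first_name last_name admin]
blacklist       = %w[admin]

whitelist = attribute_names - blacklist
puts whitelist.inspect
```

In the model itself you would then write `attr_accessible *whitelist`, leaving only the blacklisted attributes protected from mass assignment.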
We have a Rails 3 application with a PostgreSQL database (with ~10 tables) mapped by activerecord. Everything's working fine.
However, we would also like to use:
a MongoDB database in order to store images (probably with mongoid gem).
a Neo4j database (probably with neo4j-rails gem) instead of PostgreSQL for some tables.
Using one database with one Rails ORM is simple, thanks to database.yml. But when there's more than one ORM, how can we proceed? Is there a good way to do this? For instance, ActiveHash (and ActiveYaml) can work well with ActiveRecord, so I think it should be possible to have different ORMs working together. Thanks for any tips.
This really depends on the type of ORM. A great way to do this is with inheritance. For example, you can have multiple databases and adapters defined in your database.yml file and talk to them using ActiveRecord's establish_connection method.
# A typical Active record class
class Account < ActiveRecord::Base
...
end
# A new database connection
class NewConnection < ActiveRecord::Base
self.abstract_class = true
establish_connection "users_database"
end
# A new Active record class using the new connection
class User < NewConnection
...
end
The only downside here is that when you are connecting to multiple Active Record databases, migrations can get a little dicey.
Mixing ORMs
Mixing ORMs is easy. For example, for MongoDB (with Mongoid), simply don't inherit from ActiveRecord::Base and include the following in the model you want to store in Mongo:
class Vehicle
include Mongoid::Document
field :type
field :name
has_many :drivers
belongs_to :account
end
ORMs built on top of Active Model play very nicely together. For example, with Mongoid you should be able to define relations to ActiveRecord models; this means you can not only have multiple databases, but they can easily communicate via Active Model.
Well, I had the same problem today using the neo4j gem. I added require 'active_graph/railtie' to my application.rb.
So, when I want to generate a model with ActiveGraph I use rails generate model Mymodel --orm active_graph; the --orm option lets you specify which ORM to use.
Without the --orm option, it will use ActiveRecord by default.
First off, I strongly recommend you do not try to have multiple ORMs in the same app. Inevitably you'll want your Mongoid object to 'relate' to your ActiveRecord object in some way. And there are ways (see below)...but all of them eventually lead to pain.
You're probably doing something wrong if you think you 'need' to do this. Why do you need MongoDB to store images? And if you're using it just as an image store, why would you need Mongoid or some other ORM (or more accurately, ODM)? If you really, really need to add a second data store and a second ORM/ODM, can you spin it off as a separate app and call it as a service from your first one? Think hard about this.
That said, if you really want to go with "polyglot persistence" (not my term), there is a decent gem: https://github.com/jwood/tenacity. It's no longer actively developed, but the maintainer does fix bugs and quickly responds to inquiries and pull requests.
I have a small table in my Rails app that contains static data (user Roles). It shouldn't ever change. I'm wondering if it's possible to lock the table to keep anyone (namely developers) from accidentally changing it in production.
Or should I have not put that data into the database at all? Should it have been hardcoded somewhere to make editing more difficult and, at least, auditable (git blame)?
The right way to do this is with permissions. Change the ownership of the table to another user, and grant the production database user SELECT only.
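One way to apply that from within a Rails app is a migration that issues the GRANT/REVOKE statements directly (a sketch only: PostgreSQL syntax, and the table and database-user names below are assumptions, not from the question):

```ruby
# Hypothetical migration making the roles table read-only for the
# application's database user ("app_user" is an illustrative name).
class LockDownRoles < ActiveRecord::Migration
  def up
    execute "REVOKE INSERT, UPDATE, DELETE ON roles FROM app_user;"
    execute "GRANT SELECT ON roles TO app_user;"
  end

  def down
    execute "GRANT INSERT, UPDATE, DELETE ON roles TO app_user;"
  end
end
```

This assumes the table is owned by a different database role than the one Rails connects as; otherwise the owner can still write to it regardless of grants.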
I would say the right place for these kinds of things is probably the database. That way if they ever need to change, you can change them in the database, and you don't have to redeploy your application (assuming it will notice the change).
You didn't ask this, but your wording brought it to mind so I'll say it anyway: you should never need to explicitly LOCK a table with PostgreSQL. If you think this is something you need to do, you should make sure what you're worried about can actually happen under MVCC and that transactions aren't going to do the right thing for you.
I would probably make use of attr_accessible.
if you write something like:
class Role < ActiveRecord::Base
attr_accessible #none
end
you could at least prevent any mass assignment from the Rails side, but it does not prevent direct modification by developers with database access.
see also this thread: How I can set 'attr_accessible' in order to NOT allow access to ANY of the fields FOR a model using Ruby on Rails?
You can use a trigger to prevent updates to the table (assuming you can't add a new db user).
Or, use a view and ensure all read requests go through it (probably by removing the ActiveRecord class that corresponds to the table).
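The trigger approach could be sketched via execute in a migration like this (PostgreSQL syntax; the trigger, function, and table names are illustrative assumptions):

```ruby
# Hypothetical migration adding a trigger that rejects any write to roles.
class MakeRolesReadOnly < ActiveRecord::Migration
  def up
    execute <<-SQL
      CREATE FUNCTION forbid_roles_change() RETURNS trigger AS $$
      BEGIN
        RAISE EXCEPTION 'roles is read-only';
      END;
      $$ LANGUAGE plpgsql;

      CREATE TRIGGER roles_read_only
      BEFORE INSERT OR UPDATE OR DELETE ON roles
      FOR EACH STATEMENT EXECUTE PROCEDURE forbid_roles_change();
    SQL
  end

  def down
    execute "DROP TRIGGER roles_read_only ON roles;"
    execute "DROP FUNCTION forbid_roles_change();"
  end
end
```

Unlike the permissions approach, this blocks writes from every database user, including superusers running ad-hoc SQL through the same table.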
I have the following scenario:
I'm starting development of a long project (around 6 months) and I need to have some information in the database in order to test my features. The problem is that right now I don't have the forms to insert this information (I will in the future), but I need the information loaded into the DB. What's the best way to handle this, especially considering that once the app is complete, I won't need this process anymore?
As an example, let's say I have tasks that need to be categorized. I've begun working on the tasks, but I need to have some categories loaded into my db already.
I'm working with Rails 3.1 btw.
Thanks in advance!
Edit
About seeds: I've been told that seeds are not the way to go if your data may vary a bit, since you'd have to delete all the information and reinsert it again. Say I want to change or add categories; then I'd have to edit the seeds.rb file, make my modifications, and then delete and reload all the data. Is there another way? Or are seeds definitely the best way to solve this problem?
So it sounds like you'll possibly be adding, changing, or deleting data along the way, intermingled with other data. So seeds.rb is out. What you need to use are migrations: that way you can find and change the data you want through a sequential process, which is exactly what migrations are designed for. Otherwise, I think your best bet is to change the data manually through the rails console.
EDIT: A good example would be as follows.
You're using Capistrano to handle your deployment. You want to add a new category, Toys, to your system. In a migration file you would then add Category.create(:name => "Toys") or something similar inside the migration's change method (in Rails 3.1 there's only that single method), run rake db:migrate locally, test your changes, and commit them; if everything is acceptable, deploy with cap deploy, which will run the new migration against your production database, insert the new category, and make it available for use in the deployed application.
That example aside, it really depends on your workflow. If you think that adding new data via migrations won't hose your application, then go for it. I will say that DHH (David Heinemeier Hansson) is not a fan of it, as he uses migrations strictly for changing the structure of the database over time. If you didn't know, DHH is the creator of Rails.
EDIT 2:
A thought I just had, which would let you skip the notion of using migrations if you weren't comfortable with it. You could 100% rely on your db/seeds.rb file. When you think of "seeds.rb" you think of creating information, but this doesn't necessarily have to be the case. Rather than just blindly creating data, you can check to see if the pertinent data already exists, and if it does then modify and save it, but if it doesn't exist then just create a new record plain and simple.
db/seeds.rb
toys = Category.find_by_name("Toys")
if toys
  toys.name = "More Toys"
  toys.save
else
  Category.create(:name => "More Toys")
end
Run rake db:seed and that code will run. You just need to consistently update the seeds.rb file every time you change your data, so that 1) it's searching for the right data value and 2) it's updating the correct attributes.
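In Rails 3.1 the same update-or-create intent can often be expressed with the dynamic finder Category.find_or_create_by_name("Toys"). As a toy illustration of why such idempotent seeds are safe to re-run (a Hash stands in for the categories table here, since this sketch runs outside Rails):

```ruby
# Idempotent seeding sketch: create the record only if it is missing,
# so running the seed task repeatedly never duplicates data.
store = {}

seed = lambda do |name|
  store[name] ||= { :name => name }   # no-op when the record already exists
end

3.times { seed.call("Toys") }
puts store.size
```

The same pattern extends to updates: find the record first, and only fall back to creation when the lookup comes back empty, as in the snippet above.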
In the end there's no right or wrong way to do this, it's just whatever works for you and your workflow.
The place to load development data is db/seeds.rb. Since you can write arbitrary Ruby code there, you can even load your dev data from external files, for instance.
There is a file called db/seeds.rb; you can instantiate records in it:
user1 = User.create(:email => "user@test.com",
                    :first_name => "user",
                    :last_name => "name",
                    :bio => "User bio...",
                    :website => "http://www.website.com",
                    :occupation => "WebDeveloper",
                    :password => "changeme",
                    :password_confirmation => "changeme",
                    :avatar => File.open(File.join(Rails.root, '/app/assets/images/profiles/image.png')))

user2 = User.create(:email => "user2@test.com",
                    :first_name => "user2",
                    :last_name => "name2",
                    :bio => "User2 bio...",
                    :website => "http://www.website.com",
                    :occupation => "WebDeveloper",
                    :password => "changeme",
                    :password_confirmation => "changeme",
                    :avatar => File.open(File.join(Rails.root, '/app/assets/images/profiles/image.png')))
Just run rake db:seed from the command line to get it into the db.
I have a Rails 3 application that uses different databases depending on the subdomain. I do this by using establish_connection in the ApplicationController.
Now I'm trying to use the delayed_job gem to do some background processing; however, it uses whatever database connection is active at that moment, so it connects to the subdomain database.
I'd like to force it to use the "common" database. I've done this for some models calling "establish_connection" in the model like this:
class Customer < ActiveRecord::Base
establish_connection ActiveRecord::Base.configurations["#{Rails.env}"]
...
end
Any idea how can I do this?
Here is what you need to know. When you include the DelayedJob gem in your app, you create a migration for the table where the jobs are stored, but you don't create a model. That's because DelayedJob already includes a model in the gem (i.e. Delayed::Job). What you need to do is patch this model slightly, just like you did with your own models, and you can do that in an initializer.
You may already have an initializer to configure DelayedJob; if so, you can do this there. If not, create one in config/initializers; we'll call it delayed_job_config.rb. Now add the following to it:
Delayed::Job.class_eval do
establish_connection ActiveRecord::Base.configurations["#{Rails.env}"]
end
We've done to the DelayedJob model the same thing you did to your own models. Now DelayedJob will use that connection to put jobs in the DB.
I made a bad decision as I was designing a MongoDB database to embed a model rather than reference it in an associated model. Now I need to make the embedded model a referenced model, but there is already a healthy amount of data in the database (or document?).
I'm using Mongoid, so I reasoned I could just change embedded_in to referenced_in. Before I start, I figured I'd ask people who know better than I do: how can I transition the embedded data already in the database over to the associated model's own collection?
class Building
embeds_many :landlords
..
end
class Landlord
embedded_in :building
...
end
Short answer: incrementally.
1. Create a copy of Landlord; name it Landlord2.
2. Make it referenced in Building.
3. Copy all data from Landlord to Landlord2.
4. Delete Landlord.
5. Rename Landlord2 to Landlord.
Ideally, users should not be able to CRUD Landlord during steps 3-5. You can still get away with locking CRUD only for steps 4-5; just make sure you apply any updates that happened during the copying before removing Landlord.
Just changing the model like you have above will not work; the old data will still be in a different structure in the db.
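The copying step could be sketched as a one-off script along these lines (a sketch only: it assumes the Building model from the question plus the temporary Landlord2 model, accesses the raw embedded hashes via attributes, and should be run with rails runner against a copy of the data first):

```ruby
# For each building, copy every embedded landlord hash into the new
# referenced Landlord2 collection, pointing back at the parent building.
Building.all.each do |building|
  embedded = building.attributes['landlords'] || []
  embedded.each do |attrs|
    Landlord2.create!(attrs.except('_id').merge('building_id' => building.id))
  end
end
```

Dropping the embedded '_id' lets Mongoid assign fresh ids in the new collection, while 'building_id' carries the association the referenced relation needs.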
Very similar to the previous answer: one of the things I have done for this kind of migration before is to do it dynamically, while the system is running and being used.
I had the data layer separated from the logic, so it let me add some preprocessors and inject code to do the following.
Let's say we start with the old data model, then release new code that does the following:
On every access to a document, check whether the embedded property exists. If it does, create a new entry associated as a reference, save it to the database, and delete the embedded property from the document. After this had run for a couple of days, most of my data had been migrated, and I just had to run a similar script for everything that had not been touched. This made migrating the data much easier and simpler, and I did not have to run long-running scripts or take the system offline to perform the conversion.
You may not have that requirement, so pick accordingly.