I have a Rails application and a SQL database that I manage with Oracle SQL Developer. My question is: how do I integrate these two so that I can display the database data?
This is obviously a big subject, but the major steps I can think of are:
For each table in the database that you want to include in the Rails app, you define a model class that inherits from ActiveRecord::Base.
If the database does not follow Rails naming conventions, you will have to tell each model its table name and primary key column (see the sketch below).
You create associations in the models to describe the joins between the tables.
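As a rough sketch only (the table, column, and class names below are invented for illustration, and older Rails versions use set_table_name / set_primary_key instead of the assignment form):
class LegacyCustomer < ActiveRecord::Base
  # Point the model at a table and primary key that don't follow Rails conventions.
  self.table_name  = "CUSTOMER_MASTER"
  self.primary_key = "CUST_ID"

  # Describe the join to another legacy table.
  has_many :orders, :class_name => "LegacyOrder", :foreign_key => "CUST_ID"
end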
I would like to implement something along the lines of multi-table inheritance for my Rails application. I am familiar with how STI works and was wondering whether the implementation would be similar.
My situation is as follows (names of tables have been changed):
I have a table Employee, and an Employee has many janitors and programmers. Janitors and programmers have many different types of work utensils, so a Custodial table would fit the janitor and a Tech table would fit the programmer. The list of jobs could be endless, and the attributes for the jobs (janitors, programmers, etc.) are different, so they must be separate tables. I want to consolidate a table called Jobs that belongs under Employee. This Jobs table will have a job_type (here it can be either janitor or programmer) and a utensil_type (custodial, tech). How can I properly implement what this scenario is trying to achieve?
I know how important the type column is for STI, so I want to know how I can implement this MTI for my Rails problem.
Maybe the ActiveRecord::ActsAs gem will fit your needs: https://github.com/hzamani/active_record-acts_as
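I haven't verified this against the current version of the gem, but as far as I recall from its README the shared model declares actable and the specific models declare acts_as, so a rough sketch for your scenario might look like this (please double-check the macro names and the required migration columns against the gem's documentation):
class Employee < ActiveRecord::Base
  actable              # the employees table holds the shared attributes
end

class Janitor < ActiveRecord::Base
  acts_as :employee    # the janitors table holds only janitor-specific columns
end

class Programmer < ActiveRecord::Base
  acts_as :employee    # the programmers table holds only programmer-specific columns
end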
I'm planning on creating a model that joins some tables to be used in a CGridView. Will the CRUD generated by Gii work on this? Specifically the ability to create/update.
I've already tried generating CRUD on a MySQL view, which did not work right for create/update.
Thanks!
It is my understanding that MySQL's MyISAM engine does not support relations. However, if you create your tables with the InnoDB engine, you can define foreign keys.
Foreign keys are also supported by SQLite. The Yii blog tutorial shows an example of defining relations at the database level.
In that case, Gii dutifully creates models with the relations predefined (or rather: taken from the database).
I have a general question about rails controllers/models:
I have a model Providers, that represents the table providers in my database.
If I have SQL queries (with certain conditions) to gather information from that table, where should the SQL code be implemented: in the model or in the controller?
Generally, if you're doing anything complex, put it in the model.
It's better to create a named scope with the custom SQL in your model, of course.
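For example, a minimal sketch (the Provider class name, the scope name and the column names are all invented here purely for illustration):
class Provider < ActiveRecord::Base
  # Named scope wrapping a hand-written SQL condition.
  scope :active_in_region, lambda { |region|
    where("providers.active = ? AND providers.region = ?", true, region)
  }
end

# The controller then stays thin:
@providers = Provider.active_in_region(params[:region])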
I'm looking for the best way (i.e. architecture) to have different kinds of DBs (MySQL + MongoDB) backing the same Rails app.
I was speculating on a main Rails 3.1 app mounting Rails 3.1 engines, each linking a different kind of DB ...
... or having a main Rails 3.0.x app routing to a Sinatra endpoint for each MySQL/MongoDB instance ...
Do you think it's possible? Any ideas or suggestions?
I noticed some other similar questions here, but I think that "mounting apps" is moving fast in Rails 3.1 / Rack / Sinatra and we all need to adjust our paradigms.
Thanks in advance
Luca G. Soave
There's no need to completely over-complicate things by running two apps just to have two types of database. It sounds like you need DataMapper. It'll do exactly what you need out of the box. Get the dm-rails gem to integrate it with Rails.
In DataMapper, unlike ActiveRecord, you have to provide all the details about your underlying data store: what fields it has, how they map to the attributes in your models, what the table names are (if in a database), what backend it uses, etc.
Read the documentation... there's a bucket-load of code to give you an idea.
Each model is just a plain old Ruby object. The class definition just mixes in DataMapper::Resource, which gives you access to all of the DataMapper functionality:
class User
include DataMapper::Resource
property :id, Serial
property :username, String
property :password_hash, String
property :created_at, DateTime
end
You have a lot of control, however. For example, I can specify that this model is not stored in my default data store (repository) and that it's stored in one of the other configured data stores (which can be a NoSQL store, if you like).
class User
include DataMapper::Resource
storage_names[:some_other_repo] = 'whatever'
# ... SNIP ...
end
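To give an idea of how that second repository might be wired up and queried, here is an unverified sketch; the repository name and connection URIs are placeholders, and the exact URI format depends on which DataMapper adapter gems you install:
DataMapper.setup(:default, 'mysql://localhost/my_app_development')
DataMapper.setup(:some_other_repo, 'mongo://localhost/my_app_documents')

# Queries run against the non-default repository inside a repository block.
DataMapper.repository(:some_other_repo) do
  User.create(:username => 'alice')
  User.all
end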
Mostly DM behaves like ActiveRecord on steroids. You get all the basics, like finding records (except you never have to use the original field names if your model abstracts them away):
new_users = User.all(:created_at.gte => 1.week.ago)
You get validations, you get observers, you get aggregate handling... plus a bunch of other stuff, like strategic eager loading (which solves the n+1 query problem), lazy loading of large text/blob fields, and multiple-repository support. The query logic is much nicer than AR's, in my opinion. Just have a read of the docs; they're human-friendly, not just an API reference.
What's the downside? Well, many gems don't take into account that you might not be using ActiveRecord, so there's a bit more searching to do when you need a gem for something. This will get better over time though, since before Rails 3.x seamlessly integrating DM with Rails wasn't so easy.
I don't fully understand your question, i.e.:
what is the problem you are facing right now using Mongo and MySQL in the same app, and
what is the reason for going with multiple Rails apps dealing with different DBs?
Though I am not an expert in Ruby & Rails (I picked them up a few months ago), I would like to add something here.
I am currently building a Rails app that uses both Mongo and MySQL in the back end, with Mongoid and ActiveRecord as the drivers: MySQL for transactions and Mongo for all other kinds of data (mainly geospatial). It's quite straightforward: you can create different models, one set based on Mongoid and the other on ActiveRecord.
class Item
include Mongoid::Document
field :name, :type => String
field :category, :type => String
end
and
class User < ActiveRecord::Base
end
And you can query both in much the same way (except for complex SQL joins; Mongoid also has some additional query patterns for geospatial queries):
Item.where(:category => 'car').skip(0).limit(10)
User.where(:name => 'ram')
It's a breeze. But there are some important points you need to know:
Create your ActiveRecord models before the Mongoid models. Once Mongoid is activated (rails g mongoid:config, which adds mongoid.yml), all the scaffolding and generators work against MongoDB; otherwise you need to delete mongoid.yml every time before creating ActiveRecord models.
And don't use Mongoid in a relational way. I know Mongoid provides a lot of options to define relations; for example, belongs_to relations store the reference ids in child documents, which is quite the opposite of Mongo's DBRef. It gets really confusing when you abandon the Mongo idioms in favour of the ActiveRecord feel, so try to stick with the document nature of it and use embedding and DBRef wherever appropriate (maybe someone can correct me if I am wrong), as in the sketch below.
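For instance, keeping related data embedded rather than joined uses Mongoid's standard macros; the Review model here is just an invented illustration:
class Item
  include Mongoid::Document
  field :name, :type => String
  field :category, :type => String

  # Reviews live inside the item document itself ("the document way"),
  # instead of in a separate collection joined by a foreign key.
  embeds_many :reviews
end

class Review
  include Mongoid::Document
  field :body, :type => String
  embedded_in :item
end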
Still, Mongoid is a great piece of work. It's fully loaded with features.
Like doctrine(active record) and Xyster(data mapper),what's the difference?
The difference is in how separate your domain objects are from the data access layer. With Active Record, it's all one object, which makes it very simple, especially if your classes map one-to-one to your database tables.
Data Mapper is more flexible and easily allows your domain to be tested independently of any data-access infrastructure code, but that flexibility comes at a price in complexity.
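To make that concrete, here is a minimal, hypothetical Ruby sketch of the two patterns (not tied to Doctrine or Xyster; the Database module is a stand-in for a real connection):
# Stand-in for a real database connection, purely for illustration.
module Database
  def self.execute(sql, *params)
    puts "#{sql} -- #{params.inspect}"
  end
end

# Active Record style: the domain object knows how to persist itself.
class ArInvoice
  attr_accessor :total

  def save
    Database.execute("INSERT INTO invoices (total) VALUES (?)", total)
  end
end

# Data Mapper style: the domain object is plain Ruby; a separate mapper
# moves data between it and the data store.
class Invoice
  attr_accessor :total
end

class InvoiceMapper
  def save(invoice)
    Database.execute("INSERT INTO invoices (total) VALUES (?)", invoice.total)
  end
end

ArInvoice.new.tap { |i| i.total = 100 }.save
InvoiceMapper.new.save(Invoice.new.tap { |i| i.total = 100 })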
Like blockhead said, the difference lies in how you choose to separate Domain Objects from the Data Access Layer.
In a nutshell, Active"Record" maps an object to a record in the database.
Here, One Object = One Record.
From what I know, Data"Mapper" maps an object to data, but that data need not be a record; it could be a file as well.
Here, One Object need not be One Record
It's this way because of the goal of this pattern: to keep the in-memory representation and the persistent data store independent of each other and of the data mapper itself.
By not placing this 1 object = 1 record restriction, Data Mapper makes these two layers independent of each other.
Any suggestions/corrections to my answer are welcome, in case I was wrong somewhere.
The main difference is that in DataMapper the model is defined in the Ruby class itself:
class Post
include DataMapper::Resource
property :id, Serial
property :title, String
property :body, Text
property :created_at, DateTime
end
In ActiveRecord, on the other hand, the class is mostly an empty class and the framework scans the database. This means you need either a pre-defined database or something like migrations to generate the schema, which keeps the data model separated from the ORM. In DataMapper,
DataMapper.auto_migrate!
would generate the schema for you.
ActiveRecord is different in this regard:
class Post < ActiveRecord::Base
end
In DataMapper there is no need for migrations: auto-migration can either generate the schema or look at the differences between the models and the database and migrate for you. There is also support for manual migrations that you can use for non-trivial cases.
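For instance, these two calls are the standard DataMapper way to do it: the first rebuilds the tables to match the models (destroying data), while the second only applies additive changes:
DataMapper.auto_migrate!   # drop and recreate tables to match the models (destructive)
DataMapper.auto_upgrade!   # add missing tables/columns, keeping existing data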
DataMapper is also much more friendly to Ruby syntax, and features like lazy evaluation of chainable conditions (like ActiveRecord in Rails 3) have been there from the beginning.
DataMapper also has an identity map: every record in the database maps to exactly one Ruby object, which is not true for ActiveRecord. So if you know that two database records are the same, you know that the two references to the Ruby object will point to the same object too.
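If I remember the API correctly, the identity map applies within a repository block, so a quick (unverified) illustration would be:
DataMapper.repository do
  a = Post.get(1)
  b = Post.get(1)
  a.equal?(b)   # => true: both variables point at the very same Ruby object
end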
On the other hand, while Rails 3 may promise you exchangeable frameworks, the DataMapper railtie (dm-rails) is not production-ready and many features may not work.
See this page for more information.
I have to admit that I don't know Doctrine or Xyster, but I can at least give some insight into the difference between Active Record as implemented in Ruby and ORMs such as SubSonic, LINQ to SQL, NHibernate and Telerik's. Hopefully it will at least give you something to explore further.
Ruby's Active Record is its native data access library - it is not a mapping from an existing SQL interface library (e.g. .NET SqlDataTables) into the constructs of the language - it is the interface library. This gave the designers more latitude to build the library in a more integrated manner, but it also required that they implement a broad range of SQL tools that you won't typically find in an ORM (e.g. DDL commands are part of Ruby's Active Record interface).
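For example, schema changes (DDL) are expressed in Ruby through the same library as ordinary migrations; the table and columns here are invented, and the class-method style shown is the older Rails form:
class CreateWidgets < ActiveRecord::Migration
  def self.up
    # Emits CREATE TABLE DDL against the configured database.
    create_table :widgets do |t|
      t.string :name
      t.timestamps
    end
  end

  def self.down
    drop_table :widgets
  end
end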
ORMs are mapped to the underlying database structure using a manual step in which a code generator opens a database and scans through it, building objects corresponding to the tables (and stored procedures) that it finds. These objects are constructed using the low-level SQL programming constructs offered as part of the language (e.g. the .NET System.Data.Sql and SqlClient libraries). The objective here is to give record-oriented, relational databases a smoother, more fluent interface while you are programming: to reduce the "impedance mismatch" between the relational model and object-oriented programming.
As a side note, Microsoft has taken a very "Active Record-like" step in building native language constructs into C# via LINQ to SQL and LINQ to Entities.
Hope this helps!