Rails .joins doesn't load the association - ruby-on-rails-3

Hello,
My query:
@county = County.joins(:state)
                .where("counties.slug = ? AND states.slug = ?", params[:county_slug])
                .select('states.*, counties.*')
                .first!
From the log, the SQL looks like this:
SELECT states.*, counties.* FROM "counties" INNER JOIN "states" ON "states"."id" = "counties"."state_id" LIMIT 1
My problem is that it doesn't eager load the data from the associated table (states): when I then call, for example, @county.state.name, it runs another query, even though, as you can see from the log, it had already queried the database for that table's data as well. It just doesn't pre-populate @county.state.
Any idea how I can get all the data from the database in just ONE query?
Thx

I think you need to use includes instead of joins to get the eager loading. There's a good RailsCasts episode about the differences: http://railscasts.com/episodes/181-include-vs-joins , in particular:
The question we need to ask is “are we using any of the related model’s attributes?” In our case the answer is “yes” as we’re showing the user’s name against each comment. This means that we want to get the users at the same time as we retrieve the comments and so we should be using include here.
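A minimal sketch of that change for this particular query. Two assumptions worth flagging: the original where string has two placeholders but binds only one value, so params[:state_slug] below is an assumed second bind; and the manual select can be dropped, since eager loading builds its own column list. On Rails 4+ you would also add .references(:states), because the condition is a raw SQL string.
  @county = County.includes(:state)
                  .where("counties.slug = ? AND states.slug = ?",
                         params[:county_slug], params[:state_slug])
                  .first!

  @county.state.name  # no extra query here; the state row came back with the county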

Related

Complex SQL Query in Rails 4

I have a complicated query I need for a scope in my Rails app and have tried a lot of things with no luck. I've resorted to raw SQL via find_by_sql but wondering if any gurus wanted to take a shot. I will simplify the verbiage a bit for clarity, but the problem should be stated accurately.
I have Users. Users own many Records. One of them is marked current (is_current = true) and the rest are not. Each CoiRecord has many Relationships. Relationships have a value for when they were active (active_when), which takes one of four values, [1..4].
Values 1 and 2 are considered recent. Values 3 and 4 are not.
The problem was ultimately to have scopes (has_recent_relationships and has_no_recent_relationships) on User that filter on whether or not they have recent Relationships on their current Record. (Old Records are irrelevant for this.) I tried creating recent and not_recent scopes on Relationship, then building the scopes on Record on top of those, combined with checking for is_current == 1. That is where I failed. I have to move on with the app, so I have no choice but to use raw SQL for now, hoping to revisit this later. I put that on User, the only context where I really need it, and set aside the code for the scopes on the other objects.
The SQL that works, that correctly finds the Users who have recent relationships, is below. The other scope just uses "= 0" instead of "> 0" in the HAVING clause.
SELECT * FROM users WHERE `users`.`id` IN (
  SELECT records.owner_id
  FROM `coi_records`
  LEFT OUTER JOIN `relationships` ON `relationships`.`record_id` = `records`.`id`
  WHERE `records`.`is_current` = 1
  HAVING (
    SELECT count(*)
    FROM relationships
    WHERE ((record_id = records.id) AND ((active_when = 1) OR (active_when = 2)))
  ) > 0
)
My instincts tell me this is complicated enough that my modeling probably could be redesigned and simplified, but the individual objects are pretty simple, just getting at this specific data from two objects away has become complicated.
Anyway, I'd appreciate any thoughts. I'm not expecting a full solution because, ick. Just thought the masochists among you might find this amusing.
Have you tried using Arel directly and this website?
Just copying and pasting your query, you get this:
User.select(Arel.star).where(
  User.arel_table[:id].in(
    Relationship.select(Arel.star.count).where(
      Arel::Nodes::Group.new(
        Relationship.arel_table[:record_id].eq(Record.arel_table[:id]).and(
          Relationship.arel_table[:active_when].eq(1).or(Relationship.arel_table[:active_when].eq(2))
        )
      )
    ).joins(
      CoiRecord.arel_table.join(Relationship.arel_table, Arel::Nodes::OuterJoin).on(
        Relationship.arel_table[:record_id].eq(Record.arel_table[:id])
      ).join_sources
    ).ast
  )
)
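One small, hedged sanity check that helps with this kind of hand translation: call .to_sql on the relation and compare it with the raw SQL you started from. A trivial example of the pattern (assuming the usual Rails console):
  # .to_sql shows what ActiveRecord/Arel will actually send to the database,
  # which is handy for checking a hand-translated query like the one above.
  sql = User.where(User.arel_table[:id].in([1, 2, 3])).to_sql
  puts sql  # e.g. SELECT "users".* FROM "users" WHERE "users"."id" IN (1, 2, 3)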
I managed to find a way to create what I needed, which returns ActiveRecord relations and so simplifies a lot of other code. Here's what I came up with. This might not scale well, but this app will probably never end up with enough data for that to be a problem.
I created two scope methods. The second depends on the first to simplify things:
def self.has_recent_relationships
  joins(records_owned: :relationships)
    .merge(Record.current)
    .where("(active_when = 1) OR (active_when = 2)")
    .distinct
end

def self.has_no_recent_relationships
  users_with_recent_relationships = User.has_recent_relationships.pluck(:id)
  if users_with_recent_relationships.length == 0
    User.all
  else
    User.where("id not in (?)", users_with_recent_relationships.to_a)
  end
end
The first finds Users with recent relationships by joining through Records to Relationships, merging with a scope that selects current records (there should be only one), and looking for the correct active_when values. Easy enough.
The second method first finds the Users who DO have recent relationships (using the first method). If there are none, then all Users are in the set of those with no recent relationships, and I return User.all (this will really never happen in the wild, but in theory it could). Otherwise I return the inverse of those who do have recent relationships, using the SQL keywords NOT IN and an array. It's this part that could be non-performant if the array gets large, but I'm going with it for the moment.
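If that array ever does get large, one hedged alternative (a sketch, assuming Rails 4's where.not and the scope above) is to keep the exclusion in the database as a subquery instead of plucking the ids into Ruby first:
  def self.has_no_recent_relationships
    # Sketch: generates WHERE users.id NOT IN (SELECT users.id FROM ...),
    # so the id list never has to be materialised in Ruby.
    where.not(id: User.has_recent_relationships.select("users.id"))
  end
When no one has recent relationships, the subquery is empty and NOT IN matches every user, which mirrors the User.all branch above.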

ActiveRecord: can't use `pluck` after `where` clause with eager-loaded associations

I have an app that has a number of Post models, each of which belongs_to a User model. When these posts are published, a PublishedPost model is created that belongs_to the relevant Post model.
I'm trying to build an ActiveRecord query to find published posts that match a user name, then get the ids of those published posts, but I'm getting an error when I try to use the pluck method after eager-loading my associations and searching them with the where method.
Here's (part of) my controller:
class PublishedPostsController < ApplicationController
  def index
    ar_query = PublishedPost.order("published_posts.created_at DESC")
    if params[:searchQuery].present?
      search_query = params[:searchQuery]
      ar_query = ar_query.includes(:post => :user)
                         .where("users.name like ?", "%#{search_query}%")
    end
    @found_ids = ar_query.pluck(:id)
    ...
  end
end
When the pluck method is called, I get this:
ActiveRecord::StatementInvalid: Mysql2::Error: Unknown column 'users.name' in 'where clause': SELECT id FROM `published_posts` WHERE (users.name like '%Andrew%') ORDER BY published_posts.created_at DESC
I can get the results I'm looking for with
@found_ids = ar_query.select(:id).map{|r| r.id}
but I'd rather use pluck as it seems like the cleaner way to go. I can't figure out why it's not working, though. Any ideas?
You need to (and should) do joins instead of includes here.
The two functions are pretty similar except that the data from joins is not returned in the result of the query whereas the data in an includes is.
In that respect, includes and pluck are kind of antithetical. One says to return all the data you possibly can, whereas the other says to give me only this one little bit.
Since you only want a small amount of the data, you want to do joins. (Strangely, select, which also seems somewhat antithetical, still works, but you would need to remove the ambiguity over id in this case.)
Try it out in the console and you'll see that includes causes a query that looks kind of like this: SELECT "posts"."id" AS t0_r0, "posts"."text" AS t0_r1, "users"."id" AS t1_r0, "users"."name" AS t1_r1 ... When you tack on a pluck statement, all those crazy tx_ry columns go away and are replaced by whatever you specified.
I hope that helps, but if not maybe this RailsCast can. It is explained around the 5 minute mark.
http://railscasts.com/episodes/181-include-vs-joins
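Putting that together, a minimal sketch of the joins version for this controller (reusing search_query from above, and qualifying id so it isn't ambiguous across the joined tables):
  @found_ids = PublishedPost.order("published_posts.created_at DESC")
                            .joins(:post => :user)
                            .where("users.name like ?", "%#{search_query}%")
                            .pluck("published_posts.id")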
If you got here by searching "rails pluck ambiguous column", you may want to know you can just replace query.pluck(:id) with:
query.pluck("table_name.id")
Your query wouldn't work as written, even without the pluck call.
The reason is that your WHERE clause includes literal SQL referencing the users table, which Rails doesn't notice, so it decides to run multiple queries and join them in memory ( .preload() ) instead of joining at the database level ( .eager_load() ):
SELECT * from published_posts WHERE users.name like "pattern" ORDER BY published_posts.created_at DESC
SELECT * from posts WHERE id IN ( a_list_of_all_post_ids_in_published_posts_returned_above )
SELECT * from users WHERE id IN ( a_list_of_all_user_ids_in_posts_returned_above )
The first of the 3 queries fails, and that is the error you get.
To force Rails to use a JOIN here, you should either use the explicit .eager_load() instead of .includes(), or add a .references() clause.
Other than that, what @Geoff answered stands: you don't really need .includes() here, but rather .joins().
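A minimal sketch of those two options, assuming Rails 4+ where .references() and .eager_load() are available:
  # Option 1: keep .includes(), but tell Rails the raw SQL references users
  ar_query = ar_query.includes(:post => :user)
                     .references(:users)
                     .where("users.name like ?", "%#{search_query}%")

  # Option 2: request the LEFT OUTER JOIN explicitly
  ar_query = ar_query.eager_load(:post => :user)
                     .where("users.name like ?", "%#{search_query}%")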

Counting occurrences of each distinct element in a table

I am writing a log viewer app in ASP.NET / C#. There is a report window where it will be possible to check some information about the whole database. One kind of information I want to display on the screen there is the number of times each generator (an entity in my domain, not Firebird's sequence) appears in the table. How do I do that using COUNT?
Do I have to :
Gather the key for each different generator
Run one query for each generator key using count
Display it somehow
Is there any way that I can do it without having to do two queries to the database? The database size can be HUGE, and having to query it "X" times where "X" is the number of generators would just suck.
I am using a Firebird database, is there any way to fetch this information from any metadata schema or there is no such thing available?
Basically, what I want is to count each occurrence of each generator in the table. The result would be something like: GENERATOR A: 10 times, GENERATOR B: 7 times, GENERATOR C: 0 times, and so on.
If I understand your question correctly, it is a simple matter of using the GROUP BY clause, e.g.:
select
  key,
  count(*)
from generators
group by key;
Something like the query below should be sufficient (depending on your exact structure and requirements)
SELECT KEY, COUNT(*)
FROM YOUR_TABLE
GROUP BY KEY
I solved my problem using this simple Query:
SELECT GENERATOR_, count(*)
FROM EVENTSGENERAL
GROUP BY GENERATOR_;
Thanks to those who helped me.
It took me 8 hours to come back and post the answer because of the Stack Overflow restriction on answering my own questions, which is based on my reputation.

Rails Query Issue

I have photos which have_many comments.
I want to select whatever photos have recent comments and display those photos in a kind of "timeline" where the most recently commented photo is at the top and other photos fall below.
I tried this, and it worked on SQLite:
@photos = Photo.select('DISTINCT photos.*').joins(:comments).order('comments.created_at DESC')
However testing on PostgreSQL raises this error:
PGError: ERROR: for SELECT DISTINCT, ORDER BY expressions must appear in select list
: SELECT DISTINCT photos.* FROM "photos" INNER JOIN "comments" ON ...
So, the problem is that I'm selecting Photos but ordering by the recency of comments... and Postgres doesn't like that.
Can anyone suggest either:
A: How I can fix this query...
or
B: A different way to retrieve photos by the recency of their comments?
The important reason I'm doing it this way instead of through the comments model is I want to show each photo once with any recent comments beside it, not show each comment by itself with the same photos appearing multiple times.
Thanks!
Check out the :touch parameter of the belongs_to association:
:touch
If true, the associated object will be touched (the updated_at/on attributes set to now) when this record is either saved or destroyed. If you specify a symbol, that attribute will be updated with the current time instead of the updated_at/on attribute.
http://api.rubyonrails.org/classes/ActiveRecord/Associations/ClassMethods.html#method-i-belongs_to
In your Comment model, therefore, you would have:
belongs_to :photo, :touch => :comments_updated_at
Now, in order to create a time line of photos with recently updated comments all you need to do is:
Photo.order('comments_updated_at DESC').all
Just be sure to add the "comments_updated_at" datetime field to your Photo model.
Make sense?
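A minimal sketch of that last step as a migration; the only requirement is that the column name matches the :touch symbol used above:
  class AddCommentsUpdatedAtToPhotos < ActiveRecord::Migration
    def change
      # Stamped automatically whenever a comment for the photo is saved or destroyed
      add_column :photos, :comments_updated_at, :datetime
    end
  end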
Just for the future readers of this question, the real answer to your SQL issue in SQLite vs PostgreSQL is that, per the SQL standard, a SELECT DISTINCT query may only ORDER BY expressions that appear in its select list (much as, with GROUP BY, every selected column must be grouped or be an aggregate function).
https://www.techonthenet.com/sql/group_by.php (or whatever SQL reference you want to take a look at)
Your query selected DISTINCT photos.* but ordered by comments.created_at, which is not in that select list. That would have blown up with a similar error on most databases like PostgreSQL (MySQL, MariaDB, probably MSSQL Server too). It's definitely invalid SQL grammar for a lot of good reasons.
Under the hood, I have no clue what SQLite does; maybe it quietly drags the extra column along so it can sort on it. Either way, it's not a good SQL statement, which is why PostgreSQL threw the error.
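And for readers still after option A, a fix to the query itself, here is one hedged sketch (assuming PostgreSQL and the photos/comments associations above; not tested against this schema): group by photo and order by the newest comment timestamp instead of leaning on DISTINCT:
  @photos = Photo.joins(:comments)
                 .select('photos.*, MAX(comments.created_at) AS latest_comment_at')
                 .group('photos.id')
                 .order('latest_comment_at DESC')
Recent PostgreSQL versions accept photos.* here because photos.id is the primary key, so the remaining photo columns are functionally dependent on the GROUP BY column.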

Django - finding the extreme member of each group

I've been playing around with the new aggregation functionality in the Django ORM, and there's a class of problem I think should be possible, but I can't seem to get it to work. The type of query I'm trying to generate is described here.
So, let's say I have the following models -
class ContactGroup(models.Model):
    .... whatever ....

class Contact(models.Model):
    group = models.ForeignKey(ContactGroup)
    name = models.CharField(max_length=20)
    email = models.EmailField()
    ...

class Record(models.Model):
    contact = models.ForeignKey(Contact)
    group = models.ForeignKey(ContactGroup)
    record_date = models.DateTimeField(default=datetime.datetime.now)
    ... name, email, and other fields that are in Contact ...
So, each time a Contact is created or modified, a new Record is created that saves the information as it appears in the contact at that time, along with a timestamp. Now, I want a query that, for example, returns the most recent Record instance for every Contact associated to a ContactGroup. In pseudo-code:
group = ContactGroup.objects.get(...)
records_i_want = group.record_set.most_recent_record_for_every_contact()
Once I get this figured out, I just want to be able to throw a filter(record_date__lt=some_date) on the queryset, and get the information as it existed at some_date.
Anybody have any ideas?
edit: It seems I'm not really making myself clear. Using models like these, I want a way to do the following with the pure Django ORM (no extra()):
ContactGroup.record_set.extra(where=["history_date = (select max(history_date) from app_record r where r.id=app_record.id and r.history_date <= '2009-07-18')"])
Putting the subquery in the where clause is only one strategy for solving this problem, the others are pretty well covered by the first link I gave above. I know where-clause subselects are not possible without using extra(), but I thought perhaps one of the other ways was made possible by the new aggregation features.
It sounds like you want to keep records of changes to objects in Django.
Pro Django has a section in chapter 11 (Enhancing Applications) in which the author shows how to create a model that uses another model as a client which it tracks for inserts/deletes/updates. The model is generated dynamically from the client definition and relies on signals. The code shows a most_recent() function, but you could adapt this to obtain the object's state on a particular date.
I assume it is the tracking in Django that is problematic, not the SQL to obtain this, right?
First of all, I'll point out that:
ContactGroup.record_set.extra(where=["history_date = (select max(history_date) from app_record r where r.id=app_record.id and r.history_date <= '2009-07-18')"])
will not get you the same effect as:
records_i_want = group.record_set.most_recent_record_for_every_contact()
The first query returns every record associated with a particular group (or associated with any of the contacts of a particular group) that has a record_date less than the date/time specified in the extra. Run this in the shell and then do this to review the query Django created:
from django.db import connection
connection.queries[-1]
which reveals:
'SELECT "contacts_record"."id", "contacts_record"."contact_id", "contacts_record"."group_id", "contacts_record"."record_date", "contacts_record"."name", "contacts_record"."email" FROM "contacts_record" WHERE "contacts_record"."group_id" = 1 AND record_date = (select max(record_date) from contacts_record r where r.id=contacts_record.id and r.record_date <= \'2009-07-18\')'
Not exactly what you want, right?
Now, the aggregation feature is used to retrieve aggregated data, not the objects associated with aggregated data. So if you're trying to minimize the number of queries executed by using aggregation to obtain group.record_set.most_recent_record_for_every_contact(), you won't succeed.
Without using aggregation, you can get the most recent record for all contacts associated with a group using:
[x.record_set.all().order_by('-record_date')[0] for x in group.contact_set.all()]
Using aggregation, the closest I could get to that was:
group.record_set.values('contact').annotate(latest_date=Max('record_date'))
The latter returns a list of dictionaries like:
[{'contact': 1, 'latest_date': somedate }, {'contact': 2, 'latest_date': somedate }]
So there is one entry for each contact in a given group, together with the latest record date associated with it.
Anyway, the minimum query count is probably 1 + the number of contacts in a group. If you are interested in obtaining the result using a single query, that is also possible, but you'd have to construct your models in a different way. That's a totally different aspect of your problem.
I hope this helps you understand how to approach the problem using aggregation / the regular ORM functions.