I'm trying to build a page that ranks users by how many views their profile has. Since that page might get a lot of hits, I cache the ordered users and invalidate that cache each time a profile gets a view. With only 9 users and ~200 page views a day, my app exceeded Heroku's memory cap of 512 MB. New Relic confirmed this, showing that the user listing was taking an inordinate amount of time:
Slowest Components        Duration   %
--------------------------------------
UsersController#index     2,125 ms   80%
users/index.html.erb        506 ms   19%
Memcache get                 20 ms    1%
User#find                    18 ms    1%
layouts/_header.html.erb      1 ms    0%
User#find_by_sql              0 ms    0%
Some reading told me that ActiveRecord apparently does not hand memory back to the OS after a request. Looking at UsersController#index, I can see how this might cause a problem if the memory allocated by User.order were never freed.
UsersController#index:

require 'will_paginate/array'

class UsersController < ApplicationController
  PER_PAGE = 20

  def index
    @users = Rails.cache.read("users")
    if @users.nil?
      # .all is used to force the query to happen now, so that the result set is cached instead of the query
      @users = User.order("views DESC").all
      Rails.cache.write("users", @users)
    end
    @users = @users.paginate(page: params[:page], per_page: PER_PAGE)
    if params[:page].nil? || params[:page] == "1"
      @rank = 1
    else
      @title = "Page #{params[:page]}"
      @rank = (params[:page].to_i - 1) * PER_PAGE + 1
    end
  end
end
index.html.erb:

<% @users.each do |user| %>
  <%= image_tag user.profile_picture, alt: user.name %>
  <h3><%= @rank %></h3>
  <p><%= user.name %></p>
  <p><%= user.views %></p>
  <% @rank += 1 %>
<% end %>
<%= will_paginate %>
I don't know how I'm supposed to get around this, though. I thought of pulling only one page of users into memory at a time (sketched below), but with only 9 users that wouldn't really change anything, since a maximum of 20 are supposed to be listed per page anyway. Do I have to manually clear @users from memory after every request, or is my approach in UsersController#index just wrong?
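For reference, here is roughly what I mean by one page at a time. I believe will_paginate also works directly on ActiveRecord scopes (translating to LIMIT/OFFSET), so only a single page of rows would ever be loaded:

def index
  # Paginate at the database level instead of caching the whole ordered list;
  # only PER_PAGE rows are fetched per request.
  @users = User.order("views DESC").paginate(page: params[:page], per_page: PER_PAGE)
end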
Any help would be greatly appreciated.
Related
I'm building an app which has group > posts > comments.
To reduce the number of SQL requests, I'm using the includes method:

# group controller
def show
  @posts = @group.posts.includes(:comments)
end

Now I would like to paginate the comments, but I don't know how to use the .paginate function from the will_paginate gem.
Do you have any tips for that?
Issue with pagination inside controller
Since you want paginated comments associated with each individual post, this is complicated to achieve in the controller: you would need to create N paginated comment objects (say you have N posts).
What you can do:
1. Load posts as usual with comments included, to avoid excess queries, but don't call paginate here.
2. Achieve pagination inside the view only (one caveat is sketched after the example):
<% @posts.each do |post| %>
  <% comments = post.comments.paginate(page: params[:page]) %>
  ...
  <%= will_paginate comments %>
<% end %>
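One caveat: with a single params[:page], paging one post's comments pages them all at once. A sketch of a workaround follows; the per-post parameter name and comment.body are my own placeholders, though param_name itself is a real will_paginate option:

<% @posts.each do |post| %>
  <%# Give each post its own page parameter so paging one post's
      comments doesn't page every other post's comments too. %>
  <% page_param = "comments_page_#{post.id}" %>
  <% comments = post.comments.paginate(page: params[page_param], per_page: 10) %>
  <% comments.each do |comment| %>
    <p><%= comment.body %></p>
  <% end %>
  <%= will_paginate comments, param_name: page_param %>
<% end %>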
On the posts index page I list all posts this way:

posts_controller.rb

def index
  @posts = Post.includes(:comments).paginate(:page => params[:page]).order("created_at DESC")
end
index.html.erb
<%= render #posts %>
_post.html.erb
<%= gravatar_for post.user, size:20 %>
<%= link_to "#{post.title}", post_path(post) %>
<%= time_ago_in_words(post.created_at) %>
<%= post.comments.count %>
<%= post.category.name if post.category %>
There are 35 posts per page. When I first load the page in the dev environment, rack-mini-profiler shows 1441.1 ms; after a few reloads it drops to ~700 ms.
Can I somehow decrease this time and the number of SQL requests?
You could decrease the number of SQL queries by:
including user as well as comments (sketched below), since you seem to be using it when displaying the gravatar
changing post.comments.count to post.comments.size
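Concretely, the first suggestion would look something like this (assuming Post belongs_to :user, given the gravatar call):

def index
  # Eager load both associations used by the partial in one pass
  @posts = Post.includes(:user, :comments).paginate(:page => params[:page]).order("created_at DESC")
end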
While size, count and length are synonymous for arrays, they are not the same for ActiveRecord relations or associations:
length loads the association (unless it is already loaded) and returns the length of the array
count does a select count(*) query whether the association is loaded or not
size uses length if the association is loaded and count if not.
In your case the comments association is loaded, but because you are using count, the loaded data isn't actually used.
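A quick illustration of the difference (a hypothetical console session against the models above):

posts = Post.includes(:comments).to_a  # comments are eager loaded here

posts.first.comments.length  # no query: returns the loaded array's length
posts.first.comments.size    # no query: the association is loaded, so size delegates to length
posts.first.comments.count   # fires SELECT COUNT(*) even though the data is already loaded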
Further, you don't actually seem to be using the comments collection for anything other than printing the number of records. If that's indeed the case, use a counter_cache (section 4.1.2.3 of the Rails associations guide) instead of querying for the comments; the number of comments will then be available on the parent Post record.
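A minimal counter_cache sketch; the migration name and backfill are assumptions following Rails conventions:

# app/models/comment.rb
class Comment < ActiveRecord::Base
  belongs_to :post, :counter_cache => true  # keeps posts.comments_count up to date
end

# db/migrate/xxx_add_comments_count_to_posts.rb
class AddCommentsCountToPosts < ActiveRecord::Migration
  def up
    add_column :posts, :comments_count, :integer, :default => 0
    Post.reset_column_information
    # Backfill the counter for existing posts
    Post.find_each { |post| Post.reset_counters(post.id, :comments) }
  end

  def down
    remove_column :posts, :comments_count
  end
end

With that in place, post.comments.size in the partial reads the cached column and issues no query.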
Also consider a client-side alternative to time_ago_in_words. It will also help if you later decide to cache the entire section/page.
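The usual pattern is to render a machine-readable timestamp and let JavaScript format the relative time; the data attribute and fallback format below are placeholders, not a specific library's API:

<%# _post.html.erb: emit a raw timestamp instead of pre-rendered relative time %>
<time datetime="<%= post.created_at.iso8601 %>" data-relative-time>
  <%= post.created_at.strftime("%b %d, %Y") %>
</time>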
And finally, retrieve only the fields you're going to use. In this case I can imagine Post contains a large amount of text for the content, which isn't used anywhere here but still has to be transmitted from the DB.
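For example (the column list is an assumption about your schema; select applies cleanly here because includes loads the comments in a separate query):

def index
  @posts = Post.select("id, title, created_at, user_id, category_id")
               .includes(:user, :comments)
               .paginate(:page => params[:page])
               .order("created_at DESC")
end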
Adding an index on the foreign key column (comments.post_id in your case) might also help:
add_index :comments, :post_id
I have set up a cache in my model like this:
def self.latest(shop_id)
Inventory.where(:shop_id => shop_id).order(:updated_at).last
end
and in my view:

<% cache ['inventories', Inventory.latest(session[:shop_id])] do %>
  <% @inventories.each do |inventory| %>
    <% cache ['entry', inventory] do %>
      <li><%= link_to inventory.item_name, inventory %></li>
    <% end %>
  <% end %>
<% end %>
So, here I can have many shops, each with an inventory of stock items. Will the above cache work at all for different shops?
I think it's possible that even displaying the view in a different shop will break the cache. Or, any shop adding an inventory item will break the cache.
Can I use Russian Doll caching like this or do I need to use Inventory.all in my model?
Your idea is close, but you need to include the shop_id, the count, and the maximum updated_at of each shop's inventory in your cache key. Your outer cache needs to get busted when a shop's item gets deleted too, and that isn't covered by a max id or updated_at alone.
You can expand your custom cache key helper method to make this work. This lets you create unique top-level caches that only get busted when a member of that set is added, updated or deleted. In effect, it gives each shop_id its own outer cache, so when one shop's inventory changes, it doesn't affect another shop's cache.
Here is an example, based on ideas in the edge rails documentation:
module InventoriesHelper
def cache_key_for_inventories(shop_id)
count = Inventory.where(:shop_id => shop_id).count
max_updated_at = Inventory.where(:shop_id => shop_id).maximum(:updated_at).try(:utc).try(:to_s, :number)
"inventories/#{shop_id}-#{count}-#{max_updated_at}"
end
end
Then in your view:
<% cache(cache_key_for_inventories(session[:shop_id])) do %>
...
<% end %>
I currently have a very inefficient view partial that lists groups of users by their predicted score.

group controller:

def show
  @sfs_ordered = ScoreFootballSimple.order("home_score DESC, away_score ASC")
  @live_games = Game.find(:all, :conditions => ['kickoff < ? AND completed != true AND game_state IS NOT NULL', Time.now])
end
group#show (relevant section):

<% @live_games.each do |game| %>
  <% @sfs_ordered.each do |sfs| %>
    <% got_this_score = Array.new %>
    <% game_points = nil %>
    <% @group.members.each do |member| %>
      <% if pred = member.prediction_set.predictions.where('game_id = ?', game.id).first %>
        <% game_points = pred.points if !pred.points.nil? && pred.score_type == sfs.score_type %>
        <% got_this_score << member.user.user_detail.display_name if pred.score_type == sfs.score_type %>
      <% end %>
    <% end %>
    <% if got_this_score.count > 0 %>
      <tr>
        <td><%= sfs.home_score %>-<%= sfs.away_score %></td>
        <td>
          <% if !game_points.nil? %>
            <div class="preds-show-points-div"><%= game_points %>pts</div>
          <% else %>
            -
          <% end %>
        </td>
        <td><%= got_this_score.to_sentence %></td>
      </tr>
    <% end %>
  <% end %>
<% end %>
Obviously this is loops within loops: for every record in @sfs_ordered (around 50), it iterates over every group member (about 5,000 for the largest group), which means the page takes seconds to load.
Don't flame me: this was the proof of concept to show how it could look, but it has exposed my lack of ability with ActiveRecord. I could go about building hashes of users, prediction sets, etc., but I wondered if anyone could point me to a better way of selecting this information with more precision using Rails queries.
The entity relationship is something like this
Group has many Member
Member belongs to User and PredictionSet
PredictionSet has many Prediction
Prediction belongs to Game and ScoreType
ScoreTypeSimple has one ScoreType
The predicted score is in ScoreTypeSimple; this is how I want the list organised, e.g.:
1:1 - Joe, Fred and Jane
1:0 - Sue, Dave and Helen
Then I want to grab the Group.member.user.name for the Group.member.prediction_set.prediction where prediction.game.id == game.id AND score_type == sfs.score_type.
I know I could improve this with pure SQL joins and INs and by building hashes, but I wondered if anyone could give me pointers on a more efficient Rails/Ruby way of doing this. I know the answer is probably out there in a lambda somewhere, but my ActiveRecord knowledge is stretched to its limit here!
Any help gratefully received.
Peter
You might be able to benefit from eager loading, since you're displaying all of some associated objects. This is done using the .includes method on a relation. For instance, your view works with all the members of a group, so when you fetch the group from the database (I don't see the line that does this in your code), it probably looks something like:
#group = Group.where('some condition').first
If you instead use this:
#group = Group.includes(:members => [:user]).where('some condition').first
Then loading the group, all its members, and all their user objects takes 3 database queries, rather than (in your extreme case of 5,000 members) 10,001.
I'd say this is, at best, a small part of your solution, but it may help.
Edit: Here is the RailsCast on eager loading, and there are some docs about halfway down this page.
I have a Ruby on Rails 3 project in which I query for a certain number of objects using .limit(3). Then, in my view, I loop through these objects. After that, if there are 3 objects in the view, I display a "load more" button. Here is the view code:
<% @objects.each do |object| %>
  <%= render object._type.pluralize.underscore + '/teaser', :object => object %>
<% end %>

<% if @objects.size(true) == 3 %>
  <%# load more link here %>
<% end %>
The size(true) is passed a boolean to ensure that MongoID takes the .limit and .offset on my query into account (otherwise it returns the total number of objects that matched, regardless of the limit/offset). Here are the relevant development log lines:
MONGODB project_development['system.indexes'].insert([{:name=>"_public_id_1", :ns=>"project_development.objects", :key=>{"_public_id"=>1}, :unique=>true}])
MONGODB project_development['objects'].find({:deleted_at=>{"$exists"=>false}}).limit(3).sort([[:created_at, :desc]])
#some rendering of views
MONGODB project_development['system.indexes'].insert([{:name=>"_public_id_1", :ns=>"project_development.objects", :key=>{"_public_id"=>1}, :unique=>true}])
MONGODB project_development['$cmd'].find({"count"=>"objects", "query"=>{:deleted_at=>{"$exists"=>false}}, "limit"=>3, "fields"=>nil})
My question is: does MongoID do a separate query for my @objects.size(true)? I imagine the ['$cmd'] line might indicate that it does, but I'm not sure.
I don't think so; there was a pull request a month ago to add aliases for :size and :length to :count to avoid re-running queries. You can check that.
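If you want to avoid the extra count command entirely, one option is to materialize the criteria into an array so that size becomes plain Array#size. A sketch, where Item is a placeholder for whatever model maps to the objects collection:

# to_a executes the find once and returns plain Ruby objects;
# after that, @objects.size is Array#size and issues no $cmd count.
@objects = Item.where(:deleted_at => {"$exists" => false})
               .desc(:created_at)
               .limit(3)
               .to_a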