I'm trying to use the following inside my model:
create!(
:title => entry.title,
:link => entry.url,
:published_date => entry.published,
:entry_id => entry.id,
:category => thing,
:author => entry.author,
:user_id => user.id
)
This fails with Mysql2::Error: Duplicate entry '0' for key 'PRIMARY' when adding anything past the first entry, since the id column is being set to 0. Is there a way to auto-increment the id using the above code?
Thanks
You should never need to manually specify the ID when creating new instances; Rails will automatically create the auto-incrementing column to handle generating unique IDs for you.
In this case, if you have tampered with the ID column and changed its type, the easiest way to reset this is to simply recreate the table.
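For example, a rough sketch of a migration that rebuilds the table; the table name and column types are guesses based on the attributes in the question, and create_table gives you an auto-incrementing integer primary key by default:
class RecreateEntries < ActiveRecord::Migration
  def self.up
    drop_table :entries
    create_table :entries do |t|   # id is generated and auto-increments by default
      t.string   :title
      t.string   :link
      t.datetime :published_date
      t.string   :entry_id
      t.string   :category
      t.string   :author
      t.integer  :user_id
    end
  end
end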
We are calculating statistics for our client. Statistics are calculated for each SpecialtyLevel, and each statistic can have a number of error flags (not to be confused with validation errors). Here are the relationships (all the classes below are nested inside multiple modules, which I have omitted here for simplicity):
class SpecialtyLevel < ActiveRecord::Base
has_many :stats,
  :class_name  => "Specialties::Aggregate::Stat",
  :foreign_key => "specialty_level_id"
.......
end
class Stat < Surveys::Stat
belongs_to :specialty_level
has_many :stat_flags,
:class_name => "Surveys::PhysicianCompensation::Stats::Specialties::Aggregate::StatFlag",
:foreign_key => "stat_id"
......
end
class StatFlag < Surveys::Stats::StatFlag
belongs_to :stat, :class_name => "Surveys::PhysicianCompensation::Stats::Specialties::Aggregate::Stat"
......
end
In the view, we display one row for each SpecialtyLevel, with one column for each Stat and another column indicating whether or not there are any error flags for that SpecialtyLevel. The client wants to be able to sort the table by the number of error flags. To achieve this, I've created a scope in the SpecialtyLevel class:
scope :with_flag_counts,
select("#{self.columns_with_table_name.join(', ')}, count(stat_flags.id) as stat_flags_count").
joins("INNER JOIN #{Specialties::Aggregate::Stat.table_name} stats on stats.specialty_level_id = #{self.table_name}.id
LEFT OUTER JOIN #{Specialties::Aggregate::StatFlag.table_name} stat_flags on stat_flags.stat_id = stats.id"
).
group(self.columns_with_table_name.join(', '))
Now each row returned from the database will have a stat_flags_count field that I can sort by. This works fine, but I run into a problem when I try to paginate using this code:
def always_show_results_count_will_paginate objects, options = {}
if objects.total_entries <= objects.per_page
content_tag(:div, content_tag(:span, "Showing 0-#{objects.total_entries} of #{objects.total_entries}", :class => 'info-text'))
else
sc_will_paginate objects, options
end
end
For some reason, objects.total_entries returns 1. It seems that something in my scope causes Rails to do some really funky stuff with the result set that it gives me.
The question is, is there another method I can use to return the correct value? Or is there a way that I can adjust my scope to prevent this meddling from occurring?
The group statement makes me suspicious. You may want to fire up a debugger and step through the code and see what's actually getting returned.
Is there a special reason you're using a scope and not just an attribute on the SpecialtyLevel model? Couldn't you just add a def on SpecialtyLevel that would function as a "virtual attribute" that just returns the length of the list of StatFlags?
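If it helps, here's a minimal sketch of that virtual-attribute idea; note that it counts in Ruby per record, so the database can't ORDER BY it, which is what the scope in the question provides:
def stat_flags_count
  stats.inject(0) { |sum, stat| sum + stat.stat_flags.size }
end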
The answer here is to calculate total_entries separately and pass that into the paginate method, for example:
count = SpecialtyLevel.for_participant(@participant).count
@models = SpecialtyLevel.
  with_flag_counts.
  for_participant(@participant).
  paginate(:per_page => 10, :page => page, :total_entries => count)
I'm fairly new to Ruby on Rails. Within my app, the user can create a 'build' record that will only be saved if the entire record is unique. If a user tries to create a 'build' record that already exists and validation fails, I need to redirect that user to the existing record.
As I have stated, I am a novice, and I made a valiant attempt at using the parameters passed to my create action, like so:
def create
  @build = Build.new(params[:build])
  if @build.save
    redirect_to :action => 'view', :id => @build.id
  else
    @bexist = Build.find(params[:build])
    redirect_to :action => 'view', :id => @bexist.id
  end
end
Clearly this isn't correct... I also tried to look into callbacks with after_validation, but wasn't sure how to access or even store the existing record's id. Anyone have any suggestions?
You need the attribute/value hash to be passed as the :conditions option, and you need to specify :first, :last, or :all as the first argument.
@bexist = Build.find(:first, :conditions => params[:build])
Alternatively, you can use the #first method instead of using #find with a :first argument.
@bexist = Build.first(:conditions => params[:build])
In Rails 3, you have yet another option...
@bexist = Build.where(params[:build]).first
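Putting it together, a rough sketch of the create action built on the Rails 3 form above (the 'view' action comes from the question; the render fallback is an assumption for when no matching record exists):
def create
  @build = Build.new(params[:build])
  if @build.save
    redirect_to :action => 'view', :id => @build.id
  elsif (existing = Build.where(params[:build]).first)
    redirect_to :action => 'view', :id => existing.id
  else
    render :action => 'new'   # nothing matched; show the form again
  end
end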
I'm using the permanent_records gem in my Rails 3.0.10 app to prevent hard deletes, and it seems Rails is ignoring my default scope when checking uniqueness.
# user.rb
class User < AR::Base
default_scope where(:deleted_at => nil)
validates_uniqueness_of :email # done by devise
end
In my Rails console, finding a user by an email address that has been deleted returns nil, but signing up for a new account with a deleted email address results in a validation error on the email field.
This is also the case for another model in my app:
# group.rb
class Group < AR::Base
default_scope where(:deleted_at => nil)
validates_uniqueness_of :class_name
end
The behaviour is the same as before: deleting a group and then finding it by class name returns nil, yet when I try to create a group with the class name of a deleted group, validation fails.
Does anyone know if I am doing something wrong or should I just write custom validators for this behavior?
Try scoping the uniqueness check with deleted_at
validates_uniqueness_of :email, :scope => :deleted_at
This allows two records with the same email value as long as the deleted_at field differs between them. As long as deleted_at is populated with the correct timestamp, which I believe the permanent_records gem does, this should work.
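If the :scope option doesn't give you quite the behaviour you want, a custom validation that only checks non-deleted rows is another option. A rough sketch for the User model (the method name and error message are mine):
validate :email_unique_among_active_users

def email_unique_among_active_users
  # only look at rows that have not been soft-deleted
  others = User.where(:email => email, :deleted_at => nil)
  others = others.where("id <> ?", id) if persisted?
  errors.add(:email, "has already been taken") if others.exists?
end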
I have a Rails 3.0 web app that allows each user to create their own path within the application.
Example: www.my_app.com/user_company_name
So I store a custom path in a user DB field, and the user can change the path through an input.
I have added these validations to the model:
validates_presence_of :custom_page
validates_format_of :custom_page, :with => /^([a-z]|[0-9]|\-|_)+$/, :message => "Only lowercase letters, numbers, underscores and dashes are allowed"
validates_length_of :custom_page, :minimum => 3
validates_uniqueness_of :custom_page, :case_sensitive => false
But I don't know how to validate the URL to make sure it doesn't conflict with an existing route in my routing.
For example, in my routes.rb I have
resources :user
The validation must not allow www.my_app.com/user. How can I do that?
Thanks, vincent
In your routes, you match the company name to a variable
match 'some_path/:company_name(.:format)'
You can then look the record up using params[:company_name], which Rails will populate for you.
Validating the uniqueness of the custom_page field should be enough to ensure there's no overlap, as long as users can only specify one field. (Note that validates_uniqueness_of doesn't scale; if this table will be big, you need a database constraint as well.)
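A rough sketch of that database-level constraint, assuming the column lives on a users table:
class AddUniqueIndexOnCustomPage < ActiveRecord::Migration
  def self.up
    add_index :users, :custom_page, :unique => true
  end

  def self.down
    remove_index :users, :custom_page
  end
end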
If you're letting users specify
'some_path/:custom_path_1/:custom_path_2(.:format)'
then you have to validate across both fields, and now it's getting messy. Hope you're not doing that.
You can try a custom validation to weed out "user"
validate :custom_page_cant_be_user
def custom_page_cant_be_user
errors.add(:custom_page, "can't be `user`") if self.custom_page =~ /^user$/i
end
This assumes :custom_page comes in as basic [a-z] text; if :custom_page can contain something like /user, you'll need to update the regex a bit.
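If you need to reserve more than just user, one option is to validate against a list of reserved top-level path segments. A rough sketch (the list here is made up; fill it from whatever you declare in routes.rb):
RESERVED_PATHS = %w(user users session sessions admin).freeze

validate :custom_page_is_not_reserved

def custom_page_is_not_reserved
  if RESERVED_PATHS.include?(custom_page.to_s.downcase)
    errors.add(:custom_page, "is reserved and can't be used")
  end
end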
I'm writing a Rails app in which I have two models: a Machine model and a MachineUpdate model. The Machine model has many MachineUpdates. The MachineUpdate has a date/time field. I'm trying to retrieve all Machine records that have the following criteria:
The Machine model has not had a MachineUpdate within the last 2 weeks, OR
The Machine model has never had any updates.
Currently, I'm accomplishing #1 with a named scope:
named_scope :needs_updates,
:include => :machine_updates,
:conditions => ['machine_updates.date < ?', UPDATE_THRESHOLD.days.ago]
However, this only gets Machine models that have had at least one update. I also wanted to retrieve Machine models that have had no updates. How can I modify needs_updates so the items it returns fulfills that criteria as well?
One solution is to introduce a counter cache:
# add a machine_updates_count integer column (default 0) to the machines table,
# then enable the counter cache on the belongs_to in MachineUpdate:
belongs_to :machine, :counter_cache => true
and then add OR machines.machine_updates_count = 0 to your SQL conditions.
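A rough sketch of what that could look like, reusing the named scope from the question (the lambda keeps the timestamp fresh; the column name follows the default counter-cache convention):
named_scope :needs_updates, lambda {
  { :include    => :machine_updates,
    :conditions => ["machines.machine_updates_count = 0 OR machine_updates.date < ?",
                    UPDATE_THRESHOLD.days.ago] }
}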
However, you can also solve the problem without a counter cache by using a LEFT JOIN:
named_scope :needs_updates, lambda {
  { :select => "machines.*, MAX(machine_updates.date) as last_update",
    :joins  => "LEFT JOIN machine_updates ON machine_updates.machine_id = machines.id",
    :group  => "machines.id",
    :having => ["last_update IS NULL OR last_update < ?", UPDATE_THRESHOLD.days.ago] }
}
The LEFT JOIN is necessary so that machines with no updates at all are still included, and taking MAX(machine_updates.date) ensures you are comparing against the most recent MachineUpdate.
Note also that you have to wrap the scope in a lambda so the threshold is evaluated every time the query is run. Otherwise it would be evaluated only once (when your model is loaded at application boot), and you would not be able to find Machines that have come to need updates since your app started.
UPDATE:
This solution works in MySQL and SQLite, but not in PostgreSQL. Postgres does not allow columns in the SELECT list that are neither aggregated nor listed in the GROUP BY clause (see discussion). I'm not very familiar with PostgreSQL, but I did get this to work as expected:
named_scope :needs_updates, lambda{
cols = Machine.column_names.collect{ |c| "\"machines\".\"#{c}\"" }.join(",")
{
:select => cols,
:group => cols,
:joins => 'LEFT JOIN "machine_updates" ON "machine_updates"."machine_id" = "machines"."id"',
:having => ['MAX("machine_updates"."date") IS NULL OR MAX("machine_updates"."date") < ?', UPDATE_THRESHOLD.days.ago]
}
}
If you can make changes to the table, you can use the :touch option on the belongs_to association.
For instance, add a datetime column to Machine named last_machine_update, then add :touch => :last_machine_update to the belongs_to in MachineUpdate. That field will then be updated whenever a MachineUpdate belonging to that Machine is added or modified, removing the need for the join in the named scope.
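A rough sketch of that setup, reusing the UPDATE_THRESHOLD constant from the question:
class MachineUpdate < ActiveRecord::Base
  belongs_to :machine, :touch => :last_machine_update
end

class Machine < ActiveRecord::Base
  has_many :machine_updates

  # the scope becomes a plain column comparison, no join needed
  named_scope :needs_updates, lambda {
    { :conditions => ["last_machine_update IS NULL OR last_machine_update < ?",
                      UPDATE_THRESHOLD.days.ago] }
  }
end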
Otherwise I would probably do it like Alex proposes.
I just ran into a similar problem. It's actually pretty simple:
Machine.all(
  :include    => :machine_updates,
  :conditions => ["machine_updates.machine_id IS NULL OR machine_updates.date < ?", UPDATE_THRESHOLD.days.ago])
If you were doing a named scope, just use lambdas to ensure that the date is re-calculated every time the named scope is called
named_scope :needs_updates, lambda { {
  :include    => :machine_updates,
  :conditions => ["machine_updates.machine_id IS NULL OR machine_updates.date < ?", UPDATE_THRESHOLD.days.ago]
} }
If you want to avoid returning all of the MachineUpdate records in your query, then you need to use the :select option to only return the columns you want
named_scope :needs_updates, lambda { {
  :select     => "machines.*",
  :joins      => "LEFT JOIN machine_updates ON machine_updates.machine_id = machines.id",
  :conditions => ["machine_updates.machine_id IS NULL OR machine_updates.date < ?", UPDATE_THRESHOLD.days.ago]
} }