Deploying redis on Heroku when manually precompiling assets - ruby-on-rails-3

I'm following the instructions here: https://devcenter.heroku.com/articles/redistogo to deploy Redis on Heroku. However, I'm running into some issues while manually precompiling my assets on localhost using:
RAILS_ENV=production bundle exec rake assets:precompile
before pushing it out to Heroku. The ENV["REDISTOGO_URL"] config variable isn't set when I'm doing the production-mode precompile on localhost, so I get a URI error when URI.parse is called.
How do I get around this error? I don't want to hardcode the URI in my production.rb, since Heroku sets it when starting the Redis server. I'm quite new to this whole asset pipeline / deployment process, so any tips would be appreciated.

In application.rb, I set the following so the app isn't initialized (and doesn't try to reach Redis) during precompile:
config.assets.initialize_on_precompile = false
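As a sketch of another way around it (the fallback URL, initializer path, and $redis variable name here are illustrative, not from the original question), the Redis initializer can fall back to a local URL when REDISTOGO_URL isn't set, so URI.parse never sees nil during a local production precompile:
# config/initializers/redis.rb -- illustrative sketch
require "redis"
require "uri"
redis_url = ENV["REDISTOGO_URL"] || "redis://127.0.0.1:6379"
uri = URI.parse(redis_url)
$redis = Redis.new(:host => uri.host, :port => uri.port, :password => uri.password)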

Related

Deploying Custom Cartridges on OpenShift Origin

I have created a new custom cartridge, which I have packaged into an RPM using tito and installed using yum. The cartridge is copied by my spec file into the /usr/libexec/openshift/cartridges directory; however, when I log into the Origin home site and try to create an application, my cartridge does not show up. I went digging in the Ruby scripts and found a script named cartridge_cache.rb that seems to cache the cartridges it finds in the /usr/libexec/openshift/cartridges directory. I tried to get Origin to reload the cache to include my new cartridge by removing all the cache files within the /var/www/openshift/broker/cache directory and then restarting the broker, but I have had no success. Is there somewhere I need to hardcode my cart name to some global variable or something? Basically, does anyone know how to get a custom cart to show up on the web page for creating a new application?
UPDATE: I ran across a slide deck that had one slide on how to install the cartridge. I still haven't had any success, but here is what I have tried since the previous post:
moved my cartridge directory from /usr/libexec/openshift/cartridges to /usr/libexec/openshift/cartridges/v2
ran this command:
oo-admin-cartridge -a install -s /usr/libexec/openshift/cartridges/v2/myfirstcart
whose output stated that it had installed the cartridge.
cleared the cache with
bundle exec rake tmp:clear
restarted the OpenShift broker service
Also, just to make sure the cache was cleared out, I went into the Rails console and ran Rails.cache.clear. Still no custom cartridge on the OpenShift web page.
It works for me after clearing the cache:
cd /var/www/openshift/broker
bundle exec rake tmp:clear
and restarting the broker service:
service openshift-broker restart
http://openshift.github.io/documentation/oo_administration_guide.html#clear-the-broker-application-cache
The MCollective service on the node server (if you have separate servers for the broker and node) must be restarted, e.g. with
service ruby193-mcollective restart
After that, you should clear the caches on the broker server, e.g. with
/usr/sbin/oo-admin-broker-cache --console
Then the new cartridges should be available.
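Putting both answers together, and assuming separate node and broker hosts with the default paths shown above, the whole sequence looks roughly like this:
# on the node host
service ruby193-mcollective restart
# on the broker host
cd /var/www/openshift/broker
bundle exec rake tmp:clear
/usr/sbin/oo-admin-broker-cache --console
service openshift-broker restart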

How can I run oink on Heroku?

I'm having a problem running the oink gem on my app on Heroku. I've included it in my Gemfile and Gemfile.lock, uploaded those, and it installs. It even creates oink.log (which I have no way of viewing, unfortunately). When I run
heroku run bundle exec oink --threshold=0 log/* --app my_app
I get
Running bundle exec oink --threshold=0 log/delayed_job.log log/development.log log/oink.log log/production.log log/test.log attached to terminal... up, run.3
/app/.bundle/gems/ruby/1.8/gems/oink-0.9.3/bin/../lib/oink/cli.rb:88:in `get_file_listing': Could not find "log/delayed_job.log" (RuntimeError)
from /app/.bundle/gems/ruby/1.8/gems/oink-0.9.3/bin/../lib/oink/cli.rb:86:in `each'
from /app/.bundle/gems/ruby/1.8/gems/oink-0.9.3/bin/../lib/oink/cli.rb:86:in `get_file_listing'
from /app/.bundle/gems/ruby/1.8/gems/oink-0.9.3/bin/../lib/oink/cli.rb:59:in `process'
from /app/.bundle/gems/ruby/1.8/gems/oink-0.9.3/bin/oink:4
from /app/.bundle/gems/ruby/1.8/bin/oink:19:in `load'
from /app/.bundle/gems/ruby/1.8/bin/oink:19
I've tried running each of the individual files, too, and get the same result. This command runs fine on my local machine.
In my production.rb file, I have
config.logger = Hodel3000CompliantLogger.new(config.paths.log.first)
config.middleware.use( Oink::Middleware )
as configuration.
Can you enlighten me on what I'm doing wrong here? My understanding is that the logs are read-only, but I don't know whether that means they're only accessible through the heroku logs command. If there's a way I can see the oink.log file too, knowing how to do that would also be appreciated, or knowing how to see it in the actual Heroku log using heroku logs.
UPDATE: The configuration for oink shown above allows the commands to be run successfully on my localhost.
Thanks!
-Andrew
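One pattern worth trying, as a sketch rather than a confirmed fix: point the loggers at STDOUT so the output lands in heroku logs instead of a file on the dyno's filesystem. Whether Oink::Middleware accepts a :logger option depends on the gem version, so check the oink README for 0.9.3 before relying on this:
# sketch only; verify the :logger option against your oink version
config.logger = Hodel3000CompliantLogger.new(STDOUT)
config.middleware.use( Oink::Middleware, :logger => Hodel3000CompliantLogger.new(STDOUT) )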

Bundler::GemNotFound when compiling assets from cap deploy

I'm deploying to servers with Capistrano and running bundle exec rake RAILS_ENV=production RAILS_GROUPS=assets assets:precompile as the last step. The problem is that when it gets to this point from cap deploy, I get the following error:
/usr/local/rvm/gems/ruby-1.9.3-p194/gems/bundler-1.1.4/lib/bundler/spec_set.rb:90:in `block in materialize': Could not find Platform-0.4.0 in any of the sources (Bundler::GemNotFound)
Platform-0.4.0 IS in fact on the server, and when I go onto the server and run this exact command, everything works great.
A couple of facts about my server: it's using RVM, but that doesn't seem to be an issue with cap, as the stack trace above would suggest. The other fact of interest is that this server was first created with a custom script I wrote that downloads an archived version of the git repo and then manually runs what cap does on a deploy. The reason I'm doing this, if anyone asks, is for automation with AWS Auto Scaling. If I do a normal deploy:setup (not using my AWS script), deployments work fine. But the gem list is the same, and the site works the same either way. It's just something with the cap deploy.
Any thoughts?
I figured out what I was doing wrong. In my custom AMI scripts, I was naming the initial release folder 'first' when it should be a timestamp, the way Capistrano normally names it. That screwed things up on subsequent deployments.
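For illustration (the paths and app name here are made up), Capistrano 2 names each release directory with a UTC timestamp, so a bootstrap script should create the first release along these lines instead of calling it 'first':
# name the initial release the way Capistrano would
TS=$(date -u +%Y%m%d%H%M%S)
mkdir -p /var/www/my_app/releases/$TS
# unpack the archived repo into that directory, then point current at it
ln -nfs /var/www/my_app/releases/$TS /var/www/my_app/current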

Rails 3.1 assets:precompile Connecting to Database

I'm trying to deploy an application to Heroku after upgrading to Rails 3.1 with the asset pipeline. I ran into the common issue mentioned on Heroku's troubleshooting page, receiving the error:
could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port xxxx?
I took the suggestions on the page and added the following to my config/application.rb file (after also trying to add it to the individual [environment].rb files, to no effect):
config.assets.initialize_on_precompile = false
I've modified my database.yml file to point my production environment at a non-existent database, but when running the assets:precompile task locally, I get the following:
> RAILS_ENV=production bundle exec rake assets:precompile --trace
** Invoke assets:precompile (first_time)
** Execute assets:precompile
rake aborted!
FATAL: database "my_nonexistant_database" does not exist
Tasks: TOP => environment
(See full trace by running task with --trace)
I'm trying to figure out what part of my application is trying to initialize the database so that I can fix it, but I've run out of ideas for getting more debugging information than this.
Anyone have any tips for either getting more information about where my app is trying to init the DB, or for fixing the underlying problem?
You should try the new labs feature, http://devcenter.heroku.com/articles/labs-user-env-compile, which makes config variables available at slug compile time.
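If I remember the toolbelt syntax of that era correctly, the feature was enabled per app with something like the following (my_app is a placeholder for your app name):
heroku labs:enable user-env-compile --app my_app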

Delayed Job failing in Production environment on Server

I am using the delayed_job gem for sending emails in my Rails app.
delayed_job had been working well, but for the last 5 days it has not been working and throws the following errors in the delayed_job.log file:
2011-10-09T01:53:04+0530: [Worker(delayed_job host:backupserver pid:23953)] Syck::DomainType#private_group_join_request failed with NoMethodError: undefined method `private_group_join_request' for # - 11 failed attempts
2011-10-09T01:53:04+0530: [Worker(delayed_job host:backupserver pid:23953)] 1 jobs processed at 1.4503 j/s, 1 failed ...
2011-10-09T01:54:40+0530: [Worker(delayed_job host:backupserver pid:23953)] Syck::DomainType#contact_us_email failed with NoMethodError: undefined method `contact_us_email' for # - 11 failed attempts
2011-10-09T01:54:40+0530: [Worker(delayed_job host:backupserver pid:23953)] 1 jobs processed at 4.3384 j/s, 1 failed ...
The following is one example of how I am calling delayed_job to send an email:
UserMailer.delay(:run_at => 10.seconds.from_now).contact_us_email(self)
I am starting delayed_job with
RAILS_ENV=production script/delayed_job start
It works correctly in both the development and production environments on my local machine.
Environment I am using in the Rails app:
Rails 3.0.8
Ruby 1.9.2 on Linux (Ubuntu)
rake 0.9.2
delayed_job 2.1.4
This is the same as
Undefined Method Error when creating delayed_job workers with script/delay_job
but the solution there is not working for me.
I figured it out. It was due to the "libyaml" package, which was not present on my local system but was installed on the server.
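Ruby 1.9.2 only builds the Psych YAML engine when libyaml is available, and that difference is exactly the kind of thing that produces Syck deserialization errors like the ones above. To confirm which engine each machine is using, a one-liner along these lines should print either syck or psych:
ruby -ryaml -e 'puts YAML::ENGINE.yamler'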
Is it possible that you didn't stop and start your delayed_job worker when you deployed some new code? If a worker that was running before the deploy is trying to run new methods, it will fail.
Is it possible that YAML (or Syck) running in the worker process doesn't know about the method in question? Take a look at:
https://github.com/collectiveidea/delayed_job/wiki/Common-problems#wiki-jobs_are_silently_removed_from_the_database
... the relevant part is:
One common cause of deserialization errors is that the YAML references
a class not known to the worker. If this is the case, you can add
# file: config/initializers/custom.rb
require 'my_custom_class'
which will force my_custom_class to be loaded when the worker starts.
I had to restart my Unicorn workers on the production server by hand, because for some reason cap deploy was not doing it for me.
So what I had to do was:
sudo /etc/init.d/unicorn_myapp stop
sudo /etc/init.d/unicorn_myapp start
But unicorn wasn't able to start, so I had to
sudo rm /tmp/unicorn.my_app.sock
And
sudo /etc/init.d/unicorn_myapp start