sent_at not updated after running rake apn:notifications:deliver - ruby-on-rails-3

Hi, I am using the gem https://github.com/PRX/apn_on_rails.git and followed the instructions to send push notifications for my iOS app. I created notifications like:
device = APN::Device.create(:token => "XXXX XXXX XXXXX XXXX XXXX .... XXXX")
notification = APN::Notification.new
notification.device = device
notification.badge = 5
notification.sound = true
notification.alert = "My first push"
notification.save
And ran the rake command on my local server:
rake apn:notifications:deliver
to send the notifications. Everything seemed to go well, but my device still received nothing. I checked the apn_notifications table and found that sent_at was still nil after running the rake command. I saved several notifications in the database, but none of them were delivered (sent_at stayed nil for all of them). Running with --trace I got:
$ rake apn:notifications:deliver --trace
** Invoke apn:notifications:deliver (first_time)
** Invoke environment (first_time)
** Execute environment
** Execute apn:notifications:deliver
I included
begin
require 'apn_on_rails_tasks'
rescue MissingSourceFile => e
puts e.message
end
in my Rakefile. Is there anything I missed that would prevent the notifications from being delivered?

I had the same behaviour, and for me it turned out that I had used an incorrect PEM certificate. Once I converted the certificate from .p12 to .pem properly and put it in the config/ folder, it worked.
One way to debug it is, instead of waiting for the rake task to push the delivery to Apple's server, to try calling
APN::App.send_notifications
immediately after the notification.save
within your Rails controller.
From the log when running it, I saw that Apple was rejecting my connection, which is how I knew my cert was incorrect.
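For reference, the conversion itself can be done with openssl; a hedged example (the input and output file names here are only placeholders, use whatever .pem path your apn_on_rails configuration points at under config/):
openssl pkcs12 -in apns_cert.p12 -out apple_push_notification_development.pem -nodes -clcerts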

Related

How can I run oink on Heroku?

I'm having a problem running the oink gem on my app on Heroku. I've included it in my Gemfile and Gemfile.lock, uploaded those, and it installs. It even creates the oink.log (which I have no way of viewing, unfortunately). When I run
heroku run bundle exec oink --threshold=0 log/* --app my_app
I get
Running bundle exec oink --threshold=0
log/delayed_job.log log/development.log log/oink.log log/production.log log/test.log attached to terminal... up, run.3
/app/.bundle/gems/ruby/1.8/gems/oink-0.9.3/bin/../lib/oink/cli.rb:88:in `get_file_listing': Could not find "log/delayed_job.log" (RuntimeError)
from /app/.bundle/gems/ruby/1.8/gems/oink-0.9.3/bin/../lib/oink/cli.rb:86:in `each'
from /app/.bundle/gems/ruby/1.8/gems/oink-0.9.3/bin/../lib/oink/cli.rb:86:in `get_file_listing'
from /app/.bundle/gems/ruby/1.8/gems/oink-0.9.3/bin/../lib/oink/cli.rb:59:in `process'
from /app/.bundle/gems/ruby/1.8/gems/oink-0.9.3/bin/oink:4
from /app/.bundle/gems/ruby/1.8/bin/oink:19:in `load'
from /app/.bundle/gems/ruby/1.8/bin/oink:19
I've tried running each of the individual files, too, and get the same result. This command runs fine on my local machine.
In my production.rb file, I have
config.logger = Hodel3000CompliantLogger.new(config.paths.log.first)
config.middleware.use( Oink::Middleware )
as configuration.
Can you enlighten me on what I'm doing wrong here? My understanding is that the logs are read-only, but I don't know if that means they're only accessible through the heroku logs command. If there's a way I can see the oink.log file too, knowing how to do that would also be appreciated, as would knowing how to see it in the actual Heroku log using heroku logs.
UPDATE: The configuration for oink shown above allows the commands to be run successfully on my localhost.
Thanks!
-Andrew

Queue_Classic: How to run the rake task automatically on Heroku without Procfile

I need to know how to run the queue_classic (rake qc:work) rake task automatically on Heroku. I tried with a Procfile, but I am using Bamboo and I get the following error: "Heroku push rejected, Procfile is not supported on the Bamboo stack"
Any ideas?
Thanks.
On Heroku there is an add-on called Heroku Scheduler, and it's free. Add this add-on to your application. It will appear in your application's add-on list; click it and you will be taken to the scheduler dashboard, where you can add a job, for example
bundle exec rake task_name
and schedule the task.
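If you would rather schedule a task that works the queue and then exits (so the scheduled run doesn't hang around), here is a minimal sketch; the qc:drain task name is made up, and QC::Worker#work / QC.count are my assumptions about the queue_classic API in the version you're running, so check them against your installed gem:
# lib/tasks/qc.rake
namespace :qc do
  desc "Work queued jobs until the queue is empty, then exit (for Heroku Scheduler)"
  task :drain => :environment do
    worker = QC::Worker.new
    # process one job per call until nothing is left, then let the dyno exit
    worker.work while QC.count > 0
  end
end
Then schedule bundle exec rake qc:drain in the dashboard instead of a long-running rake qc:work.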

Rails 3.1 assets:precompile Connecting to Database

I'm trying to deploy an application to Heroku after upgrading to Rails 3.1 with the asset pipeline. I ran into the common issue mentioned on Heroku's troubleshooting page when receiving the error:
could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port xxxx?
I took the suggestions on the page and added the following to my config/application.rb file (after also trying to add it to the individual [environment].rb files to no effect):
config.assets.initialize_on_precompile = false
I've modified my database.yml file to point my production environment to a non-existent database, but when running the assets:precompile task locally, I get the following:
> RAILS_ENV=production bundle exec rake assets:precompile --trace
** Invoke assets:precompile (first_time)
** Execute assets:precompile
rake aborted!
FATAL: database "my_nonexistant_database" does not exist
Tasks: TOP => environment
(See full trace by running task with --trace)
I'm trying to figure out what part of my application is trying to initialize the database so that I can fix it, but I've run out of ideas for getting more debugging information than this.
Anyone have any tips for either getting more information about where my app is trying to init the DB, or for fixing the underlying problem?
You should try the new labs feature, http://devcenter.heroku.com/articles/labs-user-env-compile, which will make your config variables available at slug compile time.
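A hedged example of turning it on from the command line (assuming the heroku CLI and an app named my_app):
heroku labs:enable user-env-compile --app my_app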

Delayed Job failing in Production environment on Server

I am using delayed_job gem for sending emails in my rails app.
delayed_job was working well, but for the last 5 days it has not been working and has been throwing the following error in the delayed_job.log file.
2011-10-09T01:53:04+0530: [Worker(delayed_job host:backupserver pid:23953)] Syck::DomainType#private_group_join_request failed with NoMethodError: undefined method `private_group_join_request' for # - 11 failed attempts
2011-10-09T01:53:04+0530: [Worker(delayed_job host:backupserver pid:23953)] 1 jobs processed at 1.4503 j/s, 1 failed ...
2011-10-09T01:54:40+0530: [Worker(delayed_job host:backupserver pid:23953)] Syck::DomainType#contact_us_email failed with NoMethodError: undefined method `contact_us_email' for # - 11 failed attempts
2011-10-09T01:54:40+0530: [Worker(delayed_job host:backupserver pid:23953)] 1 jobs processed at 4.3384 j/s, 1 failed ...
The following is one example of how I am calling delayed_job to send an email.
UserMailer.delay(:run_at => 10.seconds.from_now).contact_us_email(self)
I am starting delayed job with
RAILS_ENV=production script/delayed_job start
It is working correctly in development as well as production environment on my local machine.
Environment which I am using in the Rails app:
Rails 3.0.8
Ruby 1.9.2 on Linux (Ubuntu)
rake 0.9.2
delayed_job 2.1.4
This is the same as
Undefined Method Error when creating delayed_job workers with script/delay_job
but the solution is not working for me.
I figured it out. It was due to the "libyaml" package, which was not present on my local system but was installed on the server.
Is it possible that you didn't stop and start your delayed_job worker when you deployed some new code? If a worker that was running before the deploy is trying to run new methods, it will fail.
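If so, restarting the workers after each deploy should clear it up; for example, assuming the standard daemons-based wrapper that delayed_job generates:
RAILS_ENV=production script/delayed_job restart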
Is it possible that YAML (or Syck) running in the worker process doesn't know about the method in question? Take a look at:
https://github.com/collectiveidea/delayed_job/wiki/Common-problems#wiki-jobs_are_silently_removed_from_the_database
... the relevant part is:
One common cause of deserialization errors is that the YAML references
a class not known to the worker. If this is the case, you can add
# file: config/initializers/custom.rb
require 'my_custom_class'
which will force my_custom_class to be loaded when the worker starts.
I had to restart my Unicorn workers on the production server by hand, because for some reason cap deploy was not doing it for me.
So what I had to do was:
sudo /etc/init.d/unicorn_myapp stop
sudo /etc/init.d/unicorn_myapp start
But unicorn wasn't able to start, so I had to
sudo rm /tmp/unicorn.my_app.sock
And
sudo /etc/init.d/unicorn_myapp start

How can I make rake tasks run in an environment other than dev?

I have a staging machine with a special "staging" environment. I always forget to run rake tasks on that machine like:
rake jobs:work RAILS_ENV=staging
So instead I end up doing:
rake jobs:work
And then I'm mystified why nothing has changed in my database. Doh! It's because I didn't remember to supply RAILS_ENV=staging.
But I will never, ever need to run anything as the development environment on that server. How can I make rake tasks run in the "staging" environment by default?
Rails.env = 'staging'
Put this in your task file.
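For illustration, a variant of the same idea is to default RAILS_ENV at the top of the Rakefile instead, so every rake task on that machine picks it up unless you override it on the command line; a minimal sketch for a Rails 3 app (MyApp is a placeholder for your application's module name):
# Rakefile
# Default to staging unless RAILS_ENV is given explicitly on the command line.
ENV['RAILS_ENV'] ||= 'staging'

require File.expand_path('../config/application', __FILE__)
require 'rake'

MyApp::Application.load_tasks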
You can put a line that sets the environment variable RAILS_ENV in a file that will get run when you log onto the machine. For example, I'm a bash user, so I'd put the line
export RAILS_ENV=staging
In either ~/.bashrc (just for me) or /etc/bashrc (for everyone who logs onto the machine).
Hope this helps!