rufus cron job not working in Apache/Passenger - ruby-on-rails-3

I have a Rails app running on Apache/Passenger. It has a rufus-scheduler cron job that runs in the background and sends out notifications via email.
When I am running the app in development on the WEBrick server, the emails are sent like they are supposed to be sent.
When I start the app in production on Apache/Passenger, the emails don't get sent, and the production logs show no output from rufus-scheduler.
I'm stuck on this problem. Any help will be appreciated; thank you in advance.

The easiest solution is to set PassengerSpawnMethod to direct. The Phusion Passenger documentation explains why this solves it: http://www.modrails.com/documentation/Users%20guide%20Apache.html#spawning_methods_explained
In particular, take a look at section "15.4. Smart spawning gotcha #2: the need to revive threads".
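The gotcha can be demonstrated with plain Ruby and no Passenger at all. In this sketch a background thread stands in for the rufus-scheduler thread: with smart spawning, Passenger forks worker processes, and fork semantics dictate that only the forking thread survives in the child, so the scheduler thread is simply gone in every worker. `PassengerSpawnMethod direct` avoids the fork entirely (alternatively, Passenger's `:starting_worker_process` event can be used to revive the scheduler in each worker).

```ruby
# Minimal stdlib demonstration of the smart-spawning gotcha: a thread
# started before the fork does not run in the forked child.
ticks = []
scheduler = Thread.new { loop { ticks << Time.now; sleep 0.05 } }
sleep 0.2 # the "scheduler" is ticking in the parent

pid = Process.fork do
  before = ticks.size
  sleep 0.3
  # Only the forking thread runs in the child: the scheduler thread is
  # dead here, so no new ticks arrive - and no emails would be sent.
  exit!(ticks.size == before ? 0 : 1)
end
_, status = Process.wait2(pid)
puts status.exitstatus # 0 means the scheduler thread died in the child
```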

Related

Automatically Update PM2 Processes

I'm looking to automate how my bots are updated. They're hosted on a VPS from GalaxyGate and kept alive using PM2. I develop them in VS Code, then push to GitHub and pull the code from there onto the VPS to update them directly in production. However, this currently has to be done manually, and I always forget the git commands I'm supposed to run.
I'd like to have this process automated; ideally the server would periodically run some sort of script to update all bots hosted on it. It would fetch from GitHub, apply any new changes, and properly restart the bots, so I have a few questions:
Is this possible, and if so, how would I go about doing that?
Are there any risks/downsides with such a system?
Any help would be appreciated
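The flow described above (fetch, apply changes, restart) can be sketched as a cron-driven script. The bot paths and the branch name below are hypothetical, and `pm2 reload` is assumed to be the desired restart style; the command runner is injectable so the logic can be exercised without git or pm2 installed:

```ruby
# A sketch of an auto-update script to run from cron on the VPS.
BOT_DIRS = ['/home/bots/bot-a', '/home/bots/bot-b'] # hypothetical paths

# Build the commands for one bot: fetch, hard-reset to the remote
# branch, then reload the PM2 process named after the directory.
def update_commands(repo_dir, branch: 'main')
  [
    ['git', '-C', repo_dir, 'fetch', 'origin'],
    ['git', '-C', repo_dir, 'reset', '--hard', "origin/#{branch}"],
    ['pm2', 'reload', File.basename(repo_dir)]
  ]
end

# The runner defaults to actually executing the commands, but can be
# swapped out (e.g. for a dry run or a test).
def update_all(dirs, runner: ->(cmd) { system(*cmd) })
  dirs.each { |dir| update_commands(dir).each { |cmd| runner.call(cmd) } }
end
```

You could invoke it from crontab, e.g. `*/15 * * * * ruby /home/bots/update.rb` (paths again hypothetical). The main risks of such a system: `reset --hard` discards any edits made directly on the VPS, and a periodic reload will restart bots even mid-activity, so a graceful restart mechanism matters.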

Heroku turning on multiple tasks when I initiate restarts

I am currently working on Heroku, and I need a task to restart programmatically when certain errors arise, so I used the Heroku restart API to do so. The problem is that my program restarts every 10 seconds or so, and Heroku lets the other web tasks stay online, which leads to them crashing later on. Is there any way to run only one task at a time and skip the 60/30-second waiting period for when you restart a task? I read somewhere about one-off tasks, but I wasn't sure. I am sorry for the formatting of my question; it is my first post on Stack Overflow. Thank you.
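Whatever triggers the restart (the platform API, a CLI call), wrapping it in a throttle keeps an error loop from restarting the dyno every few seconds. A minimal sketch; the 300-second window is an arbitrary assumption, and the block you pass would be your actual Heroku restart call:

```ruby
# Allow a restart at most once per time window, no matter how often
# the error fires.
class RestartGuard
  def initialize(min_interval: 300, clock: -> { Time.now.to_f })
    @min_interval = min_interval
    @clock = clock
    @last = nil
  end

  # Yields (e.g. to the Heroku restart call) at most once per window;
  # returns true if the restart was actually issued.
  def request_restart
    now = @clock.call
    return false if @last && now - @last < @min_interval
    @last = now
    yield if block_given?
    true
  end
end
```

Usage would look like `guard.request_restart { issue_heroku_restart }`, where `issue_heroku_restart` is a hypothetical stand-in for your existing restart-API call.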

Bolt-CMS keeps logging me out after a period of time

I am using Bolt CMS in my project, and every ten minutes Bolt logs me out of the CMS, even when I am active and using the root account.
I am using Bolt version 3.2.13, which runs on Silex (with some Symfony components), and I presume there is a config file for this.
Does anyone know where to find this config file, or where I can turn the automatic logout off?
Thanks in advance!
The first thing you should do is check the lifetime of your cookie (check the developer toolbar of your browser).
If this does not give a clue, then most likely there is a cron job on the server that deletes your session/cookie files. This question might be relevant in that scenario.
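If the cookie lifetime does turn out to be the culprit: in Bolt 3 the session cookie lifetime is set in the main config.yml. A sketch from memory of Bolt 3's default config layout; verify the key name and location against your installed file:

```yaml
# app/config/config.yml (Bolt 3) - value is in seconds
cookies_lifetime: 1209600  # 14 days
```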

Heroku, H12 and passthrough upload timeouts

Overview:
I have a photobooth that takes pictures and sends them to my web application.
My web application then stores the user's data and sends the picture to the user's Facebook profile/fan page.
My web app runs Ruby on Rails on the Heroku Cedar stack.
Flow:
My web app receives the photo from the photobooth via a POST, like a web form.
The booth waits for the server response; if the upload failed, it sends the picture again.
The web app responds only after the Facebook upload has completed.
Problems:
The web app sends data back to the photobooth only after all processing has completed, which often takes more than 30 seconds. This causes Heroku to fire an H12 timeout.
Solutions?
Keep the request alive while the file is being uploaded (return some response data in order to prevent Heroku from firing an H12 - https://devcenter.heroku.com/articles/http-routing#timeouts). Is this possible? How would I achieve it in Ruby?
Change to Unicorn + Nginx and activate the Upload Module (this way the dyno only receives the request after the upload has completed - Unicorn + Rails + Large Uploads). Is that really possible?
Use the rack-timeout gem. This would cause a lot of my passthrough uploads to fail, so the pictures would never be posted to Facebook, right?
Change the architecture. Upload directly to S3, spin up a worker to check for new pictures in the S3 bucket, download them, and send them to Facebook. This one might be the best, but it takes a lot of time and effort. I might go for it in the long term, but I'm looking for a fast solution right now.
Other...
More info on this issue.
From Rapgenius:
http://rapgenius.com/Lemon-money-trees-rap-genius-response-to-heroku-lyrics
Ten days ago, spurred by a minor problem serving our compiled
javascript, we started running a lot of ab benchmarks. We noticed that
the numbers we were getting were consistently worse than the numbers
reported to us by Heroku and their analytics partner New Relic. For a
static copyright page, for instance, Heroku reported an average
response time of 40ms; our tools said 6330ms. What could account for
such a big difference?
“Requests are waiting in a queue at the dyno level,” a Heroku engineer
told us, “then being served quickly (thus the Rails logs appear fast),
but the overall time is slower because of the wait in the queue.”
Waiting in a queue at the dyno level? What?
From Heroku:
https://blog.heroku.com/archives/2013/2/16/routing_performance_update
Over the past couple of years Heroku customers have occasionally
reported unexplained latency on Heroku. There are many causes of
latency—some of them have nothing to do with Heroku—but until this
week, we failed to see a common thread among these reports. We now
know that our routing and load balancing mechanism on the Bamboo and
Cedar stacks created latency issues for our Rails customers, which
manifested themselves in several ways, including:
Unexplainable, high latencies for some requests
Mismatch between reported queuing and service time metrics and the observed reality
Discrepancies between documented and observed behaviors
For applications running on the Bamboo stack, the root cause of these
issues is the nature of routing on the Bamboo stack coupled with
gradual, horizontal expansion of the routing cluster. On the Cedar
stack, the root cause is the fact that Cedar is optimized for
concurrent request routing, while some frameworks, like Rails, are not
concurrent in their default configurations.
We want Heroku to be the best place to build, deploy and scale web and
mobile applications. In this case, we’ve fallen short of that promise.
We failed to:
Properly document how routing works on the Bamboo stack
Understand the service degradation being experienced by our customers and take corrective action
Identify and correct confusing metrics reported from the routing layer and displayed by third party tools
Clearly communicate the product strategy for our routing service
Provide customers with an upgrade path from non-concurrent apps on Bamboo to concurrent Rails apps on Cedar
Deliver on the Heroku promise of letting you focus on developing apps while we worry about the infrastructure
We are immediately taking the following actions:
Improving our documentation so that it accurately reflects how our service works across both Bamboo and Cedar stacks
Removing incorrect and confusing metrics reported by Heroku or partner services like New Relic
Adding metrics that let customers determine queuing impact on application response times
Providing additional tools that developers can use to augment our latency and queuing metrics
Working to better support concurrent-request Rails apps on Cedar
The remainder of this blog post explains the technical details and history of our routing infrastructure, the intent behind the decisions
we made along the way, the mistakes we made and what we think is the
path forward.
1) You can use Unicorn as your app server and set the timeout, after which the Unicorn master kills a worker, to a number of seconds greater than your requests need. Here is some example setup where you can see a timeout of 30 seconds.
Nginx does not work on Heroku, so that is not an option.
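A Unicorn config along these lines is the usual shape for option 1; a sketch, with an arbitrary worker count:

```ruby
# config/unicorn.rb - 30 seconds matches Heroku's router timeout, so a
# stuck worker is killed by the master instead of piling up H12s.
worker_processes 3
preload_app true
timeout 30 # seconds before the master kills a hung worker
```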
2) Changing the architecture would work well too, though I would choose an option where the upload traffic does not hit your own server, such as TransloadIt. They help you get the pictures to S3, for example, and do custom transformations, cropping, etc., without you having to add additional dynos because your processes are blocked by file uploads.
Addition: 3) Another change of architecture would be to handle only the receiving part in one action and hand the upload-to-Facebook task to a background worker (using, for example, Sidekiq).
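A stdlib sketch of the shape option 3 describes: the web action enqueues the upload and responds right away, while a worker drains the queue. Sidekiq gives you this same shape with persistence and retries; `upload_to_facebook` here is a hypothetical stand-in for the actual Facebook call.

```ruby
# In-process queue + worker thread, standing in for Sidekiq.
jobs = Queue.new
uploaded = []

worker = Thread.new do
  while (photo = jobs.pop)          # blocks until a job arrives
    uploaded << "uploaded:#{photo}" # stand-in for upload_to_facebook(photo)
  end
end

# "Controller": accept the POST, enqueue, and respond within seconds -
# well under Heroku's 30-second router timeout.
%w[photo1.jpg photo2.jpg].each { |p| jobs << p }

jobs << nil # sentinel to stop the worker (demo only)
worker.join
puts uploaded.inspect # ["uploaded:photo1.jpg", "uploaded:photo2.jpg"]
```

The design point is the decoupling: the photobooth's retry logic then needs to key off "received" rather than "posted to Facebook", e.g. by polling a status endpoint.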

Push notifications not being received on distribute

I had successfully implemented push notifications with the development certificate, but I cannot seem to get them to work for an ad hoc test with a friend. I followed the same process for creating the push notification keys/certificates, except this time I chose "Production Push SSL Certificate" instead of "Development Push SSL Certificate". I believe this is correct, since I could not find any tutorials showing how to do it for production; all of them were for development.
This quick process can be found on the Ray Wenderlich blog here: http://www.raywenderlich.com/3443/apple-push-notification-services-tutorial-part-12
This is my guess as to where things might have gone wrong, because maybe there is a different way to do production push. I left the PHP code on my server the same as I had it for development push (I copied over the new ck.pem). Is this all right, or do I need to make changes? I can post the code if someone thinks the code is the problem, but as I said, the PHP server code worked before.
Can someone please help me out? Thanks in advance!
I cannot answer my own question because I do not have enough rep, so here is the correct answer:
For development I had:
gateway.sandbox.push.apple.com:2195
but for production it needs to be:
gateway.push.apple.com:2195
Hope this helps someone in the future.
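For reference, Apple documented exactly two gateways for the legacy binary protocol (the one the PHP scripts of that era used): `gateway.sandbox.push.apple.com:2195` for development builds and `gateway.push.apple.com:2195` for ad hoc/production builds. This small sketch just encodes that choice; note the modern HTTP/2 APNs API uses different hosts (`api.push.apple.com`) entirely:

```ruby
# Pick the legacy binary-APNs gateway by build type.
def apns_gateway(production:)
  host = production ? 'gateway.push.apple.com' : 'gateway.sandbox.push.apple.com'
  "#{host}:2195"
end

puts apns_gateway(production: false) # gateway.sandbox.push.apple.com:2195
puts apns_gateway(production: true)  # gateway.push.apple.com:2195
```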
You can also check that your tokens are set correctly and that your devices are allowed to receive notifications.
If you have made many tests, you may also have been temporarily banned from the APNS server; you mustn't make too many calls to the APNS server in a short time range.
Please also note that there may be some delay between the time you send the notification to the APNS server and the time the APNS server delivers it to your device.
Last but not least, make sure your devices have correct, unrestricted Internet/SSL access; some proxies or firewalls may block notifications.
Are you getting the device token dynamically? When the app is in ad hoc distribution, it generates a different device token for push notifications from when it is in debug (a.k.a. development) mode.