How to suspend a process on the Heroku cedar stack

I have a small app on Heroku's cedar stack that uses two processes. One runs the Sinatra server and the other collects tweets and inserts them into a database. This project is still in development and while Heroku offers one process for free, the second process costs money.
I'd like to keep the Sinatra server running but suspend the tweet collector process from time to time. If you run heroku stop tweet_collector.1 it will temporarily stop the process, but then the Procfile appears to restart it. I haven't found a way to comment out processes in the Procfile, so I've simply deleted the process from the file and pushed it.
Can you override the Procfile from the command line and stop a process? If not, how can you comment out a process in the Procfile so it isn't read?

I believe you can scale any of your Procfile entries to zero using heroku scale:
heroku scale web=0
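For the process in the question, assuming the Procfile entry is named tweet_collector, that would be:
heroku scale tweet_collector=0
and, to start it again later:
heroku scale tweet_collector=1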
More information here: http://devcenter.heroku.com/articles/procfile


OFBiz hangs for unknown reasons

I downloaded Apache OFBiz 16 on a machine, unzipped it into a directory, and loaded the default data using the loadDefault option of gradlew.
After that I started OFBiz using gradlew ofbiz. This runs OFBiz successfully, and I can access the application from localhost as well as from other machines on the same network via http://IP:8080/appname and https://IP:8443/appname.
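For reference, those steps were along these lines (run from the OFBiz directory; the exact task names follow the question's description):
./gradlew loadDefault
./gradlew ofbiz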
But after some period of time OFBiz hangs: requests no longer complete and appear to load forever. The problem seems to arise when I access OFBiz over https, but it only starts some time after deployment; initially both http and https work fine.
Can anyone point out what the problem could be?
The problem is that OFBiz uses
DelegatorFactory.getDelegator()
to find or create delegators asynchronously in the database, using a single daemon thread. When the base delegator is initially absent, OFBiz blocks while trying to create one on that same daemon thread, which is already in use. Hence, OFBiz is deadlocked.
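This is the classic single-worker self-deadlock. Not OFBiz code, but a minimal Ruby analogy of the same pattern (all names here are illustrative):
jobs = Queue.new

# one daemon-style worker thread, like the single thread OFBiz uses
worker = Thread.new do
  loop { jobs.pop.call }
end

outer = lambda do
  done = Queue.new
  # the running job enqueues a second job and waits for it...
  jobs << lambda { done << :ok }
  done.pop # ...but the only worker is busy running *this* job
end

jobs << outer
worker.join # Ruby aborts here with a fatal 'deadlock detected' error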
Please share your server logs.

AppEngine Backup from one app to another

I can't seem to restore my AppEngine backups to a new app as described in the documentation.
We are using the cron backup as described in the documentation.
I get through all the stages and launch the restore job successfully, but when it kicks off, all the shards fail with 503 errors.
I tried this with multiple backup files and the experience is the same.
Any advice?
(Java runtime)
I'm posting this hoping it will help someone, since there is a real lack of resources about this in Google's documentation and on the web in general.
While the AppEngine documentation says this can be done, I actually found the piece of code that forbids it inside the datastore_admin app.
I managed to connect through the Python remote-api shell, read an entity from the backup, and tried saving it to the datastore, but the datastore.Put(entity) operation yielded: "BadRequestError: app s~app_a cannot access app s~app_b's data", so the restriction seems to live at an even lower level.
In the end, I decided to restore only a specific namespace to the same app, which was also a tedious task, but it did save the day.
I managed to pull my backup locally through gsutil, installed a Python remote-api version on my app, accessed the interactive shell, and wrote this script:
https://gist.github.com/Shuky/ed8728f8eb6187475b9a
Hope this helps.

Is it possible to run rake tasks from another server in Rails?

I have a bookmarking website developed in Ruby on Rails 3.0.7, and several rake tasks run constantly to fetch details for bookmarked URLs and information about newly added users. Because the rake tasks are always running, my server is fully occupied and its CPU utilization is 100%. I want to run the rake tasks on another server by saving them all in a database and putting them in a queue.
I set up a cron job on a separate server to process the rake task queue, with my files shared between the machines, but the rake tasks still run on the development server.
Is there any option to run rake tasks on another server? Or how can I set up dynamic cron jobs in Rails?
Please help me. Thanks.
Looks like a job for a background worker. There are many solutions for Ruby.
The basic idea is this: you submit tasks to a queue (backed by Redis/MySQL/whatever). Then another process (the worker) pops tasks off the queue and executes them in the background, without blocking or affecting your app. Naturally, you can have multiple workers, and they can live on other machines.
Here's a collection of railscasts about workers: Background Job railscasts.
My current favourite is Sidekiq.
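As a minimal sketch of that pattern with Sidekiq (the class name, Bookmark model, and fetch_url_details! method are hypothetical; it assumes the sidekiq gem and a reachable Redis server):
# app/workers/bookmark_worker.rb
class BookmarkWorker
  include Sidekiq::Worker

  def perform(bookmark_id)
    # runs in the worker process, which can live on another machine
    # as long as it points at the same Redis instance
    bookmark = Bookmark.find(bookmark_id)
    bookmark.fetch_url_details! # hypothetical method doing the slow work
  end
end
Then enqueue from the app instead of running the work inline:
BookmarkWorker.perform_async(bookmark.id)
Because the job lives in Redis, the worker process (bundle exec sidekiq) can run on a completely separate server from the Rails app.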

Not able to back up log files during instance termination issued by an Auto Scaling policy

I have EC2 instances with Auto Scaling enabled.
As part of the scale-down policy, when one of the instances is issued a termination, the log files remaining on that instance need to be backed up to S3, but I haven't found any way to upload the instance's log files to S3 at that point. I tried putting the needed script in the rc0.d directory through chkconfig with the highest priority. I also tried putting my script in /lib/systemd/system/halt.service (or reboot.service or poweroff.service), but no luck so far.
I have found some threads related to this on Stack Overflow and the AWS forums, but no proper solution so far.
Can anyone please let me know a solution to this problem?
The only reliable way I have found of achieving this behaviour is to use rsyslog/syslog to transfer the log files to a central host as soon as they are written to the syslog subsystem.
This means you will need to run another instance that receives the log files and ships them to S3, or use an SQS-based system such as logstash.
Unfortunately there is no other way to ensure all of your log messages end up in S3: you cannot guarantee that your backup script will finish before Auto Scaling "pulls the plug".
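For illustration, the forwarding side can be as small as one line in /etc/rsyslog.conf on each instance (the log-host name here is a placeholder):
*.* @@loghost.example.internal:514
The @@ prefix means TCP delivery; a single @ would send over UDP. The central host then aggregates the streams and ships them to S3 on its own schedule, independent of instance terminations.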

Reducing End User Latency in a Heroku App

This question is a follow-up to my previous question about scaling on Heroku. What I've noticed is that when I use my app it doesn't feel very smooth: plugins like YSlow consistently tell me that the majority of time is spent on the server side generating the HTML. New Relic shows my app spending a lot of time in Request Queuing (New Relic screenshots omitted), yet it also reports a server-side processing time of only 10.7 ms.
That seems like a really, really big discrepancy between 10.7 ms of processing time on the server and the 1.3 s response time the user is experiencing. What does this mean? What is the best way to reduce the latency for the user? (Again, I'm a complete newbie and all help is much appreciated.)
You should switch to Unicorn. This will only work on the Cedar stack! The request queuing you're seeing means requests sit waiting for your single-threaded dyno to become free; Unicorn essentially gives you more concurrency per dyno without buying any more dynos.
A quick synopsis of how:
In your Gemfile:
gem 'unicorn'
Create Rails.root/config/unicorn.rb
worker_processes 4 # amount of unicorn workers to spin up
timeout 30 # restarts workers that hang for 30 seconds
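If the app uses ActiveRecord, a common optional addition to unicorn.rb (an assumption about your setup, not something the steps above require) is to preload the app and give each forked worker its own database connection:
preload_app true # load the app once in the master, then fork workers

before_fork do |server, worker|
  # don't share the master's DB connection with forked workers
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do |server, worker|
  # each worker re-establishes its own connection
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end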
Create Rails.root/Procfile
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
Commit and push it out to Heroku, and then you need to tune it (experiment with the worker_processes number). There is a memory limit per dyno; if you run too many workers and hit that limit, the dyno will slow to a crawl.
Reference this article here for more detail: http://michaelvanrooijen.com/articles/2011/06/01-more-concurrency-on-a-single-heroku-dyno-with-the-new-celadon-cedar-stack/