Is it possible to run rake tasks from another server in Rails? - ruby-on-rails-3

I have a bookmarking website built with Ruby on Rails 3.0.7, and several rake tasks run constantly to fetch details for bookmarked URLs and information about newly added users. Because these rake tasks run all the time, my server is fully occupied and its CPU utilization is at 100%. I want to offload the rake tasks to another server by saving them to a database and putting them in a queue.
I set up a cron job on a separate server to process the rake task queue, with my files shared between the machines, but the rake tasks still run on the development server.
Is there any way to run rake tasks on another server? Or how can I set up dynamic cron jobs in Rails?
Please help me. Thanks.

Looks like a job for a background worker. There are many solutions for Ruby.
The basic idea is this: you submit tasks to a queue (backed by Redis/MySQL/whatever). Then another process (a worker) pops tasks off the queue and executes them in the background, without blocking or affecting your app. Naturally, you can have multiple workers, and they can be located on other machines.
Here's a collection of RailsCasts about workers: Background Job RailsCasts.
My current favourite is Sidekiq.
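As a minimal sketch of the pattern with Sidekiq (the worker class, model, and method names here are hypothetical, and this assumes Redis and the sidekiq gem are already set up):
# app/workers/bookmark_fetch_worker.rb
class BookmarkFetchWorker
  include Sidekiq::Worker

  # Runs in the worker process, not in the web request.
  def perform(bookmark_id)
    bookmark = Bookmark.find(bookmark_id)  # hypothetical model
    bookmark.fetch_url_details!            # hypothetical slow work
  end
end

# In the app, enqueue instead of doing the work inline:
# BookmarkFetchWorker.perform_async(bookmark.id)
The worker process (bundle exec sidekiq) can then run on a completely different machine, as long as it can reach the same Redis instance.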

Related

How to log unique requests and check their status in Lucee

I am trying to log specific requests by users to determine whether their Lucee request has completed, is still running, and so on. The purpose is to fire off automated processes on demand and assure end users that the process has already started, so they do not fire off a second one. I have found HTTP_X_REQUEST_ID in other searches, but when I dump the CGI variables it is not listed. I have set the CGI variables to Writable rather than Read Only, but it is still not showing up. Is it something I must add in IIS, or a setting in Lucee Admin that I am overlooking? Is there a different way to go about this rather than HTTP_X_REQUEST_ID? Any help is appreciated.
Have you considered using <cfflush>? When the Lucee request starts, you can send partial information to the client informing them that the process has started on the server.
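A rough sketch of the idea in CFML (the processing function is hypothetical):
<!--- send an early acknowledgement to the browser before the slow work begins --->
<cfoutput>Your process has started.</cfoutput>
<cfflush>
<!--- the client already has the message; the long-running work happens after the flush --->
<cfset runAutomatedProcess()>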

Capistrano Resque with one worker for production

I am really confused about how to get Resque and resque_mailer working on my production server. What I need is a single worker called 'mailer' that Capistrano starts/restarts whenever I do a cap deploy.
I've seen this gist, but I just don't get it. Is there something that breaks it down and explains what it's doing? Or is there a simpler solution?
I've already got Redis running, since I use it for other tasks.
My production server is as follows: Ubuntu, Apache, Passenger, Ruby 2.0, Rails 4.0
In the end I used Sidekiq. The documentation is much better and it just works!
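If you take the Sidekiq route, the deploy hookup can be quite small. A sketch, assuming the capistrano-sidekiq gem (check its README for the exact setup your versions need):
# Gemfile
gem 'capistrano-sidekiq', group: :development

# Capfile
require 'capistrano/sidekiq'
With that in place, cap deploy quiets and restarts the Sidekiq process alongside the app, with no hand-rolled worker tasks.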

Reducing End User Latency in Herokuapp

This question is a follow-up to my previous question about scaling on Heroku. What I've noticed is that my app doesn't feel quite so smooth when I use it. Plugins like YSlow consistently tell me that the majority of the time is spent on the server side generating the HTML, and New Relic seems to show my app spending a lot of time in Request Queuing (screenshots omitted).
However, New Relic also shows a 10.7 ms processing time on the server, versus the 1.3 s response time the user is experiencing. That seems like a really, really big discrepancy. What does this mean? What is the best way to reduce the latency for the user? (Again, I'm a complete newbie and all help is much appreciated.)
You should switch to Unicorn. (This will only work on the Cedar stack!) It essentially gives you more dynos without buying any more.
Quick synopsis of how:
In your Gemfile:
gem 'unicorn'
Create Rails.root/config/unicorn.rb
worker_processes 4 # number of Unicorn workers to spin up
timeout 30 # restart workers that hang for 30 seconds
Create Rails.root/Procfile
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
Commit and push it out to Heroku, and then tune it (experiment with the worker_processes number). There is a memory limit per dyno; if you run too many workers and hit that limit, the dyno will slow to a crawl.
Reference this article here for more detail: http://michaelvanrooijen.com/articles/2011/06/01-more-concurrency-on-a-single-heroku-dyno-with-the-new-celadon-cedar-stack/
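For an ActiveRecord app you'll usually want a slightly fuller unicorn.rb so each forked worker gets its own database connection. A sketch, assuming ActiveRecord:
worker_processes 4
timeout 30
preload_app true # load the app once in the master process, then fork workers

before_fork do |server, worker|
  # the master's DB connection is not fork-safe; drop it before forking
  defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.disconnect!
end

after_fork do |server, worker|
  # each worker opens its own DB connection
  defined?(ActiveRecord::Base) and ActiveRecord::Base.connection.establish_connection
end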

How to suspend process on a Heroku cedar stack

I have a small app on Heroku's cedar stack that uses two processes. One runs the Sinatra server and the other collects tweets and inserts them into a database. This project is still in development and while Heroku offers one process for free, the second process costs money.
I'd like to keep the Sinatra server running but suspend the tweet collector process from time to time. If you run heroku stop tweet_collector.1 it temporarily stops the process, but then the Procfile appears to restart it. I haven't found a way to comment out processes in the Procfile, so I've simply deleted the process from the file and pushed it.
Can you override the Procfile from commandline and stop a process? If not, how can you comment out a process in the Procfile so it's not read?
I believe you can scale any of your Procfile entries to zero using heroku scale:
heroku scale web=0
More information here: http://devcenter.heroku.com/articles/procfile
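For your setup that would presumably be the collector entry rather than web, so the Sinatra server stays up (the process name below is taken from your Procfile):
heroku scale tweet_collector=0 # suspend the collector
heroku scale tweet_collector=1 # bring it back later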

Having a script that does something continuously at the back end without the need for a browser

I am kind of confused, so please go easy on me. Take any standard web application implemented with MVC, like CodeIgniter or Rails. The scripts get executed only when a browser sends a request, right? So when a user logs in and sends a request, the server receives it and sends back a response.
Now consider a scenario where, apart from the regular application, I also need something like a back-end process. For example, a script which checks whether a bidding period has closed, mails the bidders that bidding is closed, and chooses the winning bid. All of these actions have to happen automatically as soon as the bidding period ends.
If this script were part of the regular app, it would have to be triggered by the client (browser), but I don't want that. It should be like a bot script that runs on the server, checking the DB for events and patterns like this.
How do I go about doing something like this? Also, is it possible on regular shared or dedicated hosting where we have only FTP access, not shell access?
You'd have to write your script as a standalone program and either have it run continuously in the background or have cron (or some other scheduling service, which only helps if you're interested purely in time-based events) execute it for you.
There are probably hosts that offer shell-less ways to do this (fancy GUI interfaces for managing background processes or something), but your run-of-the-mill web host with only FTP access definitely doesn't.
You need a cron job; it's easy to set up on Linux. That cron job will either call the command-line version of PHP with your script or make a local HTTP request with curl or wget.
If you don't have shell access, you need an external site that automatically generates periodic HTTP requests. A cheap one is setcronjob.
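A crontab entry for either approach might look like this (paths, script, and URL are hypothetical):
# run the bid-closing check every five minutes via the PHP CLI
*/5 * * * * /usr/bin/php /var/www/myapp/cron/close_expired_bids.php

# or hit an internal URL instead, if the logic lives in a controller
*/5 * * * * wget -q -O /dev/null http://localhost/cron/close_expired_bids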