Add jobs to a rails-sidekiq queue from node.js - ruby-on-rails-3

I have two apps: one is a Rails app and the other is a Node.js app. I'm using Sidekiq in the Rails app. My Node.js app will receive a large number of HTTP POSTs (at least 20 req/sec) and I need those requests to be processed by the Rails app.
The best way I found is to put those requests in a Sidekiq queue and have Rails process them when it can. Is it possible to add a job to Sidekiq from a different application? Is this done by talking directly to Redis? The job will be very simple:
message_type
source
payload
These fields are present in the initial HTTP POST request.
I thought of using Rails directly as the first entry point, but Rails is not that good at handling lots of concurrent HTTP requests.
Any ideas on how to add a job to a Sidekiq queue from outside Rails?

Apparently it's not that difficult. Here is the code I used:
npm install sidekiq --save

// Node.js
// app.js
const redis = require("redis");
const Sidekiq = require("sidekiq");

const client = redis.createClient(); // connects to Redis DB 0 on 127.0.0.1
const sidekiq = new Sidekiq(client); // Sidekiq client object used to enqueue jobs

// Enqueue a job on the "default" queue with the three arguments
sidekiq.enqueue("MyProcessor", ["some-source", "TEST_MESSAGE_TYPE", "Some payload; More payload"], {
  retry: false,
  queue: "default"
});
# Rails
# app/workers/my_processor.rb
class MyProcessor
  include Sidekiq::Worker
  sidekiq_options retry: false

  def perform(source, message_type, payload)
    logger.debug("Message is being processed: #{source} - #{message_type} - #{payload}")
  end
end
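The npm sidekiq package works by writing the job as JSON straight into Redis, which is why no Rails process is needed on the producing side. As a sketch of what that payload looks like (field names follow Sidekiq's simple JSON job format; `build_sidekiq_job` is a hypothetical helper, not part of any library):

```ruby
require "json"
require "securerandom"

# Build the JSON payload Sidekiq expects to find in the "queue:default" Redis list.
# A real producer would then run:  redis.lpush("queue:default", payload)
# and register the queue once:     redis.sadd("queues", "default")
def build_sidekiq_job(worker_class, args, queue: "default", retry_jobs: false)
  {
    "class"       => worker_class,        # name of the Sidekiq worker class in the Rails app
    "args"        => args,                # positional arguments passed to #perform
    "queue"       => queue,
    "retry"       => retry_jobs,
    "jid"         => SecureRandom.hex(12), # unique job id
    "created_at"  => Time.now.to_f,
    "enqueued_at" => Time.now.to_f
  }.to_json
end

payload = build_sidekiq_job("MyProcessor", ["some-source", "TEST_MESSAGE_TYPE", "Some payload"])
puts payload
```

Any language with a Redis client can produce this payload, which is exactly what the npm package does under the hood.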

Related

Problem with spawn + Electron (backend + frontend) NODEJS

On the backend I have the following modules:
express for an event stream (backend -> frontend)
and GET/POST requests (frontend -> backend)
Frontend: Electron + Vue + Axios
When developing the application, I run both parts separately from each other and everything works well.
But in production, the frontend runs the backend as a child process:
const child = spawn('backend.exe', {
  detached: true
});
child.unref();
Over time, and sometimes almost immediately, the backend stops responding.
When I open devTools, I see that all GET/POST requests are in the cancelled state, but the child process is alive; and if the parent process is closed, the child is also closed by this handler:
app.on('will-quit', () => {
  child.kill('SIGTERM');
  process.exit(0);
});
If I terminate the backend process, remove it from the parent process, and run it separately, then everything works fine.
I also tried starting the backend through exec; in that case everything also works fine, but exec does not provide a handle for killing the process when the Electron application updates.
How do I deal with this spawn behavior?
Why do requests stop coming?
How can I terminate a child process started with exec? child.kill('SIGTERM') doesn't work for exec in the app.on('will-quit', () => {...}) event.

concurrent handling in thin, unicorn, puma, webrick

If I have the following action in a controller:

def give_a
  print a
  a = a + 1
end
What happens in each web server when a request comes in, and when multiple requests are received?
I know that WEBrick and Thin are single-threaded, so I guess that means a new request doesn't get processed until the current request is done.
What happens in concurrent web servers such as Puma or Unicorn (and perhaps others)?
If two requests come in and two Unicorn workers handle them, would both responses give the same a value (in a situation where both requests enter the method at the same time)? Or does it all depend on what happens on the server itself, with access to the data being serialized?
Is there a way to have a mutex/semaphore for the concurrent web servers?
AFAIK, the Rails application creates a new controller instance (YourController.new) for each request env.
From what you posted, it is not possible to tell what a refers to. If it is some shared class variable, then it is mutable state and could be modified from both request threads.
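If a really is shared state (for example a class-level variable), a plain Ruby Mutex can serialize the read-increment-write step across threads within one process. A minimal sketch (note it would not help across Unicorn's separate worker processes, which don't share memory):

```ruby
# Shared counter guarded by a Mutex: without the lock, two threads can read
# the same value and both write value + 1, losing an increment.
class Counter
  def initialize
    @value = 0
    @lock = Mutex.new
  end

  def increment
    @lock.synchronize { @value += 1 } # read-modify-write as one atomic step
  end

  attr_reader :value
end

counter = Counter.new
threads = 10.times.map do
  Thread.new { 1000.times { counter.increment } }
end
threads.each(&:join)
puts counter.value # => 10000 with the lock; often less without it
```

For cross-process coordination you would need an external lock (e.g. in the database or Redis) instead.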

Rails application http request

When I run my Rails application under Apache with Passenger and open two browsers, I log each request's thread id using log4r.
I see both requests use the same thread id. How is that possible?
If I sleep in one request until the sleep expires, another request is blocked.
Where can I configure it to use a different thread for each request, or set a maxThreadCount?
Is this the behavior only in the development environment, or in production too? How can I overcome it?
config.threadsafe!
Put it in your production.rb or development.rb.
I had the same problem when calling a local web service inside a controller action.
Puma also has better concurrency, but that threadsafe config made WEBrick multi-threaded for me.
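For context, config.threadsafe! goes inside the environment's configure block (a Rails 3.x sketch; YourApp is a placeholder for your application's module name):

```ruby
# config/environments/production.rb (or development.rb)
# Assumption: a Rails 3.x application whose module is named YourApp.
YourApp::Application.configure do
  # Removes the Rack::Lock middleware that serializes requests,
  # so a multi-threaded server can handle requests concurrently.
  config.threadsafe!
end
```

In Rails 4 this behavior became the default and config.threadsafe! was deprecated.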

Heroku Cedar: How to scale WEB dynos based on time of day

I want my Rails 3.1 app to scale up to 1 web dyno at 8am, then down to 0 web dynos at 5pm.
BUT, I do not want to sign up for a paid service, and I cannot count on my home computer being connected to the internet.
It seems like the Heroku Scheduler should make this trivial. Any quick solutions or links?
The answer is "yes", you can do this from the Scheduler, and it's trivial once you know the answer:
1. Add a Heroku config var with your app name: heroku config:add APP_NAME=blah
2. Add gem 'heroku' to your Gemfile.
3. To verify, manually scale your app up/down: heroku ps:scale web=2
4. Add a rake task to lib/tasks/scheduler.rake:

desc "Scale up dynos"
task :spin_up => :environment do
  heroku = Heroku::Client.new('USERNAME', 'PASSWORD')
  heroku.ps_scale(ENV['APP_NAME'], :type => 'web', :qty => 2)
end
# Add a similar task to spin down

5. Add the Scheduler add-on: heroku addons:add scheduler:standard
6. Use the Scheduler web interface to add "rake spin_up" at whatever time you like.
7. Add a rake spin_down task and schedule it for whenever.
Notes:
Step 1 is needed because I couldn't find any other way to be certain of the app name (I use 'staging' and 'production' environments for my apps).
Step 3 is required because otherwise the ruby command errors out, as it first requires that you agree (via a yes/no response) that you will be charged money as a result of this action.
In step 4, I couldn't find any docs on how to do this with an API key via the heroku gem, so it looks like user/pass is required.
Hope this helps someone else!
Just implemented this approach (good answer above, #dnszero); thought I would update the answer with Heroku's new API.
Add your app name as a Heroku config variable, then:

require 'heroku-api'

desc "Scale UP dynos"
task :spin_up => :environment do
  heroku = Heroku::API.new(:api_key => 'YOUR_ACCOUNT_API_KEY')
  heroku.post_ps_scale(ENV['APP_NAME'], 'web', 2)
end

This is with heroku (2.31.2) and heroku-api (0.3.5).
You can scale your web process to zero with:
heroku ps:scale web=0
or back to 1 with:
heroku ps:scale web=1
You'd then have to have one task set to run at 8:00 that scales it up and one that runs at 17:00 that scales it down. Heroku may require you to verify your account (i.e. enter credit card details) to use the Heroku Scheduler; you'd also have to include the Heroku gem in your app, along with your Heroku credentials, so it can turn your app on or off.
But like Neil says, you get 750 free dyno-hours a month that can't roll over into the next month, so why not just leave it running all the time?
See also this complete gist, which also covers the right command to use from the Heroku Scheduler: https://gist.github.com/maggix/8676595
So I decided to implement this in 2017 and saw that the heroku gem used by the accepted answer has been deprecated in favor of the platform-api gem. I just thought I'd post what worked for me, since I haven't seen any other posts with a more up-to-date answer.
Here is my rake file that scales my web dynos to a count of 2. I used the httparty gem to make a PATCH request with the appropriate headers to the Platform API, as described in the "Formation" section of their docs.
require 'httparty'

desc "Scale UP dynos"
task :scale_up => :environment do
  headers = {
    "Authorization" => "Bearer #{ENV['HEROKU_API_KEY']}",
    "Content-Type"  => "application/json",
    "Accept"        => "application/vnd.heroku+json; version=3"
  }
  params = {
    :quantity => 2,
    :size => "standard-1X"
  }
  response = HTTParty.patch("https://api.heroku.com/apps/#{ENV['APP_NAME']}/formation/web",
                            body: params.to_json, headers: headers)
  puts response
end
As an update to #Ren's answer, Heroku's Platform API gem makes this really easy.

heroku = PlatformAPI.connect_oauth(<HEROKU PLATFORM API KEY>)
heroku.formation.batch_update(<HEROKU APP NAME>, {
  "updates" => [{
    "process"  => <PROCESS NAME>,
    "quantity" => <QUANTITY AS INTEGER>,
    "size"     => <DYNO SIZE>
  }]
})
If you're running on the Cedar stack, you won't be able to scale to zero web dynos without changing the Procfile and redeploying.
Also, why bother, if you get one free dyno a month (750 dyno-hours, a little over a month in fact)?

How to set up Scheduler add-on at Heroku

Coming from PHP, I am accustomed to setting up CRON on a URL that I want to run automatically at a time interval.
Now I am trying to set up the Scheduler add-on on Heroku and I have a small problem: I created the file lib/tasks/scheduler.rake and set up everything possible in the admin section on Heroku, but:
I am a bit confused about how it all works. For example, these lines are in lib/tasks/scheduler.rake:
desc "This task is called by the Heroku scheduler add-on"
task :update_feed => :environment do
  puts "Updating feed..."
  NewsFeed.update
  puts "done."
end

task :send_reminders => :environment do
  User.send_reminders
end
What does task :update_feed mean? Will this action be run at the configured hour? But which controller is this action in? For example, what if I needed to run the action my_action in the home controller every day? Should I just put my_action there instead of update_feed?
With a cron job that calls an HTTP action (for example using curl or wget), you are scheduling an HTTP request; the request then causes the PHP action to run, and that action's code contains the work/logic.
With the Heroku Scheduler, you skip all the HTTP request and controller-action machinery and can put the logic/work directly into the rake task (or put it in a regular Ruby class or model and invoke that from the task body).
This works because the rake task loads the full Rails environment (the :environment dependency in the task definition does this), so inside the task body you have access to your Rails app's models, required gems, application configuration, everything, just like inside a controller or model class in Rails.
What's also nice, if you are on Cedar, is that the Scheduler invokes tasks in a one-off dyno, so your app's main dyno is not occupied by a task run by the Scheduler, which is not the case with the cron -> HTTP request -> controller action pattern.
If you tell me more about what you are trying to do, I can give more specific advice, but in general I usually define the task's logic in a plain Ruby class in the lib directory, or as a class method on a model, and that is what gets called from the task body (as in the example code you cite above).
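To make that last point concrete, here is a minimal sketch of the layout (FeedRefresher and its method are hypothetical names, not part of any library): the logic lives in a plain Ruby class, and the rake task body is just a thin wrapper around it.

```ruby
# lib/feed_refresher.rb -- plain Ruby class holding the actual work
class FeedRefresher
  def self.run
    # ...fetch and update feeds here; return a small summary string...
    "feeds refreshed"
  end
end

# lib/tasks/scheduler.rake would then contain only the thin wrapper:
#
#   desc "Refresh feeds (called by the Heroku scheduler)"
#   task :update_feed => :environment do
#     puts FeedRefresher.run
#   end

puts FeedRefresher.run # => feeds refreshed
```

Keeping the logic out of the rake task also makes it easy to call the same code from a console or a test.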