AWS worker tier application version (ruby-on-rails-3)

I have a Rails app running on AWS Elastic Beanstalk on a web tier. I want to send email notifications to users, so I'm using SQS to send messages to a queue:
sqs = AWS::SQS.new
sqs.queues.named("messaging_queue").send_message("HELLO")
and then I would like to take these messages off the queue using a worker tier instance.
My issue is that when I create the worker tier instance from the console, it asks for the application version, which defaults to the latest version deployed to my web tier. I don't want to upload my entire web application to the worker, just the code responsible for sending the emails.
What's the best way to do this? I could upload a zip, but I would like to just use git.

Can you refactor the code responsible for sending emails into a separate library? That way you can create a new web app that just wraps the email functionality in your library and runs in a worker tier environment. The worker daemon will post messages to your new worker tier app, which will then send the email. That way you do not have to deploy your entire code base to your worker tier environment.
You can use git and eb to achieve this. Your worker tier application version and web app application version can be managed in different branches, or, in your case, it seems better to keep them in different git repositories. If you wish to use branches, you can read about the eb command "eb branch"; it may be useful.
Read more about eb here.
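To make the flow concrete: the worker tier's daemon (sqsd) pulls messages off the SQS queue and POSTs each message body to your worker app over local HTTP; a 200 response deletes the message, and any other status leaves it on the queue for retry. Below is a minimal sketch of such a handler, written in Python for brevity (your actual app would be the extracted Rails email code); the function and helper names are made up for illustration.

```python
# Sketch of a worker-tier message handler. Elastic Beanstalk's worker daemon
# (sqsd) POSTs each SQS message body to the worker app over local HTTP; a
# 200 response deletes the message, any other status leaves it to be retried.
# The names below are hypothetical, for illustration only.

def send_notification_email(recipient, body):
    # Placeholder for the extracted email code; a real worker would call
    # ActionMailer (Rails) or an SMTP/SES client here.
    return f"emailed {recipient}: {body}"

def handle_sqs_message(body):
    """Process one message body POSTed by the worker daemon.

    Returns the HTTP status the worker app should respond with.
    """
    try:
        send_notification_email("user@example.com", body)
        return 200   # message is deleted from the queue
    except Exception:
        return 500   # message becomes visible again for retry
```

With this split, the only code the worker environment needs is the email library plus this thin HTTP wrapper.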

Related

Hangfire and main app in different applications

We have a web application developed in .NET Core that is hosted on Azure. We want to use Hangfire for report scheduling.
Our application is multi-tenant, so it will have a load of its own, and I want to run these background processes on a different server. Hangfire has the option of placing processing into another process, either as a console application or as a Windows service.
I have gone through the Hangfire docs, but there is no clear explanation of how the main application (which is .NET Core) connects to the console application:
https://docs.hangfire.io/en/latest/background-processing/placing-processing-into-another-process.html
I came across this question, but it is still not clear to me:
How to run Hangfire as a separate process from the main web application?
Your ASP.NET Core application (the Hangfire client) will not communicate directly with the console application (the Hangfire server).
Communication is done through the storage (the database): the client declares new tasks in the storage, and the server polls the storage (or is notified by it, depending on the storage technology, e.g. Redis) to execute the tasks.
You need to ensure communication of both client and server with the storage, but not between the client and the server.
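The storage-mediated pattern described above can be sketched generically: the two processes share nothing but a database connection. The snippet below imitates the idea in Python with SQLite standing in for Hangfire's job storage; the table and function names are invented for illustration (Hangfire manages its own schema and polling internally).

```python
import sqlite3

# Shared storage standing in for Hangfire's database. The "client" only
# inserts job rows; the "server" only polls for pending rows and runs them.
# The two sides never talk to each other directly.

def enqueue(conn, payload):
    # What the web app (Hangfire client) does: declare a task in storage.
    conn.execute("INSERT INTO jobs (payload, done) VALUES (?, 0)", (payload,))
    conn.commit()

def poll_and_run(conn, handler):
    # What the console app (Hangfire server) does: poll storage for work.
    row = conn.execute(
        "SELECT id, payload FROM jobs WHERE done = 0 ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    job_id, payload = row
    result = handler(payload)
    conn.execute("UPDATE jobs SET done = 1 WHERE id = ?", (job_id,))
    conn.commit()
    return result

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT, done INTEGER)")
enqueue(conn, "report-42")                       # web app side
print(poll_and_run(conn, lambda p: f"ran {p}"))  # console app side
```

In real Hangfire both processes are simply configured with the same connection string; that shared configuration is the entire "connection" between them.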

Manage in-memory cache on multiple servers in AWS

Once or twice a day, some files are uploaded to an S3 bucket. On every S3 upload, I want the in-memory data of each server to be refreshed with the uploaded data.
Note that there are multiple servers running, and I want to store the same data on all of them. Also, the servers scale based on traffic (new servers come up and older ones go down, so the set of server instances will not always be the same).
In short, I want to keep the cached data up to date.
I want to build an architecture that supports auto-scaling of the servers. I came across the AWS fan-out architecture, using SNS and multiple SQS queues from which the different servers can poll.
How can we handle the scaling of the queues with respect to the servers?
Or is there any other way to handle the scenario?
PS: I'm totally new to the AWS environment.
Any reference will be a great help.
To me there are a few things that you need to have to make this work. These are opinions and, as with most architectural designs, there is certainly more than one way to handle this.
I start with the assumption that you've got an application running on an EC2 of some sort (Elastic Beanstalk, Fargate, Raw EC2s with auto scaling, etc.) and that you've solved for having the application installed and configured when a scale-up event occurs.
Conceptually, the flow would be: S3 bucket → SNS topic → one SQS queue per instance → application cache.
The setup involves having the S3 bucket publish events (likely s3:ObjectCreated) to the SNS topic. These events will be published when an object in the bucket is created or updated.
Next:
During startup your application will pull the current data from S3.
As part of application startup, create a queue named after the instance ID of the EC2 instance (see here for some examples). The queue would need to subscribe to the SNS topic. If the queue already exists, that's not an error.
Your application would have a background thread or process that polls the SQS queue for messages.
If a message arrives on the queue, that tells the application to refresh the cache from S3.
When an instance is shut down, Elastic Beanstalk (at least) and the load balancers emit an event indicating that the instance is going away. Remove the SQS queue tied to the instance at that time.
The only issue is that a hard crash of an environment would leave orphaned queues. It may be advisable to clean these up manually or have a periodic task do it.
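Two pieces of the steps above are plain string handling and can be sketched without touching AWS: deriving the per-instance queue name, and unwrapping the SNS envelope that arrives in the SQS message body (unless raw message delivery is enabled, SNS wraps the payload in a JSON envelope whose Message field holds the original message). The prefix and names below are illustrative assumptions, not fixed AWS conventions.

```python
import json

def queue_name_for_instance(instance_id, prefix="cache-refresh"):
    # One SQS queue per EC2 instance, named so orphaned queues are
    # easy to spot and clean up later.
    return f"{prefix}-{instance_id}"

def unwrap_sns_envelope(sqs_body):
    """Extract the original payload from an SNS-to-SQS delivery.

    Without raw message delivery, SNS wraps the payload in a JSON
    envelope; the original message sits in the "Message" field.
    """
    envelope = json.loads(sqs_body)
    return envelope["Message"]

name = queue_name_for_instance("i-0abc123def456")
body = json.dumps({"Type": "Notification",
                   "Message": '{"event":"s3:ObjectCreated:Put"}'})
print(name)                       # cache-refresh-i-0abc123def456
print(unwrap_sns_envelope(body))  # {"event":"s3:ObjectCreated:Put"}
```

The actual queue creation and SNS subscription would be boto3 (or SDK) calls made at startup, using the name produced here.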

Laravel Queue: How to use on shared hosting

I have read tutorials on Laravel Queue using Beanstalkd etc., and the idea of using a queue is fantastic because in my current project, sending a welcome mail to a registered user takes up to 10 seconds to process because of an attached logo. I can imagine what will happen if more users register at the same time. So, using a queue for this will speed things up.
On the shared server I am working on, I have no SSH access. So, setting up the queue according to the tutorials is not feasible.
I want to know if there is a way to set up a Laravel queue without SSH access; if there is, I need a guide.
You can't use Beanstalkd on a shared server because you can't install the service, and I don't know any hosting provider that offers it for shared hosting. However, you could use IronMQ, which is a remotely hosted service (so you don't need to install anything on the server). The Laravel queue API is the same for any queue service, so you can just use Queue::push as you would with Beanstalkd.
Here's a great video on setting this up by Taylor Otwell, the creator of Laravel:
http://vimeo.com/64703617. You can also read this tutorial which explains how to use IronMQ with Laravel in more detail.
IronMQ is a paid service, but it does have a Free Plan for developers which offers 1 million API requests per month.
Instead of running artisan queue:listen as you would for Beanstalkd, you just define a route for IronMQ to call when processing each job on the queue:
Route::post('queue/receive', function()
{
    return Queue::marshal();
});

Deploying java client, RabbitMQ, and Celery to server

I have a Java API on my server, and I want it to create tasks and add them to Celery via RabbitMQ. I followed this tutorial, http://www.rabbitmq.com/tutorials/tutorial-two-python.html, where I used Java for the client (send.java) and Python to receive (receive.py). In receive.py, where the callback method is invoked, I call a method that I've decorated with @celery.task so that the task is added to Celery.
I'm wondering how all of this is deployed on a server, though; specifically, why there is a receive.py file. Is receive.py a process that must continually run on the server? Is there a way to configure RabbitMQ so that it automatically routes Java client tasks to Celery?
Thanks!
RabbitMQ is just a message queue: producers put messages in, and consumers get them on demand. You can only restrict access to specific queues via RabbitMQ's auth options.
As for deployment: yes, receive.py needs to run continuously. It is Celery's job to do that. See the Workers Guide for info on running a worker.
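The "run continuously" part is just a blocking consume loop, which the Celery worker runs for you when you start it (typically with something like `celery -A yourmodule worker`, where the module name is yours). Conceptually it looks like the sketch below; an in-memory queue stands in for RabbitMQ, and the task function is a hypothetical stand-in for the one decorated with @celery.task.

```python
import queue

def send_email_task(payload):
    # Stand-in for the function decorated with @celery.task.
    return f"emailed: {payload}"

def worker_loop(broker, handler):
    """Consume messages until the broker yields a stop sentinel.

    A Celery worker does the equivalent against RabbitMQ: block on the
    queue, run the matching task for each message, repeat forever.
    """
    results = []
    while True:
        msg = broker.get()   # blocks until a message arrives
        if msg is None:      # sentinel used here just to end the demo
            break
        results.append(handler(msg))
    return results

broker = queue.Queue()
for m in ["hi alice", "hi bob", None]:
    broker.put(m)
print(worker_loop(broker, send_email_task))
```

In production you would not write this loop yourself; you would daemonize the Celery worker (e.g. under systemd or supervisord) so the consume loop survives reboots.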

Where to initialize MassTransit in an Asp.Net MVC 4 application?

I have a simple solution with 3 projects:
Asp.Net MVC4 Web app - the main website
Console App - task runner
Console App - task runner
I wish to use MassTransit as a queue so that actions on the website (like sending email) do not block it; instead, they are published to the queue and taken care of by the task runners.
My question is: where should I initialize the queue — the web app, one of the task runners, or a separate console app created for that purpose?
PS: The console apps will be Windows services when running on production servers.
As creating the queue is a one-off operation, and you will probably want to tweak the default permissions, it would be best to create the queue in advance using a separate console app. Note that the publisher (the web app) and the consumers (the task runners) each need a queue, and that if they are on different servers, you will need to create the queues on each server.