Closed 7 years ago as opinion-based; not accepting answers.
We have to implement a queuing system for our Ruby on Rails application. We have evaluated the following options:
Amazon SQS: highly available, but relatively slow; requires constant polling.
CloudAMQP: looks promising, but we are doubtful about the support.
RabbitMQ on EC2: needs our own bandwidth to manage the setup, and may result in downtime if an issue arises.
Right now there won't be a dedicated team or person to manage the setup full-time, so running our own RabbitMQ on EC2 may mean downtime if something goes wrong.
Given this situation, which is our best option?
I use SQS and I am happy with it. I don't worry about the support aspect, because I also don't have time to set up my own server and support it myself when I can pay AWS pennies to do it for me.
If you don't want to poll constantly, consider pairing your SQS queue with an SNS topic, which can push notifications to your application instead. I don't know the nature of your application, but it's something to look into: http://aws.amazon.com/sns/
Also keep in mind that the slow performance of SQS (relative to RabbitMQ) is not an apples-to-apples comparison: SQS is redundant and distributed, while a single RabbitMQ instance on a single box is not. Can your application deal with the queue being unavailable for a period of time?
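The polling concern is generic, whatever queue you pick. A minimal sketch of polling with exponential backoff, in plain Python with a stubbed receive function standing in for a real ReceiveMessage-style call (the stub and its shutdown convention are assumptions for illustration, not any broker's API):

```python
import time

def poll_queue(receive, handle, max_wait=20.0):
    """Poll `receive` repeatedly; back off while the queue is empty.
    `receive` returns a list of messages (possibly empty); `handle`
    processes one message. A return of None signals shutdown (stub-only)."""
    wait = 0.1
    while True:
        batch = receive()
        if batch is None:       # stub's shutdown signal
            return
        if batch:
            for msg in batch:
                handle(msg)
            wait = 0.1          # reset backoff after useful work
        else:
            time.sleep(wait)    # idle: wait longer each time
            wait = min(wait * 2, max_wait)

# Stubbed queue: two empty polls, then a batch, then shutdown.
batches = [[], [], ["a", "b"], None]
received = []
poll_queue(lambda: batches.pop(0), received.append, max_wait=0.2)
print(received)  # ['a', 'b']
```

With SQS specifically, long polling (a nonzero wait time on the receive call) achieves much of this server-side, so the client-side backoff mostly matters for very quiet queues.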
At CloudAMQP all our servers are redundant; each cluster has at least two instances in different availability zones. For support, we offer 24/7 email support, as we have staff in different time zones, and phone support on our largest plans.
Closed 1 year ago as seeking tool/software recommendations; not accepting answers.
Currently I'm using Azure Logic Apps to sync changes to different third parties.
But it gets too expensive when there is a massive volume of requests/messages.
The key features I need:
An MQ connector that can be used as a trigger.
An HTTP processor, used to issue HTTP requests.
Parsing of JSON responses.
The possibility to check execution history.
I've done some research on Apache NiFi.
My feeling is that it's not very user-friendly and quite old-school.
One close open-source option that I know of is n8n.
But you could also explore the fixed pricing model (Integration Service Environment) that Logic Apps offers, which is charged by the hour instead of by incoming volume. Depending on how the volume fluctuates, you can scale up more units as required.
Also, a completely new way (currently in preview) to develop and run Logic Apps workflows was announced, which introduces a new pricing model (the same as App Service or the Functions premium plan).
It also introduces a Docker-based deployment, which lets you run your Logic Apps anywhere.
Apache NiFi can be used for your requirements.
Apache NiFi is an easy to use, powerful, and reliable system to process and distribute data.
It has:
The ConsumeMQTT processor, which subscribes to a topic and receives messages from an MQTT broker.
The InvokeHTTP processor, which can interact with a configurable HTTP endpoint.
Numerous JSON processors.
The Data Provenance feature, which tracks a dataflow from beginning to end.
Closed 4 years ago as opinion-based; not accepting answers.
I'm researching with a friend the idea of using an event queue/streaming system, such as Kafka or RabbitMQ, to store adverts in a queue instead of a traditional database.
The system would need to provide a stream of events holding various fields, and be filterable and searchable. It should also allow the stream to hold events indefinitely or for a certain amount of time (for example, to let adverts expire). We are just not sure whether to go for a message queue/event stream, or whether a traditional database is the way to go.
Does anybody have experience with this? Would you recommend investigating one system over the other?
Kafka would support the use case, as you can treat it not as a messaging queue but as a transaction log:
you can re-read the same message multiple times with different consumers
messages are persisted until they expire (configurable on the server) or until they are compacted
(data propagation) there are tools such as MirrorMaker (or your own streaming applications) to replicate data between data centres (or some part of the data, e.g. if you decide to put some attributes in one topic and others in another)
I don't know whether a generic messaging solution (like RabbitMQ) would suit you, as consumed messages disappear, so you would need to re-publish them to keep them (and with multiple consumers, you'd need RabbitMQ features like fanout exchanges, so that messages are multicast to multiple queues, one per consumer).
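The distinction can be sketched in a few lines of plain Python. This is an illustration of the concept only, not of either broker's API: a log retains messages and tracks a per-consumer offset, while a classic queue hands each message out once.

```python
class Log:
    """Kafka-style log: entries are retained; each consumer keeps its own offset."""
    def __init__(self):
        self.entries = []
        self.offsets = {}          # consumer name -> next position to read
    def publish(self, msg):
        self.entries.append(msg)
    def consume(self, consumer):
        pos = self.offsets.get(consumer, 0)
        batch = self.entries[pos:]
        self.offsets[consumer] = len(self.entries)
        return batch

class OnceQueue:
    """Classic queue: a delivered message is gone."""
    def __init__(self):
        self.entries = []
    def publish(self, msg):
        self.entries.append(msg)
    def consume(self):
        return self.entries.pop(0) if self.entries else None

log = Log()
for m in ("ad-1", "ad-2"):
    log.publish(m)
a = log.consume("analytics")   # ['ad-1', 'ad-2']
b = log.consume("billing")     # ['ad-1', 'ad-2'] -- same data, independent offset

q = OnceQueue()
q.publish("ad-1")
first = q.consume()            # 'ad-1'
second = q.consume()           # None -- already consumed
```

For the advert use case, the per-consumer offset is what lets one service render adverts while another independently indexes or bills against the same stream.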
Closed 6 years ago as needing more focus; not accepting answers.
I am a newbie who has just started reading about distributed systems.
I am wondering what some use cases are for a distributed queue, as opposed to queues on each machine.
For example, how is RabbitMQ used among different web servers? How is it used in, say, load balancing?
We typically use distributed queues when the up-front cost of processing some task is too expensive or otherwise unnecessary. For example, when you upload a video to YouTube, typically some processing of the video occurs before it's displayed on the site. In the modern web, it can be unacceptable for users to have to wait while that processing occurs. So the video can be stored and a task put on a queue so that processing can take place later. Then, other machines polling the queue can process the video at their leisure, and the user doesn't have to wait for their video to be processed before doing other things on the site.

Critically, this also provides a buffer for periods of high throughput: if users upload videos faster than YouTube's servers can process them, the queue grows independently of the back end's ability to process items.
Another consideration is that the distributed nature of the queue allows for fault tolerance. In the YouTube example, that allows the website to respond to the user, assuring the user that their video will eventually be processed. Typically distributed queues have configurable replication levels, where once an item is put on the queue it's guaranteed to be replicated on n nodes and therefore is unlikely to be lost.
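The decoupling described above can be sketched with Python's standard library. This is a single-process stand-in under stated assumptions: `queue.Queue` plays the distributed broker, a worker thread plays the processing fleet, and the "transcoding" is just string labelling.

```python
import queue
import threading

uploads = queue.Queue()   # stands in for the distributed queue
processed = []

def worker():
    """Drains the queue at its own pace, like a back-end processing machine."""
    while True:
        video = uploads.get()
        if video is None:          # sentinel: shut down
            break
        processed.append(video + "-transcoded")
        uploads.task_done()

t = threading.Thread(target=worker)
t.start()

# The "web tier" enqueues uploads and returns to the user immediately;
# any backlog simply accumulates in the queue.
for i in range(5):
    uploads.put("video%d" % i)

uploads.put(None)                  # tell the worker to stop
t.join()
print(processed)
```

In a real deployment the producer and the worker run on different machines, which is exactly why the queue itself must be replicated, as the answer notes.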
Closed 8 years ago as opinion-based; not accepting answers.
I have been working with ActiveMQ for quite some time and am familiar with the ActiveMQ architecture.
Recently I have been hearing a lot about Kafka as a messaging system.
What advantages does it have over ActiveMQ and other messaging systems? Is it just another big-data buzzword?
Also, is Kafka suitable for a zero-loss messaging system?
This is too broad to discuss, but in my opinion the most important factor for Kafka over ActiveMQ is throughput. From the wiki page:
Kafka provides an extremely high throughput distributed publish/subscribe messaging system. Additionally, it supports relatively long term persistence of messages to support a wide variety of consumers, partitioning of the message stream across servers and consumers, and functionality for loading data into Apache Hadoop for offline, batch processing.
Also, is Kafka suitable for a zero-loss messaging system?
In very brief terms, Kafka guarantees the following:
1) Messages sent by a producer to a particular topic partition will be appended in the order they are sent.
2) For a topic with replication factor N, Kafka will tolerate up to N-1 server failures without losing any messages committed to the log.
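Both guarantees are easy to state in code. The sketch below is plain Python and purely illustrative (the partition dict and the round-robin assignment are assumptions for the example, not Kafka's partitioner): ordering holds within a partition, not across partitions, and fault tolerance is a simple function of the replication factor.

```python
def max_tolerated_failures(replication_factor):
    """Guarantee 2: with replication factor N, up to N-1 brokers can
    fail without losing messages committed to the log."""
    return replication_factor - 1

# Guarantee 1: appends land in send order *within* a partition.
# Messages spread across partitions have no global order.
partitions = {0: [], 1: []}
for i, msg in enumerate(["m1", "m2", "m3", "m4"]):
    partitions[i % 2].append(msg)   # toy round-robin assignment

print(partitions[0])                # ['m1', 'm3'] -- send order preserved
print(partitions[1])                # ['m2', 'm4']
print(max_tolerated_failures(3))    # 2
```

Note that "zero loss" in practice also depends on producer acknowledgement settings and the minimum in-sync replica count, not the replication factor alone.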
Closed 2 years ago as opinion-based; not accepting answers.
I am trying to figure out the best MQ option for my requirements. I need the ability to transfer both text and binary messages within and across geographically diverse data centers with high reliability. Speed is nice, but the ability to scale matters as well, and support is a nice-to-have, as with RabbitMQ.
Here are some assumptions:
Use federation or the shovel to push identical messages across data centers.
Use AMQP to transfer binary messages, since we are a .NET/Python shop.
I want to make sure my assumptions are valid, and I need help choosing an MQ. I have used ActiveMQ+MySQL in the past, but I like the option of Mnesia for message persistence. Also, is it all right to use AMQP 0-9-1 instead of 1.0? It looks like RabbitMQ supports 1.0 via a plugin.
I'd appreciate any alternative suggestions.
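The shovel assumption above boils down to "consume here, republish there". A toy sketch of that idea in plain Python, with in-memory deques standing in for brokers in two data centers (RabbitMQ's actual Shovel plugin does this reliably over AMQP with acknowledgements; none of that is modelled here):

```python
from collections import deque

dc_east = deque(["order-1", "order-2"])  # source broker (data center A)
dc_west = deque()                        # destination broker (data center B)

def shovel(src, dst):
    """Move every message from src to dst, preserving order."""
    moved = 0
    while src:
        msg = src.popleft()   # consume from the source queue
        dst.append(msg)       # republish on the destination
        moved += 1
    return moved

count = shovel(dc_east, dc_west)
print(list(dc_west))  # ['order-1', 'order-2']
print(count)          # 2
```

Federation differs in that the downstream broker pulls from the upstream on demand rather than having messages pushed unconditionally; either way the source and destination stay loosely coupled, which is what you want across a WAN link.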