I currently have an architecture with Filebeat as the log shipper, sending logs to a Logstash indexer instance and then on to managed Elasticsearch in AWS. Because of persistent TCP connections, I cannot load balance multiple Logstash indexer instances behind an AWS ELB: Filebeat always picks one of the instances and keeps sending to it. So I decided to use Redis. Now, seeing how difficult it is to scale Redis and make it a highly available component of the ELK stack, I want to ask what the point of Redis even is. I have read a million times that it acts as a buffer, but if Filebeat stops sending logs when Logstash can't handle the load, why do we even need a buffer? Filebeat is smart enough to stop sending logs, and Logstash is smart enough to stop sending logs to Elasticsearch if Elasticsearch goes down, so the pipeline just stops. I really don't see the point of Redis acting as a buffer in every standard ELK architecture.
Redis, Kafka or XYZ can be used as a buffer in the ELK stack, as you've rightly noticed.
The ES folks published a blog post yesterday about using Kafka in the pipeline, but it could just as well have been Redis or XYZ. They make a good point about WHEN such a buffer is needed and when it is not.
It is a good idea to have such a buffer in order to:
- handle event spikes
- deal with a potentially unreachable ES cluster
If you don't anticipate such behaviors, i.e. you know:
- your events will always come at the same rate, and/or
- you're ok with your logs being shipped a bit later in case you need to upgrade your ES cluster
...then you don't need such a buffer. What's more, that will be one less piece of software you need to manage, monitor and maintain.
When it comes to the Elastic Stack ecosystem, there's no one-size-fits-all approach; it always depends on your precise use case and requirements. You need to ask yourself what is important to you, your system(s) and your users, and then design your solution accordingly.
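To make the buffer idea concrete, here is a minimal sketch of what Redis does in that position, assuming a Redis list named "logstash" (the key name and host below are made up for illustration; Filebeat's Redis output and Logstash's redis input do the equivalent for you):

```python
# A rough sketch, not the real Beats protocol: a shipper pushes events onto a
# Redis list and an indexer pops them at its own pace, so the list absorbs spikes
# and rides out a temporarily unreachable Elasticsearch.
import redis

r = redis.Redis(host="redis.internal", port=6379)  # assumed host

# Shipper side (conceptually what Filebeat's Redis output does):
r.lpush("logstash", '{"message": "app started", "host": "web-01"}')

# Indexer side (conceptually what Logstash's redis input does): block until an
# event is available, then process it and forward it to Elasticsearch.
while True:
    _key, event = r.brpop("logstash")
    print(event)  # stand-in for the Logstash filter -> Elasticsearch output step
```

The list keeps growing while the indexer is slow or down, and drains again once it catches up; without the buffer, backpressure simply pauses the whole pipeline at the source.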
Related
We've recently come across a problem when using RabbitMQ: when the hard drive of our server is full, RabbitMQ's vhosts get corrupted and become unusable.
The only way to make RabbitMQ functional again is to delete and recreate the corrupted vhosts.
In doing so, all of our queues and exchanges, along with the data in them, are gone.
While this situation should not happen in production, we're searching for a way to prevent data loss if such an event does occur.
We've been looking at the official RabbitMQ documentation, as well as on Stack Exchange, but haven't found any solution to prevent data loss when a vhost is corrupted.
We plan on setting up a cluster at a later stage of development, which should at least help reduce the loss of data when a vhost is corrupted, but that's not possible for now.
Is there any reliable way to either prevent vhost corruption, or to fix a corrupted vhost without losing data?
Some thoughts on this (in no particular order):
RabbitMQ has multiple high-availability configurations - relying upon a single node provides no protection against data loss.
In general, you can have one of two possible guarantees for a message, but never both (a sketch of the corresponding ack modes follows this list):
At least once delivery - a message will be delivered at least one time, and possibly more.
At most once delivery - a message may or may not be delivered, but if it is delivered, it will never be delivered a second time.
Monitoring the overall health of your nodes (i.e. disk space, processor use, memory, etc.) should be done proactively by a tool specific to that purpose. You should never be surprised by running out of a critical system resource.
If you are running one node, and that node is out of disk space, and you have a bunch of messages on it, and you're worried about data loss, wondering how RabbitMQ can help you, I would say you have your priorities mixed up.
RabbitMQ is not a database. It is not designed to reliably store messages for an indefinite time period. Please don't count on it as such.
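To illustrate the two guarantees, here is a minimal sketch using pika against a local broker; the queue name "events" and the trivial handler are assumptions, not anything from the question:

```python
# At-least-once: durable queue, persistent messages, and manual acks sent only
# after the work is done. Switching to auto_ack=True turns this into at-most-once.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

channel.queue_declare(queue="events", durable=True)          # survive broker restart
channel.basic_publish(
    exchange="",
    routing_key="events",
    body=b"hello",
    properties=pika.BasicProperties(delivery_mode=2),        # persist message to disk
)

def handle(ch, method, properties, body):
    print(body)                                              # do the real work here
    ch.basic_ack(delivery_tag=method.delivery_tag)           # ack only after success

channel.basic_consume(queue="events", on_message_callback=handle, auto_ack=False)
channel.start_consuming()
```

Even with all of this, a message can still be redelivered (that is the "at least once" part), and none of it protects a single node whose disk has filled up - hence the points above about HA configurations and proactive monitoring.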
I want to use Redis for a particular use case. I am not sure whether to go with a Redis Cluster or with Twemproxy + Sentinel.
I know Cluster is the winner any day. I am just skeptical because of MOVED responses. On a MOVED response the client has to connect to another node, and in case of resharding it may have to connect to yet another one. Twemproxy, on the other hand, knows where the data resides, so it will never get a MOVED response.
There are different problems with Twemproxy, though: the added hop may increase overall turnaround time, adding new nodes is a problem, and if it ejects some nodes it won't be able to serve requests for the keys present on those nodes. There is also the extra maintenance headache of running Sentinels for my Redis instances plus a mechanism for HA of Twemproxy itself.
Can anyone suggest whether I should go with Twemproxy or Cluster? I am thinking of going with Twemproxy since I will not be going to and fro on MOVED responses, but I am skeptical about it, considering the concerns mentioned above.
P.S. I am planning to use the Jedis client for Redis (if that helps).
First of all, I'm not familiar with Twemproxy, so I'll only talk about your concerns on Redis Cluster.
A Redis client can get the complete slot-to-node mapping, i.e. the location of keys, from the Redis Cluster. It can cache the mapping on the client side and send each request to the right node, so most of the time it won't be redirected, i.e. it won't get a MOVED reply.
However, if you add/delete nodes or reshard the data set, the client will receive a MOVED reply, since it is still using the old mapping. In that case the client can update its local cache, and any subsequent requests will be sent to the right node, i.e. no more MOVED replies.
A decent client library applies this optimization to make things more efficient. So if your client library has this optimization, you don't need to worry about the MOVED penalty.
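For instance, here is a minimal sketch with redis-py's cluster client (you mentioned Jedis; JedisCluster does the same kind of slot-map caching); the node address and keys are made up:

```python
# The client bootstraps from one node, fetches the slot->node map, and routes
# each key directly to the owning node; on MOVED it refreshes the map and retries.
from redis.cluster import RedisCluster

rc = RedisCluster(host="10.0.0.1", port=6379)   # any reachable cluster node

rc.set("user:42", "alice")    # hashed to a slot, sent straight to the owning node
print(rc.get("user:42"))      # no proxy hop and no MOVED in the steady state

# After a reshard, the first request for a migrated slot triggers one MOVED
# redirect; the library updates its cached map, so later requests go direct again.
```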
The Redis team introduced the new Streams data type in Redis 5.0. Since Streams look like Kafka topics at first view, it seems difficult to find real-world examples for using them.
In the Streams intro we have a comparison with Kafka streams:
Runtime consumer groups handling. For example, if one of three consumers fails permanently, Redis will continue to serve the first and second because now we would have just two logical partitions (consumers).
Redis Streams are much faster. They are stored in and operated on from memory, so this one holds as is.
We have some projects with Kafka, RabbitMQ and NATS. Now we are taking a deep look at Redis Streams, trying to use them as a "pre-Kafka cache" and in some cases as a Kafka/NATS alternative. The most critical point right now is replication:
Streams store all data in memory, with AOF replication.
By default the asynchronous replication will not guarantee that XADD commands or consumer group state changes are replicated: after a failover something can be missing, depending on the ability of followers to receive the data from the master. This looks like a point that kills any interest in trying Streams under high load.
The Redis failover process, as operated by Sentinel or Redis Cluster, performs only a best-effort check to fail over to the most up-to-date follower, and under certain specific failures may promote a follower that lacks some data.
And the cap strategy: the real "capped resource" with Redis Streams is memory, so it's not really so important how many items you want to store or which capping strategy you use. Each time your consumer fails, you get peak memory consumption, or messages lost to the cap.
We use Kafka as an RTB bidder frontend which handles ~1,100,000 messages per second with ~120-byte payloads. With Redis we would have ~170 MB/sec of memory consumption on writes, so a 512 GB RAM server gives us a write "reserve" of ~50 minutes of data. If the processing system were offline for longer than that, we would crash.
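A quick back-of-the-envelope check of those figures (all numbers are the ones quoted above):

```python
msgs_per_sec = 1_100_000
payload_bytes = 120
print(msgs_per_sec * payload_bytes / 1e6)          # ~132 MB/s of raw payload

observed_mb_per_sec = 170                          # with Redis per-entry overhead
ram_gb = 512
print(ram_gb * 1024 / observed_mb_per_sec / 60)    # ~51 minutes of buffer
```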
Could you please tell us more about Redis Streams usage in the real world, and maybe about some cases where you have tried to use them yourself? Or can Redis Streams only be used with modest amounts of data?
Long time no see. This feels like a discussion that belongs on the redis-db mailing list, but the use case sounds fascinating.
Note that Redis Streams are not intended to be a Kafka replacement - they provide different properties and capabilities despite the similarities. You are of course correct with regard to the asynchronous nature of replication. As for scaling the amount of RAM available, you should consider using a cluster and partitioning your streams across period-based key names.
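A rough sketch of that partitioning idea, under stated assumptions (the key prefix, cap, group and consumer names below are all made up for illustration):

```python
# One stream per hour: old periods can be deleted wholesale, and each stream is
# capped so a stalled consumer cannot grow memory without bound.
import time
import redis

r = redis.Redis(host="localhost", port=6379)

def stream_key(ts=None):
    return "bids:" + time.strftime("%Y-%m-%d-%H", time.gmtime(ts))  # e.g. bids:2018-11-05-13

# Producer: approximate trimming ("~ MAXLEN") keeps XADD cheap.
r.xadd(stream_key(), {"price": "1.2", "imp": "abc"}, maxlen=1_000_000, approximate=True)

# Consumer group: create it once per stream, then read and ack.
try:
    r.xgroup_create(stream_key(), "bidders", id="0", mkstream=True)
except redis.ResponseError:
    pass  # group already exists

entries = r.xreadgroup("bidders", "worker-1", {stream_key(): ">"},
                       count=100, block=5000) or []
for stream, messages in entries:
    for msg_id, fields in messages:
        r.xack(stream_key(), "bidders", msg_id)   # ack once fields are processed
```

This does not change the asynchronous-replication caveat, but it bounds memory per period and lets you drop whole periods instead of trimming one giant stream.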
I have a cluster of backend servers on GCP, and they need to send messages to each other. All the servers need to receive every message, but I can tolerate a low error rate. I can deal with receiving the message more than once on a given server. Packet ordering doesn't matter.
I don't need much of a persistence layer. A message becomes stale within a couple of seconds after sending it.
I wired up Google Cloud Pub/Sub and pretty quickly realized that for a given subscription, you can have any number of subscribers, but only one of them is guaranteed to get each message. I considered making the subscribers all fail to ack it, but that seems like a gross hack that probably won't work well.
My server cluster is sized dynamically by an autoscaler. It spins up VM instances as needed, with dynamic hostnames and IP addresses. There is no convenient way to map the dynamic hosts to static subscriptions, but it feels like that's my only real option: create more subscriptions than my max server pool size, and then use some sort of Paxos-style system (runtime config, ZooKeeper, whatever) to allocate servers to subscriptions.
I'm starting to feel that even though my use case feels really simple ("Every server can multicast a message to every other server in my group"), it may not be a good fit for Cloud PubSub.
Should I be using GCM/FCM? Or some other technology?
Cloud Pub/Sub may or may not be a fit for you, depending on the size of your server cluster. Failing to ack the messages certainly won't work because you can't be sure each instance will get the message; it could just be redelivered to the same instance over and over again.
You could use multiple subscriptions and have each instance create a new subscription when it starts up. This only works if you don't plan to scale beyond 10,000 instances in your cluster, as that is the maximum number of subscriptions per topic allowed. The difficulty here is in cleaning up subscriptions for instances that go down. Ones that cleanly shut down could probably delete their own subscriptions, but there will always be some that don't get cleaned up. You'd need some kind of external process that can determine if the instance for each subscription is still up and running and if not, delete the subscription. You could use GCE shutdown scripts to catch this most of the time, though there will still be edge cases where deletes would have to be done manually.
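A minimal sketch of the per-instance-subscription approach with the Python Pub/Sub client; the project, topic and hostname-derived subscription id are assumptions for illustration:

```python
# On startup, every instance creates its own subscription to the broadcast topic,
# so each instance receives every message. Cleanup on shutdown is best-effort.
import socket
from google.cloud import pubsub_v1

project_id = "my-project"                                # assumed project
topic_id = "cluster-broadcast"                           # assumed topic
subscription_id = "broadcast-" + socket.gethostname()    # unique per VM instance

subscriber = pubsub_v1.SubscriberClient()
topic_path = subscriber.topic_path(project_id, topic_id)
subscription_path = subscriber.subscription_path(project_id, subscription_id)

subscriber.create_subscription(request={"name": subscription_path, "topic": topic_path})

def callback(message):
    print(message.data)   # duplicates are tolerable per the question
    message.ack()

streaming_pull = subscriber.subscribe(subscription_path, callback=callback)

# In a GCE shutdown script (best effort; a reaper still has to catch unclean deaths):
# subscriber.delete_subscription(request={"subscription": subscription_path})
```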
Currently I'm working on a distributed test execution and reporting system. I'm planning to use Redis PUB/SUB as a message queue and message distribution system.
I'm new to Redis, so I'm trying to read as many docs as I can and play around with it. One of the most important topics is high availability. As I said, I'm not an expert, but I'm aware of the possible options - using Sentinel, replication, clustering, etc.
What's not clear to me is how the Pub/Sub feature and the HA options relate to each other. What's the best practice for building a reliable messaging system with Redis? By reliable I mean that if my Redis message broker is down, there should be some kind of backup node (a slave?) able to take over that role.
Is there a purely server-side solution? Or do I need to create a smart wrapper around the Redis client to handle this? Will a Sentinel-driven setup help me?
Doing pub/sub in Redis with failover means thinking about additional factors on the client side. A key piece to understand is that subscriptions are per-connection. If you are subscribed to a channel on a node and it fails, you will need to handle reconnecting and resubscribing. Because subscriptions are done at the connection level, they are not something that can be replicated.
Regarding the details of how it works and what you can expect to see, along with ways around it, see a post I wrote earlier this year at https://objectrocket.com/blog/how-to/reliable-pubsub-and-blocking-commands-during-redis-failovers
You can lower the risk surface by subscribing on slaves and publishing to the master, but then you need non-promotable slaves to subscribe to, and you still have to handle losing a slave - there is just as much chance of losing a given slave as there is of losing the master.
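A minimal sketch of that client-side wrapper, assuming a Sentinel-managed setup (the sentinel addresses, master name "mymaster" and channel name are made up):

```python
# On any connection failure, rediscover the current master via Sentinel, reconnect
# and resubscribe, since the subscription died with the old connection.
import time
import redis
from redis.sentinel import Sentinel

sentinel = Sentinel([("sentinel-1", 26379), ("sentinel-2", 26379)])

while True:
    try:
        master = sentinel.master_for("mymaster")   # current master after any failover
        pubsub = master.pubsub()
        pubsub.subscribe("test-results")           # must be repeated on every new connection
        for message in pubsub.listen():
            if message["type"] == "message":
                print(message["data"])             # stand-in for real processing
    except redis.ConnectionError:
        time.sleep(1)                              # back off, then reconnect and resubscribe
```

Note that anything published while the client is reconnecting is simply lost, since Pub/Sub keeps no backlog.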
IMO, PUB/SUB is not a good choice; maybe Disque (from antirez, the author of Redis) fits better:
Disque, an in-memory, distributed job queue