I configured Redis and Redis-Sentinel in order to enable automatic failover.
Redis has one master and 2 slaves.
There are 3 sentinel nodes.
I configured Redis with authentication, so in the redis.conf file of the master I added this:
requirepass mypassword
and in the redis.conf of the slaves, this:
masterauth mypassword
When I stop the master, one of the slaves becomes the master, as expected.
However, when I connect to the new master using redis-cli, I notice that no password is required.
I suspect this is because the redis.conf file of the new master still contains 'masterauth mypassword' but not 'requirepass mypassword'.
Is this the expected behavior? Shouldn't Redis-Sentinel configure this? Or should I set something else in the conf files so that the new master requires authentication?
You should also configure the slave nodes with the password, i.e. requirepass mypassword.
A slave node syncs data from the master node, but it does not sync the password, so you need to configure it manually.
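A minimal sketch of what every node's redis.conf could contain, assuming the password is mypassword, so that whichever node gets promoted still requires authentication (Sentinel rewrites the replication settings on failover, but not the passwords):

# on every node, master and slaves, so any of them can safely become master
requirepass mypassword
# so the node can authenticate against whichever node is currently master
masterauth mypassword

Note that the sentinels themselves also need the password to monitor an authenticated master; in sentinel.conf that would be: sentinel auth-pass mymaster mypassword (where mymaster is the name used in your sentinel monitor line).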
I have 5 Redis servers:
2 of them run Redis in both master and slave roles (it looks like redis.conf was not set up manually but by some automated process, because it has the following line at the bottom: Generated by CONFIG REWRITE)
From time to time I see the master and slave switch roles automatically, with no human intervention
3 of them run Redis Sentinel
Question 1: I need to replicate this setup on 5 different systems, but I don't know how that "Generated by CONFIG REWRITE" portion is set up. Where and how is this automation configured?
Question 2: Why does /etc/redis/ have a 6329.conf file? I thought the Redis setup is redis.conf...
Thanks
The config rewrites are all caused by Redis Sentinel. The 3 sentinels you have monitor the master, and if enough sentinels think the master is down, they will force a failover by promoting an existing slave to the new master, then reconfigure all other hosts to be slaves of the new master. You can read more about Redis Sentinel, including how to set it up for common scenarios, in the docs (see the examples section).
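For reference, a minimal sentinel.conf sketch for this kind of setup; the name mymaster, the address 192.0.2.10 and the quorum of 2 are placeholders, assuming three sentinels watching one master:

# monitor the master; a failover needs at least 2 sentinels to agree it is down
sentinel monitor mymaster 192.0.2.10 6379 2
# how long the master must be unreachable before it is considered down (ms)
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000

Each time Sentinel promotes a slave or re-points a replica, it persists the change with CONFIG REWRITE, which is where that comment at the bottom of your redis.conf comes from.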
For the 6329.conf file: you can name the config files however you want, but however you start your Redis server, it has to reference the non-default file name. Here's the usage line from the --help output of redis-server:
Usage: ./redis-server [/path/to/redis.conf] [options]
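So, assuming your instances were started against that file (the name 6329 most likely just reflects the port the instance listens on), the startup would look something like:

redis-server /etc/redis/6329.conf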
We are running self-managed EC2 instances in an EKS cluster. While upgrading the control plane is managed by AWS, the worker nodes are self-managed. For security updates and patches, we use the latest optimized AMI and roll out new instances. Some of this is manual effort; what is the best automated approach that you follow to update/patch those self-managed EC2 worker nodes?
Steps that we follow:
Look for the latest optimized AMI version released by AWS
Update the launch configuration with the new AMI
Scale up nodes with the new AMI
Seamlessly transfer pods from old to new nodes (cordon and drain, as sketched after this list)
Scale down and delete the old nodes.
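Steps 4 and 5 can be scripted with kubectl; a rough sketch, assuming you can identify the old nodes (the node name below is a placeholder, and flag names vary slightly with kubectl version):

# stop scheduling new pods onto the old node
kubectl cordon ip-10-0-1-23.ec2.internal
# evict its pods onto the new nodes; DaemonSet pods and emptyDir data need flags
kubectl drain ip-10-0-1-23.ec2.internal --ignore-daemonsets --delete-emptydir-data
# once drained, the old node can be removed from the Auto Scaling group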
The issue is that even after we update to the new optimized AMI, scans of those instances still list some pending security updates.
Some of this is manual effort; what is the best automated approach
I suggest having a look at Managed Node Groups, since that is an automated approach to the lifecycle management you are doing:
Nodes run using the latest Amazon EKS optimized AMIs in your AWS account while node updates and terminations gracefully drain nodes to ensure that your applications stay available.
All managed nodes are provisioned as part of an Amazon EC2 Auto Scaling group that is managed for you by Amazon EKS. All resources including the instances and Auto Scaling groups run within your AWS account. Each node group uses the Amazon EKS optimized Amazon Linux 2 AMI and can run across multiple Availability Zones that you define.
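For example, with eksctl (the cluster and nodegroup names here are placeholders), creating a managed node group and later rolling it onto the newest optimized AMI looks roughly like this:

# create a managed node group; it uses the latest EKS optimized AMI
eksctl create nodegroup --cluster my-cluster --name my-managed-ng --managed
# later, move the group to the newest AMI release; nodes are drained gracefully
eksctl upgrade nodegroup --cluster my-cluster --name my-managed-ng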
Hi, I am looking for an in-memory data grid or something similar.
My use case:
In-memory data grid, with scale-out available
Backup nodes available
Persistent backup available
(optional) a free or open-source solution
I did some googling and found the candidates below:
- Apache Ignite
- Redis Cluster
- Hazelcast (community edition)
I prefer Ignite to Hazelcast because Ignite supports using direct buffers.
But I don't know whether Redis Cluster partitioning is stable or not, and I don't know whether Apache Ignite performs better than Redis Cluster.
Is Apache Ignite comparable to Redis Cluster, or is that an improper comparison?
Thanks for your answers
But I don't know whether Redis Cluster partitioning is stable or not
The Redis Cluster feature has been stable since version 3.x and is used in production by many companies.
Is Apache Ignite comparable to Redis Cluster, or is that an improper comparison?
Comparing Apache Ignite with Redis alone is misleading, because the two projects are of a different grade. Redis is positioned as a storage engine, not as a data grid like Apache Ignite. For a proper comparison, Apache Ignite should be compared with Redisson, a Redis Java client with in-memory data grid features. It offers much the same feature set as Apache Ignite.
Redisson supports fully managed Redis services like AWS ElastiCache and Azure Redis Cache. You don't need to manage/deploy/maintain a Redis cluster yourself or hire devops to do it. Apache Ignite doesn't offer such a feature, so you have to manage/deploy/maintain it yourself.
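For illustration, a minimal Redisson sketch of the data-grid style API (the address and map name are placeholders); the equivalent distributed map in Ignite would look quite similar:

import org.redisson.Redisson;
import org.redisson.api.RMap;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonExample {
    public static void main(String[] args) {
        // point the client at a single Redis node (cluster mode is configured similarly)
        Config config = new Config();
        config.useSingleServer().setAddress("redis://127.0.0.1:6379");
        RedissonClient redisson = Redisson.create(config);

        // a distributed map backed by Redis, usable like a java.util.Map
        RMap<String, String> map = redisson.getMap("myMap");
        map.put("key", "value");
        System.out.println(map.get("key"));

        redisson.shutdown();
    }
}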
I used Redis in production at one of the largest US mobile network operators (IoT department). It has been stable since 2.8 (master/slave), but the cluster has been stable since 3.2. We ran 2.8 for 3 years and a 3.2 cluster for 2 years in production at about 50k TPS, with no restarts for years and no issues (except BGSAVE and memory issues, but those were due to RAM limitations).
If we compare Redis and Apache Ignite:
Performance: Redis is faster; it is single-threaded and 100% in memory.
Data structure: Redis is a key-value store (even that is not much of a limitation; you can map almost anything onto a key-value model). Ignite is a data grid, as mentioned above.
If you are looking for an in-memory data grid and performance is a second priority, then Ignite will be more appropriate for you.
Redis only provides key-value storage, while Ignite is much more feature-rich. Here is a good feature comparison provided by GridGain: https://www.gridgain.com/resources/product-comparisons/redis-comparison
Which one to use depends on your requirements and expectations.
Is it "safe" to mass-kill every PID on my websites-serving dedi (debian squeeze) matching this (I know this is Apache's) ?
www-data /usr/sbin/apache2 -k start
I'll spare you the details, but these PIDs are the last non-removed remnants of an intrusion. There are still many more of them running than needed, and several indicators I see in Munin are still way off the chart: my server has far too many "ESTABLISHED" connections, local ip_conntrack entries, TCP opening connections and reset connections per second as seen in netstat, and "established: connections through firewall".
I'd be tempted to mass-close them all, but I don't know if
- that can "break" something important that won't restart by itself, or if
- that will only mean that browsers on computers elsewhere across the internet will suddenly stop receiving data and will require hitting F5
Thanks if you can tell me! :)
You can kill them if you want. If they are busy processing connections from one or more HTTP clients, those connections will be broken. If they are idle waiting for new connections then they will die gracefully, but in that case Apache's main process will probably restart them because it wants to keep a certain number of spare servers around (configuration parameter MinSpareServers).
If you have more spare servers than you need, then a better idea would be to tune down the Apache configuration parameter MaxSpareServers. If you do that and reload Apache, Apache will kill the excess processes all by itself.
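For reference, the relevant prefork settings live in apache2.conf on Debian; a sketch with example numbers only, to be tuned to your load, followed by a graceful reload:

<IfModule mpm_prefork_module>
    MinSpareServers    5
    MaxSpareServers   10
    MaxClients       150
</IfModule>

apache2ctl graceful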
Just don't kill the Apache main process. That's the one which is the parent process of all the other ones and whose own parent process ID is 1. If you kill that one then Apache will shut down.
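If in doubt, check which process is the parent before killing anything; a sketch, assuming the usual Debian layout where the main process runs as root and the workers as www-data:

# list Apache processes with owner and parent PID; the main one is root's, PPID 1
ps -o pid,ppid,user,cmd -C apache2
# kill only the www-data workers, sparing the root-owned main process
pkill -u www-data -f 'apache2 -k start'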
I am looking for case studies of companies that use cloud message queuing.
What are the benefits of such a service over RabbitMQ (if any)?
I know there are several mature services like Amazon SQS, OnlineMQ and Linxter.
The advantage of a cloud messaging service is that managing the MQ system is not your headache; it is pushed onto the cloud service provider. Amazon SQS integrates very well with other AWS products like EC2 and S3, so it is the natural choice if you are already on AWS infrastructure. But SQS performance is much lower than RabbitMQ's in terms of latency.
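For a sense of how little operational code SQS requires, a minimal sketch with the AWS SDK for Java (the queue name is a placeholder; credentials and region are assumed to be configured in the environment):

import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.Message;

public class SqsExample {
    public static void main(String[] args) {
        // the SDK picks up credentials and region from the environment
        AmazonSQS sqs = AmazonSQSClientBuilder.defaultClient();

        // create (or look up) a queue; there is no broker to install or patch
        String queueUrl = sqs.createQueue("my-test-queue").getQueueUrl();

        sqs.sendMessage(queueUrl, "hello from SQS");
        for (Message m : sqs.receiveMessage(queueUrl).getMessages()) {
            System.out.println(m.getBody());
            // messages must be deleted explicitly once processed
            sqs.deleteMessage(queueUrl, m.getReceiptHandle());
        }
    }
}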
So if you need a message queue with customized behavior, RabbitMQ (or any other self-hosted MQ system) can be what you are looking for.