I'm looking into altering the architecture of a hosting service intended to scale arbitrarily.
On a given machine, the service works roughly as follows:
Start a container running a Redis cluster client that joins a global cluster.
Start containers for each of the "Models" to be hosted.
Use the upstream Redis cluster to manage global model state, handling namespacing via the keys themselves (a rough sketch follows this list).
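To illustrate the key-based namespacing in the current setup, here is a minimal sketch using redis-py's cluster client; the node address, model name, and key layout are hypothetical, not the service's actual schema.

```python
# Minimal sketch of key-based namespacing against the shared upstream
# Redis Cluster (redis-py >= 4.1). The node address, model name, and
# key layout are hypothetical examples.
from redis.cluster import RedisCluster

rc = RedisCluster(host="10.0.0.1", port=6379, decode_responses=True)

def model_key(model_name: str, field: str) -> str:
    # All of a model's state lives under a "model:<name>:" prefix, so
    # many models can share one cluster without key collisions.
    return f"model:{model_name}:{field}"

rc.set(model_key("resnet50", "status"), "serving")
print(rc.get(model_key("resnet50", "status")))
```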
I'm wondering if it might be possible to change to something like this:
For each Model, start a container running the Model and a Redis cluster client (see the sketch after this list).
Reverse proxy the Redis service using something like Nginx so that it is available on a certain path, e.g., <host_ip>:6379/redis-<model_name>. (Note: I can't just proxy from different ports, because in theory this is supposed to be able to scale past 65,535 models running globally.)
Join the Redis cluster by using said path.
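Concretely, the first step of the proposed layout might look something like this with the Docker SDK for Python; the image name and model name are placeholders, and the image is assumed to bundle the model server with a redis-server started in cluster mode.

```python
# Rough sketch of launching one per-model container that bundles the
# model server with its own Redis cluster node. The image and model
# names are hypothetical; how the embedded Redis gets exposed to the
# global cluster is exactly the open question above.
import docker

client = docker.from_env()
model_name = "resnet50"  # hypothetical

container = client.containers.run(
    f"hosting/{model_name}-with-redis",  # hypothetical bundled image
    name=f"model-{model_name}",
    detach=True,
)
print(container.short_id)
```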
Internalizing the Redis service to the container is an appealing idea to me because it is closer to what the hosting service is supposed to achieve. We do want to share compute; we don't want to share a KV store.
Anyway, I haven't seen anything that suggests this is possible, so sticking with the upstream approach may be my only option. But in case anyone knows otherwise, I wanted to check and see.
Related
We recently started using Prometheus in our production environment. Previously we only had 30-40 nodes for each service and those servers did not change very often, so we just listed them in prometheus.yml. Now the file has become too long to keep in one place and it changes much more frequently than before. My question is: should I use file_sd_config to move those server lists out of the yml file and change those config files separately, or use Consul for service discovery (which seems much easier for handling changes)?
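For reference, file_sd_config just points Prometheus at JSON (or YAML) target files on disk, so the server lists could be regenerated by a small script outside prometheus.yml. A rough sketch, with a hypothetical output path, job names, and hosts:

```python
# Rough sketch of generating a file_sd_config target file outside
# prometheus.yml. The output path, job labels, and host lists are
# hypothetical; prometheus.yml would point file_sd_configs at this file.
import json

targets_by_job = {
    "node-exporter": ["10.0.1.10:9100", "10.0.1.11:9100"],  # example hosts
    "my-api": ["10.0.2.20:8080"],
}

file_sd = [
    {"targets": hosts, "labels": {"job": job}}
    for job, hosts in targets_by_job.items()
]

with open("/etc/prometheus/file_sd/targets.json", "w") as f:
    json.dump(file_sd, f, indent=2)
```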
I have installed a 3-node Consul cluster in the data center, and as far as I can see, if I switch to Consul to solve this problem I also need to install the Consul client on each server (node) and define its service info. Is that correct? Or does anyone have better advice?
Thanks
I totally advocate the use of a service discovery system. It may be a bit hard to deploy at first, but it will surely be worth it in the future.
That said, Prometheus comes with a lot of service discovery integrations, so it's possible that you don't need a Consul cluster. If your servers are in a cloud provider like AWS, GCP, Azure, OpenStack, etc., Prometheus is able to autodiscover the instances.
If you keep running with Consul, the answer is yes: the agent must be running on every node. You can also register services and nodes via the API (rough sketch below), but it's easier to deploy the agent.
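As a rough sketch of that API route, the local Consul agent exposes an HTTP endpoint for registering a service; the service name, port, and health check below are hypothetical examples.

```python
# Rough sketch of registering a service with the local Consul agent via
# its HTTP API (PUT /v1/agent/service/register). Service name, port,
# and health check are hypothetical.
import requests

service = {
    "Name": "node-exporter",
    "Port": 9100,
    "Check": {
        "HTTP": "http://127.0.0.1:9100/metrics",
        "Interval": "30s",
    },
}

resp = requests.put(
    "http://127.0.0.1:8500/v1/agent/service/register",
    json=service,
    timeout=5,
)
resp.raise_for_status()
```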
I am setting up a cluster of servers using Vagrant and playing with Redis Sentinel and HAProxy for the PostgreSQL DB connection (with pgpool). I was curious whether it makes sense to put HAProxy and Redis Sentinel on each of my web server nodes and have the nodes connect directly to those local instances. The thought is that this creates a distributed connection to the DB and Redis, and avoids the single point of failure of having one HAProxy that everything connects to before being split across the different DB nodes. I can also keep the database connection (via HAProxy) and Redis (via Sentinel) encapsulated to localhost. Does this make sense?
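To picture it, the application side of that per-node layout might look roughly like this; the Sentinel master name, ports, and database credentials are hypothetical.

```python
# Rough sketch of a web node that only talks to localhost: Redis via
# the local Sentinel, PostgreSQL via the local HAProxy. Master name,
# ports, and credentials are hypothetical.
import psycopg2
from redis.sentinel import Sentinel

# Local sentinel instance running on this web node.
sentinel = Sentinel([("127.0.0.1", 26379)], socket_timeout=0.5)
redis_master = sentinel.master_for("mymaster", socket_timeout=0.5)
redis_master.set("healthcheck", "ok")

# Local HAProxy forwarding to the PostgreSQL/pgpool backends.
conn = psycopg2.connect(
    host="127.0.0.1", port=5432, dbname="app", user="app", password="secret"
)
```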
It only makes sense if you're trying to save on resources/costs.
Please note that Redis Sentinel clients must be given a finite list of sentinel instances, which doesn't fit the scenario of placing one per machine, as your machine count would probably scale/change.
Otherwise, it almost always makes the most sense to put different infrastructure components (especially those with a clustering/HA nature, such as Redis) on different machines.
By mixing them all together, you usually end up with applications getting in each other's way and stealing CPU from one another once the load increases. You also risk designing your applications/scripts/flows to be location aware (i.e., assuming external resources are always local), which is not really a good practice.
We run a container environment (Kubernetes) and we have a set of redis sentinels that watch over a bunch of redis instances.
Since it's a containerized environment, configuration is mostly dynamic. A sentinel container might die, another one replaces it, etc.
This poses a problem for application configuration. Normally, in a static setup, you provide the client with all the addresses of the sentinels and it works with those. With a frozen container configuration, if the environment changes, the configuration becomes outdated.
To solve this, we can use a load balancer in front of the Redis sentinels. This way, even if the underlying containers/IPs change, the application configuration stays valid.
I'm aware that sentinels never forget other sentinels (and the same goes for slaves), but we can flush those when changes do happen.
We do use this today, and haven't felt any side-effects AFAIK, but of course I'd like to know if there's a risk of something going wrong because of this.
So the question is: can I use a load balancer in front of redis sentinels without any major issues?
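For reference, this is roughly what the setup looks like from the client and operations side; the load balancer hostname and master name below are placeholders.

```python
# Rough sketch: clients are configured with only the load balancer's
# stable address instead of individual sentinel pods; after topology
# churn, stale entries can be flushed with SENTINEL RESET. The
# hostname and master name are hypothetical.
import redis
from redis.sentinel import Sentinel

# The LB address is the only thing baked into application config.
sentinel = Sentinel([("sentinel-lb.default.svc", 26379)], socket_timeout=0.5)
master = sentinel.master_for("mymaster", socket_timeout=0.5)
master.ping()

# Operational step after containers are replaced: tell a sentinel
# (reached through the LB here; in practice repeat per sentinel) to
# forget the stale sentinels/slaves it remembers for this master.
s = redis.Redis(host="sentinel-lb.default.svc", port=26379)
s.execute_command("SENTINEL", "RESET", "mymaster")
```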
I am trying to set up a distributed system based on the current spring-cloud release (meaning mostly Netflix OSS) using the following components:
1 or more cloud config servers
1 or more Eureka servers
1 or more services using Eureka and Config Server clients
The setup above is easy enough to get going; however, once you start trying to make configuration changes in the cloud Config servers automatically trigger changes in the values of the actual clients, things start getting more complicated.
It is my understanding that for such a feature to work, one should introduce spring-cloud-bus clients to the services, which in turn use RabbitMQ servers (currently the only supported implementation, and the actual RabbitMQ binaries rather than some spring-boot app like the Eureka or Config servers) to allow change events in the Config server to be propagated to the clients automatically.
It sounds counterintuitive to set up such a system and have to hardcode the addresses of the RabbitMQ servers in the clients (even if one keeps the number of RabbitMQ servers more or less static).
How is one supposed to register RabbitMQ server instances in the Eureka service discovery server(s) so that clients can find them without having any knowledge of their location prior to startup?
I cannot seem to find any documentation on how this is done, given that RabbitMQ is not a spring-cloud component. In fact, very little documentation seems to exist on how RabbitMQ + Eureka + spring-cloud-bus should be set up together.
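As far as I can tell, registering RabbitMQ out-of-band via Eureka's REST API would look roughly like the sketch below (all hostnames, IPs, and ports are placeholders, and a real setup would also need to send periodic heartbeats so Eureka doesn't evict the instance), but I haven't found this documented for this combination.

```python
# Rough sketch of registering a non-JVM service (RabbitMQ) with Eureka
# via its REST API. The Eureka URL, hostnames, and IPs are hypothetical;
# the payload follows Eureka's documented instance format.
import requests

instance = {
    "instance": {
        "hostName": "rabbit-1.internal",
        "app": "RABBITMQ",
        "ipAddr": "10.0.3.15",
        "vipAddress": "rabbitmq",
        "status": "UP",
        "port": {"$": 5672, "@enabled": "true"},
        "dataCenterInfo": {
            "@class": "com.netflix.appinfo.InstanceInfo$DefaultDataCenterInfo",
            "name": "MyOwn",
        },
    }
}

resp = requests.post(
    "http://eureka-host:8761/eureka/apps/RABBITMQ",
    json=instance,
    timeout=5,
)
resp.raise_for_status()
```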
I know that I am answering a VERY old question, but I think it is worth a comment for people who read this in the future.
Most cloud services, taking AWS as an example, have an Elastic IP solution, so you can configure IPs for the RabbitMQ servers and those IPs always belong to RabbitMQ, no matter whether the instances change. You can re-attach an Elastic IP to a different instance.
It works nearly the same with an Elastic Load Balancer, which keeps a stable address, so you could configure your microservices against a specific address using Spring Cloud Config Server and scale the RabbitMQ instances without needing to worry about configuration changes.
I have a Redis Cluster that clients connect to via HAProxy with a virtual IP. The Redis Cluster has three nodes (each node sharing a server with a running sentinel instance).
My question is: when a client gets a "MOVED" error/message from a cluster node upon sending a request, does it bypass HAProxy the second time it connects, since it has been provided with an IP:port in the MOVED message? If not, how does HAProxy know the second time to send it to the correct node?
I just need to understand how this works under the hood.
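My current understanding of the mechanics, sketched with redis-py and hypothetical addresses, is roughly this:

```python
# Rough sketch of how a client reacts to a MOVED reply: it reconnects
# directly to the host:port named in the error, which is the owning
# node's own address, not the HAProxy VIP. Addresses are hypothetical.
import redis

def get_following_moved(vip_host: str, key: str):
    node = redis.Redis(host=vip_host, port=6379)  # first hop via the VIP
    try:
        return node.get(key)
    except redis.exceptions.ResponseError as err:
        # A redirect reply carries "<slot> <host>:<port>" (some client
        # versions keep the leading "MOVED" token in the message).
        parts = str(err).replace("MOVED", "").split()
        if len(parts) != 2 or ":" not in parts[1]:
            raise  # not a MOVED redirect; re-raise as-is
        host, port = parts[1].rsplit(":", 1)
        # The retry goes straight to the owning node, bypassing HAProxy.
        owner = redis.Redis(host=host, port=int(port))
        return owner.get(key)

print(get_following_moved("10.0.0.100", "somekey"))
```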
If you want to use HAProxy in front of Redis Cluster nodes, you will need to either:
Set up an HAProxy for each master/slave pair, wire up something to update HAProxy when a failure happens, and probably also intercept the topology-related commands so you can insert the virtual IPs in place of the IPs that the nodes themselves have and report via the topology commands/responses.
Customize HAProxy to teach it how to be the cluster-aware Redis client, so the actual client doesn't know about the cluster at all. This means teaching it the Redis protocol, storing the cluster's topology information, and selecting the node to query based on the key(s) being accessed by the consumer code.
With Redis Cluster the client must be able to access every node in the cluster. Of the two options above, Option 2 is the "easier" one, but at this point I wouldn't recommend either.
Conceivably you could use the VIP as a "first place to get the topology info" address, but I suspect you'd have serious issues develop, as that original IP would not be one of the ones properly reported as a node handling data. To avoid that problem you could simply use round-robin DNS, or pass the built-in "here is a list of cluster IPs (or names?)" to the initial connection configuration.
Your simplest, and least likely to be problematic, route is to go "full native" and simply give full and direct access to every node in the cluster to your clients and not use HAProxy at all.
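For completeness, the "full native" route usually just means handing the client a few seed nodes and letting it track the topology and follow MOVED redirects itself; a minimal redis-py sketch with hypothetical node addresses:

```python
# Minimal sketch of the "full native" approach with redis-py's cluster
# client (redis-py >= 4.1): give it a few seed nodes and it discovers
# the rest of the topology and handles MOVED redirects on its own, with
# no HAProxy in the path. Node addresses are hypothetical.
from redis.cluster import RedisCluster, ClusterNode

startup_nodes = [
    ClusterNode("10.0.0.11", 6379),
    ClusterNode("10.0.0.12", 6379),
    ClusterNode("10.0.0.13", 6379),
]

rc = RedisCluster(startup_nodes=startup_nodes, decode_responses=True)
rc.set("somekey", "value")
print(rc.get("somekey"))
```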