Prometheus target management - config

We recently started using Prometheus in our production environment. Previously we had only 30-40 nodes per service and those servers did not change very often, so we simply listed them in prometheus.yml. Now that file has become too long to manage and the targets change much more frequently than before. So my question is: should I use file_sd_config to move the server lists out of the main YAML file and maintain those target files separately, or should I use Consul for service discovery (which seems much easier for handling changes)?
I have installed a 3-node Consul cluster in the data center, and as far as I can see, if I switch to Consul to solve this problem I will also need to install the Consul client on each server (node) and define its service info there. Is that correct? Or does anyone have better advice?
Thanks

I strongly advocate using a service discovery system. It may be a bit harder to deploy at first, but it will surely pay off in the future.
That said, Prometheus ships with a lot of service discovery integrations, so it's possible you don't need a Consul cluster at all. If your servers are in a cloud provider like AWS, GCP, Azure, or OpenStack, Prometheus is able to discover the instances automatically.
If you stick with Consul, the answer is yes: the agent must be running on every node. You can also register services and nodes via the API, but it's easier to deploy the agent.
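For the file-based approach, a minimal sketch might look like the following; file names, ports, and labels are placeholders, and the Consul address in the second job is hypothetical. Prometheus re-reads the target files whenever they change, so no restart is needed:

```yaml
# prometheus.yml -- sketch only
scrape_configs:
  - job_name: "node"
    file_sd_configs:
      - files:
          - "targets/node-*.yml"   # re-read on change, no restart required

  # Alternative: discover targets from Consul instead of static files
  - job_name: "consul-services"
    consul_sd_configs:
      - server: "consul.example.internal:8500"   # hypothetical Consul address
        services: []                             # empty list = all registered services
```

```yaml
# targets/node-web.yml -- one of the external target files
- targets:
    - "10.0.1.11:9100"
    - "10.0.1.12:9100"
  labels:
    service: "web"
```

With file_sd_configs you can generate the target files from any source you like (a CMDB, an Ansible inventory, a cron job), which is often a reasonable middle ground before adopting full service discovery.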

Related

Is it possible to make Redis cluster join on a particular path?

I'm looking into altering the architecture of a hosting service intended to scale arbitrarily.
On a given machine, the service works roughly as follows:
Start a container running a Redis Cluster client that joins a global cluster.
Start containers for each of the "Models" to be hosted.
Use the upstream Redis cluster for managing global model state, handling namespacing via the keys themselves.
I'm wondering if it might be possible to change to something like this:
For each Model, start a container running the Model and a Redis Cluster client.
Reverse proxy the Redis service using something like Nginx so it is available on a certain path, e.g., <host_ip>:6379/redis-<model_name>. (Note: I can't just proxy from different ports, because in theory this is supposed to be able to scale past 65,535 models running globally.)
Join the Redis cluster by using said path.
Internalizing the Redis service into the container is an appealing idea to me because it is closer to what the hosting service is supposed to achieve. We do want to share compute; we don't want to share a KV store.
Anyway, I haven't seen anything that suggests this is possible, so sticking with the upstream cluster may be my only option. But in case anyone knows otherwise, I wanted to check.

Kong API Gateway Clustering

I have some questions regarding https://docs.konghq.com/2.0.x/clustering
I'd really appreciate it if someone could help me.
1) According to the Clustering Reference I need a load balancer. Could you please recommend a free one that I can use in front of my Kong nodes?
2) I still don't know whether it is better to run the Kong nodes in separate VMs or in Docker via a docker-compose file for a full production environment.
Best Regards,
I think both your questions depend heavily on your tech stack / architecture.
Regarding the load balancing question, I can think of several options for different setups:
DNS load balancing, which relies on client-side load balancing
Services in a Kubernetes/OpenShift environment, which provide load balancing across a set of pods (see the sketch after this list)
AWS load balancers, if you deploy Kong directly on EC2 machines (I am sure other cloud providers have similar concepts)
Whether you deploy Kong on a VM or as a Docker container is quite hard to answer. It depends on the tech stack you already have in place and on your requirements (see https://docs.konghq.com/2.0.x/sizing-guidelines/). However, I would not recommend docker-compose for this use case. If you decide on a Docker-based solution, you should take a look at container management platforms such as Kubernetes or OpenShift. There the management of your Kong containers is solved for you (how many replicas are running, and what happens if one replica fails), and the load balancing issue is solved as well, by using Kubernetes/OpenShift Service objects.
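For instance, a minimal sketch of such a Service object, assuming Kong proxy pods labeled app: kong and Kong's default proxy ports (names and ports are placeholders for your own deployment):

```yaml
# Hypothetical Service fronting Kong proxy pods
apiVersion: v1
kind: Service
metadata:
  name: kong-proxy
spec:
  type: LoadBalancer       # or NodePort/ClusterIP depending on the environment
  selector:
    app: kong              # must match the labels on the Kong pods
  ports:
    - name: proxy
      port: 80
      targetPort: 8000     # Kong's default HTTP proxy port
    - name: proxy-ssl
      port: 443
      targetPort: 8443     # Kong's default HTTPS proxy port
```

Kubernetes then spreads incoming traffic across all healthy Kong replicas, so no separate load balancer has to be operated by hand.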

Is it a good way to run Kafka on Kubernetes?

We run a large online application on k8s; the scale is maybe 500,000 daily active users.
The application inside k8s needs a messaging (Pub/Sub) feature, and there are these options:
Kafka
RabbitMQ
Redis
Kafka
It needs ZooKeeper, and it is said to run best directly on the OS because it depends on disk I/O. So if we install it into the k8s cluster, how? Will the performance be worse?
And if we keep Kafka outside of the k8s cluster and connect to it from the application inside the cluster, how is the performance then? They are in different layers; won't that be slow?
RabbitMQ
It is slower than Kafka, but is it good enough for an application with 500,000 daily active users? If so, maybe it's a good choice.
Redis
It's another option, maybe the simplest one. But from what I've read online, it sometimes loses messages. If true, that's terrible.
So the most important question is: is running Kafka (along with ZooKeeper) on k8s good or not for this use case?
Yes, running Kafka on Kubernetes is great. Check out this example: https://github.com/Yolean/kubernetes-kafka. It includes ZooKeeper and Kafka as StatefulSets.
PS. Running any of the services in your question on Kubernetes will be pleasant. You can Google the name of the service and "kubernetes" and find example manifests. Many examples here: https://github.com/kubernetes/charts.
For Kafka, you can find some suggestions here. Kubernetes 1.7+ supports local persistent volumes, which may be a good fit for Kafka deployments.
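As a rough illustration of that pattern (a sketch under assumed names and sizes, not the linked example itself), a Kafka StatefulSet with per-broker persistent storage could look like this:

```yaml
# Abbreviated sketch; image, replica count and storage size are placeholders
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka-headless    # headless Service gives each broker a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: kafka
          image: confluentinc/cp-kafka:5.5.0   # example image, pin your own
          ports:
            - containerPort: 9092
          volumeMounts:
            - name: data
              mountPath: /var/lib/kafka
  volumeClaimTemplates:          # one PersistentVolumeClaim per broker
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 100Gi
```

The volumeClaimTemplates are what preserve each broker's log segments across pod rescheduling; with local persistent volumes they can additionally pin a broker to fast local disk.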
You can also take a look at the following project:
https://github.com/EnMasseProject/barnabas
It is about running Kafka on Kubernetes and OpenShift as well. It supports deployment as StatefulSets with persistent volumes, or purely in memory (for development or testing purposes), and it provides Kafka Connect deployment and Prometheus metrics as well.
Another simple configuration of Kafka/ZooKeeper on Kubernetes, on DigitalOcean, with external access:
https://github.com/StanislavKo/k8s_digitalocean_kafka
You can connect to Kafka from outside of AWS/DO/GCE using the regular binary protocol. The connection is PLAINTEXT or SASL_PLAINTEXT (user/password).
The Kafka cluster is a StatefulSet, so you can scale the cluster easily. A sketch of the listener settings that external access typically involves follows.
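As a hedged sketch of that wiring (environment variables in the style of the Confluent Kafka image; every hostname, port, and address below is a placeholder, not taken from the linked repository):

```yaml
# Listener settings distinguishing in-cluster and external clients
env:
  - name: KAFKA_LISTENERS
    value: "INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9094"
  - name: KAFKA_ADVERTISED_LISTENERS
    # internal clients get the stable StatefulSet DNS name,
    # external clients get a publicly reachable address
    value: "INTERNAL://kafka-0.kafka-headless:9092,EXTERNAL://203.0.113.10:32094"
  - name: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP
    value: "INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT"   # SASL_PLAINTEXT needs extra JAAS setup
  - name: KAFKA_INTER_BROKER_LISTENER_NAME
    value: "INTERNAL"
```

The advertised external address must be whatever clients outside the cluster can actually reach, e.g. a NodePort or load balancer address.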

Distributed Rabbitmq within a spring-cloud environment

I am trying to set up a distributed system based on the current spring-cloud release (meaning mostly Netflix OSS) using the following components:
1 or more cloud config servers
1 or more Eureka servers
1 or more services using Eureka and Config Server clients
The setup above is easy enough to get going; however, once you start looking into having configuration changes in the cloud Config Servers automatically trigger changes in the values of the actual clients, things get more complicated.
It is my understanding that for such a feature to work, one should add spring-cloud-bus clients to the services, which in turn use RabbitMQ servers (the actual RabbitMQ binaries, not some Spring Boot app like the Eureka or Config Servers; RabbitMQ is currently the only supported implementation) to allow change events in the Config Server to be propagated to the clients automatically.
It sounds counterintuitive to set up such a system and have to hardcode the addresses of the RabbitMQ servers in the clients (even if one keeps the number of RabbitMQ servers more or less static).
How is one supposed to register RabbitMQ server instances in the Eureka service discovery server(s) so that clients can find them without any knowledge of their location prior to startup?
I cannot seem to find any documentation on how this is done, given that RabbitMQ is not a spring-cloud component. In fact, very little documentation seems to exist on how RabbitMQ + Eureka + spring-cloud-bus should be set up together.
I know that I am answering a VERY old question, but I think it is worth an answer for people who read this in the future.
Most cloud services, let's take AWS as an example, have an Elastic IP solution: you can assign fixed IPs to the RabbitMQ servers, and those IPs always belong to RabbitMQ no matter how the instances change, because an Elastic IP can be re-attached to a different instance.
It works nearly the same with an Elastic Load Balancer, which keeps its address, so you can point your microservices at a stable endpoint via the Spring Cloud Config Server and scale the RabbitMQ instances without worrying about configuration changes.
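A minimal sketch of the client side, assuming such a stable endpoint (the host, credentials, and URIs below are placeholders, and the actuator endpoint name varies across Spring Cloud versions):

```yaml
# application.yml -- sketch only
spring:
  rabbitmq:
    host: rabbitmq.internal.example.com   # ELB / Elastic IP DNS name, not an instance address
    port: 5672
    username: bus
    password: ${RABBITMQ_PASSWORD}
  cloud:
    config:
      uri: http://config-server:8888      # Spring Cloud Config Server
management:
  endpoints:
    web:
      exposure:
        include: busrefresh               # POST /actuator/busrefresh broadcasts the refresh
```

Because only the stable DNS name is configured, the RabbitMQ instances behind it can be replaced or scaled without touching client configuration.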

Elasticache with Redis - Client sdks

I have a web farm in Amazon, and one of my sites needs some caching.
I am considering using ElastiCache for Redis.
Can anyone shed some light on how I would connect to and interact with this cache?
I have read about several client SDKs like StackExchange.Redis, ServiceStack, etc.
.NET is my preferred platform.
Can these client SDKs be used to interact with Redis on ElastiCache?
Does anyone know of documentation and/or code examples using ElastiCache Redis (with the StackExchange.Redis SDK)?
I'm guessing I will have to authenticate using a key/secret pair; is this supported in any of these client SDKs?
Thanks in advance!
Lars
You connect to ElastiCache the same way you connect to any other Redis instance. Once you create a new ElastiCache instance, you'll be given the hostname to connect to. There is no need for a key/secret pair: all access to the Redis instance is configured through security groups, just like with other AWS resources in EC2, RDS, etc.
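For example, a minimal StackExchange.Redis sketch; the endpoint hostname below is a made-up placeholder for the one shown in the ElastiCache console:

```csharp
// Sketch only: no key/secret is passed, access is governed by security groups.
using StackExchange.Redis;

class CacheExample
{
    static void Main()
    {
        var muxer = ConnectionMultiplexer.Connect(
            "my-cache.abc123.0001.use1.cache.amazonaws.com:6379");  // placeholder endpoint
        IDatabase db = muxer.GetDatabase();

        db.StringSet("greeting", "hello from ElastiCache");
        string value = db.StringGet("greeting");
        System.Console.WriteLine(value);
    }
}
```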
With that said, there are two important caveats:
You will only be able to connect to ElastiCache from within the region and/or VPC in which it is launched, even if you open up the security group to outside IPs (for me, this is one of the biggest reasons not to use ElastiCache).
You cannot set a password on your Redis instance. Anyone on a box that is given access to the instance in the security groups (keeping in mind the limitation from caveat 1) will be able to access your Redis instance with full rights to add/delete/modify whatever keys they like. This is the other big reason not to use ElastiCache, though it certainly still has use cases where these drawbacks are less important.