I've got an Express service running in a minikube cluster and I'm trying to set up a Redis client, but when I try to run the service with the Redis client created, it stalls on deployment and times out. As soon as I add the line:
const client = redis.createClient('http://127.0.0.1:6379');
my service will not deploy and run (even the default call with no supplied address causes the same issue).
I'm quite new to Kubernetes in general, so I'm not sure whether this is an issue with minikube itself, e.g. whether creating a client from inside the cluster with that address simply isn't possible, or something along those lines.
I'm completely lost as to why just creating a client causes this, so any advice or direction would be greatly appreciated.
Try using "service-name.namespace-name.svc.cluster.local" instead of an IP address to connect to the service.
For example, if the service name is car-redis-service and the namespace is default, the call looks like:
redis.createClient(`redis://car-redis-service.default.svc.cluster.local:${REDISPORT}`)
Or
redis.createClient(REDISPORT, 'car-redis-service.default.svc.cluster.local')
Here REDISPORT is the port on which Redis is configured.
For more information on Redis in Kubernetes, refer to this article.
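For a fuller picture, here is a minimal connection sketch. It assumes node-redis v4+ (where the positional (port, host) signature is gone and you pass a url instead) and reuses the car-redis-service/default names from the example above; adjust the port if Redis is not on 6379.

// Minimal sketch, assuming node-redis v4+ and the service/namespace names above.
const { createClient } = require('redis');

const client = createClient({
  url: 'redis://car-redis-service.default.svc.cluster.local:6379',
});

client.on('error', (err) => console.error('Redis client error', err));

async function main() {
  // v4 clients do not connect until you ask them to
  await client.connect();
  await client.set('greeting', 'hello');
  console.log(await client.get('greeting'));
  await client.quit();
}

main();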
I want to store data with GeoMesa in a data store (e.g. Redis) and visualize/publish this data with GeoServer.
I developed an interface (and the classes that implement it) in Java to store data in a Redis server. Then I installed the "GeoServer with Redis" plugin.
Thus, when I add a new vector data source, GeoServer offers me the option "Redis (GeoMesa)". I get an error when I submit the parameters of this new data source in GeoServer. I tried it both before and after storing data in Redis, and the result is the same.
Redis was installed from the official Docker image.
Parameters used to create the data store:
redis.url='localhost:6379'
redis.catalog='geomesa'
redis.connection.pool.size='16'
geomesa.query.threads='8'
geomesa.query.timeout=''
redis.pipeline.enabled=FALSE
redis.connection.pool.validate=TRUE
geomesa.stats.enable=TRUE
geomesa.query.audit=TRUE
geomesa.query.loose-bounding-box=FALSE
geomesa.query.caching=FALSE
geomesa.security.auths=''
geomesa.security.auths.force-empty=TRUE
GeoServer prints this output:
Error creating data store, check the parameters. Error message: Could not get a resource from the pool
Unfortunately, I don't have access to the stack trace.
Are you sure that your Redis instance is accessible on localhost:6379? Are you running Redis 5+ (GeoMesa was developed against Redis 5)?
You could try running through the Redis GeoMesa quickstart, which would eliminate any potential issues with GeoServer and should also show you a stack trace.
I had a similar issue. However, both my GeoServer and Redis environments are dockerized and running in a VirtualBox Ubuntu virtual machine.
First, I forwarded my Redis port to my host IP. Then I pointed the GeoServer redis.url at my host IP address instead of 127.0.0.1 or localhost.
This solved my problem.
If using your host IP address also works for anyone, kindly let us know.
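To make that concrete, here is a hedged docker-compose sketch of the idea; the image tag and port mapping are illustrative, not taken from the answer above:

# Publish the Redis container's port on the host, so a dockerized
# GeoServer can reach it via the host's IP instead of its own localhost.
services:
  redis:
    image: redis:5
    ports:
      - "6379:6379"   # host:container; GeoServer then uses redis.url='<host-ip>:6379'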
Our jobs service test suite expects a Redis database to connect to in order to run its test cases. We're running into an issue where sometimes the jobs service fails to connect to Redis and sometimes it doesn't.
We've followed the Codeship guide to the letter, and are finding that our service is intermittently unable to connect to Redis. I've tried switching Redis versions, but this does not seem to have solved the issue.
Sounds like it would be appropriate to implement a Docker healthcheck on your service.
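For illustration, a compose-style sketch of such a healthcheck; service names, the image tag, and timings are assumptions, and Codeship's services file (which is Compose-like) may support a slightly different subset:

# Mark Redis healthy only once it answers PING, and make the jobs
# service wait for that before its tests start.
services:
  redis:
    image: redis:6
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
  jobs:
    build: .
    depends_on:
      redis:
        condition: service_healthy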
We have recently started using Prometheus in our production environment. Previously we only had 30-40 nodes per service and those servers didn't change very often, so we just listed them in prometheus.yml, but now the list has become too long to keep in one file and changes much more frequently than before. My question is: should I use file_sd_config to move those server lists out of the yml file and change those config files separately, or use Consul for service discovery (which seems much easier for handling changes)?
I have installed a 3-node Consul cluster in the data center, and as far as I can see, if I switch to Consul to solve this problem, I also need to install the Consul client on each server (node) and define its service info. Is that correct? Or does anyone have better advice?
Thanks
I totally advocate the use of a service discovery system. It may be a bit hard to deploy at first, but it will surely be worth it in the future.
That said, Prometheus comes with a lot of service discovery integrations, so it's possible that you don't need a Consul cluster. If your servers are in a cloud provider like AWS, GCP, Azure, OpenStack, etc., Prometheus is able to autodiscover the instances.
If you stick with Consul, the answer is yes: the agent must be running on every node. You can also register services and nodes via the API, but it's easier to deploy the agent.
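For illustration, a hedged prometheus.yml sketch showing both options from the question; the job names, file path, and Consul address are placeholders:

scrape_configs:
  # Option 1: keep target lists in files that Prometheus re-reads on change
  - job_name: 'node'
    file_sd_configs:
      - files:
          - '/etc/prometheus/targets/*.json'

  # Option 2: discover services registered in Consul
  - job_name: 'consul-services'
    consul_sd_configs:
      - server: 'consul.example.internal:8500'

With file_sd_configs, updating a target file takes effect without restarting Prometheus, which already removes most of the pain of a long static list.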
Objective
I want to access the Redis database in Kubernetes from a function inside IBM Cloud Functions, using JavaScript.
Question
How do I get the right URI when Redis is running in a Pod in Kubernetes?
Situation
I used this sample to set up the Redis database in Kubernetes (this is the link to the sample in the Kubernetes documentation).
I run Kubernetes inside IBM Cloud.
Findings
I was not able to find an answer to my question in the Redis documentation.
As far as I understand, no password is configured by default.
Is this assumption right?
redis://[USER]:[PASSWORD]@[CLUSTER-PUBLIC-IP]:[PORT]
Thanks for the help ... I know this is maybe too simple a question, but currently I can't see the forest for the trees ;-)
As far as I understand, no password is configured by default.
Yes, you are right: there is no default password in that Redis image.
If you follow the instructions you mentioned, you will use kubectl port forwarding, which forwards the port of your in-cluster Redis to your local machine by calling kubectl port-forward redis-master 6379:6379.
In that case, Redis will be available at redis://localhost:6379 on your PC.
If you want to make it available directly from outside the cluster, you need to create a Service of type NodePort, a Service of type LoadBalancer (if you are in a cloud), or expose it through an Ingress.
Inside the cluster, you can create a Service with a ClusterIP (which is simply the default Service type) for your Redis pod, and it will be available at:
redis://[USER]:[PASSWORD]@[SERVICE-IP]:[PORT]
Here is a good piece of official documentation about connecting applications with Services.
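As a sketch, a plain ClusterIP Service for the redis-master Pod might look like the following; the selector labels are assumptions based on the guestbook-style sample, so verify them against your manifests:

apiVersion: v1
kind: Service
metadata:
  name: redis-master
spec:
  selector:
    app: redis
    role: master
  ports:
    - port: 6379
      targetPort: 6379

Inside the cluster, the URI would then be redis://redis-master.default.svc.cluster.local:6379 (no user or password for that image). Note that an IBM Cloud Functions action runs outside your cluster, so for that case you would need the NodePort/LoadBalancer route described above.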
Glassfish 3.1.2
Ubuntu 12.04
I've created a cluster of two nodes and have a JMS queue.
I'm having issues trying to connect to this JMS queue using a remote standalone client.
The cluster JMS listener is on port 27676 and the queue is deployed to the cluster. The broker address list is:
mq://Glassfish2:27676/,mq://Glassfish3:27676
When I connect using the code I'd use to connect to a standalone instance, the message is not received by the cluster.
I believe it is using the default port 7676. When the IIOP port is changed to 23700, which is the one the cluster (DAS) is using, I get a connection-refused exception, as it is trying to connect to localhost:27676. At least it's the right port now.
WARNING: [C4003]: Error occurred on connection creation [localhost:27676]. - cause: java.net.ConnectException: Connection refused: connect
I've also updated the following values in the node config file (domain.xml) to remove references to localhost: the jms-host and node-host values.
I had this issue before with a standalone instance, and it was resolved by adding entries to the /etc/hosts file. However, that does not seem to resolve the issue here.
I also have all server instance IPs in the hosts file.
Am I missing something very basic here?
Any help would be greatly appreciated.
Thanks
If you look at the log files under the
${glassfish_home}/glassfish/nodes/cluster-name/instance-name/imq/instances/instance-name/log
folder, you will see that the
master brokers do not match.
Each of your nodes has a different master broker; probably every node knows its own broker as the master broker.
I had the same error and found this after a few days.
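For reference, this is the kind of domain.xml change the question already touched on; a sketch only, assuming the default embedded JMS host entry of Glassfish 3.x, with a resolvable hostname in place of localhost (verify the element names against your own domain.xml):

<!-- Point the instance's JMS host at a real hostname instead of
     localhost so the embedded MQ brokers can agree on a master broker. -->
<jms-service type="EMBEDDED" default-jms-host="default_JMS_host">
  <jms-host name="default_JMS_host" host="Glassfish2" port="27676"/>
</jms-service>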