I have a project using NestJS 9.2.0 with Bull Queue 4.9.0, which connects to Redis to schedule jobs.
Locally this works as expected; however, when deploying to staging, Bull won't connect and fails with a maxRetriesPerRequest error.
My assumption is that on staging Bull is unable to connect over TLS, but I have not been able to confirm that this is definitely the issue.
I have tried the following without any success:
setting the Redis URL as rediss://.....
passing the parameter ?tls=true at the end of the Redis URL
passing an empty tls: {} object as part of the Redis configuration, which I then use in the BullModule that I import with a useFactory in the app.module file (sketched below)
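For reference, a simplified sketch of the kind of factory configuration I mean (the ConfigService keys and variable names here are placeholders, not my exact code):

// app.module.ts (simplified sketch; config keys are placeholders)
import { Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { BullModule } from '@nestjs/bull';

@Module({
  imports: [
    ConfigModule.forRoot(),
    BullModule.forRootAsync({
      imports: [ConfigModule],
      inject: [ConfigService],
      useFactory: (config: ConfigService) => ({
        redis: {
          host: config.get('REDIS_HOST'),
          port: config.get<number>('REDIS_PORT'),
          password: config.get('REDIS_PASSWORD'),
          // one of the variants I tried: an empty object to turn on TLS
          tls: {},
        },
      }),
    }),
  ],
})
export class AppModule {}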
The reasons why I suspect it is an issue related to the TLS connection:
the local connection works just fine
when I set an invalid URL locally, mocking a scenario where Bull cannot connect to Redis, I see the same behaviour in the console as on staging and production
I have verified that the Redis instance on staging and production is up and running and that the URL I am using in the app is correct
Related
I am using the npm bull package to add queue jobs that handle sending mail for my project. It ran with no problems for a long time, but recently it started showing this error:
Error while handling task collect-metrics: Reached the max retries per request limit (which is 10). Refer to "maxRetriesPerRequest" option for details.
And I checked in redis-cli with keys *; it didn't show any keys.
The bull module supports @bull-monitor/express to monitor the jobs, but since the error appeared, I can't access the monitor.
(screenshot: bull admin panel)
Here is my code:
I faced this problem as well when I deployed my application to production. It turns out that Bull.js doesn't automatically allow a Redis connection over TLS, even when the production environment is already running over TLS. What fixed it for me was setting tls to true and enableTLSForSentinelMode to false in the Redis options of my queue. Here's some sample code:
const Queue = require('bull');

const myQueue = new Queue('my_queue', YOUR_REDIS_URL, {
  redis: { tls: true, enableTLSForSentinelMode: false },
  // ...other queue options go here
});
Bull can't find Redis to connect with.
I was using bull in a local environment with no problem, but in the cloud bull shows me the same error.
In a local environment it connects to 127.0.0.1:6379, but in the cloud there is no local Redis at that address, so you need to specify the Redis username, password, host and port explicitly, as in the sketch below.
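A minimal sketch of what that can look like with Bull, assuming a managed Redis that requires authentication (the environment variable names are just placeholders):

// Minimal sketch; the environment variable names are placeholders
const Queue = require('bull');

const mailQueue = new Queue('mail', {
  redis: {
    host: process.env.REDIS_HOST,          // the managed Redis hostname, not 127.0.0.1
    port: Number(process.env.REDIS_PORT),  // the port exposed by the cloud Redis
    password: process.env.REDIS_PASSWORD,  // credentials required by the provider
    username: process.env.REDIS_USERNAME,  // only needed if the provider uses Redis ACL users
  },
});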
I've got an Express service running in a minikube cluster and I'm trying to set up a Redis client, but when I try to run the service with the Redis client created, it basically stalls on deployment and times out. As soon as I add the line:
const client = redis.createClient('http://127.0.0.1:6379');
My service will not deploy and run (even running the default with no supplied address causes the same issue).
I'm quite new to Kubernetes in general, so I'm not sure if this is potentially an issue with minikube, like trying to create a client from inside the cluster with that address not being possible, or something along those lines.
I'm completely lost as to why just trying to create a client is causing this issue, so any advice or direction would be greatly appreciated.
Try using "service-name.namespace-name.svc.cluster.local" instead of IP address to connect to service.
For example: If my service name is car-redis-service and namespace is default then the command goes like
redis.createClient('redis://car-redis-service.default.svc.cluster.local:' + REDISPORT)
Or
redis.createClient(REDISPORT, 'car-redis-service.default.svc.cluster.local')
(source)
Here REDISPORT is the port on which Redis is configured.
For more information on redis in kubernetes refer to this article.
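Putting it together, a minimal sketch assuming the node_redis v3-style API, a Service named car-redis-service in the default namespace, and Redis listening on port 6379:

const redis = require('redis');

const REDISPORT = 6379; // the port your Redis Service exposes

// Resolve the Redis Service through the cluster DNS name instead of an IP
const client = redis.createClient(
  REDISPORT,
  'car-redis-service.default.svc.cluster.local'
);

client.on('error', (err) => console.error('Redis connection error:', err));
client.on('ready', () => console.log('Connected to Redis'));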
Currently I am trying to use Apache Airflow with the Celery executor. For this I have a Redis service from IBM Cloud. This service uses a TLS connection type, which means its Redis protocol is rediss://. Side note: I am using puckel's Airflow Dockerfile.
I have set the Redis parameters, and my broker URL is in the form rediss://username:password@hostname:port/virtual_host. When I try to run, for example, Flower, I get these errors:
Secure redis scheme specified (rediss) with no ssl options, defaulting to insecure SSL behaviour.
Steps that I have taken so far:
I have added the following lines to the entrypoint.sh:
: "${AIRFLOW__CELERY__SSL_ACTIVE:="True"}"
: "${AIRFLOW__CELERY__BROKER_USE_SSL:="{'ssl_cert_reqs': ssl.CERT_REQUIRED, 'ssl_ca_certs': '/6be25d73-0600-11ea-9bce-eaebe975ceba.crt'}"}"
AIRFLOW__CELERY__BROKER_USE_SSL="${AIRFLOW__CELERY__BROKER_USE_SSL}"
AIRFLOW__CELERY__SSL_ACTIVE="${AIRFLOW__CELERY__SSL_ACTIVE}"
export AIRFLOW__CELERY__SSL_ACTIVE
export AIRFLOW__CELERY__BROKER_USE_SSL
I have also tried using redis:// with the same broker URL, but then Flower did not even start.
Yes, Celery does support rediss://, but you need a fairly recent version of Kombu and redis-py. We simply have a broker URL that looks like rediss://:BABADEDAuikxWx0oPZYfPE3IXJ9BVlSC@broker.example.com:6379/12?ssl_cert_reqs=CERT_OPTIONAL and it "just works".
We are experiencing problems with the notifications sent by Orion for new data after the last update to version 2.2.0.
Orion is run inside a Docker container.
Specifically, the problem is this:
when we start the Docker container, every endpoint is contacted when new data arrives. But then, after a while (less than 1 day), some endpoints (currently, the one hosted on Amazon Web Services) stop being contacted. The error obtained is: 'notification failure for sender-thread: Timeout was reached'
As additional information,
if we try to send data manually (through a curl request performed in a bash instance inside the Docker container) it works fine, while Orion cannot contact the endpoint and fails with a "Timeout" error.
Furthermore, if we restart the container (with the consequent deletion of contextBroker.pid from the dedicated folder in /var/lib/docker/overlay2/), it starts to push data again.
Linked issue on github
Our jobs service test suite expects a Redis database to connect to in order to run its test cases. We're running into an issue where sometimes this jobs service fails to connect to Redis and sometimes it doesn't.
We've followed the Codeship guide to the letter, and are finding that sometimes our service is unable to connect to Redis and sometimes it connects fine. I've tried switching Redis versions, and this does not seem to have solved the issue.
Sounds like it would be appropriate to implement a Docker healthcheck on your service.
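For example, if the Redis dependency is defined in a compose-style services file, a healthcheck roughly like the following (image, service names and timings are placeholders, and your CI's services format may differ) makes the jobs service wait until Redis actually answers before the tests start:

# Sketch of a compose-style healthcheck; adapt names to your setup
services:
  redis:
    image: redis:6
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]   # healthy only once Redis answers PONG
      interval: 5s
      timeout: 3s
      retries: 10
  jobs-service:
    build: .
    depends_on:
      redis:
        condition: service_healthy          # start only after Redis is healthy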