Kue - Error: Auth error: ERR max number of clients reached - redis

I am getting a Redis connection error when deployed on AppFog. I don't know if Kue is generating too many connections.
Here is the stack trace:
/mnt/var/vcap.local/dea/apps/kue-0-543205581cf47e8551d042ab06df92e1/app/node_modules/redis/index.js:516
        throw callback_err;
        ^
Error: Auth error: ERR max number of clients reached
    at Command.RedisClient.do_auth.self.send_anyway [as callback]
    at RedisClient.return_error (/mnt/var/vcap.local/dea/apps/kue-0-543205581cf47e8551d042ab06df92e1/app/node_modules/redis/index.js:512:25)
    at ReplyParser.RedisClient.init_parser (/mnt/var/vcap.local/dea/apps/kue-0-543205581cf47e8551d042ab06df92e1/app/node_modules/redis/index.js:263:14)
    at ReplyParser.EventEmitter.emit (events.js:96:17)
    at ReplyParser.send_error (/mnt/var/vcap.local/dea/apps/kue-0-543205581cf47e8551d042ab06df92e1/app/node_modules/redis/lib/parser/javascript.js:296:10)
    at ReplyParser.execute (/mnt/var/vcap.local/dea/apps/kue-0-543205581cf47e8551d042ab06df92e1/app/node_modules/redis/lib/parser/javascript.js:181:22)
    at RedisClient.on_data (/mnt/var/vcap.local/dea/apps/kue-0-543205581cf47e8551d042ab06df92e1/app/node_modules/redis/index.js:488:27)
    at Socket.<anonymous> (/mnt/var/vcap.local/dea/apps/kue-0-543205581cf47e8551d042ab06df92e1/app/node_modules/redis/index.js:82:14)
    at Socket.EventEmitter.emit (events.js:96:17)
    at TCP.onread (net.js:396:14)
The Redis server log shows it made 5 client connections. Is there a way to reduce the number of connections?
Thanks
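For what it's worth, Kue uses several Redis clients internally (one for commands plus pub/sub clients), so a handful of connections is expected and cannot drop to one. What you can control is how each of those clients is created, since Kue lets you override its client factory. A minimal sketch, where host, port, and password are placeholders for the credentials of your bound AppFog Redis service:
var kue = require('kue');
var redis = require('redis');

// Make every client Kue creates (queue commands, pub/sub) connect
// with the AppFog credentials. host, port and password below are
// placeholders for the values from your bound Redis service.
kue.redis.createClient = function() {
  var client = redis.createClient(port, host);
  client.auth(password);
  return client;
};

var jobs = kue.createQueue();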

Related

Balancing export to jaeger in openTelemetry collector

I have the configuration as the documentation describes:
exporters:
  jaeger:
    endpoint: "ipv4:firstHost:14250,secondHost:14250"
    balancer_name: "round_robin"
The collector produces an error:
info exporterhelper/queued_retry.go:276 Exporting failed. Will retry the request after interval. {"component_kind": "exporter", "component_type": "jaeger", "component_name": "jaeger", "error": "failed to push trace data via Jaeger exporter: rpc error: code = Unavailable desc = last connection error: connection error: desc = \"transport: Error while dialing dial tcp: address ipv4:firstHost:14250,secondHost:14250: too many colons in address\"", "interval": "30.456378855s"}
How can I configure the collector to balance the exporter so requests are sent to different backends?
This comma-separated endpoint format doesn't work with the Go gRPC client, which treats the whole string as a single host:port (hence "too many colons in address"). I ended up using the OpenTelemetry load balancing approach. Another option is to use Kubernetes to balance requests across the backends.
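If both collectors are reachable behind one DNS name, the gRPC client can also do the balancing itself: a dns:/// target makes it resolve every address behind the name, and round_robin spreads RPCs across them. A sketch, where jaeger-collector.example.com is a placeholder name that resolves to both hosts:
exporters:
  jaeger:
    # dns:/// tells the gRPC client to resolve all addresses behind
    # the name; round_robin then rotates RPCs across those backends.
    endpoint: "dns:///jaeger-collector.example.com:14250"
    balancer_name: "round_robin"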

Proxy Error: Error reading from remote server

I am fetching a report through my Apache server; a report with 41,000 rows of data gives me the following error. I checked the logs and there is no exception there:
Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /reporting/cards/TransactionHistory_cp_trd_trid1.rpt.
Reason: Error reading from remote server
But when I fetched about 20,000 rows, it worked perfectly.
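The difference between 20,000 and 41,000 rows suggests the backend simply takes too long to produce the larger report, so mod_proxy gives up reading from the upstream. A sketch of raising the proxy timeout, assuming a typical ProxyPass setup; the backend URL and the 300-second value are placeholders:
# Give the backend more time to generate the large report before
# mod_proxy stops reading from it (backend URL is a placeholder).
ProxyPass /reporting/ http://backend.internal:8080/reporting/ timeout=300
ProxyPassReverse /reporting/ http://backend.internal:8080/reporting/

# Or raise the default timeout for all proxied requests:
ProxyTimeout 300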

Client timeout in Gemfire logs

I am currently troubleshooting GemFire logs for a client timeout.
Logs:
[warning 2018/06/11 23:45:05.685 CDT xxxxsrv01_instance_42404_cacheserver <ClientHealthMonitor Thread> tid=0x97] Server connection from [identity(xx.xx.xx.xx(5338:loner):33228:4cae4ef2,connection=2; port=33261] is terminated because its client timeout of 10,000 has expired.
Even after increasing the "read-timeout" value in the client pool config to 200000, the log shows "client timeout of 200000 has expired".
I also noticed the following log:
xxxxx_myinstance_42404_cacheserver <Handshaker /0:0:0:0:0:0:0:0:42404 Thread 229> tid=0x2c29cc] Bridge server: failed accepting client connection {0}
Can you please help me with possible causes of this issue?
Regards,
Rahul N.
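One thing worth checking: the number in the message tracked your pool's read-timeout, which suggests the server adopts the timeout the client advertises and the ClientHealthMonitor terminates the connection when no ping arrives within it; raising read-timeout alone then only raises the number in the log. A minimal client cache.xml sketch, with placeholder locator details and illustrative values, showing the ping-interval pool attribute that controls how often the client pings the server:
<!-- Client pool sketch; locator host/port and values are placeholders.
     ping-interval should stay well below the timeout advertised to the
     server, or the ClientHealthMonitor may drop idle connections. -->
<client-cache>
  <pool name="serverPool" read-timeout="200000" ping-interval="5000">
    <locator host="locator.example.com" port="10334"/>
  </pool>
</client-cache>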

tls: oversized record received with length XXXXX

I use the built-in standard SSL socket client library (net + crypto/tls) like this:
conn, err := net.Dial("tcp", "exploit.im:5222")
//...
config := tls.Config{InsecureSkipVerify: true}
tls_conn := tls.Client(conn, &config)
fmt.Println(tls_conn.Handshake())
And I am getting this message:
tls: oversized record received with length XXXXX
I managed to find out it is somehow related to the default maximum packet size (16384 + 2048, set in common.go:31). Is there any standard workaround (without patching this value and rebuilding the lib)?
You get this kind of message if you try to do a TLS handshake with a peer that does not reply with TLS. In this case it is probably an XMPP server, and with XMPP you first have a clear-text handshake before you switch to TLS (STARTTLS). Starting directly with TLS means the server's clear-text response gets interpreted as a TLS record, which can produce strange error messages like this one.
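To make that concrete, here is a simplified sketch of the clear-text STARTTLS negotiation (RFC 6120) that has to happen before the handshake; the XML exchange is reduced to string matching, so a real client should properly parse the server's stream features before sending <starttls/>:
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"strings"
)

func main() {
	conn, err := net.Dial("tcp", "exploit.im:5222")
	if err != nil {
		panic(err)
	}

	// 1. Open the XMPP stream in clear text.
	fmt.Fprint(conn, "<?xml version='1.0'?><stream:stream to='exploit.im'"+
		" xmlns='jabber:client'"+
		" xmlns:stream='http://etherx.jabber.org/streams' version='1.0'>")

	// 2. Ask the server to upgrade the connection to TLS.
	// (A strict client first waits for <stream:features> advertising starttls.)
	fmt.Fprint(conn, "<starttls xmlns='urn:ietf:params:xml:ns:xmpp-tls'/>")

	// 3. Read clear-text replies until the server answers <proceed/>.
	buf := make([]byte, 4096)
	var got string
	for !strings.Contains(got, "<proceed") {
		n, err := conn.Read(buf)
		if err != nil {
			panic(err)
		}
		got += string(buf[:n])
	}

	// 4. Only now start the TLS handshake on the same connection.
	tlsConn := tls.Client(conn, &tls.Config{InsecureSkipVerify: true})
	fmt.Println(tlsConn.Handshake())
}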

Tomcat Connection Pool Exhausted

I'm using Apache Tomcat JDBC connection pooling in my project. I'm confused because under heavy load I keep seeing the following error:
12:26:36,410 ERROR [] (http-/XX.XXX.XXX.X:XXXXX-X) org.apache.tomcat.jdbc.pool.PoolExhaustedException: [http-/XX.XXX.XXX.X:XXXXX-X] Timeout: Pool empty. Unable to fetch a connection in 10 seconds, none available[size:4; busy:4; idle:0; lastwait:10000].
12:26:36,411 ERROR [org.apache.catalina.core.ContainerBase.[jboss.web].[default-host].[/APP].[AppConf]] (http-/XX.XXX.XXX.X:XXXXX-X) JBWEB000236: Servlet.service() for servlet AppConf threw exception: org.jboss.resteasy.spi.UnhandledException: java.lang.NullPointerException
My expectation was that with pooling, requests for new connections would be held in a queue until a connection became available. Instead it seems that requests are rejected when the pool has reached capacity. Can this behaviour be changed?
Thanks,
Dal
This is my pool configuration:
PoolProperties p = new PoolProperties();
p.setUrl("jdbc:oracle:thin:#" + server + ":" + port + ":" + SID_SVC);
p.setDriverClassName("oracle.jdbc.driver.OracleDriver");
p.setUsername(username);
p.setPassword(password);
p.setMaxActive(4);
p.setInitialSize(1);
p.setMaxWait(10000);
p.setRemoveAbandonedTimeout(300);
p.setMinEvictableIdleTimeMillis(150000);
p.setTestOnBorrow(true);
p.setValidationQuery("SELECT 1 from dual");
p.setMinIdle(1);
p.setMaxIdle(2);
p.setRemoveAbandoned(true);
p.setJdbcInterceptors(
"org.apache.tomcat.jdbc.pool.interceptor.ConnectionState;"
+ "org.apache.tomcat.jdbc.pool.interceptor.StatementFinalizer;"
+ "org.apache.tomcat.jdbc.pool.interceptor.ResetAbandonedTimer");
This is working as designed. The log says Timeout: Pool empty. Unable to fetch a connection in 10 seconds, and your configuration sets p.setMaxWait(10000): the requesting thread does wait in a queue, but only for 10 seconds (10,000 ms, maxWait) before giving up.
You have two options: increase maxActive so more connections are available, or check for connection leaks and long-running queries that hold connections longer than you expect.
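For the first option it is just a matter of raising the pool limits; the values below are illustrative and should be sized against your database's own connection limit. logAbandoned helps with the second option by logging the stack trace of code that borrowed a connection and never returned it:
PoolProperties p = new PoolProperties();
// ... same URL, driver and credentials as above ...
p.setMaxActive(50);   // more concurrent connections (illustrative value)
p.setMaxIdle(10);     // keep a few spares ready under load
p.setMaxWait(30000);  // let callers queue for up to 30s before failing
p.setRemoveAbandoned(true);
p.setRemoveAbandonedTimeout(300);
p.setLogAbandoned(true); // print the stack trace of leaked borrows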