I am currently running some Google Cloud Functions (in TypeScript) that require a connection to a Redis instance in order to LPUSH into a queue (on other instances, I am using Redis as a queue worker).
Everything is fine, except that I am getting a huge number of ECONNRESET- and ETIMEDOUT-related errors despite everything working properly.
The following code executes successfully in the Cloud Function, yet I still see constant errors related to the Redis connection.
I think it is somehow related to how I am importing my client, ioredis. I have utils/index.ts and utils/redis.js, and inside redis.js I have:
const Redis = require('ioredis');
module.exports = new Redis(6380, 'MYCACHE.redis.cache.windows.net', { tls: true, password: 'PASS' });
Then I am importing this in my utils/index.ts and exporting some async function that LPUSHes into the queue. The exact code is missing here, but a rough sketch of what it might look like follows.
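A minimal sketch, assuming utils/index.ts simply re-uses the shared client from utils/redis.js and pushes a batch of results with LPUSH; the function name pushResults and the queue key 'myqueue' are hypothetical, not from the original post.

// utils/index.ts (sketch only; pushResults and 'myqueue' are hypothetical names)
import redis = require('./redis'); // the shared ioredis client created in utils/redis.js

// Push a batch of results onto the Redis list used as the queue.
export async function pushResults(results: string[]): Promise<number> {
  // LPUSH returns the new length of the list.
  return redis.lpush('myqueue', ...results);
}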
When executing in the GCF environment, results.length contains the expected number of results, and by monitoring Redis directly I can see the list was pushed to the queue as expected.
Nevertheless, these errors continue to appear incessantly.
[ioredis] Unhandled error event: Error: read ECONNRESET
    at _errnoException (util.js:1022:11)
    at TLSWrap.onread (net.js:628:25)
I am using Vue on the frontend of my application. It runs well on my local machine without any errors, but on the server there are issues such as click events not setting items. When I check the console, I get this error: Uncaught DOMException: Failed to execute 'setItem' on 'Storage': Setting the value of '136114546' exceeded the quota.
I found this related question where the answer was that storage was full, but I have unlimited storage and it works well on my local machine.
What could be the solution to this kind of error? Since it is working well on my local server, could the problem be with the server?
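For reference, a minimal sketch of how to make this failure observable rather than letting it silently break the click handler; the helper name safeSetItem is hypothetical and not from the original code.

// Sketch only: safeSetItem is a hypothetical wrapper around localStorage.
function safeSetItem(key: string, value: string): boolean {
  try {
    window.localStorage.setItem(key, value);
    return true;
  } catch (e) {
    // Browsers throw a DOMException (often 'QuotaExceededError')
    // when the per-origin storage quota is exceeded.
    console.error('localStorage.setItem failed for', key, e);
    return false;
  }
}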
I use Google Cloud Run with Google Memorystore Redis.
My application is written in Node.js (Express) and connects to the Redis instance in this way:
const asyncRedis = require("async-redis");
const redisClient = asyncRedis.createClient(process.env.REDISPORT, String(process.env.REDISHOST));
redisClient.on('error', (err) => console.error('ERR:REDIS:', err));
.....
const value = await redisClient.get("Code");
Every half an hour the application loses its connection to Redis and I receive this error:
AbortError: Redis connection lost and command aborted. It might have been processed.
at RedisClient.flush_and_error (/usr/src/app/node_modules/redis/index.js:362)
at RedisClient.connection_gone (/usr/src/app/node_modules/redis/index.js:664)
at RedisClient.on_error (/usr/src/app/node_modules/redis/index.js:410)
at Socket.<anonymous> (/usr/src/app/node_modules/redis/index.js:279)
at Socket.emit (events.js:315)
at emitErrorNT (internal/streams/destroy.js:92)
at emitErrorAndCloseNT (internal/streams/destroy.js:60)
at processTicksAndRejections (internal/process/task_queues.js:84)
Within two or three minutes the connection comes back, and the application works correctly for about half an hour until the next disconnect.
Any idea?
According to the official GCP documentation, there are several scenarios that can cause connectivity issues with your Redis instance. However, since the issues you are experiencing are intermittent and you normally connect to the instance successfully, none of the documented scenarios seems to apply to your situation.
There is a similar issue open on GitHub, where some users were experiencing this after reaching the maximum number of clients for their Redis instance. However, I would recommend contacting GCP technical support so that they can review your particular configuration and inspect your instance with their internal tools.
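Not part of the answer above, but as a hedged sketch of a common mitigation while the root cause is investigated: node_redis (which async-redis wraps) accepts a retry_strategy option, so the client keeps reconnecting with a backoff instead of aborting commands when the connection drops. This assumes async-redis forwards the options object unchanged; the backoff values are illustrative.

// Sketch only; assumes async-redis passes these options through to node_redis.
const asyncRedis = require("async-redis");

const redisClient = asyncRedis.createClient({
  port: process.env.REDISPORT,
  host: String(process.env.REDISHOST),
  // Reconnect with a capped backoff instead of giving up after a drop.
  retry_strategy: (options) => Math.min(options.attempt * 100, 3000),
});

redisClient.on('error', (err) => console.error('ERR:REDIS:', err));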
I was making a simple file upload & download service and found out that, as far as I understand, Netty doesn't release direct buffers until request processing is over. As a result, I can't upload larger files.
To make sure the problem is not in my code, I created the simplest possible tiny Ktor application:
routing {
post("upload") {
call.receiveMultipart().forEachPart {}
call.respond(HttpStatusCode.OK)
}
}
The default direct memory size is about 3 GB; to make the test simpler, I limit it with:
System.setProperty("io.netty.maxDirectMemory", (10 * 1024 * 1024).toString())
before starting the NettyApplicationEngine.
Now if I upload a large file, for example with httpie, I get "Connection reset":
http -v --form POST http://localhost:42195/upload file@/tmp/FileStorageLoadTest-test-data1.tmp
http: error: ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer')) while doing POST request to URL: http://localhost:42195/upload
On the server side there is no information about the problem except for a "java.io.IOException: Broken delimiter occurred" exception. But if I put a breakpoint in NettyResponsePipeline#processCallFailed, the real exception is:
io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 65536 byte(s) of direct memory (used: 10420231, max: 10485760)
It is a pity that this exception is not logged.
Also, I found out that the same code works without problems if I use Jetty engine instead.
Environment:
Ubuntu Linux
Java 8
Ktor=1.2.5
netty-transport-native-epoll=4.1.43.Final
(but the problem is the same if Netty is started without native-epoll support)
I am running a workflow on an n1-ultramem-40 instance that will run for several days. If an error occurs, I would like to catch and log the error, be notified, and automatically terminate the virtual machine. Could I use Stackdriver and gcloud logging to achieve this? How could I automatically terminate the VM using these tools? Thanks!
Let's break the puzzle into two parts. The first is logging an error to Stackdriver and the second is performing an external action automatically when such an error is detected.
Stackdriver provides a wide variety of language bindings and package integrations that result in log messages being written. You could include such API calls in the part of your application that detects the error. If you don't have access to the source code of your application but it logs to an external file instead, you could use the Stackdriver agents to monitor the log files and relay the log messages to Stackdriver.
Once the error messages are being sent to Stackdriver, the next task is defining a Stackdriver log export. This is the act of defining a "filter" that looks for the specific log entry message(s) that you are interested in acting upon. Associated with this export definition and filter is a Pub/Sub topic. A Pub/Sub message is then written to this topic whenever a matching Stackdriver log entry is made.
Finally, we now have our trigger to perform your action. We can use a Cloud Function triggered by the Pub/Sub message to execute arbitrary API logic, for example code that calls the GCP API to terminate the VM.
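A minimal sketch of that last step, assuming a Node.js background Cloud Function and the @google-cloud/compute client library; PROJECT_ID, the zone, and INSTANCE_NAME are placeholders, and stopVmOnError is a hypothetical function name.

// Sketch only; PROJECT_ID, zone, and INSTANCE_NAME are placeholders.
const compute = require('@google-cloud/compute');

const instancesClient = new compute.InstancesClient();

// Background Cloud Function triggered by the Pub/Sub topic attached
// to the Stackdriver log export described above.
exports.stopVmOnError = async (message, context) => {
  // The exported log entry arrives base64-encoded in message.data.
  const logEntry = Buffer.from(message.data, 'base64').toString();
  console.error('Matching error log entry received, stopping VM:', logEntry);

  await instancesClient.stop({
    project: 'PROJECT_ID',
    zone: 'us-central1-a',
    instance: 'INSTANCE_NAME',
  });
};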
I am an iPhone/iPad developer using Objective-C, and I am using CouchDB for my application.
My issue is: if I delete my local CouchDB (local database) or run the app for the first time, I am getting this error:
OTHER: {'EXIT',{error,timeout,#Ref<0.0.0.415>}}
This is my workflow:
My application replicates to a remote iriscouch instance using xyz:a...#mmm.iriscouch.com/databasename.
Credentials are checked.
If the replication succeeds, everything works as expected.
If I reset my local CouchDB contents and repeat the steps above, 'sometimes' I get an error (mentioned below) and there is no more synchronization with the remote; it is hard to re-sync the application.
This is the error from the log:
[info] [<0.140.0>] 127.0.0.1 - - GET /_replicator/_changes?feed=continuous&heartbeat=300000&since=1 200
1> OTHER: {'EXIT',{error,timeout,#Ref<0.0.0.506>}}
1> OTHER: {'EXIT',{error,timeout,#Ref<0.0.0.507>}}
1>
Waiting for your response,
Krishna.
It seems like some validation functions are running at the destination end, but in this case the message comes from an Erlang process tree timing out. It should restart by itself after a few (probably 5) seconds.