Need to resolve Predis "ERR syntax error" - redis

I made a back-end app based on Predis, PHP, and Redis. Local development was done with Redis 6.x and worked great. This code is used for logging messages from other executing code to a remote Redis server, where a worker consumes the stream data and moves it into MySQL for longer-term storage.
Moving the code onto our dev server for testing, I started getting "ERR syntax error". After some digging I found dev was running Redis 5.x, so we upgraded to Redis 7.x. The code was still reporting the same error. I ended up changing the code to use a different form of the Redis command, and that worked (see the workaround below).
/**
 * Return data on the status of pending stream entries.
 *
 * @return array
 */
public function findPending() {
    // Builds: XPENDING hubCurlLog hubCurlLog-group IDLE 9000 - + 1 hubCurlLog-group_1
    $command = ['XPENDING', $this->stream_name, $this->stream_group_name, 'IDLE', 9000, '-', '+', 1, $this->consumer_name];
    return $this->redis_client->executeRaw($command);
}
The function above will produce a Redis command like
XPENDING hubCurlLog hubCurlLog-group IDLE 9000 - + 1 hubCurlLog-group_1
which is where I am getting the ERR syntax error.
If I run the same command in redis-cli I still get the same error, but I do see the inline hint while typing, so the command is known to the server.
(screenshot: redis-cli showing the command hint)
How do I get past this syntax issue?

I found a workaround that seems to work: I removed the IDLE option and its time value, changing
XPENDING hubCurlLog hubCurlLog-group IDLE 9000 - + 1 hubCurlLog-group_1
to
XPENDING hubCurlLog hubCurlLog-group - + 1 hubCurlLog-group_1
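For reference, a minimal sketch of the adjusted function, assuming the same class properties as in the original code above:
public function findPending() {
    // Same query without the IDLE filter:
    // XPENDING hubCurlLog hubCurlLog-group - + 1 hubCurlLog-group_1
    $command = ['XPENDING', $this->stream_name, $this->stream_group_name, '-', '+', 1, $this->consumer_name];
    return $this->redis_client->executeRaw($command);
}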

Related

Predis "Error while reading line from server", timeout fix

I am using Redis with daemon processes as well as for regular caching:
Daemon processes with Supervisor (Laravel Redis queues)
Regular caching as key-value pairs
timeout=300 is currently set in my redis.conf file.
It has been suggested in several GitHub threads (https://github.com/predis/predis/issues/33) to change it to timeout=0.
My concern is that if I set the timeout to 0, the Redis server will never drop idle connections.
Over time, I see a chance of hitting the max number of clients reached error.
Seeking advice on changing timeout to 0 in redis.conf.
Currently, with timeout=300, I get the following error logs frequently (every 2-3 minutes):
{"message":"Error while reading line from the server. [tcp://10.10.101.237:6379]","context":
{"exception":{"class":"Predis\\Connection\\ConnectionException","message":"Error while reading
line from the server.
[tcp://10.10.101.237:6379]","code":0,"file":"/var/www/api/vendor/predis/predis/src/Connection/Ab
stractConnection.php:155"}},"level":400,"level_name":"ERROR","channel":"production","datetime":
{"date":"2020-09-23 07:14:01.207506","timezone_type":3,"timezone":"Asia/Kolkata"},"extra":[]}
I changed to timeout=0 and everything has been working fine with it.
PS: Posting this after observing the change for two months.
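For reference, a minimal redis.conf sketch of this change; the tcp-keepalive line is my addition rather than something from the original post, but it is the usual companion setting so the server can still detect dead peers when timeout is 0:
# never close idle client connections
timeout 0
# assumption: send TCP keepalive probes so broken connections are still reaped
tcp-keepalive 300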

How to connect ioredis to a Google Cloud Function?

I am currently running some Google Cloud Functions (in TypeScript) that require a connection to a Redis instance in order to LPUSH into the queue (on other instances, I am using Redis as a queue worker).
Everything is fine, except I am getting a huge number of ECONNRESET and connection-timeout related errors despite everything working properly.
The following code executes successfully on the cloud function, but I still see constant errors related to the Redis connection.
I think it is somehow related to how I am importing my client, ioredis. I have utils/index.ts and utils/redis.js, and inside redis.js I have:
const Redis = require('ioredis');
// create and export a single shared client over TLS
module.exports = new Redis(6380, 'MYCACHE.redis.cache.windows.net', { tls: true, password: 'PASS' });
Then I am importing this in my utils/index.ts like so: (code missing)
And exporting some async function like: (code missing)
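The original post omits both snippets; a hypothetical reconstruction, assuming the client is re-exported through utils/index.ts and the function LPUSHes a batch of values (the names pushResults, queue, and results are my inventions):
// utils/index.ts (hypothetical reconstruction; the real code was not shown)
const redis = require('./redis');

// hypothetical helper: LPUSH a batch of values onto the queue in one call
export async function pushResults(queue: string, results: string[]): Promise<number> {
  return redis.lpush(queue, ...results);
}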
When executing in the GCF environment, I get the expected number of results in results.length, and I can see (by monitoring Redis internally) that the list was pushed to the queue as expected.
Nevertheless, these errors continue to appear incessantly:
[ioredis] Unhandled error event: Error: read ECONNRESET
    at _errnoException (util.js:1022:11)
    at TLSWrap.onread (net.js:628:25)
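One detail worth noting: ioredis prints "Unhandled error event" when no 'error' listener is attached to the client, so attaching one at least makes the failures observable. A minimal sketch of utils/redis.js with a handler (the handler itself is my addition, not from the original post):
const Redis = require('ioredis');

const client = new Redis(6380, 'MYCACHE.redis.cache.windows.net', { tls: true, password: 'PASS' });

// surface connection errors instead of the default "[ioredis] Unhandled error event" log
client.on('error', (err) => console.error('redis connection error:', err.message));

module.exports = client;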

Cannot migrate a key between Redis instances

https://github.com/antirez/redis/issues/3689
On a RHEL (Red Hat) machine I installed Redis 3.0.7 as a daemon; let's call this "A".
On a Windows Server 2012 machine I installed Redis 3.2.1 as a service; let's call this "B".
I want to migrate the key "IdentityRepo" from A to B. In order to achieve that, I tried to execute the following command on Redis A:
migrate <IP of B> 6379 "IdentityRepo" 3 1000 COPY REPLACE
The following error occurred:
(error) ERR Target instance replied with error: ERR DUMP payload version or checksum are wrong
What can be the problem?
The encoding version changed between v3.0 and v3.2 due to the addition of quicklists, so MIGRATE, as well as DUMP/RESTORE, will not work in that scenario.
To work around it, you'll need to read the value from the old database and then write it to the new one using any Redis client.
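For example, assuming "IdentityRepo" is a list (adjust the read/write commands for other types), a minimal Predis sketch of that read-then-write copy; the host values are placeholders:
$src = new Predis\Client(['host' => 'IP-of-A']);
$dst = new Predis\Client(['host' => 'IP-of-B', 'database' => 3]);

$items = $src->lrange('IdentityRepo', 0, -1); // read the whole list from A
if ($items) {
    $dst->del('IdentityRepo');           // mimic REPLACE semantics
    $dst->rpush('IdentityRepo', $items); // write it to B
}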

OTHER: {'EXIT',{error,timeout,#Ref<0.0.0.415>}}

I am an iPhone/iPad developer using Objective-C, and I am using CouchDB for my application.
My issue is: if I delete my local CouchDB (local database) or run for the first time, I get the error:
OTHER: {'EXIT',{error,timeout,#Ref<0.0.0.415>}}
This is my workflow:
My application replicates to a remote iriscouch database using xyz:a...@mmm.iriscouch.com/databasename.
Credentials are checked.
If the replication succeeds, everything works as expected.
If I reset my local CouchDB contents and repeat the step above, sometimes I get the error below and there is no more synchronization with the remote, and it is hard to re-sync the application.
This is that error from log:
[info] [<0.140.0>] 127.0.0.1 - - GET /_replicator/_changes?feed=continuous&heartbeat=300000&since=1 200
1> OTHER: {'EXIT',{error,timeout,#Ref<0.0.0.506>}}
1> OTHER: {'EXIT',{error,timeout,#Ref<0.0.0.507>}}
1>
Waiting for your response,
Krishna.
This may look like validation functions running at the destination end, but in this case the message comes from an Erlang process tree timing out. It should restart by itself after a few (probably 5) seconds.

Testing Openfire with Grinder (BOSH load testing)

I have been load testing an Openfire server over BOSH, but I have been getting the following error after a few minutes of running.
1)
11/4/11 3:49:33 PM (thread 3 run 0 test 601): Aborted run due to Java exception calling TestRunner
Java exception calling TestRunner
File "D:\grinder\projects\loadtest\bin\..\tests\..\tests\one2one.py", line 144, in changePresence
File "D:\grinder\projects\loadtest\bin\..\tests\..\tests\one2one.py", line 208, in __call__
Caused by: java.net.BindException: Address already in use: connect
2) I have also been getting 404 Invalid SID errors.
Initially I had set up Openfire on Windows Server 2003, but later I set it up on Ubuntu 11.10 (2.0 GiB RAM, Intel Core Duo T2400 @ 1.83GHz).
1) First, I ran a PHP cURL script using the userservice plugin to add around 10,000 users (during which I got a lot of blank responses, so maybe this is related to the problem, but I will not focus on that misbehaviour now).
2) I needed to test this with 400 concurrent users, so I set the following grinder.properties:
grinder.processes=4
grinder.threads=100
grinder.runs=1
grinder.consoleHost=192.168.1.205
grinder.consolePort=6372
grinder.logDirectory=../logs
grinder.numberOfOldLogs=0
grinder.jvm.arguments=-Dpython.cachedir=../tmp
grinder.script=../tests/one2one.py
(This strangely ended up starting only 103 concurrent users.)
(I have tried testing this using one agent.)
3) I did a bit of research and found that I could configure Openfire for BOSH, so I added the following system properties:
xmpp.httpbind.client.idle 360
xmpp.httpbind.client.requests.max 400
I badly need help! Does anyone have insight into how I can resolve this?
The "Address already in use" problem is odd. You may want to try with
grinder.processes=1
grinder.threads=400
As for only seeing 103 concurrent users: how long does a single one of your Grinder runs take to execute? My thinking is that the earliest threads the JVM executes are completing before the final threads have a chance to fully initialize and do work. If you try this:
grinder.runs=100
You will be more likely to achieve the full level of concurrency you are looking for.
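Putting both suggestions together, the relevant grinder.properties entries would look like this (a sketch; every other setting stays as in the question):
grinder.processes=1
grinder.threads=400
grinder.runs=100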