Different HGET results in different Redis versions

Using "Redis server v=3.2.1 sha=00000000:0 malloc=jemalloc-4.0.3 bits=64 build=bcc0f4a36956ba3e" all hget that I did get updated value from a hash and work nice.
Using "Redis server v=3.2.10 sha=00000000:0 malloc=jemalloc-3.6.0 bits=64 build=c8b45a0ec7dc67c6" with same config file base hget return always nil. Using two new parameters: "list-max-ziplist-entries 512
list-max-ziplist-value 64" I can get hget working again, but if I change in redis master a value of object, 3.2.10 version will not update that value and 3.2.1 will.
3.2.1 was compiled by me and 3.2.10 comes from the CentOS packages.
I did not find any weird/error/warning entries in the client or server logs. I am trying to understand why I am getting nil or values that never update. I waited some time for a full resync, but 3.2.10 kept showing nil or an outdated value (I am changing values manually to test whether 3.2.10 is getting updates or not).

I forgot to post a follow-up: maxmemory was the problem. Solved.
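For anyone hitting the same symptoms, a quick way to check is to compare the configured limit against actual usage with redis-cli (these are standard commands; run them against your own instance):
redis-cli CONFIG GET maxmemory
redis-cli CONFIG GET maxmemory-policy
redis-cli INFO memory
If used_memory is at the maxmemory limit, one plausible way to get the behavior above is that keys are being evicted (or the replica cannot hold the full dataset), so reads come back nil or stale even though the master has the data.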

DataStax Cassandra cpp_driver hangs when connecting to node

I set up ScyllaDB on my Debian 9.6 machine. When I run cqlsh I can connect to it, create tables, run queries, etc.
Now I tried to write a simple program in C++ using the DataStax driver, and it can't connect. It always blocks when it tries to connect.
The scylla package I installed is:
scylla | 3.0.11-0.20191126.3c91bad0d-1~stretch
cpp_driver is the current master from github: https://github.com/datastax/cpp-driver
Now I tried to run the examples/simple project which is included with the driver, so I assume it should work, but it shows the same problem. I don't get any errors; it just blocks:
#include <cassandra.h>

CassCluster* cluster = cass_cluster_new();
CassSession* session = cass_session_new();
const char* hosts = "127.0.0.1";
cass_cluster_set_contact_points(cluster, hosts);
cass_cluster_set_protocol_version(cluster, CASS_PROTOCOL_VERSION_V4);
/* connect asynchronously; the returned future resolves when the attempt finishes */
CassFuture* connect_future = cass_session_connect(session, cluster);
// here it blocks now forever...
CassError rc = cass_future_error_code(connect_future);
I also tried to run it on Ubuntu 16.04, but it shows the same problem. Since connecting with cqlsh works, I don't think it is a configuration problem, but rather something with the cpp_driver.
I also traced the TCP connection, and I can see that the cpp_driver talks to the server; the exchange looks similar to the cqlsh conversation.
I finally found the solution to this issue. We were using cpp_driver 2.15.1, which apparently changed something in its event handling according to the release notes. When I downgraded to 2.15.0 the problem was gone and the connection could be established successfully.
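For anyone debugging a similar hang: instead of letting cass_future_error_code() block indefinitely, you can bound the wait with the driver's cass_future_wait_timed(). A minimal sketch continuing from the snippet above (the 5-second timeout is an arbitrary choice; fprintf needs <stdio.h>):
/* cass_future_wait_timed() takes a timeout in microseconds and
   returns cass_true if the future completed within it */
if (cass_future_wait_timed(connect_future, 5 * 1000 * 1000)) {
    CassError rc = cass_future_error_code(connect_future);
    if (rc != CASS_OK)
        fprintf(stderr, "connect failed: %s\n", cass_error_desc(rc));
} else {
    fprintf(stderr, "connect still pending after 5s, likely hung\n");
}
cass_future_free(connect_future);
This at least turns a silent hang into a diagnosable timeout.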

redis-cli FLUSHALL and FLUSHDB return OK but do nothing because Hubot restores Redis

On Ubuntu 16.04, interacting with a local Redis instance via redis-cli, working with a Node Hubot script which uses Redis as its primary data store.
When I type KEYS * I get a single key: hubot:storage.
So I run FLUSHALL and get an OK response. But if Hubot is running, or else as soon as it starts, it restores the value of that key immediately, so I can never delete it.
I've used the INFO command to check whether it is persisting on some other Redis instance, and I've cleared all backup files from /var/redis. Basically, I can't figure out where this data is being stored that it keeps getting restored from.
Any advice on how I could clear this out, or on where Hubot may be caching this?
It seems to be related to this code: https://github.com/hubotio/hubot-redis-brain/blob/master/src/redis-brain.js; specifically, the chunk at line 49 is what gets called before each restore.
Steps:
Stop hubot
Flush redis (important that this is done while hubot is NOT running)
Start hubot
The reasoning is that Hubot keeps an in-memory representation of the brain and writes it out to Redis at intervals. Perhaps a nicer solution, which would help during script development, would be a command that empties the brain and saves that state, but I can't see an obvious API for that in either robot.brain or hubot-redis-brain.
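As a concrete sketch of those steps, assuming Hubot runs as a systemd service named hubot (adapt to however you actually launch it):
sudo systemctl stop hubot
redis-cli FLUSHALL    # safe now: no Hubot process is left to re-save the brain
sudo systemctl start hubot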

Cannot migrate a key between redis instances

https://github.com/antirez/redis/issues/3689
On a RHEL (Red Hat) machine, Redis 3.0.7 is installed as a daemon: let's call this "A".
On a Windows Server 2012 machine, Redis 3.2.1 is installed as a service: let's call this "B".
I want to migrate the key "IdentityRepo" from A to B. To achieve that, I tried to execute the following command on Redis A:
migrate <IP of B> 6379 "IdentityRepo" 3 1000 COPY REPLACE
The following error occurred:
(error) ERR Target instance replied with error: ERR DUMP payload version or checksum are wrong
What could be the problem?
The encoding version was changed between v3.0 and v3.2 due to the addition of quicklists, so MIGRATE, as well as DUMP/RESTORE, will not work in that scenario.
To work around it, you'll need to read the value from the old database and then write it to the new one using any Redis client.
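As a minimal redis-cli sketch, assuming "IdentityRepo" is a hash (check with TYPE first and use the matching read/write commands for other types; HMSET is used here because B is Redis 3.2):
redis-cli -h <IP of A> TYPE IdentityRepo
redis-cli -h <IP of A> HGETALL IdentityRepo
redis-cli -h <IP of B> HMSET IdentityRepo <field1> <value1> <field2> <value2>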

DalliError: No server available but still write / read fragment

I was checking my production.log on my server, and these lines appear every time the code is expected to read or write a cache fragment:
DalliError: No server available
Write fragment views/artists/522-...
DalliError: No server available
Read fragment views/artists/92-...
Why is this happening, and is it something I need to be worried about?
I'm using dalli (2.6.2), cache_digests (0.2.0), rails (3.2.11) and memcached (1.4.2).
When I installed memcached, I happened to use sudo; can that have something to do with it?
That was probably related to the fact that Dalli was defaulting to localhost.
It has been solved since 2.6.4: https://github.com/mperham/dalli/pull/266
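If upgrading isn't an option, one workaround is to pin the memcached server explicitly rather than relying on the default. A minimal sketch for a Rails 3 app, in config/environments/production.rb (assuming memcached on the default localhost:11211):
config.cache_store = :dalli_store, 'localhost:11211'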

Failed to start Apache Directory Server - Error 04450

While I was trying to start ApacheDS 1.5.7 on Windows, Error 04450 occurred, and apacheds-rolling.log contains:
[21:07:27] ERROR [org.apache.directory.shared.ldap.entry.DefaultServerAttribute] - ERR_04450 The value {0} is incorrect, it hasnt been added
[21:07:27] ERROR [org.apache.directory.server.Service] - Cannot start the server : reuseAddress can't be set while the acceptor is bound.
How can I fix this problem? Can anybody help me? Many thanks!
The warning log message is a bit misleading; this is actually not a serious issue, and the server should be running despite the warning. This was fixed a while back in the latest trunk code (which will be released as 2.0 instead of 1.5.8).
According to this post, the dc=example,dc=org context entry is no longer created by default, but no one has updated the documentation to reflect this. I installed 1.5.7 and it looks to me like the partition was created fine, but I'm getting the same error as described above. I suggest installing an older version.
The second error message suggests that the port is already in use. Is there a chance that you already had another ApacheDS process running, or that another program is using the ports?
This isn't a domain controller, perchance, is it? If so, the default LDAP ports 389 and 636 are already in use for Active Directory, so you'll need to choose another. However, I believe the defaults for ApacheDS are 10389 (LDAP) and 10636 (LDAPS), which would typically be free on a Windows box.
You can check for processes on the ports with the netstat -abn command, and look through the list for the process listening on port 10389 or whichever custom port you chose.
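For example, from an elevated Command Prompt (the -b flag needs administrator rights; an alternative is to map the port to a PID first, with 10389 as the assumed port):
netstat -aon | findstr :10389
tasklist /FI "PID eq <PID from the previous output>"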