https://github.com/antirez/redis/issues/3689
On a RHEL (Red Hat) machine I installed Redis 3.0.7 as a daemon; let's call this "A".
On a Windows Server 2012 machine I installed Redis 3.2.1 as a service; let's call this "B".
I want to migrate the key "IdentityRepo" from A to B. To achieve that, I tried to execute the following command on Redis A:
migrate <IP of B> 6379 "IdentityRepo" 3 1000 COPY REPLACE
The following error occurred:
(error) ERR Target instance replied with error: ERR DUMP payload version or checksum are wrong
What could be the problem?
The encoding version changed between v3.0 and v3.2 due to the addition of quicklists, so MIGRATE, as well as DUMP/RESTORE, will not work in that scenario.
To work around it, you'll need to read the value from the old database and then write it to the new one using any Redis client.
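For example, if the key is a hash (as "IdentityRepo" appears to be), a minimal C sketch using the hiredis client could look like this; the host IPs are placeholders and error handling is kept to a minimum:
/* Sketch: copy a hash between Redis versions whose DUMP formats differ,
   by reading field/value pairs from A and rewriting them on B. */
#include <hiredis/hiredis.h>
#include <stdio.h>
int main(void) {
    redisContext* src = redisConnect("10.0.0.1", 6379);  /* Redis A (3.0.7) */
    redisContext* dst = redisConnect("10.0.0.2", 6379);  /* Redis B (3.2.1) */
    if (!src || src->err || !dst || dst->err) {
        fprintf(stderr, "connect failed\n");
        return 1;
    }
    /* Read every field of the hash from A... */
    redisReply* all = (redisReply*)redisCommand(src, "HGETALL IdentityRepo");
    if (all && all->type == REDIS_REPLY_ARRAY) {
        /* ...and rewrite each field/value pair on B, avoiding DUMP/RESTORE. */
        for (size_t i = 0; i + 1 < all->elements; i += 2) {
            redisReply* r = (redisReply*)redisCommand(
                dst, "HSET IdentityRepo %b %b",
                all->element[i]->str,     (size_t)all->element[i]->len,
                all->element[i + 1]->str, (size_t)all->element[i + 1]->len);
            freeReplyObject(r);
        }
    }
    if (all) freeReplyObject(all);
    redisFree(src);
    redisFree(dst);
    return 0;
}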
Related
I set up ScyllaDB on my Debian 9.6 machine. When I run cqlsh I can connect to it, create tables, run queries, etc.
Now I tried to write a simple program in C++ using the DataStax driver, and it can't connect. It always blocks when it tries to connect.
The scylla package I installed is:
scylla | 3.0.11-0.20191126.3c91bad0d-1~stretch
cpp_driver is the current master from github: https://github.com/datastax/cpp-driver
Now I tried to run the examples/simple project included with the driver, so I assume it should work, but it shows the same problem. I don't get any errors; it just blocks:
CassCluster* cluster = cass_cluster_new();
CassSession* session = cass_session_new();
const char* hosts = "127.0.0.1";
cass_cluster_set_contact_points(cluster, hosts);
cass_cluster_set_protocol_version(cluster, CASS_PROTOCOL_VERSION_V4);
CassFuture* connect_future = cass_session_connect(session, cluster);
// here it blocks now forever...
CassError er = cass_future_error_code(connect_future);
I also tried running it on Ubuntu 16.04, and it shows the same problem. Since connecting with cqlsh works, I don't think it is a configuration problem, but rather something with the cpp_driver.
I also traced the TCP connection, and I can see that the cpp_driver talks to the server; the exchange looks similar to the cqlsh conversation.
I finally found the solution for this issue. We were using cpp_driver 2.15.1, which apparently changed its event handling, according to the release notes. When I downgraded to 2.15.0 the problem was gone and the connection could be established successfully.
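Independently of the driver version, a bounded wait makes this kind of hang observable instead of blocking forever. A minimal sketch using the driver's cass_future_wait_timed (the 5-second timeout is arbitrary):
/* Sketch: same connect as above, but with a timed wait for diagnosis. */
#include <cassandra.h>
#include <stdio.h>
int main(void) {
    CassCluster* cluster = cass_cluster_new();
    CassSession* session = cass_session_new();
    cass_cluster_set_contact_points(cluster, "127.0.0.1");
    CassFuture* connect_future = cass_session_connect(session, cluster);
    /* Wait at most 5 seconds (the argument is in microseconds). */
    if (cass_future_wait_timed(connect_future, 5 * 1000 * 1000)) {
        CassError rc = cass_future_error_code(connect_future);
        printf("connect finished: %s\n", cass_error_desc(rc));
    } else {
        printf("connect still pending after 5s -- the hang described above\n");
    }
    cass_future_free(connect_future);
    cass_session_free(session);
    cass_cluster_free(cluster);
    return 0;
}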
I'm using Hyperledger Fabric, and I'm now trying to make a backup of the current state and restore it on a different computer.
I'm following the procedure found in hyperledger-fabric-backup-and-restore.
The main steps being:
Copy the crypto-config and channel-artifacts directories
Copy the content of all peer and orderer containers
Modify docker-compose.yaml to link the container volumes to the local directory holding the backup copy.
Yet it's not working properly in my case: when I restart the network with ./byfn.sh up, all the containers initially come up correctly, but then whatever operation I try to execute on the channel (peer channel create, peer channel join, peer channel update) fails with the error:
Error: got unexpected status: BAD_REQUEST -- error applying config update to existing channel 'mychannel': error authorizing update: error validating ReadSet: proposed update requires that key [Group] /Channel/Application be at version 0, but it is currently at version 1
Is there anything I should do that is not mentioned in hyperledger-fabric-backup-and-restore?
I got the same error while trying to create a channel. Taking the network down and then bringing it back up solved my problem.
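With the stock fabric-samples byfn.sh script, that amounts to something like this (a sketch; the exact flags vary between releases):
./byfn.sh down            # tear down containers, volumes, and generated artifacts
./byfn.sh up -c mychannel # regenerate the artifacts and bring the network back up
A full down/up discards the previous channel configuration, which is presumably what clears the ReadSet version mismatch above.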
Using "Redis server v=3.2.1 sha=00000000:0 malloc=jemalloc-4.0.3 bits=64 build=bcc0f4a36956ba3e" all hget that I did get updated value from a hash and work nice.
Using "Redis server v=3.2.10 sha=00000000:0 malloc=jemalloc-3.6.0 bits=64 build=c8b45a0ec7dc67c6" with same config file base hget return always nil. Using two new parameters: "list-max-ziplist-entries 512
list-max-ziplist-value 64" I can get hget working again, but if I change in redis master a value of object, 3.2.10 version will not update that value and 3.2.1 will.
3.2.1 is compiled from me and 3.2.10 is from CentOS.
I did not found any weird/error/warn log in client or server logs. I am trying to understand why I am getting nil or values that do not update. I waited sometime to full resync, but 3.2.10 continue showing nil or outdated value (I am changing manually values to test if 3.2.10 is getting updates or not).
Forgot to post feedback: maxmemory was the problem. Solved.
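In case it helps others: a quick way to confirm the same cause is to compare the memory limit with actual usage and the eviction counters (a sketch; host and port are placeholders):
redis-cli -h <host> -p 6379 CONFIG GET maxmemory   # 0 means no limit
redis-cli -h <host> -p 6379 INFO memory            # compare used_memory with maxmemory
redis-cli -h <host> -p 6379 INFO stats             # a growing evicted_keys count explains the nils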
I created two separate directories in which I installed the standalone Mule ESB server:
/ee/mmc-distribution-mule-console-bundle-3.5.2-HF1
/ee2/mmc-distribution-mule-console-bundle-3.5.2-HF1
I start up the first server, and below is the status:
[root@x240perf2 mmc-distribution-mule-console-bundle-3.5.2-HF1]# ./status.sh
MMC is running as PID=1998.
Mule Enterprise Edition is running as PID=2619.
Then I try to start the second instance:
[root@x240perf2 mmc-distribution-mule-console-bundle-3.5.2-HF1]# ./startup.sh
Port 8585 is in use, please make it available and try again.
So apparently port 8585 is being used by the original instance.
So I stop the first instance and start the second instance, which comes up successfully, as follows:
./startup.sh
Please enter the desired port for Mule [Default 7777]:
Starting MMC, please wait...
class com.sun.jersey.multipart.impl.MultiPartConfigProvider
class com.sun.jersey.multipart.impl.MultiPartReader
class com.sun.jersey.multipart.impl.MultiPartWriter
[11-13 16:49:19] WARN HttpSessionSecurityContextRepository [http-bio-8585-exec-1]: Failed to create a session, as response has been committed. Unable to store SecurityContext.
[11-13 16:49:32] WARN HttpMethodBase [http-bio-8585-exec-12]: Going to buffer response body of large or unknown size. Using getResponseBodyAsStream instead is recommended.
[11-13 16:49:38] WARN HttpSessionSecurityContextRepository [http-bio-8585-exec-12]: Failed to create a session, as response has been committed. Unable to store SecurityContext.
Nov 13, 2014 4:49:50 PM org.apache.catalina.core.StandardServer await
INFO: A valid shutdown command was received via the shutdown port. Stopping the Server instance.
Nov 13, 2014 4:49:50 PM org.apache.coyote.AbstractProtocol pause
INFO: Pausing ProtocolHandler ["http-bio-8585"]
But notice it seems to be using 8585 for Tomcat (of which I know little, except that it is some sort of app server; I have never used it).
I examined this site:
http://www.mulesoft.org/documentation/display/33X/Running+Multiple+Mule+Instances
but it does not discuss the issue, and the page it points to does not seem current. Did I misunderstand something?
Is it possible to run two separate instances of Mule ESB at the same time,
and if so, how? (How would I change the port it is using, and which file should I modify?)
Thanks
Edit: my second post, in response to the answer:
(BTW: I am using Mule ESB standalone Enterprise Edition 3.5.2)
To make sure I did not have any apps running
on port 8585, I shut down my original instance, created two new instances, and made sure no apps were deployed to either instance.
I brought up the first instance without issue, but the second instance still gives me the port 8585 in use error (from startup.sh).
This site says that the MMC default port is 7777, but that the default Tomcat port on which it runs is 8585:
http://www.mulesoft.org/documentation/display/current/Setting+Up+MMC-Mule+ESB+Communications
I used the following command to find all files within my second instance that reference port 8585:
find . -type f | xargs grep "8585"
Other than log files, I got two hits:
startup.sh
and
/mmc-3.5.2-HF1/apache-tomcat-7.0.52/conf/server.xml
I did NOT find $MULE_HOME/apps/mmc/mule-config.xml in either instance (probably because I have no apps deployed).
In server.xml, MMC apparently uses Tomcat to
serve the MMC application, and server.xml contains
the following:
<Connector port="8585" protocol="HTTP/1.1"
So I guess I could change 8585 to 8586 at this point, but ...
The startup.sh has several (about 9 or 10) hardcoded references to 8585, used to check whether the MMC is running and to take action accordingly.
So do I actually have to edit the whole startup.sh in the second instance to replace 8585 with 8586, as well as change the server.xml port 8585 reference?
Thanks
You can run as many instances as you want, as long as they don't use the same ports. It looks like you are deploying something on port 8585, so in the second instance you have to select a different port.
Is that port being used in any application that you developed and deployed in the Mule runtime?
Also, if you are using the Mule runtime with the MMC agent activated, you have to change the agent's port in the second instance as well. I think you can do that in /conf/wrapper.conf or by passing the startup script the following parameter:
-Dmule.mmc.bind.port=7778
(or any port that is free).
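A sketch of both edits on the second instance (the port numbers and the wrapper property index are examples; pick values not already in use):
In apache-tomcat-7.0.52/conf/server.xml, change the console port, keeping the other Connector attributes as they are:
<Connector port="8586" protocol="HTTP/1.1"
In conf/wrapper.conf, move the MMC agent off its default port:
wrapper.java.additional.42=-Dmule.mmc.bind.port=7778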
You can run as many as you want.
In MMC we can deploy and run many applications; each application has its own instance.
Today we found our host status is "Needs Attention".
We had upgraded to WMF 3.0.
Checking the health status reports the following error:
A Hardware Management error has occurred trying to contact server
iwwbgc8.dir.slb.com :a:DestinationUnreachable :The WS-Management
service cannot process the request. The service cannot find the
resource identified by the resource URI and selectors. .
Check that WinRM is installed and running on server
iwwbgc8.dir.slb.com. For more information use the command "winrm
helpmsg hresult".
ID: 2927 Details: Unknown error (0x8033803b)
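In concrete terms, the checks the message suggests look like this (a sketch, run from an elevated prompt; the host name is the one from the error):
rem Decode the HRESULT from the error:
winrm helpmsg 0x8033803b
rem On the affected host, verify that WinRM is installed and running:
winrm quickconfig
rem From the VMM server, check that the host answers WS-Management requests:
winrm id -r:iwwbgc8.dir.slb.com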
We followed the posts How to Interpret Job Failures in VMM and How to troubleshoot the "Needs Attention" and "Not Responding" host status in System Center 2012 Virtual Machine Manager,
but the error is still there.
There are also some performance counter issues in the event log, but even after following the post How to manually rebuild Performance Counters for Windows Server 2008 64bit or Windows Server 2008 R2 systems, the performance counters could not be fixed.
Error:
Installing the performance counter strings for service .NET Data Provider for Oracle (_) failed. The first DWORD in the Data section
contains the error code.
Cannot repair performance counters for .NET Data Provider for Oracle service. Reinstall the performance counters manually using the
LODCTR tool.
Event filter with query "SELECT * FROM __InstanceModificationEvent WITHIN 60 WHERE TargetInstance ISA "Win32_Processor" AND
TargetInstance.LoadPercentage > 99" could not be reactivated in
namespace "//./root/CIMV2" because of error 0x80041003. Events cannot
be delivered through this filter until the problem is corrected.
Unable to read Server Queue performance data from the Server service. The first four bytes (DWORD) of the Data section contains the
status code, the second four bytes contains the IOSB.Status and the
next four bytes contains the IOSB.Information.
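For the performance counter message, the manual LODCTR rebuild it asks for boils down to the following (a sketch based on the referenced KB article; run from an elevated prompt):
cd /d %windir%\system32
lodctr /R
rem On 64-bit systems the article also rebuilds the 32-bit counters:
cd /d %windir%\syswow64
lodctr /R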
Any idea about it?
Later we found the issue was caused by the WMF 3.0 upgrade.
We followed the post Managing Hyper-V hosts using Virtual Machine Manager fails with Error: 0x8033803b after installing WMF 3.0 and applied the hotfix.
The hotfix (Windows6.1-KB2781512-x64) was applied, but the issue still existed.
In the end I chose to uninstall WMF 3.0,
and that finally fixed the issue.
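For reference, on Windows Server 2008 R2 WMF 3.0 ships as update KB2506143 (an assumption worth verifying for your exact OS), so the uninstall can be scripted as:
rem Remove WMF 3.0 (KB number applies to Server 2008 R2 / Windows 7 SP1):
wusa /uninstall /kb:2506143 /quiet /norestart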