bitcoin cash bitcoin-cli cannot connect to RPC server - bitcoin

I'm running a Bitcoin server and I validate that it's working by running bitcoin-cli getinfo. I have a new Bitcoin Cash server which is set up in basically the same way, but when I run a CLI command it errors.
When I run this command from the server:
bitcoin-cli -rpcuser bitcoin -stdinrpcpass REDACTED_1aAbY -conf /data01/bitcoin/bitcoin.conf -rpcport 8332 getinfo
I get this result:
error: Could not locate RPC credentials. No authentication cookie could be found, and RPC password is not set. See -rpcpassword and -stdinrpcpass. Configuration file: (/home/ubuntu/.bitcoin/bitcoin.conf)
No matter which flags I use with bitcoin-cli, I get the same error. The error indicates a conf file should be at /home/ubuntu/.bitcoin/bitcoin.conf, but that file and directory don't exist.
My server config looks like this:
server=1
txindex=1
zmqpubrawtx=tcp://127.0.0.1:28332
zmqpubhashblock=tcp://127.0.0.1:28332
rpcallowip=127.0.0.1
rpcallowip=0.0.0.0/0
rpcuser=bitcoin
rpcpassword=REDACTED_1aAbY
rpcbind=0.0.0.0
rest=1
daemon=1
datadir=/data01/bitcoin
rpcworkqueue=128
whitelist=0.0.0.0/0
rpcallowip=::/0
printtoconsole=1
If I stop the bitcoind executable, then I get this error:
Make sure the bitcoind server is running and that you are connecting to the correct RPC port.
That tells me at least I'm running against what I think I am.

As it turns out, you can't connect to the RPC server while the node is still downloading blocks. Once the initial sync finishes, you can connect to the server. Using bitcoin-cli getinfo was the original problem.

The (local) CLI command is independent of RPC authentication.
Just try the command like this:
bitcoin-cli getinfo
bitcoin-cli -rpcport=8332 getinfo
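Independent of bitcoin-cli, you can also check whether the RPC interface is reachable and whether the node is still syncing by speaking JSON-RPC to it directly. Below is a minimal sketch using only the Python standard library; it assumes the node from the config above is listening on 127.0.0.1:8332 and reuses the rpcuser/rpcpassword values shown there (the password is a placeholder):

import base64
import json
import urllib.request

# Assumed connection details, matching the bitcoin.conf above (the password is a placeholder).
RPC_URL = "http://127.0.0.1:8332/"
RPC_USER = "bitcoin"
RPC_PASSWORD = "REDACTED_1aAbY"

def rpc_call(method, params=None):
    # Bitcoin-style nodes expose JSON-RPC over HTTP with basic auth.
    payload = json.dumps({"jsonrpc": "1.0", "id": "check", "method": method,
                          "params": params or []}).encode()
    request = urllib.request.Request(RPC_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
    auth = base64.b64encode(f"{RPC_USER}:{RPC_PASSWORD}".encode()).decode()
    request.add_header("Authorization", "Basic " + auth)
    with urllib.request.urlopen(request, timeout=10) as response:
        return json.loads(response.read())["result"]

info = rpc_call("getblockchaininfo")
print("blocks:", info["blocks"], "headers:", info["headers"])
print("verification progress:", info["verificationprogress"])

If this times out or is refused while bitcoind is still doing its initial block download, that matches the behaviour described in the answer above.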

Related

How do I run multiple configuration commands in Dell EMC OS10 with Paramiko?

I am trying to run a series of commands to configure a VLAN on a Dell EMC OS10 switch using Paramiko. However, I am running into a rather frustrating problem.
I want to run the following:
# configure terminal
(config)# interface vlan 3
(conf-if-vl-3)# description VLAN-TEST
(conf-if-vl-3)# end
However, I can't seem to figure out how to achieve this with paramiko.SSHClient().
When I try to use sshclient.exec_command("show vlan"), it works great: it runs the command and exits. However, I don't know how to run more than one command with a single exec_command.
If I run sshclient.exec_command("configure") to access the configuration shell, the command completes and I believe the channel is closed, because my next command, sshclient.exec_command("interface vlan ..."), is not successful: the switch is no longer in configure mode.
If there is a way to establish a persistent channel with exec_command, that would be ideal.
Instead I have resorted to a function as follows:
chan = sshClient.invoke_shell()
chan.send("configure\n")
chan.send("interface vlan 3\n")
chan.send("description VLAN_TEST\n")
chan.send("end\n")
Oddly, this works when I run it from a Python terminal one command at a time.
However, when I call this function from my Python main, it fails. Perhaps the channel is closed too soon when it goes out of scope from the function call?
Please advise if there is a more reasonable way to do this.
Regarding sending commands to the configure mode started with SSHClient.exec_command, see:
Execute (sub)commands in secondary shell/command on SSH server in Python Paramiko
Though it's quite common that "devices" do not support the "exec" channel at all:
Executing command using Paramiko exec_command on device is not working
Regarding your problem with invoke_shell, it's quite possible that the server needs some time to get ready for the next command.
A quick-and-dirty solution is to sleep briefly between the individual send calls.
A better solution is to wait for the command prompt before sending the next command, as in the sketch below.
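As a rough illustration of that second approach, here is a minimal sketch. It assumes the OS10 prompts all end with "#" (adjust the prompt detection for your device); the host name and credentials are placeholders:

import time
import paramiko

def wait_for_prompt(chan, prompt="#", timeout=10):
    # Read from the channel until the prompt shows up or the timeout expires.
    output = ""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if chan.recv_ready():
            output += chan.recv(4096).decode("utf-8", errors="replace")
            if output.rstrip().endswith(prompt):
                break
        else:
            time.sleep(0.1)
    return output

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("switch-address", username="admin", password="secret")  # placeholders

chan = client.invoke_shell()
wait_for_prompt(chan)  # wait for the initial banner and prompt
for cmd in ["configure terminal", "interface vlan 3", "description VLAN-TEST", "end"]:
    chan.send(cmd + "\n")
    wait_for_prompt(chan)  # wait before sending the next command
client.close()

Keeping the client and channel alive in the caller, and closing them only after all commands have been sent and their output read, also avoids the out-of-scope problem suspected in the question.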

Apache VFS SFTP Connection hangs

I am using Apache VFS to upload a file to an SFTP server if the file is newer than the file on the server or doesn't exist there yet. The server connection uses SSH keys for authentication.
I am using the following java code (plus error handling etc.) to connect to the server and check the file modification date-time:
DefaultFileSystemManager manager = new DefaultFileSystemManager();
manager.addProvider("sftp", new SftpFileProvider());
manager.init();
FileSystemOptions opts = createDefaultOptions();
BytesIdentityInfo identityInfo = new BytesIdentityInfo(server.sshKey.getBytes(), null);
SftpFileSystemConfigBuilder.getInstance().setIdentityProvider(opts, identityInfo);
remoteFileObject = manager.resolveFile(new URI("sftp",server.UserName,server.HostName,server.Port,remoteFilePath,null,null).toString(), createDefaultOptions(server.Key));
FileContent content = remoteFileObject.getContent();
return content.getLastModifiedTime();
The SSH key is in the format -----BEGIN RSA PRIVATE KEY----- etc., as exported by PuTTYgen under Conversions -> Export OpenSSH key (i.e. the old format of OpenSSH key, not the new one).
I have tested this code on Windows, with a locally hosted SFTP server (i.e. also on the same Windows machine), and it works successfully.
I am now wanting to use this in a Linux environment (RHEL), connecting to an AWS Transfer SFTP server, secured using SSH keys as described.
I can connect successfully using the SFTP command from the Linux OS shell:
sftp -oIdentityFile=/path/to/test.ppk USER@xxx.xxx.xxx.xxx
But, when I try to run the java code, the code hangs on the call to manager.resolveFile.
After half an hour (I think - this might not be related), I get the following in /var/log/messages:
systemd-logind[1297]: Session 115360 logged out. Waiting for processes to exit.
systemd[1]: session-115360.scope: Succeeded.
systemd-logind[1297]: Removed session 115360.
I don't have SELinux enabled, so I don't think that's interfering in any way.
Can anyone help suggest what might be causing this?
There were a couple of things, as it turns out:
Timeout
The timeout can be set when you configure the SftpFileSystemConfigBuilder, by using the .setSessionTimeout(FileSystemOptions, Duration) method call. This reduces the timeout which, if nothing else, makes the issue easier to debug.
The Session comments in the messages log were not related to the issue. Instead, the issue happened because the exec channel is disabled on the SFTP server, but VFS is trying to use it. At a simple level, this can be disabled using setDisableDetectExecChannel on the SftpFileSystemConfigBuilder object - but you should know the implications of this before doing so.

Hyperledger Fabric - backup and restore

I'm using Hyperledger Fabric and now I'm trying to make a backup of the current situation and restore it on a different computer.
I'm following the procedure found in hyperledger-fabric-backup-and-restore.
The main steps being:
Copy the crypto-config and the channel-artifacts directory
Copy the content of all peers and orderer containers
Modify the docker-compose.yaml to link container volumes to the local directory where I have the backup copy.
Yet it's not working properly in my case: when I restart the network with ./byfn.sh up, I first have all the containers correctly up and running; then, whatever operation I try to execute on the channel (peer channel create, peer channel join, peer channel update) fails with this error:
Error: got unexpected status: BAD_REQUEST -- error applying config update to existing channel 'mychannel': error authorizing update: error validating ReadSet: proposed update requires that key [Group] /Channel/Application be at version 0, but it is currently at version 1
Is there anything I should do which is not mentioned in hyperledger-fabric-backup-and-restore?
I got the same error while trying to create a channel. Bringing the network down and then up again solved my problem.

Unable to instantiate the chaincode in a multicloud setup

I am trying to achieve a multicloud architecture. My network has 2 peers, 1 orderer and a webclient, all in Azure. I am trying to add a peer from Google Cloud Platform to the Azure channel. For this, I created a crypto-config for the 3rd peer from the Azure webclient. In the crypto-config, the peers in Azure keep their own certificates, while for the 3rd peer I placed the newly created certificates. Now I can install, instantiate, invoke and run queries on peers 1 and 2, and I can install the chaincodes on the 3rd peer, but I am unable to instantiate the chaincodes there.
Getting the following error: Error: could not assemble transaction, err proposal response was not successful, error code 500, msg error starting container: error starting container: Post http://unix.sock/containers/create?name=dev-(CORE_PEER_ID)-documentCC-1: dial unix /var/run/docker.sock: connect: permission denied
Can anyone guide me on this?
Note: all the peers, the orderer and the webclient are running in different VMs.
@soundarya
It doesn't matter in how many places your solution is deployed.
The problem is that Docker is only accessible via sudo on that VM; the fix is to add your user to the docker group so the peer can reach the Docker daemon socket without root.
The guide below will help you out:
https://www.digitalocean.com/community/questions/how-to-fix-docker-got-permission-denied-while-trying-to-connect-to-the-docker-daemon-socket
To learn more about docker.sock, you can refer to my answer to another question: Can anyone explain docker.sock
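If you want to confirm the permission problem on the VM that runs the failing peer before changing anything, a small check like the following (Python standard library only, run as the same non-root user the peer runs as) shows whether that user can actually reach the Docker socket named in the error:

import grp
import os

SOCK = "/var/run/docker.sock"  # the socket from the error message above

if not os.path.exists(SOCK):
    print("Docker socket not found; is the Docker daemon running on this VM?")
else:
    # Can the current user read and write the socket without sudo?
    print("read/write access:", os.access(SOCK, os.R_OK | os.W_OK))
    # The usual fix is membership in the group that owns the socket (normally "docker").
    owning_group = grp.getgrgid(os.stat(SOCK).st_gid).gr_name
    member_groups = [grp.getgrgid(gid).gr_name for gid in os.getgroups()]
    print("socket owned by group:", owning_group)
    print("current user in that group:", owning_group in member_groups)

If the access check fails and the user is not in the owning group, adding the user to that group and logging in again is what the linked guide walks through.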

tensorflow serving: failed to connect to 'ipv4:127.0.0.1:9000'

I have installed and configured tensorflow serving on an "AWS t2.large Ubuntu 14.04" server.
When I attempt to test the server with the mnist_client utility by executing the command, bazel-bin/tensorflow_serving/example/mnist_client --num_tests=1000 --server=localhost:9000, I receive the following error message:
E0605 05:03:54.617558520 1232 tcp_client_posix.c:191] failed to connect to 'ipv4:127.0.0.1:9000': timeout occurred
Any idea how to fix this?
I haven't heard of anything like this, but did note that (at least with other test clients) when the server was not ready/up yet, requests would time out. So my guess is that the server is not up yet, or is perhaps in some other bad state.
I met the same problem before. The root cause was that mnist_client was run on my local machine instead of on the server, while the command connects to localhost: bazel-bin/tensorflow_serving/example/mnist_client --num_tests=1000 --server=localhost:9000
It succeeded when I ran the mnist_client utility on the server.
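A quick way to confirm, from wherever the client actually runs, whether the serving port is reachable at all is a plain TCP connect. A minimal sketch using only the Python standard library (the host name is a placeholder for the machine running TensorFlow Serving):

import socket

HOST = "your-serving-host"  # placeholder: the machine actually running tensorflow serving
PORT = 9000

try:
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"{HOST}:{PORT} is reachable")
except OSError as error:
    print(f"could not connect to {HOST}:{PORT}: {error}")

If this fails from your local machine but succeeds on the server itself, it matches the explanation above: the client was pointed at localhost:9000 on a machine that was not running the server.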