How to connect a bitcoin node to a full node over the local network? - bitcoin

I currently have a full Bitcoin node on my local network and I would like to create another node that syncs only over the local network.
I have tried with:
./bitcoind -connect<iplocal>
and
./bitcoind -addnode<iplocal>
but I get the following error
syntax error near unexpected token `newline'
Appreciate your help.

Juan,
It should be ./bitcoind -connect=<iplocal> rather than ./bitcoind -connect<iplocal>; without the =, the shell treats < and > as redirection operators, which is what produces the "syntax error near unexpected token `newline'" message. You should also use -connect rather than -addnode, since -connect ensures the node connects only to the host specified.
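For example, assuming the existing full node's LAN address is 192.168.1.10 (a placeholder for your actual address):
./bitcoind -connect=192.168.1.10
or, equivalently, put connect=192.168.1.10 in the new node's bitcoin.conf.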

Related

aioamqp can't connect to RabbitMQ server

The server reports an error when I connect with aioamqp, but when I use pika to connect to the RabbitMQ server with the same parameters (same username, password, and virtual host), pika can connect to the server.
The code is below:
import aioamqp

transport, protocol = await aioamqp.connect(
    host='localhost', port=5672, virtualhost='/',
    ssl=False, insist=True, login_method='AMQPLAIN')  # use default parameters
How can I solve it?
I have fixed the error, and the solution is below:
1. The code on the master branch did not correspond to the aioamqp release published on PyPI.
2. So I cloned the master branch into my project and ran some tests against it, and it worked well for me.
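A quick way to do that (assuming the upstream repository is github.com/Polyconseil/aioamqp) is to install straight from the master branch instead of the PyPI release:
pip install git+https://github.com/Polyconseil/aioamqp.git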

Hyperledger Fabric - backup and restore

I'm using Hyperledger Fabric and now I'm trying to make a backup of the current situation and restore it on a different computer.
I'm following the procedure found in hyperledger-fabric-backup-and-restore.
The main steps being:
Copy the crypto-config and the channel-artifacts directory
Copy the content of all peers and orderer containers
Modify the docker-compose.yaml to link container volumes to the local directory where I have the backup copy (roughly as sketched below).
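For step 3, the volume mapping for a peer would look roughly like this (a sketch: the ./backup path is hypothetical and /var/hyperledger/production is the peer's default data directory):
volumes:
  - ./backup/peer0.org1.example.com:/var/hyperledger/production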
Yet it's not working properly in my case: when I restart the network with ./byfn.sh up, all the containers come up correctly at first, but then whatever operation I try to execute on the channel (peer channel create, peer channel join, peer channel update) fails with the error:
Error: got unexpected status: BAD_REQUEST -- error applying config update to existing channel 'mychannel': error authorizing update: error validating ReadSet: proposed update requires that key [Group] /Channel/Application be at version 0, but it is currently at version 1
Is there anything I should do which is not mentioned in hyperledger-fabric-backup-and-restore?
I got the same error while trying to create a channel. Bringing the network down and then up again solved my problem.
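Assuming the standard fabric-samples first-network scripts, that amounts to:
./byfn.sh down
./byfn.sh up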

Unable to instantiate the chaincode in multicloud setup

I am trying to set up a multicloud architecture. My network has 2 peers, 1 orderer, and a web client, all running in Azure. I am trying to add a peer from Google Cloud Platform to the Azure channel. For this, I created crypto material for the 3rd peer from the Azure web client: in the crypto-config, the Azure peers keep their own certificates, while the 3rd peer uses the newly created certificates. I can install, instantiate, invoke, and query chaincode on peers 1 and 2, and I can install the chaincode on the 3rd peer, but I am unable to instantiate it there.
I am getting the following error: Error: could not assemble transaction, err proposal response was not successful, error code 500, msg error starting container: error starting container: Post http://unix.sock/containers/create?name=dev-(CORE_PEER_ID)-documentCC-1: dial unix /var/run/docker.sock: connect: permission denied
Can anyone guide me on this?
Note: all the peers, the orderer, and the web client are running in different VMs.
@soundarya
It doesn't matter in how many places your solution is deployed.
The problem is a Docker permission issue: the process is not allowed to talk to the Docker daemon socket (it only works under sudo). Try adding your user to the docker group.
The link below will help you out:
https://www.digitalocean.com/community/questions/how-to-fix-docker-got-permission-denied-while-trying-to-connect-to-the-docker-daemon-socket
To learn more about docker.sock, you can refer to my answer to another question, Can anyone explain docker.sock.
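The usual fix (a sketch; assumes a standard Docker install on Linux where the socket is owned by the docker group) is:
sudo usermod -aG docker $USER
newgrp docker    # or log out and back in for the group change to take effect
docker ps        # should now work without sudo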

How to fix this geth error

Q) After installing geth, I'm getting an error while attaching. How do I fix this error?
geth attach
Fatal: Unable to attach to remote geth: no known transport for URL scheme "c"
If you're using 1.8, you need to include the IPC path:
geth attach ipc:\\.\pipe\geth.ipc
See https://github.com/ethereum/go-ethereum/issues/15746
If you are using Windows, then try:
geth attach http://localhost:8545
This will work properly on Windows.
Note that you don't have access to the personal and miner objects when using localhost ... eth.personal is undefined. Apparently you need to use IPC to get to the lower-level objects.
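If you do need those modules over HTTP, you can expose them explicitly when starting the node (a sketch; flag names vary by geth version: newer releases use --http/--http.api, older ones --rpc/--rpcapi, and recent versions deprecate the personal namespace):
geth --http --http.api eth,web3,personal,miner
geth attach http://localhost:8545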

tensorflow serving: failed to connect to 'ipv4:127.0.0.1:9000'

I have installed and configured tensorflow serving on an "AWS t2.large Ubuntu 14.04" server.
When I attempt to test the server with the mnist_client utility by executing the command, bazel-bin/tensorflow_serving/example/mnist_client --num_tests=1000 --server=localhost:9000, I receive the following error message:
E0605 05:03:54.617558520 1232 tcp_client_posix.c:191] failed to connect to 'ipv4:127.0.0.1:9000': timeout occurred
Any idea how to fix this?
I haven't heard of anything like this, but I did note that (at least with other test clients) when the server was not up or ready yet, requests would time out. So my guess is that the server is not up yet, or perhaps in some other bad state.
I ran into the same problem before. The root cause was that mnist_client was run on my local machine instead of on the server, while the command connects to localhost:
bazel-bin/tensorflow_serving/example/mnist_client --num_tests=1000 --server=localhost:9000
It succeeds when I run the mnist_client utility on the server.
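Alternatively, if you want to run the client from your own machine, point --server at the instance's address rather than localhost and make sure port 9000 is reachable (open it in the instance's security group; the hostname below is a placeholder):
bazel-bin/tensorflow_serving/example/mnist_client --num_tests=1000 --server=<ec2-public-dns>:9000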