Hyperledger Fabric - How to clear out the DEV environment after each blockchain network test? - crash

Fabric version 1.4.3. Blockchain network with 1 Orderer (solo) + 1 Org, running on Docker.
When I try to instantiate chaincode, PEER0 hits a PANIC error and crashes, so instantiating the chaincode is impossible.
At CLI docker prompt, I did this command sequence:
1) $> peer channel create -o $ORDERERNAME -c $CHANNELNAME -f $CONFIGTXFOLDER/devchannel.tx --tls --cafile=$ORDERER_TLSCACERT
Result in Cli: UTC [cli.common] readBlock -> INFO 04e Received block: 0
2) $> peer channel join -o $ORDERERNAME -b $CONFIGTXFOLDER/devgenesis.block --tls --cafile=$ORDERER_TLSCACERT
Result in Cli: UTC [channelCmd] executeJoin -> INFO 03e Successfully submitted proposal to join channel
3) $> peer chaincode install -n $CHCODENAME -p $CHCODEPATH -v $CHCODEVERSION -l node --tls --cafile $ADMIN_PEER_TLSCACERT
Result in Cli: UTC [chaincodeCmd] install -> INFO 04a Installed remotely response:<status:200 payload:"OK" >
4) $> peer chaincode instantiate -C $CHANNELNAME -n $CHCODENAME -v $CHCODEVERSION -o $ORDERERNAME -c '{"Args":["init","a","100","b","200"]}' -P "AND ('GuaraniMSP.admin')" --tls --cafile $ORDERER_TLSCACERT --tlsRootCertFiles $CORE_PEER_TLS_ROOTCERT_FILE
Result before crash in PEER0:
UTC [gossip.state] commitBlock -> ERRO 87e Got error while committing(unexpected Previous block hash. Expected PreviousHash = [c87a4b77e4c790f78b0c2e3c97d97de9907a09daf5dc2f039c7e3b3e1440f5d1], PreviousHash referred in the latest block= [953e31164a84d6d1b9b446130d1e7d5af8ede818284e8fa7c315b2125b519e38]
github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage.(*blockfileMgr).addBlock
[...]
UTC [gossip.state] deliverPayloads -> PANI 87f Cannot commit block to the ledger due to unexpected Previous block hash. Expected PreviousHash = [c87a4b77e4c790f78b0c2e3c97d97de9907a09daf5dc2f039c7e3b3e1440f5d1], PreviousHash referred in the latest block= [953e31164a84d6d1b9b446130d1e7d5af8ede818284e8fa7c315b2125b519e38]
github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage.(*blockfileMgr).addBlock
[...]
/opt/go/src/runtime/asm_amd64.s:1333
panic: Cannot commit block to the ledger due to unexpected Previous block hash. Expected PreviousHash = [c87a4b77e4c790f78b0c2e3c97d97de9907a09daf5dc2f039c7e3b3e1440f5d1], PreviousHash referred in the latest block= [953e31164a84d6d1b9b446130d1e7d5af8ede818284e8fa7c315b2125b519e38]
github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage.(*blockfileMgr).addBlock
[...]
I opened an issue at Hyperledger Fabric JIRA about this situation, and I received the information that I need to clear out the environment to ensure there are no artifacts from prior trials.
Issue FABB-147 at Hyperledger Fabric (My logs are there).
I saw a similar problem documented, with some differences, at peers get crash after anchor peer update.
Now I have to verify that my Hyperledger Fabric environment is clean. But how?
Is there any documented procedure or checklist to verify if there are no artifacts from prior trials in Hyperledger Fabric?
If I remove the ORDERER_GENERAL_GENESISFILE and ORDERER_GENERAL_GENESISPROFILE settings from orderer.yaml and docker-compose.yaml, will the orderer start normally, and will the network then understand that it should use the genesis.block provided when creating the new channel?
Thanks in advance for your help.

Run this command: docker system prune
Then start your network again.
Hope this helps you as it helped me.
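If a single prune is not enough, a more thorough cleanup removes the ledger volumes and chaincode artifacts as well. This is only a sketch under my own assumptions (the compose file name and the dev-peer name filter follow common Fabric defaults and may differ in your setup):
# stop the network and delete its named volumes
docker-compose -f docker-compose.yaml down --volumes --remove-orphans
# remove leftover chaincode containers and images (Fabric names them dev-peer...)
docker rm -f $(docker ps -aq --filter "name=dev-peer") 2>/dev/null
docker rmi -f $(docker images -q --filter "reference=dev-peer*") 2>/dev/null
# prune unused volumes and everything else Docker still holds
docker volume prune -f
docker system prune -f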

While not an answer to cleaning your environment, here's a possible solution to your underlying problem.
The peer channel create command will generate a $CHANNELNAME.block file in its current working directory. This is the block you need to use when running peer channel join, not the orderer's system genesis block.
$> peer channel create ... -c $CHANNELNAME
$> peer channel join ... -b $CONFIGTXFOLDER/devgenesis.block
This will ruin your day: it causes your peers to crash most of the time, and only rarely works after recreating the ledger from a new genesis block. Try this instead:
$> peer channel create ... -c $CHANNELNAME
$> peer channel join ... -b $CHANNELNAME.block

I was using the genesis block from the Fabric system channel, loaded on the Orderer (from orderer.yaml or docker-compose.yaml), to join the channel, and that was wrong.
The source of my error was that I was using the system genesis block instead of the genesis block generated by the peer channel create command.
I now understand that peer channel create takes a channel creation transaction (.tx) as input and, as output, generates a channel genesis block (.block) that must be used as input to peer channel join.
The correct sequence is:
Step 1) $> peer channel create -o $ORDERERNAME -c $CHANNELNAME -f $CONFIGTXFOLDER/devchanneltrack.tx --outputBlock $CHANNELFOLDER/devchannelgen.block --tls --cafile=$ORDERER_TLSCACERT
Step 2) $> peer channel join -o $ORDERERNAME -b $CHANNELFOLDER/devchannelgen.block --tls --cafile=$ORDERER_TLSCACERT
I corrected the scripts, and now I can join the channel. I then installed and instantiated new chaincodes on the channel, and they worked perfectly.

Related

Connection to SSH from a pipeline in Azure DevOps is still not working

I saw a few other posts (in particular this one) about it, but they are from last year. I still have this issue right now. I opened the Preview features from the User settings, but I can't turn off this feature.
My pipelines use SSH connection to run some commands on a virtual machine (basically, pull a Docker image).
All my pipelines are failing. How can I fix it or update the SSH connections?
Update
I set up the Service connection
and I use it in my pipelines with this YAML code:
- task: SSH@0
  displayName: 'SSH: stop shinyproxy'
  inputs:
    sshEndpoint: $(server)
    commands: |
      echo $(pwd) | sudo -S docker stop shinyproxy
    failOnStdErr: false
  continueOnError: true
All pipelines, new and old, get the same error
##[error]Failed to connect to remote machine. Verify the SSH service connection details. Error: Error: All configured authentication methods failed
at doNextAuth (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/client.js:803:21)
at tryNextAuth (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/client.js:993:7)
at USERAUTH_FAILURE (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/client.js:373:11)
at 51 (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/protocol/handlers.misc.js:337:16)
at Protocol.onPayload (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/protocol/Protocol.js:2025:10)
at AESGCMDecipherNative.decrypt (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/protocol/crypto.js:987:26)
at Protocol.parsePacket [as _parse] (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/protocol/Protocol.js:1994:25)
at Protocol.parse (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/protocol/Protocol.js:293:16)
at Socket. (/home/vsts/work/_tasks/SSH_91443475-df55-4874-944b-39253b558790/0.213.0/node_modules/ssh2/lib/client.js:713:21)
at Socket.emit (node:events:527:28) {
level: 'client-authentication'
I have never had this issue before.
Based on the other post, highlighted by Antonia, the solution has to be applied on the Ubuntu machine.
To fix it, open a terminal on that machine and edit /etc/ssh/sshd_config, adding the required line at the end of the file (see the sketch below). After that, restart the SSH service. It is working for me.
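The answer does not include the exact line at this point. As an assumption on my part (not from the original post): this authentication failure commonly comes from the newer SSH task's ssh2 library rejecting ssh-rsa signatures, and the usual sshd_config addition is:
# assumption: re-enable ssh-rsa signatures for existing RSA keys (OpenSSH 8.5+ syntax)
PubkeyAcceptedAlgorithms +ssh-rsa
Then restart the service, for example with sudo systemctl restart ssh on Ubuntu.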

How do I resolve Invalid SSH Key Entry error when starting App with GCE

I'm trying to launch my app on Google Compute Engine, and I get the following error:
Sep 26 22:46:09 debian google_guest_agent[411]: ERROR non_windows_accounts.go:199 Invalid ssh key entry - unrecognized format: ssh-rsa AAAAB...
I'm having a hard time interpreting it. I have the following startup script:
# Talk to the metadata server to get the project id
PROJECTID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
REPOSITORY="github_sleepywakes_thunderroost"
# Install logging monitor. The monitor will automatically pick up logs sent to
# syslog.
curl -s "https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh" | bash
service google-fluentd restart &
# Install dependencies from apt
apt-get update
apt-get install -yq ca-certificates git build-essential supervisor
# Install nodejs
mkdir /opt/nodejs
curl https://nodejs.org/dist/v16.15.0/node-v16.15.0-linux-x64.tar.gz | tar xvzf - -C /opt/nodejs --strip-components=1
ln -s /opt/nodejs/bin/node /usr/bin/node
ln -s /opt/nodejs/bin/npm /usr/bin/npm
# Get the application source code from the Google Cloud Repository.
# git requires $HOME and it's not set during the startup script.
export HOME=/root
git config --global credential.helper gcloud.sh
git clone https://source.developers.google.com/p/${PROJECTID}/r/${REPOSITORY} /opt/app/github_sleepywakes_thunderroost
# Install app dependencies
cd /opt/app/github_sleepywakes_thunderroost
npm install
# Create a nodeapp user. The application will run as this user.
useradd -m -d /home/nodeapp nodeapp
chown -R nodeapp:nodeapp /opt/app
# Configure supervisor to run the node app.
cat >/etc/supervisor/conf.d/node-app.conf << EOF
[program:nodeapp]
directory=/opt/app/github_sleepywakes_thunderroost
command=npm start
autostart=true
autorestart=true
user=nodeapp
environment=HOME="/home/nodeapp",USER="nodeapp",NODE_ENV="production"
stdout_logfile=syslog
stderr_logfile=syslog
EOF
supervisorctl reread
supervisorctl update
# Application should now be running under supervisor
My instance shows I have 2 public SSH keys. The second begins like this one in the error, but after about 12 characters it is different.
Any idea why this might be occurring?
Thanks in advance.
Once you have deployed your VM instance, the SSH key is not configured by default, but you can also configure the SSH key when deploying the VM instance.
To elaborate on the answer of @JohnHanley, I tested this in my environment:
Created a VM instance and verified the SSH configuration. By default there is no SSH key configured; as I said earlier, you can configure an SSH key when deploying the VM.
Created an SSH key pair via the CLI; you can use this link for instruction details.
Navigate to your VM instance, then Turn off > EDIT > Security > Add Item > SSH key 1 - copy and paste the generated public key > Save > Power ON the VM instance.
Then test whether the VM instance is accessible.
Documentation link How to Add SSH keys to project metadata.
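For the CLI step above, here is a minimal sketch of creating a key and attaching it to instance metadata with gcloud; the user name, key file, instance name, and zone are placeholders I made up, not values from the post:
# generate a key pair; the comment should match the Linux user you will log in as
ssh-keygen -t rsa -f ~/.ssh/gce_key -C my-user
# attach the public key to the instance metadata in the "USERNAME:KEY" format the
# guest agent expects (this overwrites any existing ssh-keys entry on the instance)
gcloud compute instances add-metadata my-instance --zone=us-central1-a \
  --metadata="ssh-keys=my-user:$(cat ~/.ssh/gce_key.pub)"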

Hyperledger peer creation error

I'm trying to create a genesis block with the command:
peer channel create -c myc
following the install_instantiate.md file.
However, I keep getting an error message that there is no ordering service endpoint. Can somebody help me with this, please?
I guess you are working with Fabric 1.0.0-alpha. There the API has changed so that you now have to pass an orderer:
peer channel create -o orderer0:7050 -c mychannel
Please have a look at the following document:
https://fabric-rtd.readthedocs.io/en/latest/getting_started.html#create-channel
Have you tried specifying the orderer address?
If you are using Docker:
CORE_PEER_COMMITTER_LEDGER_ORDERER=orderer:7050 peer channel create -c myc
If it doesn't work, try the Getting Setup guide, or see the Troubleshooting section.

Can't use tc in docker container

I am using tc to limit the send rate in a Docker container.
I added the script below to my Dockerfile:
tc qdisc add dev eth0 root handle 1: htb default 2
tc class add dev eth0 parent 1:1 classid 1:2 htb rate 2mbit ceil 2mbit prio 2
tc qdisc add dev eth0 parent 1:2 handle 2: sfq perturb 10
tc filter add dev eth0 protocol ip parent 1:0 u32 match ip dst 192.168.1.124 flowid 1:2
I run Docker under the root account with this command:
docker run --cap-add=NET_ADMIN --name lqt_build -d -p 8443:8443 -p 443:443 -p 3478:3478 lqt_build
But it still shows this error:
Step 25 : RUN cd /usr/share/ta/ && sudo ./tt rate
---> Running in fb6a4477ad6c
RTNETLINK answers: Operation not permitted
RTNETLINK answers: Operation not permitted
RTNETLINK answers: Operation not permitted
RTNETLINK answers: Operation not permitted
We have an error talking to the kernel
[8] System error: read parent: connection reset by peer
It seems the kernel prevents apps in the container from changing some kernel settings even though they're running as root. I guess the container doesn't have its own kernel but runs on the kernel shared with (potentially) many other containers so it can't be allowed to touch settings of the underlying kernel. Does anyone have experience with this issue?
The root cause was running tc from the Dockerfile: the NET_ADMIN capability does not take effect at build time. The tc commands work fine once the Docker container is running. Thanks user2915097.
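As an illustration of that fix, one option is to move the tc commands into an entrypoint script that runs at container start, when NET_ADMIN is effective. This is only a sketch under my assumptions (the script name entrypoint.sh and the final exec'd command are made up; eth0 and 192.168.1.124 are taken from the question):
#!/bin/sh
# entrypoint.sh - apply the rate limit at container start, then run the real command
tc qdisc add dev eth0 root handle 1: htb default 2
tc class add dev eth0 parent 1:1 classid 1:2 htb rate 2mbit ceil 2mbit prio 2
tc qdisc add dev eth0 parent 1:2 handle 2: sfq perturb 10
tc filter add dev eth0 protocol ip parent 1:0 u32 match ip dst 192.168.1.124 flowid 1:2
exec "$@"
In the Dockerfile, copy the script and set it as the ENTRYPOINT, keeping the existing docker run --cap-add=NET_ADMIN flags unchanged.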

dotcloud push on cygwin fails with "rsync error: unexplained error (code 255)" (similar with git and hg)

Though I have followed the usual steps for using the dotCloud CLI under Cygwin, dotcloud push fails in all cases: --rsync, --hg, and --git.
I am on Windows 8 and Cygwin.
How can I push successfully?
Sample output:
me@host /cygdrive/d/project
$ dotcloud push --rsync
==> Pushing code with rsync from "./" to application myapp
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at /home/lapo/package/rsync-3.0.9-1/src/rsync-3.0.9/io.c(605) [sender=3.0.9]
me@host /cygdrive/d/project
$ dotcloud push --git
==> Pushing code with git from "./" to application myapp
Permission denied (publickey,password).
fatal: The remote end hung up unexpectedly
me@host /cygdrive/d/project
$ dotcloud push --hg
==> Pushing code with mercurial from "./" to application myapp
abort: no suitable response from remote hg!
Error: Mercurial returned a fatal error
You may be running into a bug in Cygwin's group permissions. Vineet Gupta gives a workaround in his blog. The problem comes from the very strict permissions expected by ssh around the keys, and the solution is to set the permission on the ssh key properly (to 600, rw by owner only). Cygwin seems to need the group to be added manually.
Updating the steps to get the dotCloud CLI installed, including setting the permissions, leads to:
Start the Cygwin Setup.
Select default choices until you reach the package selection dialog.
Enable the following packages:
net/openssh
net/rsync
devel/git
devel/mercurial
python/python (make sure it’s at least 2.6!)
web/wget
After the installation, you should have a Cygwin icon on your desktop. Start it: you will get a command-line shell.
Download easy_install
wget http://peak.telecommunity.com/dist/ez_setup.py
Install easy_install
python ez_setup.py
You now have easy_install; let’s use it to install pip:
easy_install pip
Now install dotcloud (the CLI)
pip install dotcloud
Set up the CLI with your credentials. This will also download the ssh key.
dotcloud setup
New step: update the permissions on your dotCloud key:
chgrp Users ~/.dotcloud_cli/dotcloud.key
chmod 600 ~/.dotcloud_cli/dotcloud.key
Now you should be able to run dotcloud push.
If you have multiple dotCloud accounts, then you will need to repeat this process for each account, since each account has its own key. Also note that you shouldn't have to set these permissions manually, but it seems like the group ownership is sometimes the wrong default in Cygwin. Linux and OSX don't seem to show this problem, though the permissions must be 600 for all OSes, so it is worth checking.
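Since the paragraph above says the permissions are worth checking, here is a quick sketch (the key path comes from the steps above):
# should show -rw------- with your user as owner
ls -l ~/.dotcloud_cli/dotcloud.key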