I'm working with Hyperledger Fabric 2.2, setting up chaincode on the test network. I'm not able to get values from GetAttributeValue. Below is my chaincode.
// Read custom attributes from the invoking client's enrollment certificate.
identity := ctx.GetClientIdentity()
userID, found, err := identity.GetAttributeValue("userID")
if err != nil || !found {
    return fmt.Errorf("cannot get userID from the certificate")
}
userName, found, err := identity.GetAttributeValue("userName")
if err != nil || !found {
    return fmt.Errorf("cannot get userName from the certificate")
}
Can anyone tell me how I can get the value using GetAttributeValue?
Below is the script that I'm running:
./network.sh down
./network.sh up createChannel -ca -s couchdb
./network.sh deployCC -ccn bank -ccp ../chaincode/chaincode/bank-chaincode/ -ccl go
export PATH=${PWD}/../bin:$PATH
export FABRIC_CFG_PATH=$PWD/../config/
echo "1"
export FABRIC_CA_CLIENT_HOME=${PWD}/organizations/peerOrganizations/org1.example.com/
fabric-ca-client register --caname ca-org1 --id.name owner --id.secret ownerpw --id.type client --tls.certfiles "${PWD}/organizations/fabric-ca/org1/tls-cert.pem"
echo "1"
fabric-ca-client enroll -u https://owner:ownerpw@localhost:7054 --caname ca-org1 -M "${PWD}/organizations/peerOrganizations/org1.example.com/users/owner@org1.example.com/msp" --tls.certfiles "${PWD}/organizations/fabric-ca/org1/tls-cert.pem"
echo "1"
cp "${PWD}/organizations/peerOrganizations/org1.example.com/msp/config.yaml" "${mailto:pwd}/organizations/peerorganizations/org1.example.com/users/owner#org1.example.com/msp/config.yaml"
echo "1"
export FABRIC_CA_CLIENT_HOME=${PWD}/organizations/peerOrganizations/org2.example.com/
fabric-ca-client register --caname ca-org2 --id.name buyer --id.secret buyerpw --id.type client --tls.certfiles "${PWD}/organizations/fabric-ca/org2/tls-cert.pem"
echo "1"
fabric-ca-client enroll -u https://buyer:buyerpw@localhost:8054 --caname ca-org2 -M "${PWD}/organizations/peerOrganizations/org2.example.com/users/buyer@org2.example.com/msp" --tls.certfiles "${PWD}/organizations/fabric-ca/org2/tls-cert.pem"
# sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.bank.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/priv_sk --cfg.identities.allowremove --cfg.affiliations.allowremove -b admin:adminpw'
echo "1"
cp "${PWD}/organizations/peerOrganizations/org2.example.com/msp/config.yaml" "${mailto:pwd}/organizations/peerorganizations/org2.example.com/users/buyer#org2.example.com/msp/config.yaml"
export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=${PWD}/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=${PWD}/organizations/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
export CORE_PEER_ADDRESS=localhost:7051
export BANK=$(echo -n "{\"metaData\": {\"addedBy\": \"owner\",\"updatedBy\": \"owner\",\"transaction\": \"done\",\"participants\":{\"field\": {\"userID\": \"x\",\"userName\": \"ravin\",\"organizationID\": \"Org1\",\"organizationName\": \"myorgs\"}}},\"id\": \"owner\",\"name\": \"bank1\"}" | base64 | tr -d \\n)
peer chaincode invoke -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --tls --cafile "${PWD}/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem" -C mychannel -n bank -c '{"function":"CreateBank","Args":[]}' --transient "{\"bank\":\"$BANK\"}"
If there is anything wrong in the script, please point that out as well.
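One thing that stands out, offered as a guess rather than a confirmed fix: the register commands above never attach attributes to the identities, and GetAttributeValue can only see attributes that were embedded in the enrollment certificate. A sketch of the org1 register step with the two attributes added (the attribute values here are assumptions taken from the BANK payload below):

fabric-ca-client register --caname ca-org1 --id.name owner --id.secret ownerpw \
  --id.type client \
  --id.attrs 'userID=x:ecert,userName=ravin:ecert' \
  --tls.certfiles "${PWD}/organizations/fabric-ca/org1/tls-cert.pem"

# After re-enrolling, the attributes should appear in the signcert as a JSON
# extension under Fabric's attribute OID 1.2.3.4.5.6.7.8.1:
openssl x509 -noout -text \
  -in "${PWD}/organizations/peerOrganizations/org1.example.com/users/owner@org1.example.com/msp/signcerts/cert.pem" \
  | grep -A 2 '1.2.3.4.5.6.7.8.1'

Also note that the invoke at the end runs with CORE_PEER_MSPCONFIGPATH pointing at Admin@org1.example.com, not at the newly enrolled owner, so the owner's attributes would not be visible to the chaincode for that transaction anyway.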
Related
ansible.posix.synchronize, a wrapper for rsync, is failing with the message:
"msg": "Warning: Permanently added <host> (ECDSA) to the list of known hosts.\r\n=========================================================================\nUse of this computer system is for authorized and management approved use\nonly. All usage is subject to monitoring. Unauthorized use is strictly\nprohibited and subject to prosecution and/or corrective action up to and\nincluding termination of employment.\n=========================================================================\nrsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.3]\n"
My playbook
---
- name: Test rsync
  hosts: all
  become: yes
  become_user: postgres
  tasks:
    - name: Populate scripts/common using copy
      copy:
        src: common/
        dest: /home/postgres/scripts/common
    - name: Populate scripts/common using rsync
      ansible.posix.synchronize:
        src: common/
        dest: /home/postgres/scripts/common
The "Populate scripts/common using copy" task executes with no problem.
Full error output
fatal: [<host>]: FAILED! => {
    "changed": false,
    "cmd": "sshpass -d3 /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' /opt/common/ pg_deployment@<host>:/home/postgres/scripts/common",
    "invocation": {
        "module_args": {
            "_local_rsync_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "_local_rsync_path": "rsync",
            "_substitute_controller": false,
            "archive": true,
            "checksum": false,
            "compress": true,
            "copy_links": false,
            "delay_updates": true,
            "delete": false,
            "dest": "pg_deployment@<host>:/home/postgres/scripts/common",
            "dest_port": null,
            "dirs": false,
            "existing_only": false,
            "group": null,
            "link_dest": null,
            "links": null,
            "mode": "push",
            "owner": null,
            "partial": false,
            "perms": null,
            "private_key": null,
            "recursive": null,
            "rsync_opts": [],
            "rsync_path": "sudo -u postgres rsync",
            "rsync_timeout": 0,
            "set_remote_user": true,
            "src": "/opt/common/",
            "ssh_args": null,
            "ssh_connection_multiplexing": false,
            "times": null,
            "verify_host": false
        }
    },
    "msg": "Warning: Permanently added '<host>' (ECDSA) to the list of known hosts.\r\n=========================================================================\nUse of this computer system is for authorized and management approved use\nonly. All usage is subject to monitoring. Unauthorized use is strictly\nprohibited and subject to prosecution and/or corrective action up to and\nincluding termination of employment.\n=========================================================================\nrsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.3]\n",
    "rc": 5
}
Notes:
User pg_deployment has passwordless sudo to postgres. This Ansible playbook is being run inside a Docker container.
After messing with it a bit more, I found that I can run the rsync command directly (not using Ansible):
SSHPASS=<my_ssh_pass> sshpass -e /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' common pg_deployment@<host>:/home/postgres/
The only difference I can see is that I used sshpass -e while Ansible defaulted to sshpass -d<n>. Could the credentials Ansible was trying to pass in be incorrect? If they are incorrect for ansible.posix.synchronize, then why aren't they incorrect for other Ansible tasks?
EDIT
Confirmed that if I run
sshpass -d10 /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' common pg_deployment@<host>:/home/postgres
(I chose a random number, 10, for the file descriptor.) I get the same error as above:
"msg": "Warning: Permanently added <host> (ECDSA) to the list of known hosts.\r\n=========================================================================\nUse of this computer system is for authorized and management approved use\nonly. All usage is subject to monitoring. Unauthorized use is strictly\nprohibited and subject to prosecution and/or corrective action up to and\nincluding termination of employment.\n=========================================================================\nrsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.3]\n"
This suggests the problem is whatever Ansible is supplying on that file descriptor. It isn't a huge problem, since I can just pass the password to sshpass as an environment variable in my Docker container (it's ephemeral), but I would still like to know what is going on with Ansible here.
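For reference, and assuming nothing beyond standard sshpass behavior: -e makes sshpass read the password from the SSHPASS environment variable, while -d<n> makes it read from an already-open file descriptor n that the caller must set up. A minimal sketch of each mode, with user@host and the password as placeholders:

# -e: password comes from the SSHPASS environment variable
SSHPASS='secret' sshpass -e ssh user@host true

# -d3: password is read from file descriptor 3, which the caller must open;
# here the shell feeds it via a here-string bound to fd 3
sshpass -d3 ssh user@host true 3<<<'secret'

Running -d10 by hand without opening fd 10, as in the test above, gives sshpass nothing to read, which may explain why it reproduces the failure.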
SOLUTION (using command)
---
- name: Create Postgres Cluster
  hosts: all
  become: yes
  become_user: postgres
  tasks:
    - name: Create Scripts Directory
      file:
        path: /home/postgres/scripts
        state: directory
    - name: Populate scripts/common
      command: sshpass -e /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' common pg_deployment@<host>:/home/postgres/scripts
      delegate_to: 127.0.0.1
      become: no
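A note on the solution: sshpass -e reads the password from the SSHPASS environment variable, so it must be present in the environment of the delegated command on the controller. A minimal sketch of running the same thing by hand, reusing the placeholder password from earlier:

export SSHPASS='<my_ssh_pass>'
sshpass -e /usr/bin/rsync --archive common pg_deployment@<host>:/home/postgres/scripts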
I've tried many combinations to get this to work, but cannot for some reason. I am not using keys in our environment, so passwords will have to do.
I've tried ProxyJump and sshuttle as well.
It's strange that the ping module works, but another module or playbook doesn't.
The rough setup is:
a laptop running Ubuntu with Ansible installed
[laptop] ---> [productionjumphost] ---> [production_iosxr_router]
ansible.cfg:
[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ControlPath=/tmp/ansible-%r@%h:%p -F ssh.config
~/.ssh/config and ssh.cfg (both configured the same):
Host modeljumphost
    HostName modeljumphost.fqdn.com.au
    User user
    Port 22

Host productionjumphost
    HostName productionjumphost.fqdn.com.au
    User user
    Port 22

Host model_iosxr_router
    HostName model_iosxr_router
    User user
    ProxyCommand ssh -W %h:22 modeljumphost

Host production_iosxr_router
    HostName production_iosxr_router
    User user
    ProxyCommand ssh -W %h:22 productionjumphost
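For comparison, since ProxyJump was mentioned above: on OpenSSH 7.3+ the ProxyCommand stanzas can equivalently be written with ProxyJump. A sketch for the production pair, assuming the same host aliases:

Host production_iosxr_router
    HostName production_iosxr_router
    User user
    ProxyJump productionjumphost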
inventory:
[local]
192.168.xxx.xxx
[router]
production_iosxr_router ansible_connection=network_cli ansible_user=user ansible_ssh_pass=password
[router:vars]
ansible_network_os=iosxr
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q user@productionjumphost.fqdn.com.au"'
ansible_user=user
ansible_ssh_pass=password
playbook.yml:
---
- name: Network Getting Started First Playbook
  hosts: router
  gather_facts: no
  connection: network_cli
  tasks:
    - name: show version
      iosxr_command:
        commands: show version
I can run an ad-hoc Ansible command and a successful ping is returned: ansible production_iosxr_router -i inventory -m ping -vvvvv
Result:
production_iosxr_router | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "invocation": {
        "module_args": {
            "data": "pong"
        }
    },
    "ping": "pong"
}
Running the playbook: ansible-playbook -i inventory playbook.yml -vvvvv
production_iosxr_router | FAILED! => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "msg": "[Errno -2] Name or service not known"
}
I am trying to provision an EC2 instance and install a LAMP server on it using Ansible from localhost. I have successfully provisioned the instance, but I was not able to install Apache, PHP, and MySQL due to the error "Failed to connect to the host via ssh.".
OS: El Capitan 10.11.6
Ansible: 2.0.2.0
Here is the playbook:
---
- hosts: localhost
  connection: local
  gather_facts: no
  vars_files:
    - "vars/{{ project_name }}.yml"
    - "vars/vpc_info.yml"
  tasks:
    - name: Provision
      local_action:
        module: ec2
        region: "xxxxxx"
        vpc_subnet_id: "xxxxxx"
        assign_public_ip: yes
        key_name: "xxxxxxx"
        instance_type: "t2.nano"
        image: "xxxxxxxx"
        wait: yes
        instance_tags:
          Name: "LAMP"
          class: "test"
          environment: "dev"
          project: "{{ project_name }}"
          az: a
        exact_count: 1
        count_tag:
          Name: "LAMP"
        monitoring: yes
      register: ec2a

- hosts: lamp
  roles:
    - lamp_server
The content of the ansible.cfg file:
[defaults]
private_key_file=/Users/nico/.ssh/xxxxx.pem
The inventory:
lamp ansible_ssh_host=<EC2 IP> ansible_user=ubuntu
The command used for running the playbook:
ansible-playbook -i inventory ec2_up.yml -e project_name="lamp_server" -vvvv
Output:
ESTABLISH SSH CONNECTION FOR USER: ubuntu
<xxxxxxxxxx> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/Users/nico/.ssh/xxxxxxx.pem"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=ubuntu -o ConnectTimeout=10 -o ControlPath=/Users/nico/.ansible/cp/ansible-ssh-%h-%p-%r xxxxxxx '/bin/sh -c '"'"'( umask 22 && mkdir -p "` echo $HOME/.ansible/tmp/ansible-tmp-1475186461.08-93383010782630 `" && echo "` echo $HOME/.ansible/tmp/ansible-tmp-1475186461.08-93383010782630 `" )'"'"''
52.28.251.117 | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh.",
    "unreachable": true
}
I have read a lot of threads regarding this error, but nothing helped me. :(
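As a quick manual check (not from the original post) that separates plain SSH reachability from Ansible itself, the same key path and user from the inventory can be tried directly, substituting the EC2 IP:

ssh -i /Users/nico/.ssh/xxxxx.pem -o ConnectTimeout=10 ubuntu@<EC2 IP> 'echo ok'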
Also tried with the paramiko connection plugin:
ansible-playbook -i inventory ec2_up.yml -e project_name="lamp_server" -vvvv -c paramiko
I wrote the command below, which copies the id_dsa.pub file to another server as part of my auto-login feature. But every time, the following message appears on the console:
spawn scp -o StrictHostKeyChecking=no /opt/mgtservices/.ssh/id_dsa.pub root@12.43.22.47:/root/.ssh/id_dsa.pub
Password:
Password:
Below is the script I wrote for this:
function sshkeygenerate()
{
    if ! [ -f $HOME/.ssh/id_dsa.pub ]; then
        expect -c"
            spawn ssh-keygen -t dsa -f $HOME/.ssh/id_dsa
            expect y/n { send y\r ; exp_continue }
            expect passphrase): { send \r ; exp_continue }
            expect again: { send \r ; exp_continue }
            spawn chmod 700 $HOME/.ssh && chmod 700 $HOME/.ssh/*
            exit"
    fi
    expect -c"
        spawn scp -o StrictHostKeyChecking=no $HOME/.ssh/id_dsa.pub root@12.43.22.47:/root/.ssh/id_dsa.pub
        expect *assword: { send $ROOTPWD\r }
        expect yes/no { send yes\r ; exp_continue }
        spawn ssh -o StrictHostKeyChecking=no root@12.43.22.47 \"chmod 755 /root/.ssh/authorized_keys\"
        expect *assword: { send $ROOTPWD\r }
        expect yes/no { send yes\r ; exp_continue }
        spawn ssh -o StrictHostKeyChecking=no root@12.43.22.47 \"cat /root/.ssh/id_dsa.pub >> /root/.ssh/authorized_keys\"
        expect *assword: { send $ROOTPWD\r }
        expect yes/no { send yes\r ; exp_continue }
        sleep 1
        exit"
}
You should first set up passwordless SSH to the destination server; then you won't need to enter the password during the scp.
Assuming 192.168.0.11 is the destination machine:
1) ssh-keygen -t rsa
2) ssh sheena@192.168.0.11 mkdir -p .ssh
3) cat .ssh/id_rsa.pub | ssh sheena@192.168.0.11 'cat >> .ssh/authorized_keys'
4) ssh sheena@192.168.0.11 "chmod 700 .ssh; chmod 640 .ssh/authorized_keys"
Link for reference:
http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/
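For what it's worth, steps 2-4 can usually be collapsed into ssh-copy-id where it is available; a sketch with the same user and host:

ssh-keygen -t rsa
ssh-copy-id sheena@192.168.0.11   # prompts once for the password, then appends the key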
I create a DB with certutil:
myproject/bin/pkcert
echo "dartdart" > pwdfile
certutil -N -d 'sql:./' -f pwdfile
Then I import my certificate, validated by an authority:
certutil -d "sql:./" -A -t "C,C,C" -n "my_cert" -i certificate.crt
I check that it works:
certutil -L -d 'sql:./'
Certificate Nickname                          Trust Attributes
                                              SSL,S/MIME,JAR/XPI

my_cert                                       C,C,C
My main.dart
library main;

import "dart:io";

void main() {
  var testPkcertDatabase = Platform.script.resolve('./pkcert').toFilePath();
  SecureSocket.initialize(database: testPkcertDatabase, password: 'dartdart');
  HttpServer
      .bindSecure(InternetAddress.ANY_IP_V6,
          8443,
          certificateName: 'my_cert')
      .then((server) {
    server.listen((HttpRequest request) {
      request.response.write('Hello, world!');
      request.response.close();
    });
  });
}
I execute, and I get this error:
Uncaught Error: CertificateException: Cannot find server certificate by nickname: my_cert (OS Error: security library: read-only database., errno = -8126)
Did I miss something, or should I report a bug?
Thank you.
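One aside worth checking, as an assumption rather than a diagnosis: certutil -A imports only the certificate, not its private key, and serving TLS requires the key to be in the same DB. Listing the keys shows whether it is there (same DB directory and pwdfile as above):

certutil -K -d 'sql:./' -f pwdfile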
Have you tried adding -s "cn=mycert" to
certutil -d "sql:./" -A -t "C,C,C" -n "my_cert" -i certificate.crt
and use it with
HttpServer
    .bindSecure(InternetAddress.ANY_IP_V6,
        8443,
        certificateName: 'CN=my_cert')
as shown in this blog post http://jamesslocum.com/post/70003236123 ?
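A related sketch, in case a missing private key rather than the nickname is the real issue: generating a self-signed certificate directly in the DB creates the key alongside it (the subject and validity here are assumptions):

certutil -S -d 'sql:./' -f pwdfile -n my_cert -s 'CN=my_cert' -x -t 'C,C,C' -v 12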