Dart: using an SSL certificate issued by a certificate authority

I create a database with certutil, in myproject/bin/pkcert:
echo "dartdart" > pwdfile
certutil -N -d 'sql:./' -f pwdfile
Then I import my certificate, signed by an authority:
certutil -d "sql:./" -A -t "C,C,C" -n "my_cert" -i certificate.crt
I check that it works:
certutil -L -d 'sql:./'
Certificate Nickname                          Trust Attributes
                                              SSL,S/MIME,JAR/XPI

my_cert                                       C,C,C
My main.dart:
library main;

import "dart:io";

void main() {
  var testPkcertDatabase = Platform.script.resolve('./pkcert').toFilePath();
  SecureSocket.initialize(database: testPkcertDatabase,
                          password: 'dartdart');
  HttpServer
      .bindSecure(InternetAddress.ANY_IP_V6,
                  8443,
                  certificateName: 'my_cert')
      .then((server) {
    server.listen((HttpRequest request) {
      request.response.write('Hello, world!');
      request.response.close();
    });
  });
}
When I run it, I get this error:
Uncaught Error: CertificateException: Cannot find server certificate by nickname: my_cert (OS Error: security library: read-only database., errno = -8126)
Did I miss something, or should I report a bug?
Thank you.

Have you tried adding -s "cn=mycert" to
certutil -d "sql:./" -A -t "C,C,C" -n "my_cert" -i certificate.crt
and then using it with
HttpServer
    .bindSecure(InternetAddress.ANY_IP_V6,
                8443,
                certificateName: 'CN=my_cert')
as shown in this blog post http://jamesslocum.com/post/70003236123 ?
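If the lookup still fails, it can help to dump the stored certificate and compare its subject with the name passed to certificateName. A quick check, assuming the same database directory and nickname as above:

# Pretty-print the stored certificate; its Subject: line shows the
# CN that certificateName may need to match (e.g. 'CN=...'):
certutil -L -d 'sql:./' -n my_cert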

Related

How can I use GetAttributeValue in Hyperledger Fabric 2.2?

I'm working with Hyperledger Fabric 2.2 and setting up chaincode on the test network. I'm not able to get the values from GetAttributeValue. Below is my chaincode.
identity := ctx.GetClientIdentity()

userID, found, err := identity.GetAttributeValue("userID")
if err != nil || !found {
    return fmt.Errorf("cannot get userID from the certificates")
}

userName, found, err := identity.GetAttributeValue("userName")
if err != nil || !found {
    return fmt.Errorf("cannot get userName from the certificates")
}
Can anyone tell me how I can get the value using GetAttributeValue?
Below is the script that I'm running:
Script
./network.sh down
./network.sh up createChannel -ca -s couchdb
./network.sh deployCC -ccn bank -ccp ../chaincode/chaincode/bank-chaincode/ -ccl go
export PATH=${PWD}/../bin:$PATH
export FABRIC_CFG_PATH=$PWD/../config/
echo "1"
export FABRIC_CA_CLIENT_HOME=${PWD}/organizations/peerOrganizations/org1.example.com/
fabric-ca-client register --caname ca-org1 --id.name owner --id.secret ownerpw --id.type client --tls.certfiles "${PWD}/organizations/fabric-ca/org1/tls-cert.pem"
echo "1"
fabric-ca-client enroll -u https://owner:ownerpw@localhost:7054 --caname ca-org1 -M "${PWD}/organizations/peerOrganizations/org1.example.com/users/owner@org1.example.com/msp" --tls.certfiles "${PWD}/organizations/fabric-ca/org1/tls-cert.pem"
echo "1"
cp "${PWD}/organizations/peerOrganizations/org1.example.com/msp/config.yaml" "${mailto:pwd}/organizations/peerorganizations/org1.example.com/users/owner#org1.example.com/msp/config.yaml"
echo "1"
export FABRIC_CA_CLIENT_HOME=${PWD}/organizations/peerOrganizations/org2.example.com/
fabric-ca-client register --caname ca-org2 --id.name buyer --id.secret buyerpw --id.type client --tls.certfiles "${PWD}/organizations/fabric-ca/org2/tls-cert.pem"
echo "1"
fabric-ca-client enroll -u https://buyer:buyerpw@localhost:8054 --caname ca-org2 -M "${PWD}/organizations/peerOrganizations/org2.example.com/users/buyer@org2.example.com/msp" --tls.certfiles "${PWD}/organizations/fabric-ca/org2/tls-cert.pem"
# sh -c 'fabric-ca-server start --ca.certfile /etc/hyperledger/fabric-ca-server-config/ca.bank.com-cert.pem --ca.keyfile /etc/hyperledger/fabric-ca-server-config/priv_sk --cfg.identities.allowremove --cfg.affiliations.allowremove -b admin:adminpw'
echo "1"
cp "${PWD}/organizations/peerOrganizations/org2.example.com/msp/config.yaml" "${mailto:pwd}/organizations/peerorganizations/org2.example.com/users/buyer#org2.example.com/msp/config.yaml"
export CORE_PEER_TLS_ENABLED=true
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_TLS_ROOTCERT_FILE=${PWD}/organizations/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
export CORE_PEER_MSPCONFIGPATH=${PWD}/organizations/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
export CORE_PEER_ADDRESS=localhost:7051
export BANK=$(echo -n "{\"metaData \": {\"addedBy\": \"owner\",\"updatedBy\": \"owner\",\"transaction\": \"done\",\"participants\":{\"field\": {\"userID\": \"x\",\"userName\": \"ravin\",\"organizationID\": \"Org1\",\"organizationName\": \"myorgs\"}}},\"id\": \"owner\",\"name\": \"bank1\"}" | base64 | tr -d \\n)
peer chaincode invoke -o localhost:7050 --ordererTLSHostnameOverride orderer.example.com --tls --cafile "${PWD}/organizations/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem" -C mychannel -n bank -c '{"function":"CreateBank","Args":[]}' --transient "{\"bank\":\"$BANK\"}"
If there is anything wrong in the script, please point that out too.
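One observation, offered as a sketch rather than a confirmed fix: GetAttributeValue only returns attributes that were embedded in the client's enrollment certificate, and the register command above does not request any. With fabric-ca-client, attributes are attached at registration via --id.attrs (the :ecert suffix embeds them in the certificate by default); the attribute values below are illustrative:

# Register the identity with the custom attributes the chaincode reads:
fabric-ca-client register --caname ca-org1 \
  --id.name owner --id.secret ownerpw --id.type client \
  --id.attrs 'userID=x:ecert,userName=ravin:ecert' \
  --tls.certfiles "${PWD}/organizations/fabric-ca/org1/tls-cert.pem"

# Optionally request them explicitly at enrollment:
fabric-ca-client enroll -u https://owner:ownerpw@localhost:7054 --caname ca-org1 \
  --enrollment.attrs "userID,userName" \
  -M "${PWD}/organizations/peerOrganizations/org1.example.com/users/owner@org1.example.com/msp" \
  --tls.certfiles "${PWD}/organizations/fabric-ca/org1/tls-cert.pem"

Also note that the peer chaincode invoke above runs with CORE_PEER_MSPCONFIGPATH pointing at the Admin MSP, so the chaincode sees the admin's certificate, not owner's; point CORE_PEER_MSPCONFIGPATH at the owner's MSP before invoking if the attributes should come from that identity.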

Ansible synchronize (rsync) fails

ansible.posix.synchronize, a wrapper for rsync, is failing with the message:
"msg": "Warning: Permanently added <host> (ECDSA) to the list of known hosts.\r\n=========================================================================\nUse of this computer system is for authorized and management approved use\nonly. All usage is subject to monitoring. Unauthorized use is strictly\nprohibited and subject to prosecution and/or corrective action up to and\nincluding termination of employment.\n=========================================================================\nrsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.3]\n"
My playbook:
---
- name: Test rsync
  hosts: all
  become: yes
  become_user: postgres

  tasks:
    - name: Populate scripts/common using copy
      copy:
        src: common/
        dest: /home/postgres/scripts/common
    - name: Populate scripts/common using rsync
      ansible.posix.synchronize:
        src: common/
        dest: /home/postgres/scripts/common
The "Populate scripts/common using copy" task executes with no problem.
Full error output:
fatal: [<host>]: FAILED! => {
    "changed": false,
    "cmd": "sshpass -d3 /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' /opt/common/ pg_deployment@<host>:/home/postgres/scripts/common",
    "invocation": {
        "module_args": {
            "_local_rsync_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
            "_local_rsync_path": "rsync",
            "_substitute_controller": false,
            "archive": true,
            "checksum": false,
            "compress": true,
            "copy_links": false,
            "delay_updates": true,
            "delete": false,
            "dest": "pg_deployment@<host>:/home/postgres/scripts/common",
            "dest_port": null,
            "dirs": false,
            "existing_only": false,
            "group": null,
            "link_dest": null,
            "links": null,
            "mode": "push",
            "owner": null,
            "partial": false,
            "perms": null,
            "private_key": null,
            "recursive": null,
            "rsync_opts": [],
            "rsync_path": "sudo -u postgres rsync",
            "rsync_timeout": 0,
            "set_remote_user": true,
            "src": "/opt/common/",
            "ssh_args": null,
            "ssh_connection_multiplexing": false,
            "times": null,
            "verify_host": false
        }
    },
    "msg": "Warning: Permanently added '<host>' (ECDSA) to the list of known hosts.\r\n=========================================================================\nUse of this computer system is for authorized and management approved use\nonly. All usage is subject to monitoring. Unauthorized use is strictly\nprohibited and subject to prosecution and/or corrective action up to and\nincluding termination of employment.\n=========================================================================\nrsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.3]\n",
    "rc": 5
}
Notes:
- User pg_deployment has passwordless sudo to postgres.
- This Ansible playbook is being run inside a Docker container.
After messing with it a bit more, I found that I can run the rsync command directly (without Ansible):
SSHPASS=<my_ssh_pass> sshpass -e /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' common pg_deployment@<host>:/home/postgres/
The only difference I can see is that I used sshpass -e while Ansible defaulted to sshpass -d<N>. Could the credentials Ansible was trying to pass in be incorrect? If they are incorrect for ansible.posix.synchronize, why aren't they incorrect for other Ansible tasks?
EDIT
Confirmed: if I run
sshpass -d10 /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' common pg_deployment@<host>:/home/postgres
(I chose a random number, 10, for the file descriptor) I get the same error as above:
"msg": "Warning: Permanently added <host> (ECDSA) to the list of known hosts.\r\n=========================================================================\nUse of this computer system is for authorized and management approved use\nonly. All usage is subject to monitoring. Unauthorized use is strictly\nprohibited and subject to prosecution and/or corrective action up to and\nincluding termination of employment.\n=========================================================================\nrsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.3]\n"
This suggests that the problem is whatever Ansible is using as the file descriptor. It isn't a huge problem, since I can just pass the password to sshpass as an environment variable in my Docker container (it's ephemeral anyway), but I would still like to know what is going on with Ansible here.
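For anyone comparing the two invocations, here is a minimal illustration of the difference between the sshpass modes, with a placeholder password; this is a sketch, not the exact wiring Ansible uses:

# -e reads the password from the SSHPASS environment variable:
SSHPASS='<my_ssh_pass>' sshpass -e rsync -a common/ pg_deployment@<host>:/home/postgres/scripts/common

# -dN reads the password from an already-open file descriptor N.
# It only works if the calling process actually opened that descriptor:
exec 10<<<'<my_ssh_pass>'
sshpass -d10 rsync -a common/ pg_deployment@<host>:/home/postgres/scripts/common

So if the descriptor Ansible opens for sshpass is not usable in the container, -d fails while -e keeps working.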
SOLUTION (using command)
---
- name: Create Postgres Cluster
  hosts: all
  become: yes
  become_user: postgres

  tasks:
    - name: Create Scripts Directory
      file:
        path: /home/postgres/scripts
        state: directory
    - name: Populate scripts/common
      command: sshpass -e /usr/bin/rsync --delay-updates -F --compress --archive --rsh='/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo -u postgres rsync' --out-format='<<CHANGED>>%i %n%L' common pg_deployment@<host>:/home/postgres/scripts
      delegate_to: 127.0.0.1
      become: no
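Note that sshpass -e needs SSHPASS to be present in the environment where the delegated task runs, which here is the controller container. Something along these lines, with hypothetical file names matching the question:

# Export the password before running the playbook so the
# delegated 'command' task inherits it:
export SSHPASS='<my_ssh_pass>'
ansible-playbook -i inventory playbook.yml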

ProxyCommand doesn't seem to work with Ansible in my environment

I've tried many combinations to get this to work, but for some reason I cannot. We are not using keys in our environment, so passwords will have to do.
I've tried ProxyJump and sshuttle as well.
Strangely, the ping module works, but any other module or playbook doesn't.
The rough setup is:
laptop running ubuntu with ansible installed
[laptop] ---> [productionjumphost] ---> [production_iosxr_router]
ansible.cfg:
[ssh_connection]
pipelining = True
ssh_args = -o ControlMaster=auto -o ControlPersist=30m -o ControlPath=/tmp/ansible-%r@%h:%p -F ssh.config
~/.ssh/config and ssh.cfg (both configured identically):
Host modeljumphost
    HostName modeljumphost.fqdn.com.au
    User user
    Port 22

Host productionjumphost
    HostName productionjumphost.fqdn.com.au
    User user
    Port 22

Host model_iosxr_router
    HostName model_iosxr_router
    User user
    ProxyCommand ssh -W %h:22 modeljumphost

Host production_iosxr_router
    HostName production_iosxr_router
    User user
    ProxyCommand ssh -W %h:22 productionjumphost
inventory:
[local]
192.168.xxx.xxx
[router]
production_iosxr_router ansible_connection=network_cli ansible_user=user ansible_ssh_pass=password
[router:vars]
ansible_network_os=iosxr
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@productionjumphost.fqdn.com.au"'
ansible_user=user
ansible_ssh_pass=password
playbook.yml:
---
- name: Network Getting Started First Playbook
  hosts: router
  gather_facts: no
  connection: network_cli

  tasks:
    - name: show version
      iosxr_command:
        commands: show version
I can run an ad-hoc Ansible command and a successful ping is returned:
ansible production_iosxr_router -i inventory -m ping -vvvvv
production_iosxr_router | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "invocation": {
        "module_args": {
            "data": "pong"
        }
    },
    "ping": "pong"
}
Running the playbook: ansible-playbook -i inventory playbook.yml -vvvvv
production_iosxr_router | FAILED! => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "msg": "[Errno -2] Name or service not known"
}

Logstash http_poller SSL certificate issue

I am trying to use the Logstash http_poller input to query a server's REST API. I downloaded the server's PEM through the browser and generated a JKS file with keytool, but we still get the error "PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target". I don't know what's wrong.
The config looks like this:
http_poller {
    urls => {
        restapi => {
            method => get
            url => "https://path_to_resources"
            headers => {
                Accept => "application/json"
            }
            truststore => "/path/generated.truststore.jks"
            truststore_password => "xxx"
            ssl_certificate_validation => false
            auth => {
                user => "xxx"
                password => "xxx"
            }
        }
    }
    request_timeout => 60
    interval => 60000
    codec => "json"
    metadata_target => "http_poller_metadata"
}
By the way, what is the impact if ssl_certificate_validation is set to false?
I interpret the OP's intention as hoping to disable TLS verification, which we still can't do (logstash-7.11.1), so I'll plow on with how to get a trust store for these cases. This question was one of my hits in pursuit of the same.
Some appliances will be running self-signed certificates (another discussion, people...), so a small script to set up such a trust store can be helpful, especially if you are about to set up some automation internally.
Another caveat: the self-signed certificate still has to have a matching host name.
Based on the example from https://www.elastic.co/guide/en/logstash/current/plugins-inputs-http_poller.html
NB! Further error checking, etc. is left to your discretion.
#!/bin/bash
# Fetch an http server's TLS certificate and
# create or update a JAVA keystore / truststore

usage () {
    echo "usage: get-cert.sh <hostname>:<port>"
    exit 1
}

TRUSTSTORE=cacert/trust.jks
PARAM=$1

HOSTNAME=$(echo "$PARAM" | cut -d: -f 1)
PORT=$(echo "$PARAM" | cut -d: -f 2)
REST=$(echo "$PARAM" | cut -d: -f 3-)

[ -z "$HOSTNAME" ] && usage
[ -z "$PORT" ] && usage
[ -n "$REST" ] && usage

OUTPUT=$(
    openssl \
        s_client \
        -showcerts \
        -connect "${HOSTNAME}":"${PORT}" </dev/null 2>/dev/null | \
    openssl \
        x509 \
        -outform PEM)
EC=$?
[ $EC -ne 0 ] && { echo "ERROR EC=$EC - $OUTPUT" ; exit $EC ; }

keytool \
    -import \
    -storepass changeit \
    -alias "${HOSTNAME}" \
    -noprompt \
    -file <(echo "$OUTPUT") \
    -keystore "${TRUSTSTORE}"
This uses some bash-specific features. The alternative is to go through temporary files, as per the official example (see link above).
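Usage, with a hypothetical appliance host and port; the trust store is created on the first run:

# Fetch the certificate and store it under the host's alias:
./get-cert.sh appliance.example.com:9243

# The http_poller input can then point at it:
#   truststore => "cacert/trust.jks"
#   truststore_password => "changeit"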
Apparently your certificate is invalid.
Regarding ssl_certificate_validation: it doesn't have any real impact. http_poller is based on Manticore, a Ruby library that relies on Apache HttpClient, which does not support this hook.

Password prompt keeps coming up on the console

I wrote the command below, which copies the id_dsa.pub file to another server as part of my auto-login feature. But every time, the message below comes up on the console:
spawn scp -o StrictHostKeyChecking=no /opt/mgtservices/.ssh/id_dsa.pub root@12.43.22.47:/root/.ssh/id_dsa.pub
Password:
Password:
Below is the script I wrote for this:
function sshkeygenerate()
{
    if ! [ -f $HOME/.ssh/id_dsa.pub ]; then
        expect -c"
            spawn ssh-keygen -t dsa -f $HOME/.ssh/id_dsa
            expect y/n { send y\r ; exp_continue }
            expect passphrase): { send \r ; exp_continue }
            expect again: { send \r ; exp_continue }
            spawn chmod 700 $HOME/.ssh && chmod 700 $HOME/.ssh/*
            exit "
    fi
    expect -c"
        spawn scp -o StrictHostKeyChecking=no $HOME/.ssh/id_dsa.pub root@12.43.22.47:/root/.ssh/id_dsa.pub
        expect *assword: { send $ROOTPWD\r }
        expect yes/no { send yes\r ; exp_continue }
        spawn ssh -o StrictHostKeyChecking=no root@12.43.22.47 \"chmod 755 /root/.ssh/authorized_keys\"
        expect *assword: { send $ROOTPWD\r }
        expect yes/no { send yes\r ; exp_continue }
        spawn ssh -o StrictHostKeyChecking=no root@12.43.22.47 \"cat /root/.ssh/id_dsa.pub >> /root/.ssh/authorized_keys\"
        expect *assword: { send $ROOTPWD\r }
        expect yes/no { send yes\r ; exp_continue }
        sleep 1
        exit"
}
You should first set up passwordless SSH to the destination server; then you won't need to enter the password when you do the scp.
Assuming 192.168.0.11 is the destination machine:
1) ssh-keygen -t rsa
2) ssh sheena@192.168.0.11 mkdir -p .ssh
3) cat .ssh/id_rsa.pub | ssh sheena@192.168.0.11 'cat >> .ssh/authorized_keys'
4) ssh sheena@192.168.0.11 "chmod 700 .ssh; chmod 640 .ssh/authorized_keys"
Link for reference:
http://www.tecmint.com/ssh-passwordless-login-using-ssh-keygen-in-5-easy-steps/
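If ssh-copy-id is available on your system, steps 2 to 4 collapse into a single command that also sets sane permissions:

ssh-keygen -t rsa             # only needed if no key exists yet
ssh-copy-id sheena@192.168.0.11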