ssh connection refused when deploying meteor app from nitrous.io to Linode server using meteor up - ssh

See https://github.com/arunoda/meteor-up/issues/171
I am trying to deploy my Meteor app from my Nitrous box to a remote server on Linode.
I followed the instructions for Meteor Up and got:
Invalid mup.json file: Server username does not exist
mup.json
// Server authentication info
"servers": [
{
"host": "123.456.78.90",
// "username": "root",
// or pem file (ssh based authentication)
"pem": "~/.ssh/id_rsa",
"sshOptions": { "Port": 1024 }
}
]
So I uncommented the "username": "root" line in mup.json, then ran mup logs -n 300 and got the following error:
[123.456.78.90] ssh: connect to host 123.456.78.90 port 1024: Connection refused
I suspect I may have done something wrong when setting up the SSH key. I can access my remote server without a password after adding my SSH key to ~/.ssh/authorized_keys.
The content of the authorized_keys looks like this:
ssh-rsa XXXXXXXXXX..XXXX== root@apne1.nitrousbox.com
Do you have any idea what went wrong?

Problem solved by uncommenting the username and changing the port to 22:
// Server authentication info
"servers": [
{
"host": "123.456.78.90",
"username": "root",
// or pem file (ssh based authentication)
"pem": "~/.ssh/id_rsa",
"sshOptions": { "Port": 22 }
}
]
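If the connection is still refused at this point, it helps to confirm which port sshd is actually listening on before editing mup.json again. Roughly (with the real Linode IP in place of the placeholder):
ssh -p 22 root@123.456.78.90 'echo ok'    # should succeed if sshd is on the default port
ssh -p 1024 root@123.456.78.90 'echo ok'  # refused if nothing is listening there
sudo ss -tlnp | grep sshd                 # run on the Linode itself to list the port(s) sshd is bound to
Whatever port answers is the value that belongs in sshOptions.Port.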

Related

homebridge ssh connection wrong username

I'm trying to use an SSH plugin for Homebridge to turn off my TrueNAS. I have the following config:
{
"accessory": "SSH",
"name": "Remote Command",
"command": "sudo shutdown -h now",
"sshConfig": {
"host": "truenas",
"username": "root",
"privateKey": "/home/pi/dev/SSH Shutdown/privatekey.ssh"
}
}
I'm getting the error:
Error: Invalid username
at Client.connect (/usr/lib/node_modules/homebridge-ssh/node_modules/ssh2/lib/client.js:130:11)
at connect (/usr/lib/node_modules/homebridge-ssh/node_modules/ssh-exec/index.js:53:12)
at ReadFileContext.callback (/usr/lib/node_modules/homebridge-ssh/node_modules/ssh-exec/index.js:109:7)
at FSReqCallback.readFileAfterOpen [as oncomplete] (node:fs:314:13)
The SSH connection with that username and private key works from the same Pi; I tested that. I tried the same with another user and got the same error.
Can someone help? Am I missing something obvious?
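One way to narrow this down is to bypass the plugin and open the same connection with the ssh2 library directly (a rough sketch, assuming ssh2 is available, e.g. via npm install ssh2). If this also fails, the credentials themselves are the problem; if it connects, the options are not reaching ssh2 intact from the accessory config:
const fs = require('fs');
const { Client } = require('ssh2');   // the same library the plugin uses under the hood

const conn = new Client();
conn.on('ready', () => {
  console.log('connected ok');
  conn.end();
});
conn.on('error', (err) => {
  console.error('connection failed:', err.message);
});
conn.connect({
  host: 'truenas',                    // values copied from the sshConfig above
  username: 'root',
  privateKey: fs.readFileSync('/home/pi/dev/SSH Shutdown/privatekey.ssh')
});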

Problem enabling authentication for the Mesos APIs in Mesosphere DC/OS

Authentication is not enabled for the Mesos APIs by default.
After installing DC/OS I want to configure Mesos API authentication on it.
I'm trying to set up authentication for Mesos APIs such as register_frameworks, run_task, and so on.
The problem is that after my configuration the DC/OS GUI and Marathon don't work correctly.
I configured DC/OS as follows:
Mesos environment variable config:
path:/opt/mesosphere/etc/mesos-master
#Authentication part
MESOS_LOG_DIR=/var/log/mesos
#Framework authentication
MESOS_AUTHENTICATORS="crammd5"
MESOS_AUTHENTICATE_FRAMEWORKS=true
MESOS_AUTHENTICATE_HTTP_FRAMEWORKS=true
MESOS_HTTP_FRAMEWORK_AUTHENTICATORS="basic"
MESOS_ACLS=/opt/mesosphere/etc/acls
MESOS_AUTHENTICATE=true
MESOS_CREDENTIALS=/opt/mesosphere/etc/mesos_credentials_auth.json
MESOS_ROLE=foo
Marathon environment variable config:
path:/opt/mesosphere/marathon
#authentication section
MARATHON_MESOS_AUTHENTICATION=enabled
#MARATHON_HTTP_CREDENTIALS=marathon:123456
MARATHON_MESOS_AUTHENTICATION_PRINCIPAL=marathon
MARATHON_MESOS_ROLE=foo
MARATHON_MESOS_AUTHENTICATION_SECRET_file=/opt/mesosphere/etc/marathon.secret
Metronome environment variable config: path:/opt/mesosphere/metronome
METRONOME_MESOS_AUTHENTICATION_ENABLED=true
METRONOME_MESOS_AUTHENTICATION_PRINCIPAL=metronome
METRONOME_MESOS_ROLE=foo
METRONOME_MESOS_AUTHENTICATION_SECRET_FILE= /opt/mesosphere/etc/metronome.secret
/opt/mesosphere/etc/metronome.secret (contains the Metronome secret, without a trailing newline)
123456
/opt/mesosphere/etc/marathon.secret (contains the Marathon secret, without a trailing newline)
123456
/opt/mesosphere/etc/acls
{
"run_tasks": [
{
"principals": {
"type": "ANY"
},
"users": {
"type": "ANY"
}
}
],
"register_frameworks": [
{
"principals": {
"type": "ANY"
},
"roles": {
"type": "ANY"
}
}
]
}
/opt/mesosphere/etc/mesos_credentials_auth.json
{
"credentials" : [
{
"principal": "principal1",
"secret": "secret1"
},
{
"principal": "principal2",
"secret": "secret2"
},
{
"principal": "marathon",
"secret": "123456"
},
{
"principal": "metronome",
"secret": "123456"
}
]
}
When I enable this configuration, I stop and start the dcos-mesos-master, dcos-marathon and dcos-metronome services:
systemctl stop dcos-mesos-master.service
systemctl start dcos-mesos-master.service
systemctl stop dcos-marathon.service
systemctl start dcos-marathon.service
systemctl stop dcos-metronome.service
systemctl start dcos-metronome.service
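To confirm that the environment files were actually picked up after the restart, the effective flags can be read back from the master (assuming the default master port 5050 is reachable):
curl -s http://<master-ip>:5050/flags | python -m json.tool | grep -i authenticate
# authenticate_frameworks and authenticate_http_frameworks should show up as "true"
# if dcos-mesos-master loaded the environment file above.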
The http://IP/services page in DC/OS doesn't work. I think it's because Marathon's authentication is not set correctly, because this address stops working after I enable the authentication configuration:
http://IP/service/marathon/v2/deployments?_timestamp=1560449507192
I get these errors in the Mesos log after enabling Metronome authentication:
I0613 17:35:12.176092 305 authenticator.cpp:98] Creating new server SASL connection
I0613 17:35:12.177258 304 master.cpp:10255] Re-authenticating scheduler-aca98ea7-be34-49d1-9200-5ef8c15da153@172.17.0.2:15201; discarding outstanding authentication
I0613 17:35:12.177523 304 master.cpp:10285] Ignoring stale authentication result of scheduler-aca98ea7-be34-49d1-9200-5ef8c15da153@172.17.0.2:15201
I0613 17:35:12.177582 304 authenticator.cpp:98] Creating new server SASL connection
I0613 17:35:12.178586 302 master.cpp:10255] Re-authenticating scheduler-aca98ea7-be34-49d1-9200-5ef8c15da153@172.17.0.2:15201; discarding outstanding authentication
I0613 17:35:12.178850 302 master.cpp:10285] Ignoring stale authentication result of scheduler-aca98ea7-be34-49d1-9200-5ef8c15da153@172.17.0.2:15201
After searching, I finally found my answer:
These security features are only available in DC/OS Enterprise; you cannot configure them in the open-source version.
I also opened a GitHub issue with more details (I hope it will be useful):
https://github.com/mesosphere/marathon/issues/6942

Terraform remote-exec on windows with ssh

I have set up a Windows server and installed SSH using Chocolatey. If I run my commands manually I have no problem connecting and running them. When I try to use Terraform to run my commands it connects successfully but doesn't run any commands.
I started out using WinRM, and then I could run commands, but due to a problem with creating a Service Fabric cluster over WinRM I decided to try SSH instead; when running things manually it worked and the cluster came up. So that seems to be the way forward.
I had already set up a Linux VM and got SSH working by using the private key, so I tried the same config I used with the Linux VM on the Windows machine, but it still asked me for my password.
What could the reason be that I can run commands over SSH manually, while Terraform only connects and no commands are run? I am running this on OpenStack with Windows Server 2016.
null_resource.sf_cluster_install (remote-exec): Connecting to remote host via SSH...
null_resource.sf_cluster_install (remote-exec): Host: 1.1.1.1
null_resource.sf_cluster_install (remote-exec): User: Administrator
null_resource.sf_cluster_install (remote-exec): Password: true
null_resource.sf_cluster_install (remote-exec): Private key: false
null_resource.sf_cluster_install (remote-exec): SSH Agent: false
null_resource.sf_cluster_install (remote-exec): Checking Host Key: false
null_resource.sf_cluster_install (remote-exec): Connected!
null_resource.sf_cluster_install: Creation complete after 4s (ID: 5017581117349235118)
Here is the script I'm using to run the commands:
resource "null_resource" "sf_cluster_install" {
# count = "${local.sf_count}"
depends_on = ["null_resource.copy_sf_package"]
# Changes to any instance of the cluster requires re-provisioning
triggers = {
cluster_instance_ids = "${openstack_compute_instance_v2.sf_servers.0.id}"
}
connection = {
type = "ssh"
host = "${openstack_networking_floatingip_v2.sf_floatIP.0.address}"
user = "Administrator"
# private_key = "${file("~/.ssh/id_rsa")}"
password = "${var.admin_pass}"
}
provisioner "remote-exec" {
inline = [
"echo hello",
"powershell.exe Write-Host hello",
"powershell.exe New-Item C:/tmp/hello.txt -type file"
]
}
}
Put the connection block inside the provisioner block:
provisioner "remote-exec" {
connection = {
type = "ssh"
...
}
inline = [
"echo hello",
"powershell.exe Write-Host hello",
"powershell.exe New-Item C:/tmp/hello.txt -type file"
]
}
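Putting it together, the resource ends up looking roughly like this (same variables and resources as in the question; the connection is written as a nested block, which is the form current Terraform expects):
resource "null_resource" "sf_cluster_install" {
  depends_on = ["null_resource.copy_sf_package"]

  # Changes to any instance of the cluster require re-provisioning
  triggers = {
    cluster_instance_ids = "${openstack_compute_instance_v2.sf_servers.0.id}"
  }

  provisioner "remote-exec" {
    # Connection settings live inside the provisioner, so remote-exec actually uses them
    connection {
      type     = "ssh"
      host     = "${openstack_networking_floatingip_v2.sf_floatIP.0.address}"
      user     = "Administrator"
      password = "${var.admin_pass}"
    }

    inline = [
      "echo hello",
      "powershell.exe Write-Host hello",
      "powershell.exe New-Item C:/tmp/hello.txt -type file"
    ]
  }
}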

SSL Error: Handshake failed with fatal error - Querying fabric-sdk-rest server on a Fabric Network with TLS enabled

I started a multi-host Fabric network using Docker Swarm, made up of 1 CA server, 1 orderer, 2 peers (both in Org1, one on PC1 and one on PC2) and 2 CouchDB instances (one for each peer), with fabric-sdk-rest running on PC2.
Now, if I disable TLS in the Fabric network, everything works fine. But if I enable TLS in the network, the SDK cannot connect to the peers and the queries fail.
Here is the configuration of the network and of fabric-sdk-rest:
(crypto-config.yaml)
OrdererOrgs:
  - Name: Orderer
    Domain: example.com
    Specs:
      - Hostname: orderer
PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    Template:
      Count: 2
    Users:
      Count: 0
(datasources.json)
{
"db": {
"name": "db",
"connector": "memory"
},
"fabricDataSource": {
"name": "fabricDataSource",
"connector": "fabric",
"keyStoreFile": "/tmp/fabricSDKStore",
"fabricUser": {
"username": "Admin#org1.example.com",
"mspid": "Org1MSP",
"cryptoContent": {
"privateKey":"$HOME/mynetwork/crypto-config/peerOrganizations/org1.example.com/users/Admin#org1.example.com/msp/keystore/KEY_sk",
"signedCert":"$HOME/mynetwork/crypto-config/peerOrganizations/org1.example.com/users/Admin#org1.example.com/msp/signcerts/Admin#org1.example.com-cert.pem"
}
},
"COMMENT_orgs":"Referenced by peers to avoid having to configure the same file location multiple times. Change CACertFile locations for your fabric",
"orgs": [
{ "name":"org1", "CACertFile":"$HOME/mynetwork/crypto-config/peerOrganizations/org1.example.com/ca/ca.org1.example.com-cert.pem"}
],
"COMMENT_peers" : "Configured array is for use with the fabric-sample when running it in a local docker set up. eventURL and publicCertFile not currently used.",
"peers": [
{ "requestURL":"grpcs://peer1.org1.example.com:7051", "eventURL":"grpcs://peer1.org1.example.com:7053", "orgIndex":"0", "publicCertFile":"$HOME/mynetwork/crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp/signcerts/peer1.org1.example.com-cert.pem", "hostname":"peer1" }
],
"COMMENT_peers_secure" : "UNUSED. This is a copy of the above with grpcs URLs. Replace peers content with this if grpcs urls are needed.",
"peers-secure": [
{ "requestURL":"grpcs://peer1.org1.example.com:7051", "eventURL":"grpcs://peer1.org1.example.com:7053", "orgIndex":"0", "publicCertFile":"$HOME/mynetwork/crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp/signcerts/peer1.org1.example.com-cert.pem", "hostname":"peer1" }
],
"orderers": [
{ "url":"grpcs://orderer.example.com:7050", "CACertFile":"$HOME/mynetwork/crypto-config/ordererOrganizations/example.com/ca/ca.example.com-cert.pem", "publicCertFile": "$HOME/mynetwork/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp/signcerts/orderer.example.com-cert.pem", "hostname":"orderer"}
],
"COMMENT_orderers_secure" : "UNUSED. This is a copy of the above with grpcs URLs. Replace orderers content with this if grpcs urls are needed.",
"orderers-secure": [
{ "url":"grpcs://orderer.example.com:7050", "CACertFile":"$HOME/mynetwork/crypto-config/ordererOrganizations/example.com/ca/ca.example.com-cert.pem", "publicCertFile": "$HOME/mynetwork/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp/signcerts/orderer.example.com-cert.pem", "hostname":"orderer"}
],
"COMMENT_channels":"fabric-sdk-node Client class requires channel information to be configured during bootstrap.",
"channels": [
{ "name":"mychannel", "peersIndex":[0], "orderersIndex":[0] }
],
"channels-first-network": [
{ "name":"mychannel", "peersIndex":[0,1,2,3], "orderersIndex":[0] }
]
}
}
Once the Hyperledger Fabric SDK REST server has started at https://0.0.0.0:3000, when I try to make the GET channels query from the explorer I get the following error:
error: [fabricconnector.js]: Failed to queryChannels: Error: 14 UNAVAILABLE: Connect Failed
Error not handled for the GET request /api/fabric/1_0/channels: Error: 14 UNAVAILABLE: Connect Failed
at Object.exports.createStatusError ($HOME/mynetwork/fabric-sdk-rest/packages/loopback-connector-fabric/node_modules/grpc/src/common.js:87:15)
at Object.onReceiveStatus ($HOME/mynetwork/fabric-sdk-rest/packages/loopback-connector-fabric/node_modules/grpc/src/client_interceptors.js:1214:28)
at InterceptingListener._callNext ($HOME/mynetwork/fabric-sdk-rest/packages/loopback-connector-fabric/node_modules/grpc/src/client_interceptors.js:590:42)
at InterceptingListener.onReceiveStatus ($HOME/mynetwork/fabric-sdk-rest/packages/loopback-connector-fabric/node_modules/grpc/src/client_interceptors.js:640:8)
at callback ($HOME/mynetwork/fabric-sdk-rest/packages/loopback-connector-fabric/node_modules/grpc/src/client_interceptors.js:867:24)
E0510 10:51:04.780559355 12247 ssl_transport_security.cc:989] Handshake failed with fatal error SSL_ERROR_SSL: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed.
Has anyone ever seen this error? Can anyone help me get through this, please?
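A useful first check is whether the peer's TLS certificate verifies against the CA file referenced in datasources.json, tested from the PC that runs fabric-sdk-rest (hostname and path taken from the config above):
openssl s_client -connect peer1.org1.example.com:7051 \
  -CAfile $HOME/mynetwork/crypto-config/peerOrganizations/org1.example.com/ca/ca.org1.example.com-cert.pem </dev/null
# Anything other than "Verify return code: 0 (ok)" means the configured CA file
# does not verify the peer's TLS certificate, which matches the
# "certificate verify failed" seen on the gRPC handshake.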

Chaincode container can't connect to the local peer due to certificate signed by unknown authority

First of all I'd like to mention that my setup works like a charm when TLS is not enabled. It works even in Docker Swarm on AWS.
The problem starts when I enable TLS. When I deploy my .bna file via Composer, my newly created chaincode container produces the following logs:
2017-08-23 13:14:16.389 UTC [Composer] Info -> INFO 001 Setting the Composer pool size to 8
2017-08-23 13:14:16.402 UTC [shim] userChaincodeStreamGetter -> ERRO 002 Error trying to connect to local peer: x509: certificate signed by unknown authority
Error starting chaincode: Error trying to connect to local peer: x509: certificate signed by unknown authority
The funny thing is that this works when deploying the .bna via the Composer Playground (with TLS still enabled in my fabric)...
Below is my connection profile:
{
"name": "test",
"description": "test",
"type": "hlfv1",
"orderers": [
{
"url": "grpcs://orderer.company.com:7050",
"cert": "-----BEGIN CERTIFICATE-----blabla1\n-----END CERTIFICATE-----\n"
}
],
"channel": "channelname",
"mspID": "CompanyMSP",
"ca": {
"url": "https://ca.company.com:7054",
"name": "ca-company",
"trustedRoots": [
"-----BEGIN CERTIFICATE-----\nblabla2\n-----END CERTIFICATE-----\n"
],
"verify": true
},
"peers": [
{
"requestURL": "grpcs://peer0.company.com:7051",
"eventURL": "grpcs://peer0.company.com:7053",
"cert": "-----BEGIN CERTIFICATE-----\nbalbla3\n-----END CERTIFICATE-----\n"
}
],
"keyValStore": "/home/composer/.composer-credentials",
"timeout": 300
}
My certs have been generated by the cryptogen tool, hence:
orderers.0.cert contains value of crypto-config/ordererOrganizations/company.com/orderers/orderer.company.com/msp/tlscacerts/tlsca.company.com-cert.pem
peers.0.cert contains value of crypto-config/peerOrganizations/company.com/peers/peer0.company.com/msp/tlscacerts/tlsca.company.com-cert.pem
ca.trustedRoots.0 contains crypto-config/peerOrganizations/company.com/peers/peer0.company.com/tls/ca.crt
I've got the feeling that my trustedRoots certificate is wrong...
UPDATE
When I run docker inspect on the chaincode container I can see that it is missing the env variable CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/peer.crt, while the chaincode container deployed via the Playground does have it...
When the chaincode image is built, the TLS certificate that it uses to build the trusted roots is the rootcert from:
# TLS Settings
# Note that peer-chaincode connections through chaincodeListenAddress is
# not mutual TLS auth. See comments on chaincodeListenAddress for more info
tls:
    enabled: false
    cert:
        file: tls/server.crt
    key:
        file: tls/server.key
    rootcert:
        file: tls/ca.crt
The TLS certificate that the peer uses to run the gRPC service is the cert one.
By the way - You're using the release branch code, not the one in master - is that correct?
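For reference, the same core.yaml keys are usually supplied to the peer container as environment variables; a minimal sketch of that mapping (the /etc/hyperledger/fabric/tls paths are just an example layout):
environment:
  - CORE_PEER_TLS_ENABLED=true
  - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/fabric/tls/server.crt
  - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/fabric/tls/server.key
  - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/fabric/tls/ca.crt
These map one-to-one onto the tls.enabled, tls.cert.file, tls.key.file and tls.rootcert.file keys shown above, so the rootcert the peer is started with is the one the chaincode container ends up trusting.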