So, I've tried to build a new AMI snapshot from my EC2 instance with my current key pair, and I'm running into this issue.
Error message:
2021/08/31 15:13:42 packer-plugin-amazon_v1.0.0_x5.0_darwin_amd64 plugin: 2021/08/31 15:13:42 [DEBUG] SSH handshake err: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
2021/08/31 15:13:42 packer-plugin-amazon_v1.0.0_x5.0_darwin_amd64 plugin: 2021/08/31 15:13:42 [DEBUG] Detected authentication error. Increasing handshake attempts
2021/08/31 15:13:42 ui error: ==> amazon-ebs.vl-template: Error waiting for SSH: Packer experienced an authentication error when trying to connect via SSH. This can happen if your username/password are wrong. You may want to double-check your credentials as part of your debugging process. original error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Here is my template:
source "amazon-ebs" "my-template" {
profile = "${var.profile}"
region = "${var.region}"
ami_name = "${var.ami_name}-${local.timestamp}"
ami_virtualization_type = "hvm"
communicator = "ssh"
ssh_interface = "public_ip"
ssh_username = "${var.ssh_username}"
ssh_keypair_name = "${var.ssh_keypair_name}"
ssh_private_key_file = "${var.ssh_private_key_file}"
ssh_agent_auth = true
ssh_timeout = "10m"
aws_polling {
delay_seconds = 60
max_attempts = 60
}
ebs_optimized = false
instance_type = "${var.instance_type}"
vpc_id = "${var.vpc_id}"
subnet_id = "${var.subnet_id}"
security_group_id = "${var.security_group_id}"
launch_block_device_mappings {
device_name = "/dev/sda1"
volume_size = "${var.root_volume_size_gb}"
volume_type = "${var.volume_type}"
delete_on_termination = true
}
source_ami_filter {
filters = {
name = "${var.filter_name}"
root-device-type = "ebs"
virtualization-type = "hvm"
}
most_recent = true
owners = ["${var.owner_id}"]
}
}
build {
sources = ["source.amazon-ebs.my-template"]
}
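For reference, the template uses several input variables and a local, which HCL2 requires to be declared somewhere in the configuration. A minimal declaration sketch (only the names come from the template above; the types and the timestamp expression are assumptions):

# Sketch of the declarations the template assumes; only the names are taken
# from the template itself. The remaining variables (instance_type, vpc_id,
# subnet_id, security_group_id, ...) follow the same pattern.
variable "profile" { type = string }
variable "region" { type = string }
variable "ami_name" { type = string }
variable "ssh_username" { type = string }
variable "ssh_keypair_name" { type = string }
variable "ssh_private_key_file" { type = string }

# Common Packer idiom for a filesystem-safe build timestamp.
locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}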
My packer command:
packer build -var "ssh_private_key_file=/Users/movmac024/.ssh/mykey.pem" -var "profile=myaws" -var "region=ap-southeast-1" hcl/mytemplate.pkr.hcl
Found out that you should use only one of ssh_private_key_file or ssh_agent_auth (not both) when ssh_keypair_name is utilized.
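A sketch of the corrected SSH settings, keeping the key file and dropping ssh_agent_auth (this assumes the .pem on disk is the one registered under the named EC2 key pair):

  # Keep exactly one authentication method alongside ssh_keypair_name:
  # here the private key file; ssh_agent_auth is removed.
  communicator         = "ssh"
  ssh_interface        = "public_ip"
  ssh_username         = "${var.ssh_username}"
  ssh_keypair_name     = "${var.ssh_keypair_name}"
  ssh_private_key_file = "${var.ssh_private_key_file}"
  ssh_timeout          = "10m"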
Related
RabbitMQ version 3.10.0.
Tell me how to write rabbitmq.conf correctly without using advanced.config.
A working BindDN on another server --> uid=myuserinfreeipa,cn=users,cn=accounts,dc=mydc1,dc=mydc2
A working SearchFilter on another server --> "(&(uid=%u)(memberOf=cn=mygroupinfreeipa,cn=groups,cn=accounts,dc=mydc1,dc=mydc2)(!(nsaccountlock=TRUE)))"
A working BaseDN on another server --> "cn=users,cn=accounts,dc=mydc1,dc=mydc2"
rabbitmq.conf
auth_backends.1 = ldap
auth_ldap.servers.1 = my.server.com
auth_ldap.timeout = 500
auth_ldap.port = 389
auth_ldap.user_dn_pattern = CN=${username},OU=Users,dc=mydc1,dc=mydc2
auth_ldap.use_ssl = false
ssl_options.cacertfile = /etc/rabbitmq/ca.crt
auth_ldap.dn_lookup_bind.user_dn = test
auth_ldap.dn_lookup_bind.password = password
auth_ldap.dn_lookup_attribute = distinguishedName
auth_ldap.dn_lookup_base = cn=users,cn=accounts,dc=mydc1,dc=mydc2
auth_ldap.log = network
advanced.config
[
  {rabbitmq_auth_backend_ldap, [
    {tag_queries, [
      {administrator, {in_group, "CN=mygroupinfreeipa,dc=mydc1,dc=mydc2", "member"}},
      {management, {constant, true}}
    ]}
  ]} %% rabbitmq_auth_backend_ldap
].
tail -f /var/log/rabbitmq/rabbit@amqptest.log
LDAP CHECK: login for test
LDAP connecting to servers: ["my.server.com"]
LDAP network traffic: bind request = {'BindRequest',3,"xxxx",
{simple,"xxxx"}}
LDAP network traffic: bind reply = {ok,
{'LDAPMessage',1,
{bindResponse,
{'BindResponse',invalidCredentials,
[],[],asn1_NOVALUE,asn1_NOVALUE}},
asn1_NOVALUE}}
LDAP bind returned "invalid credentials": xxxx
LDAP connecting to servers: ["my.server.com"]
LDAP network traffic: bind request = {'BindRequest',3,"xxxx",
{simple,"xxxx"}}
LDAP bind error: "xxxx" {'EXIT',
{{badmatch,
{error,
{asn1,
{function_clause,
[{'ELDAPv3',encode_restricted_string,
[{refused,"test",[]},[<<4>>]]
Hello, I am trying to deploy an RKE Kubernetes cluster with Terraform, but I am not able to connect to the desired host via SSH:
time="2022-02-28T11:17:38+01:00" level=warning msg="Failed to set up SSH tunneling for host [poc-k8s.my-domain.com]: Can't retrieve Docker Info: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.24/info\": Unable to access node with address [poc-k8s.my-domain.com:22] using SSH. Please check if you are able to SSH to the node using the specified SSH Private Key and if you have configured the correct SSH username. Error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain"
and this is the .tf file I am using:
terraform {
  required_providers {
    rke = {
      source  = "rancher/rke"
      version = "1.3.0"
    }
  }
}

provider "rke" {
  log_file = "rke_debug.log"
}

resource "rke_cluster" "cluster" {
  nodes {
    address = "poc-k8s.my-domain.com"
    user    = "root"
    role    = ["controlplane", "worker", "etcd"]
    ssh_key = file("~/.ssh/root_key")
  }
  nodes {
    address = "poc-k8s.my-domain.com"
    user    = "root"
    role    = ["worker", "etcd"]
    ssh_key = file("~/.ssh/root_key")
  }
  addons_include = [
    "https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml",
    "https://gist.githubusercontent.com/superseb/499f2caa2637c404af41cfb7e5f4a938/raw/930841ac00653fdff8beca61dab9a20bb8983782/k8s-dashboard-user.yml",
  ]
}

resource "local_file" "kube_cluster_yaml" {
  filename = "~/.kube/kube_config_cluster.yml"
  # Note: this must be a reference, not a quoted string, otherwise the file
  # will literally contain the text "rke_cluster.cluster.kube_config_yaml".
  sensitive_content = rke_cluster.cluster.kube_config_yaml
}
The key is of course correct and I am able to connect to the desired host:
ssh -i ~/.ssh/root_key root@poc-k8s.my-domain.com
what am I missing here?
[Update]
The cluster resource has a delay_on_creation property that can be used:
resource "rke_cluster" "cluster" {
delay_on_creation = 180
(...)
}
I'm facing a similar issue. On the second run of terraform apply it works correctly. In my case the issue is that Docker is not up fast enough for the RKE provider.
I've found the following workaround from citynetwork/citycloud-examples:
resource "rke_cluster" "cluster" {
(...)
depends_on = [null_resource.wait-for-docker]
}
resource "null_resource" "wait-for-docker" {
provisioner "local-exec" {
command = "sleep 180"
}
depends_on = [
# list of servers docker being installed on
(...)
]
}
It waits for 180s which is not ideal, though.
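A tighter alternative is to poll until Docker actually answers instead of sleeping a fixed interval. A sketch, assuming you can SSH to the node (host, user, and key mirror the cluster config above and are otherwise illustrative):

# Sketch: block until Docker responds on the node instead of a fixed sleep.
resource "null_resource" "wait-for-docker" {
  provisioner "remote-exec" {
    connection {
      type        = "ssh"
      host        = "poc-k8s.my-domain.com"
      user        = "root"
      private_key = file("~/.ssh/root_key")
    }
    inline = [
      "until docker info > /dev/null 2>&1; do echo 'waiting for docker'; sleep 5; done",
    ]
  }
}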
I'm trying to use Terraform to create a DigitalOcean node on which Consul is installed.
I'm using the following .tf file, but it hangs and does not copy the Consul .zip file onto the droplet.
I get the following error message after a couple of minutes:
ssh: handshake failed: ssh: unable to authenticate, attempted methods
[none publickey], no supported methods remain
The droplets are correctly created, though. I can log in on the command line with the key I specified (thus not specifying a password). I'm guessing the connection part might be faulty, but I'm not sure what I'm missing.
Any idea?
variable "do_token" {}
# Configure the DigitalOcean Provider
provider "digitalocean" {
token = "${var.do_token}"
}
# Create nodes
resource "digitalocean_droplet" "consul" {
count = "1"
image = "ubuntu-14-04-x64"
name = "consul-${count.index+1}"
region = "lon1"
size = "1gb"
ssh_keys = ["7b:51:d3:e3:ae:6e:c6:e2:61:2d:40:56:17:54:fc:e3"]
connection {
type = "ssh"
user = "root"
agent = true
}
provisioner "file" {
source = "consul_0.7.1_linux_amd64.zip"
destination = "/tmp/consul_0.7.1_linux_amd64.zip"
}
provisioner "remote-exec" {
inline = [
"sudo unzip -d /usr/local/bin /tmp/consul_0.7.1_linux_amd64.zip"
]
}
}
Terraform requires that you specify the private SSH key to use for the connection with private_key. You can create a new variable containing the path to your private key for use with Terraform's file interpolation function:
connection {
  type        = "ssh"
  user        = "root"
  agent       = true
  private_key = "${file("${var.private_key_path}")}"
}
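For completeness, a sketch of the declaration the snippet above assumes (the name private_key_path comes from the snippet; the description is illustrative):

# Assumed declaration for the path variable used above.
variable "private_key_path" {
  description = "Path to the private SSH key Terraform should use"
}

The path can then be supplied at apply time, e.g. terraform apply -var "private_key_path=$HOME/.ssh/id_rsa".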
You face this issue because your SSH key is protected by a passphrase. To solve this issue, you should generate a key without a passphrase.
I cannot get Terraform's SSH to connect via a private AWS key pair for Chef provisioning - the error looks to be just a timeout:
aws_instance.app (chef): Connecting to remote host via SSH...
aws_instance.app (chef): Host: 96.175.120.236:32:
aws_instance.app (chef): User: ubuntu
aws_instance.app (chef): Password: false
aws_instance.app (chef): Private key: true
aws_instance.app (chef): SSH Agent: true
aws_instance.app: Still creating... (5m30s elapsed)
Error applying plan:
1 error(s) occurred:
* dial tcp 96.175.120.236:32: i/o timeout
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Here is my Terraform plan - note the SSH settings. The key_name setting is set to my AWS key pair name, and ssh_for_chef.pem is the private key.
variable "AWS_ACCESS_KEY" {}
variable "AWS_SECRET_KEY" {}
provider "aws" {
region = "us-east-1"
access_key = "${var.AWS_ACCESS_KEY}"
secret_key = "${var.AWS_SECRET_KEY}"
}
resource "aws_instance" "app" {
ami = "ami-88aa1ce0"
count = "1"
instance_type = "t1.micro"
key_name = "ssh_for_chef"
security_groups = ["sg-c43490e1"]
subnet_id = "subnet-75dd96e2"
associate_public_ip_address = true
provisioner "chef" {
server_url = "https://api.chef.io/organizations/xxxxxxx"
validation_client_name = "xxxxxxx-validator"
validation_key = "/home/user01/Documents/Devel/chef-repo/.chef/xxxxxxxx-validator.pem"
node_name = "dubba_u_7"
run_list = [ "motd_rhel" ]
user_name = "user01"
user_key = "/home/user01/Documents/Devel/chef-repo/.chef/user01.pem"
ssl_verify_mode = "false"
}
connection {
type = "ssh"
user = "ubuntu"
private_key = "${file("/home/user01/Documents/Devel/ssh_for_chef.pem")}"
}
}
Any ideas?
I'm not sure if we had the same problem, since you didn't specify whether you were able to SSH to the instance.
In my case, I was running Terraform from within the VPC, and the connection was allowed by a security group, which can't be used with a public IP.
The solution is simple (but you will have to use the conditional interpolations introduced in Terraform v0.8.0):
Define this variable: variable "use_public_ip" { default = true }
Then, inside the connection section of the chef provisioner, add the following line -
host = "${var.use_public_ip ? aws_instance.instance.public_ip : aws_instance.instance.private_ip}"
If you wish to use the public IP, set the variable as true, otherwise, set it to false.
I use this for AWS -
connection {
  user = "ubuntu"
  host = "${var.use_public_ip ? aws_instance.instance.public_ip : aws_instance.instance.private_ip}"
}
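One note for the original config above: there the connection block lives inside the aws_instance.app resource itself, and a resource's own attributes are referenced via self rather than by resource name. A sketch combining that with the private key from the question:

connection {
  type        = "ssh"
  user        = "ubuntu"
  # Inside the resource's own connection block, use self to refer to this instance.
  host        = "${var.use_public_ip ? self.public_ip : self.private_ip}"
  private_key = "${file("/home/user01/Documents/Devel/ssh_for_chef.pem")}"
}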
In my Cassandra config I have enabled user authentication, and I connect with cqlsh over SSL.
I'm having trouble implementing the same with gocql; the following is my code:
cluster := gocql.NewCluster("127.0.0.1")
cluster.Authenticator = gocql.PasswordAuthenticator{
    Username: "myuser",
    Password: "mypassword",
}
cluster.SslOpts = &gocql.SslOptions{
    CertPath: "/path/to/cert.pem",
}
When I try to connect I get following error:
gocql: unable to create session: connectionpool: unable to load X509 key pair: open : no such file or directory
In Python I can do this with something like:
from ssl import PROTOCOL_TLSv1

from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

USER = 'username'
PASS = 'password'

ssl_opts = {
    'ca_certs': '/path/to/cert.pem',
    'ssl_version': PROTOCOL_TLSv1,
}
credentials = PlainTextAuthProvider(username=USER, password=PASS)

# define host, port, CQL protocol version; pass ssl_opts so the driver uses TLS
cluster = Cluster(contact_points=HOST, protocol_version=CQLSH_PROTOCOL_VERSION,
                  auth_provider=credentials, port=CASSANDRA_PORT,
                  ssl_options=ssl_opts)
I checked the gocql and TLS documentation here and here but I'm unsure about how to set ssl options.
You're adding a cert without a private key, which is where the "no such file or directory" error is coming from.
Your python code is adding a CA; you should do the same with the Go code:
cluster.SslOpts = &gocql.SslOptions{
    CaPath: "/path/to/cert.pem",
}