RabbitMQ version 3.10.0.
How do I write rabbitmq.conf correctly without using advanced.config?
BindDN that works on another server --> uid=myuserinfreeipa,cn=users,cn=accounts,dc=mydc1,dc=mydc2
SearchFilter that works on another server --> "(&(uid=%u)(memberOf=cn=mygroupinfreeipa,cn=groups,cn=accounts,dc=mydc1,dc=mydc2)(!(nsaccountlock=TRUE)))"
BaseDN that works on another server --> "cn=users,cn=accounts,dc=mydc1,dc=mydc2"
rabbitmq.conf
auth_backends.1 = ldap
auth_ldap.servers.1 = my.server.com
auth_ldap.timeout = 500
auth_ldap.port = 389
auth_ldap.user_dn_pattern = CN=${username},OU=Users,dc=mydc1,dc=mydc2
auth_ldap.use_ssl = false
ssl_options.cacertfile = /etc/rabbitmq/ca.crt
auth_ldap.dn_lookup_bind.user_dn = test
auth_ldap.dn_lookup_bind.password = password
auth_ldap.dn_lookup_attribute = distinguishedName
auth_ldap.dn_lookup_base = cn=users,cn=accounts,dc=mydc1,dc=mydc2
auth_ldap.log = network
advanced.config
[
  {rabbitmq_auth_backend_ldap, [
    {tag_queries, [
      {administrator, {in_group, "CN=mygroupinfreeipa,dc=mydc1,dc=mydc2", "member"}},
      {management, {constant, true}}
    ]}
  ]}  %% rabbitmq_auth_backend_ldap
].
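A hedged aside: the group DN in this in_group query omits the cn=groups,cn=accounts components that the working SearchFilter above does include, so a variant with the full FreeIPA group DN may be worth trying:
[
  {rabbitmq_auth_backend_ldap, [
    {tag_queries, [
      {administrator, {in_group, "cn=mygroupinfreeipa,cn=groups,cn=accounts,dc=mydc1,dc=mydc2", "member"}},
      {management, {constant, true}}
    ]}
  ]}
].
The log from the failing server: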
tail -f /var/log/rabbitmq/rabbit@amqptest.log
LDAP CHECK: login for test
LDAP connecting to servers: ["my.server.com"]
LDAP network traffic: bind request = {'BindRequest',3,"xxxx",
{simple,"xxxx"}}
LDAP network traffic: bind reply = {ok,
{'LDAPMessage',1,
{bindResponse,
{'BindResponse',invalidCredentials,
[],[],asn1_NOVALUE,asn1_NOVALUE}},
asn1_NOVALUE}}
LDAP bind returned "invalid credentials": xxxx
LDAP connecting to servers: ["my.server.com"]
LDAP network traffic: bind request = {'BindRequest',3,"xxxx",
{simple,"xxxx"}}
LDAP bind error: "xxxx" {'EXIT',
{{badmatch,
{error,
{asn1,
{function_clause,
[{'ELDAPv3',encode_restricted_string,
[{refused,"test",[]},[<<4>>]]
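The log above shows the directory rejecting the simple bind itself with invalidCredentials, before any user search happens. One hedged observation: auth_ldap.dn_lookup_bind.user_dn is set to test, but the plugin uses this value verbatim as the bind DN, so it most likely needs to be a full DN, like the BindDN that works on the other server. A sketch (the password stays a placeholder):
auth_ldap.dn_lookup_bind.user_dn = uid=myuserinfreeipa,cn=users,cn=accounts,dc=mydc1,dc=mydc2
auth_ldap.dn_lookup_bind.password = password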
Related
I am looking to integrate RabbitMQ with LDAP and allow administrator access to users who are in the mentioned AD group.
configuration files
rabbitmq.conf
auth_backends.1 = ldap
auth_ldap.servers.1 = example.com
auth_ldap.dn_lookup_attribute = sAMAccountName
auth_ldap.dn_lookup_base = OU=Standard,OU=Users,DC=example,DC=com
auth_ldap.user_dn_pattern = ${username}
auth_ldap.use_ssl = false
auth_ldap.port = 389
auth_ldap.log = network_unsafe
advanced.config
[{rabbitmq_auth_backend_ldap, [
  {tag_queries, [
    {administrator, {in_group, "rabbitusers_group,OU=Security,OU=Groups,DC=example,DC=com", "member"}},
    {management, {constant, true}}
  ]}
]}].
I get the same result even when auth_ldap.dn_lookup_attribute = sAMAccountName is replaced with distinguishedName.
I noticed this in the log --- user have tag administrator? false
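One thing that stands out (an assumption, since only this log line is shown): the group value passed to in_group above is not a valid DN - it lacks a leading CN= component, and an unresolvable group alone would make the administrator tag query evaluate to false. A sketch of the tag_queries term with the RDN added:
{tag_queries, [
  {administrator, {in_group, "CN=rabbitusers_group,OU=Security,OU=Groups,DC=example,DC=com", "member"}},
  {management, {constant, true}}
]}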
Hello, I am trying to deploy an RKE k8s cluster with Terraform, but I am not able to connect to the desired host via SSH:
time="2022-02-28T11:17:38+01:00" level=warning msg="Failed to set up SSH tunneling for host [poc-k8s.my-domain.com]: Can't retrieve Docker Info: error during connect: Get \"http://%2Fvar%2Frun%2Fdocker.sock/v1.24/info\": Unable to access node with address [poc-k8s.my-domain.com:22] using SSH. Please check if you are able to SSH to the node using the specified SSH Private Key and if you have configured the correct SSH username. Error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain"
and this is the .tf file I am using:
terraform {
required_providers {
rke = {
source = "rancher/rke"
version = "1.3.0"
}
}
}
provider "rke" {
log_file = "rke_debug.log"
}
resource "rke_cluster" "cluster" {
nodes {
address = "poc-k8s.my-domain.com"
user = "root"
role = ["controlplane", "worker", "etcd"]
ssh_key = file("~/.ssh/root_key")
}
nodes {
address = "poc-k8s.my-domain.com"
user = "root"
role = ["worker", "etcd"]
ssh_key = file("~/.ssh/root_key")
}
addons_include = [
"https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml",
"https://gist.githubusercontent.com/superseb/499f2caa2637c404af41cfb7e5f4a938/raw/930841ac00653fdff8beca61dab9a20bb8983782/k8s-dashboard-user.yml",
]
}
resource "local_file" "kube_cluster_yaml" {
filename = "~/.kube/kube_config_cluster.yml"
sensitive_content = rke_cluster.cluster.kube_config_yaml
}
The key is of course correct and I am able to connect to the desired host:
ssh -i ~/.ssh/root_key root@poc-k8s.my-domain.com
what am I missing here?
[Update]
The cluster resource has a delay_on_creation property that can be used:
resource "rke_cluster" "cluster" {
delay_on_creation = 180
(...)
}
I'm facing a similar issue. On the second run of terraform apply it works correctly. In my case the issue is that Docker is not up fast enough for the RKE provider.
I've found the following workaround from citynetwork/citycloud-examples:
resource "rke_cluster" "cluster" {
(...)
depends_on = [null_resource.wait-for-docker]
}
resource "null_resource" "wait-for-docker" {
provisioner "local-exec" {
command = "sleep 180"
}
depends_on = [
# list of servers docker being installed on
(...)
]
}
It waits a fixed 180 seconds, which is not ideal, though.
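An alternative (a sketch, assuming the same node address and key as in the question, and that the machine running Terraform can ssh to the node) is to poll until the Docker daemon actually answers instead of sleeping a fixed interval:
resource "null_resource" "wait-for-docker" {
  provisioner "local-exec" {
    # Retry every 10s until `docker info` succeeds on the node.
    command = "until ssh -i ~/.ssh/root_key root@poc-k8s.my-domain.com docker info > /dev/null 2>&1; do sleep 10; done"
  }
  depends_on = [
    # list of servers docker being installed on
    (...)
  ]
}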
On my CentOS 7 server I have set up Telegraf and InfluxDB. InfluxDB successfully receives data from Telegraf and stores it in the database. But when I reconfigure both services to use HTTPS, I see the following error in Telegraf's logs:
Dec 29 15:13:11 localhost.localdomain telegraf[31779]: 2020-12-29T13:13:11Z E! [outputs.influxdb] When writing to [https://127.0.0.1:8086]: Post "https://127.0.0.1:8086/write?db=GRAFANA": dial tcp 127.0.0.1:8086: connect: connection refused
Dec 29 15:13:11 localhost.localdomain telegraf[31779]: 2020-12-29T13:13:11Z E! [agent] Error writing to outputs.influxdb: could not write any address
InfluxDB doesn't show any errors in its logs.
Below is my telegraf.conf file:
[agent]
hostname = "local"
flush_interval = "15s"
interval = "15s"
# Input Plugins
[[inputs.cpu]]
percpu = true
totalcpu = true
collect_cpu_time = false
report_active = false
[[inputs.disk]]
ignore_fs = ["tmpfs", "devtmpfs", "devfs"]
[[inputs.io]]
[[inputs.mem]]
[[inputs.net]]
[[inputs.system]]
[[inputs.swap]]
[[inputs.netstat]]
[[inputs.processes]]
[[inputs.kernel]]
# Output Plugin InfluxDB
[[outputs.influxdb]]
database = "GRAFANA"
urls = [ "https://127.0.0.1:8086" ]
insecure_skip_verify = true
username = "telegrafuser"
password = "metricsmetricsmetricsmetrics"
And this is the uncommented [http] section of influxdb.conf:
[http]
# Determines whether HTTP endpoint is enabled.
enabled = false
# Determines whether the Flux query endpoint is enabled.
flux-enabled = true
# The bind address used by the HTTP service.
bind-address = ":8086"
# Determines whether user authentication is enabled over HTTP/HTTPS.
auth-enabled = false
# Determines whether HTTPS is enabled.
https-enabled = true
# The SSL certificate to use when HTTPS is enabled.
https-certificate = "/etc/ssl/server-cert.pem"
# Use a separate private key location.
https-private-key = "/etc/ssl/server-key.pem"
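One thing that stands out (an observation, not a verified fix): "connection refused" means nothing is listening on 127.0.0.1:8086 at all, and the [http] section above begins with enabled = false, which turns off the whole HTTP service regardless of https-enabled. A sketch of the same section with the endpoint switched on:
[http]
enabled = true
flux-enabled = true
bind-address = ":8086"
auth-enabled = false
https-enabled = true
https-certificate = "/etc/ssl/server-cert.pem"
https-private-key = "/etc/ssl/server-key.pem"
After restarting InfluxDB, curl -k https://127.0.0.1:8086/ping returning 204 would confirm the endpoint is up before pointing Telegraf at it.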
I cannot get Terraform's SSH to connect via a private AWS key pair for Chef provisioning - the error looks to just be a timeout:
aws_instance.app (chef): Connecting to remote host via SSH...
aws_instance.app (chef): Host: 96.175.120.236:32:
aws_instance.app (chef): User: ubuntu
aws_instance.app (chef): Password: false
aws_instance.app (chef): Private key: true
aws_instance.app (chef): SSH Agent: true
aws_instance.app: Still creating... (5m30s elapsed)
Error applying plan:
1 error(s) occurred:
* dial tcp 96.175.120.236:32: i/o timeout
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Here is my Terraform plan - note the SSH settings. The key_name setting is set to my AWS key pair name, and ssh_for_chef.pem is the private key.
variable "AWS_ACCESS_KEY" {}
variable "AWS_SECRET_KEY" {}
provider "aws" {
region = "us-east-1"
access_key = "${var.AWS_ACCESS_KEY}"
secret_key = "${var.AWS_SECRET_KEY}"
}
resource "aws_instance" "app" {
ami = "ami-88aa1ce0"
count = "1"
instance_type = "t1.micro"
key_name = "ssh_for_chef"
security_groups = ["sg-c43490e1"]
subnet_id = "subnet-75dd96e2"
associate_public_ip_address = true
provisioner "chef" {
server_url = "https://api.chef.io/organizations/xxxxxxx"
validation_client_name = "xxxxxxx-validator"
validation_key = "/home/user01/Documents/Devel/chef-repo/.chef/xxxxxxxx-validator.pem"
node_name = "dubba_u_7"
run_list = [ "motd_rhel" ]
user_name = "user01"
user_key = "/home/user01/Documents/Devel/chef-repo/.chef/user01.pem"
ssl_verify_mode = "false"
}
connection {
type = "ssh"
user = "ubuntu"
private_key = "${file("/home/user01/Documents/Devel/ssh_for_chef.pem")}"
}
}
Any ideas?
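One detail worth ruling out first (an observation from the log, not a confirmed fix): both the connection attempt and the timeout dial port 32 rather than SSH's default 22. If that is not intentional, pinning the port in the connection block makes the assumption explicit:
connection {
  type        = "ssh"
  user        = "ubuntu"
  port        = 22  # the log shows port 32; 22 assumed here
  private_key = "${file("/home/user01/Documents/Devel/ssh_for_chef.pem")}"
}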
I'm not sure if we had the same problem, since you didn't specify if you were able to ssh to the instance.
In my case, I was running terraform from within the VPC, and the connection was allowed by a security group, which can't be used with a public IP.
The solution is simple (but you will have to use the new conditional interpolations of Terraform v0.8.0).
Define this variable:
variable "use_public_ip" { default = true }
Then, inside the connection section of the chef provisioner, add the following line -
host = "${var.use_public_ip ? aws_instance.instance.public_ip : aws_instance.instance.private_ip}"
If you wish to use the public IP, set the variable as true, otherwise, set it to false.
I use this for AWS -
connection {
user = "ubuntu"
host = "${var.use_public_ip ? aws_instance.instance.public_ip : aws_instance.instance.private_ip}","
}
In my Cassandra config I have enabled user authentication, and I connect with cqlsh over SSL.
I'm having trouble implementing the same with gocql; the following is my code:
cluster := gocql.NewCluster("127.0.0.1")
cluster.Authenticator = gocql.PasswordAuthenticator{
Username: "myuser",
Password: "mypassword",
}
cluster.SslOpts = &gocql.SslOptions {
CertPath: "/path/to/cert.pem",
}
When I try to connect I get following error:
gocql: unable to create session: connectionpool: unable to load X509 key pair: open : no such file or directory
In Python I can do this with something like:
from ssl import PROTOCOL_TLSv1

from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
USER = 'username'
PASS = 'password'
ssl_opts = {'ca_certs': '/path/to/cert.pem',
'ssl_version': PROTOCOL_TLSv1
}
credentials = PlainTextAuthProvider(username = USER, password = PASS)
# define host, port, cqlsh protocol version
cluster = Cluster(contact_points= HOST, protocol_version= CQLSH_PROTOCOL_VERSION, auth_provider = credentials, port = CASSANDRA_PORT)
I checked the gocql and TLS documentation here and here but I'm unsure about how to set ssl options.
You're adding a cert without a private key, which is where the "no such file or directory" error is coming from.
Your python code is adding a CA; you should do the same with the Go code:
gocql.SslOptions {
CaPath: "/path/to/cert.pem",
}
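For completeness, a minimal sketch putting both pieces together (contact point, credentials, and CA path are the placeholders from the question):
package main

import (
	"log"

	"github.com/gocql/gocql"
)

func main() {
	cluster := gocql.NewCluster("127.0.0.1")
	cluster.Authenticator = gocql.PasswordAuthenticator{
		Username: "myuser",
		Password: "mypassword",
	}
	// Supply only the CA; CertPath/KeyPath are for client certificates
	// and must be given as a pair, which is what caused the original
	// "unable to load X509 key pair" error.
	cluster.SslOpts = &gocql.SslOptions{
		CaPath: "/path/to/cert.pem",
	}

	session, err := cluster.CreateSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()
}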