I am trying to connect to DB2 with SSL using PySpark on Linux. I tried the following:
user = "xxxx"
password = "xxxxx"
jdbcURL = "jdbc:db2://nn.nn.nnn.nnn:nnnn/<database>"
prop = {"user": user, "password": password, "driver":
"com.ibm.db2.jcc.DB2Driver", "sslConnection": "true"}
table = "<schema>.<table name>"
df = spark.read.jdbc(url=jdbcURL, table=table, properties=prop)
I also tried
user = "xxxx"
password = "xxxxx"
jdbcURL = "jdbc:db2://nn.nn.nnn.nnn:nnnn/<database>"
prop = {"user": user, "password": password, "driver":
"com.ibm.db2.jcc.DB2Driver", "sslConnection": "true"
"Security": "SSL", "SSLServerCertificate": "<path to arm file>"}}
table = "<schema>.<table name>"
df = spark.read.jdbc(url=jdbcURL, table=table, properties=prop)
I get the same error in both cases
or socket output stream.
Error location: Reply.fill() - socketInputStream.read (-1). Message:
sun.security.validator.ValidatorException: PKIX path building failed:
sun.security.provider.certpath.SunCertPathBuilderException: unable to
find valid certification path to requested target. ERRORCODE=-4499,
SQLSTATE=08001
I am not sure of the syntax to specify the .arm file path. I'm stuck; please help.
Ganesh
P.S. I can connect with Python and the ibm_db module using:
import ibm_db
conn = ibm_db.connect("DATABASE=<database>;HOSTNAME=nn.nn.nnn.nnn:nnnn;"
                      "SECURITY=SSL;SSLServerCertificate=<path to arm file>;"
                      "UID=<user>;PWD=<password>", "", "")
This works.
This was solved by re-installing the certificate into the Java truststore.
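For reference, a minimal sketch of the truststore-based setup in PySpark, assuming the .arm certificate has already been imported into a JKS truststore (for example with keytool); the truststore path and password below are placeholders, and sslTrustStoreLocation / sslTrustStorePassword are IBM JCC driver properties:
user = "xxxx"
password = "xxxxx"
jdbcURL = "jdbc:db2://nn.nn.nnn.nnn:nnnn/<database>"
# Assumes the .arm file was imported into /path/to/db2.jks beforehand,
# e.g. keytool -importcert -file <path to arm file> -keystore /path/to/db2.jks
prop = {"user": user, "password": password,
        "driver": "com.ibm.db2.jcc.DB2Driver",
        "sslConnection": "true",
        "sslTrustStoreLocation": "/path/to/db2.jks",        # placeholder path
        "sslTrustStorePassword": "<truststore password>"}   # placeholder
table = "<schema>.<table name>"
df = spark.read.jdbc(url=jdbcURL, table=table, properties=prop)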
I'm currently stuck with some configuration in my Kubernetes cluster. In my lab I want to configure oauth2-proxy to use Keycloak as an identity provider. I have everything ready, but when trying to log in using Keycloak it shows a 403 Forbidden error: "Login Failed: The upstream identity provider returned an error: invalid_scope"
Pod logs:
[2022/11/03 08:49:31] [oauthproxy.go:752] Error while parsing OAuth2 callback: invalid_scope
08:30:38,734 WARN [org.keycloak.events] (default task-43) type=LOGIN_ERROR, realmId=test, clientId=oauth2-proxy, userId=null, ipAddress=10.50.21.171, error=invalid_request, response_type=code, redirect_uri=https://oauth.test.dev/oauth2/callback, response_mode=query
08:34:11,933 ERROR [org.keycloak.services] (default task-41) KC-SERVICES0093: Invalid parameter value for: scope
I've looked through the documentation and I don't see why it is complaining about the scopes, as I have them set correctly.
These are my oauth2-proxy values:
provider = "keycloak-oidc"
provider_display_name = "Keycloak"
cookie_domains = ".test.dev"
oidc_issuer_url = "https://keycloak.test.dev/auth/realms/test"
reverse_proxy = true
email_domains = [ "*" ]
scope = "openid profile email groups"
whitelist_domains = ["test.dev",".test.dev"]
pass_authorization_header = true
pass_access_token = true
pass_user_headers = true
set_authorization_header = true
set_xauthrequest = true
cookie_refresh = "1m"
cookie_expire = "30m"
And in Keycloak I have the oauth2-proxy client created with Groups and Audience mappers.
I see these errors in Keycloak:
Type            LOGIN_ERROR
Client          oauth2-proxy
Error           invalid_request
response_type   code
redirect_uri    https://oauth.test.dev/oauth2/callback
response_mode   query
If someone has experience with this and can point me in the right direction and tell me what I'm doing wrong, I would be very grateful. Thank you.
I've tried different configurations and overriding the scope parameter in the container, but I still get the same issue. I expect to be able to log in correctly using Keycloak.
RabbitMQ version 3.10.0.
How do I write rabbitmq.conf correctly without using advanced.config?
Working BindDN on another server --> uid=myuserinfreeipa,cn=users,cn=accounts,dc=mydc1,dc=mydc2
Working SearchFilter on another server --> "(&(uid=%u)(memberOf=cn=mygroupinfreeipa,cn=groups,cn=accounts,dc=mydc1,dc=mydc2)(!(nsaccountlock=TRUE)))"
Working BaseDN on another server --> "cn=users,cn=accounts,dc=mydc1,dc=mydc2"
rabbitmq.conf
auth_backends.1 = ldap
auth_ldap.servers.1 = my.server.com
auth_ldap.timeout = 500
auth_ldap.port = 389
auth_ldap.user_dn_pattern = CN=${username},OU=Users,dc=mydc1,dc=mydc2
auth_ldap.use_ssl = false
ssl_options.cacertfile = /etc/rabbitmq/ca.crt
auth_ldap.dn_lookup_bind.user_dn = test
auth_ldap.dn_lookup_bind.password = password
auth_ldap.dn_lookup_attribute = distinguishedName
auth_ldap.dn_lookup_base = cn=users,cn=accounts,dc=mydc1,dc=mydc2
auth_ldap.log = network
advanced.config
[
  {rabbitmq_auth_backend_ldap, [
    {tag_queries, [
      {administrator, {in_group, "CN=mygroupinfreeipa,dc=mydc1,dc=mydc2", "member"}},
      {management, {constant, true}}
    ]}
  ]} %% rabbitmq_auth_backend_ldap
].
tail -f /var/log/rabbitmq/rabbit@amqptest.log
LDAP CHECK: login for test
LDAP connecting to servers: ["my.server.com"]
LDAP network traffic: bind request = {'BindRequest',3,"xxxx",
{simple,"xxxx"}}
LDAP network traffic: bind reply = {ok,
{'LDAPMessage',1,
{bindResponse,
{'BindResponse',invalidCredentials,
[],[],asn1_NOVALUE,asn1_NOVALUE}},
asn1_NOVALUE}}
LDAP bind returned "invalid credentials": xxxx
LDAP connecting to servers: ["my.server.com"]
LDAP network traffic: bind request = {'BindRequest',3,"xxxx",
{simple,"xxxx"}}
LDAP bind error: "xxxx" {'EXIT',
{{badmatch,
{error,
{asn1,
{function_clause,
[{'ELDAPv3',encode_restricted_string,
[{refused,"test",[]},[<<4>>]]
I've created a rabbitmq.conf and advanced.config for RabbitMQ, intended to allow LDAP authentication with internal fallback. Because RabbitMQ tries to use the installing user's AppData, which is a terrible design for a Windows service, I've also redirected its locations with environment variables:
RABBITMQ_BASE = D:\RabbitMQData\
RABBITMQ_CONFIG_FILE = D:\RabbitMQData\config\rabbitmq.conf
RABBITMQ_ADVANCED_CONFIG_FILE = D:\RabbitMQData\config\advanced.config
The config locations appear to be working correctly as they are referenced in the startup information and cause no errors on startup.
rabbitmq.conf (trimmed to relevant portions)
auth_backends.1 = ldap
auth_backends.2 = internal
auth_ldap.servers.1 = domain.local
auth_ldap.use_ssl = true
auth_ldap.port = 636
auth_ldap.dn_lookup_bind = as_user
auth_ldap.log = network
log.dir = D:\\RabbitMQData\\log
log.file.level = info
log.file.rotation.date = $D0
log.file.rotation.size = 10485760
advanced.config
[
{rabbitmq_auth_backend_ldap, [
{ssl_options, [{cacertfile,"D:\\RabbitMQData\\SSL\\ca.pem"},
{certfile,"D:\\RabbitMQData\\SSL\\server_certificate.pem"},
{keyfile,"D:\\RabbitMQData\\SSL\\server_key.pem"},
{verify, verify_peer},
{fail_if_no_peer_cert, true}
]},
{user_bind_pattern, ""},
{user_dn_pattern, ""},
{dn_lookup_attribute, "sAMAccountName"},
{dn_lookup_base, "DC=domain,DC=local"},
{group_lookup_base,"OU=Groups,DC=domain,DC=local"},
{vhost_access_query, {in_group, "cn=RabbitUsers,OU=Groups,DC=domain,DC=local"}},
{tag_queries, [
{administrator, {in_group, "CN=RabbitAdmins,OU=Groups,DC=domain,DC=local"}},
{management, {in_group, "CN=RabbitAdmins,OU=Groups,DC=domain,DC=local"}}
]}
]}
].
I'm using auth_ldap.log = network, so there should be an ldap_auth.log file in my log directory that would help me troubleshoot, but it's not there. Why would this occur? I've not seen any documented settings for auth_ldap logging other than .log, so I would assume it would be with the other logs.
I'm currently running into issues with LDAP, specifically the error LDAP bind error: "xxxx" anonymous_auth. As I'm using simple bind via auth_ldap.dn_lookup_bind = as_user, I should not be getting anonymous authentication. Without the detailed log, however, I can't get additional information.
Okay, it looks like I made two mistakes here:
Going back and re-reading, it looks like I misinterpreted the documentation and believed auth_ldap.log referred to a separate file rather than just a setting. All the LDAP logging goes into the normal RabbitMQ log.
I had pulled Luke Bakken's config from https://groups.google.com/g/rabbitmq-users/c/Dby1OWQKLs8/discussion but the following lines ended up as:
{user_bind_pattern, ""},
{user_dn_pattern, ""}
instead of
{user_bind_pattern, "${username}"},
{user_dn_pattern, "${ad_user}"},
I had used a PowerShell script with a here-string to create the config file, which erroneously interpreted those variables as empty strings. Fixing that let me log on with "domain\username".
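For illustration, a minimal sketch of that here-string pitfall, assuming the fragment is written out with Set-Content (the output path below is a placeholder, not the actual script):
# Expandable (double-quoted) here-string: PowerShell substitutes ${username} and
# ${ad_user}; they are undefined in the script, so both become empty strings.
@"
{user_bind_pattern, "${username}"},
{user_dn_pattern, "${ad_user}"},
"@ | Set-Content D:\RabbitMQData\config\ldap_fragment.config

# Literal (single-quoted) here-string: no substitution, the text is written verbatim.
@'
{user_bind_pattern, "${username}"},
{user_dn_pattern, "${ad_user}"},
'@ | Set-Content D:\RabbitMQData\config\ldap_fragment.config
Escaping the dollar signs with a backtick (`$) inside a double-quoted here-string would also keep them literal.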
I'm trying to use Terraform to create a DigitalOcean node on which Consul is installed.
I'm using the following .tf file, but it hangs and does not copy the Consul .zip file onto the droplet.
I got the following error message after a couple of minutes:
ssh: handshake failed: ssh: unable to authenticate, attempted methods
[none publickey], no supported methods remain
The droplets are created correctly, though. I can log in on the command line with the key I specified (thus not specifying a password). I'm guessing the connection part might be faulty, but I'm not sure what I'm missing.
Any idea?
variable "do_token" {}
# Configure the DigitalOcean Provider
provider "digitalocean" {
token = "${var.do_token}"
}
# Create nodes
resource "digitalocean_droplet" "consul" {
count = "1"
image = "ubuntu-14-04-x64"
name = "consul-${count.index+1}"
region = "lon1"
size = "1gb"
ssh_keys = ["7b:51:d3:e3:ae:6e:c6:e2:61:2d:40:56:17:54:fc:e3"]
connection {
type = "ssh"
user = "root"
agent = true
}
provisioner "file" {
source = "consul_0.7.1_linux_amd64.zip"
destination = "/tmp/consul_0.7.1_linux_amd64.zip"
}
provisioner "remote-exec" {
inline = [
"sudo unzip -d /usr/local/bin /tmp/consul_0.7.1_linux_amd64.zip"
]
}
}
Terraform requires that you specify the private SSH key to use for the connection with private_key. You can create a new variable containing the path to your private key for use with Terraform's file interpolation function, as sketched after the connection block below:
connection {
  type        = "ssh"
  user        = "root"
  agent       = true
  private_key = "${file("${var.private_key_path}")}"
}
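A minimal sketch of the accompanying variable declaration, assuming you pass the key path in; the default shown is only a placeholder and can be overridden with -var or a .tfvars file:
# Path to the private SSH key used by the provisioner connection (placeholder default)
variable "private_key_path" {
  default = "~/.ssh/id_rsa"
}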
You face this issue because you have an SSH key protected by a passphrase. To solve it, you should generate a key without a passphrase.
In my Cassandra config I have enabled user authentication, and I connect with cqlsh over SSL.
I'm having trouble implementing the same with gocql; the following is my code:
cluster := gocql.NewCluster("127.0.0.1")
cluster.Authenticator = gocql.PasswordAuthenticator{
    Username: "myuser",
    Password: "mypassword",
}
cluster.SslOpts = &gocql.SslOptions{
    CertPath: "/path/to/cert.pem",
}
When I try to connect I get the following error:
gocql: unable to create session: connectionpool: unable to load X509 key pair: open : no such file or directory
In Python I can do this with something like:
from ssl import PROTOCOL_TLSv1
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
USER = 'username'
PASS = 'password'
ssl_opts = {'ca_certs': '/path/to/cert.pem',
            'ssl_version': PROTOCOL_TLSv1}
credentials = PlainTextAuthProvider(username=USER, password=PASS)
# define HOST, CASSANDRA_PORT and CQLSH_PROTOCOL_VERSION elsewhere
cluster = Cluster(contact_points=HOST, protocol_version=CQLSH_PROTOCOL_VERSION,
                  auth_provider=credentials, port=CASSANDRA_PORT,
                  ssl_options=ssl_opts)
I checked the gocql and TLS documentation here and here, but I'm unsure about how to set the SSL options.
You're adding a cert without a private key, which is where the "no such file or directory" error is coming from.
Your Python code is adding a CA; you should do the same in the Go code:
cluster.SslOpts = &gocql.SslOptions{
    CaPath: "/path/to/cert.pem",
}
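Putting it together, a hedged sketch of the full session setup; the host, credentials and certificate path are the placeholders from the question, and EnableHostVerification is only an assumption about what you may want to enable later:
package main

import "github.com/gocql/gocql"

func main() {
    cluster := gocql.NewCluster("127.0.0.1")
    cluster.Authenticator = gocql.PasswordAuthenticator{
        Username: "myuser",
        Password: "mypassword",
    }
    cluster.SslOpts = &gocql.SslOptions{
        // CaPath plays the role of ca_certs in the Python driver.
        CaPath: "/path/to/cert.pem",
        // CertPath and KeyPath are only needed if the server requires client certificates.
        EnableHostVerification: false, // assumption: enable once the certificate matches the host
    }

    session, err := cluster.CreateSession()
    if err != nil {
        panic(err)
    }
    defer session.Close()
}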