GCP Dataproc job not finding SSL PEM certs stored in buckets

I have a GCP Dataproc cluster, and I'm trying to deploy a PySpark job that produces to a topic using SSL.
The PEM files are stored in the bucket gs://dataproc_kafka_code/code, and I'm accessing them in the code shown below.
However, the code is not able to find the PEM files; the error is shown below:
%3|1638738651.097|SSL|rdkafka#producer-1| [thrd:app]: error:02001002:system library:fopen:No such file or directory: fopen('gs://dataproc_kafka_code/code/caroot.pem','r')
%3|1638738651.097|SSL|rdkafka#producer-1| [thrd:app]: error:2006D080:BIO routines:BIO_new_file:no such file
Traceback (most recent call last):
File "/tmp/my-job6/KafkaProducer.py", line 21, in <module>
producer = Producer(conf)
cimpl.KafkaException: KafkaError{code=_INVALID_ARG,val=-186,str="Failed to create producer: ssl.ca.location failed: error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib"}
Code:
from confluent_kafka import Producer
kafkaBrokers='<host>:<port>'
# CA Root certificate ca.crt
caRootLocation='gs://dataproc_kafka_code/code/caroot.pem'
# user public (user.crt)
certLocation='gs://dataproc_kafka_code/code/my-bridge-user-crt.pem'
# user.key
keyLocation='gs://dataproc_kafka_code/code/user-with-certs.pem'
password='<password>'
conf = {
    'bootstrap.servers': kafkaBrokers,
    'security.protocol': 'SSL',
    'ssl.ca.location': caRootLocation,
    'ssl.certificate.location': certLocation,
    'ssl.key.location': keyLocation,
    'ssl.key.password': password
}

topic = 'my-topic'
producer = Producer(conf)

for n in range(100):
    producer.produce(topic, key=str(n), value=" val -> " + str(n * (-1)) + " on dec 5 from dataproc ")
producer.flush()
What needs to be done to fix this?
Also, is this the right way to give the code access to the SSL certs?
Thanks in advance!

From the error
fopen: No such file or directory: fopen('gs://dataproc_kafka_code/code/caroot.pem','r'), it seems the producer library is trying to open the file on the local filesystem; it cannot read gs:// URLs directly.
There are a couple of ways to fix this: download the keys/certificates to local files and then point the conf at them (a sketch follows this list):
Download them using the storage client API: https://googleapis.dev/python/storage/latest/client.html
Or use gsutil (which comes preinstalled on the VM) to download the files: https://cloud.google.com/storage/docs/gsutil/commands/cp
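As a minimal sketch of the first option, assuming the google-cloud-storage package is available on the cluster (the local directory /tmp/certs and the helper fetch are arbitrary choices for illustration):
import os
from google.cloud import storage
from confluent_kafka import Producer

LOCAL_DIR = '/tmp/certs'  # assumption: any writable local directory works
os.makedirs(LOCAL_DIR, exist_ok=True)

bucket = storage.Client().bucket('dataproc_kafka_code')

def fetch(blob_name):
    # Download gs://dataproc_kafka_code/<blob_name> and return the local path.
    local_path = os.path.join(LOCAL_DIR, os.path.basename(blob_name))
    bucket.blob(blob_name).download_to_filename(local_path)
    return local_path

kafkaBrokers = '<host>:<port>'
password = '<password>'

# librdkafka opens these paths with fopen(), so they must be local files.
conf = {
    'bootstrap.servers': kafkaBrokers,
    'security.protocol': 'SSL',
    'ssl.ca.location': fetch('code/caroot.pem'),
    'ssl.certificate.location': fetch('code/my-bridge-user-crt.pem'),
    'ssl.key.location': fetch('code/user-with-certs.pem'),
    'ssl.key.password': password,
}
producer = Producer(conf)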

Related

tls unsigned certificate when using terraform

The microstack.openstack project recently enabled/required TLS authentication, as outlined here. I am working on deploying an OpenStack cluster to MicroStack using a Terraform example here. As a result of the change, I receive a "certificate signed by unknown authority" error when trying to create an OpenStack networking client data source.
data "openstack_networking_network_v2" "terraform" {
name = "${var.pool}"
}
The error I get when calling terraform plan:
Error: Error creating OpenStack networking client: Post "https://XXX.XXX.XXX.132:5000/v3/auth/tokens": OpenStack connection error, retries exhausted. Aborting. Last error was: x509: certificate signed by unknown authority
with data.openstack_networking_network_v2.terraform,
on datasources.tf line 1, in data "openstack_networking_network_v2" "terraform":
1: data "openstack_networking_network_v2" "terraform" {
Is there a way to ignore the certificate error, so that I can successfully use Terraform to create the OpenStack cluster? I have tried updating the generate-self-signed parameter, but I haven't seen any change in behavior:
sudo snap set microstack config.tls.generate-self-signed=false
I think the insecure provider parameter is what you are looking for:
(Optional) Trust self-signed SSL certificates. If omitted, the OS_INSECURE environment variable is used.
Try:
provider "openstack" {
insecure = true
}
Disclaimer: I haven't tried that.
The problem was that I did not source the admin-openrc.sh file that I had downloaded from the horizon web page:
$ source admin-openrc.sh
I faced the same problem; in case it could help, here is my contribution:
sudo snap get microstack config.tls
Key Value
config.tls.cacert-path /var/snap/microstack/common/etc/ssl/certs/cacert.pem
config.tls.cert-path /var/snap/microstack/common/etc/ssl/certs/cert.pem
config.tls.compute {...}
config.tls.generate-self-signed true
config.tls.key-path /var/snap/microstack/common/etc/ssl/private/key.pem
In your Terraform directory, do:
cat /var/snap/microstack/common/etc/ssl/certs/cacert.pem : copy/paste -> cacert.pem
cat /var/snap/microstack/common/etc/ssl/certs/cert.pem : copy/paste -> cert.pem
cat /var/snap/microstack/common/etc/ssl/private/key.pem : copy/paste -> key.pem
And create a main.tf file in your Terraform directory:
provider "openstack" {
  user_name   = "admin"
  tenant_name = "admin"
  password    = "pass"  # get it with: sudo snap get microstack config.credentials.keystone-password
  auth_url    = "https://host_ip:5000/v3"
  # insecure  = true    # uncomment this and comment out the cacert_file + key lines
  cacert_file = "/terraform_dir/cacert.pem"
  # cert      = "/terraform_dir/cert.pem"  # if needed
  key         = "/terraform_dir/private.pem"
  region      = "microstack"  # or regionOne
}
To finish, run terraform plan / terraform apply.

Karate: My service API requires client auth certs; the certs are not available on the classpath. I have to mount them in Docker [duplicate]

This question already has an answer here:
[karate][standalone] Error: could not find or read file
I am trying to configure SSL without the classpath; I have to mount the certs externally. The following is my code:
'''
* def keyStoreFilePath = './certificates/clientcert.p12'
* def trustStoreFilePath = './certificates/truststore.jks'
* configure ssl = { keyStore: keyStoreFilePath, keyStorePassword: '123', keyStoreType: 'pkcs12', trustStore:trustStoreFilePath, trustStoreType:'pkcs12', trustStorePassword:'123' }
'''
But I get a file-not-found error, as Karate is looking for the certs in the target folder; my certs are at the root path. Is there a way to configure SSL without using the classpath?
'''
java.lang.RuntimeException: java.io.FileNotFoundException: /Users/kiranjaghni/work/javaworkspace/poc/karateDSL/transfer-case-grid-api-auto/transfercasegrid-api-automation/target/test-classes/examples/login/trustStoreFilePath (No such file or directory)
classpath:examples/login/login.feature:14 ==> expected: <0> but was: <2>
at examples.ExamplesTest.testParallel(ExamplesTest.java:15)
'''
I think you should use the file: prefix, which can read from the file system. Then it is up to you; you can have the files anywhere, as long as they are accessible.
* def keyStoreFilePath = 'file:/any/absolute/path/certificates/clientcert.p12'
Please read the docs: https://github.com/intuit/karate#reading-files
Also please note that you need to use embedded-expressions: https://github.com/intuit/karate#embedded-expressions
* configure ssl = { keyStore: '#(keyStoreFilePath)' }

How do we send a canvas image data as an attachment to a server on Pharo?

How do we send or upload a data file to a server in Pharo? I saw an example of sending a file from a directory on the machine, and it works fine.
ZnClient new
    url: MyUrl;
    uploadEntityFrom: FileLocator home / 'path to the file';
    put
In my case I don't want to send/upload a file stored on the machine; instead I want to send/upload a file hosted somewhere, or data I retrieved over the network, as an attachment to another server.
How can we do that?
Based on your previous questions I presume you are using Linux. The issue here is not within Smalltalk/Pharo, but in the network mapping.
FTP
If you want to use FTP, don't forget that it sends the password in plaintext; set it up in a way you can mount it. There are probably plenty of ways to do this, but you can try curlftpfs. You need the kernel module fuse for that, so make sure you have it loaded; if it is not loaded, you can do so via modprobe fuse.
The usage would be:
curlftpfs ftp.yoursite.net /mnt/ftp/ -o user=username:password,allow_other
where you fill in username/password. The option allow_other allows other users on the system to use your mount.
(For more details you can see the Arch wiki and its curlftpfs page.)
Webdav
For WebDAV I would use the same approach, this time using davfs2.
You would manually mount it via the mount command:
mount -t davfs https://yoursite.net:<port>/path /mnt/webdav
There are two reasonable ways to set it up: systemd or fstab. The information below is taken from the davfs2 Arch wiki:
For systemd:
/etc/systemd/system/mnt-webdav-service.mount
[Unit]
Description=Mount WebDAV Service
After=network-online.target
Wants=network-online.target
[Mount]
What=http(s)://address:<port>/path
Where=/mnt/webdav/service
Options=uid=1000,file_mode=0664,dir_mode=2775,grpid
Type=davfs
TimeoutSec=15
[Install]
WantedBy=multi-user.target
You can create a systemd automount unit to set a timeout:
/etc/systemd/system/mnt-webdav-service.automount
[Unit]
Description=Mount WebDAV Service
After=network-online.target
Wants=network-online.target
[Automount]
Where=/mnt/webdav
TimeoutIdleSec=300
[Install]
WantedBy=remote-fs.target
The fstab way is easy if you have edited fstab before (it behaves the same as any other fstab entry):
/etc/fstab
https://webdav.example/path /mnt/webdav davfs rw,user,uid=username,noauto 0 0
For WebDAV you can even store the credentials securely:
Create a secrets file to store credentials for a WebDAV service, using ~/.davfs2/secrets for user mounts and /etc/davfs2/secrets for root:
/etc/davfs2/secrets
https://webdav.example/path davusername davpassword
Make sure the secrets file has the correct permissions; for root mounting:
# chmod 600 /etc/davfs2/secrets
# chown root:root /etc/davfs2/secrets
And for user mounting:
$ chmod 600 ~/.davfs2/secrets
Back to your Pharo/Smalltalk code:
I presume you read the above and have either /mnt/ftp or /mnt/webdav mounted.
For FTP, for example, your code would simply take the file from the mounted directory:
ZnClient new
    url: MyUrl;
    uploadEntityFrom: '/mnt/ftp/your_file_to_upload' asFileReference;
    put
Edit, based on the comments:
The issue is that the configuration of the ZnClient is in Pharo itself and the JSON file is also generated there.
One quick and dirty solution would be to use the above-mentioned mount via a shell command:
With ftp for example:
| commandOutput |
commandOutput := (PipeableOSProcess command:
    'curlftpfs ftp.yoursite.net /mnt/ftp/ -o user=username:password,allow_other') output.
Transcript show: commandOutput.
The other approach is more sensible: use Pharo's FTP or WebDAV support via FileSystemNetwork.
To load FTP only:
Gofer it
    smalltalkhubUser: 'UdoSchneider' project: 'FileSystemNetwork';
    configuration;
    load.
#ConfigurationOfFileSystemNetwork asClass project stableVersion load: 'FTP'
To load WebDAV only:
Gofer it
    smalltalkhubUser: 'UdoSchneider' project: 'FileSystemNetwork';
    configuration;
    load.
#ConfigurationOfFileSystemNetwork asClass project stableVersion load: 'Webdav'
To get everything, including tests:
Gofer it
    smalltalkhubUser: 'UdoSchneider' project: 'FileSystemNetwork';
    configuration;
    loadStable.
With that you should be able to get a file, for example over FTP:
| ftpConnection wDir file |
"Open a connection"
ftpConnection := FileSystem ftp: 'ftp://ftp.sh.cvut.cz/'.
"Get the working directory"
wDir := ftpConnection workingDirectory.
file := wDir / 'Arch' / 'lastsync'.
"Close the connection - do this always!"
ftpConnection close.
Then your upload (via FTP) would look like this:
| ftpConnection wDir file |
"Open the connection"
ftpConnection := FileSystem ftp: 'ftp://your_ftp'.
"Get the working directory"
wDir := ftpConnection workingDirectory.
file := wDir / '<your_file_path>'.
ZnClient new
    url: MyUrl;
    uploadEntityFrom: file;
    put.
"Close the connection - do this always!"
ftpConnection close.
The WebDAV case would be similar.

JMeter testing integration with IBM DataPower

I need your help setting up the SSL Manager in JMeter for performance testing with IBM DataPower.
I tried the steps below to add the cert:
• Added the (*.jks / *.p12) file in the JMeter GUI > Options > SSL Manager.
• Tried setting the .jks file in the system.properties file too.
Path: *\jMETER\apache-jmeter-3.0\apache-jmeter-3.0\bin\system.properties
# Truststore properties (trusted certificates)
#javax.net.ssl.trustStore=/path/to/[jsse]cacerts
#javax.net.ssl.trustStorePassword
#javax.net.ssl.trustStoreProvider
#javax.net.ssl.trustStoreType [default = KeyStore.getDefaultType()]
# Keystore properties (client certificates)
# Location
javax.net.ssl.keyStore=****.jks -- Added
#
#The password to your keystore
javax.net.ssl.keyStorePassword=****-- Added
#
#javax.net.ssl.keyStoreProvider
#javax.net.ssl.keyStoreType [default = KeyStore.getDefaultType()]
I don't see the SSL handshake between JMeter and DataPower even after following the above steps. I'm getting the error below from DataPower:
12:47:26 AM ssl error 51751363 10.123.98.73 0x806000ca valcred (###_CVC_Reverse_Server): SSL Proxy Profile '###_SSLPP_Reverse_Server': connection error: peer did not send a certificate
12:47:26 AM mpgw error 51751363 10.123.98.73 0x80e00161 source-https (###_HTTPS_FSH_CON_****): Request processing failed: Connection terminated before request headers read because of the connection error occurs, from URL: 10.123.98.73:58394
12:47:26 AM ssl error 51751363 10.123.98.73 0x8120002f sslproxy (####_SSLPP_Reverse_Server): SSL library error: error:140890C7:SSL routines:SSL3_GET_CLIENT_CERTIFICATE:peer did not
Can you please advise how to send the cert (.jks / .p12) file from JMeter?
Change "Implementation" of your HTTP Request sampler(s) to Java. The fastest and the easiest way of doing this is using HTTP Request Defaults.
If you're using .p12 keystores you will need an extra line in the system.properties file like:
javax.net.ssl.keyStoreType=pkcs12
JMeter restart is required to pick the properties up.
See How to Set Your JMeter Load Test to Use Client Side Certificates article for more information.

Wanted: Instructions for adding client-node encryption enabled Cassandra cluster to DataStax OpsCenter 5.1.0

I have a Cassandra cluster with client-node encryption enabled. I am trying to add this cluster to an instance of OpsCenter 5.1.0, but it is not able to connect to the cluster. The log file seems to complain about not being able to verify the SSL certificate:
INFO: Starting factory <opscenterd.ThriftService.NoReconnectCassandraClientFactory instance at 0x7f2ce05c8638>
2015-06-10 15:09:46+0000 [] WARN: Unable to verify ssl certificate.
2015-06-10 15:09:46+0000 [] Unhandled Error
Traceback (most recent call last):
File "/opt/opscenter-5.1.0/lib/py-debian/2.7/amd64/twisted/python/log.py", line 84, in callWithLogger
return callWithContext({"system": lp}, func, *args, **kw)
File "/opt/opscenter-5.1.0/lib/py-debian/2.7/amd64/twisted/python/log.py", line 69, in callWithContext
return context.call({ILogContext: newCtx}, func, *args, **kw)
File "/opt/opscenter-5.1.0/lib/py-debian/2.7/amd64/twisted/python/context.py", line 59, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/opt/opscenter-5.1.0/lib/py-debian/2.7/amd64/twisted/python/context.py", line 37, in callWithContext
return func(*args,**kw)
--- exception caught here ---
File "/opt/opscenter-5.1.0/lib/py-debian/2.7/amd64/twisted/internet/epollreactor.py", line 217, in _doReadOrWrite
why = selectable.doRead()
File "/opt/opscenter-5.1.0/lib/py-debian/2.7/amd64/twisted/internet/tcp.py", line 137, in doRead
return Connection.doRead(self)
File "/opt/opscenter-5.1.0/lib/py-debian/2.7/amd64/twisted/internet/tcp.py", line 452, in doRead
data = self.socket.recv(self.bufferSize)
File "build/lib/python2.7/site-packages/opscenterd/SslUtils.py", line 12, in ssl_simple_verifyCB
opscenterd.Utils.SSLVerifyException: SSL certificate invalid
My question is: what are the step-by-step instructions for adding a client-node encrypted cluster to OpsCenter?
Which .pem and .keystore files are needed exactly, and how do I get hold of them?
The DataStax documentation on this topic is not detailed enough and therefore not really helpful. I assume some people out there have managed to set this up successfully, and I am sure that a detailed explanation would be appreciated by many.
One thing to note here: although the docs do mention generating a key per node, in practice this isn't very scalable. In most systems it is common to create one keystore with the required keys and certificate(s) and then use it across all the nodes in your cluster and in your client applications as needed. You export the certificate from this keystore and use it for OpsCenter. OpsCenter is (as far as SSL is concerned) an SSL client like any other client.
So you have to export your key from your Java keystore, convert it to .pem format, and use that for the opscenterd process. The agents are Java-based, so they can use the Java keystore directly. The DataStax docs are there, but they're a bit fragmented, so it's a question of looking in the right places :-)
I'm going to use the latest OpsCenter docs here as a reference. I'm assuming you are only using SSL between OpsCenter and Cassandra, and between the OpsCenter agents and Cassandra.
Prepping the server certificates:
https://docs.datastax.com/en/cassandra/2.1/cassandra/security/secureSSLCertificates_t.html
Configuring client to node SSL:
https://docs.datastax.com/en/cassandra/2.1/cassandra/security/secureSSLClientToNode_t.html
using cqlsh with SSL (optional):
https://docs.datastax.com/en/cassandra/2.1/cassandra/security/secureCqlshSSL_t.html
To convert the key to a pem format see step 7 here:
https://docs.datastax.com/en/latest-opsc/opsc/online_help/opscAddingCluster_t.html
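If you'd rather script that conversion than run the keytool/openssl steps by hand, here is a minimal sketch using the third-party pyjks package; note this is an alternative I'm suggesting, not what the docs describe, and the keystore path and password are just the example values used further below:
import base64
import textwrap

import jks  # third-party: pip install pyjks

# Assumption: the example keystore path and password used elsewhere in this answer.
ks = jks.KeyStore.load('/etc/dse/keystore', 'datastax')

with open('/etc/dse/key.pem', 'w') as out:
    for alias, entry in ks.private_keys.items():
        # Each chain element is a (type, DER bytes) tuple; wrap it as a PEM block.
        for _, der_bytes in entry.cert_chain:
            b64 = base64.b64encode(der_bytes).decode('ascii')
            out.write('-----BEGIN CERTIFICATE-----\n')
            out.write('\n'.join(textwrap.wrap(b64, 64)) + '\n')
            out.write('-----END CERTIFICATE-----\n')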
Examples
Note: all these examples assume 1-way SSL, with a key generated in a file called /etc/dse/keystore and the certificate in a file called /etc/dse/truststore.
To be honest, I've never had much luck adding SSL-enabled clusters directly in the OpsCenter UI. I've always found creating the cluster .conf file and the agent address.yaml files by hand far quicker and easier.
Note that the SSL files (truststore, key.pem, etc.) need to be present on every local machine that uses them.
Example agent /var/lib/datastax-agent/conf/address.yaml file (note that use_ssl controls SSL between OpsCenter and the agents, which we're not using here):
stomp_interface: 192.168.56.29
use_ssl: 0
# ssl_keystore settings if using ssl
ssl_keystore: /etc/dse/truststore
ssl_keystore_password: datastax
Example OpsCenter /etc/opscenter/clusters/<cluster_name>.conf file:
[jmx]
username =
password =
port = 7199
[kerberos_client_principals]
[kerberos]
[agents]
ssl_keystore = /etc/dse/truststore
ssl_keystore_password = datastax
[kerberos_hostnames]
[kerberos_services]
[cassandra]
ssl_ca_certs = /etc/dse/key.pem
ssl_validate = False
seed_hosts = 192.168.56.22
Other tips
When I'm troubleshooting SSL connections in DSE / Cassandra, I'll strip out all the SSL and get the cluster working normally first; then I'll configure SSL one step at a time: node-to-node SSL, then client-to-node, then OpsCenter, and so on. Debugging all the SSL errors is not for the faint-hearted!
Links
Other doc links you might find useful:
https://docs.datastax.com/en/opscenter/5.2/opsc/configure/opscConnectionConfig_r.html
https://docs.datastax.com/en/opscenter/5.2/opsc/configure/agentAddressConfiguration.html