Could anyone connect Cloud SQL with a Cloud SQL Proxy pod over SSL?

I'm trying to set up a very basic WordPress deployment as explained in this document: https://cloud.google.com/kubernetes-engine/docs/tutorials/persistent-disk
And the Cloud SQL Proxy is giving me certificate errors:
esonika@cloudshell:~ (esonika)$ k logs wordpress-8d7998ccd-xnfn9 -c cloudsql-proxy
2022/12/30 10:43:38 using credential file for authentication; email=cloudsql-proxy@esonika.iam.gserviceaccount.com
2022/12/30 10:43:38 Listening on 127.0.0.1:3306 for esonika:europe-west9:mysql-wordpress-instance
2022/12/30 10:43:38 Ready for new connections
2022/12/30 10:44:01 New connection for "esonika:europe-west9:mysql-wordpress-instance"
2022/12/30 10:44:02 couldn't connect to "esonika:europe-west9:mysql-wordpress-instance": x509: certificate is valid for 38-968d77ed-a928-4b25-97d3-5451b5f3c670.europe-west9.sql.goog, not esonika:mysql-wordpress-instance
I don't know why a certificate such as "38-968d77ed-a928-4b25-97d3-5451b5f3c670.europe-west9.sql.goog" is created, or where.
I tried resetting the SSL configuration and it didn't work.

Usually, if you don't explicitly enable SSL on your Cloud SQL instance, the communication with the database is in plain text.
The EXCEPTION is when you create a tunnel with the Cloud SQL Proxy. In that case a secure connection is created and the data is encrypted. The encryption is ensured by an ephemeral certificate that the proxy creates automatically.

Here is a doc that might help you connect to Cloud SQL from GKE using a sidecar pod.

Thanks. The document doesn't list anything I haven't already tried. I think there is an internal issue with cloud_sql_proxy, which is why I decided to switch Cloud SQL to a private network only; the WordPress pod now connects directly to the Cloud SQL private IP.

I was running into the same issue around the time you posted this question. I also reset the SSL configuration on the DB like you did. My solution was upgrading the proxy from version 1.11 to 1.33.2, which resolved all of the x509 errors. No clue why the old version suddenly stopped working.
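If anyone else hits this with a sidecar setup, bumping the image tag on the proxy container is usually all that is needed. A minimal sketch, assuming the Deployment is called wordpress, the sidecar container is called cloudsql-proxy (as in the logs above), and the sidecar uses the gcr.io/cloudsql-docker/gce-proxy image:
kubectl set image deployment/wordpress cloudsql-proxy=gcr.io/cloudsql-docker/gce-proxy:1.33.2
After the rollout, the new pods should run the newer proxy and the x509 errors from the old version should stop.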

Related

How to configure ssl certificates and keys for confluent kafka python client?

I am using confluent-kafka. I have to retrieve messages from the Kafka broker over SSL. I've configured the broker with these properties (partial):
listeners=SSL://:9092
security.inter.broker.protocol = SSL
The console consumer/producer seem to be working fine with this SSL configuration.
For the console consumer/producer, I am using the following configuration:
security.protocol=SSL
ssl.truststore.location=/home/ubuntu/kafka1.server.truststore.jks
ssl.truststore.password=<intentionally>
ssl.keystore.location=/home/ubuntu/kafka1.server.keystore.jks
ssl.keystore.password=<intentionally>
ssl.key.password=<intentionally>
So from the console perspective, things are working fine.
I am having trouble figuring out how to connect to the broker using the Python client consumer (with SSL enabled).
The documentation mentions these 3 properties to be set:
ssl.ca.location
ssl.certificate.location
ssl.key.location
But it does not mention where or how to get the data for these.
Please help me out. Thanks.
To configure these properties, you have to create x509-compliant certificates (PEM files) using OpenSSL instead of keytool. See this page, which explains how to create the x509 certificates.
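Since the console clients already work with JKS stores, one way to get the PEM data the Python client needs is to export it from those stores. A sketch only; the store paths come from the question, while the CARoot alias, the output file names and the passwords are assumptions you will need to adapt:
keytool -exportcert -alias CARoot -rfc -keystore /home/ubuntu/kafka1.server.truststore.jks -file /home/ubuntu/ca_cert.pem
keytool -importkeystore -srckeystore /home/ubuntu/kafka1.server.keystore.jks -destkeystore /home/ubuntu/client.p12 -deststoretype PKCS12
openssl pkcs12 -in /home/ubuntu/client.p12 -nokeys -out /home/ubuntu/client_cert.pem
openssl pkcs12 -in /home/ubuntu/client.p12 -nodes -nocerts -out /home/ubuntu/client_key.pem
Those PEM files then map directly onto the three confluent-kafka (Python) properties. A minimal consumer sketch, with the broker address, group id and topic as placeholders:
from confluent_kafka import Consumer

consumer = Consumer({
    'bootstrap.servers': 'broker-host:9092',   # placeholder, point at your broker
    'group.id': 'ssl-test-group',              # placeholder group id
    'security.protocol': 'SSL',
    # CA certificate that signed the broker certificate (exported from the truststore)
    'ssl.ca.location': '/home/ubuntu/ca_cert.pem',
    # client certificate and key (extracted from the keystore); only required
    # if the broker enforces client authentication (ssl.client.auth=required)
    'ssl.certificate.location': '/home/ubuntu/client_cert.pem',
    'ssl.key.location': '/home/ubuntu/client_key.pem',
    # 'ssl.key.password': '...',               # only if the extracted key is still encrypted
})

consumer.subscribe(['my-topic'])               # placeholder topic
msg = consumer.poll(10.0)                      # wait up to 10 seconds for a message
if msg is not None and msg.error() is None:
    print(msg.value())
consumer.close()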

How to make an ssl based tcp connection to memsql in Go

I'm trying to set up an SSL-based TCP connection to MemSQL using Go.
The applications/services are running as OpenShift pods and are written in Go.
Can I have one-way authentication to MemSQL from the service?
Do I need to enable any port in MemSQL to listen for TLS-based SSL connections?
Apart from updating the DSN in my service to tls=true, what are the alternatives for customising this configuration?
Can someone suggest an efficient way to connect to MemSQL with SSL enabled?
I've followed the MemSQL documentation and installed the certificates on the MemSQL master and aggregator, and enabled the permission check, but I'm still able to get into MemSQL without supplying the rootCertificate at login.
Currently the connection is established by following code:
db, err := sql.Open("mysql", DSN) and
DSN=root:#tcp(IPAddress:3306)/riodev?interpolateParams=true&parseTime=true
Can you clarify what your question is? The SSL authentication is one-way: the client verifies the server, and the server verifies the client via its login credentials.
No, MemSQL uses the same port for SSL and non-SSL connections.
You may also need to configure the SSL certificate, as described in https://github.com/go-sql-driver/mysql#tls.
Most client libraries support connecting with SSL.
I've followed the MemSQL documentation and installed the certificates on the MemSQL master and aggregator, and enabled the permission check, but I'm still able to get into MemSQL without supplying the rootCertificate at login.
Is it possible the connection is already using SSL? It may be using SSL-preferred mode without verifying the certificate.
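One quick way to check is to require a verified TLS connection from the Go side. A sketch of the DSN only, assuming the go-sql-driver/mysql driver already used above and a server certificate signed by a CA in the system trust store (otherwise you would register a custom tls.Config containing your CA and reference it with tls=<name>):
DSN=root:@tcp(IPAddress:3306)/riodev?interpolateParams=true&parseTime=true&tls=true
With tls=skip-verify the driver still encrypts the connection but does not verify the server certificate, which would match the "SSL-preferred without verification" behaviour described above; tls=true makes the handshake fail if the certificate cannot be verified.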

Removing Rogue SSL Certs on AWS

I have a client site set up on AWS with multiple servers running HTTPS behind an Elastic Load Balancer. At some point, someone from the client's team attempted to update the SSL cert by installing a new one directly on one of the servers (instead of on the ELB).
I was able to upload a new cert to the ELB, but when traffic is directed towards the server with the improperly installed cert, it triggers a security warning.
No one seems to be able to say who attempted this install, how they went about it, or where they installed it.
What's the best way to go about finding and removing it?
Thanks,
ty
If it's installed on the server, it has very little to do with AWS. I see you tagged the question with apache, so I assume the server is running Apache Web Server. You will have to connect to that server and remove the SSL settings from the Apache configuration, just like you would with an Apache install anywhere else.
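As a sketch (configuration paths differ between distributions, so treat these paths as examples), you can usually locate where the rogue certificate was wired in by searching the Apache configuration for the mod_ssl directives, then disable or correct that virtual host and reload Apache:
grep -RiE "SSLEngine|SSLCertificateFile" /etc/apache2 /etc/httpd 2>/dev/null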

Connect to on premises DB2 server using Bluemix secure gateway and TLS

I have been trying to connect my Node.js public Bluemix app to a DB2 server that sits behind a firewall, using the Bluemix Secure Gateway service. When I use plain TCP, everything works fine. I am now trying to use the TLS: Mutual Auth option and I can't make it work.
I followed this tutorial (https://developer.ibm.com/bluemix/2015/04/17/securing-destinations-tls-bluemix-secure-gateway/) and the tunnel seems to be created (I can see that in the logs of the gateway client), but no data comes through.
In the options object passed to tls.connect, if I set rejectUnauthorized: true I get "UNABLE_TO_GET_ISSUER_CERT", even though I am using the generated certificates of the destination. If I set rejectUnauthorized: false, the connection seems to open but nothing comes through; it just hangs. In both cases I am using the same code that works when TLS is not set up, based on the ibm_db Node driver for DB2.
Has anyone experience with this? I have been struggling with it for some days now and any help would be much appreciated.
After some discussion, we determined that part of the problem was explicitly specifying only a piece of the cert chain in the CA option, causing the UNABLE_TO_GET_ISSUER_CERT error to be emitted. This can be resolved either by adding the full chain to the CA or by not explicitly adding anything to the CA (as the cert is publicly signed).
An underlying issue that was identified is that the ibm_db node driver for DB2 does not appear to work as expected for TLS connections.
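If it helps anyone debugging the same thing: you can inspect exactly which chain the Secure Gateway endpoint presents (and therefore what would need to go into the ca option) with openssl; a sketch, with the host and port as placeholders for your gateway destination:
openssl s_client -connect <secure-gateway-host>:<port> -showcerts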

How to connect Google Apps Script JDBC to Amazon RDS with SSL

Google launched support for SSL connections in the JDBC service, adding three new connection parameters to support this feature: _serverSslCertificate, _clientSslCertificate, and _clientSslKey. The documentation is available here:
https://developers.google.com/apps-script/reference/jdbc/jdbc#getConnection(String,Object)
When a database is created in Amazon RDS, we can add SSL support to it:
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html
For example, if we create a MariaDB database, we just have to download the following certificate: rds-ca-2015-root.pem
And access the database with the following command:
mysql -h mymariadbinstance.abcd1234.rds-us-east-1.amazonaws.com --ssl-ca=[full path]rds-combined-ca-bundle.pem --ssl-verify-server-cert
And require SSL for a specific user:
GRANT USAGE ON *.* TO 'encrypted_user'@'%' REQUIRE SSL
So, how can we connect with SSL to an Amazon database using the GAS JDBC API?
The question isn't entirely clear to me, but it looks like you are asking how to pass the server cert when initiating the JDBC connection. The API has a _serverSslCertificate argument that should do just this.
https://developers.google.com/apps-script/reference/jdbc/jdbc#getConnection(String,Object)
Try it out and let us know if you still have issues.