I have JupyterHub installed on my server, and until now I was logging in without an SSL certificate and everything worked fine. Now I have an SSL certificate (.cer), I have generated a key from it, and I have set c.JupyterHub.ssl_cert = '/opt/mycert.cer' and
c.JupyterHub.ssl_key = '/opt/mykey.pem' in the config file. But when I start JupyterHub I get the error below.
[I 2021-03-23 09:50:15.434 JupyterHub proxy:646] Starting proxy @ http://10.203.6.43:8080/
_tls_common.js:113
c.context.setCert(cert);
^
Error: error:0906D06C:PEM routines:PEM_read_bio:no start line
at Object.createSecureContext (_tls_common.js:113:17)
at Server (_tls_wrap.js:868:27)
at new Server (https.js:62:14)
at Object.createServer (https.js:84:10)
at new ConfigurableProxy (/opt/anaconda3/lib/node_modules/configurable-http-proxy/lib/configproxy.js:223:32)
at Object.<anonymous> (/opt/anaconda3/lib/node_modules/configurable-http-proxy/bin/configurable-http-proxy:333:13)
at Module._compile (internal/modules/cjs/loader.js:688:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:699:10)
at Module.load (internal/modules/cjs/loader.js:598:32)
at tryModuleLoad (internal/modules/cjs/loader.js:537:12)
[C 2021-03-23 09:50:16.537 JupyterHub app:2517] Failed to start proxy
Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.8/site-packages/jupyterhub/app.py", line 2515, in start
await self.proxy.start()
File "/opt/anaconda3/lib/python3.8/site-packages/jupyterhub/proxy.py", line 673, in start
_check_process()
File "/opt/anaconda3/lib/python3.8/site-packages/jupyterhub/proxy.py", line 669, in _check_process
raise e from None
RuntimeError: Proxy failed to start with exit code 1
configurable-http-proxy --ip 10.203.6.43 --port 8080
10:05:21.706 [ConfigProxy] warn: REST API is not authenticated.
10:05:21.713 [ConfigProxy] info: Proxying http://10.203.6.43:8080 to (no default)
10:05:21.713 [ConfigProxy] info: Proxy API at http://localhost:8081/api/routes
I have generated the key using the command below:
openssl x509 -inform der -in /opt/mycert.cer -pubkey -noout > /opt/mykey.pem
After looking at similar issues online, I still can't get past this. Can someone kindly help me with this error?
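For reference, the traceback fails inside setCert(), and "PEM_read_bio:no start line" usually means the file handed to the proxy is not PEM-encoded; a .cer file is often DER. Note also that the openssl command above extracts only the certificate's public key, while c.JupyterHub.ssl_key expects the matching private key, which cannot be derived from the certificate. A minimal sketch for converting the DER certificate to PEM with Python's cryptography package, reusing the paths from the question (the output path is an assumption, and it only works if the .cer really is DER-encoded):

from cryptography import x509
from cryptography.hazmat.primitives.serialization import Encoding

# Load the DER-encoded certificate and re-serialize it as PEM,
# the format the proxy's TLS code expects.
with open('/opt/mycert.cer', 'rb') as f:
    cert = x509.load_der_x509_certificate(f.read())

with open('/opt/mycert.pem', 'wb') as f:  # hypothetical output path
    f.write(cert.public_bytes(Encoding.PEM))

c.JupyterHub.ssl_cert would then point at the PEM file, and c.JupyterHub.ssl_key at the private key that belongs to the certificate.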
Related
I have a GCP Dataproc cluster, and I'm trying to deploy a PySpark job that produces to a topic using SSL.
The PEM files are stored in the bucket gs://dataproc_kafka_code/code, and I'm accessing them in the code shown below.
However, the code is not able to find the PEM files; the error is shown below:
%3|1638738651.097|SSL|rdkafka#producer-1| [thrd:app]: error:02001002:system library:fopen:No such file or directory: fopen('gs://dataproc_kafka_code/code/caroot.pem','r')
%3|1638738651.097|SSL|rdkafka#producer-1| [thrd:app]: error:2006D080:BIO routines:BIO_new_file:no such file
Traceback (most recent call last):
File "/tmp/my-job6/KafkaProducer.py", line 21, in <module>
producer = Producer(conf)
cimpl.KafkaException: KafkaError{code=_INVALID_ARG,val=-186,str="Failed to create producer: ssl.ca.location failed: error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib"}
Code:
from confluent_kafka import Producer
kafkaBrokers='<host>:<port>'
# CA Root certificate ca.crt
caRootLocation='gs://dataproc_kafka_code/code/caroot.pem'
# user public (user.crt)
certLocation='gs://dataproc_kafka_code/code/my-bridge-user-crt.pem'
# user.key
keyLocation='gs://dataproc_kafka_code/code/user-with-certs.pem'
password='<password>'
conf = {'bootstrap.servers': kafkaBrokers,
        'security.protocol': 'SSL',
        'ssl.ca.location': caRootLocation,
        'ssl.certificate.location': certLocation,
        'ssl.key.location': keyLocation,
        'ssl.key.password': password
}
topic = 'my-topic'
producer = Producer(conf)
for n in range(100):
    producer.produce(topic, key=str(n), value=" val -> "+str(n*(-1)) + " on dec 5 from dataproc ")
producer.flush()
What needs to be done to fix this?
Also, is this the right way to give the code access to the SSL certs?
Thanks in advance!
From the error
fopen: No such file or directory: fopen('gs://dataproc_kafka_code/code/caroot.pem','r'), it seems the Producer library is trying to open the file on the local filesystem; it cannot read directly from a gs:// URL.
There are a couple of ways to fix this: download the keys/certificates to local files and then point the conf at those local paths:
Download them using the Cloud Storage client API (https://googleapis.dev/python/storage/latest/client.html); see the sketch below.
Or use gsutil (which comes preinstalled on the VM) to download the files: https://cloud.google.com/storage/docs/gsutil/commands/cp
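For the first option, a minimal sketch with the google-cloud-storage client, using the bucket and object names from the question (the local /tmp paths are an assumption):

from google.cloud import storage

# Copy the PEM files from GCS to the local filesystem so that
# librdkafka can open them with a plain fopen().
client = storage.Client()
bucket = client.bucket('dataproc_kafka_code')

for name in ['caroot.pem', 'my-bridge-user-crt.pem', 'user-with-certs.pem']:
    bucket.blob('code/' + name).download_to_filename('/tmp/' + name)

The producer conf would then point at the local copies, e.g. 'ssl.ca.location': '/tmp/caroot.pem'.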
On a specific URL I am getting ssl.SSLError: [SSL: WRONG_SIGNATURE_TYPE] wrong signature type using Python's requests library.
I have tried the answers at "python requests : SSL error during requests?" but they do not help.
Not even verify=False from, e.g., "Python Requests throwing SSLError" helps:
url = 'https://infoteka.bg.ac.rs/ojs/index.php/Infoteka/issue/view/20'
import requests
response = requests.get(url, verify=False)
gives
Traceback (most recent call last):
File "/usr/local/lib/python3.8/dist-packages/urllib3/connectionpool.py", line 699, in urlopen
...
File "/usr/lib/python3.8/ssl.py", line 1309, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: WRONG_SIGNATURE_TYPE] wrong signature type (_ssl.c:1131)
I have the most recent distribution package of openssl:
$ openssl version
OpenSSL 1.1.1f 31 Mar 2020
Though I note that upgrading to 1.1.1g, as suggested at https://stackoverflow.com/a/63387377/589165, could perhaps solve the problem.
My web browsers do not have a problem with the particular page (as far as I can tell), while both wget and curl fail in the same way as requests.
What are the options here? Would it be necessary to manually install a newer version of openssl? Or are there other possible workarounds?
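One workaround that is often suggested for WRONG_SIGNATURE_TYPE (Debian/Ubuntu OpenSSL builds default to SECLEVEL=2, which rejects the older signature algorithm this server appears to use in the handshake) is to lower the security level for just this session via a custom transport adapter. A sketch, not a guaranteed fix:

import ssl
import requests
from requests.adapters import HTTPAdapter

class LowSecLevelAdapter(HTTPAdapter):
    # Transport adapter that relaxes OpenSSL's security level to 1
    # for the connections made through it.
    def init_poolmanager(self, *args, **kwargs):
        ctx = ssl.create_default_context()
        ctx.set_ciphers('DEFAULT@SECLEVEL=1')
        kwargs['ssl_context'] = ctx
        return super().init_poolmanager(*args, **kwargs)

url = 'https://infoteka.bg.ac.rs/ojs/index.php/Infoteka/issue/view/20'
session = requests.Session()
session.mount('https://', LowSecLevelAdapter())
response = session.get(url)

Unlike verify=False, this keeps certificate verification enabled and only lowers OpenSSL's minimum-security requirements for this one session.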
Trying to update dependencies on a Phoenix app by running mix deps.get.
The only stdout is:
07:20:21.642 [error] SSL: :certify: ssl_handshake.erl:1507:Fatal error: certificate expired
07:20:21.674 [error] SSL: :certify: ssl_handshake.erl:1507:Fatal error: certificate expired
Registry update failed (http_error)
{:failed_connect, [{:to_address, {'repo.hex.pm', 443}}, {:inet, [:inet], {:tls_alert, 'certificate expired'}}]}
** (Mix) Failed to fetch registry
I have updated Elixir and Erlang with brew update, but that hasn't helped.
Since the certificate for repo.hex.pm is not actually expired but was issued very recently, the error message might be caused by a wrong clock on your computer. Make sure your system time is correct and try again.
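As a quick sanity check, you can print the validity window of the certificate repo.hex.pm actually serves and compare it with your local clock; a small sketch (if the clock is far off, the handshake itself will already fail, which points to the same cause):

import socket
import ssl
from datetime import datetime, timezone

# Fetch the certificate served by repo.hex.pm and print its validity
# window next to the local time, to rule out a skewed system clock.
ctx = ssl.create_default_context()
with socket.create_connection(('repo.hex.pm', 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname='repo.hex.pm') as tls:
        cert = tls.getpeercert()

print('notBefore :', cert['notBefore'])
print('notAfter  :', cert['notAfter'])
print('local time:', datetime.now(timezone.utc))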
I am trying to configure the Puppet server and agent to use an external CA: a self-signed root CA, with the master and the agent each having their own SSL certificate.
Configurations in puppetserver:
/etc/puppetlabs/puppetserver/bootstrap.cfg
# To enable the CA service, leave the following line uncommented
# puppetlabs.services.ca.certificate-authority-service/certificate-authority-service
# To disable the CA service, comment out the above line and uncomment the line below
puppetlabs.services.ca.certificate-authority-disabled-service/certificate-authority-disabled-service
/etc/puppetlabs/puppetserver/conf.d/webserver.conf
ssl-cert : /usr/cachelogic/var/device-pki/dev_cert.pem
ssl-key : /usr/cachelogic/var/device-pki/dev_key.pem
ssl-ca-cert : /usr/cachelogic/var/device-pki/CAcert.pem
ssl-crl-path : /etc/puppetlabs/puppet/ssl/crl.pem
puppetserver service was started successfully.
Configurations in puppet agent:
/etc/puppetlabs/puppet/puppet.conf
hostcert = /usr/cachelogic/var/device-pki/dev_cert.pem
hostprivkey = /usr/cachelogic/var/device-pki/dev_key.pem
localcacert = /usr/cachelogic/var/device-pki/CAcert.pem
When I start the puppet agent, this is the error message that I get:
Debug: Using cached certificate for ca
Debug: Creating new connection for https://cp3.zzz152d1.cdn:8140
Debug: Using cached certificate for ca
Error: Could not run: stack level too deep
/opt/puppetlabs/puppet/lib/ruby/vendor_ruby/puppet.rb:63
Any pointers on debugging this issue will be helpful. Thanks.
I have a Cassandra cluster with client-to-node encryption enabled. I am trying to add this cluster to an instance of OpsCenter 5.1.0, but it is not able to connect to the cluster. The log file seems to complain about not being able to verify the SSL certificate:
INFO: Starting factory <opscenterd.ThriftService.NoReconnectCassandraClientFactory instance at 0x7f2ce05c8638>
2015-06-10 15:09:46+0000 [] WARN: Unable to verify ssl certificate.
2015-06-10 15:09:46+0000 [] Unhandled Error
Traceback (most recent call last):
File "/opt/opscenter-5.1.0/lib/py-debian/2.7/amd64/twisted/python/log.py", line 84, in callWithLogger
return callWithContext({"system": lp}, func, *args, **kw)
File "/opt/opscenter-5.1.0/lib/py-debian/2.7/amd64/twisted/python/log.py", line 69, in callWithContext
return context.call({ILogContext: newCtx}, func, *args, **kw)
File "/opt/opscenter-5.1.0/lib/py-debian/2.7/amd64/twisted/python/context.py", line 59, in callWithContext
return self.currentContext().callWithContext(ctx, func, *args, **kw)
File "/opt/opscenter-5.1.0/lib/py-debian/2.7/amd64/twisted/python/context.py", line 37, in callWithContext
return func(*args,**kw)
--- exception caught here ---
File "/opt/opscenter-5.1.0/lib/py-debian/2.7/amd64/twisted/internet/epollreactor.py", line 217, in _doReadOrWrite
why = selectable.doRead()
File "/opt/opscenter-5.1.0/lib/py-debian/2.7/amd64/twisted/internet/tcp.py", line 137, in doRead
return Connection.doRead(self)
File "/opt/opscenter-5.1.0/lib/py-debian/2.7/amd64/twisted/internet/tcp.py", line 452, in doRead
data = self.socket.recv(self.bufferSize)
File "build/lib/python2.7/site-packages/opscenterd/SslUtils.py", line 12, in ssl_simple_verifyCB
opscenterd.Utils.SSLVerifyException: SSL certificate invalid
My question is: what are the step-by-step instructions for adding a client-to-node encrypted cluster to OpsCenter?
Which .pem and .keystore files are needed exactly, and how do I get hold of them?
The DataStax documentation on that topic is not detailed enough and therefore not really helpful. I assume some people out there have managed to set this up successfully, and I am sure that a detailed explanation would be appreciated by many.
One thing to note here: although the docs do mention generating a key per node, in practice this isn't very scalable. In most systems it is common to create one keystore with the required keys and certificate(s) and then use it across all the nodes in your cluster and in your client applications as needed. You export the certificate from this keystore and use it for OpsCenter; as far as SSL is concerned, OpsCenter is an SSL client like any other.
So you have to export your key from your Java keystore, convert it to .pem format, and use that for the opscenterd process. The agents are Java based, so they can use the Java keystore directly. The DataStax docs are there, but they're a bit fragmented, so it's a question of looking in the right places :-)
I'm going to use the latest OpsCenter docs here as a reference. I'm assuming you are only using SSL between OpsCenter and Cassandra, and between the OpsCenter agents and Cassandra.
Prepping the server certificates:
https://docs.datastax.com/en/cassandra/2.1/cassandra/security/secureSSLCertificates_t.html
Configuring client to node SSL:
https://docs.datastax.com/en/cassandra/2.1/cassandra/security/secureSSLClientToNode_t.html
Using cqlsh with SSL (optional):
https://docs.datastax.com/en/cassandra/2.1/cassandra/security/secureCqlshSSL_t.html
To convert the key to a pem format see step 7 here:
https://docs.datastax.com/en/latest-opsc/opsc/online_help/opscAddingCluster_t.html
Examples
Note: all these examples assume 1-way SSL, with the key generated in a file called /etc/dse/keystore and the certificate in a file called /etc/dse/truststore.
To be honest, I've never had much luck adding SSL-enabled clusters directly in the OpsCenter UI. I've always found creating the cluster's .conf file and the agents' address.yaml files by hand far quicker and easier.
Note that the SSL files (truststore, key.pem, etc.) need to be present on every machine that uses them.
Example agent /var/lib/datastax-agent/conf/address.yaml file (note that use_ssl controls SSL between OpsCenter and the agents, which we're not using here):
stomp_interface: 192.168.56.29
use_ssl: 0
# ssl_keystore settings if using ssl
ssl_keystore: /etc/dse/truststore
ssl_keystore_password: datastax
Example OpsCenter /etc/opscenter/clusters/<cluster_name>.conf file:
[jmx]
username =
password =
port = 7199
[kerberos_client_principals]
[kerberos]
[agents]
ssl_keystore = /etc/dse/truststore
ssl_keystore_password = datastax
[kerberos_hostnames]
[kerberos_services]
[cassandra]
ssl_ca_certs = /etc/dse/key.pem
ssl_validate = False
seed_hosts = 192.168.56.22
Other tips etc
If I'm troubleshooting SSL connections in DSE / Cassandra, I always strip out all the SSL and get the cluster working normally first, then configure SSL one step at a time: turning on node-to-node SSL, then client-to-node, then OpsCenter, and so on. Debugging all the SSL errors is not for the faint-hearted!
Links
Other doc links you might find useful:
https://docs.datastax.com/en/opscenter/5.2/opsc/configure/opscConnectionConfig_r.html
https://docs.datastax.com/en/opscenter/5.2/opsc/configure/agentAddressConfiguration.html