I have a verified SSL certificate (I got it when I bought hosting). It consists of four parts: 1. private key, 2. certificate, 3. root certificate, 4. intermediate certificate. I made two files, .key (the private key) and .crt (certificate, intermediate, root), and configured nginx. Everything looks good: my domain shows https, and https://www.sslshopper.com says that it works.
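Roughly, the .crt bundle and the nginx directives look like this (file names and paths below are placeholders, not the exact ones I used):
# concatenate leaf, intermediate and root certificates into the .crt that nginx serves
cat domain.crt intermediate.crt root.crt > /etc/nginx/ssl/domain.chained.crt
# and in the nginx server block:
#   ssl_certificate     /etc/nginx/ssl/domain.chained.crt;
#   ssl_certificate_key /etc/nginx/ssl/domain.key;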
But when I set the Telegram bot webhook:
import requests

def start_request():
    url = 'https://api.telegram.org/bot{token}/{method}'.format(
        token='myToken',
        method='setWebhook'
    )
    data = {
        'url': 'MyDomain',
        'certificate': open('myCertificate', 'r')
    }
    r = requests.post(url, data=data)
the webhook status is always:
result
url "myDomain"
has_custom_certificate false
pending_update_count 5
last_error_date 1515041749
last_error_message "Wrong response from the webhook: 403 Forbidden"
and the nginx log says:
149.154.167.214 - - [04/Jan/2018:07:07:00 +0300] "POST myDomain" 403 997 "-" "-"
Is it a certificate problem?
Which certificate (or which part of it), and in what format, should I send to Telegram?
I think setWebhook succeeded. You can use this Android application to check.
BTW, you can refer to this guide and use curl to debug it yourself.
In my case I took the root certificate (only the root) and converted it to .der and then to .pem:
openssl x509 -in root.crt -outform der -out root.der
openssl x509 -in root.der -inform der -outform pem -out root.pem
After that I set the webhook with the "Awesome Telegram Bot" Android application, using the root.pem certificate.
And using the getWebhookInfo method I get:
url "https://myDomain"
has_custom_certificate true
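If you prefer curl (as the guide above suggests), a rough equivalent of the same setWebhook call is below; the token, domain and certificate path are placeholders:
curl -F "url=https://myDomain" \
     -F "certificate=@root.pem" \
     "https://api.telegram.org/bot<myToken>/setWebhook"
# and to check the result:
curl "https://api.telegram.org/bot<myToken>/getWebhookInfo"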
Related
I am developing a mutating webhook with kind and, as I understand it, the API endpoint should be https. The certificate and key of the API server should be signed with the CA of the cluster itself, to get around the issue of self-signed certificates. For that, the following are the recommended steps:
Create key - openssl genrsa -out app.key 2048
Create CSR - openssl req -new -key app.key -subj "/CN=${CSR_NAME}" -out app.csr -config csr.conf
Create CSR object in kubernetes - kubectl create -f csr.yaml
Approve CSR - kubectl certificate approve csr_name
Extract PEM - kubectl get csr app.csr -o jsonpath='{.status.certificate}' | openssl base64 -d -A -out app.pem
Notes
1. The csr.conf has the details needed to set up the CSR successfully.
2. The csr.yaml is written for the Kubernetes kind CertificateSigningRequest (a rough sketch follows below these notes).
3. The csr_name is defined in the CertificateSigningRequest.
4. The spec.request in csr.yaml is set to the output of cat app.csr | base64 | tr -d '\n'.
5. The app.pem and app.key are used to set up the https endpoint.
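For reference, a rough sketch of what csr.yaml could look like, assuming the older certificates.k8s.io/v1beta1 API used by the referenced guides (newer clusters require certificates.k8s.io/v1 plus an explicit signerName); the usages listed are an assumption for a serving certificate:
# hypothetical csr.yaml, generated from app.csr as described in the notes
cat <<EOF > csr.yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: ${CSR_NAME}
spec:
  request: $(cat app.csr | base64 | tr -d '\n')
  usages:
  - digital signature
  - key encipherment
  - server auth
EOF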
The endpoint is definitely reachable but errors out with:
Internal error occurred: failed calling webhook "com.me.webhooks.demo": Post https://webhook.sidecars.svc:443/mutate?timeout=10s: x509: certificate signed by unknown authority
How do I get around the certificate signed by unknown authority issue?
References:
1. Writing a very basic kubernetes mutating admission webhook
2. Diving into Kubernetes MutatingAdmissionWebhook
It doesn't need to be signed with the cluster's CA root. It just needs to match the CA bundle in the webhook configuration.
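A minimal sketch of what that can look like with kubectl, assuming the MutatingWebhookConfiguration object is also named com.me.webhooks.demo, that it has a single webhook entry, and that the serving certificate was signed by the cluster CA recorded in the kubeconfig (adjust if a different CA signed app.pem):
# take the CA (already base64-encoded in the kubeconfig) and put it into the webhook's caBundle
CA_BUNDLE=$(kubectl config view --raw --minify \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')
kubectl patch mutatingwebhookconfiguration com.me.webhooks.demo --type='json' \
  -p='[{"op": "add", "path": "/webhooks/0/clientConfig/caBundle", "value": "'"${CA_BUNDLE}"'"}]'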
I am trying to send http GET/POST requests to applications that are hidden behind a reverse proxy. Communication with the reverse proxy is via https and the proxy requires a client certificate.
It looks like the keystore certificate (gatling.http.ssl.keyStore.file) is not used to authenticate with the reverse proxy. I assume this because:
if I request https://reverse-proxy-address without specifying a proxy, I receive an answer (basically the same as if I access the URL in a browser with the client certificate) -> the certificate is used for the request.
if I specify a proxy with http.proxy(Proxy("reverse-proxy-address", port)) and send a request to http://hidden-url, I receive an "org.asynchttpclient.exception.RemotelyClosedException: Remotely closed" (Gatling 2.3.1) or "java.io.IOException: Premature close" (Gatling 3.0.3)
I haven't found a hint on how to specify that the client certificate should be used for authentication with the reverse proxy. Maybe the client certificate is already used to authenticate with the reverse proxy and something else is not configured correctly. I don't know how to analyze this further...
I hope that someone else has already faced the same issue and knows the solution. Hints that help me dig deeper are also more than welcome!
Thanks
I was doing that with Gatling 2.x on OSX. It requires a few more steps; setting the cert path in gatling.conf is not enough.
I received CRT.pem and KEY.pem files. I created a .p12 cert based on the key pair:
openssl pkcs12 -export -in client1-crt.pem -inkey client1-key.pem -out cert.p12
Then I created a store and imported the cert into the keystore:
keytool -importkeystore -deststorepass mycert -destkeystore filename-new-keystore.jks -srckeystore cert.p12 -srcstoretype PKCS12
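To double-check the import, you can list the keystore afterwards (a quick sanity check, using the password from the command above):
keytool -list -v -keystore filename-new-keystore.jks -storepass mycert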
The next step is to set the correct path in gatling.conf (it depends on the OS).
My gatling.conf:
gatling {
  http {
    ssl {
      keyStore {
        type = "PKCS12"
        file = "/Users/lukasz/cert.p12"
        password = ""
      }
      trustStore {
        type = ""
        file = "/Library/Java/JavaVirtualMachines/jdk1.8.0_111.jdk/Contents/Home/jre/lib/security/cacerts"
        password = "changeit"
      }
    }
  }
}
This way I was able to use a custom certificate with Gatling. I'm not sure if this is a workaround or the proper way for the JVM to handle a custom certificate.
I am trying to enable client certificate verification from some PHP scripts. My server (Debian / lighttpd) has had an SSL certificate for a while, and it works.
It is configured this way:
$SERVER["socket"] == ":443" {
ssl.engine = "enable"
ssl.cipher-list = "EECDH+AESGCM:EDH+AESGCM:AES128+EECDH:AES128+EDH"
ssl.pemfile = "/etc/lighttpd/certs/mydomain.net.pem"
ssl.ca-file =
"/etc/lighttpd/certs/my.domain.ca-bundle"
ssl.dh-file = "/etc/lighttpd/certs/dhparam.pem"
ssl.ec-curve = "secp384r1"
ssl.verifyclient.activate = "enable"
ssl.verifyclient.enforce = "disable"
ssl.verifyclient.username = "SSL_CLIENT_S_DN_CN"
ssl.verifyclient.depth = 1
ssl.verifyclient.exportcert = "enable"
}
I tried to follow the pretty good tutorial here:
https://gist.github.com/mtigas/952344
The difference is that I did not create a ca.key and ca.crt, since I already have them configured with my lighttpd (they are signed by Comodo).
Now, my first doubt is what should I use to sign my client.crt?
This is what I did:
openssl genrsa -des3 -out client1.key 2048
openssl req -new -key client1.key -out client1.csr
Now at this point I need to sign it with the CA. I'd say it is the one identified as ssl.ca-file in lighttpd.conf, right? So my.domain.ca-bundle, I guess.
The issue is: how do I get a key for it?
I tried to use (with -CAkey) the private key that goes with the CSR I used to request the certificate, but I get X509_check_private_key: key values mismatch.
Which one should I use?
I checked a few tutorials and examples, but all of them assume you create your own CA.
First of all, you cannot create your own CA. A CA is a Certificate Authority that provides you with an SSL certificate. In order to get one you must submit a request, which is basically a CSR (certificate signing request). After validating your request, the CA lets you download your SSL certificate.
Next you have to import the SSL certificate on the server where you generated the CSR. While importing it, make sure you choose to import the private key as well; at the end of this step you have a .pfx file. To get the private key, all you have to do is convert the .pfx file into a PEM file, where you will find a .key file containing the private key.
This article could be helpful: https://www.namecheap.com/support/knowledgebase/article.aspx/9834/69/how-can-i-find-the-private-key-for-my-ssl-certificate
PS: Note that you should import your SSL certificate on the same server where you generated the CSR, otherwise your SSL certificate will not be attached to any private key. You can use the 'DigiCert Utility' tool to import the SSL certificate, generate the CSR, test your key, and so on.
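If it helps, the .pfx-to-PEM conversion described above can be done with openssl along these lines (file names are placeholders):
# extract the private key (unencrypted) and the certificate from the .pfx
openssl pkcs12 -in mycert.pfx -nocerts -nodes -out mydomain.key
openssl pkcs12 -in mycert.pfx -clcerts -nokeys -out mydomain.crt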
I'm using curl to request data from a corporate website using a .cer certificate that they sent me.
This is the command:
curl --header "Content-Type: text/xml;charset=UTF-8" \
  --data @bustaRequestISEE2015ConsultazioneAttestazione.xml \
  -o bustaResponseISEE2015ConsultazioneAttestazione.xml \
  --cert ./caaffabisrl.cer \
  https://istitutonazionaleprevidenzasociale.spcoop.gov.it/PD
When I run it, I get this error message:
curl: (58) could not load PEM client certificate, OpenSSL error error:0906D06C:PEM routines:PEM_read_bio:no start line, (no key found, wrong pass phrase, or wrong file format?)
Is there anybody who can help me?
Tks, Cristiano.
It is not possible to connect to a TLS server with curl using only a client certificate, without the client private key. Either they forgot to send you the private key file, or, what they sent you was not the client certificate but the server certificate for verification.
The first thing I would try is using --cacert instead of --cert. That is, tell curl that this is the server's certificate that curl can use to verify that the server is who you think it is.
You can also try removing --cert and not using --cacert, and you will probably get an error that the server is not trusted. Then add the --insecure argument and see if that works. I would not keep that argument, as then you have no proof of who you are talking to.
My guess is that it is the server cert, and that using --cacert instead of --cert will solve the problem.
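In other words, something along these lines (the same command as in the question, with --cacert swapped in; note that --cacert also expects PEM, so a DER-encoded .cer may still need converting, as the next answer explains):
curl --header "Content-Type: text/xml;charset=UTF-8" \
  --data @bustaRequestISEE2015ConsultazioneAttestazione.xml \
  -o bustaResponseISEE2015ConsultazioneAttestazione.xml \
  --cacert ./caaffabisrl.cer \
  https://istitutonazionaleprevidenzasociale.spcoop.gov.it/PD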
My guess is that your certificate file is a DER-encoded binary certificate instead of a base-64 encoded one. To convert it from binary to base-64, you can use OpenSSL:
openssl x509 -inform der -in certificate.cer -out certificate.pem
I always forget all the arguments and have the following site bookmarked, as it gives examples of how to convert pretty much any certificate format: https://www.sslshopper.com/ssl-converter.html
First, you need to determine whether you're expected to perform two-way TLS/SSL, i.e. mTLS (mutual TLS); that would typically be the reason for sending you a certificate.
If they sent the server certificate, and you can connect to the server with a browser, you can download the certificate yourself. If their server is configured to send the server certificate and CA chain, you can get the entire chain in a single request using "openssl s_client -connect [hostname:port] -showcerts". Save the certs printed to the console to a file, copying the cert blob(s) into individual cert files (cert1.crt, cert2.crt).
However, if they are expecting mTLS and are trying to give you a client certificate, the flow is: you generate a private key and CSR (certificate signing request) and send them the CSR; they sign a certificate with their CA certificate using that CSR; the cert they return then has to be paired with the private key used to generate the CSR. They should not be generating the public/private key pair and sending it over mail. The private key should be stored securely on the one system used to establish the connection.
If it's one-way (server SSL only), then your client system (assuming it's not a browser) needs a truststore file with the CA certificate chain installed and set to trusted. If the platform is Java, read Java's keytool documentation. Note: a keystore is for your system's public/private key pair; a truststore is for the CA certificates that you trust to sign public certificates your system should accept as authentic. It's worth reading any of the PKI x509 overviews by DigiCert, SSL Labs, Sectigo, etc.
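For the Java truststore case mentioned above, a hedged sketch of importing one of the certs saved from openssl s_client (the alias, file and store names are placeholders):
keytool -importcert -alias their-ca -file cert2.crt \
  -keystore truststore.jks -storepass changeit -noprompt
# confirm what ended up in the truststore
keytool -list -keystore truststore.jks -storepass changeit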
I've come across the site https://alpower.com; this site only provides its own site certificate. Because of this I can't access it properly with cURL, since the cacerts used contain only root certificates.
The site is accessible in Firefox, however. How exactly is Firefox able to verify the site's identity when cURL isn't?
Browsers cache intermediate certificates, so if the missing certificate was already provided by another site, the browser already has it and will use it. But if you use a fresh browser profile you may get the same problems as with curl, because the intermediate certificate is not cached.
This is at least how it works with Firefox. Other browsers might look into the Authority Information Access section of the certificate, and if they find the URL of the issuer certificate there, they will download that certificate to continue with the chain verification.
Most browsers use the AIA information embedded in the certificate (see the comment on browser exceptions).
To expose the URL of the CA Issuer with openssl:
openssl x509 -in "YOUR_CERT.pem" -noout -text
There is a section Authority Information Access with CA Issuers - URI which would be the "parent" certificate (intermediate or root certificate).
This can be repeated all the way up to the root CA.
In a gist:
ssl_endpoint=<ENDPOINT:443>
# first, get the endpoint cert
echo | openssl s_client -showcerts -connect $ssl_endpoint 2>/dev/null | openssl x509 -outform PEM > endpoint.cert.pem
# then extract the intermediate cert URI
intermediate_cert_uri=$(openssl x509 -in endpoint.cert.pem -noout -text | (grep 'CA Issuers - URI:' | cut -d':' -f2-))
# and get the intermediate cert (convert it from DER to PEM)
curl -s "${intermediate_cert_uri}" | openssl x509 -outform PEM -inform DER > intermediate.cert.pem