I'm trying to remove (or update) a certificate on Heroku, without success, using this command:
heroku certs:remove --endpoint -a heroku-app-ssl
Error:
Error: Unexpected argument: heroku-app-ssl
According to the documentation, -a heroku-app-ssl should be my Heroku app name, which is what I've used above, but it keeps giving me this error, so I don't understand what is wrong.
I need to remove the old certificate and install a new one, but this error is blocking me.
I followed this doc:
https://devcenter.heroku.com/articles/ssl-endpoint
Take the -a out:
heroku certs:remove --endpoint heroku-app-ssl
It isn't included in the documentation you referenced.
If you get an error about needing -a that probably indicates that you've got multiple Heroku apps connected to this repository, e.g. a staging and a production app. In that case you can add the -a, but you need to provide the app name (not the SSL domain) separately:
heroku certs:remove --endpoint heroku-app-ssl -a heroku-app
You may already be doing this, but I also urge you to move to the newer Heroku SSL service. It's free, and much simpler than the old SSL Endpoint paid add-on.
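If you do switch, one option is Heroku's Automated Certificate Management, which provisions and renews the certificate for you. A rough sketch of the CLI steps (the app name here is a placeholder; check heroku help certs:auto for the current flags):
# Enable Automated Certificate Management for the app
heroku certs:auto:enable -a heroku-app
# Check provisioning status
heroku certs:auto -a heroku-app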
I installed a brand new DigitalOcean droplet using a marketplace base (so on paper everything should be OK out of the box).
When trying to issue certificates, I am getting this error:
[11.13.2019_04-48-28] /root/.acme.sh/acme.sh --issue -d thehouseinkorazim.co.il -d www.thehouseinkorazim.co.il --cert-file /etc/letsencrypt/live/thehouseinkorazim.co.il/cert.pem --key-file /etc/letsencrypt/live/thehouseinkorazim.co.il/privkey.pem --fullchain-file /etc/letsencrypt/live/thehouseinkorazim.co.il/fullchain.pem -w /home/thehouseinkorazim.co.il/public_html --force
[11.13.2019_04-48-28] [Errno 2] No such file or directory [Failed to obtain SSL. [obtainSSLForADomain]]
[11.13.2019_04-48-28] 283 Failed to obtain SSL for domain. [issueSSLForDomain]
[11.13.2019_04-48-34] Trying to obtain SSL for: thehouseinkorazim.co.il and: www.thehouseinkorazim.co.il
I checked and UFW is not installed.
I do have a network firewall but it is the same one as another droplet that does allow for certificates (same rules) so I think it is not the cause.
I searched through all the answers online with no luck.
I even installed certbot to issue a certificate manually, but I got the same error (I did that because I know you need to register initially to get certificates, and I hadn't yet, so I thought that was the cause).
Any ideas? Thanks!
Update: I set up a clean droplet again; this is the error without anything I did manually:
Cannot issue SSL. Error message: ln: failed to create symbolic link '/usr/local/lsws/admin/conf/cert/admin.crt': No such file or directory ln: failed to create symbolic link '/usr/local/lsws/admin/conf/cert/admin.key': No such file or directory 0,283 Failed to obtain SSL for domain. [issueSSLForDomain]
I checked and there is no folder "cert" under "conf" in the path written above.
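As a hedged guess based on that error (the symlinks appear to fail only because their target directory does not exist), a possible interim workaround might be to create the directory by hand and retry issuance:
# create the directory the installer expects, then retry issuing the certificate
mkdir -p /usr/local/lsws/admin/conf/cert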
There's a known SSL issue in the recent version due to some environment/code changes. We are already aware of it and have submitted a new version that includes a fix. Please give it a day or two and you should be able to launch the new version from the marketplace, which comes with CyberPanel v1.9.2.
Best
I am trying to deploy my app to a Google Cloud cluster using the kubectl command, from behind a corporate proxy that requires a certificate ".crt" file for HTTPS requests.
I already ran the gcloud container clusters get-credentials... command, and it also asked for a certificate. I followed Google's instructions, configured my certificate file without any issue, and it worked.
But when I try the kubectl get pods I am getting the following message:
"Unable to connect to the server: x509: certificate signed by unknown authority"
How can I configure my certificate file to be used by the kubectl command?
I searched about this subject, but the steps I found were too complicated. Could I just run something like this:
kubectl --set_ca_file /path/to/my/cert
Thank you
As far as I know, the short answer is no.
Here [1] you can see a step-by-step guide for the easiest way I've found so far. It isn't a one-liner, but it's the closest thing to it.
Once you have your cert files, you need to run this:
gcloud compute ssl-certificates create test-ingress-1 \
  --certificate [FIRST_CERT_FILE] --private-key [FIRST_KEY_FILE]
Then you need to create your YAML file with the configuration (the linked guide includes two examples).
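As a rough sketch of what such a file might look like, using the pre-shared-certificate annotation from the linked guide (all names here are placeholders, the backend Service is assumed to already exist, and the apiVersion assumes a recent Kubernetes version):
cat > my-ingress.yaml <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    # references the certificate created with the gcloud command above
    ingress.gcp.kubernetes.io/pre-shared-cert: "test-ingress-1"
spec:
  defaultBackend:
    service:
      name: my-service   # placeholder: your existing Service
      port:
        number: 80
EOF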
run this command:
kubectl apply -f [NAME_OF_YOUR_FILE].yaml
[1] https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl
I developed an application for a client, which I host on a subdomain. The problem is that I don't own the main domain/website. They've added a DNS record pointing the subdomain to the IP on which I host the app. Now I want to request a free, automatic certificate from Let's Encrypt, but when I try the handshake it says:
Getting challenge for subdomain.example.com from acme-server...
Error: http://subdomain.example.com/.well-known/acme-challenge/letsencrypt_**** is not reachable. Aborting the script.
dig output for subdomain.example.com:subdomain.example.com
Please make sure /.well-known alias is setup in WWW server.
Which makes sense, because that main domain isn't served from my server. But if I try to generate the certificate without the main domain I get:
You must include your main domain: example.com.
Cannot Execute Your Request
Details
Must include your domain example.com in the LetsEncrypt entries.
So I'm curious how I can set up a certificate without owning the main domain. I tried googling the issue but couldn't find any relevant results. Any help would be much appreciated.
First
You don't need to own the domain; you just need to be able to copy a file to the location serving that domain. (It sounds like you're all set there.)
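A quick way to sanity-check that (the webroot path and test file name below are just placeholders; adjust them to wherever the subdomain is actually served from):
# create a test token under the ACME challenge path
mkdir -p /srv/www/example.com/.well-known/acme-challenge
echo ok > /srv/www/example.com/.well-known/acme-challenge/test
# this must return "ok" from outside your network for HTTP validation to succeed
curl http://subdomain.example.com/.well-known/acme-challenge/test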
Second
What tool are you using? The error message you gave makes me think the client is misconfigured. The challenge name is usually something like https://example.com/.well-known/acme-challenge/jQqx6qlM8u3wpi88N6lwvFd7SA07oK468mB1x4YIk1g. Compare that to your error:
Error: http://example.com/.well-known/acme-challenge/letsencrypt_example.com is not reachable. Aborting the script.
Third
I'm the author of Greenlock, which is compatible with Let's Encrypt. I'm confident that it will work for you.
Install
# Feel free to read the source first
curl -fsS https://get.greenlock.app/ | bash
Usage with existing webserver:
Let's say that:
You're using Apache or Nginx.
You confirm that ping example.com gives the IP of your server
You're exposing http on port 80 (otherwise verification will fail)
Your website is located in /srv/www/example.com
Your email is jon@example.com (must be a real email address)
You want to store your certificate as /etc/acme/live/example.com/fullchain.pem
This is what the command would look like:
sudo greenlock certonly --webroot \
--acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory \
--agree-tos --email jon@example.com --domains example.com \
--community-member \
--root /srv/www/example.com \
--config-dir /etc/acme
If that doesn't work on the first try, swap --acme-url https://acme-v02.api.letsencrypt.org/directory for --acme-url https://acme-staging-v02.api.letsencrypt.org/directory while you debug; otherwise your server could become blocked by Let's Encrypt for too many bad requests. Just know that you'll have to delete the certificates from the staging environment and retry with the production URL, since the tool cannot tell which certificates are "production" and which are "testing".
The --community-member flag is optional, but will provide me with analytics and allow me to contact you about important or mandatory changes as well as other relevant updates.
After you get the success message you can then use those certificates in your webserver config and restart it.
That will work as a cron job as well. You could run it daily and it will only renew the certificate after about 75 days. You could also put a cron job to send the "update configuration" signal to your webserver (normally HUP or USR1) every few days to cause it to start using the new certificates without even restarting (...or just have it restart).
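For example, a rough sketch of the relevant crontab entries (edited with crontab -e as root, and assuming nginx under systemd plus the same certonly command shown above):
# attempt renewal daily; the certificate is only actually renewed once it is old enough
0 3 * * * greenlock certonly --webroot --acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory --agree-tos --email jon@example.com --domains example.com --root /srv/www/example.com --config-dir /etc/acme
# reload the webserver a little later so it picks up any renewed certificate
30 3 * * * systemctl reload nginx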
Usage without a web server
If you just want to quickly test without even having a webserver running, this will do it for you:
sudo greenlock certonly --standalone \
--acme-version draft-11 --acme-url https://acme-v02.api.letsencrypt.org/directory \
--agree-tos --email jon@example.com --domains example.com \
--community-member \
--config-dir /etc/acme/
That runs expecting that you DO NOT have a webserver running on port 80, as it will start one temporarily just for the purpose of the certificate.
sudo is required for using port 80 and for writing to root and httpd-owned directories (like /etc and /srv/www). You can run the command as your webserver's user instead if that has the correct permissions.
Use Greenlock as your webserver
We're working on an option to bypass the middleman altogether and simply use greenlock as your webserver, which would probably work great for simple vhosting like it sounds like you're doing. Let me know if that's interesting to you and I'll make sure to update you about it.
Fourth
Let's Encrypt also has an official client called certbot which will likely work just as well, perhaps better, but back in the early days it was easier for me to build my own than to use theirs due to issues which they have long since fixed.
What's important is the subdomain's A record. It should point to the IP address of the server from which you are requesting the subdomain's certificate.
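You can check what it currently resolves to with something like this (the hostname is a placeholder):
dig +short subdomain.example.com A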
I've been trying off and on to get a LAMP development server operational behind my corporate firewall (McAfee Web Gateway). I have an Ubuntu/Trusty64 image in a VirtualBox VM provisioned through Vagrant. I cannot get some (in fact most) repositories to load for a proper sudo apt-get update. I'm getting a 401 authentication required error on all 'security.ubuntu.com trusty-security/*' and 'archive.ubuntu.com trusty/*' sources, and all of them fail to fetch. As a result, almost every sudo apt-get install {whatever} fails, and I cannot add the PPA repository I need to install the LAMP environment I want.
I can turn off SSL verification for some things and can get many things installed - but I need SSL working correctly within this environment.
Digging deeper, I find that if I curl -v https://url.com:443, I get the
curl(60): ssl certificate error: unable to get local issuer certificate.
I have the generic bundle 'ca-bundle.crt' installed locally in /usr/local/share/ca-certificates/ and ran sudo update-ca-certificates, which seemed to update ca-certificates.crt in /etc/ssl/certs/.
I ran strace -o stracker.out curl -v https://url.com:443 and searched for the failing stat() as suggested here by No-Bugs_Hare, and found that curl was looking for 'c099e901.0' in /etc/ssl/certs/ and it isn't there. Googling that particular hex ID gives no joy, and I am stuck at this step.
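From what I've read since, those file names are OpenSSL "subject hashes" of CA certificates, and the <hash>.0 symlinks in /etc/ssl/certs/ are what update-ca-certificates (or c_rehash) generates for each trusted CA. A rough check I could run, assuming the interception is done by the McAfee proxy's CA (the file name below is just a placeholder), would be:
# Print the OpenSSL subject hash of a candidate CA certificate. The missing c099e901.0
# would normally be a symlink, created by update-ca-certificates or c_rehash,
# pointing at whichever trusted CA has that hash.
openssl x509 -noout -subject_hash -in /usr/local/share/ca-certificates/mcafee-proxy-ca.crt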
Next I tried strace -o traceOppenSSL.out openssl s_client -connect url.com:443 to see if I can get more detail but can't see what causes the
verify error:num=20:unable to get local issuer certificate
followed by two other errors (I'm sure all relating to the first one); it then displays the "Server Certificate" within a BEGIN / END block, followed by a bunch of other metadata. The entire session ends with
Verify return code: 21 (unable to verify the first certificate).
So, this is not my forte and I'm doing what I can to try and get this VM operational. Like I said earlier, I've been trying many things and understand most of the issue is the fact that I'm behind a McAfee firewall within my corporate structure. I don't know how to troubleshoot more than what I've explained above but I'm willing to dig deeper.
I have a few questions. Why is curl looking for that particular hex ID, and where would I find or generate the beast? Are there other troubleshooting steps I should try? The VM is a server-class Ubuntu install, so I only have an SSH CLI terminal and no window-manager GUI to work with.
I just want to say that this is not normally something I do, but I have been tasked with it recently...
I have closely followed the Heroku documentation for setting up SSL, but I am still encountering a problem.
I have added my cert to heroku using the following command:
heroku certs:add path_to_crt path_to_key
This part seems to work. I receive a message saying:
Adding SSL Endpoint to my_app ... done
I have also set up a CNAME with my hosting service to point to the endpoint associated with the cert command above. However, when I browse to the site I still receive an SSL error. It says my certificate isn't trusted and points to the *.heroku.com certificate, not the one I have just uploaded.
I have noticed that when I execute the following command:
heroku ssl
I receive the following:
my_domain_name has no certificate
My assumption is that there should be a certificate associated with this domain at this point.
Any ideas?
Edit: It appears that I did not wait long enough for the certificate stuff to trickle through the internets... however, my question regarding the "heroku ssl" command still puzzles me.
The Heroku ssl command is for legacy certificates:
$ heroku ssl -h
Usage: heroku ssl
list legacy certificates for an app
The command you need is heroku certs, which will output the relevant certificate info for that project.
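For example, a quick sketch (my_app being the app name from above; the -a flag is only needed if the app can't be inferred from the current directory):
heroku certs -a my_app
heroku certs:info -a my_app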