I have a server that runs Nexus. I can access Nexus and download artifacts via HTTPS in a browser without a problem.
Now I want to download an artifact using wget over HTTPS:
wget https://195.20.100.100:8081/repository/myrepo/com/myrepo/program/1.0-SNAPSHOT/program.tar.gz
and it tells me:
WARNING: cannot verify 195.20.100.100's certificate, issued by ‘/C=US/ST=Unspecified/L=Unspecified/O=Sonatype/OU=Example/CN=*.195.20.100.100’:
Self-signed certificate encountered.
Proxy request sent, awaiting response... 401 Unauthorized
Authorization failed.
What are the exact steps I need to take?
Thanks in advance.
This isn't a Nexus Repository Manager issue per se; I believe you just need to do something akin to the answer in this post: wget, self-signed certs and a custom HTTPS server
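In practice that means telling wget to trust the self-signed certificate (or to skip verification) and, separately for the 401, supplying credentials. A minimal sketch, assuming a hypothetical Nexus user named deployer; note that since the certificate's CN is *.195.20.100.100 and you connect by IP, the name check may still fail, in which case --no-check-certificate is the fallback:

# Export the server's self-signed certificate once
openssl s_client -connect 195.20.100.100:8081 -showcerts </dev/null | openssl x509 -outform PEM > nexus.pem
# Trust that certificate and authenticate to answer the 401
wget --ca-certificate=nexus.pem --user=deployer --ask-password https://195.20.100.100:8081/repository/myrepo/com/myrepo/program/1.0-SNAPSHOT/program.tar.gz
# For a quick test only, skip verification entirely instead:
# wget --no-check-certificate --user=deployer --ask-password https://195.20.100.100:8081/repository/myrepo/com/myrepo/program/1.0-SNAPSHOT/program.tar.gz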
I'm trying to test my API using SoapUI 5.4.0. I added my website's SSL certificate to the Keystore and my client's SSL certificate to the Truststore. I added the apikey in the header and the parameters in the parameters section. But I'm still getting:
response error 401 Unauthorized
Please help me fix this issue.
Have you sent the request with the configured Keystore?
For example, in the screenshot below I have configured the SSL keystore; I hope you have done the same.
Then, while sending the request, you need to point it at the SSL keystore. You need to do this for every request that requires SSL.
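If you want to sanity-check the same client certificate outside SoapUI, here is a hedged curl sketch; the keystore filename, password, header name, and URL are all assumptions, not from the original post:

# Present the client certificate (mutual TLS) plus the API key header
curl --cert client.p12:changeit --cert-type P12 -H "apikey: YOUR_KEY" https://example.com/api/resource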
I've been playing around with curl for a couple of days, trying to do what should be a simple POST of a file to a web service, and not getting anywhere.
The target POST service is unauthenticated HTTPS. When I run my POST request via curl or via Informatica, I get an SSL handshake failure with both methods.
For example:
curl -X POST -F 'file=@filename.dat' https://url
I have been able to get this to work using Postman, so I know the service works. According to network security, SSL is disabled in this environment. Am I out of luck, or is there a way to get this to work without SSL?
Specific error encountered:
curl: (35) error:14077410:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert handshake failure
By default, a client establishing an HTTPS connection will check the validity of the server's SSL certificate - otherwise, what's the point of using SSL?
In your case, you are saying "Pretend to use HTTPS but actually ignore the certificate", because it's invalid, or you are still in the process of getting one, or you are in the development phase (I hope the latter is true; get or create a valid server certificate when needed).
But curl doesn't know that. It assumes you are asking it to establish a connection with an HTTPS endpoint, so it will try to validate the certificate - which, in your case, may be the source of the failure.
Try curl -k -X POST -F 'file=@filename.dat' https://url
From the manpage:
-k, --insecure
(TLS) By default, every SSL connection curl makes is verified to be secure. This option allows curl to proceed and operate even for server connections otherwise considered insecure.
The server connection is verified by making sure the server's certificate contains the right name and verifies successfully using the cert store.
See this online resource for further details:
https://curl.haxx.se/docs/sslcerts.html
See also --proxy-insecure and --cacert.
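If you'd rather keep verification on than use -k, you can point curl at the server's CA instead; ca.pem here is a placeholder for a certificate you export from the server or obtain from its operators:

# Verify against an explicit CA file instead of disabling checks
curl --cacert ca.pem -X POST -F 'file=@filename.dat' https://url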
I have a Sails application hosted on DigitalOcean via dokku. Everything runs and deploys fine, and if I navigate to my domain I can see that the app is working.
Now I have added a TLS certificate (so that my app is accessible via HTTPS) by:
1. Creating my private key and CSR.
2. Using them to get a certificate from a CA.
3. Adding my private key and issued certificate to config/local.js.
4. Tarballing the key and certificate and adding them to dokku via dokku certs:add (sketched below).
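For reference, steps 1, 2, and 4 might look like this on the command line; the filenames and the app name myapp are assumptions:

# Steps 1-2: generate a private key and a CSR to submit to the CA
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr
# Step 4: once the CA returns server.crt, tarball both and hand them to dokku
tar cf certs.tar server.crt server.key
dokku certs:add myapp < certs.tar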
After all that, if I push my app to dokku it boots just fine, with no errors during the deployment phase. From the buildpack logs I can clearly see that after deployment my app should be accessible via HTTPS:
...
-----> Creating https nginx.conf
-----> Running nginx-pre-reload
Reloading nginx
-----> Setting config vars
DOKKU_APP_RESTORE: 1
-----> Shutting down old containers in 60 seconds
=====> c302066ebd1ecc0ac5323c3cbbcaf9132eebf905f5616e5b4407cecf2b316969
=====> Application deployed:
http://my-domain-here.com
https://my-domain-here.com
The only problem is that when I navigate to my domain, I get a "502 Bad Gateway" error in the browser, and if I look at the app's nginx error log I see the following there:
2016/07/14 03:09:30 [error] 7827#0: *391 upstream prematurely closed connection while reading response header from upstream, client: --hidden--, server: my-domain-here.com, request: "GET / HTTP/1.1", upstream: "http://172.17.0.2:5000/", host: "getmocky.com"
What is wrong? How do I fix it?
OK, I have figured it out. It turns out that if you read the Sails deployment docs closely, you'll see text like:
don't worry about configuring Sails to use an SSL certificate. SSL will almost always be resolved at your load balancer/proxy server, or by your PaaS provider
What this means is that I have to drop step 3 from my list above (adding the key and certificate to config/local.js); after that, everything works.
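In other words, TLS terminates at dokku's nginx, and the container keeps serving plain HTTP. A quick way to confirm the fix after redeploying, with the domain and upstream address taken from the logs above:

# nginx should now proxy cleanly instead of returning 502
curl -I https://my-domain-here.com
# and the upstream app should answer on its plain-HTTP port (from the host)
curl -I http://172.17.0.2:5000/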
I'm working with two servers: one on localhost and one on the web. Both are HTTP; I don't have an SSL certificate installed on either.
When I make a curl request to an HTTPS URL (in this case the Facebook API), one of the servers works and the other doesn't. The curl error is "SSL certificate problem: unable to get local issuer certificate." Upon investigation, I noticed that $_SERVER["SERVER_SOFTWARE"] outputs something different on the two servers.
Server 1, which works with CURL to https
$_SERVER["SERVER_SOFTWARE"] = Apache/2.4.10 (Win32) OpenSSL/1.0.1i PHP/5.6.3
Server 2, which doesn't work with CURL to https
$_SERVER["SERVER_SOFTWARE"] = Apache
I'm guessing that the second server's lack of any mention of OpenSSL may have something to do with the error? Is that possible? What would I need to do to get OpenSSL onto that server? And why is the first server able to find the issuer certificate when I don't have an SSL cert installed on it?
Since you are making a curl request to an external server, the problem is completely unrelated to the web server software you are running locally; you don't even need to run a local web server at all. It depends only on the certificate the external server sends back to curl and on whether the necessary root CA can be found in curl's trust store.
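On the failing machine, the usual fix is to give curl a CA bundle explicitly. A sketch using the Mozilla bundle that the curl project redistributes; the local path is an assumption:

# Fetch the Mozilla CA bundle published by the curl project
wget https://curl.haxx.se/ca/cacert.pem
# Verify against it explicitly instead of the (missing) system store
curl --cacert cacert.pem https://graph.facebook.com/

For PHP's curl extension, the equivalent is setting curl.cainfo = /path/to/cacert.pem in php.ini, or CURLOPT_CAINFO on the individual request.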
I've already purchased an SSL certificate from DigiCert and installed it on my Nexus server (running in Tomcat, with a JKS keystore).
It works well in Firefox and Chrome (the green address bar indicates a valid certificate was received), and builds can be downloaded from the Nexus web UI too.
But wget cannot fetch anything without --no-check-certificate; it fails with something like:
ERROR: cannot verify mydomain.com's certificate, issued by `/C=US/O=DigiCert Inc/OU=www.digicert.com/CN=DigiCert High Assurance CA-3':
Unable to locally verify the issuer's authority.
To connect to mydomain.com insecurely, use `--no-check-certificate'.
Unable to establish SSL connection.
I found some related questions:
SSL connection fails with wget, curl, but succeed with firefox and lynx
linux wget not certified?
But neither of them gives a final solution. I want to know whether some (special) configuration is needed on Nexus, or whether this is a bug in the wget command.
Googling "digicert wget" returns many results, but I cannot find a clue there either. Thank you!
You need to add the DigiCert root certificate to a store accessible by wget:
http://wiki.openwrt.org/doc/howto/wget-ssl-certs
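The usual shape of that fix, sketched with an assumed filename for the intermediate CA certificate named in your error message (available for download from DigiCert):

# Per-invocation: point wget at the CA certificate directly
wget --ca-certificate=DigiCertHighAssuranceCA-3.pem https://mydomain.com/
# Or system-wide: drop the PEM into the OpenSSL cert directory and rehash
sudo cp DigiCertHighAssuranceCA-3.pem /etc/ssl/certs/
sudo c_rehash /etc/ssl/certs/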