We run a multi-node, multi-master Jenkins setup in which one project triggers a project on another Jenkins instance via a cURL call.
Job A on Jenkins Alpha (all CentOS 6/7/8) calls Job B on Jenkins Beta (CentOS 6) like this:
curl -v -k --noproxy awesomehost.awesome.net -X POST https://usernick:API_TOKEN@awesomehost.awesome.net:8443/job/Example_XY/build -F file0=@${WORKSPACE}/beautifulzip.zip -F json='{"parameter": [{"name":"myinputzip.zip", "file":"file0"}]}'
This triggering job runs on multiple nodes, and when using https:// the call fails with:
Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* SSL connection using TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
* Server certificate:
* subject: CN=awesomehost.awesome.net,OU=redacted,O=company,C=yeawhat
* start date: Mar 03 10:05:01 2021 GMT
* expire date: Mar 03 10:05:01 2022 GMT
* common name: awesomehost.awesome.net
* issuer: CN=nothingtoseehere,OU=movealong,O=evilcorp,L=raccooncity,ST=solid,C=yeawhat
* Server auth using Basic with user 'nick'
> POST /job/Example_XY/build HTTP/1.1
> Authorization: Basic cut==
> User-Agent: curl/7.29.0
> Host: awesomehost.awesome.net:8443
> Accept: */*
> Content-Length: 12479660
> Expect: 100-continue
> Content-Type: multipart/form-data; boundary=----------------------------34737a99beef
>
< HTTP/1.1 100 Continue
} [data not shown]
* SSL write: error -5961 (PR_CONNECT_RESET_ERROR)
* TCP connection reset by peer
Now, if I run the same cURL call over http://, it works every time, but using https:// results in a failure most of the time. So it's most likely an HTTPS issue (wild guess).
But: while trying to debug, I used --trace and mysteriously everything works. every. time. --trace-time alone is not sufficient, but --trace - fixed the issue:
curl -v -k --trace - --noproxy awesomehost.awesome.net -X POST https://usernick:API_TOKEN@awesomehost.awesome.net:8443/job/Example_XY/build -F file0=@${WORKSPACE}/beautifulzip.zip -F json='{"parameter": [{"name":"myinputzip.zip", "file":"file0"}]}'
doesn't show the same error. I presume some I/O-related issue (all the systems share an NFS-exported setup). I was curious whether logfile I/O was the culprit, but running:
curl -v -k --trace - --noproxy awesomehost.awesome.net -X POST https://usernick:API_TOKEN@awesomehost.awesome.net:8443/job/Example_XY/build -F file0=@${WORKSPACE}/beautifulzip.zip -F json='{"parameter": [{"name":"myinputzip.zip", "file":"file0"}]}' 1>/dev/null
also works every time, so writing a long logfile doesn't seem to be the issue. Maybe some race condition?
Now, I don't have a real problem, as I have two ways to get stuff to work, but fixing stuff by turning on debug feels like a cheat.
The question cURL SSL connect error 35 with NSS error -5961 doesn't really seem to apply, as turning on debug output fixes my issue.
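One more knob that might be worth trying (an assumption on my side, not a confirmed fix): the verbose log shows the connection dying right after the Expect: 100-continue exchange, and sending an empty Expect: header makes curl upload the body immediately, which is a known workaround for mid-upload resets. A sketch with the placeholders from above, prefixed with echo so the command can be inspected before running:

```shell
# Assumption: an empty "Expect:" header (note the bare colon) disables curl's
# 100-continue handshake. Host, user, token and zip are placeholders from above.
# Drop the leading "echo" to actually send the request.
echo curl -v -k --noproxy awesomehost.awesome.net \
  -H 'Expect:' \
  -X POST "https://usernick:API_TOKEN@awesomehost.awesome.net:8443/job/Example_XY/build" \
  -F "file0=@${WORKSPACE:-/tmp}/beautifulzip.zip" \
  -F 'json={"parameter": [{"name":"myinputzip.zip", "file":"file0"}]}'
```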
Does anyone have a good idea how to debug the issue further? I can't promise that I can try out everything, as I am limited to non-root access. I would have to convince the - rightfully paranoid - admins to let me tinker with their farm, which I would rather not do, as my Jenkins is not the most important part of software running there.
Any ideas?
I have an application running on Apache 2.4.33, and we are trying to test this application for SQL injection vulnerabilities using the SQLMap command below:
sqlmap -u 'http://hostip/appurl?query_type=something&element=*' -D check -T configuration --dump
The application runs on SSL (port 443); when the above URL is opened in a browser it gets redirected to https. The same thing happens with the above command, and we see the following:
[00:02:01] [INFO] testing connection to the target URL
sqlmap got a 302 redirect to 'https://hostip/appurl?query_type=something&element='. Do you want to follow? [Y/n] n
and this utility works properly:
sqlmap -u 'https://hostip/appurl?query_type=something&element=' -D check -T configuration --dump
However, when we directly try to run the sqlmap command on the URL to which it was getting redirected, we get a 500 Internal Server Error.
The Apache access log shows:
10.21.12.170 - - [03/Jul/2018:12:51:12 +0530] "GET https://hostip/appurl?query_type=something&element= HTTP/1.1" 500 -
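Independent of sqlmap, curl can show which status code and redirect target each form of the URL produces (hostip and the query string are the placeholders from the question):

```shell
# -w prints the status code and the Location target curl would follow, if any.
# Run once against the http:// form and once against the https:// form.
curl -s -o /dev/null -w '%{http_code} %{redirect_url}\n' \
  'http://hostip/appurl?query_type=something&element=' || true
```

Comparing the two outputs shows whether the 500 comes from the redirect itself or from the https endpoint.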
I am referring to this link https://miki725.github.io/docker/crypto/2017/01/29/docker+nginx+letsencrypt.html
to enable SSL on my app, which is running with Docker. The problem here is that when I run the command below
docker run -it --rm \
  -v certs:/etc/letsencrypt \
  -v certs-data:/data/letsencrypt \
  deliverous/certbot \
  certonly \
  --webroot --webroot-path=/data/letsencrypt \
  -d api.mydomain.com
It throws an error:
Failed authorization procedure. api.mydomain.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://api.mydomain.com/.well-known/acme-challenge/OCy4HSmhDwb2dtBEjZ9vP3HgjVXDPeghSAdqMFOFqMw:
So can anyone please help me and let me know if I am missing something or doing something wrong?
What seems to be missing from that article and possibly from your setup is that the hostname api.mydomain.com needs to have a public DNS record pointing to the IP address of the machine on which the Nginx container is running.
The Let's Encrypt process is trying to access the file api.mydomain.com/.well-known/acme-challenge/OCy4HSmhDwb2dtBEjZ9vP3HgjVXDPeghSAdqMFOFqMw. This file is put there by certbot. If the address api.mydomain.com does not resolve to the address of the machine from which you are running certbot then the process will fail.
You will also need to have ports 80 and 443 open for it to work.
Based on the available info that is my best suggestion on where you can start looking to resolve the issue.
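To make that concrete, the DNS side can be checked from the machine running certbot before retrying; a small sketch, where api.mydomain.com and the server address 203.0.113.10 are placeholders:

```shell
# Compare what the hostname resolves to against the server's public address.
domain='api.mydomain.com'   # placeholder from the question
server_ip='203.0.113.10'    # placeholder: address of the host running nginx
resolved=$(getent hosts "$domain" | awk '{print $1; exit}')
if [ "$resolved" = "$server_ip" ]; then
  echo "OK: $domain -> $resolved"
else
  echo "mismatch: $domain resolves to '$resolved', expected $server_ip"
fi
```

If this prints a mismatch (or an empty resolved address), the http-01 challenge cannot succeed no matter how the containers are configured.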
Full disclosure, I have very little idea what I'm doing.
I'm doing some troubleshooting with Curl and encryption, and I don't understand why this works for a certain website I'm testing against:
curl -v https://website
but none of these options work:
curl -v -1 https://website
curl -v -2 https://website
curl -v -3 https://website
The error I get back with all three options is:
error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure
I've Googled the heck out of this error, and it seems like there are a million reasons for Curl to return it.
I know that the -2 option uses super old and busted SSL, -3 uses less old (but still busted) SSL and that -1 uses TLS. The version of Curl I'm using doesn't seem to work if I try to get granular with --tlsv1.0, etc. I don't have permission to install a newer version of Curl on the machines I'm testing on.
So, my question is this: How do I know what method Curl is using to connect to https:// sites if I don't explicitly tell it what to use?
It depends entirely on what is negotiated with the peer. You would need to examine the handshake trace in each specific case.
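For example, the negotiated protocol/cipher line can be pulled out of curl's verbose output (example.com stands in for the site under test):

```shell
# Keep only the "SSL connection using ..." line from curl's verbose output.
show_negotiated() {
  grep -o 'SSL connection using .*'
}
curl -sv https://example.com -o /dev/null 2>&1 | show_negotiated || true
```

With an NSS-built curl (as in the question above) this prints the cipher suite; OpenSSL builds print the protocol version as well.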
I'm setting up a domain registry as described here:
https://docs.docker.com/registry/deploying/
I generated a certificate for docker.mydomain.com and started the docker using their command on my server:
docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2
I've started the docker and pointed to certificates I obtained using letsencrypt (https://letsencrypt.org/).
Now, when I browse to https://docker.mydomain.com:5000/v2/ I get a page with just '{}', with a green lock (successful secure page request).
But when I try to do a docker login docker.mydomain.com:5000 from a different server, I see an error in the registry container:
TLS handshake error from xxx.xxx.xxx.xxx:51773: remote error: bad certificate
I've tried some different variations in setting up the certificates, and gotten errors like:
remote error: unknown certificate authority
and
tls: first record does not look like a TLS handshake
What am I missing?
Docker seems not to support SNI: https://github.com/docker/docker/issues/9969
Update: Docker should now support SNI.
This means that when connecting to your server during the TLS handshake, the Docker client does not specify the domain name, so your server presents its default certificate.
The solution could be to change your server's default certificate to one that is valid for the Docker domain.
To check whether your (sub-)domain works with clients that are not SNI-aware, you can use ssllabs.com/ssltest: if you DON'T see the message "This site works only in browsers with SNI support.", then it will work.
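The same comparison can be made by hand with openssl, by looking at which certificate the server presents with and without SNI (docker.mydomain.com:5000 is the endpoint from the question; the commands are prefixed with echo so the sketch runs without network access):

```shell
host='docker.mydomain.com'   # placeholder from the question
port=5000
# With SNI: -servername puts the hostname into the TLS ClientHello.
echo openssl s_client -connect "$host:$port" -servername "$host"
# Without SNI: a server relying on SNI will fall back to its default cert.
echo openssl s_client -connect "$host:$port"
# Drop the echos and pipe each through: | openssl x509 -noout -subject
```

If the two subjects differ, the client that fails is the one not sending SNI.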
Simply put, I am going to run locally a popular example of a WebRTC app:
github.com/webrtc/apprtc
apprtc is installed, and it even works locally without a TURN server (the same-origin policy doesn't allow using Google's TURN server, which works only from apprtc.appspot.com: access-control-allow-origin: "https://apprtc.appspot.com").
But I know that in the real internet world (NATs and firewalls) I need a TURN server, so I have decided to use my own STUN/TURN server:
code.google.com/p/coturn/
I am trying to integrate my apprtc with coturn:
+apprtc: http://localhost:8080/?wstls=false
+coturn: http://localhost:3478
and I have questions:
a) Do I need to execute some turnadmin commands, which are described in the INSTALL guide?
Or will it be enough to run turnserver from the example:
my_name@my_machine:~/WEBRTC/turnserver-4.4.5.2/examples/scripts/restapi$ ./secure_relay_secret.sh
which contains:
if [ -d examples ] ; then
cd examples
fi
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/lib/:/usr/local/mysql/lib/
export DYLD_LIBRARY_PATH=${DYLD_LIBRARY_PATH}:/usr/local/lib/:/usr/local/mysql/lib/
PATH="./bin/:../bin/:../../bin/:${PATH}" turnserver -v --syslog -a -L 127.0.0.1 -L ::1 -E 127.0.0.1 -E ::1 --max-bps=3000000 -f -m 3 --min-port=32355 --max-port=65535 --use-auth-secret --static-auth-secret=logen --realm=north.gov --cert=turn_server_cert.pem --pkey=turn_server_pkey.pem --log-file=stdout -q 100 -Q 300 --cipher-list=ALL "$@"
b) When I open localhost:3478 in a browser I see:
"TURN Server
use https connection for the admin session:
What is the URI for the REST API?
c) For the REST API I need to pass some parameters: username and key. Is that enough?
Will it be enough to simply add an extra -u switch to the turnserver command? Do I need some extra configuration?
d) How do I solve the same-origin policy problem? I am not going to experiment with the same ports and nginx, but simply set the "access-control-allow-origin" header on the turnserver response. How do I do that without an nginx proxy? Or maybe there are other solutions?
e) Are there any other important issues that a person running the apprtc app and a coturn server should know about?
Edit:
For me the biggest problem was thinking that coturn has its own API method which returns TURN servers, but it has not. So it is required to do it yourself, on your own HTTP server. Below is an example in Python/Django:
from hashlib import sha1
from time import time
import hmac

from django.conf import settings
from django.http import HttpResponseForbidden, JsonResponse

TURN_SERVER_SECRET_KEY = 'my_pass'  # must match turnserver's --static-auth-secret

def get_turn_servers(request):
    if 'username' not in request.GET.keys():
        return HttpResponseForbidden()
    # coturn REST API: username is "<unix-expiry-timestamp>:<user>"
    unix_timestamp_tomorrow = int(time()) + (24 * 60 * 60)
    new_username = str(unix_timestamp_tomorrow) + ':' + request.GET['username']
    # password = base64(HMAC-SHA1(secret, username))  (Python 2 string APIs)
    hashed = hmac.new(TURN_SERVER_SECRET_KEY, new_username, sha1)
    password = hashed.digest().encode("base64").rstrip('\n')
    turn_udp_uri = 'turn:%s:3478?transport=udp' % settings.DOMAIN.split(':')[0]  # without the port
    turn_tcp_uri = 'turn:%s:3478?transport=tcp' % settings.DOMAIN.split(':')[0]
    return JsonResponse({
        'username': new_username,
        'password': password,
        'uris': [turn_udp_uri,
                 turn_tcp_uri],
    })
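The password computed above follows coturn's REST API formula, password = base64(HMAC-SHA1(static-auth-secret, username)), so a credential can be cross-checked from a shell; my_pass and the sample username below are placeholders:

```shell
secret='my_pass'              # must match --static-auth-secret on the turnserver
username='1700000000:alice'   # "<unix-expiry-timestamp>:<user>", as in the view
password=$(printf '%s' "$username" \
  | openssl dgst -sha1 -hmac "$secret" -binary | base64)
echo "$password"
```

If the shell output matches what the Django view returns for the same username and timestamp, the secret and formula are wired up consistently.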
These groups will be helpful:
https://groups.google.com/forum/#!forum/turn-server-project-rfc5766-turn-server
https://groups.google.com/forum/#!forum/discuss-webrtc
If somebody needs WebRTC in Django code, please write to me.