I'm trying to build OpenCog from here and when I issue this command
octool -rdcpav -l default
It builds everything, but when it gets to the step of installing Link-Grammar this happens:
[octool] Installing Link-Grammar....
--2020-06-13 10:09:36-- http://www.abisource.com/downloads/link-grammar/current/
Resolving www.abisource.com (www.abisource.com)... 130.89.149.216
Connecting to www.abisource.com (www.abisource.com)|130.89.149.216|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://www.abisource.com/downloads/link-grammar/current/ [following]
--2020-06-13 10:09:37-- https://www.abisource.com/downloads/link-grammar/current/
Connecting to www.abisource.com (www.abisource.com)|130.89.149.216|:443... connected.
OpenSSL: error:1425F102:SSL routines:ssl_choose_client_version:unsupported protocol
Unable to establish SSL connection.
I'm on Ubuntu 20.04 LTS.
www.abisource.com supports only TLS version 1.0, which is now broken (or at least weakened) and way obsolete. According to its headers it is Apache 2.2.15 (Fedora) which dates from 2010!
This therefore appears to be the same problem as OpenSSL v1.1.1 ssl_choose_client_version unsupported protocol, except with Ubuntu instead of Debian and wget (used by octool) instead of openvpn. Try the accepted answer there: edit /etc/ssl/openssl.cnf under [system_default_sect] to downgrade MinProtocol=TLSv1 and possibly set CipherString=DEFAULT:@SECLEVEL=1 -- the server's DHE key is 1k, and I don't recall if that works at level 2, although its cert is absurdly RSA 4k!
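On Debian, where that [system_default_sect] section already exists in /etc/ssl/openssl.cnf, the referenced fix is a sketch along these lines:
[system_default_sect]
MinProtocol = TLSv1
CipherString = DEFAULT:@SECLEVEL=1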
UPDATE: Okay, I downloaded and installed Ubuntu 20.04 including source for libssl1.1 and looked at it, and they did NOT keep the Debian approach here; they changed it. Specifically, they didn't change the openssl.cnf file to require TLSv1.2; instead they compiled OpenSSL/libssl to make the default SECLEVEL 2 and to have SECLEVEL 2 force TLSv1.2 (which it doesn't upstream).
However, you can still fix it by adding the desired (weak) configuration to openssl.cnf:
Somewhere in the default section, i.e. before the first line beginning with [, add a line:
openssl_conf = openssl_configuration
I like putting it at the very top, but that's just me.
Technically this can go at any section boundary, but it's easiest at the end: add three new sections:
[openssl_configuration]
ssl_conf = ssl_configuration
[ssl_configuration]
system_default = tls_system_default
[tls_system_default]
CipherString = DEFAULT:@SECLEVEL=1
Note that since MinProtocol wasn't already there, you don't need to add it (the code default is okay), but you can if you want.
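As a quick sanity check independent of wget, with the modified config in place something like this should now complete the handshake with the old server (the change is the system-wide default, so openssl s_client should pick it up too):
openssl s_client -connect www.abisource.com:443 < /dev/null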
Now it works:
$ wget https://www.abisource.com/
--2020-06-20 05:11:11-- https://www.abisource.com/
Resolving www.abisource.com (www.abisource.com)... 130.89.149.216
Connecting to www.abisource.com (www.abisource.com)|130.89.149.216|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7687 (7.5K) [text/html]
Saving to: ‘index.html’
index.html 100%[===================>] 7.51K --.-KB/s in 0.002s
2020-06-20 05:11:12 (3.90 MB/s) - ‘index.html’ saved [7687/7687]
This is, as you commented, a global change. You can change it for this specific operation by editing your copy of octool to add the option --ciphers=DEFAULT:@SECLEVEL=1 to the wget command(s). With the original openssl.cnf:
$ wget --ciphers=DEFAULT:@SECLEVEL=1 https://www.abisource.com/
--2020-06-20 05:15:21-- https://www.abisource.com/
Resolving www.abisource.com (www.abisource.com)... 130.89.149.216
Connecting to www.abisource.com (www.abisource.com)|130.89.149.216|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7687 (7.5K) [text/html]
Saving to: ‘index.html.1’
index.html.1 100%[===================>] 7.51K --.-KB/s in 0s
2020-06-20 05:15:22 (330 MB/s) - ‘index.html.1’ saved [7687/7687]
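Another way to keep the weakening scoped, if you'd rather not edit octool or the system-wide file, is to point just the affected run at a private copy of the config through the OPENSSL_CONF environment variable (the file name here is just an example):
cp /etc/ssl/openssl.cnf ~/weak-openssl.cnf
# edit ~/weak-openssl.cnf as described above, then:
OPENSSL_CONF=~/weak-openssl.cnf wget https://www.abisource.com/
The same variable can be set in front of the octool invocation so the wget it spawns inherits it.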
Related
I was using wget to download GEDI data from the LP DAAC data pool. It always returns an error of "Unable to establish SSL connection". I tried wget from the command prompt and from PyCharm, and added the "--no-check-certificate" option.
wget is the newest release (1.21.3, 64-bit).
OS: Windows 11.
From the following messages, I guess the connection to EarthData is successful, because it returns the download link that I can open manually in the browser to start downloading. The error seems to happen in the last step, when wget starts accessing the returned link to download the data.
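Roughly, the command being run looks like this (reconstructed from the description and the log below, so the exact flags may differ):
wget --no-check-certificate https://e4ftl01.cr.usgs.gov//GEDI_L1_L2/GEDI/GEDI01_B.002/2019.04.20/GEDI01_B_2019110092939_O01996_01_T03334_02_005_01_V002.h5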
returned messages:
--2022-08-14 09:51:09-- https://e4ftl01.cr.usgs.gov//GEDI_L1_L2/GEDI/GEDI01_B.002/2019.04.20/GEDI01_B_2019110092939_O01996_01_T03334_02_005_01_V002.h5
Resolving e4ftl01.cr.usgs.gov (e4ftl01.cr.usgs.gov)... 2001:49c8:4000:127d::133:130, 152.61.133.130
Connecting to e4ftl01.cr.usgs.gov (e4ftl01.cr.usgs.gov)|2001:49c8:4000:127d::133:130|:443... failed: Bad file descriptor.
Connecting to e4ftl01.cr.usgs.gov (e4ftl01.cr.usgs.gov)|152.61.133.130|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://urs.earthdata.nasa.gov/oauth/authorize?scope=uid&app_type=401&client_id=ijpRZvb9qeKCK5ctsn75Tg&response_type=code&redirect_uri=https%3A%2F%2Fe4ftl01.cr.usgs.gov%2Foauth&state=aHR0cHM6Ly9lNGZ0bDAxLmNyLnVzZ3MuZ292Ly9HRURJX0wxX0wyL0dFREkvR0VESTAxX0IuMDAyLzIwMTkuMDQuMjAvR0VESTAxX0JfMjAxOTExMDA5MjkzOV9PMDE5OTZfMDFfVDAzMzM0XzAyXzAwNV8wMV9WMDAyLmg1 [following]
--2022-08-14 09:51:55-- https://urs.earthdata.nasa.gov/oauth/authorize?scope=uid&app_type=401&client_id=ijpRZvb9qeKCK5ctsn75Tg&response_type=code&redirect_uri=https%3A%2F%2Fe4ftl01.cr.usgs.gov%2Foauth&state=aHR0cHM6Ly9lNGZ0bDAxLmNyLnVzZ3MuZ292Ly9HRURJX0wxX0wyL0dFREkvR0VESTAxX0IuMDAyLzIwMTkuMDQuMjAvR0VESTAxX0JfMjAxOTExMDA5MjkzOV9PMDE5OTZfMDFfVDAzMzM0XzAyXzAwNV8wMV9WMDAyLmg1
Resolving urs.earthdata.nasa.gov (urs.earthdata.nasa.gov)... 2001:4d0:241a:4081::89, 198.118.243.33
Connecting to urs.earthdata.nasa.gov (urs.earthdata.nasa.gov)|2001:4d0:241a:4081::89|:443... failed: Bad file descriptor.
Connecting to urs.earthdata.nasa.gov (urs.earthdata.nasa.gov)|198.118.243.33|:443... connected.
Unable to establish SSL connection.
What is the difference between the two HTTPS wget requests? One downloads fine while the other has a certificate issue, on the same machine.
-- not working
/home/bsoft>wget https://storage.googleapis.com/minikube/iso/minikube-v1.25.2.iso
--2022-04-29 01:16:42-- https://storage.googleapis.com/minikube/iso/minikube-v1.25.2.iso
Resolving storage.googleapis.com (storage.googleapis.com)... 216.58.221.48, 142.250.194.240, 142.250.206.112, ...
Connecting to storage.googleapis.com (storage.googleapis.com)|216.58.221.48|:443... connected.
ERROR: cannot verify storage.googleapis.com's certificate, issued by ‘emailAddress=certadmin@netskope.com,CN=ca.stlgs.goskope.com,OU=86e3620a322d5cba9f90e0eedfd92cdd,O=bsoft technology,L=Gurugram,ST=IN,C=IN’:
Self-signed certificate encountered.
To connect to storage.googleapis.com insecurely, use `--no-check-certificate'.
/home/bsoft>
-- working
/home/ravi>wget https://speed.hetzner.de/100MB.bin
--2022-04-29 09:57:31-- https://speed.hetzner.de/100MB.bin
Resolving speed.hetzner.de (speed.hetzner.de)... 88.198.248.254, 2a01:4f8:0:59ed::2
Connecting to speed.hetzner.de (speed.hetzner.de)|88.198.248.254|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104857600 (100M) [application/octet-stream]
Saving to: ‘100MB.bin.2’
100MB.bin.2 100%[=================================================================================================================================================================>] 100.00M 783KB/s in 98s
2022-04-29 09:59:10 (1.02 MB/s) - ‘100MB.bin.2’ saved [104857600/104857600]
/home/ravi>
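Judging from the issuer shown in the failing case (a Netskope ca.stlgs.goskope.com certificate), the storage.googleapis.com connection is being intercepted by a TLS-inspecting proxy whose root certificate is not in wget's trust store, while the Hetzner download is not intercepted. If that is the situation, one option besides --no-check-certificate is to point wget at the proxy's root CA explicitly (the path below is a placeholder for wherever that CA is exported):
wget --ca-certificate=/path/to/netskope-root-ca.pem https://storage.googleapis.com/minikube/iso/minikube-v1.25.2.iso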
I want to do a simple C++ web GET, similar to what this curl command does. I can use Asio from Boost; I must use Boost 1.49.
curl https://mysite.dev/api/v1/search?q=test -k --cert C:\work\testCert.pem
The server requires a client certificate.
I started by using this as an example http://www.boost.org/doc/libs/1_49_0/doc/html/boost_asio/example/ssl/client.cpp
and I modified it by adding calls on the context like:
ctx.set_options(boost::asio::ssl::context::default_workarounds);
ctx.use_certificate_file("C:\\work\\testCert.pem", boost::asio::ssl::context_base::pem);
ctx.use_private_key_file("C:\\work\\testKey.pem", boost::asio::ssl::context_base::pem);
My Request Looks like this:
GET /api/v1/search?q=test HTTP/1.0
Host: mysite.dev
Accept: */*
but I keep getting messages like this
Error: sslv3 alert handshake failure
Clearly there is a step I am missing in the handshake process.
The solution was to disable SSLv3 support; apparently most servers disable it because of design flaws.
ctx.set_options(boost::asio::ssl::context::default_workarounds |
boost::asio::ssl::context::no_sslv2 |
boost::asio::ssl::context::no_sslv3);
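For context, here is a minimal sketch of how those pieces fit together against the Boost 1.49-era synchronous API. It is not the asker's actual program; the host, port and file paths are the placeholders from the question:
#include <cstddef>
#include <iostream>
#include <string>
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>

int main()
{
    try
    {
        boost::asio::io_service io_service;

        // sslv23 negotiates the best protocol both sides support; then we
        // explicitly forbid SSLv2/SSLv3, which is what fixed the
        // "sslv3 alert handshake failure".
        boost::asio::ssl::context ctx(boost::asio::ssl::context::sslv23);
        ctx.set_options(boost::asio::ssl::context::default_workarounds |
                        boost::asio::ssl::context::no_sslv2 |
                        boost::asio::ssl::context::no_sslv3);

        // Client certificate and key that the server requires.
        ctx.use_certificate_file("C:\\work\\testCert.pem", boost::asio::ssl::context_base::pem);
        ctx.use_private_key_file("C:\\work\\testKey.pem", boost::asio::ssl::context_base::pem);

        // Resolve, connect, and run the TLS handshake as a client.
        boost::asio::ip::tcp::resolver resolver(io_service);
        boost::asio::ip::tcp::resolver::query query("mysite.dev", "443");
        boost::asio::ip::tcp::resolver::iterator endpoints = resolver.resolve(query);

        boost::asio::ssl::stream<boost::asio::ip::tcp::socket> stream(io_service, ctx);
        boost::asio::connect(stream.lowest_layer(), endpoints);
        stream.handshake(boost::asio::ssl::stream_base::client);

        // Same request as in the question; HTTP/1.0 so the server closes
        // the connection when the response is complete.
        std::string request =
            "GET /api/v1/search?q=test HTTP/1.0\r\n"
            "Host: mysite.dev\r\n"
            "Accept: */*\r\n"
            "\r\n";
        boost::asio::write(stream, boost::asio::buffer(request));

        // Dump the raw response until the peer closes the connection.
        boost::system::error_code ec;
        char buf[4096];
        for (;;)
        {
            std::size_t n = stream.read_some(boost::asio::buffer(buf), ec);
            if (ec) break;   // boost::asio::error::eof when the server is done
            std::cout.write(buf, n);
        }
    }
    catch (std::exception& e)
    {
        std::cerr << "Error: " << e.what() << "\n";
        return 1;
    }
    return 0;
}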
I'm totally new to the Apache httpd stuff.
I set up my host ServerHost1 as a file server with httpd:
# httpd -v
Server version: Apache/2.4.6 (Red Hat Enterprise Linux)
Server built: Dec 2 2014 08:09:42
I have put the file TestFile.txt under /var/www/html/TestDir/TestFile.txt
I modified part of httpd.conf as follows:
<Directory />
Order deny,allow
Allow from all
</Directory>
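(Side note: since this is Apache 2.4, the Order/Allow form above relies on mod_access_compat being loaded; the native 2.4 way to express the same "allow everyone" rule would be something like the following, though the 2.2-style directives also work when the compat module is present.)
<Directory />
    Require all granted
</Directory>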
On a test host TestHost1 with full Internet access, I can download my file with wget:
TestHost1]# wget http://ServerHost1/TestDir/TestFile.txt
--2016-03-17 13:39:12-- http://ServerHost1/TestDir/TestFile.txt
Resolving ServerHost1 (ServerHost1)... <IP address>
Connecting to ServerHost1 (ServerHost1)|<IP address>|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2859976598 (2.7G) [application/octet-stream]
Saving to: ‘TestFile.txt’
2% [> ] 60,645,376 24.0MB/s
On TestHost2, a host sitting on a semi-isolated network, I have to use a proxy for wget to work. It works fine with Google:
TestHost2]# wget google.ca
--2016-03-17 13:53:26-- http://google.ca/
Resolving proxy.com (proxy.com)... <ProxyIP>
Connecting to proxy.com (proxy.com)|<ProxyIP>|:3128... connected.
Proxy request sent, awaiting response... 301 Moved Permanently
Location: http://www.google.ca/ [following]
--2016-03-17 13:53:26-- http://www.google.ca/
Reusing existing connection to proxy.com:3128.
Proxy request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html’
[ <=> ] 19,928 --.-K/s in 0.1s
2016-03-17 13:53:27 (159 KB/s) - ‘index.html’ saved [19928]
However, when I try to get my file from ServerHost1, I get ERROR 503: Service Unavailable:
TestHost2]# wget http://ServerHost1/TestDir/TestFile.txt
--2016-03-17 13:57:13-- http://ServerHost1/TestDir/TestFile.txt
Resolving proxy.com (proxy.com)...<ProxyIP>
Connecting to proxy.com (proxy.com)|<ProxyIP>|:3128... connected.
Proxy request sent, awaiting response... 503 Service Unavailable
2016-03-17 13:57:13 ERROR 503: Service Unavailable.
So the questions are:
(1) Why am I seeing 503 Service Unavailable when the file is apparently available (since I can download it from TestHost1)?
(2) How do I configure my httpd.conf file so that TestHost2 can wget the file from ServerHost1?
Maybe try with ProxyRequests, as described in the Apache docs: https://httpd.apache.org/docs/2.4/mod/mod_proxy.html
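A minimal sketch of that, adapted from the mod_proxy documentation (the Require line is a placeholder for whichever hosts should be allowed to use the forward proxy):
ProxyRequests On
ProxyVia On

<Proxy "*">
    Require host internal.example.com
</Proxy>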
Currently we have an Apache 2.2.3 server with mod_ssl 2.2.3 running Django, with users authenticating by using an x509 certificate.
So far the system is running perfectly except for a single user, who when trying to upload a file receives 400 Bad Request error, and the contents of the ssl_error_log regarding this operation are:
[<date>] [error] [client <client ip>] request failed: error reading the headers, referer: <referrer url>
The contents of the ssl_access_log are:
<client ip> - - [<date>] "POST <target page> HTTP/1.1" 400 321
Also, the user's browser is Firefox as far as I know.
I am completely unable to reproduce this bug and so far none of the other users have experienced it. Could you point out some reasons for this to happen?
I've experienced connectivity that stops the upstream after some amount X of bytes is sent. X was a pretty low value: enough to request some simple pages, but not to deal with AJAX requests, much less upload files. As far as I recall, this connectivity problem occurred only when tethering (from a specific Android phone; I didn't test other phones).
So if the upstream gets interrupted and the upload stalls, it makes sense that Apache would return this error, according to this post: "Apache waits a time equal to the Timeout directive (defaults to 5 minutes if not defined) for a response from the client. It is likely Apache is waiting for the CRLF that indicates the end of the headers, yet it is never received."
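For reference, that quote is about the core Timeout directive in httpd.conf; the quoted 5-minute default corresponds to:
Timeout 300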