CouchDB Permissions over HTTPS - ssl

UPDATE / SUMMARY:
I've written a blog article about the process I went through; my config file has changed slightly from the one below:
https://medium.com/@silverbackdan/installing-couchdb-2-0-nosql-with-centos-7-and-certbot-lets-encrypt-f412198c3051#.216m9mk1m
Main issues with HTTPS:
If running both HTTP and HTTPS, shard databases appear over HTTPS
Fauxton features are lacking over HTTPS (admin user management, config management, setup wizard, Mango indexing/querying)
I'm not sure whether they should be, but the databases visible over HTTP and over HTTPS are not the same
I hope I'm just missing something really obvious
ORIGINAL POST:
I'm trying to configure HTTPS (SSL) with CouchDB 2.0. I'm compiling a guide for others to follow as well, but I've come across some issues.
Over HTTPS I don't seem to have the same permissions as when I enable HTTP and use that instead. In Fauxton over HTTP I can see the configuration and run the setup procedure. Over HTTPS I get errors saying a database cannot be created (Fauxton tries to create them automatically) because its name starts with an underscore. Most databases get set up, but a few, such as "_cluster_setup", show errors when I visit the Configuration page.
Additionally, I get a repeating error message (it doesn't stop CouchDB) saying the database "_users" does not exist (database_does_not_exist). It doesn't exist when I enable HTTP and connect over that, but it does exist when I connect over HTTPS. If I enable both HTTP and HTTPS, then over the HTTPS connection I end up with a lot of shard databases (I'm new to NoSQL and CouchDB, so I'm not sure what those are about, but they appear alongside errors like the above about creating databases that start with underscores). Either way, I see those shard databases when logged in via HTTPS but not via HTTP (Fauxton shows them as "unable to load", and for now I'm just deleting them from the data directory).
There are also issues accessing Fauxton over HTTPS in Chrome, but I think that's a known bug, and it's fine to use Firefox or Safari for the moment.
Can anybody tell me whether there are any settings that would give a connection over port 6984 (HTTPS) the same administrative rights as port 5984 (HTTP)? Or what permissions issue might cause the HTTPS connection to raise these errors about underscores at the beginning of database names? I think answering that would basically resolve my main issues.
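For what it's worth, the quickest way I know to compare the two listeners side by side is something like this (a sketch; the credentials and host are placeholders, and verify=False is only there to tolerate a test certificate):

import requests

# Compare what the HTTP (5984) and HTTPS (6984) listeners actually serve.
# Differing database lists (e.g. shard databases only over HTTPS) would
# suggest the two ports are backed by different interfaces rather than by
# different permissions.
for base in ("http://127.0.0.1:5984", "https://127.0.0.1:6984"):
    r = requests.get(base + "/_all_dbs", auth=("admin", "password"), verify=False)
    print(base, r.status_code, r.json())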
Here's my local.ini file, which may be of some use (I have also commented out ";httpd={couch_httpd, start_link, []}" in default.ini, as described here: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=48203146):
; CouchDB Configuration Settings
; Custom settings should be made in this file. They will override settings
; in default.ini, but unlike changes made to default.ini, this file won't be
; overwritten on server upgrade.
[couchdb]
;max_document_size = 4294967296 ; bytes
;os_process_timeout = 5000
uuid = **REMOVED**
[couch_peruser]
; If enabled, couch_peruser ensures that a private per-user database
; exists for each document in _users. These databases are writable only
; by the corresponding user. Databases are in the following form:
; userdb-{hex encoded username}
;enable = true
; If set to true and a user is deleted, the respective database gets
; deleted as well.
;delete_dbs = true
[chttpd]
;port = 5984
;bind_address = 0.0.0.0
; Options for the MochiWeb HTTP server.
;server_options = [{backlog, 128}, {acceptor_pool_size, 16}]
; For more socket options, consult Erlang's module 'inet' man page.
;socket_options = [{recbuf, 262144}, {sndbuf, 262144}, {nodelay, true}]
[httpd]
; NOTE that this only configures the "backend" node-local port, not the
; "frontend" clustered port. You probably don't want to change anything in
; this section.
; Uncomment next line to trigger basic-auth popup on unauthorized requests.
WWW-Authenticate = Basic realm="administrator"
bind_address = 0.0.0.0
; Uncomment next line to set the configuration modification whitelist. Only
; whitelisted values may be changed via the /_config URLs. To allow the admin
; to change this value over HTTP, remember to include {httpd,config_whitelist}
; itself. Excluding it from the list would require editing this file to update
; the whitelist.
config_whitelist = [{httpd,config_whitelist}, {log,level}, {etc,etc}]
[query_servers]
;nodejs = /usr/local/bin/couchjs-node /path/to/couchdb/share/server/main.js
[httpd_global_handlers]
;_google = {couch_httpd_proxy, handle_proxy_req, <<"http://www.google.com">>}
[couch_httpd_auth]
; If you set this to true, you should also uncomment the WWW-Authenticate line
; above. If you don't configure a WWW-Authenticate header, CouchDB will send
; Basic realm="server" in order to prevent you getting logged out.
require_valid_user = true
secret = **REMOVED**
[os_daemons]
; For any commands listed here, CouchDB will attempt to ensure that
; the process remains alive. Daemons should monitor their environment
; to know when to exit. This can most easily be accomplished by exiting
; when stdin is closed.
;foo = /path/to/command -with args
[daemons]
; enable SSL support by uncommenting the following line and supply the PEM's below.
; the default ssl port CouchDB listens on is 6984
httpsd = {couch_httpd, start_link, [https]}
[ssl]
cert_file = /home/couchdb/couchdb/certs/cert.pem
key_file = /home/couchdb/couchdb/certs/privkey.pem
;password = somepassword
; set to true to validate peer certificates
;verify_ssl_certificates = false
; Set to true to fail if the client does not send a certificate. Only used if verify_ssl_certificates is true.
;fail_if_no_peer_cert = false
; Path to file containing PEM encoded CA certificates (trusted
; certificates used for verifying a peer certificate). May be omitted if
; you do not want to verify the peer.
cacert_file = /home/couchdb/couchdb/certs/chain.pem
; The verification fun (optional) if not specified, the default
; verification fun will be used.
;verify_fun = {Module, VerifyFun}
; maximum peer certificate depth
ssl_certificate_max_depth = 1
;
; Reject renegotiations that do not live up to RFC 5746.
secure_renegotiate = true
; The cipher suites that should be supported.
; Can be specified in erlang format "{ecdhe_ecdsa,aes_128_cbc,sha256}"
; or in OpenSSL format "ECDHE-ECDSA-AES128-SHA256".
;ciphers = ["ECDHE-ECDSA-AES128-SHA256", "ECDHE-ECDSA-AES128-SHA"]
ciphers = undefined
; The SSL/TLS versions to support
tls_versions = [tlsv1, 'tlsv1.1', 'tlsv1.2']
; To enable Virtual Hosts in CouchDB, add a vhost = path directive. All requests to
; the Virtual Host will be redirected to the path. In the example below all requests
; to http://example.com/ are redirected to /database.
; If you run CouchDB on a specific port, include the port number in the vhost:
; example.com:5984 = /database
[vhosts]
REMOVEDDOMAIN.COM:* = ./database
[update_notification]
;unique notifier name=/full/path/to/exe -with "cmd line arg"
; To create an admin account uncomment the '[admins]' section below and add a
; line in the format 'username = password'. When you next start CouchDB, it
; will change the password to a hash (so that your passwords don't linger
; around in plain-text files). You can add more admin accounts with more
; 'username = password' lines. Don't forget to restart CouchDB after
; changing this.
[admins]
;admin = mysecretpassword
**REMOVED** = **REMOVED**
[cors]
origins = *
credentials = true
headers = accept, authorization, content-type, origin, referer
methods = GET, PUT, POST, HEAD, DELETE

I've been in touch with the CouchDB team via chat. CouchDB has been well tested behind haproxy, so I've been advised simply to use haproxy instead, as Erlang can be very difficult to configure for SSL. I'll update the article I've written with complete instructions using haproxy once I've got everything working.
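For reference, a minimal haproxy TLS-termination sketch along those lines (untested; the names and paths are assumptions, and haproxy expects the certificate, chain and private key concatenated into a single PEM file):

frontend couchdb-https
    bind *:6984 ssl crt /etc/haproxy/certs/couchdb.pem
    mode http
    default_backend couchdb-http

backend couchdb-http
    mode http
    # Forward decrypted traffic to the clustered (chttpd) port so HTTPS
    # clients see exactly what HTTP clients see.
    server couchdb1 127.0.0.1:5984 check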

Related

Mlflow authorization with spnego

I saw this topic about Kerberos authentication: https://github.com/mlflow/mlflow/issues/2678. It was in 2020. Our team is trying to authenticate with Kerberos via SPNEGO. We set up SPNEGO on an nginx server and that part is fine: we get a 200 when we curl the mlflow HTTP URI. But we can't do it with an mlflow environment variable.
The question is: does mlflow have some feature for SPNEGO authentication, or does it only have these environment variables and methods for authentication:
MLFLOW_TRACKING_USERNAME and MLFLOW_TRACKING_PASSWORD - username and password to use with HTTP Basic authentication. To use Basic authentication, you must set both environment variables.
MLFLOW_TRACKING_TOKEN - token to use with HTTP Bearer authentication. Basic authentication takes precedence if set.
MLFLOW_TRACKING_INSECURE_TLS - if set to the literal true, MLflow does not verify the TLS connection, meaning it does not validate certificates or hostnames for https:// tracking URIs. This flag is not recommended for production environments. If this is set to true, then MLFLOW_TRACKING_SERVER_CERT_PATH must not be set.
MLFLOW_TRACKING_SERVER_CERT_PATH - path to a CA bundle to use. Sets the verify parameter of the requests.request function (see https://requests.readthedocs.io/en/master/api/). When you use a self-signed server certificate, you can use this to verify it on the client side. If this is set, MLFLOW_TRACKING_INSECURE_TLS must not be set (false).
MLFLOW_TRACKING_CLIENT_CERT_PATH - path to an SSL client certificate file (.pem). Sets the cert parameter of the requests.request function (see https://requests.readthedocs.io/en/master/api/). This can be used to use a (self-signed) client certificate.
I looked at the source code. No, the mlflow.utils.rest_utils.http_request function doesn't support SPNEGO in any way – it can only send HTTP 'Basic' or 'Bearer' authorization headers.
However, it should be relatively easy to change it to generate a 'Negotiate' header using pyspnego, or even to use requests-gssapi given that it already uses Requests internally:
# For Linux:
import requests_gssapi
# For Windows:
#import requests_negotiate_sspi

def http_request(...):
    ...
    if not auth_str:
        # For Linux:
        kwargs["auth"] = requests_gssapi.HTTPSPNEGOAuth()
        # For Windows:
        #kwargs["auth"] = requests_negotiate_sspi.HttpNegotiateAuth()
    ...
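For a quick end-to-end test outside of mlflow, the same auth object can be handed straight to Requests (a sketch; the URL is a placeholder for your nginx-fronted tracking server, and it assumes a valid Kerberos ticket, e.g. from kinit):

import requests
from requests_gssapi import HTTPSPNEGOAuth

# requests-gssapi answers the server's 401 Negotiate challenge with a
# SPNEGO token, i.e. the 'Negotiate' Authorization header discussed above.
response = requests.get("https://mlflow.example.com/", auth=HTTPSPNEGOAuth())
print(response.status_code)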

Python's default SSL certificate context not working in requests method when behind proxy, works fine otherwise

I have the function below in my code, which works perfectly fine when I'm not behind any proxy. In fact, it works fine even without mentioning certifi's default CA certificate, if I pass verify=True instead; I guess because that works the same way.
import certifi
import json
import requests

def reverse_lookup(lat, long):
    cafile = certifi.where()
    params = {'lat': float(lat), 'lon': float(long), 'format': 'json',
              'accept-language': 'en', 'addressdetails': 1}
    response = requests.get("https://nominatim.openstreetmap.org/reverse",
                            params=params, verify=cafile)
    # response = requests.get("https://nominatim.openstreetmap.org/reverse",
    #                         params=params, verify=True)  <-- this works as well
    result = json.loads(response.text)
    return result['address']['country'], result['address']['state'], result['address']['city']
When I run the same code from within my enterprise infrastructure (where I'm behind a proxy), I make some minor changes, mentioning the proxy as a parameter in the requests method:
def reverse_lookup(lat, long):
    cafile = certifi.where()
    proxies = {"https": "https://myproxy.com"}
    params = {'lat': float(lat), 'lon': float(long), 'format': 'json',
              'accept-language': 'en', 'addressdetails': 1}
    response = requests.get("https://nominatim.openstreetmap.org/reverse",
                            params=params, verify=cafile, proxies=proxies)
    result = json.loads(response.text)
    return result['address']['country'], result['address']['state'], result['address']['city']
But it gives me one of these 3 SSL errors at different times if I set verify=True or verify=certifi.where():
CERTIFICATE_VERIFY_FAILED
UNKNOWN_PROTOCOL
WRONG_VERSION_NUMBER
The only time it works is when I completely bypass SSL verification with verify=False.
My questions are:
Since I'm sending the https request via proxy, is it ok if I bypass SSL verification ?
How to make the default context of SSL verification work in this case, when I'm behind proxy ?
Any help is appreciated. Code tested in both Python 2.7.15 and 3.9.
Since I'm sending the https request via proxy, is it ok if I bypass SSL verification ?
Do you need the protection offered by HTTPS, i.e. encryption of the application data (like passwords, but also the full URL) to protect against sniffing or modifications by a malicious man in the middle? If you don't need the protection, then you can bypass certificate validation.
How to make the default context of SSL verification work in this case, when I'm behind proxy ?
The proxy is doing SSL interception, and when doing so it issues a new certificate for this site based on an internal CA. If this is expected (i.e. not an attack), then you need to import the proxy's CA as trusted, with verify='proxy-ca.pem'. Your IT department should be able to provide you with the proxy CA.
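In Requests terms, that means pointing verify at the proxy's CA bundle instead of certifi's (a sketch; the file name is a placeholder):

# Trust the intercepting proxy's CA when verifying the server certificate.
response = requests.get("https://nominatim.openstreetmap.org/reverse",
                        params=params, proxies=proxies, verify="proxy-ca.pem")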
But it gives me one of these 3 SSL errors at different times if I set verify=True or verify=certifi.where():
CERTIFICATE_VERIFY_FAILED
UNKNOWN_PROTOCOL
WRONG_VERSION_NUMBER
It should only give you CERTIFICATE_VERIFY_FAILED. The other two errors indicate wrong proxy settings, typically setting https_proxy to https://... instead of http://... (which can also be seen in your code).
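Concretely, the client-side fix is usually just the proxy URL scheme (a sketch; the proxy host and port are placeholders):

# The "https" key selects which requests are proxied; the value describes
# how to reach the proxy itself, which is normally plain HTTP (requests
# then opens a CONNECT tunnel through it for https:// URLs).
proxies = {"https": "http://myproxy.com:8080"}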

HaProxy Transparent Proxy To AWS S3 Static Website Page

I am using haproxy to balance a cluster of servers. I am attempting to add a maintenance page to the haproxy configuration. I believe I can do this by defining a server declaration in the backend with the 'backup' modifier. My question is: how can I use a maintenance page hosted remotely on an AWS S3 bucket (static website) without actually redirecting the user to that page (i.e. without the haproxy 'redir' server definition)?
If I have servers a, b, and c, and they all go down for maintenance, then I want all requests to be resolved by server definition d (labeled with 'backup'), which points to a static address on S3. Note that I don't want paths to carry over and be evaluated on S3; it should always render the static maintenance page.
This is definitely possible.
First, declare a backup server, which will only be used if the non-backup servers are down.
server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup
The following configuration entries are used to modify the request or the response only if we're using the alternate path. We're using two tests in the following examples:
# { nbsrv le 1 } -- if the number of servers in this backend is <= 1
# (and)
# { srv_is_up(s3-fallback) } -- if the server named "s3-fallback" is up; "server name" is the arbitrary name we gave the server in the config file
# (which would mean it's the "1" server that is up for this backend)
So, now that we have a backup back-end, we need a couple of other directives.
Force the path to / regardless of the request path.
http-request set-path / if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If you're using an essentially empty bucket with an error document, then this isn't really needed, since any request path would generate the same error.
Next, we need to set the Host: header in the outgoing request to match the name of the bucket. This isn't technically needed if the bucket is named the same as the Host: header that's already present in the request we received from the browser, but probably still a good idea. If the bucket name is different, it needs to go here.
http-request set-header host example.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If the bucket name is not a valid DNS name, then you should include the entire web site endpoint here. For a bucket called "example" --
http-request set-header host example.s3-website-us-east-1.amazonaws.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
If your clients are sending you their cookies, there's no need to relay these to S3. If the clients are on HTTPS and the S3 connection is HTTP, you definitely want to strip these.
http-request del-header cookie if { nbsrv le 1 } { srv_is_up(s3-fallback) }
Now, handling the response...
You probably don't want browsers to cache the responses from this alternate back-end.
http-response set-header cache-control no-cache if { nbsrv le 1 } { srv_is_up(s3-fallback) }
You also probably don't want to return "200 OK" for these responses, since technically, you are displaying an error page, and you don't want search engines to try to index this stuff. Here, I've chosen "503 Service Unavailable" but any valid response code would work... 500 or 502, for example.
http-response set-status 503 if { nbsrv le 1 } { srv_is_up(s3-fallback) }
And, there you have it -- using an S3 bucket website endpoint as a backup backend, behaving no differently than any other backend. No browser redirect.
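Putting the pieces together, the whole backend might look something like this (a sketch assembled from the directives above; the backend name and application server addresses are made up):

backend web-app
    server a 10.0.0.11:80 check
    server b 10.0.0.12:80 check
    server c 10.0.0.13:80 check
    server s3-fallback example.com.s3-website-us-east-1.amazonaws.com:80 backup
    http-request set-path / if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-request set-header host example.com if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-request del-header cookie if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-response set-header cache-control no-cache if { nbsrv le 1 } { srv_is_up(s3-fallback) }
    http-response set-status 503 if { nbsrv le 1 } { srv_is_up(s3-fallback) }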
You could also configure the request to S3 to use HTTPS, but since you're just fetching static content, that seems unnecessary. If the browser is connecting to the proxy with HTTPS, that section of the connection will still be secure, although you do need to scrub anything sensitive from the browser's request, since it will be forwarded to S3 unencrypted (see "cookie," above).
This solution is tested on HAProxy 1.6.4.
Note that by default, the DNS lookup for the S3 endpoint will only be done when HAProxy is restarted. If that IP address changes, HAProxy will not see the change, without additional configuration -- which is outside the scope of this question, but see the resolvers section of the configuration manual.
I do use S3 as a back-end server behind HAProxy in several different systems, and I find this to be an excellent solution to a number of different issues.
However, there is a simpler way to have a custom error page for use when all the backends are down, if that's what you want.
errorfile 503 /etc/haproxy/errors/503.http
This directive is usually found in the defaults section, but it's also valid in a backend -- so this raw file will be automatically returned by the proxy for any request that tries to use this back-end, if all of the servers in this back-end are unhealthy.
The file is a raw HTTP response. It's essentially just written out to the client as it exists on the disk, with zero processing, so you have to include the desired response headers, including Connection: close. Each line of the headers and the line after the headers must end with \r\n to be a valid HTTP response. You can also just copy one of the others, and modify it as needed.
These files are limited by the size of a response buffer, which I believe is tune.bufsize, which defaults to 16,384 bytes... so it's only really good for small files.
HTTP/1.0 503 Service Unavailable\r\n
Cache-Control: no-cache\r\n
Connection: close\r\n
Content-Type: text/plain\r\n
\r\n
This site is offline.
Finally, note that despite the fact that you want to "transparently proxy a request," I don't think "transparent proxy" is the correct phrase for what you're trying to do. A transparent proxy implies that the client or the server (or both) would see each other's IP addresses on the connection and think they were communicating directly, with no proxy in between, because of some skullduggery done by the proxy and/or network infrastructure to conceal the proxy's existence in the path. That is not what you're looking for.

How to make sure SSL is enabled properly on Active Directory server?

How to make sure SSL is enabled properly on Active Directory server?
On the server itself, if I run ldp, I think I can connect on port 636.
I see something like this in the output:
ld = ldap_sslinit("localhost", 636, 1);
Error <0x0> = ldap_set_option(hLdap, LDAP_OPT_PROTOCOL_VERSION, LDAP_VERSION3);
Error <0x0> = ldap_connect(hLdap, NULL);
Error <0x0> = ldap_get_option(hLdap,LDAP_OPT_SSL,(void*)&lv);
Host supports SSL, SSL cipher strength = 128 bits
Established connection to localhost.
Retrieving base DSA information...
Result <0>: (null)
Matched DNs:
Getting 1 entries:
>> Dn:
**** and 10-12 more lines ****
Does this mean SSL is enabled properly?
What about the errors on lines 2-4?
Thanks.
Yes, SSL was enabled. The "Error <0x0>" entries on those lines are return codes, and 0x0 means success, so they are not actual errors.
The URLs I provided in the comments have more details.
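If you want a client-side cross-check, Python's standard library can attempt a TLS handshake against port 636 (a sketch; "dc.example.com" is a placeholder for your domain controller, and an internal CA may need to be loaded for verification to pass):

import socket
import ssl

context = ssl.create_default_context()
# If the DC's certificate comes from an internal CA, trust it explicitly:
# context.load_verify_locations("internal-ca.pem")
with context.wrap_socket(socket.socket(socket.AF_INET),
                         server_hostname="dc.example.com") as sock:
    sock.connect(("dc.example.com", 636))
    print(sock.version())      # negotiated TLS version
    print(sock.getpeercert())  # the DC's certificate details

A successful handshake confirms the listener is serving a certificate over TLS, which is what ldp's "Host supports SSL" line reports.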

SSLCaertBadFile error heroku curb

I have a rake task that pulls and parses JSON data over an SSL connection from an external API.
I use a gem that wraps this external API and have no problems running locally, but the task fails when run on heroku with #<Curl::Err::SSLCaertBadFile: Curl::Err::SSLCaertBadFile>
I installed the piggyback SSL add-on, hoping it might fix it, but no dice.
Any ideas?
UPDATE
I managed to fix it by disabling SSL verification on the curl request, which had previously been enabled via the following two fields:
request.ssl_verify_peer
request.ssl_verify_host
I don't know enough about SSL to know exactly why the error was caused by these settings in a heroku environment or what the implications of disabling this are, aside from reduced security.
It is a bad idea to disable certificate checking. See http://www.rubyinside.com/how-to-cure-nethttps-risky-default-https-behavior-4010.html, http://jamesgolick.com/2011/2/15/verify-none..html and associated references for more on that topic.
The issue is that your HTTP client doesn't know where to find the CA certificates bundle on heroku.
You don't mention what client you are using, but here is an example for using net/https on heroku:
require "net/https"
require "uri"
root_ca_path = "/etc/ssl/certs"
url = URI.parse "https://example.com"
http = Net::HTTP.new(url.host, url.port)
http.use_ssl = (url.scheme == "https")
if (File.directory?(root_ca_path) && http.use_ssl?)
http.ca_path = root_ca_path
http.verify_mode = OpenSSL::SSL::VERIFY_PEER
http.verify_depth = 5
end
request = Net::HTTP::Get.new(url.path)
response = http.request(request)
Here is an example using Faraday:
Faraday.new "https://example.com", ssl: { ca_path: "/etc/ssl/certs" }
Good luck.