Trying to make Haskell warp-tls authenticate the client

I'm trying to make an HTTPS server that authenticates the client, but it serves curl --verbose --cacert ca.crt https://localhost:3443 successfully. I think it should reject that request for lack of client credentials and only accept curl --verbose --cert client.crt --key client.key --cacert ca.crt https://localhost:3443. Here's the code:
{-# LANGUAGE DataKinds, ScopedTypeVariables, TypeOperators #-}
module MockServerMain where

import Network.Wai
import Network.Wai.Handler.Warp
import Network.Wai.Handler.WarpTLS
import Servant
import Network.TLS

type MyApi = Get '[JSON] String

api :: Proxy MyApi
api = Proxy

server :: Server MyApi
server = return "Hello from Haskell!"

app :: Application
app = serve api server

main :: IO ()
main =
  let
    stngs = mkSettings "localhost.crt" "ca.crt" "localhost.key"
    warpOpts = setPort 3443 defaultSettings
  in
    runTLS stngs warpOpts app
mkSettings :: FilePath -> FilePath -> FilePath -> TLSSettings
mkSettings crtFile chainFile keyFile =
  let
    hooks = def
      { onClientCertificate = \_ -> return CertificateUsageAccept
      }
    tlsSs = (tlsSettingsChain crtFile [chainFile] keyFile)
      { tlsServerHooks = hooks
      , tlsWantClientCert = True
      }
  in tlsSs
I suppose the obvious explanation is that I told it to always accept in the onClientCertificate hook, but the docs say I'm not expected to validate there. I know that tlsWantClientCert is having an effect, because if I comment out the hooks it fails. When I used tlsSettings rather than tlsSettingsChain, the behaviour was the same: the credential-less curl was accepted. I know the certificates are OK because I can make nginx do what this Haskell server is supposed to do and allow curl requests only when credentials are supplied.
What am I supposed to do with the CA certificate that I want this server to check clients against? This server isn't supposed to know the CA private key.
I think this is the root of the problem: where is the constructor that takes ServerParams?
data TLSSettings
  = TLSSettingsSimple
      { settingDisableCertificateValidation :: Bool
      , settingDisableSession :: Bool
      , settingUseServerName :: Bool
      }
  | TLSSettings TLS.ClientParams
  deriving (Show)
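For reference, the kind of hook I imagine I need would validate the presented chain against ca.crt instead of accepting everything. Here is an untested sketch using the x509-store and x509-validation packages (mkHooks is a name I made up; building the certificate store is an IO action, so mkSettings would have to move into IO as well):
-- Untested sketch: build a certificate store from ca.crt and validate the
-- client's chain against it; an empty or untrusted chain is rejected.
import Data.Default.Class (def)
import Data.X509 (HashALG (HashSHA256))
import Data.X509.CertificateStore (readCertificateStore)
import Data.X509.Validation (ValidationChecks (..), defaultChecks, defaultHooks,
                             exceptionValidationCache, validate)

mkHooks :: FilePath -> IO ServerHooks
mkHooks caFile = do
  mStore <- readCertificateStore caFile
  store <- maybe (fail ("cannot read CA store: " ++ caFile)) return mStore
  return def
    { onClientCertificate = \chain -> do
        -- hostname checks make no sense for a client certificate, so disable checkFQHN
        reasons <- validate HashSHA256 defaultHooks
                            defaultChecks { checkFQHN = False }
                            store (exceptionValidationCache [])
                            ("", mempty) chain
        return $ case reasons of
          [] -> CertificateUsageAccept
          rs -> CertificateUsageReject (CertificateRejectOther (show rs))
    }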

Related

converting .pem keys to .der compiles but results in errors

Calling my hyper-based API, now ported to HTTPS, with Python's requests library, I'm getting
SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)'))
on every request.
As per the docs for tokio_rustls:
Certificate has to be DER-encoded X.509
PrivateKey has to be DER-encoded ASN.1 in either PKCS#8 or PKCS#1 format.
The values in my PKEY and CERT variables are my certbot-generated .pem files converted to .der format using these commands:
openssl x509 -outform der -in /etc/letsencrypt/live/mysite.com/cert.pem -out /etc/letsencrypt/live/mysite.com/cert.der
openssl pkcs8 -topk8 -inform PEM -outform DER -in /etc/letsencrypt/live/mysite.com/privkey.pem -out /etc/letsencrypt/live/mysite.com/privkey.der -nocrypt
They are loaded with the include_bytes!() macro.
Well, it compiles, polls... and just throws this error on every request: Bad connection: cannot decrypt peer's message, while the caller gets the SSLError mentioned at the beginning.
Here is the code used for the API:
fn tls_acceptor_impl(cert_der: &[u8], key_der: &[u8]) -> tokio_rustls::TlsAcceptor {
    let key = PrivateKey(cert_der.into());
    let cert = Certificate(key_der.into());
    Arc::new(
        ServerConfig::builder()
            .with_safe_defaults()
            .with_no_client_auth()
            .with_single_cert(vec![cert], key)
            .unwrap(),
    )
    .into()
}

fn tls_acceptor() -> tokio_rustls::TlsAcceptor {
    tls_acceptor_impl(PKEY, CERT)
}
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let addr = SocketAddr::from(...);
    let mut listener = tls_listener::builder(tls_acceptor())
        .max_handshakes(10)
        .listen(AddrIncoming::bind(&addr).unwrap());
    let (tx, mut rx) = mpsc::channel::<tokio_rustls::TlsAcceptor>(1);
    let http = Http::new();
    loop {
        tokio::select! {
            conn = listener.accept() => {
                match conn.expect("Tls listener stream should be infinite") {
                    Ok(conn) => {
                        let http = http.clone();
                        // let tx = tx.clone();
                        // let counter = counter.clone();
                        tokio::spawn(async move {
                            // let svc = service_fn(move |request| handle_request(tx.clone(), counter.clone(), request));
                            if let Err(err) = http.serve_connection(conn, service_fn(my_query_handler)).await {
                                eprintln!("Application error: {}", err);
                            }
                        });
                    },
                    Err(e) => {
                        eprintln!("Bad connection: {}", e);
                    }
                }
            },
            message = rx.recv() => {
                // Certificate is loaded on another task; we don't want to block the listener loop
                let acceptor = message.expect("Channel should not be closed");
            }
        }
    }
}
How can I make sense of these errors, when the certificate and key work on the web (they are the same ones the apache2 server uses)? I've tried various other encodings that go against the docs, and they all fail in the same way.
I'm not familiar enough with Rust, but I know that proper configuration of a TLS server (no matter which language) requires the server certificate, the key matching that certificate, and all intermediate CAs needed to build the trust chain from the server certificate to a root CA on the client's machine. These intermediate CAs are not provided in your code. That's why the Python code fails to validate the certificate.
What you need is probably something like this:
ServerConfig::builder()
    ....
    .with_single_cert(vec![cert, intermediate_cert], key)
Where intermediate_cert is the internal representation of the Let’s Encrypt R3 CA, which you can find here.
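In terms of the snippet above, that could look roughly like this (untested sketch; chain_der is an assumed extra argument holding, e.g., certbot's chain.pem converted to DER the same way as cert.pem):
// Sketch: pass the leaf certificate followed by the intermediate so clients
// can build the full trust chain; argument names beyond the original snippet
// are assumptions.
fn tls_acceptor_impl(cert_der: &[u8], chain_der: &[u8], key_der: &[u8]) -> tokio_rustls::TlsAcceptor {
    let key = PrivateKey(key_der.into());
    let certs = vec![Certificate(cert_der.into()), Certificate(chain_der.into())];
    Arc::new(
        ServerConfig::builder()
            .with_safe_defaults()
            .with_no_client_auth()
            .with_single_cert(certs, key)
            .unwrap(),
    )
    .into()
}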

Make curl call to grpc service

I'm trying to learn gRPC using Kotlin by making a simple gRPC service with the following proto definition:
syntax = "proto3";
option java_multiple_files = true;
option java_package = "br.bortoti";
option java_outer_classname = "StockProto";
option objc_class_prefix = "HLW";
package br.bortoti;
import "google/api/annotations.proto";
service StockService {
rpc GetStock (GetStockRequest) returns (Stock) {
option(google.api.http) = {
get: "V1/stocks/{stock}"
body: "*"
};
}
}
message Stock {
string ticker = 1;
}
message GetStockRequest {
string ticker = 1;
}
message GetStockReply {
string ticker = 1;
}
So I'm basically mapping a service to a GET request. But when I try to call this URL from curl, like:
curl http://localhost:8088/V1/stocks/1
I get the error:
curl: (1) Received HTTP/0.9 when not allowed
and on the server side I have:
INFO: Transport failed
io.netty.handler.codec.http2.Http2Exception: Unexpected HTTP/1.x request: GET /V1/stocks/1
How can I make the server accept HTTP/1.1 calls? Is it even possible?
This is really two questions.
gRPC uses HTTP/2 and needs a lot of headers, so it is difficult to issue requests with plain curl. You probably want grpcurl instead.
Also, the path V1/stocks/{stock} needs grpc-gateway to work; see the grpc-gateway documentation for more detail.
Since you are learning how to use gRPC, you may find this project a useful reference: helloworlde/grpc-java-sample (it's in Chinese, feel free to translate).
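For example, a grpcurl call against the plain gRPC endpoint could look something like this (assuming the service listens on 8088 without TLS and the proto file is available locally as stock.proto; you may also need -import-path entries for the google/api imports):
grpcurl -plaintext -proto stock.proto -d '{"ticker": "1"}' localhost:8088 br.bortoti.StockService/GetStock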

How to add basic auth to Scotty middleware?

I'm currently making a Scotty API and I couldn't find any examples of basicAuth implementations (Wai Middleware HttpAuth).
Specifically, I want to add basic auth headers (user, pass) to SOME of my endpoints (namely, ones that start with "admin"). I have everything set up, but I can't seem to make the differentiation as to which endpoints require auth and which ones don't. I know I need to use something like this, but it uses Yesod, and I wasn't able to translate it to Scotty.
So far, I have this:
routes :: (App r m) => ScottyT LText m ()
routes = do
  -- middlewares
  middleware $ cors $ const $ Just simpleCorsResourcePolicy
    { corsRequestHeaders = ["Authorization", "Content-Type"]
    , corsMethods = "PUT":"DELETE":simpleMethods
    }
  middleware $ basicAuth
    (\u p -> return $ u == "username" && p == "password")
    "My Realm"
  -- errors
  defaultHandler $ \str -> do
    status status500
    json str
  -- feature routes
  ItemController.routes
  ItemController.adminRoutes
  -- health
  get "/api/health" $
    json True
But it adds authentication to all my requests. I only need it in some of them.
Thank you so much!
You can use the authIsProtected field of the AuthSettings to define a function Request -> IO Bool that determines if a particular (Wai) Request is subject to authorization by basic authentication. In particular, you can inspect the URL path components and make a determination that way.
Unfortunately, this means that the check for authorization is completely separated from the Scotty routing. This works fine in your case but can make fine-grained control of authorization by Scotty route difficult.
Anyway, the AuthSettings are the overloaded "My Realm" string in your source, and according to the documentation, the recommended way of defining the settings is to use the overloaded string to write something like:
authSettings :: AuthSettings
authSettings = "My Realm" { authIsProtected = needsAuth }
That looks pretty horrible, but anyway, the needsAuth function will have signature:
needsAuth :: Request -> IO Bool
so it can inspect the Wai Request and render a decision in IO on whether or not the page needs basic authentication first. Calling pathInfo on the Request gives you a list of path components (no hostname and no query parameters). So, for your needs, the following should work:
needsAuth req = return $ case pathInfo req of
  "admin":_ -> True -- all admin pages need authentication
  _         -> False -- everything else is public
Note that these are the parsed non-query path components, so /admin and /admin/ and /admin/whatever and even /admin/?q=hello are protected, but obviously /administrator/... is not.
A full example:
{-# LANGUAGE OverloadedStrings #-}

import Web.Scotty
import Network.Wai.Middleware.HttpAuth
import Data.Text () -- needed for "admin" overloaded string in case
import Network.Wai (Request, pathInfo)

authSettings :: AuthSettings
authSettings = "My Realm" { authIsProtected = needsAuth }

needsAuth :: Request -> IO Bool
needsAuth req = return $ case pathInfo req of
  "admin":_ -> True -- all admin pages need authentication
  _         -> False -- everything else is public

main = scotty 3000 $ do
  middleware $ basicAuth (\u p -> return $ u == "username" && p == "password") authSettings
  get "/admin/deletedb" $ do
    html "<h1>Password database erased!</h1>"
  get "/" $ do
    html "<h1>Homepage</h1><p>Please don't <a href=/admin/deletedb>Delete the passwords</a>"

Cannot have file provisioner working with Terraform on DigitalOcean

I'm trying to use Terraform to create a DigitalOcean droplet on which Consul is installed.
I'm using the following .tf file, but it hangs and does not copy the Consul .zip file onto the droplet.
I got the following error message after a couple of minutes:
ssh: handshake failed: ssh: unable to authenticate, attempted methods
[none publickey], no supported methods remain
The droplets are created correctly, though. I can log in on the command line with the key I specified (without entering a password). I'm guessing the connection part might be faulty, but I'm not sure what I'm missing.
Any idea?
variable "do_token" {}
# Configure the DigitalOcean Provider
provider "digitalocean" {
token = "${var.do_token}"
}
# Create nodes
resource "digitalocean_droplet" "consul" {
count = "1"
image = "ubuntu-14-04-x64"
name = "consul-${count.index+1}"
region = "lon1"
size = "1gb"
ssh_keys = ["7b:51:d3:e3:ae:6e:c6:e2:61:2d:40:56:17:54:fc:e3"]
connection {
type = "ssh"
user = "root"
agent = true
}
provisioner "file" {
source = "consul_0.7.1_linux_amd64.zip"
destination = "/tmp/consul_0.7.1_linux_amd64.zip"
}
provisioner "remote-exec" {
inline = [
"sudo unzip -d /usr/local/bin /tmp/consul_0.7.1_linux_amd64.zip"
]
}
}
Terraform requires that you specify the private SSH key to use for the connection with private_key. You can create a new variable containing the path to your private key for use with Terraform's file interpolation function:
connection {
  type        = "ssh"
  user        = "root"
  agent       = true
  private_key = "${file("${var.private_key_path}")}"
}
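The variable itself is declared as usual; the default shown here is only an example path and can be overridden with -var or a .tfvars file:
variable "private_key_path" {
  default = "/home/me/.ssh/id_rsa"
}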
You face this issue because your SSH key is protected by a passphrase. To solve it, you should generate a key without a passphrase.

SSL options in gocql

In my Cassandra config I have enabled user authentication, and I connect with cqlsh over SSL.
I'm having trouble implementing the same with gocql; here is my code:
cluster := gocql.NewCluster("127.0.0.1")
cluster.Authenticator = gocql.PasswordAuthenticator{
    Username: "myuser",
    Password: "mypassword",
}
cluster.SslOpts = &gocql.SslOptions{
    CertPath: "/path/to/cert.pem",
}
When I try to connect I get following error:
gocql: unable to create session: connectionpool: unable to load X509 key pair: open : no such file or directory
In python I can do this with something like:
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
USER = 'username'
PASS = 'password'
ssl_opts = {'ca_certs': '/path/to/cert.pem',
'ssl_version': PROTOCOL_TLSv1
}
credentials = PlainTextAuthProvider(username = USER, password = PASS)
# define host, port, cqlsh protocaol version
cluster = Cluster(contact_points= HOST, protocol_version= CQLSH_PROTOCOL_VERSION, auth_provider = credentials, port = CASSANDRA_PORT)
I checked the gocql and TLS documentation here and here but I'm unsure about how to set ssl options.
You're adding a cert without a private key, which is where the "no such file or directory" error is coming from.
Your Python code is adding a CA; you should do the same with the Go code:
gocql.SslOptions{
    CaPath: "/path/to/cert.pem",
}
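Putting it together with your original snippet, the cluster setup would look roughly like this (a sketch; CertPath and KeyPath are only needed if Cassandra also requires client certificates):
cluster := gocql.NewCluster("127.0.0.1")
cluster.Authenticator = gocql.PasswordAuthenticator{
    Username: "myuser",
    Password: "mypassword",
}
cluster.SslOpts = &gocql.SslOptions{
    // CA certificate used to verify the server, matching ca_certs in the Python code
    CaPath: "/path/to/cert.pem",
}
session, err := cluster.CreateSession()
if err != nil {
    log.Fatal(err)
}
defer session.Close()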