I want to download a file in Groovy over a connection that uses both single sign-on (SSO) and HTTPS (SSL). Is there an easy way to do this? I'm not intending to build a full-blown application, so security is not as much of a concern.
def data = new URL("https://server/context/servlet?param1=value1").getText()
print data
I currently do the download using curl but would ideally not have to call it. The current call is below.
curl --negotiate -u user:pass -L --insecure -o filename.txt "https://server/context/servlet?param1=value1"
Two key points for the solution I'm looking for:
- It does not involve making a system call to curl
- It does not include manually setting up a certificate.
I would consider libraries.
To avoid the SSL PKIX validation check in Groovy, you can implement an X509TrustManager the same way you would in Java.
Note that this disables server certificate validation, so it is a security risk:
import javax.net.ssl.*
// create a TrustManager to avoid PKIX path validation
def trustManager = [
    checkClientTrusted: { chain, authType -> },
    checkServerTrusted: { chain, authType -> },
    getAcceptedIssuers: { null }
] as X509TrustManager
// create an SSLContext that uses the "lax" trust manager
def context = SSLContext.getInstance("TLS")
context.init(null, [trustManager] as TrustManager[], null)
// install it as the default SSL socket factory for HttpsURLConnection
HttpsURLConnection.setDefaultSSLSocketFactory(context.getSocketFactory())
// finally you can connect to the server "insecurely"
def data = new URL("https://server/context/servlet?param1=value1").getText()
print data
Regarding your second question: to provide basic authentication like curl does with the --user argument, you can set a default username/password for your connections using the Authenticator class:
// supply default credentials whenever a connection is challenged for authentication
Authenticator.setDefault(new Authenticator() {
    protected PasswordAuthentication getPasswordAuthentication() {
        return new PasswordAuthentication("user", "pass".toCharArray())
    }
})
Note that it is possible to do this in other ways in Groovy using libraries, but this is one approach using only standard Java classes.
I saw this topic about Kerberos authentication from 2020: https://github.com/mlflow/mlflow/issues/2678. Our team is trying to authenticate with Kerberos via SPNEGO. We set up SPNEGO on the nginx server and it works fine: we get a 200 response when we curl the MLflow HTTP URI. But we can't get it to work with the MLflow environment variables.
The question is: does MLflow have a feature for SPNEGO authentication, or does it only support the following environment variables and authentication methods (a usage sketch follows the list)?
- MLFLOW_TRACKING_USERNAME and MLFLOW_TRACKING_PASSWORD - username and password to use with HTTP Basic authentication. To use Basic authentication, you must set both environment variables.
- MLFLOW_TRACKING_TOKEN - token to use with HTTP Bearer authentication. Basic authentication takes precedence if set.
- MLFLOW_TRACKING_INSECURE_TLS - if set to the literal true, MLflow does not verify the TLS connection, meaning it does not validate certificates or hostnames for https:// tracking URIs. This flag is not recommended for production environments. If this is set to true, then MLFLOW_TRACKING_SERVER_CERT_PATH must not be set.
- MLFLOW_TRACKING_SERVER_CERT_PATH - path to a CA bundle to use. Sets the verify param of the requests.request function (see https://requests.readthedocs.io/en/master/api/). When you use a self-signed server certificate, you can use this to verify it on the client side. If this is set, MLFLOW_TRACKING_INSECURE_TLS must not be set (false).
- MLFLOW_TRACKING_CLIENT_CERT_PATH - path to an SSL client certificate file (.pem). Sets the cert param of the requests.request function (see https://requests.readthedocs.io/en/master/api/). This can be used to use a (self-signed) client certificate.
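For reference, this is roughly how we set those variables from Python; a minimal sketch, with a placeholder tracking server URL (MLFLOW_TRACKING_URI itself is standard MLflow configuration):

# Minimal sketch of the documented environment variables; the tracking URL
# below is a placeholder. Any MLflow client call made afterwards picks these
# up via the standard MLFLOW_TRACKING_URI mechanism.
import os

os.environ["MLFLOW_TRACKING_URI"] = "https://mlflow.example.com"

# HTTP Basic authentication: both variables must be set.
os.environ["MLFLOW_TRACKING_USERNAME"] = "user"
os.environ["MLFLOW_TRACKING_PASSWORD"] = "pass"

# Or HTTP Bearer authentication (Basic takes precedence if both are set):
# os.environ["MLFLOW_TRACKING_TOKEN"] = "my-token"

# TLS handling for a self-signed server certificate: set at most one of these.
# os.environ["MLFLOW_TRACKING_INSECURE_TLS"] = "true"
# os.environ["MLFLOW_TRACKING_SERVER_CERT_PATH"] = "/path/to/ca-bundle.pem"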
I looked at the source code. No, the mlflow.utils.rest_utils.http_request function doesn't support SPNEGO in any way – it can only send HTTP 'Basic' or 'Bearer' authorization headers.
However, it should be relatively easy to change it to generate a 'Negotiate' header using pyspnego, or even to use requests-gssapi given that it already uses Requests internally:
# For Linux:
import requests_gssapi
# For Windows:
#import requests_negotiate_sspi

def http_request(...):
    ...
    if not auth_str:
        # For Linux:
        kwargs["auth"] = requests_gssapi.HTTPSPNEGOAuth()
        # For Windows:
        #kwargs["auth"] = requests_negotiate_sspi.HttpNegotiateAuth()
    ...
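If you first want to confirm that SPNEGO works through the nginx front end outside of MLflow, a quick sketch with plain Requests should do it (the URL and CA bundle path below are placeholders):

# Standalone check of SPNEGO/Negotiate authentication against the tracking
# server's front end; requires a valid Kerberos ticket (e.g. from kinit).
import requests
import requests_gssapi

resp = requests.get(
    "https://mlflow.example.com/",           # placeholder tracking URL
    auth=requests_gssapi.HTTPSPNEGOAuth(),   # builds the Negotiate header via GSSAPI
    verify="/path/to/ca-bundle.pem",         # or verify=False while testing
)
print(resp.status_code)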
The microstack.openstack project recently enabled/required TLS authentication, as outlined here. I am working on deploying an OpenStack cluster to MicroStack using a Terraform example here. As a result of the change, I receive a "certificate signed by unknown authority" error when trying to create an OpenStack networking client data source.
data "openstack_networking_network_v2" "terraform" {
name = "${var.pool}"
}
The error I get when calling terraform plan:
Error: Error creating OpenStack networking client: Post "https://XXX.XXX.XXX.132:5000/v3/auth/tokens": OpenStack connection error, retries exhausted. Aborting. Last error was: x509: certificate signed by unknown authority
with data.openstack_networking_network_v2.terraform,
on datasources.tf line 1, in data "openstack_networking_network_v2" "terraform":
1: data "openstack_networking_network_v2" "terraform" {
Is there a way to ignore the certificate error, so that I can successfully use terraform to create the openstack cluster? I have tried updating the generate-self-signed parameter, but I haven't seen any change in behavior:
sudo snap set microstack config.tls.generate-self-signed=false
I think the insecure provider parameter is what you are looking for:
(Optional) Trust self-signed SSL certificates. If omitted, the OS_INSECURE environment variable is used.
Try:
provider "openstack" {
insecure = true
}
Disclaimer: I haven't tried that.
The problem was that I did not source the admin-openrc.sh file that I had downloaded from the horizon web page:
$ source admin-openrc.sh
I faced the same problem; in case it helps, here is my contribution:
sudo snap get microstack config.tls
Key Value
config.tls.cacert-path /var/snap/microstack/common/etc/ssl/certs/cacert.pem
config.tls.cert-path /var/snap/microstack/common/etc/ssl/certs/cert.pem
config.tls.compute {...}
config.tls.generate-self-signed true
config.tls.key-path /var/snap/microstack/common/etc/ssl/private/key.pem
In your terraform directory, do:
cat /var/snap/microstack/common/etc/ssl/certs/cacert.pem : copy/paste -> cacert.pem
cat /var/snap/microstack/common/etc/ssl/certs/cert.pem : copy/paste -> cert.pem
cat /var/snap/microstack/common/etc/ssl/private/key.pem : copy/paste -> key.pem
And create a main.tf file in your terraform directory:
provider "openstack" {
user_name = "admin"
tenant_name = "admin"
password = "pass" (get with sudo snap get microstack config.credentials.keystone-password)
auth_url = "https://host_ip:5000/v3"
#insecure = true (uncomment & comment cacert_file + key line)
cacert_file = "/terraform_dir/cacert.pem"
#cert = "/terraform_dir/cert.pem" (if needed)
key = "/terraform_dir/private.pem"
region = "microstack" (or regionOne)
}
To finish, run terraform plan/apply.
I have a Flask-powered app and I'm trying to enable OIDC SSO for it. I opted for WSO2 as the identity server. I have created a callback URL and added the necessary settings in both the Identity Server and the Flask app, as shown below. The app gets through the credential login page, and after that I get an SSL certificate verification error.
My try:
I have tried using self-signed certificates and app.run(ssl_context='adhoc'), but neither worked.
Code Snippet:
import logging

from flask import Flask, g
from flask_oidc import OpenIDConnect
# import ssl

logging.basicConfig(level=logging.DEBUG)

app = Flask(__name__)
app.config.update({
    'SECRET_KEY': 'SomethingNotEntirelySecret',
    'TESTING': True,
    'DEBUG': True,
    'OIDC_CLIENT_SECRETS': 'client_secrets.json',
    'OIDC_ID_TOKEN_COOKIE_SECURE': False,
    'OIDC_REQUIRE_VERIFIED_EMAIL': False,
})
oidc = OpenIDConnect(app)

@app.route('/private')
@oidc.require_login
def hello_me():
    # import pdb;pdb.set_trace()
    info = oidc.user_getinfo(['email', 'openid_id'])
    return ('Hello, %s (%s)! Return' %
            (info.get('email'), info.get('openid_id')))

if __name__ == '__main__':
    # app.run(host='sanudev', debug=True)
    # app.run(debug=True)
    # app.run(ssl_context='adhoc')
    app.run(ssl_context=('cert.pem', 'key.pem'))
Client Info:
{
  "web": {
    "auth_uri": "https://localhost:9443/oauth2/authorize",
    "client_id": "hXCcX_N75aIygBIY7IwnWRtRpGwa",
    "client_secret": "8uMLQ92Pm8_dPEjmGSoGF7Y6fn8a",
    "redirect_uris": [
      "https://sanudev:5000/oidc_callback"
    ],
    "userinfo_uri": "https://localhost:9443/oauth2/userinfo",
    "token_uri": "https://localhost:9443/oauth2/token",
    "token_introspection_uri": "https://localhost:9443/oauth2/introspect"
  }
}
App Info:
python 3.8
Flask 1.1.2
Answering my own question to reach the community effectively; here is where I got stuck and the story behind the fix.
TLDR:
The SSL issue appeared because, in the OIDC flow, the WSO2 server has to transfer the secure auth token only through an SSL tunnel. This is mandated by the standard for security purposes.
The Carbon server has an SSL certificate (a self-signed one); to make the secure token transfer through the SSL tunnel, the client also has to be configured with at least a self-signed certificate.
Since I was using the flask-oidc library, there is a provision to allow that; please refer to the configuration below.
{
  "web": {
    "auth_uri": "https://localhost:9443/oauth2/authorize",
    "client_id": "someid",
    "client_secret": "somesecret",
    "redirect_uris": [
      "https://localhost:5000/oidc_callback"
    ],
    "userinfo_uri": "http://localhost:9763/oauth2/userinfo",
    "token_uri": "http://localhost:9763/oauth2/token",
    "token_introspection_uri": "http://localhost:9763/oauth2/introspect",
    "issuer": "https://localhost:9443/oauth2/token"  # This can solve your issue
  }
}
For quick development purposes, you can enable a secure HTTPS connection by adding the ad-hoc config to the Flask app run settings.
if __name__ == '__main__':
    # app.run(ssl_context=('cert.pem', 'key.pem'))  # for a self-signed cert
    app.run(debug=True, ssl_context='adhoc')  # ad-hoc HTTPS (requires the pyOpenSSL package)
Let me preface this answer with one caveat:
DO NOT DO THIS IN PRODUCTION ENVIRONMENTS
No, seriously, do not do this in production; it should only be done for development purposes.
Anyway, open the oauth2client\transport.py file.
You're going to see this file's location in the error that gets spit out. For me it was in my Anaconda env:
AppData\Local\Continuum\anaconda3\envs\conda_env\lib\site-packages\oauth2client\transport.py
Find this function (line 73 for me):
def get_http_object(*args, **kwargs):
    """Return a new HTTP object.

    Args:
        *args: tuple, The positional arguments to be passed when
            constructing a new HTTP object.
        **kwargs: dict, The keyword arguments to be passed when
            constructing a new HTTP object.

    Returns:
        httplib2.Http, an HTTP object.
    """
    return httplib2.Http(*args, **kwargs)
Change the return statement to:
return httplib2.Http(*args, **kwargs, disable_ssl_certificate_validation=True)
You may need to do the same thing to line 445 of flask_oidc/__init__.py
credentials.refresh(httplib2.Http(disable_ssl_certificate_validation=True))
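If you would rather not edit the installed package, a development-only alternative is to monkey-patch oauth2client.transport.get_http_object at startup from your own code (the flask_oidc call above may still need its separate change):

# Development-only sketch: patch oauth2client at runtime instead of editing
# site-packages, so the change lives in your own code and survives reinstalls.
import httplib2
from oauth2client import transport

def _insecure_http_object(*args, **kwargs):
    # Force certificate validation off for every HTTP object oauth2client builds.
    kwargs["disable_ssl_certificate_validation"] = True
    return httplib2.Http(*args, **kwargs)

transport.get_http_object = _insecure_http_object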
Upgrading the certifi package should solve the problem.
pip install --upgrade certifi
It worked for me when I faced exactly the same issue.
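If you want to confirm which CA bundle Requests will pick up after the upgrade, a quick check (assuming the default certifi-backed setup, no custom REQUESTS_CA_BUNDLE):

# Print the path of the CA bundle that certifi ships; Requests uses this
# bundle by default when no explicit verify path is configured.
import certifi

print(certifi.where())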
As part of our automation, we need to set the SSL certificate.
If I set it in the feature file (as shown below), it works perfectly fine. But I have a huge number of feature files and want to define this globally so that this SSL configuration is used in all of them.
And configure ssl = { keyStore: 'wmcloudPreProd2_truststore.jks', keyStorePassword: 'manage', keyStoreType: 'jks' };
I am looking for a way to define this SSL configuration for the complete automation project.
Thanks in advance
Easy, in karate-config.js you can do:
karate.configure('ssl', { keyStore: 'wmcloudPreProd2_truststore.jks', keyStorePassword: 'manage', keyStoreType: 'jks' });
This is mentioned in the doc: https://github.com/intuit/karate#configure
I am currently working on a prototype for a WCF service that will make use of client-certificate authentication. We would like to be able to directly publish our application to IIS, but also allow SSL offloading using IIS ARR (Application Request Routing).
After digging through the documentation, I have been able to test both configurations successfully. I am able to retrieve the client certificate used to authenticate from:
- X-Arr-ClientCert - the header that contains the certificate when using ARR.
- X509CertificateClaimSet - when published directly to IIS, this is how to retrieve the client certificate.
To verify that the request is allowed, I match the thumbprint of the certificate to the expected thumbprint that is configured somewhere. To my surprise, when getting the certificate through different methods, the same certificate has different thumbprints.
To verify what is going on, I converted the "RawData" property of both certificates to Base64 and found the strings are identical, except that in the X509CertificateClaimSet case there are spaces in the certificate data, while in the ARR case there are not.
My question:
Has anyone else run into this, and can I actually rely on thumbprints? If not, my backup plan is to implement a check on Subject and Issuer, but I am open to other suggestions.
I have included some (simplified) sample code below:
string expectedThumbprint = "...";

if (OperationContext.Current.ServiceSecurityContext == null ||
    OperationContext.Current.ServiceSecurityContext.AuthorizationContext.ClaimSets == null ||
    OperationContext.Current.ServiceSecurityContext.AuthorizationContext.ClaimSets.Count <= 0)
{
    // Claim sets not found; assume we are reverse proxied by ARR (Application Request Routing).
    // If this is the case, we expect the certificate to be in the X-Arr-ClientCert header.
    IncomingWebRequestContext request = WebOperationContext.Current.IncomingRequest;
    string certBase64 = request.Headers["X-Arr-ClientCert"];
    if (certBase64 == null) return false;

    byte[] bytes = Convert.FromBase64String(certBase64);
    var cert = new System.Security.Cryptography.X509Certificates.X509Certificate2(bytes);
    return cert.Thumbprint == expectedThumbprint;
}
// In this case, we are directly published to IIS with certificate authentication.
else
{
    bool correctCertificateFound = false;
    foreach (var claimSet in OperationContext.Current.ServiceSecurityContext.AuthorizationContext.ClaimSets)
    {
        if (!(claimSet is X509CertificateClaimSet)) continue;

        var cert = ((X509CertificateClaimSet)claimSet).X509Certificate;
        // Match the certificate thumbprint to the expected value.
        if (cert.Thumbprint == expectedThumbprint)
        {
            correctCertificateFound = true;
            break;
        }
    }
    return correctCertificateFound;
}
Not sure what your exact scenario is, but I've always liked the Octopus Deploy approach to secure server <-> tentacle (client) communication. It is described in their Octopus Tentacle communication article. They essentially use the SslStream class, self-signed X.509 certificates and trusted thumbprints configured on both server and tentacle.
-Marco-
When setting up the test again for a peer review by colleagues, it appears that my issue has gone away. I'm not sure if I made a mistake (probably) or if rebooting my machine helped, but in any case the thumbprint is now reliable across both methods of authentication.