Trouble using Basic Authentication in Grails

I am using CAS + LDAP with Spring Security for user authentication in a Grails 2.2.3 app. However, I would also like to create a simple API with this application that will require a different authentication method. I would like to use Basic Authentication for this. However, adding basicAuth to my Config.groovy file is not working, and I am getting this error:
authentication.ProviderManager Authentication attempt using org.springframework.security.cas.authentication.CasAuthenticationProvider
www.BasicAuthenticationFilter Authentication request for user: grantmc failed: org.springframework.security.authentication.ProviderNotFoundException: No AuthenticationProvider found for org.springframework.security.authentication.UsernamePasswordAuthenticationToken
I know I'm using the correct credentials, so I'm not sure what the error is saying exactly or what I need to do to fix it. Does anyone have any ideas?
Here are the settings in resources.groovy:
initialDirContextFactory(DefaultSpringSecurityContextSource, "ldap://ldap.company.com:389") {
    userDn = "CN=Admin,OU=Users,DC=company,DC=com"
    password = "password"
}

ldapUserSearch(FilterBasedLdapUserSearch,
        "OU=Users,DC=company,DC=com",
        "employeeId={0}",
        initialDirContextFactory) { }

ldapUserDetailsMapper(MyLdapUserDetailsContextMapper) { }

ldapAuthoritiesPopulator(DefaultLdapAuthoritiesPopulator,
        initialDirContextFactory,
        "ou=Groups,OU=Users,DC=company,DC=com") {
    groupRoleAttribute = "cn"
    groupSearchFilter = "member={0}"
    rolePrefix = ""
    searchSubtree = true
    convertToUpperCase = true
    ignorePartialResultException = true
}

userDetailsService(LdapUserDetailsService, ldapUserSearch, ldapAuthoritiesPopulator) {
    userDetailsMapper = ref('ldapUserDetailsMapper')
}
And my settings in Config.groovy:
grails.plugins.springsecurity.cas.serverUrlPrefix = 'https://cas.company.com/cas'
grails.plugins.springsecurity.cas.loginUri = "/login"
grails.plugins.springsecurity.cas.serviceUrl = "${grails.serverURL}/j_spring_cas_security_check"
grails.plugins.springsecurity.providerNames = ['casAuthenticationProvider']
grails.plugins.springsecurity.defaultTargetUrl = '/'
grails.plugins.springsecurity.alwaysUseDefault = false
grails.plugins.springsecurity.useBasicAuth = true
grails.plugins.springsecurity.basic.realmName = "API"
grails.plugins.springsecurity.filterChain.chainMap = [
    '/api/**': 'JOINED_FILTERS,-exceptionTranslationFilter',
    '/**': 'JOINED_FILTERS,-basicAuthenticationFilter,-basicExceptionTranslationFilter'
]
grails.plugins.springsecurity.securityConfigType = 'InterceptUrlMap'
grails.plugins.springsecurity.interceptUrlMap = [
    '/api/**': ['IS_AUTHENTICATED_FULLY'],
    '/**': ['IS_AUTHENTICATED_FULLY']
]
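The error itself points at the gap: providerNames registers only casAuthenticationProvider, and no configured provider can consume the UsernamePasswordAuthenticationToken that the basic-auth filter creates. A hedged sketch of the direction in Config.groovy (ldapAuthProvider is an assumed bean name, not an existing bean; a matching LDAP or DAO authentication provider would still have to be defined in resources.groovy):

// Sketch: register a provider that can handle UsernamePasswordAuthenticationToken
// alongside CAS. 'ldapAuthProvider' is an assumed bean name; it would need to be
// wired up in resources.groovy from the LDAP beans above.
grails.plugins.springsecurity.providerNames = [
    'casAuthenticationProvider',
    'ldapAuthProvider'
]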

Related

Error while creating CNAME at Cloudflare through Terraform

What I did:
Created a Terraform module with the cloudflare provider:
provider "cloudflare" {
}
Provided the token to the shell environment using the variable CLOUDFLARE_API_TOKEN.
The token has access to the zone, say example.com.
Created a CNAME record targeting my S3 bucket using:
resource "cloudflare_record" "cname-bucket" {
zone_id = var.domain
name = var.bucket_name
value = "${var.bucket_name}.s3-website.${var.region}.amazonaws.com"
proxied = true
type = "CNAME"
}
After applying this module, I get this error:
Error: failed to create DNS record: error from makeRequest: HTTP status 400: content "{\"success\":false,\"errors\":[{\"code\":7003,\"message\":\"Could not route to \\/zones\\/example.com\\/dns_records, perhaps your object identifier is invalid?\"},{\"code\":7000,\"message\":\"No route for that URI\"}],\"messages\":[],\"result\":null}"
Creating the same record through the Cloudflare dashboard in the browser works fine, but when trying the same with Terraform I get the above error.
The access my token has: example.com - DNS:Edit
What I need:
What am I doing wrong here?
How do I create this CNAME record using a Terraform module?
It looks like the problem is the zone_id = var.domain line in your cloudflare_record resource. You are using example.com as the zone_id, but you should instead be using your Cloudflare Zone ID.
You can find your Zone ID in your Cloudflare Dashboard for your domain: check Overview, in the API section of the right-hand column.
locals {
  domain   = "example.com"
  hostname = "example.com" # TODO: Varies by environment
}

variable "CLOUDFLARE_ACCOUNT_ID" {}
variable "CLOUDFLARE_API_TOKEN" { sensitive = true }

provider "cloudflare" {
  api_token  = var.CLOUDFLARE_API_TOKEN
  account_id = var.CLOUDFLARE_ACCOUNT_ID
}

resource "cloudflare_zone" "default" {
  zone = local.domain
  plan = "free"
}

resource "cloudflare_record" "a" {
  zone_id = cloudflare_zone.default.id
  name    = local.hostname
  value   = "192.0.2.1"
  type    = "A"
  ttl     = 1
  proxied = true
}
Source https://github.com/kriasoft/terraform-starter-kit
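If you would rather not copy the ID out of the dashboard, a hedged alternative (assuming the zone already exists in your Cloudflare account and your token can read it) is to look the ID up with the cloudflare_zones data source:

# Look up the zone by name instead of hard-coding the Zone ID.
data "cloudflare_zones" "main" {
  filter {
    name = "example.com"
  }
}

resource "cloudflare_record" "cname-bucket" {
  # First (and only) match for the zone name above
  zone_id = data.cloudflare_zones.main.zones[0].id
  name    = var.bucket_name
  value   = "${var.bucket_name}.s3-website.${var.region}.amazonaws.com"
  proxied = true
  type    = "CNAME"
}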
As an alternative to the other answers, you can use this module. In this case, your code will look like this:
terraform {
  required_providers {
    cloudflare = {
      source = "cloudflare/cloudflare"
    }
  }
}

variable "cloudflare_api_token" {
  type        = string
  sensitive   = true
  description = "The Cloudflare API token."
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}

module "bucket" {
  source  = "registry.terraform.io/alex-feel/zone/cloudflare"
  version = "1.8.0"
  zone    = var.domain # For instance, it may be example.com
  records = [
    {
      record_name = "bucket_cname"
      type        = "CNAME"
      name        = var.bucket_name # A subdomain of the example.com domain
      value       = "${var.bucket_name}.s3-website.${var.region}.amazonaws.com" # Where the subdomain should point to
      proxied     = true
    }
  ]
}
To use the module with this configuration, your token must have at least the following privileges for the desired zone: DNS:Edit, Zone:Edit, Zone Settings:Edit. To use all the features of the module, you also need the Page Rules:Edit privilege.
P.S. You do not need the zone_id for this configuration.

Airflow authentication with RBAC and Keycloak

I want to implement RBAC-based auth in Airflow with Keycloak. Can someone help me with it?
I have created the webserver_config.py file and I am using Docker to bring up the Airflow webserver.
from airflow.www_rbac.security import AirflowSecurityManager
from flask_appbuilder.security.manager import AUTH_OAUTH
import os
import json

AUTH_TYPE = AUTH_OAUTH
AUTH_USER_REGISTRATION_ROLE = "Admin"

OAUTH_PROVIDERS = [
    {
        'name': 'keycloak',
        'icon': 'fa-user-circle',
        'token_key': 'access_token',
        'remote_app': {
            'base_url': 'http://localhost:8180/auth/realms/airflow/protocol/openid-connect/',
            'request_token_params': {
                'scope': 'email profile'
            },
            'request_token_url': None,
            'access_token_url': 'http://localhost:8180/auth/realms/airflow/protocol/openid-connect/token',
            'authorize_url': 'http://localhost:8180/auth/realms/airflow/protocol/openid-connect/auth',
            'consumer_secret': "98ec2e89-9902-4577-af8c-f607e34aa659"
        }
    }
]
I have also set this in airflow.cfg:
rbac = True
authenticate = True
But it still does not redirect to Keycloak when Airflow loads.
I use:
docker build --rm --build-arg AIRFLOW_DEPS="datadog,dask" --build-arg PYTHON_DEPS="flask_oauthlib>=0.9" -t airflow .
and
docker run -d -p 8080:8080 airflow webserver
to execute it.
I may be coming late to this one, and my answer may not work exactly as written since I'm using a different auth provider; however, it's still OAuth2, and in a previous life I used Keycloak, so my solution should also work there.
My answer makes use of authlib (at the time of writing, newer versions of Airflow have switched to it; I am on 2.1.2).
I've raised a feature request against Flask-AppBuilder, which Airflow uses, as its OAuth handler should really take care of this when the scope includes openid (you'd need to add openid to your scopes).
From memory, Keycloak returns an id_token alongside the access_token and refresh_token, so this code simply decodes what has already been returned.
import os
import logging

from airflow.www.security import AirflowSecurityManager
from flask_appbuilder.security.manager import AUTH_OAUTH

log = logging.getLogger(__name__)

basedir = os.path.abspath(os.path.dirname(__file__))

MY_PROVIDER = 'keycloak'

class CustomSecurityManager(AirflowSecurityManager):
    def oauth_user_info(self, provider, resp):
        if provider == MY_PROVIDER:
            log.debug("{0} response received : {1}".format(provider, resp))
            id_token = resp["id_token"]
            log.debug(str(id_token))
            # The Azure handler already knows how to parse a JWT id_token,
            # so reuse its parser here.
            me = self._azure_jwt_token_parse(id_token)
            log.debug("Parse JWT token : {0}".format(me))
            if not me.get("name"):
                firstName = ""
                lastName = ""
            else:
                firstName = me.get("name").split(' ')[0]
                lastName = me.get("name").split(' ')[-1]
            return {
                "username": me.get("email"),
                "email": me.get("email"),
                "first_name": firstName,
                "last_name": lastName,
                "role_keys": me.get("groups", ['Guest'])
            }
        else:
            return {}

AUTH_TYPE = AUTH_OAUTH
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = "Guest"
AUTH_ROLES_SYNC_AT_LOGIN = True
CSRF_ENABLED = True

AUTH_ROLES_MAPPING = {
    "Airflow_Users": ["User"],
    "Airflow_Admins": ["Admin"],
}

OAUTH_PROVIDERS = [
    {
        'name': MY_PROVIDER,
        'icon': 'fa-circle-o',
        'token_key': 'access_token',
        'remote_app': {
            # Placeholders: set these for your Keycloak realm and client.
            'server_metadata_url': WELL_KNOWN_URL,
            'client_id': CLIENT_ID,
            'client_secret': CLIENT_SECRET,
            'client_kwargs': {
                'scope': 'openid groups',
                'token_endpoint_auth_method': 'client_secret_post'
            },
            'access_token_method': 'POST',
        }
    }
]

SECURITY_MANAGER_CLASS = CustomSecurityManager
Ironically, the Azure provider already returns an id_token and it is already handled, so my code makes use of that existing parsing.
The code decodes the id_token.
Note you can turn on debug logging by setting the environment variable AIRFLOW__LOGGING__FAB_LOGGING_LEVEL to DEBUG.
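With the question's Docker setup, that could look like this (a sketch reusing the question's image name and command):

docker run -d -p 8080:8080 -e AIRFLOW__LOGGING__FAB_LOGGING_LEVEL=DEBUG airflow webserver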
If you switch on debug logs and see an entry like the following (note the id_token), you can probably use the code I've supplied:
DEBUG - OAUTH Authorized resp: {'access_token': '<redacted>', 'expires_in': 3600, 'id_token': '<redacted>', 'refresh_token': '<redacted>', 'scope': 'openid groups', 'token_type': 'Bearer', 'expires_at': <redacted - unix timestamp>}
The id_token is in three parts joined by a full stop (.). The middle part contains the user data and is simply base64-encoded.
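As a quick illustration of that structure, a minimal sketch (no signature verification, so use it only to inspect a token you already trust):

import base64
import json

def jwt_claims(id_token):
    """Decode the middle (payload) part of a JWT without verifying it."""
    payload = id_token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# e.g. jwt_claims(resp["id_token"]).get("email")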

BigQuery create table from Sheets files using Terraform

I'm trying to create a BigQuery table using Terraform, ingesting data from Google Sheets. Here is my external_data_configuration block:
resource "google_bigquery_table" "sheet" {
dataset_id = google_bigquery_dataset.bq-dataset.dataset_id
table_id = "sheet"
external_data_configuration {
autodetect = true
source_format = "GOOGLE_SHEETS"
google_sheets_options {
skip_leading_rows = 1
}
source_uris = [
"https://docs.google.com/spreadsheets/d/xxxxxxxxxxxxxxxxx",
]
}
I made the file public but when I try to create the table I get the error:
Error: googleapi: Error 400: Error while reading table: sheet, error
message: Failed to read the spreadsheet. Errors: No OAuth token with
Google Drive scope was found., invalid
I read the Terraform documentation and it seems that I need to specify access_token and scopes in my provider.tf file. I just don't know how to do that, as I think it will conflict with my current authentication method (service account).
Solution
Add the scopes argument to your provider.tf
provider "google" {
credentials = "${file("${var.path}/secret.json")}"
scopes = ["https://www.googleapis.com/auth/drive","https://www.googleapis.com/auth/bigquery"]
project = "${var.project_id}"
region = "${var.gcp_region}"
}
You need to add the scopes for Google Drive and BigQuery.
I suspect you only need to supply the scopes, while retaining the existing service account credentials. Service account credential files don't specify scope. Per the terraform documentation, the following scopes are used by default:
> https://www.googleapis.com/auth/compute
> https://www.googleapis.com/auth/cloud-platform
> https://www.googleapis.com/auth/ndev.clouddns.readwrite
> https://www.googleapis.com/auth/devstorage.full_control
> https://www.googleapis.com/auth/userinfo.email
By default, most GCP services accept and use the cloud-platform scope. However, Google Drive does not accept/use the cloud-platform scope, and so this particular feature in BigQuery requires additional scopes to be specified. In order to make this work, you should augment the default Terraform list of scopes with the Google Drive scope https://www.googleapis.com/auth/drive (relevant BQ documentation). For a more exhaustive list of documented scopes, see https://developers.google.com/identity/protocols/oauth2/scopes
An access token implies that you've already gone through an authentication flow and supplied the necessary scope(s), so it doesn't make sense to supply both scopes and a token. You'd either generate the token with the scopes, or use the service account with additional scopes.
Hope this helps.
Example:
resource "google_service_account" "gdrive-connector" {
project = "test-project"
account_id = "gdrive-connector"
display_name = "Service account Google Drive transfers"
}
data "google_service_account_access_token" "gdrive-connector" {
target_service_account = google_service_account.gdrive-connector.email
scopes = ["https://www.googleapis.com/auth/drive", "https://www.googleapis.com/auth/bigquery"]
lifetime = "300s"
}
provider "google" {
alias = "gdrive-connector"
access_token = data.google_service_account_access_token.gdrive-connector.access_token
}
resource "google_bigquery_dataset_iam_member" "gdrive-connector" {
project = "test-project"
dataset_id = "test-dataset"
role = "roles/bigquery.dataOwner"
member = "serviceAccount:${google_service_account.gdrive-connector.email}"
}
resource "google_project_iam_member" "gdrive-connector" {
project = "test-project"
role = "roles/bigquery.jobUser"
member = "serviceAccount:${google_service_account.gdrive-connector.email}"
}
resource "google_bigquery_table" "sheets_table" {
provider = google.gdrive-connector
project = "test-project"
dataset_id = "test-dataset"
table_id = "sheets_table"
external_data_configuration {
autodetect = true
source_format = "GOOGLE_SHEETS"
google_sheets_options {
skip_leading_rows = 1
}
source_uris = [
"https://docs.google.com/spreadsheets/d/xxxxxxxxxxxxxxxx/edit?usp=sharing",
]
}
}

How to change multiple sites' DNS in a Cloudflare account?

I have a lot of sites in a Cloudflare account, and sometimes when servers are migrated I need to change every domain's DNS in CF manually. How can I use some tool or script that helps me download all the domain info and then easily change it?
Maybe some Terraform example? I haven't used Terraform yet, so I'm just thinking about ways to automate this process.
Thanks.
Yes, you can use Terraform for this. There is an official Cloudflare provider, the documentation for which you can find here.
When using the provider "directly", your Terraform configuration will look like this:
terraform {
  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = ">= 3.12.1"
    }
  }
}

variable "cloudflare_api_token" {
  type      = string
  sensitive = true
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}

resource "cloudflare_zone" "acme_com" {
  zone = "acme.com"
}
You may be interested in the following Cloudflare resources for use in your configuration:
cloudflare_zone
cloudflare_zone_settings_override
cloudflare_record
Also, you can use this module. Then your configuration may look like this:
terraform {
  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = ">= 3.12.1"
    }
  }
}

variable "cloudflare_api_token" {
  type      = string
  sensitive = true
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}

module "acme_com" {
  source  = "registry.terraform.io/alex-feel/zone/cloudflare"
  version = "1.7.0"
  zone    = "acme.com"
}
There are examples to help you get started with the module.
And here is a concrete, ready-to-use example that you can use in your specific case when using the module:
terraform {
  required_providers {
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = ">= 3.12.1"
    }
  }
}

variable "cloudflare_api_token" {
  type      = string
  sensitive = true
}

provider "cloudflare" {
  api_token = var.cloudflare_api_token
}

locals {
  # All your zones go here
  zones = ["acme.com", "example.com"]
  # Your IP for A records for all the zones goes here
  ip = "192.0.2.1"
}

module "all_domains" {
  source   = "registry.terraform.io/alex-feel/zone/cloudflare"
  version  = "1.7.0"
  for_each = toset(local.zones)
  zone     = each.value
  records = [
    {
      record_name = "a_main"
      type        = "A"
      value       = local.ip
    }
  ]
}
In this case, it is enough to list all your domains in the zones local and specify the desired IP in the ip local. As a result, an A record with the specified IP will be created for each of your domains.
To get all your zones, you can use the Cloudflare API List Zones method. Your request will look like this:
curl --request GET \
  --url https://api.cloudflare.com/client/v4/zones \
  --header 'Authorization: Bearer YOUR_TOKEN'
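To turn that response into the zones list used in the locals block above, one hedged option (assuming jq is installed) is to extract just the zone names:

curl --request GET \
  --url https://api.cloudflare.com/client/v4/zones \
  --header 'Authorization: Bearer YOUR_TOKEN' | jq -r '.result[].name'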

Configuring FreeRADIUS with LDAP for WPA2 Enterprise

I need help configuring freeradius with WPA2 Enterprise via LDAP.
LDAP normally works for other services; however, it does not work for WPA2E.
We have also managed to get WPA2E working with a hard-coded username/password, so we know all the components work on their own, but they do not work together.
We have the FreeRADIUS server configured fine to work with the LDAP service.
Any help is appreciated.
Here is my ldap setting from the FreeRADIUS modules/ldap file (mostly irrelevant for this issue):
ldap {
    server = "ldapmaster.domain.com,ldapslave.domain.com"
    identity = "uid=binder,ou=services,dc=security,dc=domain,dc=com"
    password = asdfasdfasdf
    basedn = "ou=internal,ou=users,dc=security,dc=domain,dc=com"
    filter = "(mail=%{%{Stripped-User-Name}:-%{User-Name}})"
    ldap_connections_number = 5
    max_uses = 0
    timeout = 4
    timelimit = 3
    net_timeout = 1
    tls {
        start_tls = yes
        require_cert = "never"
    }
    dictionary_mapping = ${confdir}/ldap.attrmap
    password_attribute = userPassword
    edir_account_policy_check = no
    keepalive {
        idle = 60
        probes = 3
        interval = 3
    }
}
I also have the following setup in eap.conf:
eap {
    default_eap_type = peap
    timer_expire = 60
    ignore_unknown_eap_types = no
    cisco_accounting_username_bug = no
    max_sessions = 4096
    md5 {
    }
    leap {
    }
    gtc {
        auth_type = PAP
    }
    tls {
        certdir = ${confdir}/certs
        cadir = ${confdir}/certs
        private_key_password = whatever
        private_key_file = ${certdir}/server.key
        certificate_file = ${certdir}/server.pem
        CA_file = ${cadir}/ca.pem
        dh_file = ${certdir}/dh
        random_file = /dev/urandom
        CA_path = ${cadir}
        cipher_list = "DEFAULT"
        make_cert_command = "${certdir}/bootstrap"
        cache {
            enable = no
            max_entries = 255
        }
        verify {
        }
    }
    ttls {
        default_eap_type = md5
        copy_request_to_tunnel = no
        use_tunneled_reply = no
        virtual_server = "inner-tunnel"
    }
    peap {
        default_eap_type = mschapv2
        copy_request_to_tunnel = no
        use_tunneled_reply = no
        virtual_server = "inner-tunnel"
    }
    mschapv2 {
    }
}
I also have two sites enabled, default and inner-tunnel:
default
authorize {
    preprocess
    suffix
    eap {
        ok = return
    }
    expiration
    logintime
    ldap
}
authenticate {
    eap
    ldap
}
inner-tunnel
authorize {
    mschap
    update control {
        Proxy-To-Realm := LOCAL
    }
    eap {
        ok = return
    }
    expiration
    ldap
    logintime
}
authenticate {
    Auth-Type MS-CHAP {
        mschap
    }
    eap
    ldap
}
Here is a sample log I am seeing in the debug logs:
https://gist.github.com/anonymous/10483144
You appear to have removed the symlink between sites-available/inner-tunnel and sites-enabled/inner-tunnel.
If you look in the log, it's complaining that it can't find the inner-tunnel server, which it requires to perform MSCHAPv2 auth in the TLS tunnel of the PEAP authentication.
server {
PEAP: Setting User-Name to emre#domain.com
Sending tunneled request
EAP-Message = 0x0205001a01656d72654071756269746469676974616c2e636f6d
FreeRADIUS-Proxied-To = 127.0.0.1
User-Name = "emre#domain.com"
server inner-tunnel {
No such virtual server "inner-tunnel"
} # server inner-tunnel
Add the symlink back, and list the ldap module at the top of the authorize section in the inner-tunnel server. You will also need to map the attribute holding the user's Cleartext-Password to the User-Password attribute, using the ldap.attrmap file.
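For the attrmap part, a hedged sketch of the kind of mapping line involved (FreeRADIUS 2.x ldap.attrmap syntax; the LDAP attribute name depends on your directory schema):

# Map the directory's password attribute onto the RADIUS User-Password check item
checkItem       User-Password           userPassword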
If you do not have the user's Cleartext-Password in the directory (for example, if it's hashed), then you should use EAP-TTLS-PAP instead, list the ldap module in the authenticate section of the inner-tunnel server, and then add:
if (User-Password) {
    update control {
        Auth-Type := LDAP
    }
}
to the authorize section of the inner-tunnel server.