I have worked on setting up authentication for Kafka clients in the past. I have referred to:
https://kafka.apache.org/documentation/#security
https://docs.confluent.io/current/kafka/authentication_sasl/index.html#sasl-configuration-for-kafka-brokers
And other links as well.
As mentioned in the docs, we need a JAAS configuration file to specify the authentication method. I had one like below:
KafkaClient {
    org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required
    unsecuredLoginStringClaim_sub="admin";
};
This basically enables OAuth authentication for Kafka clients.
The question is: can I have multiple authentication methods enabled on a Kafka broker? I mean, can I enable both OAUTHBEARER and PLAIN authentication on Kafka and let each client authenticate with whichever one it chooses?
OK, I found out how to do it.
Multiple SASL mechanisms can be enabled on the broker simultaneously, while each client has to choose one mechanism.
In the JAAS config file, we have to specify the configuration for multiple login modules as below:
KafkaServer {
    org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required
    unsecuredLoginStringClaim_sub="admin";

    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="admin"
    password="admin-secret"
    user_admin="admin-secret"
    user_alice="alice-secret";
};
Then we have to enable the SASL mechanisms in server.properties:
# List of enabled mechanisms, can be more than one
sasl.enabled.mechanisms=OAUTHBEARER,PLAIN
Then specify the SASL security protocol and mechanism for inter-broker communication in server.properties:
# Configure SASL_SSL if SSL encryption is enabled, otherwise configure SASL_PLAINTEXT
security.inter.broker.protocol=SASL_SSL
# Configure the appropriate inter-broker protocol
sasl.mechanism.inter.broker.protocol=PLAIN
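Each client then authenticates with whichever single mechanism it is configured for. As a minimal client-side sketch (assuming a SASL_SSL listener; the truststore path and credentials are placeholders), a client choosing PLAIN would set something like:

security.protocol=SASL_SSL
sasl.mechanism=PLAIN
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=truststore_password
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="alice" \
    password="alice-secret";

A client using OAUTHBEARER would instead set sasl.mechanism=OAUTHBEARER and reference the OAuthBearerLoginModule in sasl.jaas.config.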
Credit to - https://docs.confluent.io/current/kafka/authentication_sasl/index.html#enabling-multiple-sasl-mechanisms
When I checked the definition of WebhookClientConfig in the Kubernetes API, I found comments like this:
// `caBundle` is a PEM encoded CA bundle which will be used to validate the webhook's server certificate.
// If unspecified, system trust roots on the apiserver are used.
// +optional
CABundle []byte `json:"caBundle,omitempty" protobuf:"bytes,2,opt,name=caBundle"`
in WebhookClientConfig
I'd like to know: what exactly are the "system trust roots"?
And I'm afraid the internal signer of the Kubernetes CSR API is not one of them.
It is good practice to use secure network connections. A webhook endpoint in Kubernetes is typically an endpoint in a private network. A custom private CA bundle can be used to generate the TLS certificate and achieve a secure connection within the cluster. See e.g. contacting the webhook.
Webhooks can either be called via a URL or a service reference, and can optionally include a custom CA bundle to use to verify the TLS connection.
This CABundle is optional. See also service reference for how to connect.
If the webhook is running within the cluster, then you should use service instead of url. The service namespace and name are required. The port is optional and defaults to 443. The path is optional and defaults to "/".
Here is an example of a mutating webhook configured to call a service on port "1234" at the subpath "/my-path", and to verify the TLS connection against the ServerName my-service-name.my-service-namespace.svc using a custom CA bundle:
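A sketch of such a configuration, based on the Kubernetes docs (the names are illustrative and the caBundle value is a placeholder):

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: my-webhook-configuration
webhooks:
- name: my-webhook.example.com
  clientConfig:
    # base64-encoded PEM CA bundle used to verify the webhook's serving certificate
    caBundle: <base64-encoded PEM bundle>
    service:
      namespace: my-service-namespace
      name: my-service-name
      path: /my-path
      port: 1234
  sideEffects: None
  admissionReviewVersions: ["v1"]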
In 2020, Microsoft will be addressing CVE-2017-8563, a set of unsafe default configurations for LDAP channel binding and LDAP signing that exist on Active Directory domain controllers and let LDAP clients communicate with them without enforcing LDAP channel binding and LDAP signing.
Due to the above change, LDAP clients that do not enable or support signing will not be able to connect.
LDAP simple binds over non-TLS connections will not work if LDAP signing is required.
This does not mean we have to move all LDAP applications to port 636 and switch to SSL/TLS: when SASL with signing is used, LDAP clients that enable or support signing can connect over port 389.
Hence, LDAP simple binds now need to be converted to a SASL mechanism like DIGEST-MD5, with support for signing added through qop set to "auth-int". However, in large applications LDAP authentication happens at the HTTP server level instead of in a Java program, and in my case that is Apache HTTP Server 2.4.x.
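For reference, at the Java level the conversion would look roughly like this JNDI sketch (a minimal illustration reusing the host and bind credentials from the Apache config below; not production code):

import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;
import java.util.Hashtable;

public class SaslLdapBind {
    public static void main(String[] args) throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://machine1.abcd.com:389");
        // SASL DIGEST-MD5 instead of a simple bind
        env.put(Context.SECURITY_AUTHENTICATION, "DIGEST-MD5");
        env.put(Context.SECURITY_PRINCIPAL, "rohit");
        env.put(Context.SECURITY_CREDENTIALS, "abc123");
        // Request integrity protection (signing) via the quality-of-protection property
        env.put("javax.security.sasl.qop", "auth-int");
        DirContext ctx = new InitialDirContext(env);
        System.out.println("Bound with SASL DIGEST-MD5 + auth-int");
        ctx.close();
    }
}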
Currently, I have the Basic authentication provider configured as below in Apache HTTP Server (Windows & Linux platforms), which needs to be replaced with a SASL authentication mechanism like GSSAPI, GSS-SPNEGO or DIGEST-MD5:
# Basic Authentication provider
<AuthnProviderAlias ldap MyEnterpriseLdap>
    AuthLDAPURL "ldap://machine1.abcd.com:389/CN=Users,DC=abcd,DC=com?sAMAccountName?sub?(objectClass=*)"
    AuthLDAPBindDN "CN=rohit,CN=Users,DC=abcd,DC=com"
    AuthLDAPBindPassword "abc123"
    LDAPReferrals Off
</AuthnProviderAlias>

# Authenticated resources
<LocationMatch ^/+WebApp/+(;.*)?>
    AuthName "WebApp"
    AuthType Basic
    AuthBasicProvider MyEnterpriseLdap
    Require valid-user
</LocationMatch>
I'm looking for POC examples for any of the below 3 options for SASL with Apache & Active Directory:
1. DIGEST-MD5 using mod_auth_digest: This mechanism does not look up users in LDAP, and it has not yet implemented qop "auth-int".
Is there any other third-party Apache 2.4.x module for DIGEST-MD5 that will look up users in LDAP and supports qop "auth-int"?
2. GSSAPI using mod_auth_gssapi: It looks like mod_auth_gssapi makes it possible for Apache HTTP Server to look up users & their credentials in Active Directory and thereby authenticate using the GSSAPI mechanism (see the sketch after this list).
Is there any documentation or POC example stating the required configuration on Windows & Linux for Apache HTTP Server 2.4.x, so as to authenticate using the GSSAPI mechanism with Microsoft Active Directory?
3. mod_authn_sasl & Cyrus SASL: A third-party library which is now evolving for the Windows platform.
I'm looking for concrete documentation/a POC example of implementing this library with any SASL mechanism in Apache (Windows & Linux platforms) against Active Directory.
Or is there any other way to enable SASL for Apache HTTP Server with Active Directory?
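For option 2, a minimal mod_auth_gssapi sketch of the kind of configuration I have in mind (the keytab path and service principal are assumptions, not a verified working setup):

# Hypothetical sketch: GSSAPI authentication via mod_auth_gssapi
LoadModule auth_gssapi_module modules/mod_auth_gssapi.so
<LocationMatch ^/+WebApp/+(;.*)?>
    AuthName "WebApp"
    AuthType GSSAPI
    # Keytab containing the HTTP/machine1.abcd.com@ABCD.COM service principal
    GssapiCredStore keytab:/etc/httpd/http.keytab
    Require valid-user
</LocationMatch>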
Also, I checked SO for "SASL LDAP authentication failure" (here, LDAPS is used) and "In apache httpd configuration, what LDAP SASL mechanism is used during ldap authentication?" (though Apache does not provide SASL as an OOB configuration, SASL is possible using modules like mod_auth_gssapi).
Note:
1. The application already supports LDAP (simple binds) & LDAPS configurations, so we don't want to force users to use LDAPS. Instead we want to enable/implement a SASL mechanism for non-SSL/TLS configurations.
2. Disabling LDAP signing for non-TLS connections is not an option because, when SASL with signing is used, LDAP clients that enable or support signing can connect over port 389.
I've posted this in detail so that it can be helpful for others who are impacted by Microsoft's 2020 update for channel binding & signing.
Thanks.
Microsoft had earlier decided to roll out a security update in 2020 that would enable LDAP channel binding and LDAP server signing as the default configuration.
However, due to customers raising concerns about this update and about SASL limitations (it is not supported by third-party authentication mechanisms like Apache HTTP Server), Microsoft has now rolled back this enforcement and left it to customers to decide whether or not to enforce these settings.
Moreover, Microsoft has also confirmed that there will be no further updates related to the enforcement of LDAP channel binding and LDAP server signing.
That is, the March 10th, 2020 security update on LDAP channel binding and LDAP server signing will be the last update to these settings.
Microsoft has updated its security advisory article accordingly: ADV190023
It is highly recommended to make use of LDAPS instead of LDAP or any SASL protocols.
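If going the LDAPS route with the Apache config from the question, the change would be a sketch like this (the CA certificate path is a placeholder):

# Hypothetical sketch: point the earlier provider at LDAPS instead of LDAP
LDAPTrustedGlobalCert CA_BASE64 /path/to/ad-ca.crt
LDAPVerifyServerCert On
<AuthnProviderAlias ldap MyEnterpriseLdap>
    AuthLDAPURL "ldaps://machine1.abcd.com:636/CN=Users,DC=abcd,DC=com?sAMAccountName?sub?(objectClass=*)"
    AuthLDAPBindDN "CN=rohit,CN=Users,DC=abcd,DC=com"
    AuthLDAPBindPassword "abc123"
    LDAPReferrals Off
</AuthnProviderAlias>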
I am new to Apache Kafka, and here is what I have done so far:
Downloaded kafka_2.12-2.1.0
Made a batch file for ZooKeeper to run the ZooKeeper server:
start kafka_2.12-2.1.0\bin\windows\zookeeper-server-start.bat kafka_2.12-2.1.0\config\zookeeper.properties
Made a batch file for the Apache Kafka server:
start kafka_2.12-2.1.0\bin\windows\kafka-server-start.bat kafka_2.12-2.1.0\config\server.properties
Started a producer using a batch file:
start kafka_2.12-2.1.0\bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic 3drocket-player
It is running fine, but now I am looking at authentication, as I have to implement a consumer with specific auth settings (a requirement from the client): the security protocol is SASL_SSL and the SASL mechanism is GSSAPI.
For this reason, I tried searching the Confluent documentation, but the problem is that it is too abstract about how to take each and every step.
I am looking for detailed configuration steps according to my setup. How do I configure my Kafka server with SASL_SSL and the GSSAPI mechanism? Initially I found that GSSAPI/Kerberos needs a separate server; do I need to install another server? Is there any built-in solution within Confluent Kafka?
Configure a SASL port in server.properties
e.g.:
listeners=SASL_SSL://host.name:port
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
ssl.keystore.location=/path/to/keystore.jks
ssl.keystore.password=keystore_password
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=truststore_password
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1
https://kafka.apache.org/documentation/#security_configbroker
https://kafka.apache.org/documentation/#security_sasl_config
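The broker side also needs a JAAS configuration with a KafkaServer section; a minimal sketch along the lines of the Kafka docs (the keytab path and principal are placeholders):

KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_server.keytab"
    principal="kafka/broker.host.name@EXAMPLE.COM";
};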
Client:
When you run the Kafka client, you need to set these properties.
security.protocol=SASL_SSL
ssl.truststore.location=/path/to/truststore.jks
ssl.truststore.password=truststore_password
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
https://kafka.apache.org/documentation/#security_configclients
https://kafka.apache.org/documentation/#security_sasl_kerberos_clientconfig
Then provide the JAAS configuration:
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="path/to/kafka_client.keytab"
    storeKey=true
    useTicketCache=false
    principal="kafka-client-1@EXAMPLE.COM";
};
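The client picks this file up via a JVM flag, e.g. (the path is a placeholder):

-Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf

Alternatively, newer clients can inline the same login module line in the sasl.jaas.config property instead of using a separate file.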
...
SASL/GSSAPI is for organizations using Kerberos (for example, by using Active Directory). You don’t need to install a new server just for Apache Kafka®. Ask your Kerberos administrator for a principal for each Kafka broker in your cluster and for every operating system user that will access Kafka with Kerberos authentication (via clients and tools).
https://docs.confluent.io/current/kafka/authentication_sasl/authentication_sasl_gssapi.html#kafka-sasl-auth-gssapi
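If you run your own MIT Kerberos KDC rather than Active Directory, creating those principals might look like this sketch (realm, hostname, and keytab path are placeholders):

# Hypothetical sketch with MIT Kerberos kadmin
kadmin: addprinc -randkey kafka/broker.host.name@EXAMPLE.COM
kadmin: ktadd -k /etc/security/keytabs/kafka_server.keytab kafka/broker.host.name@EXAMPLE.COM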
....
I'm developing an application that, in summary, uses MQTT to send sensor values to a broker to later visualize that data in a dashboard web application.
I have five microcontrollers connected to the broker and I've set up a server certificate for the broker and client certificates for each microcontroller.
The problem is that in the mosquitto.conf file I require client certificates for the clients that want to connect, so if I want to subscribe to a topic from my web application I need a client certificate too. I'm trying to find the right approach for accomplishing this, but it seems that having a certificate and key on a machine you cannot control is a big security risk.
It would be ideal if someone knew a way of tweaking the mosquitto configuration file or establishing some kind of exception (maybe similar to ACLs) to only require client certificates from certain clients (in my case, the microcontrollers) and use username/password for the others (web clients). Is it possible to do such a thing?
Any help would be much appreciated
EDIT (regarding @hardillb's answer)
My mosquitto.conf:
pid_file /var/run/mosquitto.pid
persistence true
persistence_location /var/lib/mosquitto/
log_dest file /var/log/mosquitto/mosquitto.log
include_dir /etc/mosquitto/conf.d
per_listener_settings true
listener 9873
protocol websockets
#http_dir /home/jamengual/Desktop/UIB/TFG/mqtt/webAPP
cafile /etc/mosquitto/ca_certificates/ca.crt
keyfile /etc/mosquitto/certs/server.key
certfile /etc/mosquitto/certs/server.crt
listener 8883
cafile /etc/mosquitto/ca_certificates/ca.crt
keyfile /etc/mosquitto/certs/server.key
certfile /etc/mosquitto/certs/server.crt
require_certificate true
The "per_listener_settings true" makes the server go to an active(exited) state.
From the mosquitto.conf guide, talking about authentication mechanisms:
Both certificate and PSK based encryption are configured on a
per-listener basis.
Talking about the per_listener_settings option:
per_listener_settings [ true | false ]
If true, then authentication and access control settings will be controlled on a per-listener basis. The following options are affected:
password_file, acl_file, psk_file, allow_anonymous, allow_zero_length_clientid, auth_plugin, auth_opt_*, auto_id_prefix.
So I understand that the per_listener_settings option might not be necessary for the require_certificate part. However, I still need it to configure the usernames and passwords for the websockets listener.
Is there something wrong with my configuration file?
Link to my question about how to store client certificates and keys in the client's machine
Mosquitto allows you to have multiple listeners per broker that all share the same topic space.
Listeners can support native MQTT, MQTT over Websockets (including Websockets over TLS) and MQTT over TLS.
It also has the per_listener_settings option which allows you to specify different authentication options for different listeners. This option was added in mosquitto version 1.5.
So in this case, you can create an MQTT over TLS listener and use client certificates to authenticate those users (devices), and an MQTT over Websockets listener that will use username/password authentication.
e.g. something like this (but probably using an authentication plugin rather than ACL/password files):
per_listener_settings true
listener 1884
cafile /path/to/ca
certfile /path/to/cert
keyfile /path/to/key
require_certificate true
acl_file /path/to/acl_file
listener 8883
protocol websockets
acl_file /path/to/acl_file
password_file /path/to/password
You can also include the cafile, certfile and keyfile options for the websocket listener to enable Websockets over TLS (but don't use require_certificate, because browser-side client certificate handling for websockets is not a great experience, as browsers don't ask which certificate to use). Normally, though, I would use something like NGINX to proxy the websocket listener and also do the TLS termination.
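A minimal sketch of that NGINX approach (certificate paths, hostname, and the backend port are assumptions, matching the websockets listener above):

# Hypothetical sketch: NGINX terminates TLS and proxies websocket traffic to mosquitto
server {
    listen 443 ssl;
    server_name mqtt.example.com;
    ssl_certificate     /path/to/server.crt;
    ssl_certificate_key /path/to/server.key;
    location / {
        proxy_pass http://127.0.0.1:8883;
        proxy_http_version 1.1;
        # Headers required for the websocket upgrade handshake
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}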
Details of all the options can be found in the mosquitto.conf man page: https://mosquitto.org/man/mosquitto-conf-5.html
I'm using Spring Security for X.509 preauthentication.
To make sure the client sends its certificate per HTTP request, is it necessary to:
Modify pom.xml to set <wantClientAuth> and <needClientAuth> to true
Set Apache's SSLVerifyClient to require
Based on my reading, the web server must tell the client side to send its certificate in order for the client to actually send it. I'm confused about whether both Spring Security AND Apache configuration are required to achieve this.
Spring Security configuration has nothing to do with whether the client sends a certificate or not. That's decided at the SSL protocol level and hence by the negotiation between the client and the server. Your question is a bit unclear in that it refers to a maven pom and an Apache configuration without explaining how your system is set up. Are you running the maven Jetty plugin with an Apache server in front?
Spring Security's X.509 authentication won't work if the SSL connection doesn't terminate at the servlet container. So if you have HTTPS between the client and Apache, and a non-SSL connection from Apache to the servlet container, then the client certificate won't normally be available.
If you are using an AJP connector, then you can configure Apache to pass the certificate on to the back end using the ExportCertData option. If you aren't, you can still take the exported certificate and pass it as a request header (you'll find examples of this elsewhere on SO). You would also need to customize the Spring Security X.509 code to extract the certificate from the header, rather than the standard java property name which it uses by default.
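A sketch of the Apache side of that header-passing approach (the header name is an arbitrary choice; assumes mod_ssl and mod_headers are loaded):

# Hypothetical sketch: require a client certificate and forward it to the backend
SSLVerifyClient require
SSLOptions +ExportCertData +StdEnvVars
# Copy the PEM client certificate into a request header for the servlet container
RequestHeader set X-Client-Cert "%{SSL_CLIENT_CERT}s"

On the Spring Security side, the customization would then read the X-Client-Cert header instead of the standard javax.servlet.request.X509Certificate request attribute.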