I am working with Mosquitto, using Redis as the back end to handle both username/password authentication and ACLs. I am using JP Mens' authentication plugin to do this.
mosquitto.conf:
auth_opt_backends redis
auth_plugin /etc/mosquitto/auth-plug.so
auth_opt_redis_host 127.0.0.1
auth_opt_redis_port 6379
auth_opt_redis_userquery GET %s
auth_opt_redis_aclquery GET %s-%s
Everything works fine, but when I start using topics with spaces, publish/subscribe is simply denied.
I have already set this topic value in Redis:
SET "user1-sample topic" 2
Mosquitto logs:
Denied PUBLISH from sample_publisher (d0, q2, r0, m1, 'sample topic', ... (10 bytes))
Is there anything I can do to make this work, like a change to the ACL query or to the Redis data?
Looking at this question and its answers, it seems the following query may work:
auth_opt_redis_aclquery GET "%s-%s"
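Whether that quoted form works depends on how auth-plug tokenizes the query string before it reaches Redis. As a sanity check, you can at least confirm with redis-cli that the key is stored exactly as the plugin would request it (the quotes below are only redis-cli syntax for a key containing a space):
redis-cli GET "user1-sample topic"
"2"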
I am stuck using SSL with IBM WebSphere MQ (9.2).
I am building a client library for MQ, and to get more familiar with MQ on the server side I installed IBM MQ Developer Edition and ran the supplied scripts to create a 'default' MQ server instance. I then:
Created a client connection for the DEV.APP.SVRCONN server-connection channel
Created a personal certificate by using the IBM Key management tool and named it ibmwebspheremq
Enabled SSL on the Queue Manager (QM1) and labelled it ibmwebspheremq
Updated the SSL configuration for the DEV.APP.SVRCONN channel, set the CipherSpec property to TLS 1.2, 256-bit Secure Hash Algorithm, 128-bit AES encryption (TLS_RSA_WITH_AES_128_CBC_SHA256), and made SSL required; an MQSC sketch of this change follows.
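For reference, a sketch of the channel part of those changes expressed in MQSC (run inside runmqsc QM1):
ALTER CHANNEL(DEV.APP.SVRCONN) CHLTYPE(SVRCONN) +
       SSLCIPH(TLS_RSA_WITH_AES_128_CBC_SHA256) SSLCAUTH(REQUIRED)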
Tested my settings with:
amqssslc -l ibmwebspheremq -k C:\ProgramData\IBM\MQ\qmgrs\QM1\ssl\key -c DEV.APP.SVRCONN -x 127.0.0.1 -s TLS_RSA_WITH_AES_128_CBC_SHA256 -m QM1
And that gave me:
Sample AMQSSSLC start
Connecting to queue manager QM1
Using the server connection channel DEV.APP.SVRCONN
on connection name 127.0.0.1.
Using SSL CipherSpec TLS_RSA_WITH_AES_128_CBC_SHA256
Using SSL key repository stem C:\ProgramData\IBM\MQ\qmgrs\QM1\ssl\key
Certificate Label: ibmwebspheremq
No OCSP configuration specified.
MQCONNX ended with reason code 2035
Error details (from log):
The active values of the channel were 'MCAUSER(app) CLNTUSER(Wilko)
SSLPEER(SERIALNUMBER=61:9B:A4:3E,CN=DESKTOP-ROH98N2,C=NL)
SSLCERTI(CN=DESKTOP-ROH98N2,C=NL) ADDRESS(DESKTOP-ROH98N2)'. The
MATCH(RUNCHECK) mode of the DISPLAY CHLAUTH MQSC command can be used to
identify the relevant CHLAUTH record.
ACTION:
Ensure that the application provides a valid user ID and password, or change
the queue manager connection authority (CONNAUTH) configuration to OPTIONAL to
allow client applications to connect which have not supplied a user ID and
password.
----- cmqxrmsa.c : 2086 -------------------------------------------------------
22/11/2021 15:51:37 - Process(15880.45) User(MUSR_MQADMIN) Program(amqrmppa.exe)
Host(DESKTOP-ROH98N2) Installation(Installation1)
VRMF(9.2.3.0) QMgr(QM1)
Time(2021-11-22T14:51:37.594Z)
CommentInsert1(DEV.APP.SVRCONN)
CommentInsert2(15880(1112))
CommentInsert3(127.0.0.1)
AMQ9999E: Channel 'DEV.APP.SVRCONN' to host '127.0.0.1' ended abnormally.
EXPLANATION:
The channel program running under process ID 15880(1112) for channel
'DEV.APP.SVRCONN' ended abnormally. The host name is '127.0.0.1'; in some cases
the host name cannot be determined and so is shown as '????'.
ACTION:
Look at previous error messages for the channel program in the error logs to
determine the cause of the failure. Note that this message can be excluded
completely or suppressed by tuning the "ExcludeMessage" or "SuppressMessage"
attributes under the "QMErrorLog" stanza in qm.ini. Further information can be
found in the System Administration Guide.
----- amqrmrsa.c : 630 --------------------------------------------------------
I am kind of stuck. I also saw in the log that there is PEER-related info dumped, but I am not using the SSLPEER settings (I just want to let everyone connect with the same certificate).
EDIT 2:
Output from RUNMQSC QM1 and command DISPLAY QMGR CONNAUTH:
1 : DISPLAY QMGR CONNAUTH
AMQ8408I: Display Queue Manager details.
QMNAME(QM1) CONNAUTH(DEV.AUTHINFO)
Output from RUNMQSC QM1 and command DISPLAY AUTHINFO(name-from-previous-command):
3 : DISPLAY AUTHINFO(DEV.AUTHINFO)
AMQ8566I: Display authentication information details.
AUTHINFO(DEV.AUTHINFO) AUTHTYPE(IDPWOS)
ADOPTCTX(YES) DESCR( )
CHCKCLNT(REQDADM) CHCKLOCL(OPTIONAL)
FAILDLAY(1) AUTHENMD(OS)
ALTDATE(2021-11-18) ALTTIME(15.09.20)
Output from DISPLAY CHLAUTH(*):
4 : DISPLAY CHLAUTH(*)
AMQ8878I: Display channel authentication record details.
CHLAUTH(DEV.ADMIN.SVRCONN) TYPE(USERMAP)
CLNTUSER(admin) USERSRC(CHANNEL)
AMQ8878I: Display channel authentication record details.
CHLAUTH(DEV.ADMIN.SVRCONN) TYPE(BLOCKUSER)
USERLIST(nobody)
AMQ8878I: Display channel authentication record details.
CHLAUTH(DEV.APP.SVRCONN) TYPE(ADDRESSMAP)
ADDRESS(*) USERSRC(CHANNEL)
CHCKCLNT(REQUIRED)
AMQ8878I: Display channel authentication record details.
CHLAUTH(SYSTEM.ADMIN.SVRCONN) TYPE(ADDRESSMAP)
ADDRESS(*) USERSRC(CHANNEL)
AMQ8878I: Display channel authentication record details.
CHLAUTH(SYSTEM.*) TYPE(ADDRESSMAP)
ADDRESS(*) USERSRC(NOACCESS)
I was expecting not to have to provide a username and password when using certificates. What am I missing here?
Your queue manager is configured to mandate passwords for any client connections that are trying to run with a resolved MCAUSER that is privileged. That is what CHCKCLNT(REQDADM) on your AUTHINFO(DEV.AUTHINFO) does.
In addition, your CHLAUTH rule for the DEV.APP.SVRCONN channel has upgraded this further to mandate passwords for ALL connections using that channel.
If your intent is to have channels that supply a certificate not be subject to this mandate, then you should add a further, more specific CHLAUTH rule, something along these lines:
SET CHLAUTH(DEV.APP.SVRCONN) TYPE(SSLPEERMAP) +
SSLPEER('SERIALNUMBER=61:9B:A4:3E,CN=DESKTOP-ROH98N2,C=NL') +
SSLCERTI('CN=DESKTOP-ROH98N2,C=NL') CHCKCLNT(ASQMGR) USERSRC(CHANNEL)
Bear in mind that if this connection is asserting a privileged user ID, it will still be required to supply a password because of the system-wide setting of CHCKCLNT(REQDADM).
Remember, if you are ever unsure which CHLAUTH rule you are matching against, all the details you saw in the error message can be used to form a DISPLAY CHLAUTH command to discover exactly which rule you have matched. Read more about that in "I’m being blocked by CHLAUTH – how can I work out why?"
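For example, plugging the values reported in the error message above into such a command might look like this (a sketch; adjust the attributes to exactly what your own error message shows):
DISPLAY CHLAUTH(DEV.APP.SVRCONN) MATCH(RUNCHECK) +
       CLNTUSER('Wilko') ADDRESS('DESKTOP-ROH98N2') +
       SSLPEER('SERIALNUMBER=61:9B:A4:3E,CN=DESKTOP-ROH98N2,C=NL') +
       SSLCERTI('CN=DESKTOP-ROH98N2,C=NL')
The single record it returns is the rule your connection actually matched.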
I am testing ksqlDB on AWS EC2 instances with the latest release (Confluent 5.5.1) and have an access problem that I can't solve.
I have a secured Kafka server (SASL_SSL, SASL mode PLAIN), an unsecured Schema Registry (another issue with the Avro serializers, but OK for the moment), and a secured KSQL server and client.
Topics are filled properly with AVRO data (value only, no key) from a JDBC source connector.
I can access the KSQL Server with ksql without issues
I can access KSQL REST API without issues
When I list topics within ksql, I get the correct list.
When I run a push query on a stream, I get messages whenever something is pushed into the topic (by Kafka Connect, in my case).
BUT: when I call "print topic", the client blocks for about 60 seconds and then fails with 'Timeout expired while fetching topic metadata'.
The ksql-kafka.log goes wild with repeated entries like:
[2020-09-02 18:52:46,246] WARN [Consumer clientId=consumer-2, groupId=null] Bootstrap broker ip-10-1-2-10.eu-central-1.compute.internal:9093 (id: -3 rack: null) disconnected (org.apache.kafka.clients.NetworkClient:1037)
The corresponding broker log shows
Sep 2 18:52:44 ip-10-1-6-11 kafka-server-start: [2020-09-02 18:52:44,704] INFO [SocketServer brokerId=1002] Failed authentication with ip-10-1-2-231.eu-central-1.compute.internal/10.1.2.231 (Unexpected Kafka request of type METADATA during SASL handshake.) (org.apache.kafka.common.network.Selector)
This is my ksql-server.properties file:
ksql.service.id= hf_kafka_ksql_001
bootstrap.servers=ip-10-1-11-229.eu-central-1.compute.internal:9093,ip-10-1-6-11.eu-central-1.compute.internal:9093,ip-10-1-2-10.eu-central-1.compute.internal:9093
ksql.streams.state.dir=/var/data/ksqldb
ksql.schema.registry.url=http://ip-10-1-1-22.eu-central-1.compute.internal:8081
ksql.output.topic.name.prefix=ksql-interactive-
ksql.internal.topic.replicas=3
confluent.support.metrics.enable=false
# currently the keystore contains only the ksql server and the certificate chain to the CA
ssl.keystore.location=/var/kafka-ssl/ksql.keystore.jks
ssl.keystore.password=kspassword
ssl.key.password=kspassword
ssl.client.auth=true
# Need to set this to empty, otherwise the REST API is not accessible with the client key.
ssl.endpoint.identification.algorithm=
# currently the truststore contains only the CA certificate
ssl.truststore.location=/var/kafka-ssl/client.truststore.jks
ssl.truststore.password=ctpassword
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="ksql" \
password="ksqlsecret";
listeners=https://0.0.0.0:8088
advertised.listener=https://ip-10-1-2-231.eu-central-1.compute.internal:8088
authentication.method=BASIC
authentication.roles=admin,ksql,cli
authentication.realm=KsqlServerProps
# authentication for producers, needed for ksql commands like "Create Stream"
producer.ssl.endpoint.identification.algorithm=HTTPS
producer.security.protocol=SASL_SSL
producer.sasl.mechanism=PLAIN
producer.ssl.truststore.location=/var/kafka-ssl/client.truststore.jks
producer.ssl.truststore.password=ctpassword
producer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="ksql" \
password="ksqlsecret";
# authentication for consumers, needed for ksql commands like "Create Stream"
consumer.ssl.endpoint.identification.algorithm=HTTPS
consumer.security.protocol=SASL_SSL
consumer.ssl.truststore.location=/var/kafka-ssl/client.truststore.jks
consumer.ssl.truststore.password=ctpassword
consumer.sasl.mechanism=PLAIN
consumer.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
username="ksql" \
password="ksqlsecret";
I call ksql with
ksql --user cli --password test --config-file /var/kafka-ssl/ksql_cli.properties https://ip-10-1-2-231.eu-central-1.compute.internal:8088
This is my ksql client configuration ksql_cli.properties:
security.protocol=SSL
#ssl.client.auth=true
ssl.truststore.location=/var/kafka-ssl/client.truststore.jks
ssl.truststore.password=ctpassword
ssl.keystore.location=/var/kafka-ssl/ksql.keystore.jks
ssl.keystore.password=kspassword
ssl.key.password=kspassword
JAAS config, included as a parameter on service start:
KsqlServerProps {
org.eclipse.jetty.jaas.spi.PropertyFileLoginModule required
file="/var/kafka-ssl/cli.password"
debug="false";
};
with cli.password containing the authentication users and passwords for the ksql client.
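(Concretely, "included as a parameter on service start" means the JAAS file is handed to the JVM via KSQL_OPTS before launching the server; the file paths below are just illustrations of my setup:)
export KSQL_OPTS="-Djava.security.auth.login.config=/var/kafka-ssl/ksql_jaas.conf"
ksql-server-start /etc/ksqldb/ksql-server.properties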
I have tried just about every permutation of keys and settings, but to no avail. Obviously something is wrong in my key management. For me it is surprising that using streams is OK but low-level topic access is not.
Has someone found a solution for this issue? I am really running out of ideas here. Thanks.
Found it! It was easy to overlook: the client's configuration needs, of course, a SASL setting...
security.protocol=SASL_SSL
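For completeness, a sketch of the corrected ksql_cli.properties, reusing the same PLAIN credentials that the server configuration above already uses (your username and password will differ):
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="ksql" \
    password="ksqlsecret";
ssl.truststore.location=/var/kafka-ssl/client.truststore.jks
ssl.truststore.password=ctpassword
ssl.keystore.location=/var/kafka-ssl/ksql.keystore.jks
ssl.keystore.password=kspassword
ssl.key.password=kspassword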
This is my first post on Stack Overflow; I hope I didn't choose the wrong section.
Context :
Kafka heap size is configured in the following file:
/etc/systemd/system/kafka.service
with the following parameter:
Environment="KAFKA_HEAP_OPTS=-Xms6g -Xmx6g"
OS is "CentOS Linux release 7.7.1908".
Kafka is "confluent-kafka-2.12-5.3.1-1.noarch", installed from the following repository:
# Confluent REPO
[Confluent.dist]
name=Confluent repository (dist)
baseurl=http://packages.confluent.io/rpm/5.3/7
gpgcheck=1
gpgkey=http://packages.confluent.io/rpm/5.3/archive.key
enabled=1
[Confluent]
name=Confluent repository
baseurl=http://packages.confluent.io/rpm/5.3
gpgcheck=1
gpgkey=http://packages.confluent.io/rpm/5.3/archive.key
enabled=1
I activated SSL on a 3-machine Kafka cluster a few days ago, and suddenly the following command stopped working:
kafka-topics --bootstrap-server <the.fqdn.of.server>:9093 --describe --topic <TOPIC-NAME>
which returns the following error:
[2019-10-03 11:38:52,790] ERROR Uncaught exception in thread 'kafka-admin-client-thread | adminclient-1':(org.apache.kafka.common.utils.KafkaThread)
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at org.apache.kafka.common.memory.MemoryPool$1.tryAllocate(MemoryPool.java:30)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:112)
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424)
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385)
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572)
at org.apache.kafka.common.network.Selector.poll(Selector.java:483)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539)
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1152)
at java.lang.Thread.run(Thread.java:748)
In the server's log (/var/log/kafka/server.log), the following line appears when I query it via "kafka-topics":
[2019-10-03 11:41:11,913] INFO [SocketServer brokerId=<ID>] Failed authentication with /<ip.of.the.server> (SSL handshake failed) (org.apache.kafka.common.network.Selector)
I was able to use this command properly BEFORE implementing SSL on the cluster. Here is the configuration I'm using. All functionality works properly (consumers, producers, ...) except "kafka-topics":
# SSL Configuration
ssl.truststore.location=<truststore-path>
ssl.truststore.password=<truststore-password>
ssl.keystore.type=<keystore-type>
ssl.keystore.location=<keystore-path>
ssl.keystore.password=<keystore-password>
# Enable SSL between brokers
security.inter.broker.protocol=SSL
# Listeners
listeners=SSL://<fqdn.of.the.server>:9093
advertised.listeners=SSL://<fqdn.of.the.server>:9093
There is no problem with the certificate, which is signed by an internal CA that I added to the truststore specified in the configuration. OpenSSL shows no errors:
openssl s_client -connect <fqdn.of.the.server>:9093 -tls1
>> Verify return code: 0 (ok)
The following command works fine with SSL, thanks to the parameter "-consumer.config client-ssl.properties":
kafka-console-consumer --bootstrap-server <fqdn.of.the.server>:9093 --topic <TOPIC-NAME> -consumer.config client-ssl.properties
"client-ssl.properties" content is :
security.protocol=SSL
ssl.truststore.location=<truststore-path>
ssl.truststore.password=<truststore-password>
Right now, I'm forced to use "--zookeeper", which according to the documentation is deprecated:
--zookeeper <String: hosts> DEPRECATED, The connection string for
the zookeeper connection in the form
host:port. Multiple hosts can be
given to allow fail-over.
And of course, it works fine:
kafka-topics --zookeeper <fqdn.of.the.server>:2181 --describe --topic <TOPIC-NAME>
Topic:<TOPIC-NAME> PartitionCount:3 ReplicationFactor:2 Configs:
Topic: <TOPIC-NAME> Partition: 0 Leader: <ID-3> Replicas: <ID-3>,<ID-1> Isr: <ID-1>,<ID-3>
Topic: <TOPIC-NAME> Partition: 1 Leader: <ID-1> Replicas: <ID-1>,<ID-2> Isr: <ID-2>,<ID-1>
Topic: <TOPIC-NAME> Partition: 2 Leader: <ID-2> Replicas: <ID-2>,<ID-3> Isr: <ID-2>,<ID-3>
So, my question is: why am I unable to use "--bootstrap-server" at the moment? Because of the zookeeper deprecation, I'm worried about not being able to inspect my topics and their details...
I believe that kafka-topics needs the same kind of option as kafka-console-consumer, i.e. "-consumer.config"...
Ask if any additional details are needed.
Thanks a lot, I hope my question is clear and readable.
Blyyyn
I finally found a way to deal with this SSL error. The key is to use the following option:
--command-config client-ssl.properties
This works with most Kafka commands, like kafka-consumer-groups and of course kafka-topics. See the examples below:
kafka-consumer-groups --bootstrap-server <kafka-hostname>:<kafka-port> --group <consumer-group> --topic <topic> --reset-offsets --to-offset <offset> --execute --command-config <ssl-config>
kafka-topics --list --bootstrap-server <kafka-hostname>:<kafka-port> --command-config client-ssl.properties
ssl-config here was "client-ssl.properties"; see the initial post for its content.
Beware: if you connect using an IP address, you'll get an error if the machine certificate doesn't have a subject alternative name with that IP address. Try to have correct DNS resolution and use the FQDN if possible.
Hope this solution will help, cheers!
Blyyyn
Stop your brokers and run the command below (assuming you have more than 1.5 GB of RAM on your server):
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
then start your brokers on all 3 nodes and try again.
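To make the setting survive restarts, put it in the systemd unit the question already references, then reload and restart (a sketch):
# in /etc/systemd/system/kafka.service, under [Service]:
Environment="KAFKA_HEAP_OPTS=-Xmx1G -Xms1G"
sudo systemctl daemon-reload
sudo systemctl restart kafka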
Note that for consumer and producer clients you need to prefix security.protocol accordingly inside your client-ssl.properties.
For Kafka Consumers:
consumer.security.protocol=SASL_SSL
For Kafka Producers:
producer.security.protocol=SASL_SSL
I created a three-node etcd cluster; configuration and startup went fine, but when I check /var/log/messages, it shows:
etcd: rejected connection from "172.17.0.3:43192" (error "tls: first record does not look like a TLS handshake", ServerName "")
How can I fix it?
I have checked the health of etcd :
member 48b0dff99d5c867e is healthy: got healthy result from https://172.17.0.9:2379
member 646dab89331aabab is healthy: got healthy result from https://172.17.0.8:2379
member b45603216bfac234 is healthy: got healthy result from https://172.17.0.10:2379
That looks OK, but when I cat /var/log/messages, it always shows this error:
Jan 12 20:08:57 master etcd: rejected connection from "172.17.0.3:43160" (error "tls: first record does not look like a TLS handshake", ServerName "")
Jan 12 20:08:57 master etcd: rejected connection from "172.17.0.3:43162" (error "tls: oversized record received with length 21536", ServerName "")
I got this message on etcd peer communication when switching from http to https for peer traffic. Apparently etcd persists peer information, which overrides the command-line options, so it continued to use http for peer communication in spite of them.
In the end, since this was a test cluster, I nuked /var/lib/etcd and the new CLI configuration took hold.
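For a disposable test cluster, that reset amounts to something like the following on each member (destructive, and assuming etcd is managed by systemd with the default data directory):
sudo systemctl stop etcd
sudo rm -rf /var/lib/etcd/*
sudo systemctl start etcd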
I don't have a complete solution for you, but I've found a couple of links that might help with further investigation. Read them carefully, try the solutions, and I hope you will resolve the problem.
GitHub issue #9917: check the ETCDCTL_API variable, and especially make sure --endpoints is configured with https.
Runtime reconfiguration: try to reconfigure your etcd cluster by updating/removing/adding etcd members.
nginx ingress: check your nginx ingress annotations, in case you are using nginx.
Google Groups TLS handshake topic: check this thread, especially the comments related to the VAULT_ADDR variable. I will copy-paste the last comment from the thread here:
We were able to get everything to work, after understanding the
permission issues.
You asked: "Please confirm if you are seeing server error messages
before initializing Vault" Upon further examination, I did determine
that the errors were not happening before initializing the Vault.
The problem ended up not being related to VAULT_ADDR, and we used the
value: "http://127.0.0.1:8200"
I have the setup operation scripted, and it appears that not
everything was being run at the proper permissions. At first I was
running the scripts using the "sudo" command, which resulted in the
failures. I discovered that the permissions for the certificate key
were restricted and the file could not be accessed by my user. There
may have been other permission issues as well. But once I switched
user to root, and ran the script, everything behaved correctly.
Thanks
I want to authenticate our customers' MTAs (Exchange for the most part, pointing to us as their smart host) to our relay server (Postfix 2.11.3, CentOS 6.6) and accept mail from only those authenticated MTAs.
I've looked into SASL, but as far as I can tell, its use case is for authenticating inbound MUAs or outbound MTAs.
How does one authenticate inbound MTAs using Postfix?
Thanks,
Nathan
EDIT:
From my main.cf:
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes
smtpd_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtpd_relay_restrictions = permit_sasl_authenticated, reject_unauth_destination
Other useful info:
postconf -a
cyrus
dovecot
From /etc/dovecot/conf.d/10-auth.conf:
auth_mechanisms = plain login
master.cf is unmodified.
SASL is the way to go. Postfix doesn't particularly care if it's an MUA or an MTA connecting to it. If you use smtpd_sasl_auth_enable (along with smtpd_relay_restrictions = permit_sasl_authenticated and a proper SASL configuration), only authenticated connections will be able to use your server as a smarthost relay. Exchange supports this sort of thing, and it should be what you want.
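For reference, a minimal server-side sketch of that in main.cf; note that smtp_sasl_password_maps from the question configures Postfix as a SASL client toward some other relay, and is not needed for authenticating inbound MTAs (the smtpd_ parameters are the relevant ones):
smtpd_sasl_auth_enable = yes
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_security_options = noanonymous
smtpd_relay_restrictions = permit_sasl_authenticated, reject_unauth_destination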
I'm glad you could get it working with Dovecot - I couldn't! Fortunately, I wasn't married to Dovecot. I found this: http://initrd.org/wiki/SMTP_Relay, which I followed and succeeded. I'm just having cert issues now, but I'll take that up separately. Thanks again, Doug