RabbitMQ TLS Authentication - ssl
The task: several web services need to authenticate to RabbitMQ using client certificates.
Environment:
Erlang 22.3.3
RabbitMQ 3.8.3
There is no point in describing their installation. Here is what we did next.
Following the official TLS guide (https://www.rabbitmq.com/ssl.html), we generate certificates with tls-gen:
git clone https://github.com/michaelklishin/tls-gen tls-gen
cd tls-gen/basic
CN=client PASSWORD=123 make
make verify
make info
We copy the generated certificates into place and change their owner:
mv testca/ /etc/rabbitmq/
mv server/ /etc/rabbitmq/
mv client/ /etc/rabbitmq/
chown -R rabbitmq: /etc/rabbitmq/testca
chown -R rabbitmq: /etc/rabbitmq/server
chown -R rabbitmq: /etc/rabbitmq/client
We bring the configuration file (/etc/rabbitmq/rabbitmq.config) to the following form:
[
{ssl, [{versions, ['tlsv1.2', 'tlsv1.1', tlsv1]}]},
{rabbit, [
{ssl_listeners, [5671]},
{auth_mechanisms, ['PLAIN', 'AMQPLAIN', 'EXTERNAL']},
{ssl_cert_login_from, 'client'},
{ssl_options, [{cacertfile, "/etc/rabbitmq/testca/cacert.pem"},
{certfile, "/etc/rabbitmq/server/cert.pem"},
{keyfile, "/etc/rabbitmq/server/key.pem"},
{verify, verify_peer},
{fail_if_no_peer_cert, true}]}]}
].
We start the server and try to connect from the client. We get this error:
2020-05-18 17:21:57.166 +03:00 [ERR] Failed to connect to broker 10.10.11.16, port 5671, vhost dmz
RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the specified endpoints were reachable
---> RabbitMQ.Client.Exceptions.PossibleAuthenticationFailureException: Possibly caused by authentication failure
---> RabbitMQ.Client.Exceptions.OperationInterruptedException: The AMQP operation was interrupted: AMQP close-reason, initiated by Library, code=0, text='End of stream', classId=0, methodId=0, cause=System.IO.EndOfStreamException: Reached the end of the stream. Possible authentication failure.
at RabbitMQ.Client.Impl.InboundFrame.ReadFrom(Stream reader)
at RabbitMQ.Client.Impl.SocketFrameHandler.ReadFrame()
at RabbitMQ.Client.Framing.Impl.Connection.MainLoopIteration()
at RabbitMQ.Client.Framing.Impl.Connection.MainLoop()
at RabbitMQ.Client.Impl.SimpleBlockingRpcContinuation.GetReply(TimeSpan timeout)
at RabbitMQ.Client.Impl.ModelBase.ConnectionStartOk(IDictionary`2 clientProperties, String mechanism, Byte[] response, String locale)
at RabbitMQ.Client.Framing.Impl.Connection.StartAndTune()
--- End of inner exception stack trace ---
at RabbitMQ.Client.Framing.Impl.Connection.StartAndTune()
at RabbitMQ.Client.Framing.Impl.Connection.Open(Boolean insist)
at RabbitMQ.Client.Framing.Impl.Connection..ctor(IConnectionFactory factory, Boolean insist, IFrameHandler frameHandler, String clientProvidedName)
at RabbitMQ.Client.Framing.Impl.ProtocolBase.CreateConnection(IConnectionFactory factory, Boolean insist, IFrameHandler frameHandler, String clientProvidedName)
at RabbitMQ.Client.ConnectionFactory.CreateConnection(IEndpointResolver endpointResolver, String clientProvidedName)
--- End of inner exception stack trace ---
at RabbitMQ.Client.ConnectionFactory.CreateConnection(IEndpointResolver endpointResolver, String clientProvidedName)
at RabbitMQ.Client.ConnectionFactory.CreateConnection(String clientProvidedName)
at EasyNetQ.ConnectionFactoryWrapper.CreateConnection()
at EasyNetQ.PersistentConnection.TryToConnect()
In the rabbitmq log:
2020-05-18 17:24:59.880 [info] <0.3442.0> accepting AMQP connection <0.3442.0> (10.10.15.14:1561 -> 10.10.11.16:5671)
2020-05-18 17:25:02.887 [error] <0.3442.0> closing AMQP connection <0.3442.0> (10.10.15.14:1561 -> 10.10.11.16:5671):
{handshake_error, starting, 0, {error, function_clause, 'connection.start_ok', [{rabbit_ssl, peer_cert_auth_name, [client, <<48,130,3,42,48,130,2,18,160,3,2,1,2,2,1,2,...>>]}, ...]}}
(remainder of the raw DER certificate bytes omitted)
UPDATE
We removed ssl_cert_login_from (its value 'client' is not one the broker understands, hence the function_clause in rabbit_ssl:peer_cert_auth_name above) and moved the TLS versions under ssl_options. New rabbitmq.config:
[
{rabbit,[
{auth_backends, [rabbit_auth_backend_internal]},
{auth_mechanisms, ['PLAIN', 'AMQPLAIN', 'EXTERNAL']},
{ssl_listeners,[5671]},
{ssl_options,[
{versions,['tlsv1.2', 'tlsv1.1']},
{cacertfile, "/etc/rabbitmq/testca/cacert.pem"},
{certfile, "/etc/rabbitmq/server/cert.pem"},
{keyfile, "/etc/rabbitmq/server/key.pem"},
{verify,verify_peer},
{fail_if_no_peer_cert,true}]}
]}
].
New error:
2020-05-18 18:48:56.681 [info] <0.1410.0> Connection <0.1410.0> (10.10.15.14:52744 -> 10.10.11.16:5671) has a client-provided name: Viber.CallbackService.dll
2020-05-18 18:48:56.682 [error] <0.1410.0> Error on AMQP connection <0.1410.0> (10.10.15.14:52744 -> 10.10.11.16:5671, state: starting):
EXTERNAL login refused: user 'O=client,CN=client' - invalid credentials
Have you enabled the SSL authentication mechanism plugin and restarted the broker?
sudo rabbitmq-plugins enable rabbitmq_auth_mechanism_ssl
sudo systemctl restart rabbitmq-server
You may also try setting the following in rabbitmq.conf:
ssl_cert_login_from = common_name
ssl_options.password = 123
And create a user called 'client' in the broker, to match the CN in your certificate.
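For illustration, here is a minimal client-side sketch of EXTERNAL (certificate) authentication using Python and pika rather than the .NET/EasyNetQ client from the question; the broker address, vhost, and certificate paths come from the question, everything else is an assumption. It presumes the rabbitmq_auth_mechanism_ssl plugin is enabled, ssl_cert_login_from = common_name is set (in the classic config format that is {ssl_cert_login_from, common_name} inside the rabbit section), and a user named client exists (e.g. rabbitmqctl add_user client <any-password>; EXTERNAL ignores the password) with permissions on the vhost:

import ssl
import pika

# trust the tls-gen CA and present the client certificate generated earlier
context = ssl.create_default_context(cafile="/etc/rabbitmq/testca/cacert.pem")
context.load_cert_chain("/etc/rabbitmq/client/cert.pem",
                        "/etc/rabbitmq/client/key.pem",
                        password="123")  # PASSWORD=123 from the tls-gen step
context.check_hostname = False  # tls-gen certs are not issued for the broker's IP

params = pika.ConnectionParameters(
    host="10.10.11.16",
    port=5671,
    virtual_host="dmz",
    ssl_options=pika.SSLOptions(context),
    # SASL EXTERNAL: no username/password; the identity comes from the cert CN
    credentials=pika.credentials.ExternalCredentials())

connection = pika.BlockingConnection(params)
print("authenticated as the certificate's CN")
connection.close()

If this connects, the broker resolved the certificate's CN to the user client through the internal auth backend.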
Related
enabling SSL for Hyperledger Fabric couchdb
I want to use CouchDB (v2.3.1) with SSL enabled, so I added an [ssl] section to the /opt/couchdb/etc/local.d/docker.ini file as shown below:

[ssl]
port = 6984
enable = true
cert_file = /etc/hyperledger/fabric/tls/server.crt
key_file = /etc/hyperledger/fabric/tls/server.key
cacert_file = /etc/hyperledger/fabric/tls/ca.crt

[daemons]
httpsd = {couch_httpd, start_link, [https]}

[admins]
Admin = ...

[couchdb]
uuid = ...

But I can't access the web UI with https! I get this error:

This site can't provide a secure connection. "IP" uses an unsupported protocol. ERR_SSL_VERSION_OR_CIPHER_MISMATCH
Unsupported protocol: the client and server don't support a common SSL protocol version or cipher suite.

These are the logs:

[error] 2020-05-17T06:52:18.046389Z nonode@nohost <0.19077.3> -------- SSL: hello: tls_handshake.erl:127:Fatal error: handshake failure - malformed_handshake_data
[error] 2020-05-17T06:52:18.046426Z nonode@nohost <0.18899.3> -------- application: mochiweb, "Accept failed error", "{error,{tls_alert,\"handshake failure\"}}"
[error] 2020-05-17T06:52:18.046508Z nonode@nohost <0.18899.3> -------- CRASH REPORT Process (<0.18899.3>) with 0 neighbors exited with reason: {error,accept_failed} at mochiweb_acceptor:init/4(line:75) <= proc_lib:init_p_do_apply/3(line:247); initial_call: {mochiweb_acceptor,init,['Argument__1','Argument__2',...]}, ancestors: [https,couch_secondary_services,couch_sup,<0.202.0>], messages: [], links: [<0.253.0>], dictionary: [], trap_exit: false, status: running, heap_size: 1598, stack_size: 27, reductions: 954

Can somebody please help me?
I found the solution and wrote a post about it: https://medium.com/@pouyashojaei85/enabling-ssl-for-docker-couchdb-container-127388eca1a8
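For anyone hitting the same ERR_SSL_VERSION_OR_CIPHER_MISMATCH: it usually means the client and server share no TLS version or cipher suite. A rough way to probe which TLS versions the listener actually accepts is a few lines of Python (host and port are placeholders for the CouchDB container):

import socket
import ssl

HOST, PORT = "127.0.0.1", 6984  # replace with the CouchDB host and [ssl] port

for name, version in [("TLSv1", ssl.TLSVersion.TLSv1),
                      ("TLSv1.1", ssl.TLSVersion.TLSv1_1),
                      ("TLSv1.2", ssl.TLSVersion.TLSv1_2)]:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False   # we only care whether the handshake completes
    ctx.verify_mode = ssl.CERT_NONE
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock) as tls:
                print(name, "accepted, negotiated", tls.version())
    except (ssl.SSLError, OSError) as exc:
        print(name, "rejected:", exc)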
syslog-ng unable to send logs via tls - handshake error
Unable to send logs to a syslog-ng docker container using TLS (6514). Logs are transmitted successfully without TLS on port 601.

305ef6ab4973 syslog-ng[1]: Syslog connection accepted; fd='14', client='AF_INET(172.17.0.3:35362)', local='AF_INET(0.0.0.0:6514)'
305ef6ab4973 syslog-ng[1]: SSL error while reading stream; tls_error='SSL routines:tls_process_client_certificate:peer did not return a certificate', location='/etc/syslog-ng/syslog-ng.conf:35:9'
305ef6ab4973 syslog-ng[1]: I/O error occurred while reading; fd='14', error='Connection reset by peer (104)'
305ef6ab4973 syslog-ng[1]: Syslog connection closed; fd='14', client='AF_INET(172.17.0.3:35362)', local='AF_INET(0.0.0.0:6514)'

Environment setup: Debian 9 VM, docker, latest syslog-ng.

syslog-ng version:

root@305ef6ab4973:/etc/syslog-ng# syslog-ng --version
syslog-ng 3 (3.21.1)
Config version: 3.21
Installer-Version: 3.21.1
Revision: 3.21.1-1
Compile-Date: May 3 2019 09:11:19
Module-Directory: /usr/lib/syslog-ng/3.21
Module-Path: /usr/lib/syslog-ng/3.21
Include-Path: /usr/share/syslog-ng/include
Available-Modules: cryptofuncs,kvformat,tfgetent,add-contextual-data,afsql,afuser,xml,riemann,json-plugin,geoip-plugin,redis,pacctformat,afamqp,pseudofile,hook-commands,examples,stardate,geoip2-plugin,tags-parser,system-source,graphite,date,kafka,snmptrapd-parser,confgen,afprog,basicfuncs,afsmtp,http,linux-kmsg-format,map-value-pairs,appmodel,disk-buffer,affile,afsocket,afstomp,afmongodb,csvparser,mod-java,syslogformat,cef,mod-python,sdjournal,dbparser
Enable-Debug: off
Enable-GProf: off
Enable-Memtrace: off
Enable-IPv6: on
Enable-Spoof-Source: on
Enable-TCP-Wrapper: on
Enable-Linux-Caps: on
Enable-Systemd: on

Generated syslog messages with loggen on port 601 (non-TLS):

root@e41017b55dfa:# loggen --stream 172.17.0.2 601
count=1816, rate = 915.72 msg/sec
count=2274, rate = 914.78 msg/sec
count=2732, rate = 914.93 msg/sec

Logs are written to the log file for the 601 connection:

sudo tail -n 10 syslog-ng/logs/syslog-ng/tcp_601.log
Jul 21 10:35:11 ip-172-17-0-3 prg00000[1234]: seq: 0000004294, thread: 0000, runid: 1563705308, stamp: 2019-07-21T10:35:11 PADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADDPADD
(two more identical lines follow, seq 0000004295 and 0000004296)

syslog messages log for the 601 connection:

Jul 21 10:39:14 305ef6ab4973 syslog-ng[1]: Syslog connection accepted; fd='18', client='AF_INET(****)', local='AF_INET(0.0.0.0:601)'
Jul 21 10:39:44 305ef6ab4973 syslog-ng[1]: Syslog connection closed; fd='18', client='AF_INET(****)', local='AF_INET(0.0.0.0:601)'

When I'm using TLS, I receive the following error client-side:

root@e41017b55dfa:# loggen --use-ssl 172.17.0.2 6514
error [loggen_helper.c:open_ssl_connection:247] SSL connect failed
139771316958976:error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure:../ssl/record/rec_layer_s3.c:1407:SSL alert number 40
error [ssl_plugin.c:active_thread_func:313] can not connect to 172.17.0.2:6514 (0x5566c837e800)
Total runtime = 0.500195, count = 0

Server-side:

305ef6ab4973 syslog-ng[1]: Syslog connection accepted; fd='14', client='AF_INET(172.17.0.3:35362)', local='AF_INET(0.0.0.0:6514)'
305ef6ab4973 syslog-ng[1]: SSL error while reading stream; tls_error='SSL routines:tls_process_client_certificate:peer did not return a certificate', location='/etc/syslog-ng/syslog-ng.conf:35:9'
305ef6ab4973 syslog-ng[1]: I/O error occurred while reading; fd='14', error='Connection reset by peer (104)'
305ef6ab4973 syslog-ng[1]: Syslog connection closed; fd='14', client='AF_INET(172.17.0.3:35362)', local='AF_INET(0.0.0.0:6514)'

Connection test using openssl:

root@e41017b55dfa:/etc/syslog-ng# openssl s_client -connect 172.17.0.2:6514
CONNECTED(00000003)
depth=1 C = IL, ST = ***, L = ***, O = ***, OU = IT, CN = *** Syslog CA, emailAddress = ***@***.com
verify return:1
depth=0 C = IL, ST = ***, L = ***, O = ***, OU = IT, CN = 172.17.0.2
verify return:1
140233519988800:error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure:../ssl/record/rec_layer_s3.c:1407:SSL alert number 40
---
Certificate chain
 0 s:/C=IL/ST=***/L=***/O=***/OU=IT/CN=172.17.0.2
   i:/C=IL/ST=***/L=***/O=***/OU=IT/CN=*** Syslog CA/emailAddress=***@***.com
 1 s:/C=IL/ST=***/L=***/O=***/OU=IT/CN=*** Syslog CA/emailAddress=***@***.com
   i:/C=IL/ST=***/L=***/O=***/OU=IT/CN=*** Syslog CA/emailAddress=***@***.com
---
Server certificate
-----BEGIN CERTIFICATE-----
MIID7TCCAtWgAwIBAgIBATANBgkqhkiG9w0BAQsFADCBkTELMAkGA1UEBhMCSUwx
ETAPBgNVBAgMCFRlbCBBdml2MREwDwYDVQQHDAhUZWwgQXZpdjEOMAwGA1UECgwF
QXJtaXMxCzAJBgNVBAsMAklUMRgwFgYDVQQDDA9Bcm1pcyBTeXNsb2cgQ0ExJTAj
BgkqhkiG9w0BCQEWFm9tcmkudHNhYmFyaUBhcm1pcy5jb20wHhcNMTkwNzE4MTAx
MzQ3WhcNMjAwNzE3MTAxMzQ3WjBlMQswCQYDVQQGEwJJTDERMA8GA1UECAwIVGVs
IEF2aXYxETAPBgNVBAcMCFRlbCBBdml2MQ4wDAYDVQQKDAVBcm1pczELMAkGA1UE
CwwCSVQxEzARBgNVBAMMCjE3Mi4xNy4wLjIwggEiMA0GCSqGSIb3DQEBAQUAA4IB
DwAwggEKAoIBAQDSVTVKoNlgPk1q9MgbPF1ndDIhTFsXp62XPdNNWyP79GGunPlM
o+oqJJJh+SDP/0BUivyvYdH4gFdZ40RZ138CQz1L+i9sBK4alizRIqxWT379lnYd
nieMYP25uBQPw8TothegtHA30+PFg/qEVd/3bQQVFJ/z0Q6GsOkw/Qc4kS+hhP6B
dny2ul8yyS4oNeM4rMo/1/F8NKsdOlt/4St2aVo5kuuyosOdKaaXzzqeVI7QdqaJ
kuMwC5sGATDZ2qwr9TEgBVzZs5sFixOaA0vTb7FqVOfcBq1Crrf9qnNTzQXzjjjH
3eQ4tZXbVOTopxwR7zgqO/nR/3IAvVnirsjNAgMBAAGjezB5MAkGA1UdEwQCMAAw
LAYJYIZIAYb4QgENBB8WHU9wZW5TU0wgR2VuZXJhdGVkIENlcnRpZmljYXRlMB0G
A1UdDgQWBBSMTVONnqW+gof7SKD0V6uPZLoOdDAfBgNVHSMEGDAWgBTaK4jNVP3+
1V4wUSM+Gx7iYSjFKTANBgkqhkiG9w0BAQsFAAOCAQEAQqOJbvHcjG6pYbmtwexJ
C56a1qE0C9fjIlHY+EKuE1e/jTfIu1opggwTbov5BS9MHDK0As4JkwAn/36dbGKt
SS3K/JXvnM8Ag5tv09zVgSKwYNRpuVTi52shn4ELIktVCUc2H7XW1W9r1GsjkXCV
WhtJRP9lVJi77gxICTC5x39feA/p3BkRUIRwWPY2J8quJ37FTNBGMeX8lVAW4ipR
UbG3DQgj2r/HonjmZ5kWH8Bd46RZhpE7Nt4UGRutCnyi9jo3R7PDQW1D0rhRSByO
w/uTToHfaj7ZjGb9CXeV7LRuf6z5px881puqUsWYSeEh0Tm3AnTVNOzzvKE2Pp3*****
-----END CERTIFICATE-----
subject=/C=IL/ST=***/L=***/O=***/OU=IT/CN=172.17.0.2
issuer=/C=IL/ST=***/L=***/O=***/OU=IT/CN=*** Syslog CA/emailAddress=***@***.com
---
No client certificate CA names sent
Client Certificate Types: RSA sign, DSA sign, ECDSA sign
Requested Signature Algorithms: RSA+SHA512:DSA+SHA512:ECDSA+SHA512:RSA+SHA384:DSA+SHA384:ECDSA+SHA384:RSA+SHA256:DSA+SHA256:ECDSA+SHA256:RSA+SHA224:DSA+SHA224:ECDSA+SHA224:RSA+SHA1:DSA+SHA1:ECDSA+SHA1
Shared Requested Signature Algorithms: RSA+SHA512:DSA+SHA512:ECDSA+SHA512:RSA+SHA384:DSA+SHA384:ECDSA+SHA384:RSA+SHA256:DSA+SHA256:ECDSA+SHA256:RSA+SHA224:DSA+SHA224:ECDSA+SHA224:RSA+SHA1:DSA+SHA1:ECDSA+SHA1
Peer signing digest: SHA512
Server Temp Key: X25519, 253 bits
---
SSL handshake has read 2487 bytes and written 281 bytes
Verification: OK
---
New, TLSv1.2, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-GCM-SHA384
    Session-ID:
    Session-ID-ctx:
    Master-Key: 02FB22BADE731CF64439D69D1F1991F3FF3BD7C4E44AF531308DD021659B1220B8BEBE94C9934659734AB10D4AF25999
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    Start Time: 1563704954
    Timeout   : 7200 (sec)
    Verify return code: 0 (ok)
    Extended master secret: yes
---
root@e41017b55dfa:/etc/syslog-ng#

syslog-ng client config:

@version: 3.18
@include "scl.conf"
source s_local { internal(); };
source s_network { default-network-drivers( ); };
destination test2_d {
    network("172.17.0.2" port(6514)
        transport("tls")
        tls( ca-dir("/etc/syslog-ng/ca.d")
             key-file("/etc/syslog-ng/cert.d/clientkey.pem")
             cert-file("/etc/syslog-ng/cert.d/clientcert.pem") )
    );
};
log { source(s_local); destination(test2_d); };
destination d_local {
    file("/var/log/messages");
    file("/var/log/messages-kv.log" template("$ISODATE $HOST $(format-welf --scope all-nv-pairs)\n") frac-digits(3));
};
log { source(s_local); source(s_network); destination(d_local); };

syslog-ng server config:

@version: 3.18
@include "scl.conf"
source s_local { internal(); };
source test1_s {
    network( transport("tcp") port(601) flags(syslog-protocol) );
};
destination test1_d {
    file("/var/log/syslog-ng/tcp_601.log" dir_group(root) group(root) create_dirs(yes) dir_perm(0777) perm(0666) owner(root) dir_owner(root));
};
log { source("test1_s"); destination("test1_d"); };
source test2_s {
    network( ip(0.0.0.0) port(6514)
        transport("tls")
        tls( key-file("/etc/syslog-ng/cert.d/serverkey.pem")
             cert-file("/etc/syslog-ng/cert.d/servercert.pem")
             ca-dir("/etc/syslog-ng/ca.d") )
    );
};
destination test2_d {
    file("/var/log/syslog-ng/tls_6514.log" dir_group(root) group(root) create_dirs(yes) dir_perm(0777) perm(0666) owner(root) dir_owner(root));
};
log { source("test2_s"); destination("test2_d"); };
destination d_local {
    file("/var/log/messages");
    file("/var/log/messages-kv.log" template("$ISODATE $HOST $(format-welf --scope all-nv-pairs)\n") frac-digits(3));
};
log { source(s_local); destination(d_local); };
You've tested your configuration using loggen --use-ssl and openssl s_client without specifying a client certificate (loggen does not support client certs; openssl s_client does). The error message on the server side is about the missing client cert: peer did not return a certificate. If you prefer not to use mutual authentication, you can make it optional by adding the peer-verify(optional-trusted) TLS option to the server config:

source test2_s {
    network( port(6514)
        transport("tls")
        tls( key-file("/etc/syslog-ng/cert.d/serverkey.pem")
             cert-file("/etc/syslog-ng/cert.d/servercert.pem")
             ca-dir("/etc/syslog-ng/ca.d")
             peer-verify(optional-trusted) )
    );
};

See the syslog-ng Admin Guide - TLS options.
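If you do want mutual authentication instead, the test tool must present the client certificate: loggen cannot, but openssl s_client can (with -cert and -key), and so can a few lines of Python. A rough sketch reusing the certificate paths from the client config above; the syslog line itself is just an illustrative RFC3164-style message:

import socket
import ssl

# trust the CA directory and present the client certificate, mirroring
# the tls() block of the syslog-ng client config
context = ssl.create_default_context(ssl.Purpose.SERVER_AUTH,
                                     capath="/etc/syslog-ng/ca.d")
context.load_cert_chain("/etc/syslog-ng/cert.d/clientcert.pem",
                        "/etc/syslog-ng/cert.d/clientkey.pem")
context.check_hostname = False  # the server cert names an IP, not a hostname

with socket.create_connection(("172.17.0.2", 6514)) as sock:
    with context.wrap_socket(sock) as tls:
        tls.sendall(b"<13>Jul 21 10:35:11 testhost test: hello over TLS\n")

If the server writes the message to the TLS log file instead of reporting 'peer did not return a certificate', mutual TLS is working.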
Logstash - Storing RabbitMQ Logs - Multiline
I have been using ELK for about six months now, and it's been great so far. I'm on logstash version 6.2.3. RabbitMQ makes up the heart of my distributed system (RabbitMQ is itself distributed), and as such it is very important that I track the logs of RabbitMQ. Most other conversations on this forum seem to use RabbitMQ as an input/output stage, but I just want to monitor the logs. The only problem I'm finding is that RabbitMQ has multiline logging, like so:

=WARNING REPORT==== 19-Nov-2017::06:53:14 ===
closing AMQP connection <0.27161.0> (...:32799 -> ...:5672, vhost: '/', user: 'worker'):
client unexpectedly closed TCP connection

=WARNING REPORT==== 19-Nov-2017::06:53:18 ===
closing AMQP connection <0.22410.0> (...:36656 -> ...:5672, vhost: '/', user: 'worker'):
client unexpectedly closed TCP connection

=WARNING REPORT==== 19-Nov-2017::06:53:19 ===
closing AMQP connection <0.26045.0> (...:55427 -> ...:5672, vhost: '/', user: 'worker'):
client unexpectedly closed TCP connection

=WARNING REPORT==== 19-Nov-2017::06:53:20 ===
closing AMQP connection <0.5484.0> (...:47740 -> ...:5672, vhost: '/', user: 'worker'):
client unexpectedly closed TCP connection

I have found a brilliant code example here, which I have stripped down to just the filter stage, so that it looks like this:

filter {
  if [type] == "rabbitmq" {
    codec => multiline {
      pattern => "^="
      negate => true
      what => "previous"
    }
    grok {
      type => "rabbit"
      patterns_dir => "patterns"
      pattern => "^=%{WORD:report_type} REPORT=+ %{RABBIT_TIME:time_text} ===.*$"
    }
    date {
      type => "rabbit"
      time_text => "dd-MMM-yyyy::HH:mm:ss"
    }
    mutate {
      type => "rabbit"
      add_field => [ "message", "%{@message}" ]
    }
    mutate {
      gsub => [
        "message", "^=[A-Za-z0-9: =-]+=\n", "",
        # interpret message header text as "severity"
        "report_type", "INFO", "1",
        "report_type", "WARNING", "3",
        "report_type", "ERROR", "4",
        "report_type", "CRASH", "5",
        "report_type", "SUPERVISOR", "5"
      ]
    }
  }
}

But when I save this to a conf file and restart logstash, I get the following error:

[2018-04-04T07:01:57,308][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2018-04-04T07:01:57,316][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2018-04-04T07:01:57,841][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.3"}
[2018-04-04T07:01:57,973][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-04-04T07:01:58,037][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, { at line 3, column 15 (byte 54) after filter {\n if [type] == \"rabbitmq\" {\n codec ", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:42:in `compile_imperative'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:50:in `compile_graph'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:12:in `block in compile_sources'", "org/jruby/RubyArray.java:2486:in `map'", "/usr/share/logstash/logstash-core/lib/logstash/compiler.rb:11:in `compile_sources'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:51:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:169:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:40:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:315:in `block in converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:312:in `block in converge_state'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:299:in `converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:166:in `block in converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:164:in `converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:90:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:348:in `block in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in initialize'"]}

Any ideas what the issue could be? Thanks,
If you are sending your logs from the RabbitMQ server to Logstash with Filebeat, you should configure the multiline handling there.
The answer is multiline indeed. The goal is to merge the lines starting with something other than a date with the previous line that started with a date. This is how:

multiline.pattern: '^\d{4}-\d{2}-\d{2}'
multiline.negate: true
multiline.match: after

Note: I previously tried to merge any lines starting with space characters (^\s+), but that did not work because not all warning or error messages started with a space.

Complete filebeat input (7.5.2 format):

filebeat:
  inputs:
    - exclude_lines:
        - 'Failed to publish events caused by: EOF'
      fields:
        type: rabbitmq
      fields_under_root: true
      paths:
        - /var/log/rabbitmq/*.log
      tail_files: false
      timeout: 60s
      type: log
      multiline.pattern: '^\d{4}-\d{2}-\d{2}'
      multiline.negate: true
      multiline.match: after

Logstash patterns:

# RabbitMQ
RABBITMQDATE %{MONTHDAY}-%{MONTH}-%{YEAR}::%{HOUR}:%{MINUTE}:%{SECOND}
RABBITMQLINE (?m)=%{DATA:severity} %{DATA}==== %{RABBITMQDATE:timestamp} ===\n%{GREEDYDATA:message}

I am sure they had good reasons to log in this odd way in RMQ 3.7.x, but without knowing them, it really makes our life hard.
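If it helps to see what those three options do, here is a toy Python rendering of the merge logic (not Filebeat's actual implementation; the sample lines just imitate the date-prefixed 3.7+ log format):

import re

sample = [
    "2019-11-19 06:53:14.260 [warning] <0.27161.0> closing AMQP connection",
    "client unexpectedly closed TCP connection",
    "2019-11-19 06:53:18.110 [warning] <0.22410.0> closing AMQP connection",
]

starts_event = re.compile(r"^\d{4}-\d{2}-\d{2}")  # multiline.pattern

events = []
for line in sample:
    # negate: true, match: after -> a line NOT matching the pattern is
    # appended after the event opened by the last matching line
    if starts_event.match(line) or not events:
        events.append(line)
    else:
        events[-1] += "\n" + line

for event in events:
    print(repr(event))  # two events: the second line was merged into the first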
You can't use a codec as a filter plugin. Codecs can only be used in input or output plugins (see the doc), with the codec configuration option. You'll have to put your multiline codec in the input plugin that's producing your rabbitmq logs.
Kafka SASL zookeeper authentication
I am facing the following error while enabling SASL on Zookeeper and broker authentication.

[2017-04-18 15:54:10,476] DEBUG Size of client SASL token: 0 (org.apache.zookeeper.server.ZooKeeperServer)
[2017-04-18 15:54:10,476] ERROR cnxn.saslServer is null: cnxn object did not initialize its saslServer properly. (org.apache.zookeeper.server.ZooKeeperServer)
[2017-04-18 15:54:10,478] ERROR SASL authentication failed using login context 'Client'. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2017-04-18 15:54:10,478] DEBUG Received event: WatchedEvent state:AuthFailed type:None path:null (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] INFO zookeeper state changed (AuthFailed) (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] DEBUG Leaving process event (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] DEBUG Closing ZkClient... (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,478] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2017-04-18 15:54:10,478] DEBUG Closing ZooKeeper connected to localhost:2181 (org.I0Itec.zkclient.ZkConnection)
[2017-04-18 15:54:10,478] DEBUG Close called on already closed client (org.apache.zookeeper.ZooKeeper)
[2017-04-18 15:54:10,478] DEBUG Closing ZkClient...done (org.I0Itec.zkclient.ZkClient)
[2017-04-18 15:54:10,480] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkAuthFailedException: Authentication failure
    at org.I0Itec.zkclient.ZkClient.waitForKeeperState(ZkClient.java:947)
    at org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:924)
    at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1231)
    at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
    at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
    at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:79)
    at kafka.utils.ZkUtils$.apply(ZkUtils.scala:61)
    at kafka.server.KafkaServer.initZk(KafkaServer.scala:329)
    at kafka.server.KafkaServer.startup(KafkaServer.scala:187)
    at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
    at kafka.Kafka$.main(Kafka.scala:67)
    at kafka.Kafka.main(Kafka.scala)
[2017-04-18 15:54:10,482] INFO shutting down (kafka.server.KafkaServer)

The following configuration is given in the JAAS file, which is passed via KAFKA_OPTS as a JVM parameter:

KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret"
  user_admin="admin-secret";
};
Client {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret";
};

The Kafka broker's server.properties has the following extra fields set:

zookeeper.set.acl=true
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
ssl.client.auth=required
ssl.endpoint.identification.algorithm=HTTPS
ssl.keystore.location=path
ssl.keystore.password=anything
ssl.key.password=anything
ssl.truststore.location=path
ssl.truststore.password=anything

Zookeeper properties are as follows:

authProvider.1=org.apache.zookeeper.server.auth.DigestAuthenticationProvider
jaasLoginRenew=3600000
requireClientAuthScheme=sasl
I found the issue by increasing the log level to DEBUG. Basically, follow the steps below. I don't use SSL, but you can integrate it without any issue.

These are my configuration files:

server.properties

security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
auto.create.topics.enable=false
broker.id=0
listeners=SASL_PLAINTEXT://localhost:9092
advertised.listeners=SASL_PLAINTEXT://localhost:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
advertised.host.name=localhost
num.partitions=1
num.recovery.threads.per.data.dir=1
log.flush.interval.messages=30000000
log.flush.interval.ms=1800000
log.retention.minutes=30
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
delete.topic.enable=true
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
super.users=User:admin

zookeeper.properties

dataDir=/tmp/zookeeper
clientPort=2181
maxClientCnxns=0
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000

producer.properties

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
bootstrap.servers=localhost:9092
compression.type=none

consumer.properties

security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
group.id=test-consumer-group

Now the most important files for making your server start without any issue:

zookeeper_jaas.conf

Server {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret"
  user_admin="admin-secret";
};

kafka_server_jaas.conf

KafkaServer {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret"
  user_admin="admin-secret";
};
Client {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret";
};

After doing all this configuration:

Terminal 1 (start the Zookeeper server), from the Kafka root directory:

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/username/Documents/kafka_2.11-0.10.1.0/config/zookeeper_jaas.conf"
$ bin/zookeeper-server-start.sh config/zookeeper.properties

Terminal 2 (start the Kafka server), from the Kafka root directory:

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/username/Documents/kafka_2.11-0.10.1.0/config/kafka_server_jaas.conf"
$ bin/kafka-server-start.sh config/server.properties

[BEGIN UPDATE]

kafka_client_jaas.conf

KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="admin"
  password="admin-secret";
};

Terminal 3 (start a Kafka consumer). On a client terminal, export the client JAAS config file and start a consumer:

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/username/Documents/kafka_2.11-0.10.1.0/kafka_client_jaas.conf"
$ ./bin/kafka-console-consumer.sh --new-consumer --zookeeper localhost:2181 --topic test-topic --from-beginning --consumer.config=config/consumer.properties --bootstrap-server=localhost:9092

Terminal 4 (start a Kafka producer). If you also want to produce, do this on another terminal window:

$ export KAFKA_OPTS="-Djava.security.auth.login.config=/home/username/Documents/kafka_2.11-0.10.1.0/kafka_client_jaas.conf"
$ ./bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test-topic --producer.config=config/producer.properties

[END UPDATE]
You need to create a JAAS config file for Zookeeper and make Zookeeper use it. Create a JAAS config file for Zookeeper with content like this:

Server {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  user_admin="admin-secret";
};

The user (admin) and password (admin-secret) must match the username and password that you have in the Client section of the Kafka JAAS config file. To make Zookeeper use the JAAS config file, pass the following JVM flag to Zookeeper, pointing to the file created before:

-Djava.security.auth.login.config=/path/to/server/jaas/file.conf

If you are using the Zookeeper included with the Kafka package, you can launch Zookeeper like this, assuming that your Zookeeper JAAS config file is located at ./config/zookeeper_jaas.conf:

EXTRA_ARGS=-Djava.security.auth.login.config=./config/zookeeper_jaas.conf ./bin/zookeeper-server-start.sh ./config/zookeeper.properties
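As a sanity check from the application side, a client configured for SASL/PLAIN should now be able to connect. A minimal sketch with the kafka-python package (the package choice and topic name are assumptions; the credentials come from the JAAS files above):

from kafka import KafkaProducer

# credentials must match a user defined in the KafkaServer JAAS section
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    security_protocol="SASL_PLAINTEXT",
    sasl_mechanism="PLAIN",
    sasl_plain_username="admin",
    sasl_plain_password="admin-secret",
)
producer.send("test-topic", b"hello over SASL")
producer.flush()
producer.close()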
RabbitMQ Consumer fails while receiving a MQTT message
I'm trying to publish an MQTT message and receive it with an AMQP consumer, using the RabbitMQ MQTT plugin on Ubuntu 14.04. I'm publishing the MQTT message with the mosquitto-clients package. I enabled the MQTT plugin for RabbitMQ. Now when I send an MQTT message, my AMQP consumer code throws an exception:

Traceback (most recent call last):
  File "consume_topic.py", line 33, in <module>
    channel.start_consuming()
  File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 722, in start_consuming
    self.connection.process_data_events()
  File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 88, in process_data_events
    if self._handle_read():
  File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 184, in _handle_read
    super(BlockingConnection, self)._handle_read()
  File "/usr/local/lib/python2.7/dist-packages/pika/adapters/base_connection.py", line 308, in _handle_read
    self._on_data_available(data)
  File "/usr/local/lib/python2.7/dist-packages/pika/connection.py", line 1134, in _on_data_available
    consumed_count, frame_value = self._read_frame()
  File "/usr/local/lib/python2.7/dist-packages/pika/connection.py", line 1201, in _read_frame
    return frame.decode_frame(self._frame_buffer)
  File "/usr/local/lib/python2.7/dist-packages/pika/frame.py", line 254, in decode_frame
    out = properties.decode(frame_data[12:])
  File "/usr/local/lib/python2.7/dist-packages/pika/spec.py", line 2479, in decode
    (self.headers, offset) = data.decode_table(encoded, offset)
  File "/usr/local/lib/python2.7/dist-packages/pika/data.py", line 106, in decode_table
    value, offset = decode_value(encoded, offset)
  File "/usr/local/lib/python2.7/dist-packages/pika/data.py", line 174, in decode_value
    raise exceptions.InvalidFieldTypeException(kind)
pika.exceptions.InvalidFieldTypeException: b

My pika (Python) consumer code is the following:

#!/usr/bin/env python
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='logs', type='topic', durable=False)

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

binding_keys = sys.argv[1:]
if not binding_keys:
    print >> sys.stderr, "Usage: %s [binding_key]..." % (sys.argv[0],)
    sys.exit(1)

for binding_key in binding_keys:
    channel.queue_bind(exchange='logs', queue=queue_name, routing_key=binding_key)

print ' [*] Waiting for logs. To exit press CTRL+C'

def callback(ch, method, properties, body):
    print " [x] %r:%r" % (method.routing_key, body,)

channel.basic_consume(callback, queue=queue_name, no_ack=True)
channel.start_consuming()

My RabbitMQ configuration file is the following:

[{rabbit, [{tcp_listeners, [5672]}]},
 {rabbitmq_mqtt, [{default_user, <<"guest">>},
                  {default_pass, <<"guest">>},
                  {allow_anonymous, true},
                  {vhost, <<"/">>},
                  {exchange, <<"logs">>},
                  {subscription_ttl, 1800000},
                  {prefetch, 10},
                  {ssl_listeners, []},
                  %% Default MQTT with TLS port is 8883
                  %% {ssl_listeners, [8883]}
                  {tcp_listeners, [1883]},
                  {tcp_listen_options, [binary,
                                        {packet, raw},
                                        {reuseaddr, true},
                                        {backlog, 128},
                                        {nodelay, true}]}]}
].

The log file shows the following:

=INFO REPORT==== 14-Apr-2015::10:57:50 ===
accepting AMQP connection <0.1174.0> (127.0.0.1:42447 -> 127.0.0.1:5672)

=INFO REPORT==== 14-Apr-2015::10:58:30 ===
accepting MQTT connection <0.1232.0> (127.0.0.1:53581 -> 127.0.0.1:1883)

=WARNING REPORT==== 14-Apr-2015::10:58:30 ===
closing AMQP connection <0.1174.0> (127.0.0.1:42447 -> 127.0.0.1:5672):
connection_closed_abruptly

=INFO REPORT==== 14-Apr-2015::10:58:30 ===
closing MQTT connection <0.1232.0> (127.0.0.1:53581 -> 127.0.0.1:1883)

Can anybody please help me? I googled "pika.exceptions.InvalidFieldTypeException" and found that it means I'm not using a correct "Field Type"; what does that mean here?
This is most likely a bug in the specifications (decoder) of pika. I would recommend that you change to a library that is more frequently updated. For example, you could look at the pika author's newer library, RabbitPy, or my own pika-inspired library, AMQP-Storm. It could also be that you are running a very old version of pika. I found this commit from gmr that should have fixed your issue. You could try upgrading to pika 0.9.14.
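A quick way to check which client version is actually installed, before and after upgrading (for example with pip install --upgrade pika):

import pika

print(pika.__version__)  # anything older than 0.9.14 predates the fix mentioned above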