RabbitMQ consumer fails while receiving an MQTT message

I'm trying to publish an MQTT message and receive it with an AMQP consumer, using the RabbitMQ MQTT plugin on Ubuntu 14.04. I'm publishing the MQTT message with the mosquitto-clients package, and I have enabled the MQTT plugin for RabbitMQ.
Now whenever I send an MQTT message, my AMQP consumer throws this exception:
Traceback (most recent call last):
  File "consume_topic.py", line 33, in <module>
    channel.start_consuming()
  File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 722, in start_consuming
    self.connection.process_data_events()
  File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 88, in process_data_events
    if self._handle_read():
  File "/usr/local/lib/python2.7/dist-packages/pika/adapters/blocking_connection.py", line 184, in _handle_read
    super(BlockingConnection, self)._handle_read()
  File "/usr/local/lib/python2.7/dist-packages/pika/adapters/base_connection.py", line 308, in _handle_read
    self._on_data_available(data)
  File "/usr/local/lib/python2.7/dist-packages/pika/connection.py", line 1134, in _on_data_available
    consumed_count, frame_value = self._read_frame()
  File "/usr/local/lib/python2.7/dist-packages/pika/connection.py", line 1201, in _read_frame
    return frame.decode_frame(self._frame_buffer)
  File "/usr/local/lib/python2.7/dist-packages/pika/frame.py", line 254, in decode_frame
    out = properties.decode(frame_data[12:])
  File "/usr/local/lib/python2.7/dist-packages/pika/spec.py", line 2479, in decode
    (self.headers, offset) = data.decode_table(encoded, offset)
  File "/usr/local/lib/python2.7/dist-packages/pika/data.py", line 106, in decode_table
    value, offset = decode_value(encoded, offset)
  File "/usr/local/lib/python2.7/dist-packages/pika/data.py", line 174, in decode_value
    raise exceptions.InvalidFieldTypeException(kind)
pika.exceptions.InvalidFieldTypeException: b
My pika (Python) consumer code is the following:
#!/usr/bin/env python
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='logs', type='topic', durable=False)

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

binding_keys = sys.argv[1:]
if not binding_keys:
    print >> sys.stderr, "Usage: %s [binding_key]..." % (sys.argv[0],)
    sys.exit(1)

for binding_key in binding_keys:
    channel.queue_bind(exchange='logs',
                       queue=queue_name,
                       routing_key=binding_key)

print ' [*] Waiting for logs. To exit press CTRL+C'

def callback(ch, method, properties, body):
    print " [x] %r:%r" % (method.routing_key, body,)

channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)

channel.start_consuming()
My RabbitMQ configuration file is the following:
[{rabbit, [{tcp_listeners, [5672]}]},
{rabbitmq_mqtt, [{default_user, <<"guest">>},
{default_pass, <<"guest">>},
{allow_anonymous, true},
{vhost, <<"/">>},
{exchange, <<"logs">>},
{subscription_ttl, 1800000},
{prefetch, 10},
{ssl_listeners, []},
%% Default MQTT with TLS port is 8883
%% {ssl_listeners, [8883]}
{tcp_listeners, [1883]},
{tcp_listen_options, [binary,
{packet, raw},
{reuseaddr, true},
{backlog, 128},
{nodelay, true}]}]}
].
The log file shows the following:
=INFO REPORT==== 14-Apr-2015::10:57:50 ===
accepting AMQP connection <0.1174.0> (127.0.0.1:42447 -> 127.0.0.1:5672)
=INFO REPORT==== 14-Apr-2015::10:58:30 ===
accepting MQTT connection <0.1232.0> (127.0.0.1:53581 -> 127.0.0.1:1883)
=WARNING REPORT==== 14-Apr-2015::10:58:30 ===
closing AMQP connection <0.1174.0> (127.0.0.1:42447 -> 127.0.0.1:5672):
connection_closed_abruptly
=INFO REPORT==== 14-Apr-2015::10:58:30 ===
closing MQTT connection <0.1232.0> (127.0.0.1:53581 -> 127.0.0.1:1883)
Can anybody please help me? I googled "pika.exceptions.InvalidFieldTypeException" and found that it means I'm not using a valid field type, but how can that be?

This is most likely a bug in pika's specification decoder. I would recommend switching to a library that is more frequently updated; as an example, you could look at the pika author's newer library RabbitPy, or my own pika-inspired library AMQP-Storm.
That said, it could also be that you are running a very old version of pika. I found a commit from gmr that should have fixed your issue, so you could try upgrading to pika 0.9.14.
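For reference, here is a minimal sketch of the same consumer against a current pika release (the 1.x API); the broker host and the logs exchange are taken from your code, and the only real changes are the renamed keyword arguments (exchange_type, on_message_callback, auto_ack):

# pip install --upgrade pika
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# pika 1.x renamed 'type' to 'exchange_type'
channel.exchange_declare(exchange='logs', exchange_type='topic', durable=False)

# server-named, exclusive queue, as in the original script
result = channel.queue_declare(queue='', exclusive=True)
queue_name = result.method.queue

# bind to everything for the example; replace '#' with your binding keys
channel.queue_bind(exchange='logs', queue=queue_name, routing_key='#')

def callback(ch, method, properties, body):
    print(" [x] %r:%r" % (method.routing_key, body))

# pika 1.x also renamed the consume keywords
channel.basic_consume(queue=queue_name, on_message_callback=callback, auto_ack=True)
channel.start_consuming()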

Related

Error accessing AWS ElastiCache (Redis in cluster mode) with TLS enabled from a Django service

I'm trying to connect to AWS ElastiCache (Redis in cluster mode) with TLS enabled. The library versions and Django cache settings are below.
====Dependencies======
redis==3.0.0
redis-py-cluster==2.0.0
django-redis==4.11.0
======settings=======
CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': "redis://xxxxxxx.mc-redis-cache-v2.zzzzz.usw2.cache.amazonaws.com:6379/0",
        'OPTIONS': {
            'PASSWORD': '<password>',
            'REDIS_CLIENT_CLASS': 'rediscluster.RedisCluster',
            'CONNECTION_POOL_CLASS': 'rediscluster.connection.ClusterConnectionPool',
            'CONNECTION_POOL_KWARGS': {
                'skip_full_coverage_check': True,
                "ssl_cert_reqs": False,
                "ssl": True
            }
        }
    }
}
It doesn't seem to be a problem with the client class (provided by redis-py-cluster), since I'm able to access the cluster directly:
from rediscluster import RedisCluster
startup_nodes = [{"host": "redis://xxxxxxx.mc-redis-cache-v2.zzzzz.usw2.cache.amazonaws.com", "port": "6379"}]
rc = RedisCluster(startup_nodes=startup_nodes, ssl=True, ssl_cert_reqs=False, decode_responses=True, skip_full_coverage_check=True, password='<password>')
rc.set("foo", "bar")
rc.get('foo')
'bar'
but I'm seeing this error when the Django service tries to access the cache. Is there any configuration detail that I might be missing?
File "/usr/lib/python3.6/site-packages/django_redis/cache.py", line 32, in _decorator
return method(self, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/django_redis/cache.py", line 81, in get
client=client)
File "/usr/lib/python3.6/site-packages/django_redis/client/default.py", line 194, in get
client = self.get_client(write=False)
File "/usr/lib/python3.6/site-packages/django_redis/client/default.py", line 90, in get_client
self._clients[index] = self.connect(index)
File "/usr/lib/python3.6/site-packages/django_redis/client/default.py", line 103, in connect
return self.connection_factory.connect(self._server[index])
File "/usr/lib/python3.6/site-packages/django_redis/pool.py", line 64, in connect
connection = self.get_connection(params)
File "/usr/lib/python3.6/site-packages/django_redis/pool.py", line 75, in get_connection
pool = self.get_or_create_connection_pool(params)
File "/usr/lib/python3.6/site-packages/django_redis/pool.py", line 94, in get_or_create_connection_pool
self._pools[key] = self.get_connection_pool(params)
File "/usr/lib/python3.6/site-packages/django_redis/pool.py", line 107, in get_connection_pool
pool = self.pool_cls.from_url(**cp_params)
File "/usr/lib/python3.6/site-packages/redis/connection.py", line 916, in from_url
return cls(**kwargs)
File "/usr/lib/python3.6/site-packages/rediscluster/connection.py", line 146, in __init__
self.nodes.initialize()
File "/usr/lib/python3.6/site-packages/rediscluster/nodemanager.py", line 172, in initialize
raise RedisClusterException("ERROR sending 'cluster slots' command to redis server: {0}".format(node))
rediscluster.exceptions.RedisClusterException: ERROR sending 'cluster slots' command to redis server: {'host': 'xxxxxxx.mc-redis-cache-v2.zzzzz.usw2.cache.amazonaws.com', 'port': '6379'}
I also tried passing "ssl_ca_certs": "/etc/ssl/certs/ca-certificates.crt" in CONNECTION_POOL_KWARGS and setting the LOCATION scheme to rediss, still with no luck.
You need to change ssl_cert_reqs=False to ssl_cert_reqs=None.
Here's the link to the redis-py repository section that covers this:
https://github.com/andymccurdy/redis-py#ssl-connections
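For illustration, a minimal sketch of the corrected OPTIONS block, keeping everything else exactly as you posted it and only switching ssl_cert_reqs from False to None:

CACHES = {
    'default': {
        'BACKEND': 'django_redis.cache.RedisCache',
        'LOCATION': "redis://xxxxxxx.mc-redis-cache-v2.zzzzz.usw2.cache.amazonaws.com:6379/0",
        'OPTIONS': {
            'PASSWORD': '<password>',
            'REDIS_CLIENT_CLASS': 'rediscluster.RedisCluster',
            'CONNECTION_POOL_CLASS': 'rediscluster.connection.ClusterConnectionPool',
            'CONNECTION_POOL_KWARGS': {
                'skip_full_coverage_check': True,
                'ssl_cert_reqs': None,  # None, not False
                'ssl': True,
            },
        },
    },
}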

How to set up RabbitMQ local queues with Ansible?

I am trying to automate the deployment of a rabbitmq processing chain with Ansible.
The RabbitMQ user setup below works perfectly:
rabbitmq_user:
  user: admin
  password: supersecret
  read_priv: .*
  write_priv: .*
  configure_priv: .*
But the queue setup below crashes:
rabbitmq_queue:
  name: feedfiles
  login_host: 127.0.0.1
  login_user: admin
  login_password: admin
  login_port: 5672
The crash log looks like this:
{"changed": false, "module_stderr": "Shared connection to 127.0.0.1 closed.
", "module_stdout": "
Traceback (most recent call last):
File "/tmp/ansible_i8T24e/ansible_module_rabbitmq_queue.py", line 285, in <module>
main()
File "/tmp/ansible_i8T24e/ansible_module_rabbitmq_queue.py", line 178, in main
r = requests.get(url, auth=(module.params['login_user'], module.params['login_password']))
File "/usr/lib/python2.7/dist-packages/requests/api.py", line 70, in get
return request('get', url, params=params, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/api.py", line 56, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 488, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 609, in send
r = adapter.send(request, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/adapters.py", line 487,in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=15672): Max retries exceeded with url: /api/queues/%2F/feedfiles (Caused by NewConnectionError('<requests.packages.urllib3.connection.HTTPConnection object at 0x7fcfdaf8e7d0>: Failed to establish a new connection: [Errno 111] Connection refused',))
", "msg": "MODULE FAILURE", "rc": 1}
I have deliberately removed the default guest/guest setup, which is why I am using the admin credentials.
Any idea where the issue could come from?
EDIT:
Setting the "administrator" tag on the admin user doesn't help.
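One thing the traceback itself shows: the rabbitmq_queue module declares the queue through the management HTTP API on port 15672 (/api/queues/%2F/feedfiles), not over AMQP on 5672, and that port is refusing connections. As a hedged check, here is a minimal sketch that mirrors the module's own request on the target host (URL and credentials taken from the question; whether the rabbitmq_management plugin, which serves that API, is enabled there is only an assumption):

import requests

# The module GETs a URL like this with the login credentials; "Connection refused"
# on 15672 usually means the management API is not listening on that port.
resp = requests.get('http://127.0.0.1:15672/api/queues/%2F/feedfiles',
                    auth=('admin', 'admin'))
print(resp.status_code, resp.text)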

How do I send a delayed message in RabbitMQ using the rabbitmq-delayed-message-exchange plugin?

I have enabled the plugin using:
rabbitmq-plugins enable rabbitmq_delayed_message_exchange
I am trying to create a delayed exchange, attach an x-delay header with 5000 ms as an integer value, and bind it to a queue, but it didn't work.
So I tried it with pika in Python:
import pika

credentials = pika.PlainCredentials('admin', 'admin')
parameters = pika.ConnectionParameters('localhost', 5672, '/', credentials)
connection = pika.BlockingConnection(pika.ConnectionParameters(host='127.0.0.1', port=5673, credentials=credentials))
channel = connection.channel()

#channel.exchange_declare(exchange='x-delayed-type', type='direct')
channel.exchange_declare("test-exchange", type="x-delayed-message",
                         arguments={"x-delayed-type": "direct"},
                         durable=True, auto_delete=True)
channel.queue_declare(queue='task_queue', durable=True)
channel.queue_bind(queue="task_queue", exchange="test-exchange", routing_key="task_queue")

for i in range(0, 100):
    channel.basic_publish(exchange='test-exchange', routing_key='task_queue',
                          body='gooogle',
                          properties=pika.BasicProperties(headers={"x-delay": 5000}, delivery_mode=1))
    print i
How can I get the delayed exchange to actually delay messages?
Error report:
ERROR REPORT==== 10-Mar-2017::13:08:09 ===
Error on AMQP connection <0.683.0> (127.0.0.1:42052 -> 127.0.0.1:5673, vhost: '/', user: 'admin', state: running), channel 1:
{{{undef,
[{erlang,system_time,[milli_seconds],[]},
{rabbit_delayed_message,internal_delay_message,4,
[{file,"src/rabbit_delayed_message.erl"},{line,179}]},
{rabbit_delayed_message,handle_call,3,
[{file,"src/rabbit_delayed_message.erl"},{line,122}]},
{gen_server,handle_msg,5,[{file,"gen_server.erl"},{line,585}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]},
{gen_server,call,
[rabbit_delayed_message,
{delay_message,
{exchange,
{resource,<<"/">>,exchange,<<"test-exchange">>},
'x-delayed-message',true,true,false,
[{<<"x-delayed-type">>,longstr,<<"direct">>}],
undefined,undefined,
{[],[]}},
{delivery,false,false,<0.691.0>,
{basic_message,
{resource,<<"/">>,exchange,<<"test-exchange">>},
[<<"task_queue">>],
{content,60,
{'P_basic',undefined,undefined,
[{<<"x-delay">>,signedint,5000}],
1,undefined,undefined,undefined,undefined,
undefined,undefined,undefined,undefined,undefined,
undefined},
<<48,0,0,0,0,13,7,120,45,100,101,108,97,121,73,0,0,19,
136,1>>,
rabbit_framing_amqp_0_9_1,
[<<"gooogle">>]},
<<80,125,217,116,181,47,214,41,203,179,7,85,150,76,35,2>>,
false},
undefined,noflow},
5000},
infinity]}},
[{gen_server,call,3,[{file,"gen_server.erl"},{line,188}]},
{rabbit_exchange_type_delayed_message,route,2,
[{file,"src/rabbit_exchange_type_delayed_message.erl"},{line,53}]},
{rabbit_exchange,route1,3,[{file,"src/rabbit_exchange.erl"},{line,381}]},
{rabbit_exchange,route,2,[{file,"src/rabbit_exchange.erl"},{line,371}]},
{rabbit_channel,handle_method,3,
[{file,"src/rabbit_channel.erl"},{line,949}]},
{rabbit_channel,handle_cast,2,[{file,"src/rabbit_channel.erl"},{line,457}]},
{gen_server2,handle_msg,2,[{file,"src/gen_server2.erl"},{line,1032}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
Here is working code with RabbitMQ 3.7.7:
send.py
#!/usr/bin/env python
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

#channel.exchange_declare(exchange='direct_logs',
#                         exchange_type='direct')
#channel.exchange_declare("test-exchange", type="x-delayed-message", arguments={"x-delayed-type":"direct"}, durable=True, auto_delete=True)
channel.exchange_declare(exchange='test-exchange',
                         exchange_type='x-delayed-message',
                         arguments={"x-delayed-type": "direct"})

severity = sys.argv[1] if len(sys.argv) > 2 else 'info'
message = ' '.join(sys.argv[2:]) or 'Hello World!'

channel.basic_publish(exchange='test-exchange',
                      routing_key=severity,
                      properties=pika.BasicProperties(
                          headers={'x-delay': 5000}  # Add a key/value header
                      ),
                      body=message)
print(" [x] Sent %r:%r" % (severity, message))
connection.close()
receive.py
#!/usr/bin/env python
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

channel.exchange_declare(exchange='test-exchange',
                         exchange_type='x-delayed-message',
                         arguments={"x-delayed-type": "direct"})

result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue

binding_keys = sys.argv[1:]
if not binding_keys:
    sys.stderr.write("Usage: %s [binding_key]...\n" % sys.argv[0])
    sys.exit(1)

for binding_key in binding_keys:
    channel.queue_bind(exchange='test-exchange',
                       queue=queue_name,
                       routing_key=binding_key)

print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] %r:%r" % (method.routing_key, body))

channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)

channel.start_consuming()
python send.py error aaaabbbb
python receive.py error
[*] Waiting for logs. To exit press CTRL+C
[x] 'error':'aaaabbbb'

RabbitMQ new consumer hangs

I'm using RabbitMQ 3.6.6 via the Docker image "rabbitmq:3".
Whenever I add a new consumer to my RabbitMQ queue, it hangs for anywhere from 10 seconds to 10 hours.
Below is an example of the code that triggers the problem. I also see the same behaviour in Go, so it's not library-dependent.
<?php
include(__DIR__ . "/vendor/autoload.php");

print "Start" . PHP_EOL;

$connection = new \PhpAmqpLib\Connection\AMQPStreamConnection('xxxx', 5697, 'guest', 'guest');
$channel = $connection->channel();

$callback = function($msg) {
    echo " [x] Received ", $msg->body, "\n";
};

$channel->basic_consume('repositories', '', false, false, false, false, $callback);

while (count($channel->callbacks)) {
    $channel->wait();
}
When I look at the logs I see
=INFO REPORT==== 31-Jan-2017::21:14:33 ===
accepting AMQP connection <0.891.0> (10.32.0.1:54216 -> 10.44.0.3:5672)
=INFO REPORT==== 31-Jan-2017::21:14:34 ===
accepting AMQP connection <0.902.0> (10.32.0.1:54247 -> 10.44.0.3:5672)
When I run list_consumers via rabbitmqctl I see the consumer in the list, yet no messages are processed by it.
It turned out I needed to set the QoS setting.
Some more information can be found at:
http://www.rabbitmq.com/consumer-prefetch.html
https://github.com/streadway/amqp/blob/master/channel.go#L576
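In pika terms (the Python client used elsewhere in this thread) the equivalent fix is a basic_qos call before basic_consume; a minimal sketch, assuming the repositories queue from the question and a current pika release:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# Limit unacknowledged deliveries per consumer; without a prefetch limit the broker
# can push the whole backlog to existing consumers, leaving a new consumer idle.
channel.basic_qos(prefetch_count=10)

def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue='repositories', on_message_callback=callback)
channel.start_consuming()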

RabbitMQ and Pika

I'm using the Python library pika to work with RabbitMQ.
RabbitMQ is running and listening on 0.0.0.0:5672. When I try to connect to it from another server, I get an exception:
socket.timeout: timed out
The Python code is from the official RabbitMQ "Hello World" tutorial.
I tried disabling iptables.
But if I run the script with host "localhost", everything works.
My /etc/rabbitmq/rabbitmq.config
[
  {rabbit, [
    {tcp_listeners, [{"0.0.0.0", 5672}]}
  ]}
].
Code:
#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='192.168.10.150', port=5672, virtual_host='/',
    credentials=pika.credentials.PlainCredentials('user', '123456')))
channel = connection.channel()

channel.queue_declare(queue='task_queue', durable=True)

message = "Hello World!"
channel.basic_publish(exchange='',
                      routing_key='task_queue',
                      body=message,
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make message persistent
                      ))
print " [x] Sent %r" % (message,)
connection.close()
Since you are connecting from another server, you should check your machine's firewall settings.
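As a quick first check from the other server, a minimal sketch that only tests whether the broker port is reachable at all (host and port taken from the question):

import socket

# If this raises ConnectionRefusedError or times out, the problem is the network or
# a firewall in between, not pika or the RabbitMQ credentials.
sock = socket.create_connection(('192.168.10.150', 5672), timeout=5)
print('port 5672 is reachable')
sock.close()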