RabbitMQ and Pika

I'm using the Python library pika to work with RabbitMQ.
RabbitMQ is running and listening on 0.0.0.0:5672. When I try to connect to it from another server, I get this exception:
socket.timeout: timed out
The Python code is taken from the official RabbitMQ docs ("Hello, World").
I tried disabling iptables, but it made no difference.
However, if I run the script with host "localhost", everything works fine.
My /etc/rabbitmq/rabbitmq.config
[
  {rabbit, [
    {tcp_listeners, [{"0.0.0.0", 5672}]}
  ]}
].
Code:
#!/usr/bin/env python
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(
    host='192.168.10.150', port=5672, virtual_host='/',
    credentials=pika.credentials.PlainCredentials('user', '123456')))
channel = connection.channel()
channel.queue_declare(queue='task_queue', durable=True)
message = "Hello World!"
channel.basic_publish(exchange='',
                      routing_key='task_queue',
                      body=message,
                      properties=pika.BasicProperties(
                          delivery_mode=2,  # make message persistent
                      ))
print " [x] Sent %r" % (message,)
connection.close()

Since you are connecting from another server, you should check your machine's firewall settings.
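One quick way to confirm (or rule out) a firewall problem from the remote machine is a bare TCP connect, with no AMQP involved. A minimal sketch, where `tcp_check` is an illustrative helper of my own, not part of pika:

```python
import socket

def tcp_check(host, port, timeout=5.0):
    """Return True if a plain TCP connection to host:port succeeds.

    A timeout or refusal here means the problem is at the network or
    firewall level, not in RabbitMQ's configuration.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. run tcp_check('192.168.10.150', 5672) from the client machine
```

If this returns False while a connection from localhost works, the broker config is fine and the packets are being dropped somewhere in between.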

Related

Client not receiving events from Flask-SocketIO server with Redis message queue

I want to add multiprocessing to my Flask-SocketIO server so I am trying to add a Redis message queue as per the Flask-SocketIO docs. Even without adding multiprocessing, the client is not receiving any events. Everything else is working fine (e.g. the web page is being served, HTTP requests are being made, database calls are being made). There are no error messages on the front or back end. Before I added the Redis queue it was working. I verified that the 'addname' SocketIO route is being hit and that the request.sid looks right. What am I doing wrong?
Very simplified server code:
external_sio = SocketIO(message_queue='redis://')

def requester(user, sid):
    global external_sio
    external_sio.emit('addname', {'data': 'hello'}, room=sid)
    # do some stuff with requests and databases
    external_sio.emit('addname', {'data': 'goodbye'}, room=sid)

def main():
    app = Flask(__name__,
                static_url_path='',
                static_folder='dist',
                template_folder='dist')
    socketio = SocketIO(app)

    @socketio.on('addname')
    def add_name(user):
        global external_sio
        external_sio.emit('addname', {'data': 'test'}, room=request.sid)
        requester(user.data, request.sid)

    socketio.run(app, host='0.0.0.0', port=8000)

if __name__ == '__main__':
    main()
Simplified client code (React Javascript):
const socket = SocketIOClient('ipaddress:8000')
socket.emit('addname', {data: 'somename'})
socket.on('addname', ({data}) => console.log(data))
The main server also needs to be connected to the message queue. In your main server do this:
socketio = SocketIO(app, message_queue='redis://')
In your external process do this:
external_sio = SocketIO(message_queue='redis://') # <--- no app on this one

Masstransit cannot access host machine RabbitMQ from a docker container

I created a simple .NET Core console application with Docker support. The following MassTransit code fails to connect to the RabbitMQ instance on the host machine, while a similar implementation using RabbitMQ.Client connects without problems.
MassTransit throws:
MassTransit.RabbitMqTransport.RabbitMqConnectionException: Connect
failed: ctas@192.168.0.9:5672/ --->
RabbitMQ.Client.Exceptions.BrokerUnreachableException:
Host machine IP: 192.168.0.9
Using MassTransit:
string rabbitMqUri = "rabbitmq://192.168.0.9/";
string userName = "ctas";
string password = "ctas#123";
string assetServiceQueue = "hello";

var bus = Bus.Factory.CreateUsingRabbitMq(cfg =>
{
    var host = cfg.Host(new Uri(rabbitMqUri), hst =>
    {
        hst.Username(userName);
        hst.Password(password);
    });
    cfg.ReceiveEndpoint(host, assetServiceQueue, e =>
    {
        e.Consumer<AddNewAssetReceivedConsumer>();
    });
});

bus.Start();
Console.WriteLine("Service Running.... Press enter to exit");
Console.ReadLine();
bus.Stop();
Using RabbitMQ Client
public static void Main()
{
    var factory = new ConnectionFactory();
    factory.UserName = "ctas";
    factory.Password = "ctas#123";
    factory.VirtualHost = "watcherindustry";
    factory.HostName = "192.168.0.9";

    using (var connection = factory.CreateConnection())
    using (var channel = connection.CreateModel())
    {
        channel.QueueDeclare(queue: "hello",
                             durable: false,
                             exclusive: false,
                             autoDelete: false,
                             arguments: null);
        var consumer = new EventingBasicConsumer(channel);
        consumer.Received += (model, ea) =>
        {
            var body = ea.Body;
            var message = Encoding.UTF8.GetString(body);
            Console.WriteLine(" [x] Received {0}", message);
        };
        channel.BasicConsume(queue: "hello",
                             autoAck: true,
                             consumer: consumer);
        Console.WriteLine(" Press [enter] to exit.");
        Console.ReadLine();
    }
}
Docker file
FROM microsoft/dotnet:1.1-runtime
ARG source
WORKDIR /app
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "TestClient.dll"]
I created an example and was able to connect to my host, using the preview package from MassTransit.
Start rabbitmq in docker and expose ports on the host
docker run -d -p 5672:5672 -p 15672:15672 --hostname my-rabbit --name some-rabbit rabbitmq:3-management
Build and run console app.
docker build -t dotnetapp .
docker run -d -e RABBITMQ_URI=rabbitmq://guest:guest@172.17.0.2:5672 --name some-dotnetapp dotnetapp
To verify that you are receiving messages, run:
docker logs some-dotnetapp --follow
You should see the following output:
Application is starting...
Connecting to rabbitmq://guest:guest@172.17.0.2:5672
Received: Hello, World [08/12/2017 04:35:53]
Received: Hello, World [08/12/2017 04:35:58]
Received: Hello, World [08/12/2017 04:36:03]
Received: Hello, World [08/12/2017 04:36:08]
Received: Hello, World [08/12/2017 04:36:13]
...
Notes:
172.17.0.2 was the my-rabbit container's IP address, but you can replace it with your machine's IP address.
http://localhost:15672 is the RabbitMQ management console; log in with guest as both username and password.
Lastly, portainer.io is a very useful application for visually viewing your local Docker environment.
Thanks for the response. I managed to resolve this issue. My findings are as follows.
To connect to a RabbitMQ instance on another Docker container, both containers have to be connected to the same network. To do this:
Create a network:
docker network create -d bridge my_bridge
Connect both the app and RabbitMQ containers to that network:
docker network connect my_bridge <container name>
For the MassTransit URI, use the RabbitMQ container's IP on that network, or the container name.
To connect to a RabbitMQ instance on the host machine from an app in a Docker container, the MassTransit URI should include the machine name (I tried the IP; that did not work).
Try using the virtual host in your MassTransit configuration too; not sure why you decided to omit it.
var host = cfg.Host("192.168.0.9", "watcherindustry", hst =>
{
    hst.Username(userName);
    hst.Password(password);
});
Look at Alexey Zimarev's comment on your question. If your rabbit runs in a container, it should be in your docker-compose file, and you should use that entry in your endpoint definition to connect to rabbit, because Docker creates an internal network from which your source code is agnostic...
rabbitmq:
  container_name: "rabbitmq-yournode01"
  hostname: rabbit
  image: rabbitmq:3.6.6-management
  environment:
    - RABBITMQ_DEFAULT_USER=yourusergoeshere
    - RABBITMQ_DEFAULT_PASS=yourpasswordgoeshere
    - RABBITMQ_DEFAULT_VHOST=vhost
  volumes:
    - rabbit-volume:/var/lib/rabbitmq
  ports:
    - "5672:5672"
    - "15672:15672"
In your app settings you should have something like:
"ConnectionString": "host=rabbitmq:5672;virtualHost=vhost;username=yourusergoeshere;password=yourpasswordgoeshere;timeout=0;prefetchcount=1",
And if you use EasyNetQ you could do:
_bus = RabbitHutch.CreateBus(_connectionString); // The one above
I hope it helps,
Juan

How do I send a delayed message in RabbitMQ using the rabbitmq-delayed-message-exchange plugin?

I have enabled the plugin using:
rabbitmq-plugins enable rabbitmq_delayed_message_exchange
I tried to create a delayed exchange with an x-delay header of 5000 ms (as an int value) and bind it to a queue, but it didn't work.
So I tried it using Pika in Python:
import pika

credentials = pika.PlainCredentials('admin', 'admin')
parameters = pika.ConnectionParameters('localhost', 5672, '/', credentials)
connection = pika.BlockingConnection(pika.ConnectionParameters(host='127.0.0.1', port=5673, credentials=credentials))
channel = connection.channel()
#channel.exchange_declare(exchange='x-delayed-type', type='direct')
channel.exchange_declare("test-exchange", type="x-delayed-message",
                         arguments={"x-delayed-type": "direct"},
                         durable=True, auto_delete=True)
channel.queue_declare(queue='task_queue', durable=True)
channel.queue_bind(queue="task_queue", exchange="test-exchange", routing_key="task_queue")
for i in range(0, 100):
    channel.basic_publish(exchange='test-exchange', routing_key='task_queue',
                          body='gooogle',
                          properties=pika.BasicProperties(headers={"x-delay": 5000},
                                                          delivery_mode=1))
    print i
How can I make the delayed exchange work?
Error report:
ERROR REPORT==== 10-Mar-2017::13:08:09 ===
Error on AMQP connection <0.683.0> (127.0.0.1:42052 -> 127.0.0.1:5673, vhost: '/', user: 'admin', state: running), channel 1:
{{{undef,
[{erlang,system_time,[milli_seconds],[]},
{rabbit_delayed_message,internal_delay_message,4,
[{file,"src/rabbit_delayed_message.erl"},{line,179}]},
{rabbit_delayed_message,handle_call,3,
[{file,"src/rabbit_delayed_message.erl"},{line,122}]},
{gen_server,handle_msg,5,[{file,"gen_server.erl"},{line,585}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]},
{gen_server,call,
[rabbit_delayed_message,
{delay_message,
{exchange,
{resource,<<"/">>,exchange,<<"test-exchange">>},
'x-delayed-message',true,true,false,
[{<<"x-delayed-type">>,longstr,<<"direct">>}],
undefined,undefined,
{[],[]}},
{delivery,false,false,<0.691.0>,
{basic_message,
{resource,<<"/">>,exchange,<<"test-exchange">>},
[<<"task_queue">>],
{content,60,
{'P_basic',undefined,undefined,
[{<<"x-delay">>,signedint,5000}],
1,undefined,undefined,undefined,undefined,
undefined,undefined,undefined,undefined,undefined,
undefined},
<<48,0,0,0,0,13,7,120,45,100,101,108,97,121,73,0,0,19,
136,1>>,
rabbit_framing_amqp_0_9_1,
[<<"gooogle">>]},
<<80,125,217,116,181,47,214,41,203,179,7,85,150,76,35,2>>,
false},
undefined,noflow},
5000},
infinity]}},
[{gen_server,call,3,[{file,"gen_server.erl"},{line,188}]},
{rabbit_exchange_type_delayed_message,route,2,
[{file,"src/rabbit_exchange_type_delayed_message.erl"},{line,53}]},
{rabbit_exchange,route1,3,[{file,"src/rabbit_exchange.erl"},{line,381}]},
{rabbit_exchange,route,2,[{file,"src/rabbit_exchange.erl"},{line,371}]},
{rabbit_channel,handle_method,3,
[{file,"src/rabbit_channel.erl"},{line,949}]},
{rabbit_channel,handle_cast,2,[{file,"src/rabbit_channel.erl"},{line,457}]},
{gen_server2,handle_msg,2,[{file,"src/gen_server2.erl"},{line,1032}]},
{proc_lib,init_p_do_apply,3,[{file,"proc_lib.erl"},{line,239}]}]}
Here is working code with RabbitMQ 3.7.7. (The undef error on erlang:system_time in your report suggests the broker is running on an Erlang release that is too old for the plugin; upgrading RabbitMQ/Erlang resolves it.)
send.py
#!/usr/bin/env python
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
#channel.exchange_declare(exchange='direct_logs',
#                         exchange_type='direct')
channel.exchange_declare(exchange='test-exchange',
                         exchange_type='x-delayed-message',
                         arguments={"x-delayed-type": "direct"})
severity = sys.argv[1] if len(sys.argv) > 2 else 'info'
message = ' '.join(sys.argv[2:]) or 'Hello World!'
channel.basic_publish(exchange='test-exchange',
                      routing_key=severity,
                      properties=pika.BasicProperties(
                          headers={'x-delay': 5000}  # Add a key/value header
                      ),
                      body=message)
print(" [x] Sent %r:%r" % (severity, message))
connection.close()
receive.py
#!/usr/bin/env python
import pika
import sys

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()
channel.exchange_declare(exchange='test-exchange',
                         exchange_type='x-delayed-message',
                         arguments={"x-delayed-type": "direct"})
result = channel.queue_declare(exclusive=True)
queue_name = result.method.queue
binding_keys = sys.argv[1:]
if not binding_keys:
    sys.stderr.write("Usage: %s [binding_key]...\n" % sys.argv[0])
    sys.exit(1)
for binding_key in binding_keys:
    channel.queue_bind(exchange='test-exchange',
                       queue=queue_name,
                       routing_key=binding_key)
print(' [*] Waiting for logs. To exit press CTRL+C')

def callback(ch, method, properties, body):
    print(" [x] %r:%r" % (method.routing_key, body))

channel.basic_consume(callback,
                      queue=queue_name,
                      no_ack=True)
channel.start_consuming()
python send.py error aaaabbbb
python receive.py error
[*] Waiting for logs. To exit press CTRL+C
[x] 'error':'aaaabbbb'

logstash rabbitmq output never posts to exchange

I've got logstash running, and successfully reading in a file
rabbitmq is running, I'm watching the log, and I can see the web interface
I've configured logstash to output to a rabbitmq exchange... I think!
Here's the problem: nothing ever gets posted to the exchange, as seen in the web interface.
Any ideas?
My output config:
output {
  rabbitmq {
    codec => plain
    host => localhost
    exchange => yomtvraps
    exchange_type => direct
  }
  file { path => "/tmp/heartbeat-from-logstash.log" }
}
UPDATE: I'm watching the rabbit log with
tail -F /usr/local/var/log/rabbitmq/rabbit\#localhost.log
As it turns out, the problem was that there was no routing key set for the exchange and queue.
A working config is:
output {
  rabbitmq {
    codec => plain
    host => localhost
    exchange => yomtvraps
    exchange_type => direct
    key => yomtvraps
    # these are defaults but you never know...
    durable => true
    port => 5672
    user => "guest"
    password => "guest"
  }
}
Here's a sample receiver code (using ruby "Bunny")
require "bunny"

conn = Bunny.new(:automatically_recover => false)
conn.start
ch = conn.create_channel
q = ch.queue("yomtvraps")
exchange = ch.direct("yomtvraps", :durable => true)
begin
  puts " [*] Waiting for messages. To exit press CTRL+C"
  q.bind(exchange, :routing_key => "yomtvraps").subscribe(:block => true) do |delivery_info, properties, body|
    puts " [x] Received #{body}"
  end
rescue Interrupt => _
  conn.close
  exit(0)
end
Your rabbitmq parameters seem incomplete: username, password, and port have not been configured.
You can configure two outputs, one to rabbitmq and the other to a file, to verify that the log is being created and that logstash itself is OK.
Pay attention to the logstash version (logstash and the rabbitmq plugin); it gave me lots of trouble in my earlier trials (logstash to another redis server, etc.).
You can also debug rabbitmq's log: `ps -ef | grep erl` will show the log file's path in the arguments.
Be sure that rabbitmq's web management plugin is enabled and the firewall is configured correctly, then open rabbitmq's web manager at ipaddress:15672.
Check that the exchange's type is correct (in this case 'direct' may be the right choice), that your message consumer is configured correctly, and that your consumer's queue has been bound to the exchange correctly.
Try posting a message to your consumer through the web manager and make sure the consumer works well.
Monitor your queue while logstash pushes logs to your consumer.

Python-twisted client connection failover

I am writing a TCP proxy with the Twisted framework and need simple client failover: if the proxy cannot connect to one backend, it should connect to the next one in the list. I have been using
reactor.connectTCP(host, port, factory) for the proxy until I came to this task, but it does not report an error when it cannot connect. How can I detect that it cannot connect and try another host, or should I use some other connection method?
You can use a deferred to do that:
class MyClientFactory(ClientFactory):
    protocol = ClientProtocol

    def __init__(self, request):
        self.request = request
        self.deferred = defer.Deferred()

    def handleReply(self, command, reply):
        # Handle the reply
        self.deferred.callback(0)

    def clientConnectionFailed(self, connector, reason):
        self.deferred.errback(reason)

def send(_, host, port, msg):
    factory = MyClientFactory(msg)
    reactor.connectTCP(host, port, factory)
    return factory.deferred

def finished(_):
    print "finished"

d = send(None, host1, port1, msg1)      # first attempt
d.addErrback(send, host2, port2, msg2)  # fallback if the first fails
# ...
d.addBoth(finished)
Each errback in the chain fires only if the previous attempt failed, so the connection fails over down the list; once an attempt succeeds, the remaining errbacks are skipped and control goes to the finished callback.
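Stripped of Twisted specifics, the failover logic is just "try each backend in order". A synchronous sketch of that idea, where `connect_with_failover` and the `connect` callable are hypothetical names of my own, not Twisted APIs:

```python
def connect_with_failover(backends, connect):
    """Try each (host, port) pair in order and return the first
    successful connection; re-raise the last error if all fail.

    `connect` is any callable (host, port) -> connection object that
    raises OSError on failure (hypothetical helper, not a Twisted API).
    """
    last_error = None
    for host, port in backends:
        try:
            return connect(host, port)
        except OSError as exc:
            last_error = exc
    raise last_error
```

The Deferred chain in the answer above implements the same loop asynchronously: each `addErrback` plays the role of one `except` branch that moves on to the next backend.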