I'm using RabbitMQ and ruby-amqp with Rails. When a request is received by a controller I perform the following:
def create
  AMQP.start("amqp://localhost:5672") do |connection|
    channel = AMQP::Channel.new(connection)
    exchange = channel.direct("")
    exchange.publish("some msg", :routing_key => "some key")
    EventMachine.add_timer(2) do
      exchange.delete
      connection.close { EventMachine.stop }
    end
  end
end
Is there a way to keep the AMQP connection open so I don't have to call start every time a request comes in?
I assume that opening a connection to RabbitMQ on every request is inefficient; however, I haven't found a way to pass a block of code to a persistent connection.
If you just want to keep the AMQP connection open, try setting a global variable so the connection is only created once.
def start_em
  EventMachine.run do
    $connection = AMQP.connect(CONNECTION_SETTING) unless $connection
    yield
  end
end
def publish(message, options = {})
  start_em {
    channel = AMQP::Channel.new($connection)
    exchange = channel.direct('')
    exchange.publish(message, {:routing_key => 'rails01'}.merge(options))
    EventMachine.add_timer(1) { exchange.delete }
  }
end
And don't forget to close the channel after you publish your message.
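For example, here is a minimal (untested) sketch of that cleanup, reusing the publish helper above. It closes the channel in the timer callback instead of deleting the exchange (RabbitMQ refuses attempts to delete the default exchange anyway):

def publish(message, options = {})
  start_em {
    channel = AMQP::Channel.new($connection)
    exchange = channel.direct('')
    exchange.publish(message, {:routing_key => 'rails01'}.merge(options))
    # give the publish a moment to go out, then release the channel
    EventMachine.add_timer(1) { channel.close }
  }
end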
I fill up my queue, check that it has the right number of tasks to work, and then have workers running in parallel, set to prefetch(1) to ensure each takes just one task at a time.
I want each worker to work its task, send a manual acknowledgement, and keep taking work from the queue as long as there is more.
If there is no more work, i.e. the queue is empty, I want the worker script to finish up and return(0).
So, this is what I have now:
require 'bunny'

connection = Bunny.new("amqp://my_conn")
connection.start
channel = connection.create_channel
queue = channel.queue('my_queue_name')
channel.prefetch(1)
puts ' [*] Waiting for messages.'
begin
  payload = 'init'
  until queue.message_count == 0
    puts "worker working, queue length is #{queue.message_count}"
    _delivery_info, _properties, payload = queue.pop
    unless payload.nil?
      puts " [x] Received #{payload}"
      raise "payload invalid" unless payload[/cucumber/]
      begin
        do_stuff(payload)
      rescue => e
        puts "Error running #{payload}: #{e.backtrace.join("\n")}"
        # failing stuff
      end
    end
    puts " [x] Done with #{payload}"
  end
  puts "done with queue"
  exit(0)
ensure
  connection.close
end
I still want to make sure the worker finishes when the queue is empty. This is the example from the RabbitMQ site: https://www.rabbitmq.com/tutorials/tutorial-two-ruby.html. It has a number of things we want for our work queue, most importantly manual acknowledgements. But it does not stop running, and I need that to happen programmatically when the queue is done:
#!/usr/bin/env ruby
require 'bunny'

connection = Bunny.new(automatically_recover: false)
connection.start
channel = connection.create_channel
queue = channel.queue('task_queue', durable: true)
channel.prefetch(1)
puts ' [*] Waiting for messages. To exit press CTRL+C'
begin
  queue.subscribe(manual_ack: true, block: true) do |delivery_info, _properties, body|
    puts " [x] Received '#{body}'"
    # imitate some work
    sleep body.count('.').to_i
    puts ' [x] Done'
    channel.ack(delivery_info.delivery_tag)
  end
rescue Interrupt => _
  connection.close
end
How can this script be adapted to exit when the queue has been completely worked (0 total and 0 unacked)?
From what I understand, you want your subscriber to stop once there are no pending messages left in the RabbitMQ queue.
Given your second script, you could avoid passing block: true; subscribe then returns instead of blocking, and you can exit the program when there is no more data to process.
You can see that in the documentation: http://rubybunny.info/articles/queues.html#blocking_or_nonblocking_behavior
By default it's non-blocking.
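If it helps, here is a minimal (untested) sketch that combines manual acknowledgements with exiting once the queue is drained. It swaps subscribe for queue.pop (basic.get), which returns nil entries once the queue is empty (this assumes Bunny 2.x and the same 'task_queue' as above):

require 'bunny'

connection = Bunny.new
connection.start
channel = connection.create_channel
queue = channel.queue('task_queue', durable: true)

loop do
  delivery_info, _properties, body = queue.pop(manual_ack: true)
  break if delivery_info.nil? # nothing left to fetch: the queue is drained
  puts " [x] Received '#{body}'"
  sleep body.count('.').to_i # imitate some work
  channel.ack(delivery_info.delivery_tag)
end

puts 'done with queue'
connection.close
exit(0)

Because each message is acknowledged before the next pop, an empty pop means this worker has nothing left to take and can safely exit.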
We are on akka-stream-experimental_2.11 1.0.
Inspired by the example, we wrote a TCP receiver as follows:
def bind(address: String, port: Int, target: ActorRef)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach[Tcp.IncomingConnection] { conn =>
    val serverFlow = Flow[ByteString]
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 256, allowTruncation = true))
      .map(message => {
        target ? new Message(message); ByteString.empty
      })
    conn handleWith serverFlow
  }
  val connections = Tcp().bind(address, port)
  connections.to(sink).run()
}
However, our intention was to have the receiver not respond at all and only sink the message. (The TCP message publisher does not care about the response.)
Is it even possible to not respond at all, given that akka.stream.scaladsl.Tcp.IncomingConnection takes a flow of type Flow[ByteString, ByteString, Unit]?
If yes, some guidance would be much appreciated. Thanks in advance.
The following attempt passes my unit tests, but I'm not sure it's the best idea:
def bind(address: String, port: Int, target: ActorRef)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach[Tcp.IncomingConnection] { conn =>
    val targetSubscriber = ActorSubscriber[Message](system.actorOf(Props(new TargetSubscriber(target))))
    val targetSink = Flow[ByteString]
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 256, allowTruncation = true))
      .map(Message(_))
      .to(Sink(targetSubscriber))
    conn.flow.to(targetSink).runWith(Source(Promise().future))
  }
  val connections = Tcp().bind(address, port)
  connections.to(sink).run()
}
You are on the right track. To keep the possibility of closing the connection at some point, you may want to keep the promise and complete it later on. Once it is completed with an element, that element is published by the source. However, as you don't want any element to be published on the connection, you can use drop(1) to make sure the source will never emit an element.
Here's an updated version of your example (untested):
val promise = Promise[ByteString]()

// this source will complete when the promise is fulfilled,
// or it will fail if the promise is completed with an error
val completionSource = Source(promise.future).drop(1)

completionSource // only used to complete later
  .via(conn.flow) // I reordered the flow for better readability (arguably)
  .runWith(targetSink)

// to close the connection later, complete the promise:
def closeConnection() = promise.success(ByteString.empty) // dummy element, will be dropped

// alternatively, to fail the connection later, complete it with an error:
def failConnection() = promise.failure(new RuntimeException)
I'm trying to publish messages with pika, using Celery tasks.
from celery import shared_task
from django.conf import settings
import json
import pika

@shared_task
def publish_message():
    params = pika.URLParameters(settings.BROKER_URL + '?socket_timeout=10&connection_attempts=2')
    conn = pika.BlockingConnection(parameters=params)
    channel = conn.channel()
    channel.exchange_declare(
        exchange='foo',
        type='topic'
    )
    channel.tx_select()
    channel.basic_publish(
        exchange='foo',
        routing_key='bar',
        body=json.dumps({'foo': 'bar'}),
        properties=pika.BasicProperties(content_type='application/json')
    )
    channel.tx_commit()
    conn.close()
This task is called from the views.
For some reason, seemingly at random, messages are not getting queued. In my case, every second message is getting dropped. What am I missing here?
I would recommend that you enable confirm_delivery in pika. This ensures that messages actually get delivered; if for some reason a message could not be delivered, pika will either raise an exception or return False.
channel.confirm_delivery()
successful = channel.basic_publish(...)
If the publish fails, you can try to send the message again, or log the error message from the exception so that you can act accordingly.
Try this:
channel = conn.channel()
try:
    channel.queue_declare(queue='foo')
except Exception:
    pass
channel.basic_publish(
    exchange='',
    routing_key='foo',
    body=json.dumps({'foo': 'bar'})
)
I have some simple code to put a few things on a queue:
val factory = new ConnectionFactory()
factory.setHost("localhost")
val connection = factory.newConnection()
val channel = connection.createChannel()
channel.basicPublish("", "myq", null, "AAA".getBytes())
channel.basicPublish("", "myq", null, "BBB".getBytes())
channel.basicPublish("", "myq", null, "CCC".getBytes())
channel.close()
connection.close()
This seems to work. After running this I can do 'rabbitmqctl list_queues' and see myq with 3 items in it.
Now (in a different process) I run reader code to grab just 1 element from the queue:
val factory = new ConnectionFactory()
factory.setHost("localhost")
val connection = factory.newConnection()
val channel = connection.createChannel()
channel.queueDeclare("myq", false, false, false, null)
val consumer = new QueueingConsumer(channel)
channel.basicConsume("myq", true, consumer)
// Grab just one message from queue
val delivery = consumer.nextDelivery()
val message = new String(delivery.getBody())
println(" [x] Received '" + message + "'")
channel.close()
connection.close()
This successfully retrieves the first item on the queue (AAA). But... now when I run 'rabbitmqctl list_queues' I see 0 items in my queue, and of course re-running my reader hangs/waits because the queue is now empty. Why did the other items in the queue disappear?
You don't seem to be using basicQos. With basicQos set to one (channel.basicQos(1) on the consumer channel), you can achieve what you want; otherwise RabbitMQ considers the prefetch setting to be unlimited and sends all the messages (or as many as it can) to the process that first did a basicConsume(). Note that the prefetch limit only applies when you consume with autoAck set to false and acknowledge messages manually; your reader currently passes true.
More info here: http://www.rabbitmq.com/tutorials/tutorial-two-java.html below "Fair Dispatch".
Just spent my first few hours looking at Redis and Redis MQ.
I'm slowly getting the hang of Redis, and was wondering: how can you resend a message that is in a dead letter queue?
Also, where are the configuration options which determine how many times a message is retried before it goes into the dead letter queue?
Currently, there's no way to automatically resend messages in the dead letter queue in ServiceStack. However, you can do this manually relatively easily.
You can reload messages from the dead letter queue like this:
public class AppHost {
    public override void Configure(Container container) {
        // create the hostMq ...
        var hostMq = new RedisMqHost(clients, retryCount: 2);
        // with retryCount: 2, 3 total attempts are made: the 1st try + 2 retries

        // before starting hostMq
        RecoverDLQMessages<TheMessage>(hostMq);

        // add handlers
        hostMq.RegisterHandler<TheMessage>(m =>
            this.ServiceController.ExecuteMessage(m));

        // start hostMq
        hostMq.Start();
    }
}
Which ultimately uses the following to recover (requeue) messages:
private void RecoverDLQMessages<T>(RedisMqHost hostMq)
{
    var client = hostMq.CreateMessageQueueClient();
    var errorQueue = QueueNames<T>.Dlq;
    log.InfoFormat("Recovering dead messages from: {0}", errorQueue);
    var recovered = 0;
    byte[] msgBytes;
    while ((msgBytes = client.Get(errorQueue, TimeSpan.FromSeconds(1))) != null)
    {
        var msg = msgBytes.ToMessage<T>();
        msg.RetryAttempts = 0;
        client.Publish(msg);
        recovered++;
    }
    log.InfoFormat("Recovered {0} messages from {1}", recovered, errorQueue);
}
Note
At the time of this writing, there's a possibility of ServiceStack losing messages. Please see Issue 229, and don't kill the process while it's moving messages from the DLQ (dead letter queue) back to the input queue: under the hood, ServiceStack is POPing messages from Redis.