SOLVED: Handle multiple AMQP patterns inside one Rails service using the Bunny gem - ruby-on-rails-3

SOLVED PROBLEM.
Just create two different queues, such as rpc.queue and pubsub.queue. You can then use multiple messaging patterns in one service without any problem.
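In other words, give each pattern its own queue and bindings, so the RPC consumer and the pub/sub subscriber never compete for the same messages. A minimal sketch of that layout, in Python with pika just for illustration (all names here are hypothetical; my real service does the equivalent with Bunny):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# One queue per messaging pattern.
channel.queue_declare(queue='rpc.queue')
channel.queue_declare(queue='pubsub.queue')

# A direct exchange for request/reply, a fanout exchange for broadcasts.
channel.exchange_declare(exchange='service.rpc', exchange_type='direct')
channel.exchange_declare(exchange='service.pubsub', exchange_type='fanout')

channel.queue_bind(queue='rpc.queue', exchange='service.rpc', routing_key='rpc')
channel.queue_bind(queue='pubsub.queue', exchange='service.pubsub')

connection.close()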
I created a Rails service using the Bunny and ConnectionPool gems. This service is meant (it is not yet implemented) to handle multiple RMQ patterns such as direct messaging and RPC. The patterns are initialized with different objects of the Connection class, defined inside the initializer folder.
The initializer looks like this:
# RMQ Initializer for RabbitMQ Connection
class RMQ
  include Lontara::RMQ

  # NOTE: Calling the 2 servers together caused errors
  def self.start(url:, queue:, rpc_exchange:, pubsub_exchange:)
    # Then start the consumer and subscriber
    Server::RPCConsumer.new(Connection.new(url:), queue:, exchange: rpc_exchange).consume
    Server::Subscriber.new(Connection.new(url:), queue:, exchange: pubsub_exchange).subscribe
  end
end
RMQ.start(
  url: ENV.fetch('RABBITMQ_URL', 'amqp://guest:guest@rmqserver:5672'),
  queue: ENV.fetch('RABBITMQ_QUEUE_VOUCHER', 'lontara-dev.voucher'),
  rpc_exchange: ENV.fetch('RABBITMQ_EXCHANGE_RPC', 'lontara-dev.rpc'),
  pubsub_exchange: ENV.fetch('RABBITMQ_EXCHANGE_PUBSUB', 'lontara-dev.pubsub')
)
and the Connection class:
module Lontara
  module RMQ
    # Class Connection initializing the connection to RabbitMQ.
    class Connection
      def initialize(url: ENV['RABBITMQ_URL'])
        @connection = Bunny.new(url)
        connection.start

        @channel = channel_pool.with(&:create_channel)

        yield self if block_given?
      end

      def close
        channel.close
        connection.close
      end

      attr_reader :connection, :channel

      private

      def channel_pool
        @channel_pool ||= ConnectionPool.new { @connection }
      end
    end
  end
end
The problem appears whenever these 2 Server:: classes (RPC and Subscriber) are activated. Only RPC messaging is impacted: the RPC Publisher does not get a response from the Consumer.
The steps (when RPC produces the error) are:
1. Run the Rails server.
2. Open a new terminal and open a Rails console in the same project.
3. Create a request to the Consumer using RPCPublisher.
4. The Publisher gets a response. Then send the request again... On this step it gets no response.
5. The job is pending, so I press Ctrl+C to terminate it. I send the request again and get a response...
6. Try again as in step 4, and the error occurs again...
But if Server::Publisher is not initialized in the initializer, no error happens.
I assumed this error happened because of threading... but the articles I found on the internet didn't really help.
My expectation is simple:
The RPC connection handles Get-related requests (because RPC can reply to them) and any other action that requires a response, while Pub/Sub (direct) handles Create, Update, and Delete, since those don't need a response.
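For reference, the request/reply flow I want on the RPC side is the classic RabbitMQ pattern, sketched here in Python with pika for compactness (queue name and payload are hypothetical; my real code uses Bunny): the client sends reply_to and a correlation_id, and the consumer publishes its answer back to that reply queue.

import uuid
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Exclusive, auto-named queue where the consumer should send its reply.
callback_queue = channel.queue_declare(queue='', exclusive=True).method.queue
corr_id = str(uuid.uuid4())

channel.basic_publish(
    exchange='',
    routing_key='rpc.queue',  # hypothetical RPC queue name
    properties=pika.BasicProperties(reply_to=callback_queue,
                                    correlation_id=corr_id),
    body='get voucher')

# The consumer is expected to answer on callback_queue with the same
# correlation_id, which the client matches before accepting the reply.
for method, props, body in channel.consume(callback_queue, auto_ack=True):
    if props.correlation_id == corr_id:
        print('got reply:', body)
        break
channel.cancel()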
Your answer really helped me... Thank you!

Related

Task queues and result queues with Celery and Rabbitmq

I have implemented Celery with RabbitMQ as broker. I rely on Celery v4.4.7 since I have read that v5.0+ doesn't support RabbitMQ anymore. RabbitMQ is a MUST in my case.
Everything has been containerized and deployed as pods within Kubernetes 1.19. I am able to execute long-running tasks, and everything apparently looks fine at first glance. However, I have a few concerns which require your expertise.
I have declared inbound and outbound queues, but Celery created its own, and I do not see any messages within those queues (inbound or outbound):
from celery import Celery
from kombu import Exchange, Queue

inbound_queue = "_IN"
outbound_queue = "_OUT"

app = Celery()
app.conf.update(
    broker_url = 'pyamqp://%s//' % path,
    broker_heartbeat = None,
    broker_connection_timeout = int(timeout),
    result_backend = 'rpc://',
    result_persistent = True,
    task_queues = (
        Queue(algorithm_queue, Exchange(inbound_queue), routing_key='default', auto_delete=False),
        Queue(result_queue, Exchange(outbound_queue), routing_key='default', auto_delete=False),
    ),
    task_default_queue = inbound_queue,
    task_default_exchange = inbound_exchange,
    task_default_exchange_type = 'direct',
    task_default_routing_key = 'default',
)

@app.task(bind=True,
          name='osmq.tasks.add',
          queue=inbound_queue,
          reply_to=outbound_queue,
          autoretry_for=(Exception,),
          retry_kwargs={'max_retries': 5, 'countdown': 2})
def execute(self, data):
    <method_implementation>
I have implemented callbacks to get results back via REST APIs. However, it randomly may or may not return results even when the status is successful. This is probably related to message persistence. In detail: when I use the Flower API to get info, the status is successful and the result is partially displayed (shortened JSON messages); when I call AsyncResult for the same status, the result is either None or the right one. I do not understand the mechanism between RabbitMQ queues and kombu, which seems to cache the resulting message. I must guarantee that results can be retrieved every time a task has been successfully executed.
def callback(uuid):
    task = app.AsyncResult(uuid)
    # task.result is None until the reply has arrived on this client's queue
    return task.result
Specifically, it was that Celery 5.0+ no longer supported amqp:// as a result backend. However, as in your example, rpc:// is supported.
The relevant snippet is here: https://docs.celeryproject.org/en/stable/getting-started/backends-and-brokers/index.html#rabbitmq
We tend to always use ignore_results=True in our implementation, so I can't give any practical tips on how to use rpc://, other than to infer that any response is put on an application-specific queue, instead of being able to be put on a specified queue (or even a different broker / RabbitMQ instance) via amqp://.
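For what it's worth, here is a minimal self-contained sketch of the rpc:// backend (module name and broker URL are hypothetical). The reply lands on a per-client reply queue that Celery manages itself, and the docs note that an rpc:// result can be retrieved only once, and only by the client that sent the task:

from celery import Celery

# Hypothetical broker URL; start a worker first: celery -A osmq_demo worker
app = Celery('osmq_demo',
             broker='pyamqp://guest:guest@localhost//',
             backend='rpc://')

@app.task
def add(x, y):
    return x + y

if __name__ == '__main__':
    res = add.delay(2, 3)
    # get() consumes the reply from this client's private reply queue.
    print(res.get(timeout=10))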

Can't get Messages from ActiveMQ Queue

I'm running ActiveMQ 5.14.5. I have a Queue with some Pending messages. Screenshot from the console:
There are no active consumers.
The console reports that there are 21651 messages. However, if I try to view them, the list appears to be empty:
Furthermore, when I try to call receive() on my org.apache.activemq.jms.pool.PooledConnection, it blocks and receives no messages.
I'm fairly sure the messages are there and should be retrievable. This used to work and has stopped working.
Is there an explanation for this? There aren't any errors in the log.
Edit:
I'm using the Java client from Clojure. I didn't want to share the code because it might confuse matters, but here it is. I'm using a pooled factory in a couple of different threads, but I think the console example above is self-contained.
(let [factory (org.apache.activemq.ActiveMQConnectionFactory.
                "Username"
                "Password"
                "URI")
      pooled-connection-factory (org.apache.activemq.jms.pool.PooledConnectionFactory.)]
  (.setConnectionFactory pooled-connection-factory factory)
  (.start pooled-connection-factory)
  (with-open [connection (.createConnection factory)]
    (let [session (.createSession connection false javax.jms.Session/AUTO_ACKNOWLEDGE)
          destination (.createQueue session (:queue-name config))
          consumer (.createConsumer session destination)]
      (.start connection)
      (loop [message (.receive consumer)]
        (println (.getText ^org.apache.activemq.command.ActiveMQTextMessage message))
        (recur (.receive consumer))))))

RabbitMQ routing key does not route

I'm trying to build a simple message queue with RabbitMQ. I push a message with create_message
and then try to get the message by its routing key.
It works great when the routing key is the same. The problem is that when the routing key is different, I still keep getting the message despite the wrong routing key.
For example:
def callback(ch, method, properties, body):
    print("%r:%r" % (method.routing_key, body))

def create_message(self):
    connection = pika.BlockingConnection(pika.ConnectionParameters(
        'localhost'))
    channel = connection.channel()
    channel.exchange_declare(exchange='www')
    channel.queue_declare(queue='hello')
    channel.basic_publish(exchange='www',
                          routing_key="11",
                          body='Hello World1111!')
    connection.close()
    self.get_analysis_task_celery()

def get_message(self):
    connection = pika.BlockingConnection(pika.ConnectionParameters(
        'localhost'))
    channel = connection.channel()
    channel.exchange_declare(exchange='www')
    timeout = 1
    connection.add_timeout(timeout, on_timeout)
    channel.queue_bind(exchange="www", queue="hello", routing_key="10")
    channel.basic_consume(callback,
                          queue='hello',
                          no_ack=True,
                          consumer_tag="11")
    channel.start_consuming()
Example of my output: '11':'Hello World1111!'
What am I doing wrong?
Thanks for the help!
This is a total guess, since I can't see your RabbitMQ server...
If you open the RabbitMQ management website and look at your exchange, you will probably see that the exchange is bound to the queue for both routing keys 10 and 11, and both bindings point at the same queue.
Since both go to the same queue, your message will always be delivered to that queue, and the consumer will always pick up the message.
Again, I'm guessing since I can't see your server, but check the server to make sure you don't have leftover / extra bindings.
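If you want the routing key to actually discriminate, the usual fix is a direct exchange with one queue per key, roughly like this (queue names are made up):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# A direct exchange delivers a message only to queues whose binding
# key exactly matches the message's routing key.
channel.exchange_declare(exchange='www', exchange_type='direct')

channel.queue_declare(queue='hello.10')
channel.queue_declare(queue='hello.11')
channel.queue_bind(exchange='www', queue='hello.10', routing_key='10')
channel.queue_bind(exchange='www', queue='hello.11', routing_key='11')

# This message now reaches only hello.11, not hello.10.
channel.basic_publish(exchange='www', routing_key='11', body='Hello World1111!')
connection.close()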

How to setup Phoenix PubSub subscriber callbacks

I've got a fairly straightforward requirement centered around 2 services (for now) built in Phoenix:
ServiceA is responsible for registering users. When a user is registered, ServiceA broadcasts a message with info about the newly created user. This is being done using the following code in a Controller action right now:
ServiceA.Endpoint.broadcast("activity:all", "new:user", %{email: "test@test.com"})
ServiceB is responsible for listening out for all of these activity broadcasts and doing something with them (essentially building up a feed of activity).
I've hit a stumbling block in that I can see ServiceA broadcasting the message to Redis (using Phoenix.PubSub.Redis), but I don't fully understand how to get the subscriber on ServiceB to process it.
The following piece of code is as far as I've managed to get; it does something when a message is broadcast, then raises an exception.
Partial Subscriber Module
defmodule ServiceB.UserSubscriber do
  def start_link do
    sub = spawn_link &(process_feed/0)
    ServiceB.Endpoint.subscribe(:user_pubsub, "activity:all")
    {:ok, sub}
  end

  def process_feed do
    receive do
      params ->
        IO.inspect "processing goes here..."
    end
    process_feed
  end
end
Exception
[error] GenServer :user_pubsub terminating
** (FunctionClauseError) no function clause matching in Phoenix.PubSub.RedisServer.handle_info/2
I'm guessing I've missed a whole load of GenServer work somewhere, but can't seem to find anything online that suggests where.
The problem (as expected) was that my Subscriber module wasn't implemented as a GenServer, while I was trying to replicate the same functionality (and badly!). Updating my Subscriber module as follows has done the trick:
defmodule SubscriberService.ActivitySubscriber do
  use GenServer

  def start_link(channel) do
    GenServer.start_link(__MODULE__, channel)
  end

  def init(channel) do
    pid = self
    ref = SubscriberService.Endpoint.subscribe(pid, channel)
    {:ok, {pid, channel, ref}}
  end

  def handle_info(%{event: "new:user"} = message, state) do
    IO.inspect "#######################"
    IO.inspect "New User - Received Message:"
    IO.inspect message
    IO.inspect "#######################"
    {:noreply, state}
  end

  def handle_info(message, state) do
    IO.inspect "#######################"
    IO.inspect "Catch All - Received Message:"
    IO.inspect message
    IO.inspect "#######################"
    {:noreply, state}
  end
end
As you can see, init/1 triggers the subscription, and the handle_info/2 functions receive the incoming messages.
If you want to see how it works in all its glory (both Publisher and Subscriber services), take a look at the repo.

How do TLS connections in EventMachine work?

I have a custom Protobuf-based protocol that I've implemented as an EventMachine protocol and I'd like to use it over a secure connection between the server and clients. Each time I send a message from a client to the server, I prepend the message with a 4-byte integer representing the size of the Protobuf serialized string to be sent such that the server knows how many bytes to read off the wire before parsing the data back into a Protobuf message.
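Concretely, the framing I'm describing looks like this sketch (Python is used just for illustration, and big-endian/network byte order is my assumption since I haven't shown my pack/unpack code; the actual client is Ruby):

import struct

def frame(message: bytes) -> bytes:
    # Prefix the serialized protobuf with its length as a 4-byte
    # unsigned big-endian integer, so the reader knows how much to take.
    return struct.pack('>I', len(message)) + message

def read_frame(buffer: bytes):
    # Returns (message, remaining_buffer), or (None, buffer) if incomplete.
    if len(buffer) < 4:
        return None, buffer
    (size,) = struct.unpack('>I', buffer[:4])
    if len(buffer) < 4 + size:
        return None, buffer
    return buffer[4:4 + size], buffer[4 + size:]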
I'm calling start_tls in the post_init callback method in both the client and server protocol handlers, with the one in the server handler being passed the server's private key and certificate. There seems to be no errors happening at this stage, based on log messages I'm printing out.
Where I get into trouble is when I begin parsing data in the receive_data callback in the server's handler code... I read 4 bytes of data off the wire and unpack it to an integer, but the integer that gets unpacked is not the same integer I send from the client (i.e. I'm sending 17, but receiving 134222349).
Note that this does not happen when I don't use TLS... everything works fine if I remove the start_tls calls in both the client and server code.
Is it the case that SSL/TLS data gets passed to the receive_data callback when TLS is used? If so, how do I know when data from the client begins? I can't seem to find any example code that discusses this use case...
OK, so via a cross-post to the EventMachine Google Group I figured out what my problem was here. Essentially, I was trying to send data from the client to the server before the TLS handshake was done because I wasn't waiting until the ssl_handshake_completed callback was called.
Here's the code I got to work, just in case anyone comes across this post in the future. :)
Handler code for the server-side:
require 'eventmachine'

class ServerHandler < EM::Connection
  def post_init
    start_tls :private_key_file => 'server.key', :cert_chain_file => 'server.crt', :verify_peer => false
  end

  def receive_data(data)
    puts "Received data in server: #{data}"
    send_data(data)
  end
end
Handler code for the client-side:
require 'eventmachine'

class ClientHandler < EM::Connection
  def connection_completed
    start_tls
  end

  def receive_data(data)
    puts "Received data in client: #{data}"
  end

  def ssl_handshake_completed
    send_data('Hello World! - 12345')
  end
end
Code to start server:
EventMachine.run do
  puts 'Starting server...'
  EventMachine.start_server('127.0.0.1', 45123, ServerHandler)
end
Code to start client:
EventMachine.run do
  puts 'Starting client...'
  EventMachine.connect('127.0.0.1', 45123, ClientHandler)
end