[RabbitMQ][AMQP] Failing to get and read a single message with amqp_basic_get and amqp_read_message

I want to set up a consumer with AMQP to read from a specific queue. Some googling pointed out that this can be done with amqp_basic_get, and looking into the documentation, the actual message is then retrieved with amqp_read_message. I also found this example, which I tried to follow when implementing the basic_get. Nevertheless, I am failing to get and read a message from a specific queue.
My scenario is this: I have two programs that communicate by publishing to and consuming from the RabbitMQ server. In each, a connection is declared with two channels, one meant for consuming and one for publishing. The flow of information is as follows: program A gets the current time and publishes it to RabbitMQ. Upon receiving this message, program B gets its own time, packages its time and the received time into a message, and publishes it to RabbitMQ. Program A should then consume this message. However, I cannot succeed in reading from namedQueue.
Program A (in C++, using the rabbitmq-c library) is implemented as follows:
... after creating the connection
// Create channels
amqp_channel_open_ok_t *res = amqp_channel_open(conn, channelIDPub);
assert(res != NULL);
amqp_channel_open_ok_t *res2 = amqp_channel_open(conn, channelIDSub);
assert(res2 != NULL);
// Declare exchange
exchange = "exchangeName";
exchangetype = "direct";
amqp_exchange_declare(conn, channelIDPub, amqp_cstring_bytes(exchange.c_str()),
                      amqp_cstring_bytes(exchangetype.c_str()), 0, 0, 0, 0,
                      amqp_empty_table);
...
throw_on_amqp_error(amqp_get_rpc_reply(conn), printText.c_str());
// Declare the queue and bind it to the exchange
const char *qname = "namedQueue";
amqp_bytes_t queue = amqp_bytes_malloc_dup(amqp_cstring_bytes(qname));
if (queue.bytes == NULL) { // check the copy before using it
    fprintf(stderr, "Out of memory while copying queue name");
    return;
}
amqp_queue_declare_ok_t *r = amqp_queue_declare(
    conn, channelIDSub, queue, 0, 0, 0, 0, amqp_empty_table);
throw_on_amqp_error(amqp_get_rpc_reply(conn), "Declaring queue");
amqp_queue_bind(conn, channelIDSub, queue, amqp_cstring_bytes(exchange.c_str()),
                amqp_cstring_bytes(queueBindingKey.c_str()), amqp_empty_table);
throw_on_amqp_error(amqp_get_rpc_reply(conn), "Binding queue");
amqp_basic_consume(conn, channelIDSub, queue, amqp_empty_bytes, 0, 0, 1,
                   amqp_empty_table);
throw_on_amqp_error(amqp_get_rpc_reply(conn), "Consuming");
// ...
// In order to get a message from RabbitMQ
amqp_rpc_reply_t res, res2;
amqp_message_t message;
amqp_boolean_t no_ack = false;
amqp_maybe_release_buffers(conn);
printf("were here, with queue name %s, on channel %d\n", queueName, channelIDSub);
amqp_time_t deadline;
struct timeval timeout = {1, 0}; // same timeout used in consume(json)
int time_rc = amqp_time_from_now(&deadline, &timeout);
assert(time_rc == AMQP_STATUS_OK);
do {
    res = amqp_basic_get(conn, channelIDSub, amqp_cstring_bytes("namedQueue"), no_ack);
} while (res.reply_type == AMQP_RESPONSE_NORMAL &&
         res.reply.id == AMQP_BASIC_GET_EMPTY_METHOD &&
         amqp_time_has_past(deadline) == AMQP_STATUS_OK);
if (AMQP_RESPONSE_NORMAL != res.reply_type || AMQP_BASIC_GET_OK_METHOD != res.reply.id) {
    printf("amqp_basic_get error codes amqp_response_normal %d, amqp_basic_get_ok_method %d\n",
           res.reply_type, res.reply.id);
    return false;
}
// Read the message body on the same channel the basic.get was issued on
res2 = amqp_read_message(conn, channelIDSub, &message, 0);
printf("error %s\n", amqp_error_string2(res2.library_error));
printf("5:reply type %d\n", res2.reply_type);
if (AMQP_RESPONSE_NORMAL != res2.reply_type) {
    printf("6:reply type %d\n", res2.reply_type);
    return false;
}
payload = std::string(reinterpret_cast<char const *>(message.body.bytes), message.body.len);
printf("then were here\n %s", payload.c_str());
amqp_destroy_message(&message);
Program B (in Python) is as follows:
#!/usr/bin/env python3
import pika
import json
from datetime import datetime, timezone
import time
import threading

cosimTime = 0.0
newData = False
lock = threading.Lock()
thread_stop = False

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
connectionPublish = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channelConsume = connection.channel()
channelPublish = connectionPublish.channel()

print("Declaring exchange")
channelConsume.exchange_declare(exchange='exchangeName', exchange_type='direct')
channelPublish.exchange_declare(exchange='exchangeName', exchange_type='direct')

print("Creating queue")
result = channelConsume.queue_declare(queue='', exclusive=True)
queue_name = result.method.queue
result2 = channelPublish.queue_declare(queue='namedQueue', exclusive=False, auto_delete=False)

channelConsume.queue_bind(exchange='exchangeName', queue=queue_name,
                          routing_key='fromB')
channelPublish.queue_bind(exchange='exchangeName', queue="namedQueue",
                          routing_key='toB')

print(' [*] Waiting for logs. To exit press CTRL+C')

def callbackConsume(ch, method, properties, body):
    global newData, cosimTime
    print("\nReceived [x] %r" % body)
    #cosimTime = datetime.datetime.strptime(body, "%Y-%m-%dT%H:%M:%S.%f%z")
    with lock:
        newData = True
        cosimTime = body.decode()
        cosimTime = json.loads(cosimTime)
        #print(cosimTime)

def publishRtime():
    global newData
    while not thread_stop:
        if newData:
        #if True:
            with lock:
                newData = False
                msg = {}
                msg['rtime'] = datetime.now(timezone.utc).astimezone().isoformat(timespec='milliseconds')
                msg['cosimtime'] = cosimTime["simAtTime"]
                print("\nSending [y] %s" % str(msg))
                channelPublish.basic_publish(exchange='exchangeName',
                                             routing_key='toB',
                                             body=json.dumps(msg))
        #time.sleep(1)

channelConsume.basic_consume(
    queue=queue_name, on_message_callback=callbackConsume, auto_ack=True)

try:
    thread = threading.Thread(target=publishRtime)
    thread.start()
    channelConsume.start_consuming()
except KeyboardInterrupt:
    print("Exiting...")
    channelConsume.stop_consuming()
    thread_stop = True
    connection.close()
What program A outputs is:
amqp_basic_get error codes amqp_response_normal 1, amqp_basic_get_ok_method 3932232
which is the code for AMQP_BASIC_GET_EMPTY_METHOD.
Program B gets the data and publishes continuously.
If I slightly modify B to simply publish a fixed string all the time, then amqp_basic_get returns successfully, but it then fails at amqp_read_message with AMQP_RESPONSE_LIBRARY_EXCEPTION.
Any idea how to get this to work, or what I am missing in the setup?

The issue was in queue_declare: the auto_delete parameter did not match on both sides. RabbitMQ requires a queue to be redeclared with exactly the same properties (durable, exclusive, auto_delete, arguments); a mismatch makes the broker reject the declaration with a channel error (406 PRECONDITION_FAILED), after which operations on that channel fail.
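For illustration, here is a minimal pika sketch of the fix, assuming only the names and flags already shown in the question: both sides must declare namedQueue with identical properties, matching the C side's amqp_queue_declare(conn, channelIDSub, queue, 0, 0, 0, 0, amqp_empty_table), where passive, durable, exclusive, and auto_delete are all off.
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = connection.channel()

# Must mirror the C side's amqp_queue_declare flags exactly;
# any differing property is rejected with 406 PRECONDITION_FAILED.
channel.queue_declare(queue='namedQueue',
                      durable=False,
                      exclusive=False,
                      auto_delete=False)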

Related

How to enable TLS on a socket created with asyncio.start_server

I created a server in Python using asyncio.start_server, but what I want is to enable TLS for this server after its creation. I know it's possible to enable TLS via the ssl parameter when creating the server, but I want to do it later in the code.
I saw How do I enable TLS on an already connected Python asyncio stream?, but it doesn't help me understand what to do.
Thank you for your kind answer and your time reading this.
The code for the server:
# ... init some parameters for the server

async def start_tls_connection(client_reader: asyncio.StreamReader, client_writer: asyncio.StreamWriter):
    global ID
    global msgList
    global onlineList
    global socket_server

    data = await PStream.read_from_stream(streamObject=client_reader, debugMessage='Got message from client :')
    if data == None:
        return -1  # Handshake failure

    # before proceeding we must upgrade the socket of the server to one that enables tls :
    context = get_context()
    logging.debug('socket before = ' + str(socket_server.sockets[0]._sock))
    logging.debug(str(context))
    socket_server.sockets[0]._sock = context.wrap_socket(socket_server.sockets[0]._sock)
    logging.debug('socket after = ' + str(ssl_sock))
    # upgrade socket here

    data = await PStream.write_from_stream(streamObject=client_writer,
                                           msg='<proceed xmlns="urn:ietf:params:xml:ns:xmpp-tls"/>',
                                           debugMessage='Answer sent from server :')
    if data == '':
        return -1  # Handshake failure

    # waiting for client to send a new stream header
    data = await PStream.read_from_stream(streamObject=client_reader, debugMessage='Got message from client :')
    if data == None:
        return -1  # Handshake failure

# ... other methods for handshake

async def on_connect_client(client_reader: asyncio.StreamReader, client_writer: asyncio.StreamWriter):
    logging.debug("START OF HANDSHAKE\nFIRST STEP: STREAM SESSION")
    res = await stream_init(client_reader, client_writer)
    if res == -1:
        return res
    logging.debug("START OF TLS HANDSHAKE")
    res = await start_tls_connection(client_reader, client_writer)
    if res == -1:
        return res
    logging.debug("START OF AUTHENTIFICATION")
    res = await handshake_auth(client_reader, client_writer)
    if res == -1:
        return res
    logging.debug("START OF ROOSTER PRESENCE")
    res = await handshake_rooster_presence(client_reader, client_writer)
    if res == -1:
        return res
    logging.debug("Waiting for request...")
    while True:
        data = await PStream.read_from_stream(streamObject=client_reader, debugMessage='Got message from client :')
        if data == None:
            break

async def server_start():
    global socket_server
    socket_server = await asyncio.start_server(on_connect_client, IP, PORT)  # ssl=ssl_context not here yet
    logging.debug("Server just started")
    async with socket_server:
        await asyncio.gather(socket_server.serve_forever())

if __name__ == '__main__':
    try:
        asyncio.run(server_start())
    except Exception as e:
        print("Exception: ", str(e))
The line
socket_server.sockets[0]._sock = context.wrap_socket(socket_server.sockets[0]._sock)
is where I have a problem: I don't know how to enable TLS on an already existing asyncio.start_server object.
The get_context() method works fine, so don't worry about it, by the way.
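For reference, asyncio does support upgrading an established connection to TLS in place: loop.start_tls() (Python 3.7+) wraps an existing transport, and StreamWriter.start_tls() (Python 3.11+) does the same at the stream level. The upgrade applies to each accepted connection inside the handler, not to the listening socket, which is why swapping the server's _sock does not work. A minimal sketch, assuming Python 3.11+ and a certificate-loading helper standing in for the question's get_context(); the port is illustrative:
import asyncio
import ssl

def get_context() -> ssl.SSLContext:
    # hypothetical stand-in for the question's get_context()
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain('server.crt', 'server.key')
    return ctx

async def on_connect(reader: asyncio.StreamReader, writer: asyncio.StreamWriter):
    # plaintext negotiation happens first, e.g. the XMPP STARTTLS exchange
    writer.write(b'<proceed xmlns="urn:ietf:params:xml:ns:xmpp-tls"/>')
    await writer.drain()
    # upgrade this one connection in place (Python 3.11+);
    # on older versions, call loop.start_tls() on writer.transport instead
    await writer.start_tls(get_context())
    # reader and writer now run over TLS
    data = await reader.readline()

async def main():
    server = await asyncio.start_server(on_connect, '127.0.0.1', 5222)
    async with server:
        await server.serve_forever()

asyncio.run(main())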

Read 'hidden' input for CLI Dart app

What's the best way to receive 'hidden' input from a command-line Dart application? For example, in Bash, this is accomplished with:
read -s SOME_VAR
Set io.stdin.echoMode to false:
import 'dart:io' as io;

void main() {
  io.stdin.echoMode = false;
  String input = io.stdin.readLineSync();
  // or
  var input;
  while (input != 32) {
    input = io.stdin.readByteSync();
    if (input != 10) print(input);
  }
  // restore echoMode
  io.stdin.echoMode = true;
}
This is a slightly extended version; the key difference is that it uses a finally block to ensure the echo mode is reset if an exception is thrown while the code is executing.
The code also uses a waitFor call (only available in Dart CLI apps) to turn this code into a synchronous call. Given this is a CLI command, there is no need for the complications that futures bring to the table.
The code also does the classic output of '*' as you type.
If you are doing much CLI work, the code below is from the Dart package I'm working on, called dcli. Have a look at the 'ask' method.
https://pub.dev/packages/dcli
import 'dart:cli';      // waitFor (Dart CLI apps only)
import 'dart:convert';  // Encoding
import 'dart:io';

String readHidden() {
  var line = <int>[];
  try {
    stdin.echoMode = false;
    stdin.lineMode = false;
    int char;
    do {
      char = stdin.readByteSync();
      if (char != 10) {
        stdout.write('*');
        // we must wait for flush as only one flush can be outstanding at a time.
        waitFor<void>(stdout.flush());
        line.add(char);
      }
    } while (char != 10);
  } finally {
    stdin.echoMode = true;
    stdin.lineMode = true;
  }
  // output a newline as we have suppressed it.
  print('');
  return Encoding.getByName('utf-8').decode(line);
}

How do I create a TCP receiver that only consumes messages using Akka Streams?

We are on: akka-stream-experimental_2.11 1.0.
Inspired by the example, we wrote a TCP receiver as follows:
def bind(address: String, port: Int, target: ActorRef)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach[Tcp.IncomingConnection] { conn =>
    val serverFlow = Flow[ByteString]
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 256, allowTruncation = true))
      .map(message => {
        target ? new Message(message); ByteString.empty
      })
    conn handleWith serverFlow
  }
  val connections = Tcp().bind(address, port)
  connections.to(sink).run()
}
However, our intention was to have the receiver not respond at all and only sink the message. (The TCP message publisher does not care about a response.)
Is it even possible not to respond at all, given that akka.stream.scaladsl.Tcp.IncomingConnection takes a flow of type Flow[ByteString, ByteString, Unit]?
If yes, some guidance would be much appreciated. Thanks in advance.
One attempt, shown below, passes my unit tests, but I am not sure if it's the best idea:
def bind(address: String, port: Int, target: ActorRef)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach[Tcp.IncomingConnection] { conn =>
    val targetSubscriber = ActorSubscriber[Message](system.actorOf(Props(new TargetSubscriber(target))))
    val targetSink = Flow[ByteString]
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = 256, allowTruncation = true))
      .map(Message(_))
      .to(Sink(targetSubscriber))
    conn.flow.to(targetSink).runWith(Source(Promise().future))
  }
  val connections = Tcp().bind(address, port)
  connections.to(sink).run()
}
You are on the right track. To keep the possibility of closing the connection at some point, you may want to keep the promise and complete it later on. Once the promise is completed with an element, that element is published by the source. However, since you don't want any element to be published on the connection, you can use drop(1) to make sure the source never emits one.
Here's an updated version of your example (untested):
val promise = Promise[ByteString]()
// this source will complete when the promise is fulfilled,
// or it will complete with an error if the promise is completed with an error
val completionSource = Source(promise.future).drop(1)

completionSource      // only used to complete later
  .via(conn.flow)     // I reordered the flow for better readability (arguably)
  .runWith(targetSink)

// to close the connection later, complete the promise:
def closeConnection() = promise.success(ByteString.empty) // dummy element, will be dropped

// alternatively, to fail the connection later, complete it with an error:
def failConnection() = promise.failure(new RuntimeException)

How to recover from akka.stream.io.Framing$FramingException

On: akka-stream-experimental_2.11 1.0.
We are using Framing.delimiter in a TCP server. When a message arrives with a length greater than maximumFrameLength, a FramingException is thrown, and we can capture it from OnError of the ActorSubscriber.
Server Code:
def bind(address: String, port: Int, target: ActorRef, maxInFlight: Int, maxFrameLength: Int)
        (implicit system: ActorSystem, actorMaterializer: ActorMaterializer): Future[ServerBinding] = {
  val sink = Sink.foreach { conn: Tcp.IncomingConnection =>
    val targetSubscriber = ActorSubscriber[Message](system.actorOf(Props(new TargetSubscriber(target, maxInFlight))))
    val targetSink = Flow[ByteString]
      .via(Framing.delimiter(ByteString("\n"), maximumFrameLength = maxFrameLength, allowTruncation = true))
      .map(raw ⇒ Message(raw))
      .to(Sink(targetSubscriber))
    conn.flow.to(targetSink).runWith(Source(Promise().future))
  }
  val connections = Tcp().bind(address, port)
  connections.to(sink).run()
}
Subscriber code:
class TargetSubscriber(target: ActorRef, maxInFlight: Int) extends ActorSubscriber with ActorLogging {
  private var inFlight = 0

  override protected def requestStrategy = new MaxInFlightRequestStrategy(maxInFlight) {
    override def inFlightInternally = inFlight
  }

  override def receive = {
    case OnNext(msg: Message) ⇒
      target ! msg
      inFlight += 1
    case OnError(t) ⇒
      inFlight -= 1
      log.error(t, "Subscriber encountered error")
    case TargetAck(_) ⇒
      inFlight -= 1
  }
}
Problem:
Messages that are under the max frame length do not flow after this exception for that incoming connection. Killing the client and rerunning it works fine.
ActorSubscriber does not honor supervision.
What is the correct way to skip the bad message and continue with the next good message?
Have you tried putting supervision on the targetFlow sink instead of on the whole materializer? I don't see it anywhere here, and I believe it should be set on that flow directly.
Still, this is more a guess than science ;)
I had the same exception reading from a file, and for me it was solved by adding a newline after the last line of the file.

How to resend from Dead Letter Queue using Redis MQ?

Just spent my first few hours looking at Redis and Redis MQ.
Slowly getting the hang of Redis, and I was wondering how you can resend a message that is in a dead letter queue?
Also, where are the configuration options that determine how many times a message is retried before it goes into the dead letter queue?
Currently, there's no way to automatically resend messages in the dead letter queue in ServiceStack. However, you can do this manually relatively easily.
To reload messages from the dead letter queue, use:
public class AppHost {
    public override void Configure(Container container) {
        // create the hostMq ...
        var hostMq = new RedisMqHost(clients, retryCount: 2);
        // with retryCount = 2, 3 total attempts are made: the 1st + 2 retries

        // before starting hostMq
        RecoverDLQMessages<TheMessage>(hostMq);

        // add handlers
        hostMq.RegisterHandler<TheMessage>(m =>
            this.ServiceController.ExecuteMessage(m));

        // start hostMq
        hostMq.Start();
    }
}
Which ultimately uses the following to recover (requeue) messages:
private void RecoverDLQMessages<T>(RedisMqHost hostMq)
{
    var client = hostMq.CreateMessageQueueClient();
    var errorQueue = QueueNames<T>.Dlq;
    log.Info("Recovering Dead Messages from: {0}", errorQueue);
    var recovered = 0;
    byte[] msgBytes;
    while ((msgBytes = client.Get(errorQueue, TimeSpan.FromSeconds(1))) != null)
    {
        var msg = msgBytes.ToMessage<T>();
        msg.RetryAttempts = 0;
        client.Publish(msg);
        recovered++;
    }
    log.Info("Recovered {0} from {1}", recovered, errorQueue);
}
Note
At the time of this writing, there's a possibility of ServiceStack losing messages. Please see Issue 229; don't kill the process while it's moving messages from the DLQ (dead letter queue) back to the input queue. Under the hood, ServiceStack is POPing messages from Redis, so a message that has been popped but not yet republished is lost if the process dies at that moment.
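If you want to avoid that window entirely, the underlying Redis primitive RPOPLPUSH moves an entry between two lists atomically, so a crash mid-transfer cannot drop it. A rough redis-py sketch under stated assumptions: the key names below follow ServiceStack's mq:{TypeName}.dlq / mq:{TypeName}.inq convention but are hypothetical, and unlike the C# snippet above this moves the raw payload as-is without resetting RetryAttempts:
import redis

r = redis.Redis(host='localhost', port=6379)

# hypothetical key names; verify against your broker, e.g. with r.keys('mq:*')
dlq = 'mq:TheMessage.dlq'
inq = 'mq:TheMessage.inq'

moved = 0
# RPOPLPUSH removes from the DLQ and appends to the input queue in one
# atomic server-side step, so the message always lives in exactly one list.
while r.rpoplpush(dlq, inq) is not None:
    moved += 1
print('Recovered %d messages from %s' % (moved, dlq))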