Documentation for the fields in the JSON response of the RabbitMQ Management API

I am trying to monitor all the queues of my RabbitMQ server using the RabbitMQ Management HTTP API. I need detailed documentation about all the arguments returned in the JSON by
localhost:15672/api/queues

Check this link:
http://hg.rabbitmq.com/rabbitmq-management/raw-file/31c1d2668d39/priv/www/doc/stats.html
Most of the GET requests you can issue to the HTTP API return JSON objects with a large number of keys. While a few of these keys represent things you set yourself in a PUT request or AMQP command (e.g. queue durability or arguments), most of them represent statistics to do with the object in question. This page attempts to document them.
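For quick exploration, here is a minimal sketch of querying that endpoint with Java's built-in HttpClient, assuming the default guest/guest credentials on localhost; the keys named in the comment are among the statistics that page documents:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class QueueStats {
    public static void main(String[] args) throws Exception {
        // Default management credentials are guest/guest; adjust for your broker.
        String auth = Base64.getEncoder().encodeToString("guest:guest".getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:15672/api/queues"))
                .header("Authorization", "Basic " + auth)
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        // The body is a JSON array with one object per queue; documented keys
        // include "name", "messages", "messages_ready" and "messages_unacknowledged".
        System.out.println(response.body());
    }
}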

Related

Test async data processing flows with Karate Labs

I'm looking for best practices or the recommended approach to test async code execution with Karate.
Our use cases are all pretty similar but a basic one is:
Client makes HTTP request to API
API accepts request and creates a message which is added to a queue
API replies with ACCEPTED / 202
Worker picks up message from queue processes it and updates database
Eventually, after the work is finished, another endpoint delivers the updated data
How can I check with Karate that after processing has finished other endpoints return the correct result?
Concrete real life example:
Client requests a processing intensive data export to API e.g. via HTTP POST /api/export
API creates a message with the information needed to build the export and puts it on an AWS SQS queue
API replies with 202
Worker receives the message, creates the export, uploads the result as a ZIP to S3 and finally creates an entry in the database representing this export
Client can now query list exports endpoint e.g. via HTTP GET /api/exports
API returns 200 with the list of exports incl. the newly created entry
Generally I have two ideas on how to approach this:
Use Karate's retry until on the endpoint that returns the list of exports
In the API response (step #3), return the message ID and use the SQS HTTP API to poll the queue until the message has been processed, then query the list endpoint to check the result
Is either of those approaches recommended, or should I choose an entirely different solution?
The moment queuing comes into the picture, I would not recommend retry until. It would work if you are in a hurry, but if you are ok to write a little bit of Java code, please read on. Note that this Java "glue code" needs to be written only once, and then the team responsible for writing the functional flows will be up and running.
I personally would prefer option (2) just because when a test fails, you will have a lot more diagnostic information and traces to look at.
Pretty sure you won't have a problem using AWS Java libs to do things such as polling SQS.
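As an illustration only, here is a minimal sketch of such glue code using the AWS SDK for Java v2; the queue URL, the timeout, and the idea of watching the approximate message count until the queue drains are all assumptions you would adapt to your setup:

import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.GetQueueAttributesRequest;
import software.amazon.awssdk.services.sqs.model.QueueAttributeName;

public class SqsPoller {
    // Polls the queue until it has drained or the timeout expires.
    // Karate can call this via Java interop before hitting the list endpoint.
    public static boolean waitUntilEmpty(String queueUrl, long timeoutMillis) throws InterruptedException {
        try (SqsClient sqs = SqsClient.create()) {
            long deadline = System.currentTimeMillis() + timeoutMillis;
            while (System.currentTimeMillis() < deadline) {
                String count = sqs.getQueueAttributes(GetQueueAttributesRequest.builder()
                                .queueUrl(queueUrl)
                                .attributeNames(QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES)
                                .build())
                        .attributes()
                        .get(QueueAttributeName.APPROXIMATE_NUMBER_OF_MESSAGES);
                if ("0".equals(count)) {
                    return true; // queue drained, the worker has caught up
                }
                Thread.sleep(1000); // back off before polling again
            }
            return false; // timed out
        }
    }
}

From a Karate feature you can then call it via Java interop, e.g. * def poller = Java.type('SqsPoller'), before asserting on the exports endpoint.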
I think this example will answer all your questions: https://twitter.com/getkarate/status/1417023536082812935

Decode RabbitMQ payload

In our team, we exchange messages via RabbitMQ between two systems. The messages are encoded in protobuf (v3). We use NServiceBus on the sending and receiving side. We use the RabbitMQ management UI for monitoring the error queue. In production, we noticed that it is not easy to make sense of the payload of the messages in the error queue, which are base64-encoded.
What is the easiest way to get human-readable access to the messages in the error queue? We have full control over the decisions in both systems, and have also discussed a switch to JSON-encoded messages (instead of protobuf). But we are otherwise happy with our protobuf-based implementation, which is already in place after all.
Protobuf v3 supports formatting as JSON once you have the data parsed as an IMessage (the base type for in-memory protobuf objects).
So you can convert a single message to be human readable as follows:
Use the web UI's Get Message function to get the message as base64, then requeue it
Convert the message back to protobuf binary via Convert.FromBase64String
Parse it back to an IMessage via ProtoMessageTypeGoesHere.Parser.ParseFrom(binaryData)
You can then convert the parsed message to Json via ToString() or Google.Protobuf.JsonFormatter.
As long as your error queue isn't going to be disrupted by the re-queuing (e.g. resetting of timestamps or reprocessing), you should be able to do this for all messages in the queue.
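The steps above use the C# API. For reference, here is a minimal sketch of the same round trip in Java with the protobuf-java and protobuf-java-util libraries; ExportMessage is a hypothetical stand-in for your generated message class:

import java.util.Base64;
import com.google.protobuf.util.JsonFormat;

public class PayloadDecoder {
    // ExportMessage is a hypothetical stand-in for your generated protobuf class.
    public static String decode(String base64Payload) throws Exception {
        // Convert the base64 string from the management UI back to protobuf binary.
        byte[] binaryData = Base64.getDecoder().decode(base64Payload);
        // Parse it back to an in-memory message object.
        ExportMessage message = ExportMessage.parseFrom(binaryData);
        // Render the parsed message as human-readable JSON.
        return JsonFormat.printer().print(message);
    }
}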
I wouldn't recommend using the management UI for this. A simple script or HTML page with a STOMP client would be a lot easier to use and less error-prone, in my opinion.
However, to answer your question: to simply decode the message and replace the text, a simple javascript solution will work fine.
$(".msg-payload").text(atob($(".msg-payload").text()))
This will simply select the message field on the queue page on the RabbitMQ management UI and replace it with the decoded value (that's the function atob).
To use this, you can either run it from the console or add it as a bookmark in your browser. Simply use the code prefixed with javascript:, like so:
javascript:$(".msg-payload").text(atob($(".msg-payload").text()))

Things to consider before using message broker

I am a newbie in the message broker area. There are quite a few message brokers out there (RabbitMQ, ZeroMQ, Kafka and many more).
I want to know what to consider when choosing a message broker for a backend architecture.
A message broker can typically:
Route messages to one or more of many destinations
Transform messages to an alternative representation
Perform message aggregation, decomposing messages into multiple messages and sending them to their destination, then recomposing the responses into one message to return to the user
Interact with an external repository to augment a message or store it
Invoke Web services to retrieve data
Respond to events or errors
Provide content and topic-based message routing using the publish–subscribe pattern
https://en.wikipedia.org/wiki/Message_broker
Try RabbitMQ; it is simple and fast.
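To get a feel for it, here is a minimal publish-and-consume sketch with the official RabbitMQ Java client, assuming a broker on localhost with default credentials; the queue name and payload are arbitrary:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class HelloBroker {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // default port 5672, guest/guest credentials
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // Declare a non-durable queue and publish one message to it.
            channel.queueDeclare("hello", false, false, false, null);
            channel.basicPublish("", "hello", null, "Hello, broker!".getBytes());
            // Consume the message back and print it.
            DeliverCallback callback = (tag, delivery) ->
                    System.out.println("Received: " + new String(delivery.getBody()));
            channel.basicConsume("hello", true, callback, tag -> { });
            Thread.sleep(1000); // give the consumer a moment before closing
        }
    }
}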

ActiveMQ view raw message data in web console

I'm using the web console against my AMQ 5.2 instance successfully, except that I cannot see the content of all of my messages.
If I send a test message using the web console, I can see the sample text content, but I believe the vendor app I am working with has binary or byte array message content.
Is there something I need to do to be able to view this raw data?
Thanks,
To my knowledge, it is not possible to inspect messages in the Admin Console. You can get some statistics (like how many messages have been sent etc.).
ActiveMQ does not unmarshal messages when receiving them (for performance reasons, unmarshalling is rather expensive).
Thus, if you want to have some way to inspect messages for their content, you can basically do 2 things:
Write a consumer which registers for all topics/queues, through which you can see messages' content. Drawback: if you're using queue-based interaction, your "real" consumers will not get all messages
Write an ActiveMQ plugin which looks at the messages. Have a look at ActiveMQ's Logger Plugin. Then write your own (you'll need the sources to compile it) and load it with ActiveMQ (see the documentation on how to configure ActiveMQ to load plugins). You want to override the send() method, which is called whenever someone sends a message to the broker. There you get a reference to the message and can access its content (see the sketch after this answer).
Neither of the two approaches provides a convenient viewing mechanism, though. You'll have to resort to standard out, or write your own web-based access.
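A minimal sketch of the plugin approach (option 2), extending ActiveMQ's BrokerFilter; the logging to standard out is illustrative only:

import org.apache.activemq.broker.Broker;
import org.apache.activemq.broker.BrokerFilter;
import org.apache.activemq.broker.ProducerBrokerExchange;
import org.apache.activemq.command.ActiveMQTextMessage;
import org.apache.activemq.command.Message;
import org.apache.activemq.util.ByteSequence;

public class InspectingBrokerFilter extends BrokerFilter {

    public InspectingBrokerFilter(Broker next) {
        super(next);
    }

    // Called for every message sent to the broker; inspect it, then pass it on.
    @Override
    public void send(ProducerBrokerExchange producerExchange, Message messageSend) throws Exception {
        if (messageSend instanceof ActiveMQTextMessage) {
            System.out.println("Text payload: " + ((ActiveMQTextMessage) messageSend).getText());
        } else {
            ByteSequence content = messageSend.getContent();
            System.out.println("Binary payload, " + (content == null ? 0 : content.getLength()) + " bytes");
        }
        super.send(producerExchange, messageSend);
    }
}

To load it, you would wrap it in a BrokerPlugin whose installPlugin(Broker) returns this filter and register that plugin in activemq.xml, as the plugin documentation describes.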
hawtio now shows the first 256 characters of messages; use the browse() method. I don't know if that is enough for you.

Batching in REST

With web services it is considered good practice to batch several service calls into one message to reduce the number of remote calls. Is there any way to do this with RESTful services?
If you really need to batch, HTTP 1.1 supports a concept called HTTP pipelining that allows you to send multiple requests before receiving a response. Check it out here
I don't see how batching requests makes any sense in REST. Since the URL in a REST-based service represents the operation to perform and the data on which to perform it, making batch requests would seriously break the conceptual model.
An exception would be if you were performing the same operation on the same data multiple times. In this case you can either pass in multiple values for a request parameter or encode this repetition in the body (however this would only really work for PUT or POST). The Gliffy REST API supports adding multiple users to the same folder via
POST /folders/ROOT/the/folder/name/users?userId=56&userId=87&userId=45
which is essentially:
PUT /folders/ROOT/the/folder/name/users/56
PUT /folders/ROOT/the/folder/name/users/87
PUT /folders/ROOT/the/folder/name/users/45
As the other commenter pointed out, paging results from a GET can be done via request parameters:
GET /some/list/of/resources?startIndex=10&pageSize=50
if the REST service supports it.
I agree with Darrel Miller. HTTP already supports pipelining, and it also supports keep-alive, letting you stream multiple HTTP operations concurrently down the same socket so you avoid having to wait for responses before streaming new requests to the server.
With HTTP pipelining and keep-alive you therefore get the effect of batching while using the same underlying REST API, so there is usually no need for a separate batch API for your service.
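As a sketch of that effect with Java's built-in HttpClient, which pools keep-alive connections per host (and multiplexes over HTTP/2 where the server supports it); the host and user IDs below are placeholders echoing the Gliffy example:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class BatchedCalls {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        // Fire all requests without waiting for each response; the client
        // reuses pooled connections to the same host, so the calls overlap
        // on the wire much like an explicit batch would.
        List<CompletableFuture<String>> futures = List.of("56", "87", "45").stream()
                .map(id -> HttpRequest.newBuilder()
                        .uri(URI.create("http://example.com/folders/ROOT/the/folder/name/users/" + id))
                        .PUT(HttpRequest.BodyPublishers.noBody())
                        .build())
                .map(req -> client.sendAsync(req, HttpResponse.BodyHandlers.ofString())
                        .thenApply(HttpResponse::body))
                .collect(Collectors.toList());
        futures.forEach(f -> System.out.println(f.join()));
    }
}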
The Astoria team made good use of multipart MIME to send a batch of calls. This differs from pipelining in that the multipart message can convey the intent of an atomic operation. Seems rather elegant.
Original blog post explaining the rationale
MSDN documentation
Of course there is a way, but it would require server-side support. There is no magical one-size-fits-all methodology that I know of.