I have configured a stream on a RabbitMQ Docker container. I am able to use the command rabbitmqadmin to publish to it. I can also see that the message got appended to the stream when I run a rabbitmqadmin publish. But for some reason a rabbitmqadmin get command on the stream fails saying "540 basic.get not supported by stream queues queue "xxxx" in vhost".
It looks like a different command is needed to read from a stream, or it cannot be done from the command console. Does anyone have any ideas how to overcome this?
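For context, stream queues can only be read by a consumer with a prefetch limit set (basic.consume), not with basic.get, which is what rabbitmqadmin get relies on. Below is a minimal sketch using the pika Python client, assuming a broker on localhost with default credentials and a stream named xxxx (all placeholders):

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.basic_qos(prefetch_count=100)  # stream consumers require a prefetch limit

def on_message(ch, method, properties, body):
    print(body)
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(
    queue="xxxx",
    on_message_callback=on_message,
    arguments={"x-stream-offset": "first"},  # start reading from the beginning of the stream
)
channel.start_consuming()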
RPC calls over MQTT
Hi,
I want to publish a telemetry event by issuing a PUBLISH message (RPC call) to an MQTT topic (${device.id}/rpc).
References:
[RPC over MQTT](https://mongoose-os.com/docs/mongoose-os/api/rpc/rpc-mqtt.md)
Publishing telemetry events, Google IoT Core
I am using the below command to call RPC over MQTT:
mos --port mqtts://mqtt.2030.ltsapis.goog:8883/projects/PROJECT_NAME/locations/us-central1/registries/iot-registry/devices/esp8266_C7E6AA --cert-file gcp-esp8266_C7E6AA.pub.pem --key-file gcp-esp8266_C7E6AA.key.pem call Sys.GetInfo
But I am getting the below response:
$ mos --port mqtts://mqtt.2030.ltsapis.goog:8883/projects/PROJECT_NAME/locations/us-central1/registries/iot-registry/devices/esp8266_C7E6AA --cert-file gcp-esp8266_C7E6AA.pub.pem --key-file gcp-esp8266_C7E6AA.key.pem call Sys.GetInfo
Unknown command
Command completed.
Is the above command correct, or am I doing something wrong?
I am posting this answer to save the time of people who might face the same problem.
So the mos --port command only works from the command line (CMD).
And if you are on Windows, it requires some special formatting that you can find out here.
Thanks
Abhi
I am trying to send my logs from Filebeat into Redis and then from Redis into Logstash and Elasticsearch.
I am getting the error:
Failed to RPUSH to redis list with ERR wrong number of arguments for 'rpush' command
2018-05-22T11:15:43.063+0530 ERROR pipeline/output.go:92 Failed to publish events: ERR wrong number of arguments for 'rpush' command
What configuration needs to be made in my filebeat.yml so that the output is Redis?
Also, what configuration needs to be made in my logstash.yml when my input is Redis?
I am also getting a "Redis Connection problem" error when running my Logstash conf file.
Please provide a solution for this.
Thank you.
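For reference, a minimal sketch of the two pieces of configuration being asked about, assuming Redis is reachable on localhost:6379 and using a Redis list key named filebeat (both placeholders). In filebeat.yml, the Redis output section would look roughly like this:

output.redis:
  hosts: ["localhost:6379"]
  key: "filebeat"   # name of the Redis list that events are RPUSHed onto
  db: 0

On the Logstash side, the Redis input normally goes into a pipeline .conf file (e.g. logstash.conf) rather than logstash.yml; a matching sketch:

input {
  redis {
    host => "localhost"
    port => 6379
    db => 0
    data_type => "list"
    key => "filebeat"   # must match the key configured in filebeat.yml
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}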
When listening to Keyspace Notifications it looks like this:
λ redis-cli --csv psubscribe '__keyspace#0__:myset:*'
Reading messages... (press Ctrl-C to quit)
"psubscribe","__keyspace#0__:myset:*",1
"pmessage","__keyspace#0__:myset:*","__keyspace#0__:myset:1","sadd"
"pmessage","__keyspace#0__:myset:*","__keyspace#0__:myset:1","srem"
The problem is that it never says the actual set member that is being added or removed. Is there any way to access the string that is being added or removed within a set via Keyspace Notifications? If it is not possible, is there a workaround?
The messages sent by the keyspace notifications mechanism do not include the actual data, only the key names.
You can make your own notifications just by calling PUBLISH alongside the calls that modify the data; for atomicity, consider using transactions or a Lua script.
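As a sketch of that workaround using the redis-py client (the channel name myset-events and the key/member values are just placeholders), SADD and a custom PUBLISH that names the member can be wrapped in a MULTI/EXEC transaction:

import redis

r = redis.Redis()

def sadd_with_notification(key, member):
    # Run SADD and a custom PUBLISH atomically, so subscribers also learn
    # which member changed (keyspace notifications only carry the key name).
    pipe = r.pipeline(transaction=True)
    pipe.sadd(key, member)
    pipe.publish("myset-events", "sadd %s %s" % (key, member))
    pipe.execute()

sadd_with_notification("myset:1", "some-member")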
Is there a message / queue browser for ActiveMQ available?
I tried Hermes JMS, but it is not working with ActiveMQ 5.10 anymore.
We need a browser where we can export a message into XML.
Any suggestions?
Thanks and regards.
You can try hawtio, which is a web console that has plugins for various technologies, such as ActiveMQ.
hawtio is created by the people who also created Camel and ActiveMQ and thus has great plugins for those.
http://hawt.io/
Hermes is able to connect to AMQ, but perhaps you need to use older AMQ libs on the client/Hermes side, like v5.7.0 or similar.
Hawt.io is great for reading/moving/browsing/deleting/sending messages, but you may need additional tools to export/import data.
You cannot export a JMS message to XML in a generic way. What you can do is to export the payload to a file (which may be XML).
To export messages into files, you can use a command line tool called A. Then you can run a -b tcp://localhost:61616 -c 20 -o file.xml MY.QUEUE and you will have 20 messages exported to file-1.xml, file-2.xml, ..., file-20.xml.
Disclaimer: I am the author of "A".
My question builds off this one: Temporary queue made in Celery
My application needs to retrieve results, as it uploads them to an S3 file. However, the number of temporary queues being made is causing my broker to crash (the machine doesn't have enough memory). I want to delete the temporary queue once the corresponding result has been retrieved. In my Celery client script, I am iterating through a list of results (where each result is from function.delay()):
for result in result_list:
    while True:
        if result.ready():
            # do something with result
            # I WANT TO DELETE TEMPORARY QUEUE HERE
            break
Is there any way I can achieve the above -- deleting the temporary queue once the result has been retrieved?
I would have used the CELERY_TASK_RESULT_EXPIRES option in my celeryconfig, but I don't know when I can safely clean up the temporary queue, as the result may not have been retrieved yet. Is there any way I can delete specific queues in this script (note that I have the queue id from the result)?
ADDITIONAL NOTE:
I am running all RabbitMQ servers in a cluster with HA enabled.
The way I did this was to use rabbitmqadmin from RabbitMQ. I downloaded it via
wget localhost:15672/cli/rabbitmqadmin
after installing the management plugin
rabbitmq-plugins enable rabbitmq_management
Make sure your user has the administrator tag for RabbitMQ, or you will not be able to perform commands. I then deleted the queue in my script using Python's subprocess module and rabbitmqadmin delete queue name=''. Keep in mind that the queue name is the same as the corresponding result id, except without the hyphens.
Also make sure you add the params -v myvhost -u myusername -p mypassword to rabbitmqadmin commands; the default vhost is /.
I believe this will delete queues across all nodes in a cluster, though I am not completely sure of this.
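Putting that together, a minimal sketch of the deletion step in Python, assuming rabbitmqadmin is on the PATH and using placeholder vhost/credentials (the long-form flags correspond to the -v/-u/-p params mentioned above):

import subprocess

def delete_result_queue(result, vhost="myvhost", user="myusername", password="mypassword"):
    # As noted above, the temporary result queue is named after the result id,
    # with the hyphens removed.
    queue_name = result.id.replace("-", "")
    subprocess.check_call([
        "rabbitmqadmin",
        "--vhost", vhost,
        "--username", user,
        "--password", password,
        "delete", "queue", "name=" + queue_name,
    ])

In the loop from the question, this would be called right after the result has been handled, e.g. delete_result_queue(result).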