I have an application that uses RabbitMQ to queue messages for other parts of the ecosystem. I would like to do some performance testing and tuning, but only on my part (the program). So I guess I would like to somehow "mock" away the RabbitMQ server, without changing my application.
Is there something like a dummy RabbitMQ server that just accepts all messages and throws them away immediately? Or can I configure an actual RabbitMQ instance to behave that way?
I was using a local Docker image for the performance test. You can run it with this command:
docker run -d -p 8081:15672 rabbitmq:3-management
You can access the management GUI on localhost:8081; the default username and password are guest/guest.
After you are done running a performance test you can purge the queue. You do that in Queues > your queue > Purge.
PS: The port can be anything you want, just change 8081 in the docker command :)
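If you prefer the command line over the GUI, something along these lines should also work (the container name and queue name below are placeholders, not from the setup above):
# purge a queue on the default vhost from inside the running container
docker exec <container> rabbitmqctl purge_queue <queue>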
I just installed RabbitMQ on my computer and want to run the demo for sending a message, but it doesn't work. According to the documentation, the reason may be that the broker was started without enough free disk space. When I check the RabbitMQ Management Dashboard, it shows my free disk space as only 46 kB (by default it needs at least 200 MB free). According to the documentation I need to change disk_free_limit.
From this documentation I have to create the configuration file myself and put it in C:\Users\User\AppData\Roaming\RabbitMQ. The documentation gives an example configuration file. I changed the setting for disk_free_limit.absolute and restarted the computer (I don't know how to restart the RabbitMQ service on Windows). But when I check the RabbitMQ Management Dashboard the disk space is still 46 kB.
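For reference, the kind of line I mean is roughly this, in the new-style rabbitmq.conf format (the value is only an example, not a recommendation):
# rabbitmq.conf: absolute free-disk threshold below which publishers are blocked
disk_free_limit.absolute = 1GB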
I highly recommend using containers for running services like RabbitMQ, to avoid problems like the ones you are having at the moment.
I usually use this Dockerfile:
FROM rabbitmq:3-management
# write the plugin list to the location the official image reads it from at startup
RUN echo '[rabbitmq_management,rabbitmq_management_visualiser,rabbitmq_amqp1_0].' > /etc/rabbitmq/enabled_plugins
# equivalent alternative: enable the extra plugin offline (no broker is running at build time)
RUN rabbitmq-plugins enable --offline rabbitmq_amqp1_0
To build and run it:
docker build -t my-rabbit .
docker run -it --rm --name rabbitmq -p 5672:5672 -p 15672:15672 my-rabbit:latest
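Once the container is up, you can check that the plugins are actually enabled (using the container name from the run command above):
docker exec rabbitmq rabbitmq-plugins list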
If you have never played with Docker before, please read this.
I have an application written in Python which runs on a VPS server. It is a small application that writes to, reads from, and receives read requests for a SQLite database through a TCP socket.
The downside is that the application only runs while the console is open (over SSH); closing the console, that is, ending the SSH session, also closes the application.
How should this be implemented? Or must I implement it myself? The server is an Ubuntu server.
nohup should help in your case:
in your SSH session, launch your Python app prefixed with nohup, as recommended here
exit your SSH session
The program should continue working even if its parent shell (the ssh session) is terminated.
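A minimal sketch of that flow (the script name and file names are just placeholders):
# start the app detached from the terminal, keeping its output in a log file
nohup python3 yourappname.py > app.log 2>&1 &
# remember the PID in case you want to stop it later
echo $! > app.pid
# now it is safe to close the SSH session
exit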
There are (at least) two solutions:
1- The 'nohup' command; use it as follows: nohup python3 yourappname.py &
This will run your program in the background, and it won't be killed if you terminate the SSH session. It will also give you a free prompt after running this command so you can continue your work.
2- Another GREAT option is the 'screen' command.
This gives you everything that nohup gives you, and in addition it lets you check the output of your program (if any) in later logins. It may look a little complicated at first sight, but it's SUPER COOL! I highly recommend you learn it and enjoy it for the rest of your life! A short sketch of typical usage follows below.
A good explanation of it is available here
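A minimal sketch, assuming your script is called yourappname.py and using "myapp" as the session name:
# start the app inside a detached, named screen session
screen -dmS myapp python3 yourappname.py
# later, even from a new SSH login, reattach to see its output
screen -r myapp
# detach again with Ctrl-a d; the app keeps running after you log out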
I have a problem with Redis and redis-cli.
I have Redis running as a service on Windows, as you can see in the picture,
but when I try to run "redis-cli" it doesn't do anything; the console just freezes.
I need to monitor all messages with the MONITOR command.
Can you help me, please?
Regards.
I'm running Celery on my laptop, with RabbitMQ as the broker and Redis as the backend. I just used all the default settings and ran celery -A tasks worker --loglevel=info, and it all worked. The workers get jobs done and I can fetch the execution results by calling result.get(). My question is why this works even though I didn't run the rabbitmq and redis servers at all. I did not set up accounts on the servers either. In many tutorials, the first step is to run the broker and backend servers before starting Celery.
I'm new to these tools and do not quite understand how they work behind the scenes. Any input would be greatly appreciated. Thanks in advance.
Never mind. I just realized that Redis and RabbitMQ run automatically after installation or on startup, so they were already running. They must be running for Celery to work.
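A quick way to confirm they are actually up (exact service names vary between installs, so treat this as a sketch):
# should print PONG if the Redis server is reachable
redis-cli ping
# prints node status if the RabbitMQ broker is running
rabbitmqctl status
# on a systemd-based Linux box you could also check the services directly
systemctl status redis rabbitmq-server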
I have a Procfile like so:
web: bundle exec rails server -p $PORT
em: script/eventmachine
The em process fires up an EventMachine server with start_server (on the port given by ENV['PORT']), and my web process occasionally needs to communicate with it.
My question is: how does the web process know which port to communicate with it on? If I understand Heroku correctly, it assigns you a random port when the process starts up (and the port can change if the process is killed or restarted). Thanks!
According to Heroku documentation,
Two processes running on the same dyno can communicate over TCP/IP using whatever ports they want.
Two processes running on different dynos cannot communicate over TCP/IP at all. They need to use memcached, or the database, or one of the Heroku plugins, to communicate.
Processes are isolated and cannot communicate directly with each other.
http://www.12factor.net/processes
There are, however, a few other ways. One is to use a backing service such as Redis or Postgres to act as an intermediary; another is to use a FIFO (named pipe) to communicate.
http://en.wikipedia.org/wiki/FIFO
It is a good thing that your processes are isolated and share nothing, but you do need to architect your application slightly differently to accommodate this.
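For the backing-service route, a minimal sketch of two processes exchanging messages through Redis (the queue name and the REDIS_URL environment variable are assumptions, e.g. as set by a Redis add-on):
# producer, e.g. the web process, pushes a message onto a list
redis-cli -u "$REDIS_URL" LPUSH jobs "do-something"
# consumer, e.g. the em process, blocks until a message arrives
redis-cli -u "$REDIS_URL" BRPOP jobs 0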
I'm reading this while on my commute to work, so I haven't tried anything with it (sorry), but it looks relevant and potentially awesome.
https://blog.heroku.com/archives/2013/5/2/new_dyno_networking_model