I just installed RabbitMQ on my computer and wanted to run the demo for sending a message, but it doesn't work. According to the documentation, the reason may be that the broker was started without enough free disk space. When I check the RabbitMQ Management Dashboard, it shows my free disk space as only 46 kB (by default it needs at least 200 MB free). According to the documentation I need to change disk_free_limit.
Following that documentation, I have to create the configuration file myself and put it in C:\Users\User\AppData\Roaming\RabbitMQ. The documentation gives an example configuration. I changed the setting for disk_free_limit.absolute and restarted the computer (I don't know how to restart the RabbitMQ service on Windows). But when I check the RabbitMQ Management Dashboard, the disk space is still 46 kB.
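The change I made is roughly this (a sketch, assuming the new-style rabbitmq.conf format; the 1GB value is just an example, not a recommendation):

# C:\Users\User\AppData\Roaming\RabbitMQ\rabbitmq.conf
# absolute free-disk-space threshold below which the broker blocks publishers
disk_free_limit.absolute = 1GB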
I highly recommend using containers to run services like RabbitMQ; it avoids problems like the one you are having at the moment.
I usually use this Dockerfile:

FROM rabbitmq:3-management
# declare the plugin set; in this image RabbitMQ reads /etc/rabbitmq/enabled_plugins
RUN echo '[rabbitmq_management,rabbitmq_amqp1_0].' > /etc/rabbitmq/enabled_plugins
# equivalently, plugins can be enabled with the CLI at build time
RUN rabbitmq-plugins enable --offline rabbitmq_amqp1_0
To build and run it:
docker build -t my-rabbit .
docker run -it --rm --name rabbitmq -p 5672:5672 -p 15672:15672 my-rabbit:latest
If you have never played with Docker before, please read this.
I have installed the Redis GUI redis-commander using https://github.com/joeferner/redis-commander
Redis is running on localhost:6379 as a container in Docker.
It says that if I run Redis on localhost:6379, all I need to get started is:
docker run --rm --name redis-commander -d -p 8081:8081 ghcr.io/joeferner/redis-commander:latest
But I encountered this problem... Is there anyone who has hit this error and found a solution for it?
There are some things you have to take into account.
Redis Commander is running inside a container, so localhost no longer points to your laptop/desktop/development machine/server. It points to the container itself, where no Redis is running, so it will never connect. You need to point it at the other container.
For this, you should use some-redis (the name of the Redis container) instead of localhost. In Redis Commander, click "more" and then "add server" to add a new connection.
But this will not work unless both containers are running inside the same network.
First you need to create a new Docker network:
docker network create redis
and then run both containers with the parameter --network=redis.
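Putting it together, it could look like this (a sketch; the container name some-redis is an assumption about your setup, and REDIS_HOSTS uses the label:hostname:port format from the redis-commander README):

# one-time: create the shared network
docker network create redis

# start Redis on that network
docker run -d --name some-redis --network=redis redis:latest

# start Redis Commander on the same network, pointing at the some-redis hostname
docker run -d --rm --name redis-commander --network=redis -p 8081:8081 -e REDIS_HOSTS=local:some-redis:6379 ghcr.io/joeferner/redis-commander:latest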
More about docker network here
More about docker run with networks here
I've been testing syslog-ng in a dev environment for several weeks now. It has since been moved to production, but I'm getting weird behavior. I've taken the exact same syslog-ng.conf that was on dev (it listens on udp:514 and writes everything to a file on a separate disk) and have it running in production. I only seem to get data written to my destination when I run syslog-ng -Fevd in the foreground. Does anyone have any ideas? I've tried restarting the service with no luck at all.
This particular syslog-ng instance gathers logs from all ESXi and vCenter servers in the production environment; from there they get forwarded to Splunk (Splunk's recommended solution for VMware logs).
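For reference, the config is essentially this shape (a sketch; the file path and macros are placeholders, not my exact config):

# collect everything arriving on UDP port 514
source s_net {
    network(transport("udp") port(514));
};

# write it to a file on the dedicated log disk
destination d_vmware {
    file("/logs/vmware/${HOST}/messages" create-dirs(yes));
};

log { source(s_net); destination(d_vmware); };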
So I continued to pore over the man page. I compared the command the service runs and cross-referenced the options against the man page: the service was using -F, i.e. running in the foreground. So I just ran sudo syslog-ng --process-mode safe-background (which is supposed to be syslog-ng's default behavior) and I'm now getting all of my logs in my destination.
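In concrete terms, that looked roughly like this (assuming systemd; adjust for your distribution and init system):

# see exactly how the service launches syslog-ng
systemctl cat syslog-ng

# run it the way that worked for me
sudo syslog-ng --process-mode safe-background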
TL;DR: RTFM.
I have an application that uses RabbitMQ to queue messages for other parts of the ecosystem. I would like to do some performance testing and tuning, but only on my part (the program). So I guess I would like to somehow "mock" away the RabbitMQ server, without changes to my application.
Is there something like a dummy RabbitMQ server that just accepts all messages and throws them away immediately? Or can I configure the actual RabbitMQ that way?
I was using a local Docker image for the performance test. You can run it with this command:
docker run -d -p 5672:5672 -p 8081:15672 rabbitmq:3-management
You can access the management GUI at localhost:8081; the default username and password are guest/guest.
After you are done running a performance test, you can purge the queue. You do that in Queues > your queue > Purge.
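Alternatively, you can purge from the command line (the queue name perf-test below is just a placeholder for whatever queue your application declares):

docker exec <container-id> rabbitmqctl purge_queue perf-test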
PS: The management port can be anything you want; just change 8081 in the docker command :)
I have a Kubernetes cluster and I am getting cgroup out-of-memory kills. I have resources declared in the YAML, but I have no idea which apache2 needs more memory. The kill message gives me a process id, but how do I tell which pod is being killed?
Thank you.
It is what it is. Your Apache process is using more memory than you are allowing in your pod/container definition.
Reasons why it could need more memory:
You have an increase in traffic and sessions being handled
Apache is forking more processes inside the container and running into the memory limit (see the mpm_prefork sketch after this list).
Apache is not reaping some lingering sessions because of a config issue.
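If forking turns out to be the culprit, the usual knob is the MPM worker limit; a sketch for mpm_prefork (the numbers are placeholders and depend on your pod's memory budget):

<IfModule mpm_prefork_module>
    StartServers            2
    MinSpareServers         2
    MaxSpareServers         5
    MaxRequestWorkers       50
    MaxConnectionsPerChild  1000
</IfModule>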
If you are using Docker as the container runtime (which most people do), you can ssh into the node in your cluster and run:
docker ps -a
You should see the exited container where your Apache process(es) were running. Then you can run:
docker logs <container-id>
And you might get details on what Apache was doing before it was killed. If you only see minimal information, I recommend increasing the verbosity of your Apache logs.
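If you would rather stay at the Kubernetes level, kubectl can also tell you which pod was OOM-killed (a sketch; swap in your own pod name and namespace):

# look for "Last State: Terminated, Reason: OOMKilled" in the output
kubectl describe pod <pod-name> -n <namespace>

# or scan every pod for containers whose last termination reason was OOMKilled
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{" "}{.status.containerStatuses[*].lastState.terminated.reason}{"\n"}{end}' | grep OOMKilled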
Hope it helps.
I have a problem with Redis and redis-cli.
I have Redis running as a service on Windows, as you can see in the picture,
but when I try to run "redis-cli" nothing happens; the console just freezes.
I need to monitor all messages with the MONITOR command.
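What I expect to work is roughly this (assuming the service listens on the default port 6379): ping should print PONG if the server is reachable, and monitor should then stream every command the server receives.

redis-cli -h 127.0.0.1 -p 6379 ping
redis-cli -h 127.0.0.1 -p 6379 monitor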
Can you help me, please?
Regards.