I'm using ActiveMQ with the C# client library. I created 10,000 topics with random names as part of an evaluation test, and now I can't get rid of them. ActiveMQ grinds to a halt with this many topics, so I need them out of the system. Here is what I have tried so far; none of it has worked. I'm running ActiveMQ as a Windows service.
Delete all of the files and folders in ACTIVEMQ_HOME\Data
Turn off all persistence
Delete all of the files and folders in the persistence folder
Delete the entire ACTIVEMQ_HOME directory and reinstall it in a different folder
I've traced the file activity and cannot find any file that is written to when a topic is created or deleted.
I realize that the .NET client library is a little light on functionality, so I can't even get a list of all the topics programmatically.
Open your broker configuration file for editing and add the following attribute to the broker element:
deleteAllMessagesOnStartup="true"
This will cause all existing topics and queues, along with their pending messages, to be deleted from your KahaDB store when you restart the broker.
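For reference, this is roughly where the attribute goes in conf/activemq.xml; the broker name and data directory below are placeholders from a default install, so keep whatever your file already has:

<broker xmlns="http://activemq.apache.org/schema/core"
        brokerName="localhost"
        dataDirectory="${activemq.data}"
        deleteAllMessagesOnStartup="true">
    <!-- leave your existing persistenceAdapter, transportConnectors, etc. in place -->
</broker>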
Have Fun!
This question might be old, but a quick and easy way to totally purge all data in ActiveMQ, along with all queues and topics, is to go to the following path:
<ActiveMQ_Installation_Directory>/data
and delete all the files in that folder.
Once you restart ActiveMQ, it will come up as a fresh, clean install.
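Since you're running ActiveMQ as a Windows service, something along these lines should do it; the service name and the path placeholder are assumptions, so adjust them for your install:

rem stop the broker, wipe its data directory, then start it again
net stop ActiveMQ
rmdir /s /q "<ActiveMQ_Installation_Directory>\data"
net start ActiveMQ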
I'm troubleshooting RabbitMQ cluster network partition events, and some log messages were being dropped. The nodes run on Windows VMs. I was trying to fix the message dropping as described here, so I added the following to my advanced.config file:
[
{lager, [
{error_logger_hwm, 1024}
]}
].
How do I verify that the configuration change was applied instead of just waiting to see if more messages are dropped or not?
[UPDATE]: In my original post I was trying to change this setting in the .conf file, since that's what I use to configure RabbitMQ. However, the lager configuration has to be done in the advanced.config file. The advanced.config file seems to be applied even if you are using a .conf file for the basic configuration.
You can't set that value in rabbitmq.conf. The link you provided shows how to set the value in the /etc/rabbitmq/advanced.config file. Please re-read that comment carefully.
You can verify it by running this command:
rabbitmqctl eval 'application:get_env(lager, error_logger_hwm).'
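If the setting took effect, application:get_env/2 returns the value wrapped in an ok tuple; if it didn't, you get undefined. So with the advanced.config above the command should print something like:

{ok,1024}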
Also see this article:
https://www.rabbitmq.com/configure.html#verify-configuration-effective-configuration
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
We are running a Nessus security scan. Unfortunately, Particular's ServicePulse executable is coming up as a hit.
Path : c:\program files (x86)\particular software\servicepulse\servicepulse.host.exe
Used by services : Particular.ServicePulse
File write allowed for groups : Everyone
Full control of directory allowed for groups : Everyone
I thought it might be because of the service, so I disabled the service, but it still comes up as a hit in the scan.
Is this software needed for NServiceBus if we are not using a dashboard to monitor?
It looks like the security scan does not like Everyone having full control. As long as the account the service is running under has Read and execute permissions, it will work. By default, ServicePulse is installed to run under the Local System account.
There is an issue open to address this: https://github.com/Particular/ServicePulse/issues/514
The ServicePulse dashboard gives an overview of your endpoints and failing messages.
If you are not using it to begin with, your endpoints will continue running, but you'll be flying blind: ServiceControl will still be ingesting failed messages, and you won't see whether there are any. If you use the Heartbeat plugin or any other custom checks, that information will not be visible either.
There's an option of integrating with ServiceControl events directly, but I suggest you examine how your environment is set up with regard to ServiceControl, and don't disable ServicePulse without understanding the implications and knowing your monitoring is covered.
The scan is complaining about file permissions that are too open. Everyone should not have write permissions in any Program Files folder.
I inspected my installation folder and have the same issue.
The fix is relatively easy:
Select the properties of folder C:\Program Files (x86)\Particular Software\ServicePulse
Select the Security tab
Select Advanced, a new dialog opens
Select Enable inheritance
Select OK and Yes on any confirmation dialog
Select Advanced again
Check Replace all child object permission entries with inheritable permission entries from this object
Select OK and Yes on any confirmation dialog
Select Advanced again for the 3rd time
Remove the current entries where Inherited from states None; on my system those were Everyone and System. Make sure that all the other entries are inherited from the parent folder, and do not remove any of them.
Select OK and Yes on any confirmation dialog
It seems like a long procedure, but that is only because I'm being very detailed here. Some steps could be combined, but doing one change at a time keeps things easy to understand.
Now your file permissions are correctly set and comply with your security audit software.
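If you prefer to script the same fix instead of clicking through the dialogs, the icacls commands below should be roughly equivalent. Treat this as a sketch: run it from an elevated command prompt and verify the resulting ACLs afterwards.

rem enable permission inheritance on the ServicePulse folder
icacls "C:\Program Files (x86)\Particular Software\ServicePulse" /inheritance:e
rem replace the explicit entries on the folder and everything below it with the inherited defaults
icacls "C:\Program Files (x86)\Particular Software\ServicePulse" /reset /T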
I'm attempting to streamline our RabbitMQ migrations and, as part of this, keep broker definitions in version control.
Ultimately I'd like to have roll-forward and roll-backward scripts I can run to change queues, bindings and exchanges in production.
Currently, if the bindings change, I upload a broker definitions file with the new bindings, which unfortunately preserves the existing bindings alongside the new ones.
So then I run a script that issues multiple rabbitmqadmin delete commands.
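For illustration, the cleanup script is just a series of commands along these lines (the exchange, queue and routing-key names are made up for the example):

rabbitmqadmin delete binding source=orders.exchange destination_type=queue destination=orders.queue properties_key=order.created
rabbitmqadmin delete binding source=orders.exchange destination_type=queue destination=audit.queue properties_key=order.created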
This is a little cumbersome, however. What would be ideal is if some flag could be set in the broker definitions file so that, once the new bindings have been added, the old bindings are deleted automatically.
Does anyone know of some feature like this? Or a superior technique? Or some library in some scripting language that is designed for this?
Related to this question...
If you haven't properly configured RavenDB, it can easily exhaust your server's RAM.
Link 1
Link 2
Link 3
If you've found yourself in this predicament, how can you force RavenDB to safely release this RAM?
I thought that recycling the service would do the trick. Unfortunately, this corrupted my entire RavenDB installation (fortunately in a test environment). In the Silverlight GUI, RavenDB wasn't even able to retrieve the list of installed databases, so I couldn't see my documents.
Jim,
You can ask RavenDB to release memory using:
POST /admin/gc
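For example, with curl (the port here is an assumption based on the RavenDB default of 8080; adjust it to wherever your server is listening):

curl -X POST http://localhost:8080/admin/gc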
Don't recycle the service inside of Server Manager.
Instead, issue the following command in a command prompt in your RavenDB\Server directory:
Raven.Server.exe /restart
Be patient. It might take a few minutes to restart the service.
I have EC2 instances with auto scaling enabled.
As part of the scale-down policy, when one of the instances is issued a termination, the log files remaining on that instance need to be backed up to S3, but I cannot find any way to upload that instance's log files to S3. I have tried putting the needed script in the rc0.d directory through chkconfig with the highest priority. I also tried to put my script in /lib/systemd/system/halt.service (or reboot.service or poweroff.service), but no luck so far.
I have found some threads related to this on Stack Overflow and the AWS forums, but no proper solution yet.
Can anyone please let me know the solution to this problem?
The only reliable way I have found of achieving this behaviour is to use rsyslog/syslog to transfer the log files to a central host as soon as they are written to the syslog subsystem.
This means you will need to run another instance that receives the log files and ships them to S3, or use an SQS-based system such as Logstash.
Unfortunately there is no other way to ensure all of your log messages end up in S3: you cannot guarantee that your shutdown script will finish before autoscaling "pulls the plug".
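As a sketch of the forwarding part, a single rsyslog rule on each auto-scaled instance is enough to ship everything to a central syslog host; the hostname and port below are placeholders, and @@ means TCP (a single @ would be UDP):

# /etc/rsyslog.d/50-forward.conf on each auto-scaled instance
*.* @@logs.example.internal:514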