According to the documentation, NServiceBus persists messages via the Management Service in a RavenDB database stored in C:\ProgramData\Particular\ServiceBus.Management\Data. See Working with Error and Audit queues.
Ayende has confirmed that the database can only grow in size; it never returns allocated disk space.
My problem is that the data file now exceeds 20 GB and contains more than 3 million messages.
Note that these messages are in the management DB, not in any of the message queue databases, which are stored in C:\Program Files\NServiceBus.Persistence.v4\Database\Databases.
The three million messages can be viewed in ServiceInsight, but I cannot delete them.
All MSMQ queues are empty, and the queue databases are around 1 MB in size.
Question:
How can I purge the Management Service / Particular Management database?
Sub-question:
How can I prevent this from happening again? Is there a setting I'm missing?
The older version of ServiceControl was called the Particular Management Service, and the data file you mention belonged to it.
If you've uninstalled the old version (the NServiceBus installer used to install this Windows service), that data file is no longer needed and you can remove it.
More on the latest version of ServiceControl's data file here: http://docs.particular.net/ServiceControl/configure-ravendb-location
Also, using the latest version of ServiceControl, here's how to set expiration policies:
http://docs.particular.net/ServiceControl/how-purge-expired-data
Get the latest here:
http://particular.net/downloads
You can force a compaction of the database:
http://localhost:8080/admin/compact?database=YOUR_DB_NAME
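For example, a minimal Python sketch for triggering that endpoint (assuming the Management/ServiceControl RavenDB instance listens on localhost:8080; the database name is a placeholder, and depending on the RavenDB build the endpoint may require a POST and Windows-integrated authentication):
import requests

# Placeholder database name; substitute the actual Management database name.
DB_NAME = "YOUR_DB_NAME"

# Trigger compaction via the admin endpoint shown above. Switch to a GET or add
# credentials if your RavenDB build responds with 405 or 401.
response = requests.post("http://localhost:8080/admin/compact", params={"database": DB_NAME})
print(response.status_code, response.text)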
I'd like to upgrade the Redis Memorystore instance in our Google Cloud environment because 5.x (at least on GitHub) appears to have reached its end of life. It's being used for simple key-value pairs, so I don't expect anything unexpected during the upgrade to 6.x. However, management is nervous and wants a way to roll back the upgrade if there are issues. Is there a way to do this? The documentation appears to say that rollback is not possible. I plan to do the usual backup and then upgrade. The instance is just the Basic Tier.
To upgrade the Redis Memorystore instance, follow the best practices mentioned in the public documentation:
We recommend exporting your instance data before running a version upgrade operation.
Note that upgrading an instance is irreversible. You cannot downgrade the Redis version of a Memorystore for a Redis instance.
For Standard Tier instances, to increase the speed and reliability of your version upgrade operation, upgrade your instance during periods of low instance traffic. To learn how to monitor instance traffic, see Monitoring Redis instances.
The documentation also recommends enabling RDB snapshots:
Memorystore for Redis is primarily used as an in-memory cache. When using Memorystore as a cache, your application can either tolerate loss of cache data or can very easily repopulate the cache from a persistent store.
However, there are some use cases where downtime for a Memorystore instance, or a complete loss of instance data, can cause long application downtimes. We recommend using the Standard Tier as the primary mechanism for high availability. Additionally, enabling RDB snapshots on Standard Tier instances provides extra protection from failures that can cause cache flushes. The Standard Tier provides a highly available instance with multiple replicas, and enables fast recovery using automatic failover if the primary fails.
In some scenarios you may also want to ensure data can be recovered from snapshot backups in the case of catastrophic failure of Standard Tier instances. In these scenarios, automated backups and the ability to restore data from RDB snapshots can provide additional protection from data loss. With RDB snapshots enabled, if needed, a recovery is made from the latest RDB snapshot.
For more information, you can refer to the documentation related to version upgrade behavior.
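As an illustrative sketch only (the instance name, region, bucket path, and version string below are placeholders; verify the commands against the current gcloud reference before use), the export-then-upgrade flow could be scripted like this:
import subprocess

INSTANCE = "my-redis-instance"                  # placeholder instance ID
REGION = "us-central1"                          # placeholder region
BACKUP_URI = "gs://my-bucket/redis-backup.rdb"  # placeholder Cloud Storage path

# 1. Export the current data to Cloud Storage before touching the instance.
subprocess.run(["gcloud", "redis", "instances", "export",
                BACKUP_URI, INSTANCE, "--region", REGION], check=True)

# 2. Upgrade the instance. This step is irreversible; there is no downgrade.
subprocess.run(["gcloud", "redis", "instances", "upgrade",
                INSTANCE, "--redis-version", "redis_6_x", "--region", REGION], check=True)
Since a downgrade is not supported, the practical fallback if the upgraded instance misbehaves would be to create a fresh instance on the old version and import the RDB file exported in step 1.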
We are planning to migrate ActiveMQ from version 5.5.1 to 5.11.2. How do we migrate the existing messages from the older version (5.5.1) to the newer version (5.11.2)?
Thanks in advance.
This assumes you have already taken care of any migration issues noted in each release note from 5.6.0 to 5.11.2.
There are essentially two ways to upgrade/migrate a broker.
Simply install the new broker and point it at the old (KahaDB) database. The store will be upgraded automatically to the new version. This may cause some downtime during the store upgrade (at least if there are a lot of messages in the store).
Have two parallel brokers running at once and let the old one "fade out". You can set up a shiny new 5.11 broker side by side. This also makes it possible to migrate to other store types (JDBC or LevelDB). It's a little more work but will keep your uptime maximized. If you depend on message order, I would not recommend this method.
Set up the new broker.
Remove the transportConnector from the old broker's configuration and add a network connector from the old broker to the new one (an example is shown below).
Stop the old broker, start the new one, then start the old one again.
Now, clients (using failover, right?) will fail over to the new broker, and messages from the old broker will be copied over to the new one as long as there are connected consumers on all queues.
When no more messages are left on the old broker, shut it down and uninstall it.
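For illustration, a minimal configuration sketch of that network connector in the old broker's activemq.xml (host names and ports are placeholders; adjust them to your environment):
<!-- In the OLD broker's activemq.xml: forward messages to the new broker. -->
<networkConnectors>
  <networkConnector name="old-to-new" uri="static:(tcp://new-broker-host:61616)"/>
</networkConnectors>
<!-- Clients should use a failover URI that lists both brokers, for example:
     failover:(tcp://old-broker-host:61616,tcp://new-broker-host:61616) -->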
As with all upgrades, skipping a lot of versions makes the upgrade less reliable. I would try a dry-run upgrade of a production replica to ensure that everything goes as planned.
We have an issue with a Windows service which uses NServiceBus. At some random moment, NServiceBus stops processing messages and sends them straight to the error queue, and I have to restart the service. After the restart, the messages that have arrived in the input queue are handled, and everything gets back to normal. If we re-drop the messages that went to the error queue, they are processed successfully without any issue.
We use log4net to audit the message flow and store the entries in a database. When the problem occurs, the NServiceBus handler stops logging to log4net; after we restart the Windows service (NServiceBus) it starts logging again. We are NOT able to reproduce this issue in the development environment. We suspect this could be an NServiceBus memory leak, but we don't know how to confirm and resolve it.
We are planning to move this Windows service (NServiceBus) to a different server on a trial-and-error basis. Has anyone faced this issue and resolved it? Please help us resolve it, as it is causing serious trouble in the production environment.
NServiceBus version we are using: 2.0.0.1329
The message queue and the Windows service are on the same machine.
I believe you're running on a version of NServiceBus that is about 5 years old and is no longer supported. While I could give you the standard recommendation of upgrading to a more current release, it could very well be that some of the configuration APIs that you're using have been made obsolete so you may need to make some modifications there and/or in the app.configs.
I'm sorry to say that there probably isn't a better solution for you at this time.
In general, I'd suggest trying to track the NServiceBus releases somewhat more closely. If you're within 6-12 months of the current release, you should generally be in good shape.
We have one VM for BizTalk and a separate VM for the SQL backend. We are using Veeam for backups, which basically kicks off a snapshot of the VM. When this snapshot is being finalized on the SQL VM, the BizTalk services on the application server fail. Usually they restart automatically, but sometimes manual intervention is required to start the services. The error below is logged on the BizTalk server.
Is there any timeout setting or config changes that will allow BizTalk services to stay up during the snapshot process?
An error occurred that requires the BizTalk service to terminate. The most common causes are the following:
1) An unexpected out of memory error.
OR
2) An inability to connect or a loss of connectivity to one of the BizTalk databases.
The service will shutdown and auto-restart in 1 minute. If the problematic database remains unavailable, this cycle will repeat.
Error message: [DBNETLIB][ConnectionRead (recv()).]General network error. Check your network documentation.
Error source:
BizTalk host name: BizTalkServerApplication
Windows service name: BTSSvc$BizTalkServerApplication
We experienced the same situation and error with both BizTalk 2009 and BizTalk 2013, each set up with two App servers and one SQL DB server.
When our VMware does the final step of the snapshot backup on the application servers, it freezes the application server for about 10 seconds, preventing it from receiving packets. SQL Server 2008 and 2012 by default send keep-alive packets to the clients every 30 seconds (30,000 ms). If the SQL server fails to receive a response back from the App server, it sends out 5 retries (the default setting) of the keep-alive request 1 second (1,000 ms) apart. If SQL still does not receive the response, it terminates the connection, which causes the BizTalk hosts on the App server to reset. In our case, when our German-made ERP system sends its EDI documents over to BizTalk during that reset period, the transmission fails.
We trapped the issue by running NetMon on the DB and App servers and waiting for the next error message. Upon inspection, we saw the five SQL keep-alive packets being sent to the App servers 1 second apart, while at the same time NO packets at all were received on the application server. At first guess one might think they were "just dropped network packets", which is rarely the case. We then made the correlation to the timing of the VM snapshots, and can now confirm that each day, when the snapshot finishes, the App servers freeze.
As a short-to-mid-term workaround, we raised the number of retries SQL attempts before declaring a connection dead (5 by default) by adding the registry value TcpMaxDataRetransmissions and setting it to 30 (thus 30 seconds before SQL declares the client unresponsive). This has masked the problem for us for now; use it at your own discretion.
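If you want to script that change, here is a minimal Python sketch (run as administrator on the SQL server; the path below is the standard TCP/IP parameters key where TcpMaxDataRetransmissions lives, and a reboot is typically required for the change to take effect):
import winreg

# TcpMaxDataRetransmissions is a system-wide TCP/IP setting under this key.
KEY_PATH = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters"

key = winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0, winreg.KEY_SET_VALUE)
winreg.SetValueEx(key, "TcpMaxDataRetransmissions", 0, winreg.REG_DWORD, 30)
winreg.CloseKey(key)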
We are also looking at an agent-based version of the VM snapshot, which may avoid freezing the server.
Is there any timeout setting or config changes that will allow BizTalk services to stay up during the snapshot process?
Not that I am aware of; however, you might want to Google the config options in the btsntsvc.exe.config file, which is located in your BizTalk installation directory.
All messages that pass through BizTalk are written to the BizTalkMsgBoxDb, and its other databases are involved if you are running tracking, BAM, etc. The only service that can cache 'stuff' and handle a database outage is the Enterprise Single Sign-On (ESSO) Service. BizTalk therefore needs a persistent connection to the database server to remain 'up', hence why your host instance (BizTalkServerApplication) is stopping - it simply wouldn't be able to process messages if the database wasn't there.
I would add that your approach to backups probably isn't supported by Microsoft, and I would further suggest that you seriously consider whether an approach that takes your database server offline during the backup is viable.
BizTalk has a pretty robust backup solution for its various databases built into the product, and I would recommend that you take a look at using this supported method.
If you do need to take snapshots of the database system - say once a night - you might want to consider stopping the BizTalk Host Instances, performing the snapshot, and then re-starting the Host Instances through some scripted task.
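For example, a rough Python sketch of such a scripted task, using the Windows service name from the error above (the snapshot step itself is tooling-specific and only hinted at here):
import subprocess

SERVICE = "BTSSvc$BizTalkServerApplication"  # host instance service name from the error above

# Stop the BizTalk host instance before the snapshot...
subprocess.run(["net", "stop", SERVICE], check=True)

# ...trigger the database snapshot here via your backup tooling (not shown)...

# ...then start the host instance again afterwards.
subprocess.run(["net", "start", SERVICE], check=True)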
You might also want to consider checking whether there are any hotfixes for your version of BizTalk Server included in a Cumulative Update that might help address your problem.
I am evaluating using NServiceBus as a SOA mechanism in our product. I'm looking into using the publish/subscribe pattern and my understanding is that the subscription service will store all subscriptions.
Does that mean that if my RavenDB server goes down then my publishers lose the ability to send to subscribers? Or is there a way for the publishers to cache the subscribers it has and if RavenDB were to go down then it would deliver to its known subscribers?
You can run the RavenDB server as a replicated node, to avoid this being a single point of failure.
The general pattern is for an endpoint to have a master node that acts as worker and distributor, and then the master node uses a Raven installation on that same server to store its subscriptions and saga storage.
So, it is a point of failure for that one endpoint, but other endpoints in the distributed system will use the Raven installs on their own servers. Thus, the system is kept distributed and the entire system does not have a single point of failure. RavenDB enables this because it is fairly easy to install it on any server.
Contrast this to SQL Server, which is frequently centralized, scaled up to the max, and even clustered in order to provide high availability. (Read: expensive!)
You can also run RavenDB in a Windows failover cluster where the nodes use a shared SAN for the RavenDB data files. If the active node dies, another takes over. Since the data is stored on the SAN, you shouldn't even notice it except for the time it takes to start the RavenDB Windows service on the new node. Check out http://ravendb.net/docs/server/administration/fmc_configuration
This is also the recommended setup for High Availability when running with Distributors. http://docs.particular.net/nservicebus/scalability-and-ha/distributor/