RavenDB index replication with NServiceBus

In an attempt to make index replication more robust in RavenDB's latest stable build, I am trying to introduce NServiceBus into a custom index replication bundle. Instead of having ReplicateToSqlIndexUpdateBatcher dequeue the commands and open a connection to SQL to execute them when it is disposed, I would rather put them on a bus and process them later in a failure-tolerant way. I have put the relevant config entries in raven.server.exe.config and passed an IStartableBus to the constructor of ReplicateToSqlIndexUpdateBatcher. Inside the Dispose method of ReplicateToSqlIndexUpdateBatcher I get an IBus from the IStartableBus, dequeue each command, map it to a message and Bus.Send it, but somehow I keep getting "no message destination specified" as an error in Raven Studio. I have added the message mappings to the config, and also tried adding them programmatically using the fluent interface when obtaining the IStartableBus, but to no avail.
What am I missing here?

There is no need to do so.
We have a new SQL Replication bundle that will handle this scenario robustly.

Related

How to replay an NServiceBus message

Is it possible to replay all failed messages through NServiceBus without using ServiceControl/ServicePulse?
I'm using NServiceBus.Host.exe to host our endpoints. Our ServiceControl/ServicePulse database became corrupt. I was able to recreate it, but now I have a few failed messages in our SQL database which are not visible through ServicePulse.
Will this help?
Take a look at the readme.md
For people who want the functionality that this tool previously provided, please take one of the following actions:
- Return to source queue via either ServiceInsight or ServicePulse.
- Return to source queue using custom scripting or code. This has the added benefit of enabling possible performance and usability optimizations since, as the business owner, you have more context as to how your error queue should be managed. For example, using this approach it is trivial for you to choose to batch multiple sends inside the same Transaction.
- Manually return to source queue via any of the MSMQ management tools.
- If you still want to use MsmqReturnToSourceQueue.exe, feel free to use the code inside this repository to compile a copy.
You can look at the link provided to build your own script (to match SQL) and strip the error message wrapper so you can push the stripped message back to the SQL queue.
Does this help?
If not please contact support at particular dot net and we will be glad to help :-)
There is nothing built into the Particular stack that I know of that will take care of this.
When I have run into issues like this before, I will usually set up a console application to send some commands into the endpoint and then set up a custom handler in the endpoint to fix the data inconsistencies. This allows you to test the "fix" in a dev/UAT environment, and then you have an automated solution for production to fix the problem.

Clearing Locks between JUnit tests in Hibernate Search 4.1

We recently upgraded to Hibernate Search 4.1 and are getting errors when we run our JUnit tests, caused by the changes Hibernate made with regard to locks. When we run JUnit tests with AbstractTransactionalJUnit4SpringContextTests, we often see locks left behind after each test. In reviewing (How to handle Hibernate-Search index recovery) we tried the native locks, but this did not resolve the issue.
We've tried out the various locking mechanisms (simple, single, and native) using the default directory provider (Filestore) and regularly see messages like:
build 20-Apr-2012 07:07:53 ERROR 2012-04-20 07:07:53,290 154053 (LogErrorHandler.java:83) org.hibernate.search.exception.impl.LogErrorHandler - HSEARCH000058: HSEARCH000117: IOException on the IndexWriter
build 20-Apr-2012 07:07:53 org.apache.lucene.store.LockObtainFailedException: Lock obtain timed out: NativeFSLock#target/indexes/Resource/write.lock
build 20-Apr-2012 07:07:53 at org.apache.lucene.store.Lock.obtain(Lock.java:84)
build 20-Apr-2012 07:07:53 at org.apache.lucene.index.IndexWriter.<init>(IndexWriter.java:1108)
or
build 19-Apr-2012 19:31:09 ERROR 2012-04-19 19:31:09,395 153552 (LuceneBackendTaskStreamer.java:61) org.hibernate.search.backend.impl.lucene.LuceneBackendTaskStreamer - HSEARCH000072: Couldn't open the IndexWriter because of previous error: operation skipped, index ouf of sync!
Some of these messages seem to show the lock issue cascading from one test to another, hence the need for a reset. Some may be valid because the tests exercise 'invalid' behaviors and check how our application reacts to them, but often they stem from cases like this, where the ID is null:
build 19-Apr-2012 19:31:11 Primary Failure:
build 19-Apr-2012 19:31:11 Entity org.tdar.core.bean.resource.CodingSheet Id null Work Type org.hibernate.search.backend.PurgeAllLuceneWork
But regardless, we need to make sure that one test does not affect another.
In reading some of the discussions (email discussion on directory providers), it was suggested that the RAM-based directory provider might be a better option, but we'd prefer to use the same provider in tests as we use in production wherever possible.
How should we be resetting Hibernate Search between tests to clean up lock files and recover from situations where the index is out of sync or corrupted? We wipe the index directory at the beginning of the test suite; is it recommended to wipe it after every test?
thanks
If you have stale locks in the directory, it means that Hibernate Search wasn't shut down properly, since it always releases the locks on a clean shutdown.
If you start a new Hibernate SessionFactory in each test, you should also make sure it is closed after the test has run:
org.hibernate.SessionFactory.close()
(This is often missing in examples, as there are no immediately noticeable problems when you forget to close a Hibernate SessionFactory, but it has never been optional and it might leak connections or threads.)
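As a rough sketch, assuming a plain JUnit 4 test that builds its own SessionFactory rather than having Spring manage it (the field and class names below are placeholders), the teardown could look like this:

import org.hibernate.SessionFactory;
import org.junit.After;

public class SearchIntegrationTest {

    private SessionFactory sessionFactory; // built in the test's own setup method

    @After
    public void tearDown() {
        // Closing the SessionFactory shuts Hibernate Search down cleanly,
        // which releases the Lucene write.lock before the next test runs.
        if (sessionFactory != null && !sessionFactory.isClosed()) {
            sessionFactory.close();
        }
    }
}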
The thread from the Hibernate mailing list you linked to ended up changing the locks to use native handles in Hibernate Search 4.1, so that locks are cleaned up automatically in case the JVM crashes or is killed. But in your case I guess you're not killing the VM between tests, so you just need to make sure locks are released properly by shutting down the service.
exclusive_index_use=false hides the problem as the IndexWriter will be closed at the end of each transaction. That makes it slower though, as it's significantly more efficient to reuse the IndexWriter. The reason you have this issue after upgrading to Hibernate Search 4.1 is that this option was changed to true by default. But even then, you should still close it properly.
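If you do want to revert to the pre-4.1 behaviour for tests only, something along these lines should work. This is a hedged sketch assuming a programmatic Hibernate Configuration in the test setup; if your SessionFactory is defined in Spring XML, set the same property there instead.

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

// Test-only override: close the IndexWriter at the end of each transaction so a
// stale write.lock cannot leak from one test into the next. Slower, but isolated.
Configuration cfg = new Configuration();
cfg.setProperty("hibernate.search.default.exclusive_index_use", "false");
// ... register annotated classes and connection settings as in the normal test config
SessionFactory sessionFactory = cfg.buildSessionFactory();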
My understanding is that Spring manages the SessionFactory lifecycle, so it is not necessary to call close() at any time.
I have seen this locking error when there are multiple contexts loaded during a test run. For example, the first context creates the locks on the index file; the second context attempts to access the same indexes and fails due to the open SessionFactory from the first context.
I have fixed this by using @DirtiesContext, which closes the context before the next one is instantiated; see the sketch below.
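For reference, a minimal sketch of that approach (the context file and test class names here are just placeholders):

import org.springframework.test.annotation.DirtiesContext;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.AbstractTransactionalJUnit4SpringContextTests;

// Marking the class as dirtying the context makes Spring close the ApplicationContext
// (and with it the SessionFactory and Hibernate Search) once the class has finished,
// so its index locks are released before the next context is created.
@ContextConfiguration("classpath:test-context.xml")
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_CLASS)
public class ResourceIndexingTest extends AbstractTransactionalJUnit4SpringContextTests {
    // ... tests that touch the Lucene index
}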

Migration patch from NServiceBus 2.6 to NServiceBus 3.0

I have an existing NServiceBus 2.6 application that I want to start moving to 3.0. I'm looking for the minimum-change upgrade in the first instance. Is this as simple as replacing the 2.6 DLLs with the 3.0 NuGet packages, or are there other considerations?
For the most part the application migration is quite straightforward, but depending on your configuration and environment, you may need to make the following changes:
- The new convention over configuration for endpoints may mean you will need to rename your endpoints to match your queue names (@andreasohlund has a good post about this).
- Persistence of sagas, timeouts, subscriptions etc. now defaults to RavenDB, so if you use SQL Server to persist data, you need to make sure you have the correct profile and endpoint configuration. For SQL Server storage, make sure you add a reference to NServiceBus.NHibernate, as it is no longer part of the core.
- Error queues are now configured differently: use MessageForwardingInCaseOfFaultConfig instead of the error property on the regular MsmqTransportConfig. You should still be able to use the old property, but NServiceBus will look for MessageForwardingInCaseOfFaultConfig first.
Other than that, I don't think you need to do anything else to get your upgrade working. I modified some of my message definitions to take advantage of the new ICommand and IEvent interfaces as a way of communicating intent more clearly.
Anyway, I'm sure there will be some cases that are specific to your environment that will require different changes but I hope this helps a bit.

Nhibernate Profiler - Shows no information other than "session"?

So I am having problems getting NHibernate integrated into my MVC project. I therefore installed NHProfiler and initialized it in the Global.asax.cs file (NhibernateProfiler.Initialize();).
However, all I can see in NHProf is a Session # and the time it took to come up. Selecting it, or performing any other operation, doesn't show me any information about the connection to the database, or any information at all in any of the other windows, such as:
- Statements, Entities, Session Usage
The Session Factory Statistics view only shows Start time and execution time, and that's it.
Any thoughts?
Do you have any custom log4net configuration? Just thinking that might be overwriting NHProf's log4net listener after startup. If you refresh the page (and hence start another session*), does NHProf display another session start? Also verify that your HibernatingRhinos.Profiler.Appender.dll (or HibernatingRhinos.Profiler.Appender.v4.0.dll if you're using .NET 4) matches the current version of NHProf.
* I'm assuming that you're using Session-per-Request since this is a web app.

Can I implement if-new-create JPA strategy?

My current persistence.xml table-generation strategy is set to create. This guarantees that each new installation of my application will get the tables, but it also means that every time the application is started, the logs are polluted with exceptions from EclipseLink trying to create tables that already exist.
The strategy I want is for the tables to be created only when they are absent. One way for me to implement this is to check for the database file and, if it doesn't exist, create the tables using:
import org.eclipse.persistence.sessions.server.ServerSession;
import org.eclipse.persistence.tools.schemaframework.SchemaManager;

ServerSession session = em.unwrap(ServerSession.class);
SchemaManager schemaManager = new SchemaManager(session);
schemaManager.createDefaultTables(true);
But is there a cleaner solution? Possibly a try-catch approach? It's error-prone for me to guard each database method with a try-catch where the catch executes the above code; I'd expect this to be a property I can configure the EMF with.
The table creation problems should only be logged at the warning level, so you could filter these out by setting the log level higher than warning, or create a separate EM that mirrors the actual application EM, to be used just for table creation but with logging turned off completely.
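A minimal sketch of the first option, assuming the EntityManagerFactory is created programmatically ("my-pu" is a placeholder persistence-unit name):

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

// Raise the log threshold so DDL-generation warnings about existing tables are
// filtered out; eclipselink.logging.level can also be set in persistence.xml.
Map<String, String> props = new HashMap<String, String>();
props.put("eclipselink.logging.level", "SEVERE");
EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-pu", props);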
As for catching exceptions from createDefaultTables: there shouldn't be any. The internals of createDefaultTables wrap the actual createTable calls and ignore the errors they might throw, so the exceptions only show up in the log because the log level includes warning messages. You could wrap the call in a try/catch, set the session log level to off, and then reset it in the finally block, though.
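That could look roughly like this. It is a sketch built on the snippet from the question: em is the existing EntityManager, and the log-level calls use the EclipseLink session API.

import org.eclipse.persistence.logging.SessionLog;
import org.eclipse.persistence.sessions.server.ServerSession;
import org.eclipse.persistence.tools.schemaframework.SchemaManager;

ServerSession session = em.unwrap(ServerSession.class);
int previousLevel = session.getLogLevel();
try {
    // Silence the session log so "table already exists" warnings are not written,
    // then let createDefaultTables skip any table that is already present.
    session.setLogLevel(SessionLog.OFF);
    new SchemaManager(session).createDefaultTables(true);
} finally {
    // Restore the original level for normal application logging.
    session.setLogLevel(previousLevel);
}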