I'm having a problem adding an analyzer through the RavenDB interface. When I enter StandartAnalyzer or SimpleAnalyzer, it never saves.
Try the fully-qualified names: Lucene.Net.Analysis.Standard.StandardAnalyzer and Lucene.Net.Analysis.SimpleAnalyzer.
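If you're setting the analyzer from code rather than through the Studio, the same fully-qualified string is what goes into the index definition. A minimal sketch, assuming the client's AbstractIndexCreationTask API and a hypothetical Post entity with a Body property:

    using System.Linq;
    using Raven.Client.Indexes;

    // Hypothetical entity and index names; the key detail is the
    // fully-qualified analyzer type name.
    public class Posts_ByBody : AbstractIndexCreationTask<Post>
    {
        public Posts_ByBody()
        {
            Map = posts => from post in posts
                           select new { post.Body };
            // A short name like "StandardAnalyzer" won't resolve here;
            // the full type name is required.
            Analyzers.Add(x => x.Body, "Lucene.Net.Analysis.Standard.StandardAnalyzer");
        }
    }

Registered as usual with IndexCreation.CreateIndexes(typeof(Posts_ByBody).Assembly, store).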
What is the difference between IndexWriter.Close() and IndexWriter.Commit() when I have just a single instance of IndexWriter?
Note: The data I am going to index is very big, so I can't close the IndexWriter at runtime.
Note: I want to be able to search the documents while the data is being indexed at the same time.
Commit() commits pending, buffered changes to the index (which can then be seen by a newly opened IndexReader). The IndexWriter can then continue to be used for more changes. Close() also performs a Commit(), but additionally closes the IndexWriter. Note that IndexWriter implements IDisposable, and I recommend using it in a using statement.
By your first note, if you mean there are lots of documents to index, that's fine. You can use the same IndexWriter for many documents without closing it. Just loop through however many documents you want to index within the same IndexWriter using statement.
With regard to your second note, you must perform a Commit() (or Close()) before your IndexWriter's changes will be seen by an IndexReader. You can always search with an IndexReader, but it will only see the index as it was at the last IndexWriter.Commit().
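For concreteness, here's that loop-and-commit pattern as a sketch in Lucene.Net 3.x (the bigListOfDocuments collection and the "body" field are made-up names):

    using Lucene.Net.Analysis.Standard;
    using Lucene.Net.Documents;
    using Lucene.Net.Index;
    using Lucene.Net.Store;

    var analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_30);
    using (var directory = FSDirectory.Open(new System.IO.DirectoryInfo("index")))
    using (var writer = new IndexWriter(directory, analyzer,
                                        IndexWriter.MaxFieldLength.UNLIMITED))
    {
        foreach (var text in bigListOfDocuments) // however you enumerate your data
        {
            var doc = new Document();
            doc.Add(new Field("body", text, Field.Store.YES, Field.Index.ANALYZED));
            writer.AddDocument(doc);
        }
        writer.Commit(); // changes are now visible to a newly opened IndexReader
    } // Dispose() commits any remaining changes and closes the writer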
I recommend Lucene In Action for these important details. It helped me a great deal.
I've been developing a CRUD application using DataSets in C# and SQL Server 2012. The project is basically an agenda which holds information about Pokémon (name, abilities, types, image, etc.).
For the past few months I've been facing a problem related to a concurrency violation. In other words, when I try to delete or update rows that I've just added during the same execution of the program, a concurrency exception is thrown and it becomes impossible to perform any other changes in the database, so I need to restart the program to be able to make changes again. (Important note: this exception only happens for the new rows added through C#.)
I've been looking for a solution to this violation (without using Entity Framework or LINQ to SQL), but I couldn't find anything that I could add to my C# source code. Does anyone know how to handle this? What should I implement in my source code? Is there anything I could do in SQL Server that would help?
Here are links to my project, a backup of the database, and images of the main form and the database diagram:
http://www.mediafire.com/download.php?izkat44a0e4q8em (C# source code)
http://www.mediafire.com/download.php?rj2e118pliarae2 (Sql backup)
http://imageshack.us/a/img708/3823/pokmonform.png (Main Form)
http://imageshack.us/a/img18/9546/kantopokdexdiagram.png (Database Diagram)
I have looked at your code and it seems that you use AcceptChanges on the DataTable daKanto inconsistently. In fact you use AcceptChangesDuringUpdate, which is also fine, although I prefer to call dataTable.AcceptChanges() explicitly after the update; your way works too.
Anyway, I have noticed that you use AcceptChangesDuringUpdate in the Delete_Click and Update__Click methods, but not in Save_Click; I also think you should use AcceptChangesDuringFill in MainForm_Load, where you fill your datasets.
I cannot guarantee that it will help, but I know that uniformity of data access throughout an application reduces the risk of unexpected data consistency errors.
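For illustration, a sketch of keeping those settings uniform (the adapter setup here is made up; with a typed DataSet you would apply the same settings to each generated TableAdapter):

    // Both properties default to true; setting them explicitly on every adapter
    // keeps row states (Added/Modified/Unchanged) consistent across the app.
    var adapter = new SqlDataAdapter("SELECT * FROM Kanto", connection);
    var builder = new SqlCommandBuilder(adapter); // supplies INSERT/UPDATE/DELETE commands
    adapter.AcceptChangesDuringFill = true;    // rows arrive marked Unchanged
    adapter.AcceptChangesDuringUpdate = true;  // rows reset to Unchanged after Update()

    adapter.Fill(dataSet, "Kanto");
    // ... the user adds and edits rows ...
    adapter.Update(dataSet, "Kanto");
    // Explicit alternative after a successful Update():
    // dataSet.Tables["Kanto"].AcceptChanges();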
We have a .NET application that runs on Windows Azure and uses NHibernate to connect to a SQL Azure database. Sometimes it's necessary to add retry logic to handle transient failures in SQL Azure as described for example here -
http://social.technet.microsoft.com/wiki/contents/articles/retry-logic-for-transient-failures-in-sql-azure.aspx
Can someone point me to a way in doing this with NHibernate? Ideally I'd like to do this at the NHibernate level and not wrap every single call; I was able to do this for another ORM, .netTiers (http://nettiers.com) as I outline here -
http://blog.ehuna.org/2010/01/how_to_stop_getting_exceptions.html
I did search and found some answers that mention using a custom implementation of the IDbCommand interface -
Intercept SQL statements containing parameter values generated by NHibernate
But I'm not sure this works with NHibernate 3.2 and I'm looking for a clear example I could modify.
How could I make NHibernate retry calls to SQL Azure automatically? Let's say 3 retries, with a 100 ms wait between each; if it's still failing after the 3 retries, the exception should be thrown.
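To make it concrete, the behaviour I'm after is roughly this generic sketch (made-up helper names, not wired into NHibernate):

    using System;
    using System.Threading;

    public static class Retry
    {
        // 3 retries with a 100 ms wait; after that the exception propagates.
        // In practice you would catch SqlException only and inspect the error
        // numbers documented as transient for SQL Azure.
        public static T Execute<T>(Func<T> action, int maxRetries = 3, int delayMs = 100)
        {
            for (int attempt = 0; ; attempt++)
            {
                try
                {
                    return action();
                }
                catch (Exception)
                {
                    if (attempt >= maxRetries)
                        throw;
                    Thread.Sleep(delayMs);
                }
            }
        }
    }

But calling something like var job = Retry.Execute(() => session.Get<Job>(id)); per query is exactly the wrapping of every single call that I'd like to avoid.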
I've released a library that takes care of this:
https://github.com/robdmoore/NHibernate.SqlAzure
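Usage is essentially a driver swap in the NHibernate configuration; a sketch (check the README for the exact, current class name):

    // Assumes NHibernate's loquacious configuration API and the driver class
    // exposed by NHibernate.SqlAzure; verify the name against the repo.
    configuration.DataBaseIntegration(db =>
    {
        db.Driver<SqlAzureClientDriver>();
    });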
This is not a complete running example, but the files you need are here:
https://github.com/craigvn/NHibernateRetryable
Maybe this can help you out:
http://www.objectreference.net/post/NHibernate-and-Execution-Plans.aspx. There is a class there, with some overrides, that you can download and play with.
Also check this link: http://elliottjorgensen.com/nhibernate-api-ref/NHibernate.Driver/SqlClientDriver.html
I hope it helps in some way; I'm going through a similar situation.
Regards,
In my testing code I need to have a blank/empty database for each test method. Is there code that would achieve that, to call in the @Before of the test?
Actually, you can always use JPQL:

    em.createQuery("DELETE FROM MyEntity m").executeUpdate();
Note, though, that there is no guarantee that the entity cache will be cleared as well. But for unit-testing purposes it looks like a good solution.
In my testing code I need to have a blank/empty database for each test method.
I would run the test methods inside a transaction (and roll back at the end of each method). That's the usual approach. I don't see the point of committing a transaction and writing data to the database if you DELETE it just after. Just don't commit.
An alternative (not exclusive) would be to use DbUnit to put your database in a known state before a test execution. When doing this, you usually don't need to clean it up.
Another option would be to use raw JDBC to drop the database if it exists and then have JPA recreate the whole database. That will be pretty slow, though.
I am working on a job portal site and have been using Lucene for job search functionality.
Users will be posting a number of jobs on our site on a daily basis. We need to make sure that a newly posted job is searchable on the site as soon as possible.
In this context, how do I update the Lucene index when a new job is posted or when an existing job is edited?
Can Lucene index updating and searching work in parallel?
Also, can you share any tips/best practices with respect to Lucene indexing, optimizing, performance, etc.?
Appreciate your help!
Thanks!
Yes, Lucene can search from and write to an index at the same time, as long as no more than one IndexWriter writes to it. If you want the new records visible as soon as possible, have the IndexWriter call the commit() function often (see the IndexWriter JavaDoc for details).
These Wiki pages might also help:
ImproveIndexingSpeed
ImproveSearchingSpeed
I have used Lucene.Net on a web site similar to what you are doing. Yes, you can do live index updating to keep everything up to date. What platform are you running Lucene on, .NET or Java?
Make sure you create a new IndexSearcher, as any additions made after an IndexSearcher has been created are not visible to that instance.
A better approach may be to Reopen() the IndexReader and rebuild the searcher from it only when the reader has actually changed, which is cheaper than opening everything from scratch.
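A sketch of that refresh pattern in Lucene.Net 3.x (assuming reader and searcher are fields you keep around between searches):

    // Reopen() returns the same instance when nothing has changed, so a new
    // searcher is only built when the index actually has new commits.
    var newReader = reader.Reopen();
    if (newReader != reader)
    {
        searcher.Dispose();
        reader.Dispose();
        reader = newReader;
        searcher = new IndexSearcher(reader);
    }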