How do I release all Lucene.NET file handles? - lucene

I want to run a process that completely destroys and then rebuilds my Lucene.NET search index from scratch.
I'm stuck on the destroying part.
I've called:
IndexWriter.Commit();
IndexWriter.Close();
Analyzer.Close();
foreach (var name in Directory.ListAll()) { Directory.ClearLock(name); Directory.DeleteFile(name); }
Directory.Close();
but the process is failing because there is still a file handle on the file '_0.cfs'.
Any ideas?

Are you hosted in IIS? Try an iisreset (sometimes IIS is holding onto the files themselves).

Just call IndexWriter.DeleteAll() followed by an IndexWriter.Commit(). This will remove the index content and let you start off with an empty index, while already-open readers can still read the old data until they are closed. The old files are removed automatically once they are no longer in use.
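A minimal sketch of that approach (assuming `writer` is your existing, already-open IndexWriter):

```csharp
// Clears the index in place instead of deleting segment files by hand.
writer.DeleteAll();   // removes every document from the index
writer.Commit();      // makes the empty index visible to newly opened readers

// Old segment files (e.g. _0.cfs) are cleaned up automatically by Lucene
// once no open reader references them -- no manual Directory.DeleteFile()
// calls, and no fighting over file handles.
```

This sidesteps the handle problem entirely, because the writer itself owns the files it is deleting.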

Related

How to reload AnalyzingInfixSuggester from the directory?

Using Lucene 5.4.1, I am trying to use the AnalyzingInfixSuggester to build a suggestion library, and I'm running into an issue where I am unable to load that suggester. I have one process that builds an index out of my data, and another process (a web service) that returns data by searching against that index. However, when I try to open the index, I get nothing from suggester.getCount().
I am calling commit() on my suggester after writing to the directory. On the file system, the files in the directory in question contain about 5.8 MB of data. However, when I open it to make a search from the web service, I get nothing. I tried calling build and refresh in case I needed them for initialization, and still get nothing.
I feel like I'm missing something. Can someone please point me in the direction of some example code that actually reads the suggestion documents from a file system directory?
The answer was much simpler than I thought: I had forgotten that my index was built on my machine while the service was running on a Vagrant instance. I wrote to a shared directory, removed the calls to refresh and build, and it works. So to recap, all that is necessary is to construct the suggester by passing the directory and analyzer, and it works as expected.

Multiple instances of application using Lucene.Net

I'm developing a WPF application that uses Lucene.Net to index data from files being generated by a third-party process. It's low volume with new files being created no more than once a minute.
My application uses a singleton IndexWriter instance that is created at startup. Similarly an IndexSearcher is also created at startup, but is recreated whenever an IndexWriter.Commit() occurs, ensuring that the newly added documents will appear in search results.
Anyway, some users need to run two instances of the application, but the problem is that newly added documents don't show up when searching within the second instance. I guess it's because the first instance is doing the commits, and there needs to be a way to tell the second instance to recreate its IndexSearcher.
One way would be to signal this using a file create/update in conjunction with a FileSystemWatcher, but I first wondered if there is anything in Lucene.Net that I could use.
The only thing I can think of that might be helpful for you is IndexReader.Reopen(). This will refresh the IndexReader, but only if the index has changed since the reader was originally opened. It should cause minimal disk access in the case where the index hasn't been updated, and in the case where it has, it tries to only load segments that were changed or added.
One thing to note about the API: Reopen returns an IndexReader. In the case where the index hasn't changed, it returns the same instance; otherwise it returns a new one. The original index reader is not disposed, so you'll need to do it manually:
IndexReader reader = /* ... */;
IndexReader newReader = reader.Reopen();
if (newReader != reader)
{
    // Only dispose the old reader if we got a new one
    reader.Dispose();
}
reader = newReader;
I can't find the .NET docs right now, but here are the Java docs for Lucene 3.0.3 that explain the API.
If both instances have their own IndexWriter opened on the same directory, you're in for a world of pain and intermittent bad behaviour.
An IndexWriter expects and requires exclusive control of the index directory. This is the reason for the lock file.
If the second instance can detect that there is an existing instance, then you might be able to just open an IndexReader/IndexSearcher on the folder and reopen it when the directory changes.
But then what happens if the first instance closes? The index will no longer be updated, so the second instance would need to reinitialise, this time with an IndexWriter of its own. Perhaps it could do this when the lock file is removed as the first instance closes.
The "better" approach would be to spin up a "service" (just a background process, maybe in the system tray). All instances of the app would then query this service. If the app is started and the service is not detected then spin it up.

How to stop running indexing before starting another?

I'm making a web app that uses Lucene as its search engine. First, the user has to select a file/directory to index, and after that he is able to search it (duh!). My problem happens when the user tries to index a huge amount of data: for example, if it's taking too long and the user refreshes the page and tries to index another directory, an exception is thrown because the first indexing is still running (write.lock shows up). Given that, how is it possible to stop the first indexing? I tried closing the IndexWriter with no success.
Thanks in advance.
Why do you want to interrupt the first indexing operation and restart it again?
In my opinion you should display a simple image which shows that the system is working (as Nielsen says: "The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.") and when the user presses refresh you should intercept the event and prevent another indexing process from starting.
You are probably trying to open an IndexWriter instance on an index directory that already has an IndexWriter open on it. If you had opened the writers on two different index directories, the write.lock exception would not happen. Please check that the new IndexWriter instance is not writing to the previously opened index directory that already has an IndexWriter on it.

CrystalDecisions.CrystalReports.Engine.LoadSaveReportException: Load report failed - VB.NET 2003

Anybody know why the below error exist?
CrystalDecisions.CrystalReports.Engine.LoadSaveReportException: Load report failed
From your comments about windows\Temp, that is caused by the application pool's identity not having access to c:\windows\Temp (and possibly to the reports folder).
You can solve this problem by giving the application pool credentials that do have the necessary permissions, or by giving read/write permissions to "Network User" on the c:\windows\temp folder (and again, possibly on the reports folder).
The reason why this folder is required is that the crystal runtime creates a dynamic copy of the report at runtime and places it in the %temp% folder. It is the temp folder copy (with a GUID appended to the original file name) that is shown in the web browser. This is by design and is a useful feature to ensure the live report is safe.
Following from this, you will have to do a proper cleanup after loading every report because they just stay there and fill up the temp folder!
Something like:
CrystalReportViewer1.Dispose(); // if using the viewer
CrystalReportViewer1 = null;
report.Close(); // I can't remember if this is part of the reportDocument class
report.Dispose();
report = null;
GC.Collect(); // crazy but true. Monitor the temp folder to see the effect
Reckface's answer was clear enough, but to add something:
I managed to get it working using this:
protected void Page_Unload(object sender, EventArgs e)
{
    if (reportDocument != null)
    {
        reportDocument.Close();
        reportDocument.Dispose();
        crystalReportViewer1.Dispose();
    }
}
Doing so may cause issues with the buttons on the toolbar: they can't find the document path anymore because the document is disposed. In that case the document needs to load the path again during postbacks.
This is a very old question, but I want to add that I got this error because the report was not embedded into the class library.
Solution: I removed the report from the project, restarted Visual Studio 2022, and then added the Crystal report again. This time it was added as an embedded resource.
Did you even bother to Google it? This is a common exception; there are hundreds of posts about it scattered around the intertubes.
The Crystal .NET runtime has famously cryptic error messages. This one just means that the .rpt file (or embedded report) could not be loaded. There are several possible root causes: wrong filename or path, security violation, you are not disposing of old reports properly and windows/temp is getting hogged up, etc.
Do some research. If you're still stuck, come back and elaborate on the problem (do any of your reports work, is this a web app?, what code are you using, etc.)

Executing and then Deleting a DLL in c#

I'm creating a self-updating app where I have the majority of the code in a separate DLL. It's command-line and will eventually run on Mono, but for now I'm just trying to get this code to work in C# on Windows at the command line.
How can I create a C# application from which I can delete a supporting DLL while it's running?
AppDomain domain = AppDomain.CreateDomain("MyDomain");
ObjectHandle instance = domain.CreateInstance("VersionUpdater.Core", "VersionUpdater.Core.VersionInfo");
object unwrap = instance.Unwrap();
Console.WriteLine(((ICommand)unwrap).Run());
AppDomain.Unload(domain);
Console.ReadLine();
At the ReadLine, VersionUpdater.Core.dll is still locked against deletion.
The ICommand interface is in VersionUpdater.Common.dll which is referenced by both the Commandline app and VersionUpdater.Core.dll
The only way I've ever managed something similar is to have the DLL in a separate AppDomain to the assembly that is trying to delete it. I unload the other AppDomain and then delete the DLL from disk.
If you're looking for a way to perform the update, off the top of my head I would go for a stub exe that spawns the real AppDomain. Then, when that stub exe detects an update is to be applied, it quits the other AppDomain and then does the update magic.
EDIT: The updater cannot share DLLs with the thing it is updating, otherwise it will lock those DLLs and therefore prevent them from being deleted. I suspect this is why you still get an exception. The updater has to be standalone and not reliant on anything that the other AppDomain uses, and vice versa.
Unwrap will load the assembly of the object's type into the AppDomain that calls it. One way around this is to create a type in your "base" assembly that calls Command.Run, then load that into your new AppDomain. This way you never have to call Unwrap on an object whose type lives in a different assembly, and you can delete the assembly on disk.
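A sketch of that idea (names are illustrative, and the string return type of Run() is assumed from the Console.WriteLine in the question): put a small MarshalByRefObject runner in the base assembly and unwrap only that type, so the deletable DLL is ever loaded only into the child AppDomain.

```csharp
// This type lives in the base/common assembly, NOT in VersionUpdater.Core.dll,
// so unwrapping it does not pull the deletable DLL into the main AppDomain.
public class Runner : MarshalByRefObject
{
    public string Execute()
    {
        // The deletable assembly is loaded here, inside the child AppDomain only.
        var asm = System.Reflection.Assembly.Load("VersionUpdater.Core");
        var cmd = (ICommand)asm.CreateInstance("VersionUpdater.Core.VersionInfo");
        return cmd.Run();
    }
}

// In the main app:
AppDomain domain = AppDomain.CreateDomain("MyDomain");
var runner = (Runner)domain.CreateInstanceAndUnwrap(
    typeof(Runner).Assembly.FullName, typeof(Runner).FullName);
Console.WriteLine(runner.Execute());   // executes inside the child domain
AppDomain.Unload(domain);              // now VersionUpdater.Core.dll can be deleted
```

The Execute() call crosses the domain boundary by remoting, so only serializable results (here, a string) come back; the type itself stays on the other side.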
When I built a self-updating app, I used the stub idea, but the stub was the app itself.
The app would start, look for updates. If it found an update, it would download a copy of the new app to temp storage, and then start it up (System.Diagnostics.Process.Start()) using a command-line option that said "you are being updated". Then the original exe exits.
The spawned exe starts up, sees that it is an update, and copies itself to the original app directory. It then starts the app from that new location. Then the spawned exe ends.
The newly started exe from the original app install location starts up - sees the temp file and deletes it. Then resumes normal execution.
You can always use MOVEFILE_DELAY_UNTIL_REBOOT to delete the file on reboot. This is probably the least hacky way to do this sort of thing; by hacky I usually mean things like loading up new DLLs, injecting into explorer.exe, or even patching a system DLL to get loaded into another process, etc.
MoveFileEx, from MSDN:

lpNewFileName [in, optional]: The new name of the file or directory on the local computer.

When moving a file, the destination can be on a different file system or volume. If the destination is on another drive, you must set the MOVEFILE_COPY_ALLOWED flag in dwFlags. When moving a directory, the destination must be on the same drive.

If dwFlags specifies MOVEFILE_DELAY_UNTIL_REBOOT and lpNewFileName is NULL, MoveFileEx registers the lpExistingFileName file to be deleted when the system restarts. If lpExistingFileName refers to a directory, the system removes the directory at restart only if the directory is empty.
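For completeness, here is a sketch of invoking that delete-on-reboot behaviour from C# via P/Invoke (Windows only; the wrapper class and method names are my own):

```csharp
using System;
using System.Runtime.InteropServices;

static class PendingDelete
{
    private const int MOVEFILE_DELAY_UNTIL_REBOOT = 0x4;

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    private static extern bool MoveFileEx(
        string lpExistingFileName, string lpNewFileName, int dwFlags);

    // Registers the file for deletion on the next reboot.
    public static void DeleteOnReboot(string path)
    {
        // Per the MSDN text above: a null new name plus
        // MOVEFILE_DELAY_UNTIL_REBOOT means "delete at restart".
        if (!MoveFileEx(path, null, MOVEFILE_DELAY_UNTIL_REBOOT))
            throw new System.ComponentModel.Win32Exception(
                Marshal.GetLastWin32Error());
    }
}
```

Note that this requires administrator rights, since the pending rename is recorded in a registry key processed at boot.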