How to create a truly in-memory Jackrabbit repository

I've been trying to figure out whether it is possible to run a Jackrabbit repository completely in memory.
Whatever I try in order to run the repository fully in memory, I still end up with a repository directory full of files on my disk.
If anyone has figured this out, I'd much appreciate it if you could explain how to achieve it.
Thanks
Piersyp

See this 4+ year old blog post, which hopefully still applies and is helpful: http://modeshape.wordpress.com/2008/02/22/speedy-unit-testing-with-jackrabbit/
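For what it's worth, the approach I believe that post describes is to point the persistence managers in repository.xml at Jackrabbit's in-memory implementation. Below is a minimal sketch, assuming jackrabbit-core is on the classpath; the config path and home directory are hypothetical, and Jackrabbit will still write a few bookkeeping files (copied config, search index) into the home directory unless the SearchIndex is reconfigured as well:
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;
import org.apache.jackrabbit.core.TransientRepository;

public class InMemoryRepositoryTest {
    public static void main(String[] args) throws Exception {
        // repository.xml (hypothetical path) should declare
        // org.apache.jackrabbit.core.persistence.mem.InMemPersistenceManager
        // in both the workspace and the versioning <PersistenceManager> elements
        Repository repository = new TransientRepository(
                "src/test/resources/in-memory-repository.xml",
                "target/jackrabbit-test-home");
        Session session = repository.login(new SimpleCredentials("admin", "admin".toCharArray()));
        try {
            session.getRootNode().addNode("scratch"); // this content lives only in memory
            session.save();
        } finally {
            session.logout(); // last logout shuts the TransientRepository down
        }
    }
}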

Related

Can I restore removed files in cloud9-ide?

Good evening,
I discovered Cloud9 IDE a few weeks ago and I find its possibilities for working in a project team incredible.
When I mentioned it at our team meeting, a colleague asked:
"It's pretty nice that you can see what was changed and reverse it, but can it also restore deleted files?"
I stood there awkwardly and had to admit that I have no idea whether this works and, if so, how.
Can someone help me out? Is restoring deleted files possible? It would be a huge drawback if not.
To restore a deleted file, simply recreate it and then look at its revision history. All previous versions will be there.

Difference between LocalCacheFolder and TemporaryFolder

I wonder what the difference is between the LocalCacheFolder and the TemporaryFolder.
According to MSDN, the LocalCacheFolder is not backed up/restored, but the temp folder isn't either, is it? What's the difference, then? Will only the temporary folder be deleted when I clean up storage manually?
MSDN doesn't tell me a lot, so I hope someone can explain the difference. Thanks in advance :)
Both are folders that aren't backed up. The difference between them is that the OS may delete files from TemporaryFolder whenever it needs additional space. With LocalCacheFolder, you're responsible for deleting the files you don't need anymore.
You can find a more thorough explanation here: http://java.dzone.com/articles/windows-phone-81

IntelliJ IDEA: Continuously asking for memory increase

My IntelliJ IDEA is running very slowly and keeps asking for more memory, even though I have given it more than 4 GB. How do I get rid of this problem?
Clean the system cache in IntelliJ IDEA. This helps if you have been running the same instance of IDEA for lots of projects with lots of different libraries...
I removed the ~/.IntellijIdea* folder that contained all the config and other information. This worked for me. Not sure if there is a simpler solution.
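If the prompt is simply about heap size, it can also help to raise the limits IDEA starts with by editing its VM options file (idea64.vmoptions, reachable via Help > Edit Custom VM Options in recent versions). A hedged example; the values below are only illustrative, tune them to your machine:
-Xms512m
-Xmx2048m
-XX:ReservedCodeCacheSize=512m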

IntelliJ IDEA updating classes takes too much time

I really like IDEA, but when I work with a webapp running on Tomcat and modify only a single Java class, I have to do an "update classes and resources", and it takes much longer than in Eclipse. In Eclipse it's instant, or at least I don't notice anything; in IDEA it does a make, updates caches, and I don't know what else, but it's really annoying.
Why is that, and how can I solve it?
Update time would depend on your project and its configuration in IDEA. Normally it should not take too long, as only the required steps are performed. Compilation is incremental and should be near-instant. To understand why it takes so long for your project, we'd need a sample project and the exact steps to reproduce it; please file an issue in our issue tracker.
If you want really fast updates, you may consider using JRebel; it has a plug-in for IDEA.
Not so with IntelliJ 10.x. Updates don't require a complete build and redeployment. Try the new version.
I am not sure, but you can check your Project Settings. There, in the modules section, you can mark unnecessary folders as excluded.
This might speed things up, since the excluded files are no longer indexed.

File count after MassIndexer is HUGE for hibernate-search 3.4.0

I use MassIndexer to index. After migrating to Hibernate Search 3.4 from 3.2.1.Final, the number of files (with the .cfs extension) is really, really huge. Before, it was OK. At the same time I migrated to lucene-core 3.1.0.
Please, could somebody explain why this happened?
MassIndexer massIndexer = fullTextSession.createIndexer(SearchLuceneDocument.class);
massIndexer.purgeAllOnStart(true) // true by default, highly recommended
        .optimizeAfterPurge(true) // true is default, saves some disk space
        .optimizeOnFinish(true) // true by default
        .batchSizeToLoadObjects(100)
        .threadsForSubsequentFetching(15)
        .threadsToLoadObjects(10)
        .threadsForIndexWriter(4)
        .cacheMode(CacheMode.IGNORE) // defaults to CacheMode.IGNORE
        .startAndWait();
Thanks in advance!
Artem
What filesystem are you on? It's known that some NFS setups leak file descriptors; in fact, that's why we propose different alternatives for clustering, none of which involves NFS.
I'm not aware of any bug in which we fail to close files during mass indexing, but if you could contribute a test highlighting the error, I'd be glad to look into it; post it on JIRA or on the Hibernate forums, in the Search section.
Thanks
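If it's the sheer number of .cfs segment files that worries you (rather than leaked descriptors), you can also merge segments manually after indexing. A small sketch against the Hibernate Search 3.x SearchFactory; fullTextSession and SearchLuceneDocument are the names from the question's code:
// merge the segments of one entity's index into as few files as possible
fullTextSession.getSearchFactory().optimize(SearchLuceneDocument.class);
// or optimize every index managed by the factory
fullTextSession.getSearchFactory().optimize();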