How to reload the Solr core using SolrJ?

I am using SolrJ to index my data. I am updating the Synonym.txt file dynamically, but the Solr server is not picking up the latest changes from Synonym.txt (my previous question was about how to update the Synonym.txt file dynamically).
So I have to reload/restart the Solr core programmatically.
How can I do that?
Thanks in advance.

The following code should be what you're looking for:
CoreAdminRequest adminRequest = new CoreAdminRequest();
adminRequest.setAction(CoreAdminAction.RELOAD);
adminRequest.setCoreName("<YOUR_CORE_NAME>"); // RELOAD needs the name of the core to reload
CoreAdminResponse adminResponse = adminRequest.process(new HttpSolrServer(solrUrl));
NamedList<NamedList<Object>> coreStatus = adminResponse.getCoreStatus();

SolrJ also contains a static convenience method for this in the CoreAdminRequest class:
CoreAdminRequest.reloadCore("<YOUR_CORE_NAME>", solrClient);
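For example, with newer SolrJ versions (a minimal sketch; the URL and core name are placeholders):

// Build a client pointing at the Solr root URL (not a specific core).
SolrClient solrClient = new HttpSolrClient.Builder("http://localhost:8983/solr").build();
// Issues the core-admin RELOAD command for the named core.
CoreAdminRequest.reloadCore("my_core", solrClient);
solrClient.close();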

Related

ClientCacheConfiguration is not saved to table

I was using CacheConfiguration in Ignite until I got stuck on an issue with how to authenticate. Because of that, I started changing the CacheConfiguration to ClientCacheConfiguration. However, after converting it to ClientCacheConfiguration, I noticed that it is not able to save into a table because it lacks the setIndexedTypes method, e.g.
Before
CacheConfiguration<String, IgniteParRate> cacheCfg = new CacheConfiguration<>();
cacheCfg.setName(APIConstants.CACHE_PARRATES);
cacheCfg.setIndexedTypes(String.class, IgniteParRate.class);
New
ClientCacheConfiguration cacheCfg = new ClientCacheConfiguration();
cacheCfg.setName(APIConstants.CACHE_PARRATES);
//cacheCfg.setIndexedTypes(String.class, IgniteParRate.class); --> this is not provided
I still need the table to be populated so it is easier for us to verify (using a client IDE like DBeaver).
Any way to solve this issue?
If you need to create tables/cache dynamically using the thin-client, you'll need to use the setQueryEntities() method to define the columns available to SQL "manually". (Passing in the classes with annotations is basically a shortcut for defining the query entities.) I'm not sure why setIndexedTypes() isn't available in the thin-client; maybe a question for the developer mailing list.
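A minimal sketch of that approach (IgniteParRate and APIConstants.CACHE_PARRATES come from the question; the package, table name, column list, and connection address are assumptions):

import org.apache.ignite.Ignition;
import org.apache.ignite.cache.QueryEntity;
import org.apache.ignite.client.ClientCacheConfiguration;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

ClientConfiguration cfg = new ClientConfiguration().setAddresses("127.0.0.1:10800");
try (IgniteClient client = Ignition.startClient(cfg)) {
    // Describe the SQL view of the cache "manually" instead of via setIndexedTypes().
    QueryEntity entity = new QueryEntity(String.class.getName(), "com.example.IgniteParRate") // assumed package
            .setTableName("PARRATES")                             // assumed table name
            .addQueryField("rate", Double.class.getName(), null); // assumed column

    ClientCacheConfiguration cacheCfg = new ClientCacheConfiguration()
            .setName(APIConstants.CACHE_PARRATES)
            .setQueryEntities(entity);

    client.getOrCreateCache(cacheCfg);
}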
Alternatively, you can define your caches/tables in advance using a thick client. They'll still be available when using the thin-client.
To add to the existing answer, you can also try using cache templates for that:
https://apacheignite.readme.io/docs/cache-template
Pre-configure the templates, then use them when creating caches from the thin client.

Solr: import Lucene index while server is up and running

As described in Can a raw Lucene index be loaded by Solr?, Lucene indexes can be imported into Solr. This works well when the Solr server is not running (by creating a Solr core folder structure in the data folder with all the needed configuration files), but it does not work when the Solr server is up and running.
Is there any call (via REST endpoint or Java API) to tell Solr to re-scan the data folder?
You want to generate an index with Lucene (outside Solr) and insert it into Solr without a restart.
You must not change the index folder directly. But you can create a new core which points to the already-built index folder and switch/swap that core with the (outdated) old one. Or you can merge the new index folder into the old core.
All of this can be done with the SolrJ admin API.
e.g. create:
CoreAdminRequest.Create req = new CoreAdminRequest.Create();
req.setConfigName(configName);
req.setSchemaName(schemaName);
req.setDataDir(dataDir);
req.setCoreName(coreName);
req.setInstanceDir(instanceDir);
req.setIsTransient(true);
req.setIsLoadOnStartup(false); // <= unless it's a production core.
return req.process(adminServer);
e.g. the swap:
CoreAdminRequest request = new CoreAdminRequest();
request.setAction(CoreAdminAction.SWAP);
request.setCoreName(coreName1);
request.setOtherCoreName(coreName2);
request.process(solrClient);
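e.g. the merge mentioned above, via the static helper in CoreAdminRequest (a sketch; the index path is a placeholder):

// Merge an already-built Lucene index directory into an existing core.
CoreAdminRequest.mergeIndexes(
        "existingCore",                        // target core
        new String[] { "/path/to/new/index" }, // Lucene index dirs to merge (placeholder)
        new String[0],                         // no source cores in this case
        solrClient);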
For SolrCloud, use the first "create" approach with the Collections API and use an alias instead of a swap.
e.g. the alias:
CollectionAdminRequest.CreateAlias req = new CollectionAdminRequest.CreateAlias();
req.setAliasedCollections(coreName);
req.setAliasName(aliasName);
return req.process(solrClient);
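The corresponding collection create could look like this (a sketch for the same SolrJ era as the snippets above; collection name, config name, and shard count are placeholders):

CollectionAdminRequest.Create createReq = new CollectionAdminRequest.Create();
createReq.setCollectionName(collectionName);
createReq.setConfigName(configName); // config set uploaded to ZooKeeper
createReq.setNumShards(1);           // assumed shard count
createReq.process(solrClient);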

Pear Quickform2 File Upload handling

I'm trying to build proper file-upload handling with PEAR QuickForm2.
My server-side approach would be:
$submitValues = $editForm->getValue();
$filename = $submitValues['uploaded_image']['name'];
$move_file = move_uploaded_file(.....);
Is there still a function like QuickForm1's isUploadedFile() to make sure it's an uploaded file?
Unfortunately, searching the documentation didn't give me the hints I needed.
Any advice regarding this issue is very much appreciated.
You can use the PHP function directly:
is_uploaded_file($submitValues['uploaded_image']['tmp_name']);
See http://php.net/is_uploaded_file

Adding rules dynamically into drools engine

I have a standalone Java application which will interact with my web application running on Node. I am trying to add new rules dynamically through the web UI. So far I have been unable to figure out how to create and add rules. Any suggestions pointing in the right direction would be helpful.
This is basically a duplicate of https://stackoverflow.com/questions/25036973 so the following is basically a duplicate of my answer to that question...
It's probably best to just look at the Drools examples source code. For instance the KieFileSystem example shows how to create a rule from a String and launch a session which includes it.
The essentials are that you create a KieServices, which contains a virtual file system. You then add rules to that file system. A little bit like the following:
KieServices ks = KieServices.Factory.get();
KieRepository kr = ks.getRepository(); // built modules are deployed here
KieFileSystem kfs = ks.newKieFileSystem();
kfs.write("src/main/resources/my/rules/therule.drl", "The source code of the rule");
KieBuilder kb = ks.newKieBuilder(kfs);
kb.buildAll(); // the resulting KieModule lands in the KieRepository
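From there you can launch a session that includes the new rule (a minimal sketch; someFact is a placeholder for your domain object):

// Create a container for the module just built into the KieRepository.
KieContainer kContainer = ks.newKieContainer(ks.getRepository().getDefaultReleaseId());
KieSession kSession = kContainer.newKieSession();
kSession.insert(someFact); // placeholder fact
kSession.fireAllRules();
kSession.dispose();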
You can add multiple compiled DRL rule files like this:
KnowledgeBuilder knowledgeBuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
knowledgeBuilder.add(new ByteArrayResource(compiledDRL.getBytes()), ResourceType.DRL);
Then get all the knowledge packages and fire all the rules:
KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
kbase.addKnowledgePackages(knowledgeBuilder.getKnowledgePackages());
StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
ksession.insert(inputObject);
ksession.fireAllRules();
ksession.dispose();

How to get Mongoid working with existing MongoDB collections?

I'm working on a quite simple Ruby on Rails website using MongoDB as the database. My idea was to get this website working with Mongoid so that I can display certain contents from the already existing MongoDB collection. I've checked the Internet for tutorials on how to use Mongoid; the problem is that all of them are about how to create your MongoDB with Rails rather than how to use an existing one. Could anyone tell me how to do what I want? Thanks a lot.
Until you define models for your collections, you will have to drop down to the Moped driver level to examine existing collections.
Here's a link to the Moped documentation: http://mongoid.org/en/moped/docs/driver.html
And here are some hints to get you started.
console-input.rb
session = Mongoid.default_session
collection = session['test_collection']
collection.insert({name: 'George Washington'})
session.collection_names
collection.find.first
$ rails c < console-input.rb
Loading development environment (Rails 3.2.14)
Switch to inspect mode.
session = Mongoid.default_session
<Moped::Session seeds=["localhost:27017"] database=sandbox_mongoid3_development>
collection = session['test_collection']
#<Moped::Collection:0x007ff2ceb73c20 #database=#<Moped::Database:0x007ff2ceb68b40 #session=<Moped::Session seeds=["localhost:27017"] database=sandbox_mongoid3_development>, #name="sandbox_mongoid3_development">, #name="test_collection">
collection.insert({name: 'George Washington'})
nil
session.collection_names
["test_collection"]
collection.find.first
{"_id"=>"528bdc60e277b0dd1681771a", "name"=>"George Washington"}