I have implemented my own Analyzer, QueryParser and PerFieldAnalyzerWrapper to replicate the Elasticsearch ${field}.raw feature. Everything seems to be working fine, except when I test using wildcards and similar queries on StringField types.
I understand this is because these queries don't go through the analyzer at all.
In previous versions of Lucene, there was a config option to enable lowercasing of these queries.
I can't find how to do this in the latest version, 7.5.0. Can anyone shed some light on this?
Expanded terms are processed by Analyzer.normalize. Since you have implemented your own Analyzer, add an implementation of the normalize method which runs the tokenStream through a LowerCaseFilter.
It can be as simple as:
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.LowerCaseFilter;
import org.apache.lucene.analysis.TokenStream;

public class MyAnalyzer extends Analyzer {

    @Override
    protected TokenStreamComponents createComponents(String fieldName) {
        // Your createComponents implementation
    }

    @Override
    protected TokenStream normalize(String fieldName, TokenStream in) {
        // Applied to expanded terms (wildcard, prefix, fuzzy, range) at query time
        return new LowerCaseFilter(in);
    }
}
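As a quick check (a sketch, assuming a field named myField and the classic QueryParser), a wildcard term should now come out lowercased:

QueryParser parser = new QueryParser("myField", new MyAnalyzer());
// Without normalize() this would stay as FOO*; with it, the expanded term is lowercased
Query query = parser.parse("FOO*");
System.out.println(query); // myField:foo*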
You can set up an analyzer like this. For more details, check out this link:
Git link for CJK Bigram Plugin
private static Analyzer analyzer;
private static Analyzer analyzer2;

@BeforeClass
public static void setUp() throws Exception {
    analyzer = new Analyzer() {
        @Override
        protected TokenStreamComponents createComponents(String fieldName) {
            Tokenizer source = new IcuTokenizer(AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY,
                    new DefaultIcuTokenizerConfig(false, true));
            TokenStream result = new CJKBigramFilter(source);
            return new TokenStreamComponents(source, new StopFilter(result, CharArraySet.EMPTY_SET));
        }
    };
    analyzer2 = new Analyzer() {
        @Override
        protected TokenStreamComponents createComponents(String fieldName) {
            Tokenizer source = new IcuTokenizer(AttributeFactory.DEFAULT_ATTRIBUTE_FACTORY,
                    new DefaultIcuTokenizerConfig(false, true));
            // Normalize with NFKC case folding before bigramming
            TokenStream result = new IcuNormalizerFilter(source,
                    Normalizer2.getInstance(null, "nfkc_cf", Normalizer2.Mode.COMPOSE));
            result = new CJKBigramFilter(result);
            return new TokenStreamComponents(source, new StopFilter(result, CharArraySet.EMPTY_SET));
        }
    };
}
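To inspect what either analyzer emits (a minimal sketch using Lucene's standard token-inspection pattern; the input string is arbitrary):

try (TokenStream ts = analyzer2.tokenStream("field", "test input")) {
    CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
    ts.reset();
    while (ts.incrementToken()) {
        System.out.println(term.toString()); // one normalized/bigrammed token per line
    }
    ts.end();
}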
Related
https://facebook.github.io/react-native/releases/next/docs/headless-js-android.html#headless-js
I can't seem to find the right way to do it.
I am running React Native 0.40.
I implemented the HeadlessJsTaskService to execute some React Native code.
For me, the ReactExecutorService just silently failed because I didn't provide any extras when calling startService(new Intent(this, MyTaskService.class));. The following code fixes that:
@Override
@Nullable
protected HeadlessJsTaskConfig getTaskConfig(Intent intent) {
    Bundle extras = intent.getExtras();
    // Guard against a null Bundle so the task still starts without extras
    WritableMap data = extras != null ? Arguments.fromBundle(extras) : null;
    return new HeadlessJsTaskConfig(
            "FeatureExecutor", // name the JS task was registered under
            data,
            5000); // timeout in milliseconds
}
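For completeness, here is a sketch of how the service can be started with extras, so that getExtras() is non-null (the key and value are placeholders):

Intent service = new Intent(getApplicationContext(), MyTaskService.class);
Bundle bundle = new Bundle();
bundle.putString("payload", "hello"); // hypothetical extra consumed by the JS task
service.putExtras(bundle);
getApplicationContext().startService(service);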
I also had an existing Application class that didn't implement ReactApplication. I had to add some code to make the HeadlessJsTaskService work:
// The Application class must implement ReactApplication for this to be picked up
private final ReactNativeHost reactNativeHost = new ReactNativeHost(this) {
    @Override
    public boolean getUseDeveloperSupport() {
        return BuildConfig.DEBUG;
    }

    @Override
    protected List<ReactPackage> getPackages() {
        return Arrays.<ReactPackage>asList(
                // Insert your packages, e.g.
                new MainReactPackage()
        );
    }
};

@Override
public ReactNativeHost getReactNativeHost() {
    return reactNativeHost;
}
You also have to add your custom service to the manifest. For example:
<service android:name=".MyTaskService"/>
Please provide some additional information about your issues in case the solutions above do not help.
With reference to the solution provided in How to use TLS 1.2 in Java 6, is it possible to use the TLSSocketConnectionFactory with Apache HttpClient 4.4?
You should be able to use the TLSSocketConnectionFactory with HttpClient as follows:
SSLConnectionSocketFactory sf = new SSLConnectionSocketFactory(
        new TLSSocketConnectionFactory(),
        new String[]{"TLSv1.2"},
        null,
        new DefaultHostnameVerifier());
HttpClient client = HttpClientBuilder.create()
        .setSSLSocketFactory(sf)
        .build();
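A quick usage sketch (the URL is a placeholder):

HttpResponse response = client.execute(new HttpGet("https://example.com"));
System.out.println(response.getStatusLine()); // the connection should negotiate TLSv1.2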
You may need to change some of the SSLSession method implementations in the TLSSocketConnectionFactory.
In my case, when I tried to use it with HttpClient, I had to change the following:
In the SSLSocket implementation:
@Override
public String[] getEnabledCipherSuites() {
    // was: return null;
    return new String[]{""};
}

@Override
public String[] getEnabledProtocols() {
    // was: return null;
    return new String[]{""};
}
In the SSLSession implementation:
@Override
public String getProtocol() {
    // was: throw new UnsupportedOperationException();
    return "";
}

@Override
public String getCipherSuite() {
    // was: throw new UnsupportedOperationException();
    return "";
}
I have written the following class to populate a Lucene index so that I can query for specific documents. Unfortunately, my documents are not added to the index.
Here is my code:
import java.io.IOException;
import java.nio.file.Paths;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.index.IndexWriterConfig.OpenMode;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;

public class LuceneIndexer {

    private IndexWriter indexWriter;
    private IndexReader indexReader;

    public LuceneIndexer() throws Exception {
        Directory indexDir = FSDirectory.open(Paths.get("./index-directory"));
        IndexWriterConfig config = new IndexWriterConfig(new StandardAnalyzer());
        config.setCommitOnClose(true);
        config.setOpenMode(OpenMode.CREATE);
        this.indexWriter = new IndexWriter(indexDir, config);
        indexReader = DirectoryReader.open(this.indexWriter, true);
    }

    public void indexRelation(String subject, String description, String object) throws IOException {
        System.out.println("Indexing relation between: " + subject + " and " + object);
        Document doc = new Document();
        doc.add(new TextField("subject", subject, Field.Store.YES));
        doc.add(new TextField("description", description, Field.Store.YES));
        doc.add(new TextField("object", object, Field.Store.YES));
        indexWriter.addDocument(doc);
    }

    public void commit() throws Exception {
        indexWriter.commit();
    }

    public void close() throws IOException {
        // Called by the test's tearDown
        indexReader.close();
        indexWriter.close();
    }

    public int getNumberOfRelations() {
        return indexReader.numDocs();
    }
}
I am trying to get the following test case to pass:
import java.io.IOException;
import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;

public class LuceneIndexerTest {

    private LuceneIndexer instance;

    @Before
    public void setUp() throws Exception {
        instance = new LuceneIndexer();
        instance.indexRelation("subject1", "descr1", "object1");
        instance.indexRelation("subject2", "descr2", "object2");
        instance.indexRelation("subject3", "descr3", "object3");
        instance.commit();
    }

    @After
    public void tearDown() throws IOException {
        instance.close();
    }

    @Test
    public void testIndexing() {
        Assert.assertEquals(3, instance.getNumberOfRelations());
    }
}
Unfortunately, the test case reports 0 documents in the index.
From Lucene's javadoc: "Any changes made to the index via IndexWriter will not be visible until a new IndexReader is opened".
The indexReader keeps a view of your index as of the moment the IndexReader object was created. Just create a new one after each commit, and your indexReader will work as expected.
Here is the fix for your LuceneIndexer class:
public void commit() throws Exception {
    indexWriter.commit();
    // Re-open the reader so it sees the newly committed documents
    if (indexReader != null) {
        indexReader.close();
    }
    indexReader = DirectoryReader.open(this.indexWriter, true);
}
In my Neo4j application I have a Product entity with name and description fields. Both of these fields are used in legacy indexing over Lucene.
Product.name is simple text and there are no issues here, but Product.description can contain HTML markup and elements.
Right now my index uses StandardAnalyzer(Version.LUCENE_36). What analyzer should I use in order to skip all HTML elements?
How do I tell the Neo4j Lucene index not to index any HTML elements in Product.description? I'd like to index only words.
UPDATED:
I have found the following class, HTMLStripCharFilter, and reimplemented my Analyzer as follows:
public final class StandardAnalyzerV36 extends Analyzer {

    private final Analyzer analyzer;

    public StandardAnalyzerV36() {
        analyzer = new StandardAnalyzer(Version.LUCENE_36);
    }

    public StandardAnalyzerV36(Set<?> stopWords) {
        analyzer = new StandardAnalyzer(Version.LUCENE_36, stopWords);
    }

    @Override
    public final TokenStream tokenStream(String fieldName, Reader reader) {
        // Strip HTML markup before the wrapped analyzer tokenizes the text
        return analyzer.tokenStream(fieldName, new HTMLStripCharFilter(CharReader.get(reader)));
    }

    @Override
    public final TokenStream reusableTokenStream(String fieldName, Reader reader) throws IOException {
        // Wrap the reader here too, otherwise reused streams skip the HTML stripping
        return analyzer.reusableTokenStream(fieldName, new HTMLStripCharFilter(CharReader.get(reader)));
    }
}
I have also added a new Maven dependency to my Neo4j project:
<dependency>
    <groupId>org.apache.lucene</groupId>
    <artifactId>lucene-analyzers</artifactId>
    <version>3.6.2</version>
</dependency>
Everything works fine right now, but I'm not sure that the method
@Override
public final TokenStream tokenStream(String fieldName, Reader reader) {
    return analyzer.tokenStream(fieldName, new HTMLStripCharFilter(CharReader.get(reader)));
}
is the proper place for the HTMLStripCharFilter initialization.
Please correct me if I'm wrong.
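One quick way to verify the stripping (a sketch against the Lucene 3.6 API, with a made-up input string):

Analyzer a = new StandardAnalyzerV36();
TokenStream ts = a.tokenStream("description", new StringReader("<p>Hello <b>world</b></p>"));
CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
ts.reset();
while (ts.incrementToken()) {
    System.out.println(term.toString()); // expect: hello, world
}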
I have added the following init method:
@PostConstruct
public void init() {
    GraphDatabaseService graphDb = template.getGraphDatabaseService();
    try (Transaction t = graphDb.beginTx()) {
        Index<Node> autoIndex = graphDb.index().forNodes("node_auto_index");
        graphDb.index().setConfiguration(autoIndex, "type", "fulltext");
        graphDb.index().setConfiguration(autoIndex, "to_lower_case", "true");
        graphDb.index().setConfiguration(autoIndex, "analyzer", StandardAnalyzerV36.class.getName());
        t.success();
    }
}
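Once configured, querying the auto index looks roughly like this (a sketch, assuming a GraphDatabaseService reference; "words" is a placeholder search term):

try (Transaction tx = graphDb.beginTx()) {
    Index<Node> autoIndex = graphDb.index().forNodes("node_auto_index");
    for (Node n : autoIndex.query("description", "words")) {
        System.out.println(n.getProperty("description"));
    }
    tx.success();
}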
and reused the StandardAnalyzerV36 class shown above.
Now everything works as expected. Hope it will help someone else. Good luck.
OK, I'm having trouble making a Menu with MenuItems.
I was following this tutorial (http://docs.oracle.com/javafx/2/ui_controls/menu_controls.htm), but when I run it I get a NullPointerException. My code looks like this:
@Override
public void initialize(URL fxmlFileLocation, ResourceBundle resources) {
    ventas.setOnAction(new EventHandler<ActionEvent>() {
        @Override
        public void handle(ActionEvent t) {
            FXMLLoader ventasloader = new FXMLLoader(getClass().getResource("VentasGUI.fxml"));
            Stage ventasstage = new Stage();
            AnchorPane ventas = null;
            try {
                ventas = (AnchorPane) ventasloader.load();
            } catch (IOException ex) {
                Logger.getLogger(PuntoDeVentaController.class.getName()).log(Level.SEVERE, null, ex);
            }
            Scene ventasscene = new Scene(ventas);
            ventasstage.setScene(ventasscene);
            ventasstage.setTitle("Venta");
            VentasGUIController controller = ventasloader.<VentasGUIController>getController();
            controller.setUser(userID);
            ventasstage.show();
        }
...but even when I leave just the skeleton code that NetBeans automatically adds:
@Override
public void initialize(URL fxmlFileLocation, ResourceBundle resources) {
    ventas.setOnAction(new EventHandler<ActionEvent>() {
        @Override
        public void handle(ActionEvent t) {
            throw new UnsupportedOperationException("Not supported yet.");
        }
...rather than getting the "Not supported yet" exception, I get the NullPointerException. I looked at the docs on http://docs.oracle.com/javafx/2/api/javafx/scene/control/MenuItem.html; my event handler isn't empty, and it seems to be exactly the same as in the tutorial.
Does anyone know what I'm doing wrong?
Thanks!
You haven't said where the NPE occurs, so I'll guess it's here:
ventas.setOnAction(new EventHandler<ActionEvent>() {
Furthermore, I guess that ventas is a JavaFX control which you have defined in your .fxml file.
Two things have to be done so that the connection between the .fxml file and the Java code works:
1. Annotate ventas with @FXML in your Java file.
2. Define the fx:id of the ventas control in SceneBuilder (set it to ventas).
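Put together, the wiring looks roughly like this (a sketch, assuming ventas is a MenuItem and reusing the controller class name from your snippet):

public class PuntoDeVentaController implements Initializable {

    @FXML
    private MenuItem ventas; // injected by the FXMLLoader because the FXML declares fx:id="ventas"

    @Override
    public void initialize(URL fxmlFileLocation, ResourceBundle resources) {
        // ventas is only non-null here if the fx:id matches the annotated field name
        ventas.setOnAction(new EventHandler<ActionEvent>() {
            @Override
            public void handle(ActionEvent t) {
                // your handler code
            }
        });
    }
}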