How to combine two tokenizers in Lucene (JapaneseAnalyzer and StandardAnalyzer)

I am using Lucene 4.3.0 and want to tokenize documents that contain both English and Japanese characters.
An example is "LEICA S2 カタログ (新品)".
The StandardAnalyzer produces [leica] [s2] [カタログ] [新] [品].
The JapaneseAnalyzer produces [leica] [s] [2] [カタログ] [新品].
In my application, the StandardAnalyzer handles English better, e.g. [s2] is better than [s] [2], while the JapaneseAnalyzer handles Japanese better, e.g. [新品] is better than [新] [品]. In addition, the JapaneseAnalyzer has a useful feature that converts the fullwidth character "２" to "2".
If I want the tokens to be [leica] [s2] [カタログ] [新品], that means:
1) English and numbers are tokenized by the StandardAnalyzer: [leica] [s2]
2) Japanese is tokenized by the JapaneseAnalyzer: [カタログ] [新品]
3) Fullwidth characters are converted to halfwidth by a filter: [ｓ２] => [s2]
How do I implement this custom analyzer?

The first thing I would try is messing with the arguments passed to the JapaneseAnalyzer, particularly the JapaneseTokenizer.Mode (I know precisely nothing about the structure of the Japanese language, so no help from me on the intent of those options).
Barring that:
You'll need to create your own Analyzer for this. Unless you are willing to write your own Tokenizer, the end result may be a best effort. Creating an analyzer is pretty simple; creating a tokenizer means defining your own grammar, which will not be so simple.
Take a look at the code for JapaneseAnalyzer and StandardAnalyzer, particularly the call to createComponents, which is all you need to implement to create a custom analyzer.
Say you come to the conclusion that the StandardTokenizer is right for you, but that you otherwise want mostly the Japanese filter set; it might look something like:
@Override
protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
    // For your Tokenizer, you might consider StandardTokenizer, JapaneseTokenizer, or CharTokenizer
    Tokenizer tokenizer = new StandardTokenizer(matchVersion, reader);
    TokenStream stream = new StandardFilter(matchVersion, tokenizer);
    stream = new JapaneseBaseFormFilter(stream);
    stream = new LowerCaseFilter(matchVersion, stream); // In JapaneseAnalyzer, a LowerCaseFilter comes at the end, further proving I don't know Japanese.
    stream = new JapanesePartOfSpeechStopFilter(true, stream, stoptags);
    stream = new CJKWidthFilter(stream); // Note this width filter! I believe it does the character-width transform you are looking for.
    stream = new StopFilter(matchVersion, stream, stopwords);
    stream = new JapaneseKatakanaStemFilter(stream);
    stream = new PorterStemFilter(stream); // Nothing stopping you from using a second stemmer, really.
    return new TokenStreamComponents(tokenizer, stream);
}
That's a completely random implementation, from someone who doesn't understand the concerns, but hopefully it points the way toward implementing a more meaningful Analyzer. The order in which you apply the filters in that chain is important, so be careful there (i.e. in English, LowerCaseFilter is usually applied early, so that things like stemmers don't have to worry about case).
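To sanity-check the chain, it can help to run the finished analyzer over the example string and print what comes out. A minimal sketch, assuming the createComponents above lives in a hypothetical MyAnalyzer class:

Analyzer analyzer = new MyAnalyzer(); // hypothetical class hosting createComponents above
TokenStream ts = analyzer.tokenStream("field", new StringReader("LEICA S2 カタログ (新品)"));
CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
ts.reset(); // mandatory in Lucene 4.x before the first incrementToken()
while (ts.incrementToken()) {
    System.out.print("[" + term + "] ");
}
ts.end();
ts.close();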

Related

Looking for Indonesian language stemmer

I'm processing some Indonesian texts in a Java application, and I need to stem them.
Currently I am using the Lucene Indonesian stemmer:
org.apache.lucene.analysis.id.IndonesianAnalyzer;
but the results are not satisfactory.
Could anyone suggest a different stemmer?
"enang" is a stem. Stems need not be actual words. For instance, in English, "argue" "argues" and "arguing" reduce to the stem "argu". "argu" isn't an english word, but it is a meaningful stem. This is how stemmers work. As long as you apply the stemmer the same way to the indexed data and the query, it should work well.
If you don't want behavior like that, it doesn't make any sense to use a stemmer at all.
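To see that in code, here is a quick sketch of the same effect with the English Porter stemmer (Lucene 4.3 API, matching the snippet below):

Tokenizer tok = new StandardTokenizer(Version.LUCENE_43, new StringReader("argue argues arguing"));
TokenStream ts = new PorterStemFilter(new LowerCaseFilter(Version.LUCENE_43, tok));
CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
ts.reset();
while (ts.incrementToken()) {
    System.out.println(term); // prints "argu" three times
}
ts.end();
ts.close();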
Aside from the stemmer, IndonesianAnalyzer is fairly easily replicated. Its other components just involve a StandardTokenizer, StandardFilter, LowerCaseFilter, and a StopFilter. That's just a StandardAnalyzer with an Indonesian stopword set, when you get right down to it, so you can create an IndonesianAnalyzer without the stemmer as simply as:
// If you are using the default stopword location defined in IndonesianAnalyzer, you could load the stopwords like this:
CharArraySet defaultStopSet = StopwordAnalyzerBase.loadStopwordSet(false, IndonesianAnalyzer.class, IndonesianAnalyzer.DEFAULT_STOPWORD_FILE, "#");
Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_43, defaultStopSet);
I'm not sure whether you would run into problems just passing a reader on the default stop word file into the StandardAnalyzer constructor.
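If that constructor route turns out to be a problem, the same thing can be expressed as a custom Analyzer. A sketch against the Lucene 4.3 API, reusing the defaultStopSet loaded above (declare it final if you're on Java 7 or older):

Analyzer analyzer = new Analyzer() {
    @Override
    protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
        Tokenizer source = new StandardTokenizer(Version.LUCENE_43, reader);
        TokenStream stream = new StandardFilter(Version.LUCENE_43, source);
        stream = new LowerCaseFilter(Version.LUCENE_43, stream);
        // Everything IndonesianAnalyzer does, minus the IndonesianStemFilter:
        stream = new StopFilter(Version.LUCENE_43, stream, defaultStopSet);
        return new TokenStreamComponents(source, stream);
    }
};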

Compute term vector without indexing in Lucene 4

I am migrating my code from Lucene 3.5 to Lucene 4.1, but I am having some problems getting the term vector without indexing.
The problem is: given a text string and an Analyzer, I need to compute the term vector (technically, find the terms and their frequencies tf). Obviously, it can be achieved by writing an index (using IndexWriter) and then reading it back (using IndexReader), but I reckon that would be expensive. Furthermore, I don't need document frequencies (df). Thus, an indexing-free solution seems more suitable.
In Lucene 2 and 3, a simple technique for the above purpose is to use QueryTermVector which extends TermFreqVector and has a constructor taking a string and an Analyzer. Unfortunately, QueryTermVector (along with TermFreqVector) has been removed in Lucene 4 and it seems the migration documentation did not mention anything about QueryTermVector.
Do you have a solution for this problem in Lucene 4? Thank you very much.
If you just need to know the terms and their frequencies, you can obtain the individual tokens directly from the analyzer (you can get the TF by counting them, e.g. with a Map or a Multiset).
This is how you do it in Lucene 4.0:
TokenStream ts = analyzer.tokenStream(field, new StringReader(text));
CharTermAttribute charTermAttribute = ts.addAttribute(CharTermAttribute.class);
ts.reset(); // required before the first incrementToken() call
while (ts.incrementToken()) {
    String term = charTermAttribute.toString();
    // term contains your token
}
ts.end();
ts.close();
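To turn that loop into term frequencies, counting with a plain HashMap is enough (a sketch; a Guava Multiset would work the same way):

Map<String, Integer> tf = new HashMap<String, Integer>();
TokenStream ts = analyzer.tokenStream(field, new StringReader(text));
CharTermAttribute termAtt = ts.addAttribute(CharTermAttribute.class);
ts.reset();
while (ts.incrementToken()) {
    String term = termAtt.toString();
    Integer n = tf.get(term);
    tf.put(term, n == null ? 1 : n + 1);
}
ts.end();
ts.close();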

Lucene synonym expansion, stemming, spell check and more

I am using Lucene to index my database and then perform a phrase search on a specific field (field name: keyword).
I am currently using the following code:
String userQuery = request.getParameter("query");
//create standard analyzer object
analyzer = new StandardAnalyzer(Version.LUCENE_30);
Analyzer analyze=AnalyzerUtil.getPorterStemmerAnalyzer(analyzer);
//create File object of our index directory
File file = new File(LUCENE_INDEX_DIRECTORY);
//create index reader object
reader = IndexReader.open(FSDirectory.open(file),true);
//create index searcher object
searcher = new IndexSearcher(reader);
//create topscore document collector
collector = TopScoreDocCollector.create(1000, false);
//create query parser object
parser = new QueryParser(Version.LUCENE_30,"keyword", analyze);
parser.setAllowLeadingWildcard(true);
//parse the query and get reference to Query object
query = parser.parse(userQuery);
//********Line 1***********************
//search the query
searcher.search(query, collector);
hits = collector.topDocs().scoreDocs;
//check whether the search returns any result
if(hits.length>0){//Code to retrieve hits}
This code works fine for stemming, but now I also want to expand my query to do a synonym search: if I enter "Man" and my Lucene index has an entry "male", it should still give me that as a hit.
I tried to add this at Line 1 in the above code:
query = SynExpand.expand(userQuery, searcher, analyze, "keyword", serialVersionUID);
But it doesn't give me any results.
I also want to introduce spell check, where if I enter "ubelievable" instead of "unbelievable" it would still give me a result.
I have no idea why synonym expansion isn't working for me, or how to do spell checking. If someone could guide me I would be really grateful.
Thanks!
Fuzzy search may be done with a query keyword modifier, namely by adding a tilde:
keyword:ubelievable~
See Lucene Parser Syntax for more details and other types of queries that may be interesting to you.
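The same fuzzy match can also be built programmatically, bypassing the parser. A sketch against the question's code ("keyword" is the field, searcher and collector as above):

Query fuzzyQuery = new FuzzyQuery(new Term("keyword", "ubelievable"));
searcher.search(fuzzyQuery, collector);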
There are two ways of dealing with synonyms. The query expansion you are trying to use relies on WordNet. As SynExpand's documentation says, you should first invoke Syns2Index to build the synonym index before using expansion. This is the easy way, but it works only with English words.
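A likely reason your expansion returns nothing: the Searcher passed to SynExpand.expand should search the WordNet index built by Syns2Index, not your main index. A sketch reusing the question's variables (the paths and the 0.9f synonym boost are made up):

// One-time build of the WordNet synonym index from the prolog file (contrib wordnet):
//   java org.apache.lucene.wordnet.Syns2Index prolog/wn_s.pl ./synIndex
IndexSearcher synSearcher = new IndexSearcher(FSDirectory.open(new File("./synIndex")), true);
query = SynExpand.expand(userQuery, synSearcher, analyze, "keyword", 0.9f);
searcher.search(query, collector);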
If you need to support multiple languages or add your own synonyms, you can use synonym injection during indexing. The idea is to write your own analyzer that injects synonyms from your own dictionary into indexed documents. This may sound hard to implement, but fortunately there's an excellent example in the Lucene in Action book (the source code is available for free; see the lia.analysis.synonym package, though I highly recommend getting a copy of this nice book).
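For a flavor of what that example does, here is a minimal sketch of a synonym-injecting TokenFilter, modeled on lia.analysis.synonym (Lucene 3.x attribute API; the Map stands in for whatever dictionary you use). Each synonym is emitted at the same position as the original token, so phrase queries keep working:

import java.io.IOException;
import java.util.Map;
import java.util.Stack;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;
import org.apache.lucene.util.AttributeSource;

public class SynonymInjectionFilter extends TokenFilter {
    private final Map<String, String[]> synonyms;
    private final Stack<String> pending = new Stack<String>();
    private final TermAttribute termAtt = addAttribute(TermAttribute.class);
    private final PositionIncrementAttribute posIncAtt = addAttribute(PositionIncrementAttribute.class);
    private AttributeSource.State current;

    public SynonymInjectionFilter(TokenStream in, Map<String, String[]> synonyms) {
        super(in);
        this.synonyms = synonyms;
    }

    @Override
    public boolean incrementToken() throws IOException {
        if (!pending.isEmpty()) {
            // Emit a buffered synonym at the same position as the original token.
            restoreState(current);
            termAtt.setTermBuffer(pending.pop());
            posIncAtt.setPositionIncrement(0);
            return true;
        }
        if (!input.incrementToken()) {
            return false;
        }
        String[] syns = synonyms.get(termAtt.term());
        if (syns != null) {
            for (String syn : syns) {
                pending.push(syn);
            }
            current = captureState();
        }
        return true;
    }
}

Wrap this filter into the Analyzer you index with (e.g. after a LowerCaseFilter), so that a document containing "male" is also indexed under "man".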

Parse a search string (into NHibernate Criteria)

I would like to implement an advanced search for my project.
The search right now uses all the strings the user enters and makes one big disjunction with the Criteria API.
This works fine, but now I would like to implement more features: AND, OR and brackets ().
I'm having a hard time parsing the string and building criteria from it. I found this Stack Overflow question, but it didn't really help (the asker didn't make it clear what he wanted).
I found another article, but it supports much more and spits out SQL statements.
Another thing I've heard mentioned a lot is Lucene - but I'm not sure if it would really help me.
I've been searching around a little bit and I've found the Lucene.Net WhitespaceAnalyzer and the QueryParser.
It changes the search A AND B OR C into something like +A +B C, which is a good step in the correct direction (plus it handles brackets).
The next step would be to get the converted string into a set of conjunctions and disjunctions.
The Java example I found used a query builder, which I couldn't find in NHibernate.
Any more ideas ?
I guess you haven't heard about NHibernate Search until now.
NHibernate Search uses Lucene underneath and gives you the full AND/OR query grammar.
All you have to do is attribute your entities for indexing, and NHibernate will index them at a predefined location.
You can then search this index with the power that Lucene exposes and get your domain-level entity objects in return.
using (IFullTextSession s = Search.CreateFullTextSession(sf.OpenSession(new SearchInterceptor()))) {
    QueryParser qp = new QueryParser("id", new StopAnalyzer());
    IQuery NHQuery = s.CreateFullTextQuery(qp.Parse("Summary:series"), typeof(Book));
    IList result = NHQuery.List();
}
Powerful, isn’t it?
What I am basically doing right now is parsing the input string with the Lucene.Net parser API.
This gives me a uniform and simplified syntax. (Pseudocode)
using Lucene.Net.Analysis;
using Lucene.Net.Analysis.Standard;
using Lucene.Net.QueryParsers;
using Lucene.Net.Search;
void Search(string queryString)
{
    Analyzer analyzer = new WhitespaceAnalyzer();
    QueryParser luceneParser = new QueryParser("name", analyzer);
    Query luceneQuery = luceneParser.Parse(queryString);
    string[] words = luceneQuery.ToString().Split(' ');
    foreach (string word in words)
    {
        // Parsing the lucene.net string
    }
}
After that I am parsing this string manually, creating the disjunctions and conjunctions.

Using MultiFieldQueryParser

I am using MultiFieldQueryParser for parsing strings like "a.a.", "b.b.", etc.
But after parsing, it removes the dots from the strings.
What am I missing here?
Thanks.
I'm not sure the MultiFieldQueryParser does what you think it does. Also...I'm not sure I know what you're trying to do.
I do know that with any query parser, strings like 'a.a.' and 'b.b.' will have the periods stripped out because, at least with the default Analyzer, all punctuation is treated as white space.
As far as the MultiFieldQueryParser goes, that's just a QueryParser that allows you to specify multiple default fields to search. So with the query
title:"Of Mice and Men" "John Steinbeck"
The string "John Steinbeck" will be looked for in all of your default fields whereas "Of Mice and Men" will only be searched for in the title field.
What analyzer is your parser using? If it's StopAnalyzer, then the dot could be a stop word and is thus ignored. Same thing if it's StandardAnalyzer, which cleans up input (including removing dots).
The StandardAnalyzer specifically handles acronyms, and converts C.F.A. (for example) to cfa. This means you should be able to do the search, as long as you make sure you use the same analyzer for the indexing and for the query parsing.
I would suggest you run some more basic test cases to eliminate other factors. Try to use an ordinary QueryParser instead of a multi-field one.
Here's some code I wrote to play with the StandardAnalyzer:
StringReader testReader = new StringReader("C.F.A. C.F.A word");
StandardAnalyzer analyzer = new StandardAnalyzer();
TokenStream tokenStream = analyzer.tokenStream("title", testReader);
System.out.println(tokenStream.next());
System.out.println(tokenStream.next());
System.out.println(tokenStream.next());
The output for this, by the way was:
(cfa,0,6,type=<ACRONYM>)
(c.f.a,7,12,type=<HOST>)
(word,13,17,type=<ALPHANUM>)
Note, for example, that if the acronym doesn't end with a dot then the analyzer assumes it's an internet host name, so searching for "C.F.A" will not match "C.F.A." in the text.