RethinkDB: search a string in a whole table (full-text-search-like) - rethinkdb-python

I'm writing a Python Flask application in combination with RethinkDB.
Snippet of the document structure:
{'Name': ['Gandalf', 'Jackson'], 'street': 'elmStreet'...}
This kind of structure is used many times, so essentially all my keys are strings and my values are either strings or arrays of strings.
I want a full-text search over this structure, without using Elasticsearch or any additional program if possible.
Thanks for reading, have a nice day :)

I solved my issue by using Whoosh, a pure-Python search library; no additional program is needed.
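For reference, a minimal sketch of what that looks like. The field names, index directory, and joining of list values into a single string are my own choices, not anything Whoosh mandates:

import os
from whoosh.fields import Schema, TEXT, ID
from whoosh.index import create_in
from whoosh.qparser import MultifieldParser

# One TEXT field per searchable key; list values get joined into one string.
schema = Schema(doc_id=ID(stored=True, unique=True),
                name=TEXT(stored=True),
                street=TEXT(stored=True))

os.makedirs("indexdir", exist_ok=True)
ix = create_in("indexdir", schema)

writer = ix.writer()
doc = {'Name': ['Gandalf', 'Jackson'], 'street': 'elmStreet'}
writer.add_document(doc_id="1",
                    name=" ".join(doc['Name']),
                    street=doc['street'])
writer.commit()

# Search all fields at once, full-text style.
with ix.searcher() as searcher:
    query = MultifieldParser(["name", "street"], ix.schema).parse("gandalf")
    for hit in searcher.search(query):
        print(hit["doc_id"])

Documents pulled from RethinkDB can be flattened into the schema the same way before indexing.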

Related

Is there a way to properly experiment with Solr field-types?

I'm working with Solr for a basic search engine, and I've created a couple of different fieldTypes that include various filters and tokenizers in their analyzer chains.
However, I'm finding it very difficult to assess how these components of the chain interact, and when I query in the Solr Admin I consistently get different results than I expect, with no clue as to why.
Is there a way to see what a phrase like education:"x university" is being transformed into when I type it in the q section of the Admin?
Also, when the phrase goes through the chain can it be transformed into multiple things that are all searched or is it just a single modified phrase?
Thanks for any help!
Use the Analysis screen in the Solr Admin to check how each field and its type process tokens, both at query time and at index time.
Analyse Fieldname / FieldType:
From the drop-down, select the field or type you want to analyse and click on Analyse Values.
It shows, for example, which tokenizer is used, which filter classes are applied to each token, and how the token is transformed after passing through each filter class.
If Verbose Output is checked, it shows more details about each filter class used for the selected field/type.
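The same information is also available over HTTP through the FieldAnalysisRequestHandler (commonly mounted at /analysis/field), which is handy for scripted experiments. A hedged Python sketch; the host, port, core name (mycore), and field name are assumptions:

import requests

resp = requests.get(
    "http://localhost:8983/solr/mycore/analysis/field",
    params={
        "analysis.fieldname": "education",       # assumed field name
        "analysis.fieldvalue": "x university",   # run the index-time chain on this
        "analysis.query": "x university",        # run the query-time chain on this
        "wt": "json",
    },
)
# The response lists every tokenizer/filter stage and the tokens it emitted,
# mirroring what the Analysis screen renders.
print(resp.json()["analysis"])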

SQL like '%term%' except without letters

I'm searching against a table of news articles. The two relevant columns are ArticleTitle and ArticleText. When I want to search an article for a particular term, I started out with
column LIKE '%term%'.
However, that gave me a lot of articles with the term inside anchor links, for example <a href="example.com/term">, which would potentially return an irrelevant article.
So then I switched to
column LIKE '% term %'.
The problem with this query is that it didn't find articles whose title or text began/ended with the term. It also didn't match things like term- or term's, which I do want.
It seems like the query I want should be able to do something like this:
'%[^a-z]term[^a-z]%'
This should exclude the term within anchor links but match everything else. However, I think this query still excludes strings that begin/end with the term. Is there a better solution? Does SQL Server's FULL TEXT INDEXING solve this problem?
Additionally, would it be a good idea to store ArticleTitle and ArticleText as HTML-free columns? Then I could use '%term%' without matching anchor links. These would be two extra columns, though, because eventually I will need the original HTML for formatting purposes.
Thanks.
SQL Server's LIKE allows you to define regex-like patterns such as the one you described.
A better option is to use full-text search:
WHERE CONTAINS(ArticleTitle, 'term')
This exploits the index properly (the LIKE '%term%' query is slow) and provides other benefits in the search algorithm.
Additionally, you might benefit from storing a plaintext version of the article alongside the HTML version and running your search queries against it.
SQL is not designed to interpret HTML strings, so you'd only postpone the problem until a more difficult case arrives (for example, a comment node that contains your search term as part of a plain sentence).
You can still use FULL TEXT as a prefilter and then run an HTML analysis on the application layer to further filter your result set, as sketched below.
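A hedged sketch of that prefilter-then-postfilter idea. The table and column names follow the question, but ArticleID, the pyodbc driver, and the connection setup are assumptions; parameterizing CONTAINS this way should work against SQL Server but verify with your driver:

import re
import pyodbc
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect only text nodes, so terms inside tags/attributes are ignored."""
    def __init__(self):
        super().__init__()
        self.parts = []
    def handle_data(self, data):
        self.parts.append(data)

def search_articles(conn, term):
    # Step 1: cheap prefilter using the full-text index.
    rows = conn.cursor().execute(
        "SELECT ArticleID, ArticleText FROM Articles "
        "WHERE CONTAINS(ArticleText, ?)", term).fetchall()
    # Step 2: verify the term occurs in the visible text on word
    # boundaries, so "term's" and "term-" still match.
    pattern = re.compile(r"\b" + re.escape(term) + r"\b", re.IGNORECASE)
    for article_id, html in rows:
        extractor = TextExtractor()
        extractor.feed(html)
        if pattern.search(" ".join(extractor.parts)):
            yield article_id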

Guidance on creating a basic search function in Rails3

Still pretty new to Rails, and hoping to develop a function on a site enabling a search to be performed in the manner detailed below:
User inputs a search term / phrase (string of words but unlikely to be more than 5 or 6)
String is chopped into its constituent words
Entries in a single model with a description (a single field in the model) are output
Having looked at previous questions on this site, I am aware that there are a number of add-ons which are commonly used for search queries, however, are these needed in such a simple situation?
I was thinking that I could use an SQL command with a number of ANDs to perform this task.
Currently the model is stored in sqlite3, but it will probably grow to about 100,000 rows (just 10 fields, though) in the near future. Is this likely to cause problems?
Finally, is there an easy way to automatically pull out the words of a string of any length, or up to a certain limit that is unlikely to be exceeded?
Thanks in advance for your time and patience
You can easily pull the words from a string with Ruby: 'alice bob charlie'.split(/\s+/) will give you an array of the words.
Then you can string those words together into an SQL query to find the appropriate records. I don't know about the performance of this solution, though... you should definitely test it to see if there are any performance issues.
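The idea translates directly to ActiveRecord; here is a hedged Python/sqlite3 sketch of the SQL it would generate (the entries table and description column are assumed names). Note the ? placeholders, which keep user input out of the SQL string and avoid injection:

import sqlite3

def search(conn, phrase, max_words=6):
    words = phrase.split()[:max_words]   # cap at a sane limit
    if not words:
        return []
    # One LIKE clause per word, ANDed together.
    where = " AND ".join(["description LIKE ?"] * len(words))
    params = ["%" + w + "%" for w in words]
    sql = "SELECT * FROM entries WHERE " + where
    return conn.execute(sql, params).fetchall()

At around 100,000 rows this full-scan approach may get slow, which is worth measuring before reaching for a search add-on.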

Fulltext Solr statistical search

Suppose I have a number of documents indexed with Solr 4.0. Each has two fields: a unique ID and a text DATA field. The DATA field contains a few paragraphs of text. Could anyone advise what kind of analyzers/parsers I should use, and how to build a statistical query to get a sorted list of the most frequently used words across the DATA fields of all documents?
For the most frequent terms, look into the terms component and the stats component.
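For example, the terms component can be queried over HTTP once it is registered in solrconfig.xml (the default example config exposes it at /terms). A hedged Python sketch; the host, port, and core name are assumptions:

import requests

# Ask the TermsComponent for the 20 most frequent terms in DATA.
resp = requests.get(
    "http://localhost:8983/solr/mycore/terms",
    params={
        "terms.fl": "DATA",
        "terms.limit": 20,
        "terms.sort": "count",  # sort by frequency, descending
        "wt": "json",
    },
)
print(resp.json()["terms"]["DATA"])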
Besides the answers mentioned here, you can use the HighFreqTerms class; it's in the lucene-misc-4.0 jar (which is bundled with Solr).
This is a command-line application which lets you see the top terms for any field, either by document frequency or by total term frequency (the -t option).
Here is the usage:
java org.apache.lucene.misc.HighFreqTerms [-t] [number_terms] [field]
-t: include totalTermFreq
Here's the original patch, which is committed and in the 4.0 (trunk) and branch_3x codebases: https://issues.apache.org/jira/browse/LUCENE-2393
For the ID field, use an analyzer based on the keyword tokenizer: it will treat the entire content of the field as a single token.
For the DATA field, use a language-specific analyzer. Note that it is possible to auto-detect the language of the text (there is a patch for it).
I'm not sure whether it's possible to find the most frequent words with Solr alone, but if you can use Lucene itself, pay attention to this question. My own suggestion is to use the HighFreqTerms class from the Luke project.

How do I use native Lucene Query Syntax?

I read that Lucene has an internal query language where one specifies field:value pairs and combines them using boolean operators.
I read all about it on their website, and it works just fine in LUKE; I can do things like
field1:value1 AND field2:value2
and it will return seemingly correct results.
My problem is: how do I pass this whole Lucene query into the API? I've seen QueryParser, but I have to specify a field. Does this mean I still have to manually parse my input string, fields, values, parentheses, etc., or is there a way to feed the whole thing in and let Lucene do its thing?
I'm using Lucene.NET, but since it's a method-by-method port of the original Java, any advice is appreciated.
Are you asking whether you need to force your user to enter the field? If so, the query parser has a default field. Here's a little more info. As long as you have a default field that will do the job, they don't need to specify fields.
If you're asking how to get a Query object from the string, you need the parse method. It understands fields, the default field, and so on, as mentioned earlier. You just need to make sure that the query parser and the index builder are both using the same analysis.
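In code it's a single call. A hedged PyLucene sketch, since the idea is identical across ports: the package paths follow Lucene 4.x, "contents" is an assumed default field, and the Lucene.NET equivalents live under Lucene.Net.QueryParsers:

import lucene
from org.apache.lucene.analysis.standard import StandardAnalyzer
from org.apache.lucene.queryparser.classic import QueryParser
from org.apache.lucene.util import Version

lucene.initVM()
analyzer = StandardAnalyzer(Version.LUCENE_40)
# Bare terms with no field: prefix fall back to the default field ("contents").
parser = QueryParser(Version.LUCENE_40, "contents", analyzer)
query = parser.parse('field1:value1 AND field2:value2')
print(query)  # a ready-to-use Query object; no manual parsing needed

Use the same analyzer here as at index time, or the parsed terms won't line up with what was indexed.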