I've never dug very deep into cleaning/reformatting search queries in the past, at least not beyond general security measures like preventing SQL injection.
I'm realizing that I should be supporting keywords like AND, OR, and NOT, and doing things like clearing punctuation such as apostrophes and hyphens. As it stands, when a user types "Smiths" in the search box, the query will not return "Smith's" (with an apostrophe).
What other things can I do to improve my users' search queries (without being damaging to them)?
I am coming from a PHP + MySQL-FTS setup; however, I'm sure this could be extended to multiple platforms.
EDIT
Let me clarify that I'm not so interested in the SQL query sent to the database; what I'm interested in optimizing is the query that the user provides in the search box.
NEAR keyword
double quotes for "exact phrases"
remove short/common words ("a", "an", "the", etc)
stemming (reduce words to a common root, so that "programs" and "programming" both match "program")
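For the MySQL-FTS setup mentioned in the question, the operator and phrase features above map directly onto BOOLEAN MODE. A minimal sketch, assuming a hypothetical articles table with a FULLTEXT index on its body column:
-- Hedged sketch: + requires a term, - excludes one, and double quotes
-- match an exact phrase (the table and index here are assumptions).
SELECT id, title
FROM articles
WHERE MATCH(body) AGAINST('+smith -jones "exact phrase"' IN BOOLEAN MODE);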
I'd suggest reading through the answers to this similar question: Optimizing a simple search algorithm and also this article on some of Google's features.
Create an index on the columns that appear in the WHERE clause of your search queries.
To enable naive spell correction, you could also store the SOUNDEX of the column you would like to offer spell-check for.
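A minimal sketch of that idea in MySQL; the people table and both column names are assumptions for illustration:
-- Hedged sketch: store a phonetic code alongside the real value.
-- MySQL's SOUNDEX can return more than four characters, hence the wide column.
ALTER TABLE people ADD COLUMN name_soundex VARCHAR(20);
UPDATE people SET name_soundex = SOUNDEX(name);
-- At query time, fall back to a phonetic match when the exact search misses:
SELECT name FROM people WHERE name_soundex = SOUNDEX('Smyth');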
Enable the slow-query log, which will help you track down performance issues.
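In MySQL this can be switched on at runtime; the threshold below is illustrative:
-- Hedged sketch: log statements slower than one second.
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1;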
Related
In my project, I have been asked to implement a text query service on the database we are using, PostgreSQL. I have used PostgreSQL's Full Text Search features, which work fairly well in terms of time. One problem with full text search is that it does not have fuzzy search abilities. On the other hand, there is an extension named pg_trgm that provides functions and operators for determining the similarity of alphanumeric text. There are several examples of text search using pg_trgm, like:
select actor
from products
where actor % 'tomy';
As you know, an example of Postgres FTS looks like this:
SELECT title
FROM pgweb
WHERE to_tsvector(body) @@ to_tsquery('friend');
So, the main question is: what is the difference between these two search strategies? Which one is the more appropriate way to search text? Is it possible to mix them? I should also say that performance is an important concern. Thanks in advance!
They do completely different things. About the only thing that is not different between them is that they operate on text and can benefit from indexes. From your question, it seems like you already have a good sense of the differences. The appropriate one is the one that does what you want. If one of them were always appropriate, we probably wouldn't have created the other one.
You can mix them, but you will need a different index for each one; they cannot share an index. You probably need different tables as well, as full text search is more appropriate for sentences or paragraphs, while trigrams are more appropriate for individual words or short phrases.
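A minimal sketch of the two index types, with table and column names assumed for illustration:
-- Hedged sketch: each strategy gets its own GIN index.
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX docs_fts_idx ON docs USING gin (to_tsvector('english', body));
CREATE INDEX words_trgm_idx ON words USING gin (word gin_trgm_ops);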
One way to mix them would be to have one table of full texts, and another table which lists each distinct word present in any of the full texts. The second table could be used to detect probable typos in the query; once those are fixed by suggestions from trigram searching, run the fixed query against the first table.
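A sketch of that typo-correction step, assuming the second table is called words with a single word column:
-- Hedged sketch: suggest likely corrections for a misspelled query term.
SELECT word, similarity(word, 'tomy') AS sim
FROM words
WHERE word % 'tomy'
ORDER BY sim DESC
LIMIT 5;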
The difference is quite huge: in fuzzy search you are searching for a similar result, while in full-text search you are searching for an exact match. Whether one is more appropriate than the other is a matter of use case.
If you don't need fuzziness, don't use it; it is a huge performance overhead, because it cannot simply match the text exactly but has to try other combinations as well.
Our clients query our Azure Search index, mostly for people's names. We are using the Lucene analyzer for all of our fields. We build the query string by turning the client's input name into a phrase and adding a proximity of 3. Because we search using a phrase, we cannot use the fuzzy search capability of the Lucene analyzer, as it only works on single words.
We therefore needed a solution that could bring back results with names that weren't spelled exactly as the client input them. We came across the phonetic analyzer and have just implemented the Metaphone algorithm in our index. We've run some tests, and while it gets us closer to what we need, we still see some issues:
The analyzer's scope is so wide that it brings back a lot of false positives. For example, when searching on Kenneth Gooden, it brings back Kenneth Cotton. That's just a little too far to be considered phonetically similar, in our opinion. Can the sensitivity be tweaked in any way, or can something be done to boost some other parameter to remedy this?
When doing a search on Barry Soper, the first and highest-scored result that comes back is "Barry Spear." The second result, scored lower, is "Soper, Barry Russell." To a certain extent, I can maybe see why it's scored that way (b/c of the 2nd one being last name first) but then... not really. The 2nd result contains both exact terms within the required proximity. Maybe Azure Search gives priority to the order of words in the phrase before applying the analyzer? Still doesn't make sense to me. (Side note - this query also brings back "Barh Super" - see issue #1 above)
I would like to know if someone could offer suggestions to tweak Azure Search's behavior to work more along the lines of what we need, OR perhaps suggest an alternative to the phonetic analyzer. We haven't tried any of the other available phonetic algorithms yet, only b/c it seems Metaphone is the best and most commonly used. But we're open to suggestions regarding the other algorithms as well.
Thanks.
You are correct that the fuzzy operator only works on single terms. In this case, you can use a custom analyzer (phonetic token filter) or the Synonyms feature (in preview). I am not sure what you meant by "we have just implemented the Metaphone algorithm into our index", but there are several phonetic token filters you can choose from in the Azure Search custom analysis stack. Synonyms is a newer feature, only available in preview; you can take a look here. For synonyms, you will need to define synonym rules, say 'Nate, Nathan, Nathaniel' for example, and at query time searching for one automatically includes the results for the others.
Okay, then how should you use these building blocks to control relevance for your search? One way to model this is to use a separate field for each expansion strategy. For example, instead of a single field for the name, you can have three fields, say 'name', 'name_synonym', and 'name_phonetic'. The first field 'name' is for exact matches, the 'name_synonym' field has synonyms enabled, and the third uses a phonetic analyzer and broadens the search the most. You can then use a scoring profile to boost scores from matches in each field: for example, a boost value of 10 for exact matches, 5 for synonyms, and 1 for phonetic expansions. Your search will be issued against these three internal fields.
Regarding your question as to why 'Soper, Barry Russell' is ranked lower than 'Barry Spear': after phonetic analysis, the words 'soper' and 'spear' reduce to the same form, both at indexing and query time, and are treated as if they were identical terms. In computing the score and ranking, the search engine uses the analyzed form of the terms, so phonetic similarity has no influence on the score. That's why secondary factors, like field length, play a more significant role in the relevance score.
Hope this helps. I provided one way to model this, but you could also take a look at term boosting in the full Lucene query syntax.
Let me know if you have any additional questions.
Nate
I just stumbled across this gem in our code:
my $str_rep="lower(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(replace(field,'-',''),'',''),'.',''),'_',''),'+',''),',',''),':',''),';',''),'/',''),'|',''),'\',''),'*',''),'~','')) like lower('%var%')";
I'm not really a DB expert, but I have a hunch it can be rewritten in a more sane manner. Can it?
It depends on the DBMS you are using. I'll post some examples (feel free to edit this answer to add more).
MySQL
There is really not much to do; the only way to replace all the characters is to nest REPLACE functions, as has already been done in your code.
Oracle DB
Your clause can be rewritten using the TRANSLATE function.
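A hedged sketch of the rewrite: TRANSLATE maps the leading 'x' to itself and deletes every character in the from-set that has no counterpart in the to-set (the dummy 'x' is needed because Oracle treats an empty string as NULL):
WHERE lower(TRANSLATE(field, 'x-._+,:;/|\*~', 'x')) LIKE lower('%var%')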
SQL Server
As in MySQL, there is no direct equivalent of Oracle's TRANSLATE (SQL Server 2017 later added a TRANSLATE function, but it requires the source and replacement character sets to be the same length, so it cannot delete characters by itself). I have found some (much longer) alternatives in the answers to this question. In general, however, the queries become very long. I don't see any real advantage to doing so, besides having a more structured query that can be easily extended.
Firebird
As suggested by Mark Rotteveel, you can use SIMILAR TO to rewrite the entire clause.
If you are allowed to build your query string in Perl, you can also use a for loop over an array containing all the special characters.
EDIT: Sorry, I did not see that you indicated the DB in the tags. Consider only the last part of my answer.
You flagged this as Perl, but it's probably not?
Here is a Perl solution anyway:
# Strip the same set of punctuation characters in Perl before building the SQL:
$var =~ s/[\-\.\_\+\,\:\;\/\|\\\*\~]+//g;
Sorry, I don't know the languages concerned, but a couple of things come to mind.
Firstly, you could look for a replace-text function that does more than just a single character. Many languages have one. Some also do regular-expression-based find and replace.
Secondly, the code looks like it is attempting to strip a specific list of characters. This list may not include everything that is necessary, which makes for a relatively high-maintenance (pain-in-the-butt) problem. A simpler approach might be to invert the problem and ask which characters you want to keep. Inverting like this sometimes yields a simpler solution.
I am trying to design a database with search-ability at its core. My knowledge of database design and SQL is all self-taught and still fairly beginner-level, so my questions may possibly have easy answers.
Suppose I have a single table containing a large number of records. For example, suppose that each record contains details of a different computer application (name, developer, version number, etc). A list of keywords are associated with each record, such as a list of programming languages used to write the applications.
I wish to be able to enter one or more keywords (each separated by a space) into a search box, and I wish to have all associated records returned. How should I design the database to store the keywords, and what SQL query would I need to apply to the search text? (The search should be uppercase/lowercase independent.)
My next challenge would then be to order search results by relevance, and to allow entire key-phrases as well as keywords to be associated with each record. For example, if I type "Visual Basic" into the search field, I want the first results to have exactly the key-phrase "Visual Basic" associated with them. The next results should all have both keywords "Visual" and "Basic" associated with them, and the remaining results should have only one of these keywords. Again, please could anyone advise on how to implement this?
The final challenge I believe would be much harder: how much 'intelligent interpretation' can I design my database and SQL code to handle? For example, if I search for "CSS", can I get the records with the key-phrase "Cascading Style Sheets" to appear? Can I also get SQL to identify and search for similar words, such as plurals of search phrases, or "programmer" and "programming" when "program" is input? Thanks!
Learn relational algebra, normalization rules, and SQL.
Start with entity relationships. Sounds like you could have an APPLICATION table as parent for a FEATURE child table, with a one-to-many relationship between the two. You'll query them by JOINing one to the other:
SELECT A.NAME, F.NAME
FROM APPLICATION AS A
JOIN FEATURE AS F
ON F.APP_ID = A.ID
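For the keyword matching itself, a minimal sketch, assuming a further table KEYWORD(APP_ID, WORD) with one keyword per row; lowercasing the stored word makes the match case-insensitive:
SELECT DISTINCT A.NAME
FROM APPLICATION AS A
JOIN KEYWORD AS K
ON K.APP_ID = A.ID
WHERE LOWER(K.WORD) IN ('visual', 'basic')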
Your challenges would not suggest SQL and relations to me. I would think more in terms of a parser, an indexer and search engine like Lucene, and a NoSQL document database like MongoDB.
I've come to the conclusion, after a LOT of research, that #duffymo's answer is pointing in the right direction. For the benefit of other n00bs like me, here's the conclusion I've drawn:
Many open source search engine server apps are out there to install for free. Lucene was the first of these I had ever heard of, but others exist, and I think my favourite at the moment is Sphinx. As far as I can tell, the 'indexer' that #duffymo mentions is built into it. I have learnt that the indexer is the program that examines my database for keywords and automatically keeps a record of which results should be returned for different input queries. I have also now learnt that the terminology for the behaviour I was looking for (and which Sphinx has) is 'stemming'. I'm still not sure what role a parser plays in all this...
A more basic approach would be to use SQL itself. While I was already aware of the most basic options (i.e. using the LIKE keyword with 'wildcards'), I also discovered something a little more powerful: natural language / full-text search. For anyone not interested in installing a server app, I recommend you look this up.
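In MySQL, for example, that looks something like the sketch below; the table and column names are mine, not from my original question, and the score column also answers my earlier question about ordering by relevance:
ALTER TABLE application ADD FULLTEXT INDEX ft_keywords (keywords);
SELECT name, MATCH(keywords) AGAINST('visual basic') AS score
FROM application
WHERE MATCH(keywords) AGAINST('visual basic' IN NATURAL LANGUAGE MODE)
ORDER BY score DESC;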
Also, I see no reason why I would need to use NoSQL instead of SQL (as #duffymo suggested), so I'm going to stick with SQL for the moment (at least until I come across some good entry-level books on NoSQL). Furthermore, I have very little intention of learning relational algebra until I know why I should and how it would be useful. The message here is that other beginners shouldn't be put off by these things, as I don't think Sphinx requires any knowledge of them.
While I like #duffymo's answer, I will also suggest you research SPARQL and the WordNet project for your semantic-equivalence questions.
If you choose Oracle, you can use the spatial option's triple store to implement a SPARQL endpoint and do some very nice searching, like your CSS = "Cascading Style Sheets" example.
Can you advise on whether I can use just the Query functionality from Lucene to generate SQL queries? Something like an SQLQueryBuilder?
I have a massive SQL database of logs from a webserver cluster containing the original request and response strings plus some other useful/less bits and bobs. What I need to do is analyse the parameters in the original request and compare with the generated responses, looking at ratios, volatility, variability, consistency etc.
This question does not relate to the analysis stage, only to retrieving the data from the database which matches the parameters I'm interested in. So, I could just do this with good old SQL queries, manually building the exact queries I need on a case-by-case basis. But that's kinda lame; I reckon we can be a bit smarter than that, particularly as I can already see large numbers of similar-but-subtly-different queries being useful. And as I'm hoping to expose a single search box via a web interface to non-technical end users, hand-writing SQL queries seems like a bad idea... and a recipe for permanent maintenance requests (and can I be the first to say, er, no thanks!).
In an ideal world I expose a search form, with the option to write simple queries like
request:"someAttribute=\"someValue\"" AND response:"some hoped for result" AND daterange:30
which would then hopefully find all instances of requests which contain someAttribute="someValue" over the last 30 days. The results would then be put through standard statistical analyses against the given response text and printed on-screen. At least, that's the idea.
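For reference, the kind of SQL I'd expect that example to translate into might look like the following (MySQL-flavoured; the logs table and its columns are invented here for illustration):
SELECT request, response
FROM logs
WHERE request LIKE '%someAttribute="someValue"%'
AND response LIKE '%some hoped for result%'
AND logged_at >= NOW() - INTERVAL 30 DAY;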
Much of the actual logic to determine how to handle custom field definitions or special words I'll need to write myself, and that's OK. And NB, my non-technical end users are familiar enough with XML that they can handle a bit of attr="value" syntax, at least for the first iteration of the tool :D
In summary, I want to:
1) allow users to use Google-like search syntax (e.g. via Lucene's QueryAPI) to specify text to match in the logs
2) allow a layer to manipulate the query based on special words or fields (e.g. this layer could operate on a Java object representation of the query)
3) convert the final query into an SQL query appropriate for my database schema
4) query the database and spit back the resultset for statistical analysis
5) pretty-print on website:)
Am I completely barking up the wrong tree? It looks like it should be possible, but I can't seem to find much on it. I've been googling this for a bit, for example trying "Lucene SQLQueryBuilder" as a possible start, but didn't really find much by way of a lead.
So, my questions are:
Has anyone tried using Lucene's QueryAPI like this before? Did it work? Any gotchas?
Are there better query API libraries out there?
Examples, finished discussions and open-source implementations would be most helpful.
Many thanks.
NB: I don't think I want Lucene's search capabilities as such, as I'm only ever looking for exact matches. I just need a query layer on top of the database.
Lucene and SQL have very little in common, as they use totally different syntax (as HefferWolf mentioned) and different underlying data models. As you said yourself, I'm afraid you're barking up the wrong tree.
There are, however, attempts such as Hibernate Search to bridge this gap. These are interesting experiments as such, but I would be very careful about using any of that code in production.
You could possibly use the Full Text Search features available in some SQL databases, or reindex all the data in Lucene and use it without the database.
I doubt you can reuse any code from Lucene for this. Lucene does an internal rewrite of such queries, but into a syntax which wouldn't be of much help for SQL, I think.
name:Phil AND lastname:Miller AND NOT age:26
would be rewritten to
+name:Phil +lastname:Miller -age:26
So I think you would have to write your own translation into SQL query syntax.
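A hedged sketch of what that hand-written translation might produce for the rewritten query above, assuming a people table with matching columns:
SELECT *
FROM people
WHERE name = 'Phil'
AND lastname = 'Miller'
AND NOT age = 26;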
But maybe you can use Lucene as such for this. Have a look at hibernate-search, which is quite handy for easily creating a Lucene index from a SQL table.