FREETEXT queries in SQL Server 2008 not phrase matching - sql

I have a full-text indexed table in SQL Server 2008 that I am trying to query for an exact phrase match using FULLTEXT. I don't believe using CONTAINS or LIKE is appropriate for this, because in other cases the query might not be exact (the user doesn't surround the phrase in double quotes) and in general I want the flexibility of FREETEXT.
According to the documentation (MSDN) for FREETEXT:
If freetext_string is enclosed in double quotation marks, a phrase match is instead performed; stemming and thesaurus are not performed.
which would lead me to believe a query like this:
SELECT Description
FROM Projects
WHERE FREETEXT(Description, '"City Hall"')
would only return results where the term "City Hall" appears in the Description field, but instead I get results like this:
1 Design of handicap ramp at Manning Hall.
2 Antenna investigation. Client: City of Cranston Engineering Dept.
3 Structural investigation regarding fire damage to International Tennis Hall of Fame.
4 Investigation Roof investigation for proposed satellite design on Herald Hall.
... etc
Obviously those results include at least one of the words in my phrase, but not the phrase itself. What's worse, I had thought the results would be ranked, but the two results I actually wanted (because they include the actual phrase) are buried. By contrast, a plain LIKE query returns exactly the rows I want:
SELECT Description
FROM Projects
WHERE Description LIKE '%City Hall%'
1 Major exterior and interior renovation of the existing city hall for Quincy Massachusetts
2 Cursory structural investigation of Pawtucket City Hall tower plagued by leaks.
I'm sure this is a case of me not understanding the documentation, but is there a way to achieve what I'm looking for? Namely, to be able to pass in a search string without quotes and get exactly what I'm getting now or with quotes and get only that exact phrase?

As you said, FREETEXT looks up every word in your phrase, not the phrase as a whole. For that you need to use CONTAINS instead. Like this:
SELECT Description
FROM Projects
WHERE CONTAINS(Description, '"City Hall"')
If you want to get the rank of the results, you have to use CONTAINSTABLE. It works roughly the same, but it returns a table with two columns: [KEY], which contains the primary key of the searched table, and [RANK], which gives you the rank of the result.
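For example, something along these lines (a minimal sketch; ProjectId stands in for whatever the key column of your full-text index on Projects actually is):
SELECT p.Description, r.[RANK]
FROM Projects AS p
INNER JOIN CONTAINSTABLE(Projects, Description, '"City Hall"') AS r
    ON p.ProjectId = r.[KEY]   -- ProjectId is an assumed key column name
ORDER BY r.[RANK] DESC;
For the quotes/no-quotes requirement, the calling code can simply decide whether to hand the raw words to FREETEXT or the double-quoted phrase to CONTAINS/CONTAINSTABLE.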

Related

FTS doesn't work as expected with emails with dots

We're developing search functionality as part of a bigger system.
We have Microsoft SQL Server 2014 - 12.0.2000.8 (X64) Standard Edition (64-bit) with this setup:
CREATE TABLE NewCompanies(
[Id] [uniqueidentifier] NOT NULL,
[Name] [nvarchar](400) NOT NULL,
[Phone] [nvarchar](max) NULL,
[Email] [nvarchar](max) NULL,
[Contacts1] [nvarchar](max) NULL,
[Contacts2] [nvarchar](max) NULL,
[Contacts3] [nvarchar](max) NULL,
[Contacts4] [nvarchar](max) NULL,
[Address] [nvarchar](max) NULL,
CONSTRAINT PK_Id PRIMARY KEY (Id)
);
Phone is a structured, comma-separated string of digits, like
"77777777777, 88888888888"
Email is a structured string of emails separated by commas, like
"email1@gmail.com, email2@gmail.com" (or without commas at all, like
"email1@gmail.com")
Contacts1, Contacts2, Contacts3, Contacts4 are text fields where users can specify contact details in free form, like "John Smith +1 202 555 0156" or "Bob, +1-999-888-0156, bob@company.com". These fields can contain emails and phones we also want to search.
Here is how we create the full-text catalog and index:
-- FULL TEXT SEARCH
CREATE FULLTEXT CATALOG NewCompanySearch AS DEFAULT;
CREATE FULLTEXT INDEX ON NewCompanies(Name, Phone, Email, Contacts1, Contacts2, Contacts3, Contacts4, Address)
KEY INDEX PK_Id
Here is a data sample
INSERT INTO NewCompanies(Id, Name, Phone, Email, Contacts1, Contacts2, Contacts3, Contacts4)
VALUES ('7BA05F18-1337-4AFB-80D9-00001A777E4F', 'PJSC Azimuth', '79001002030, 78005005044', 'regular@hotmail.com, s.m.s@gmail.com', 'John Smith', 'Call only at weekends +7-999-666-22-11', NULL, NULL)
Actually we have about 100 thousand such records.
We expect users to be able to specify a part of an email like "@gmail.com", and this should return all the rows with Gmail addresses in any of the Email, Contacts1, Contacts2, Contacts3, Contacts4 fields.
The same goes for phone numbers. Users can search for a pattern like "70283" and the query should return phones containing those digits. That also applies to the free-form Contacts1, Contacts2, Contacts3, Contacts4 fields, where we should probably strip everything but digits and spaces before searching.
We used to use LIKE for the search when we had about 1,500 records and it worked fine, but now we have a lot of records and the LIKE search takes forever to return results.
This is how we try to get data from there:
SELECT * FROM NewCompanies WHERE CONTAINS((Email, Contacts1, Contacts2, Contacts3, Contacts4), '"s.m.s@gmail.com*"') -- this doesn't get the row
SELECT * FROM NewCompanies WHERE CONTAINS((Phone, Contacts1, Contacts2, Contacts3, Contacts4), '"6662211*"') -- doesn't get anything
SELECT * FROM NewCompanies WHERE CONTAINS(Name, '"zimuth*"') -- doesn't get anything
Actually, the requests
SELECT [...] CONTAINS([...], '"6662211*"') -- doesn't get anything
against 'Call only at weekends +7-999-666-22-11'
and
SELECT [...] CONTAINS(Name, '"zimuth*"') -- doesn't get anything
against 'PJSC Azimuth'
do work as expected.
See Prefix Term: 6662211* is not a prefix of any word in +7-999-666-22-11, just as zimuth* is not a prefix of Azimuth.
As for
SELECT [...] CONTAINS([...], '"s.m.s#gmail.com*"') -- this doesn't get the row
This is probably due to word breakers as alwayslearning pointed out in comments. See word-breakers
I don't think that Full-Text Search is applicable to your task.
Why use FTS for the exact same tasks the LIKE operator is used for? If there were a better index type for LIKE queries... then there would be a better index type, not a totally different technology and syntax.
And it will in no way help you match "6662211*" against "666<some arbitrary chars>22<some arbitrary chars>11".
Full-Text Search is not about regexes ("6662211*" is not even a correct expression for the job - there is nothing in it about the "some arbitrary chars" part); it's about synonyms, word forms, and so on.
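For contrast, this is the kind of query full-text search is designed for - word forms rather than substrings (a minimal sketch against the question's own table; FORMSOF is standard CONTAINS syntax):
SELECT Name, Contacts2
FROM NewCompanies
WHERE CONTAINS(Contacts2, 'FORMSOF(INFLECTIONAL, call)');  -- matches "Call", "called", "calling", ...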
But is it at all possible to search for substrings effectively?
Yes it is. Leaving aside such prospects as writing your own search engine, what can we do within SQL?
First of all - it is imperative to clean up your data!
If you want to return to the users the exact strings they have entered
users can specify contact details in free form
...you can save them as is... and leave them alone.
Then you need to extract data from the free-form text (it is not so hard for emails and phone numbers) and save that data in some canonical form.
For emails, the only thing you really need to do is make them all lowercase or uppercase (it doesn't matter which), and maybe split them on the @ sign. For phone numbers, you need to keep only the digits.
(...And then you can even store them as numbers. That can save you some space and time. But the search will be different... For now let's dive into a simpler and more universal solution using strings.)
As MatthewBaker mentioned, you can create a table of suffixes.
Then you can search like so:
SELECT DISTINCT * FROM NewCompanies JOIN Suffixes ON NewCompanies.Id = Suffixes.Id WHERE Suffixes.Suffix LIKE 'some text%'
You should place the wildcard % only at the end, or there would be no benefit from the Suffixes table.
Let's take, for example, a phone number
+7-999-666-22-11
After we get rid of the waste characters in it, it will have 11 digits. That means we'll need 11 suffixes for this one phone number:
1
11
211
2211
62211
662211
6662211
96662211
996662211
9996662211
79996662211
So the space complexity of this solution is linear in the number of records... not so bad, I'd say... But measured in characters it is another story: a value of length N needs N(N+1)/2 characters to store all its suffixes - that is quadratic... not good... but if you have 100,000 records now and no plans for millions in the near future, you can go with this solution.
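For illustration, a rough sketch of how such a table could be populated for digits-only phone numbers (the Suffixes table and column names are invented here, the digit cleanup is deliberately crude, and splitting comma-separated phone lists is omitted):
CREATE TABLE Suffixes (
    Id     UNIQUEIDENTIFIER NOT NULL,   -- references NewCompanies.Id
    Suffix NVARCHAR(450)    NOT NULL
);
;WITH Cleaned AS (
    SELECT Id,
           REPLACE(REPLACE(REPLACE(Phone, '+', ''), '-', ''), ' ', '') AS Digits
    FROM NewCompanies
    WHERE Phone IS NOT NULL
)
INSERT INTO Suffixes (Id, Suffix)
SELECT c.Id, SUBSTRING(c.Digits, v.n, LEN(c.Digits))
FROM Cleaned AS c
CROSS APPLY (SELECT TOP (LEN(c.Digits))
                    ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
             FROM sys.all_objects) AS v;   -- one suffix per starting position
CREATE INDEX IX_Suffixes_Suffix ON Suffixes (Suffix);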
Can we reduce space complexity?
I will only describe the idea; implementing it will take some effort, and we'll probably need to cross the boundaries of SQL.
Let's say you have 2 rows in NewCompanies and 2 strings of free form text in it:
aaaaa
11111
How big should the Suffixes table be? Obviously, we need only 2 records.
Let's take another example. Also 2 rows, 2 free text strings to search for. But now it's:
aa11aa
cc11cc
Let's see how many suffixes we need now:
a // no need, LIKE `a%` will match against 'aa' and 'a11aa' and 'aa11aa'
aa // no need, LIKE `aa%` will match against 'aa11aa'
1aa
11aa
a11aa
aa11aa
c // no need, LIKE `c%` will match against 'cc' and 'c11cc' and 'cc11cc'
cc // no need, LIKE `cc%` will match against 'cc11cc'
1cc
11cc
c11cc
cc11cc
Not so bad, but not so good either.
What else can we do?
Let's say the user enters "c11" in the search field. Then LIKE 'c11%' needs the 'c11cc' suffix to succeed. But what if, instead of searching for "c11", we first search for "c%", then for "c1%", and so on? The first search will give us only one row from NewCompanies, and there would be no need for subsequent searches. Then we can drop more suffixes:
1aa // drop this as well, because LIKE '1%' matches '11aa'
11aa
a11aa // drop this as well, because LIKE 'a%' matches 'aa11aa'
aa11aa
1cc // same here
11cc
c11cc // same here
cc11cc
and we end up with only 4 suffixes
11aa
aa11aa
11cc
cc11cc
I can't say what the space complexity would be in this case, but it feels like it would be acceptable.
In cases like this, full-text searching is less than ideal. I was in the same boat as you are: LIKE searches are too slow, and full-text searches look for words that start with a term rather than contain a term.
We tried several solutions. One pure SQL option is to build your own version of full-text search, in particular an inverted index search. We tried this, and it was successful, but it took a lot of space. We created a secondary holding table for partial search terms and used full-text indexing on that. However, this meant we repeatedly stored multiple copies of the same thing. For example, we stored "longword" as longword, ongword, ngword, gword... etc., so any contained phrase would always be at the start of the indexed term. A horrendous solution, full of flaws, but it worked.
We then looked at hosting a separate server for lookups. Googling Lucene and Elasticsearch will give you good information on these off-the-shelf packages.
Eventually, we developed our own in-house search engine, which runs alongside SQL. This has allowed us to implement phonetic searches (double metaphone) and then use Levenshtein calculations alongside Soundex to establish relevance. Overkill for a lot of solutions, but worth the effort in our use case. We even now have the option of leveraging Nvidia GPUs for CUDA searches, but this brought a whole new set of headaches and sleepless nights. The relevance of all these will depend on how often your searches are performed and how responsive you need them to be.
Full-Text Indexes have a number of limitations. You can use wildcards on words that the index has identified as whole "parts", but even then the wildcard can only go at the end of the term, so you only ever match the start of a word. That is why you can use CONTAINS(Name, '"Azimut*"') but not CONTAINS(Name, '"zimuth*"').
From the Microsoft documentation:
When the prefix term is a phrase, each token making up the phrase is considered a separate prefix term. All rows that have words beginning with the prefix terms will be returned. For example, the prefix term "light bread*" will find rows with text of "light breaded," "lightly breaded," or "light bread," but it will not return "lightly toasted bread."
The dots in the email, as indicated by the title, are not the main issue. This, for example, works:
SELECT * FROM NewCompanies
WHERE CONTAINS((Email, Contacts1, Contacts2, Contacts3, Contacts4), 's.m.s@gmail.com')
In this case, the index identifies the whole email string as a valid term, as well as "gmail" and "gmail.com". Just "s.m.s", though, is not.
The last example is similar. The parts of the phone number are indexed ("666-22-11" and "999-666-22-11", for example), but the string with the hyphens removed is not one the index is going to know about. This, on the other hand, does work:
SELECT * FROM NewCompanies
WHERE CONTAINS((Phone, Contacts1, Contacts2, Contacts3, Contacts4), '"666-22-11*"')

Optimising LIKE expressions that start with wildcards

I have a table in a SQL Server database with an address field (ex. 1 Farnham Road, Guildford, Surrey, GU2XFF) which I want to search with a wildcard before and after the search string.
SELECT *
FROM Table
WHERE Address_Field LIKE '%nham%'
I have around 2 million records in this table and I'm finding that queries take anywhere from 5-10s, which isn't ideal. I believe this is because of the preceding wildcard.
I think I'm right in saying that indexes won't be used for seek operations because of the preceding wildcard.
Using full-text searching and CONTAINS isn't possible because I want to search for the latter parts of words (I know that you could replace the search string with Guil* in the query below and it would return results). Certainly, running the following returns no results:
SELECT *
FROM Table
WHERE CONTAINS(Address_Field, '"nham"')
Is there any way to optimise queries with preceding wildcards?
Here is one (not really recommended) solution.
Create a table AddressSubstrings. This table would have multiple rows per address, keyed back to the primary key of Table.
When you insert an address into Table, insert its substrings starting from each position. So if you insert 'abcd', you would insert:
abcd
bcd
cd
d
along with the unique id of the row in Table. (This can all be done using a trigger.)
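If it helps, here is a rough sketch of that trigger (it assumes Table really does have columns named table_id and Address_Field, as in the query below; adjust to your schema):
CREATE TABLE AddressSubstrings (
    table_id         INT           NOT NULL,   -- the unique id of the row in Table
    AddressSubstring NVARCHAR(450) NOT NULL
);
GO
CREATE TRIGGER trg_Table_AddressSubstrings ON [Table]
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- insert one suffix per starting position of each newly inserted address
    INSERT INTO AddressSubstrings (table_id, AddressSubstring)
    SELECT i.table_id, SUBSTRING(i.Address_Field, v.n, LEN(i.Address_Field))
    FROM inserted AS i
    CROSS APPLY (SELECT TOP (LEN(ISNULL(i.Address_Field, '')))   -- ISNULL guards against NULL addresses
                        ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
                 FROM sys.all_objects) AS v;
END;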
Create an index on AddressSubstrings(AddressSubstring).
Then you can phrase your query as:
SELECT *
FROM Table t JOIN
AddressSubstrings ads
ON t.table_id = ads.table_id
WHERE ads.AddressSubstring LIKE 'nham%';
Now there will be a matching row starting with nham, so LIKE should be able to make use of an index (and a full-text index would also work).
If you are interested in the right way to handle this problem, a reasonable place to start is the Postgres documentation. It uses a method similar to the above, but with n-grams. The only problem with n-grams for your particular problem is that they require rewriting the comparison as well as changing the storage.
I can't offer a complete solution to this difficult problem.
But if you're looking to create a suffix search capability, in which, for example, you'd be able to find the row containing HWilson with ilson and the row containing ABC123000654 with 654, here's a suggestion.
WHERE REVERSE(textcolumn) LIKE REVERSE('ilson') + '%'
Of course this isn't sargable the way I wrote it here. But many modern DBMSs, including recent versions of SQL Server, allow the definition, and indexing, of computed or virtual columns.
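Something like this sketch, for example (MyTable/textcolumn are stand-ins, and it assumes textcolumn is a regular nvarchar rather than nvarchar(max), so the computed column can be indexed):
ALTER TABLE MyTable ADD textcolumn_rev AS REVERSE(textcolumn) PERSISTED;
CREATE INDEX IX_MyTable_textcolumn_rev ON MyTable (textcolumn_rev);
-- the suffix search is now a sargable prefix search on the reversed column
SELECT *
FROM MyTable
WHERE textcolumn_rev LIKE REVERSE('ilson') + '%';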
I've deployed this technique, to the delight of end users, in a health-care system with lots of record IDs like ABC123000654.
Not without a serious preparation effort, hwilson1.
At the risk of repeating the obvious - any search path optimisation (the decision whether an index is used, which type of join operator to use, etc., independently of which DBMS we're talking about) works on equality (equal to) or range checking (greater than and less than).
With leading wildcards, you're out of luck.
The workaround is a serious preparation effort, as stated up front:
It would boil down to Vertica's text search feature, where that problem is solved. See here:
https://my.vertica.com/docs/8.0.x/HTML/index.htm#Authoring/AdministratorsGuide/Tables/TextSearch/UsingTextSearch.htm
For any other database platform, including MS SQL, you'll have to do that manually.
In a nutshell: it relies on a primary key or unique identifier of the table whose text search you want to optimise.
You create an auxiliary table whose primary key is the primary key of your base table plus a sequence number, and a VARCHAR column that will contain a series of substrings of the base table's string that you initially searched using wildcards. In an over-simplified way:
If your input table (just showing the columns that matter) is this:
id | the_search_col                            | other_col
42 | The Restaurant at the End of the Universe | Arthur Dent
43 | The Hitch-Hiker's Guide to the Galaxy     | Ford Prefect
Your auxiliary search table could contain:
id | seq | search_token
42 |   1 | Restaurant
42 |   2 | End
42 |   3 | Universe
43 |   1 | Hitch-Hiker
43 |   2 | Guide
43 |   3 | Galaxy
Normally, you suppress typical "fillers" like articles, prepositions and apostrophe-s, and split into tokens separated by punctuation and white space. For your '%nham%' example, however, you'd probably need to talk to a linguist who has specialised in English morphology to find splitting token candidates .... :-]
You could start with the same technique that I use when I un-pivot a horizontal series of measures without the PIVOT clause, like here:
Pivot sql convert rows to columns
Then use a combination of (probably nested) CHARINDEX() and SUBSTRING() calls, driven by the index you get from the CROSS JOIN with a series of integers as described in the post linked above, and use that very index as the sequence for the auxiliary search table.
Lay an index on search_token and you'll have a very fast access path to a big table.
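As a hedged sketch of that population step: on SQL Server 2016 or later, STRING_SPLIT can stand in for the nested CHARINDEX()/SUBSTRING() splitting (base_table and aux_search here correspond to the input and auxiliary tables sketched above):
INSERT INTO aux_search (id, seq, search_token)
SELECT b.id,
       ROW_NUMBER() OVER (PARTITION BY b.id ORDER BY (SELECT NULL)) AS seq,
       s.value
FROM base_table AS b
CROSS APPLY STRING_SPLIT(b.the_search_col, ' ') AS s
WHERE LEN(s.value) > 3;   -- crude filler suppression: drop short tokens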
Not a stroll in the park, I agree, but promising ...
Happy playing -
Marco the Sane

Logical ranking in SQL Full-text Search

Below you see my query. The parameter
'ISABOUT("Windsor Col*" WEIGHT(1.0),"Windsor Col" WEIGHT(0.7),"Windsor*" WEIGHT(0.5),"Col*" WEIGHT(0.5))'
is actually passed in to a stored function that has the same code. This is for autocomplete, and this query is made when the user types "Windsor Col". What's curious, though, is why "Windsor Colorado United States" isn't at the top of the list.
Anyone have a fresh pair of eyes that can spot the mistake I'm making? Also, if you have any other suggestions, feel free to comment. I want the user search experience to be as natural and obvious as possible.
EDIT: The first select (Landmarks) searches against the Name column and the second select (Cities) searches against the Extended column.
In SQL Server, the rank returned from CONTAINSTABLE is only applicable to the results returned by that particular full-text query. In other words, comparing the ranks from two different CONTAINSTABLE queries is meaningless (even though the text of the query may be the same, they are hitting different columns).

Searching SQL Server

I've been asked to put together a search for one of our databases.
The criteria: the user types into a search box, SQL then needs to split up all the words in the search and search for each of them across multiple fields (probably 2 or 3). It then needs to weight the results; for example, a result where all the words appear will be the top result, and one where only 1 word appears will be weighted lower.
For example if you search for "This is a demo post"
The results would be ranked like this
Rank Field1 Field2
1: "This is a demo post" ""
2: "demo post" ""
3: "demo" "post"
4: "post" ""
Hope that makes some sort of sense; it's kind of a basic Google-like search.
Any way I can think of doing this is very messy.
Any suggestions would be great.
"Google-like search" means: fulltext search. Check it out!
Understanding fulltext indexing on SQL Server
Understanding SQL Server full-text indexing
Getting started with SQL Server 2005 fulltext searching
SQL Server fulltext search: language features
With SQL Server 2008, it's totally integrated into the SQL Server engine.
Before that, it was a bit of a quirky add-on. Another good reason to upgrade to SQL Server 2008! (and the SP1 is out already, too!)
Marc
Logically you can do this reasonably easily, although it may get hard to optimise - especially if someone uses a particularly long phrase.
Here's a basic example based on a table I have to hand...
SELECT TOP 100 Score, Forename FROM
(
SELECT
CASE
WHEN Forename LIKE '%Kerry James%' THEN 100
WHEN Forename LIKE '%Kerry%' AND Forename LIKE '%James%' THEN 75
WHEN Forename LIKE '%Kerry%' THEN 50
WHEN Forename LIKE '%James%' THEN 50
END AS Score,
Forename
FROM
tblPerson
) [Query]
WHERE
Score > 0
ORDER BY
Score DESC
In this example, I'm saying that an exact match is worth 100, a match with both terms (but not together) is worth 75 and a match of a single word is worth 50. You can make this as complicated as you wish and even include SOUNDEX matches too - but this is a simple example to point you in the right direction.
I ended up creating a full-text index on the table and joining my search results to FREETEXTTABLE, allowing me to see the ranked value of each result.
The SQL ended up looking something like this:
SELECT
    Msgs.RecordId,
    Msgs.Title,
    Msgs.Body
FROM
    [Messages] AS Msgs
    INNER JOIN FREETEXTTABLE([Messages], Title, @SearchText) AS TitleRanks ON Msgs.RecordId = TitleRanks.[Key]
ORDER BY
    TitleRanks.[Rank] DESC  -- order by the full-text rank, highest first
I've used full-text indexes in the past but never realised you could use FREETEXTTABLE like that; I was very impressed with how easy it was to code and how well it works.

Need Pattern for dynamic search of multiple sql tables

I'm looking for a pattern for performing a dynamic search on multiple tables.
I have no control over the legacy (and poorly designed) database table structure.
Consider a scenario similar to a resume search where a user may want to perform a search against any of the data in the resume and get back a list of resumes that match their search criteria. Any field can be searched at anytime and in combination with one or more other fields.
The actual sql query gets created dynamically depending on which fields are searched. Most solutions I've found involve complicated if blocks, but I can't help but think there must be a more elegant solution since this must be a solved problem by now.
Yeah, so I've started down the path of dynamically building the SQL in code. Seems godawful. If I really try to support the requested ability to query any combination of any field in any table, this is going to be one MASSIVE set of if statements. shiver
I believe I read that COALESCE only works if your data does not contain NULLs. Is that correct? If so, no go, since I have NULL values all over the place.
As far as I understand (and I'm also someone who has written against a horrible legacy database), there is no such thing as dynamic WHERE clauses. It has NOT been solved.
Personally, I prefer to generate my dynamic searches in code. Makes testing convenient. Note: when you create your SQL queries in code, don't concatenate in user input - use your @variables!
The only alternative is to use COALESCE. Let's say you have the following table:
Users
-----------
Name nvarchar(20)
Nickname nvarchar(10)
and you want to search optionally for name or nickname. The following query will do this:
SELECT Name, Nickname
FROM Users
WHERE
Name = COALESCE(@name, Name) AND
Nickname = COALESCE(@nick, Nickname)
If you don't want to search for something, just pass in a null. For example, passing in "brian" for @name and null for @nick results in the following query being evaluated:
SELECT Name, Nickname
FROM Users
WHERE
Name = 'brian' AND
Nickname = Nickname
COALESCE turns the null parameter into an identity comparison (e.g. Name = Name), which is true as long as the column itself is not NULL, and so doesn't affect the where clause.
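Combining the two ideas above - build the WHERE clause in code, but keep the user input parameterised - a minimal sketch against the same Users table using sp_executesql:
DECLARE @sql  NVARCHAR(MAX) = N'SELECT Name, Nickname FROM Users WHERE 1 = 1';
DECLARE @name NVARCHAR(20)  = N'brian';   -- NULL means "not searched"
DECLARE @nick NVARCHAR(10)  = NULL;
IF @name IS NOT NULL SET @sql += N' AND Name = @name';
IF @nick IS NOT NULL SET @sql += N' AND Nickname = @nick';
EXEC sys.sp_executesql @sql,
     N'@name NVARCHAR(20), @nick NVARCHAR(10)',
     @name = @name, @nick = @nick;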
Search and normalization can be at odds with each other. So probably the first thing would be to get some kind of "view" that shows all the fields that can be searched as a single row, with a single key getting you back to the resume. Then you can throw something like Lucene in front of that to give you a full-text index of those rows. The way that works is: you ask it for "x" in this view and it returns the key. It's a great solution and comes recommended by Joel himself on the podcast within the first 2 months, IIRC.
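A minimal sketch of such a flattened view (every name here is invented for illustration; STRING_AGG needs SQL Server 2017+, older versions would use FOR XML PATH instead):
CREATE VIEW dbo.ResumeSearchSource
AS
SELECT r.ResumeId,
       r.Name + ' ' + ISNULL(r.Address, '') + ' ' + ISNULL(jobs.AllJobs, '') AS SearchText
FROM dbo.Resumes AS r
OUTER APPLY (SELECT STRING_AGG(w.Employer + ' ' + w.Title, ' ') AS AllJobs
             FROM dbo.WorkExperience AS w
             WHERE w.ResumeId = r.ResumeId) AS jobs;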
What you need is something like SphinxSearch (for MySQL) or Apache Lucene.
As you said in your example, let's imagine a Resume that is composed of several fields:
Name,
Address,
Education (this could be a table on its own) or
Work experience (this could grow to its own table where each row represents a previous job)
So searching for a word in all those fields with WHERE rapidly becomes a very long query with several JOINs.
Instead you could change your frame of reference and think of the whole resume as what it is: a single document that you just want to search.
This is where tools like Sphinx Search come in. They create a FULL TEXT index of your 'document', and then you can query Sphinx and it will tell you where in the database that record was found.
Really good search results.
Don't worry about these tools not being part of your RDBMS; it will save you a lot of headaches to use the appropriate model ("documents") rather than the incorrect one ("tables") for this application.