I am going to create:
a table for storing IDs and unique text values (which are expected to be large)
a stored procedure which takes a text value as input parameter (it will check whether the value exists in the above table and return the corresponding ID if it does, or insert a new record and return the new ID if not)
I want to optimize the search for text values by using a hash value of the text and creating an index on it. So during the search I expect a non-clustered index to be used (not the clustered index).
I decided to use HASHBYTES with SHA2_256, and I am wondering whether there are any differences/benefits between storing the hash value as BINARY(32) or as NVARCHAR(16)?
You can't reasonably store a hash value as characters, because binary data is not text. Various text-processing and comparison functions interpret those characters; for example, trailing whitespace is sometimes ignored, leading to incorrect results.
Since you've got 32 totally random, unstructured bytes to store, BINARY(32) is the most natural format, and it is also the fastest one.
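To make that concrete, here is a minimal sketch of the table and lookup-or-insert procedure under discussion (the table, column, and procedure names are hypothetical; note that before SQL Server 2016, HASHBYTES input is limited to 8000 bytes):

--*** Hypothetical sketch: BINARY(32) hash as a persisted computed column
CREATE TABLE dbo.TextValues (
    Id        bigint IDENTITY(1,1) PRIMARY KEY,  -- clustered index on the ID
    TextValue nvarchar(max) NOT NULL,
    TextHash  AS CAST(HASHBYTES('SHA2_256', TextValue) AS binary(32)) PERSISTED
);
CREATE NONCLUSTERED INDEX IX_TextValues_TextHash ON dbo.TextValues (TextHash);
GO

CREATE PROCEDURE dbo.GetOrCreateTextId
    @TextValue nvarchar(max),
    @Id        bigint OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @Hash binary(32) = HASHBYTES('SHA2_256', @TextValue);

    -- Seek on the non-clustered hash index, then re-check the full text
    -- to rule out (astronomically unlikely) hash collisions
    SELECT @Id = Id
    FROM dbo.TextValues
    WHERE TextHash = @Hash AND TextValue = @TextValue;

    IF @Id IS NULL
    BEGIN
        -- Concurrency handling (locking hints, retry) omitted for brevity
        INSERT INTO dbo.TextValues (TextValue) VALUES (@TextValue);
        SET @Id = SCOPE_IDENTITY();
    END
END;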
Related
I have an SQL table linked in MS Access which contains a number of short text fields that are limited to 255 characters. This table is updated from an Access form.
I was informed that when data is extracted from the table, one of the fields is excessively long relative to its contents.
I investigated and ran this query:
SELECT [dbo_NCR User Input].ImpactGrade, Len([impactgrade]) AS length
FROM [dbo_NCR User Input];
...which showed that, regardless of the contents, the field length was 255 characters.
Has anyone experienced this issue, and if so, how can it be resolved to remove the additional characters from the field?
If the data type of the field in the SQL Server table is CHAR(255) as opposed to VARCHAR(255), then 255 bytes are always allocated for the field value and the content is padded with trailing spaces, regardless of the true length of the content.
Conversely, VARCHAR(255) only allocates the bytes required to store the content of the field (plus 2 bytes of overhead), up to the given maximum.
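If that turns out to be the cause, here is a minimal sketch of a fix on the SQL Server side, assuming the linked table is dbo.[NCR User Input] (per the query above):

--*** Hypothetical sketch: change the type, then trim the padding already stored
ALTER TABLE dbo.[NCR User Input] ALTER COLUMN ImpactGrade VARCHAR(255);  -- re-state NOT NULL here if the column has it
UPDATE dbo.[NCR User Input] SET ImpactGrade = RTRIM(ImpactGrade);
-- If you can't change the column type, use RTRIM(ImpactGrade) at query time instead.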
What is a good approach to keeping an nvarchar field unique? I have a field which stores URLs of MP3 files. The URL length can be anything from 10 characters to 4000. I tried to create an index, and it says it cannot create the index because the total length exceeds 900 bytes.
If the field is not indexed, searching anything is going to be slow. I am using C# and ASP.NET MVC for the front end.
You could use the CHECKSUM function and put an index on the checksum column.
--*** Add extra column to your table that will hold checksum
ALTER TABLE Production.Product
ADD cs_Pname AS CHECKSUM(Name);
GO
--*** Create index on new column
CREATE INDEX Pname_index ON Production.Product (cs_Pname);
GO
Then you can retrieve data fast using the following query:
SELECT *
FROM Production.Product
WHERE CHECKSUM(N'Bearing Ball') = cs_Pname
AND Name = N'Bearing Ball';
Here is the documentation: http://technet.microsoft.com/en-us/library/ms189788.aspx
You can use a hash function (theoretically it doesn't guarantee that two different titles will have different hashes, but it should be good enough; see MD5 Collisions) and then apply the index on that column.
MD5 in SQL Server
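As a sketch of that idea with HASHBYTES (the table and column names here are hypothetical, and HASHBYTES input is capped at 8000 bytes before SQL Server 2016):

--*** Hypothetical sketch: persisted MD5 hash of the URL, with a unique index
--*** standing in for uniqueness of the URL itself
ALTER TABLE dbo.Mp3Files
ADD UrlHash AS CAST(HASHBYTES('MD5', Url) AS binary(16)) PERSISTED;

-- A (vanishingly unlikely) collision between two different URLs would be
-- falsely rejected by this constraint; see the MD5 Collisions caveat above.
CREATE UNIQUE INDEX UX_Mp3Files_UrlHash ON dbo.Mp3Files (UrlHash);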
You could create a hash code of the URL and use that integer as a unique index in your DB. Be sure to convert all characters to lowercase first, to ensure that all URLs are in the same format; the same URL will then generate the same hash code.
I'm importing data from one system to another. The former keys off an alphanumeric field whereas the latter requires a numeric integer field. I'd like to find or write a function that I can feed the alphanumeric value to and have it return a number that would be unique to the value passed in.
My first thought was to do a hash, but of course the result of any built-in hash is going to contain letters, and it's also technically possible (however unlikely) that a hash may not be unique.
My first question is whether there is anything built into SQL that I'm overlooking; short of that, I'd like to hear suggestions on the easiest way to implement such a function.
Here is a function which should convert from base 10 (integer) to base 36 (alphanumeric) and back again:
https://www.simple-talk.com/sql/t-sql-programming/numeral-systems-and-numbers-conversion-in-sql/
You might find the resultant number is too big to be held in an integer though.
You could concatenate the ASCII values of each character of your string and cast the result as a bigint.
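A sketch of that idea; note that the concatenated digits overflow bigint quickly, so this only works for strings of roughly six characters or fewer:

--*** Hypothetical sketch: concatenate ASCII codes and cast to bigint
DECLARE @s varchar(10) = 'ABC', @digits varchar(40) = '', @i int = 1;
WHILE @i <= LEN(@s)
BEGIN
    SET @digits = @digits + CAST(ASCII(SUBSTRING(@s, @i, 1)) AS varchar(3));
    SET @i += 1;
END;
SELECT CAST(@digits AS bigint) AS NumericKey;  -- 'ABC' -> 656667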
If the original data is known to be integers, you can use CAST:
SELECT CAST(varcharcol AS INT) FROM Table
Is there any bad effect if I use the TEXT data type to store an ID number in a database?
I do something like:
CREATE TABLE GenData ( EmpName TEXT NOT NULL, ID TEXT PRIMARY KEY);
And actually, when I want to store a date value I usually use the TEXT data type as well. If this is the wrong way, what are its disadvantages?
I am using PostgreSQL.
Storing numbers in a text column is a very bad idea. You lose a lot of advantages when you do that:
You can't prevent invalid numbers (e.g. 'foo') from being stored.
Sorting will not work the way you want ('10' is "smaller" than '2').
It confuses everybody looking at your data model.
I want to store a date value I usually use TEXT
That is another very bad idea, mainly for the same reasons you shouldn't store a number in a text column. In addition to completely wrong dates ('foo'), you can't prevent "invalid" dates either (e.g. February 31st). And then there is the sorting, the comparison with > and <, the date arithmetic...
I really don't recommend using text for dates.
Look at all the functions you are missing out on with text.
If you want to use them, you have to cast, and that causes problems whenever the stored dates happen to be invalid, because with text there is no validation.
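A short illustration of the difference (PostgreSQL; table and column names are made up):

-- Text column: anything goes in, errors surface only when you cast
CREATE TABLE events_text (ev_date text);
INSERT INTO events_text VALUES ('2024-02-31');   -- accepted, nothing validates it
-- SELECT ev_date::date FROM events_text;        -- ERROR: date/time field value out of range

-- Date column: invalid values are rejected at insert time
CREATE TABLE events_date (ev_date date);
-- INSERT INTO events_date VALUES ('2024-02-31'); -- fails immediately
INSERT INTO events_date VALUES ('2024-02-29');
SELECT ev_date + INTERVAL '7 days' FROM events_date;  -- date arithmetic just works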
In addition to what the other answers already provided:
text is also subject to COLLATION and encoding, which may complicate portability and data interchange between platforms. It also slows down sorting operations.
Concerning the storage size of text: an integer occupies 4 bytes (and is subject to padding for data alignment). text or varchar occupies 1 byte of overhead plus the actual string, which is 1 byte per ASCII character in UTF-8, or more for special characters. Most likely, text will be much bigger.
It depends on what operations you are going to do on the data.
If you are going to be doing a lot of arithmetic on numeric data, it makes more sense to store it as some kind of numeric data type. Also, if you plan on sorting the data in numerical order, it really helps if the data is stored as a number.
When stored as text, "11" comes ahead of "9" because "1" comes ahead of "9". If this isn't what you want, don't use text.
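For example (PostgreSQL), the same three values sort differently as text and as numbers:

SELECT x FROM (VALUES ('11'), ('9'), ('2')) AS t(x) ORDER BY x;  -- '11', '2', '9'
SELECT n FROM (VALUES (11), (9), (2)) AS t(n) ORDER BY n;        -- 2, 9, 11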
On the other hand, it often makes sense to store strings of digits, such as ZIP codes, social security numbers, or phone numbers, as text.
I have a huge table with 2 columns: Id and Title. Id is bigint and I'm free to choose the type of the Title column: varchar, char, text, whatever. The Title column contains random text strings like "abcdefg", "q", "allyourbasebelongtous", with a maximum of 255 chars.
My task is to get strings by a given substring. Substrings also have random length and can be the start, middle, or end of strings. The most obvious way to perform it:
SELECT * FROM t WHERE Title LIKE '%abc%'
I don't care about INSERT, I only need to do fast selects. What can I do to perform the search as fast as possible?
I use MS SQL Server 2008 R2; full text search will be useless, as far as I can see.
If you don't care about storage, you can create another table with partial Title entries, one for each trailing substring of each title (up to 255 entries per normal title).
That way you can index these substrings and match only against the beginning of the string, which should greatly improve performance.
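A rough sketch of that suffix table (names are hypothetical; this borrows sys.all_objects as a convenient numbers source for generating positions):

--*** Hypothetical sketch: one row per trailing substring of each title
CREATE TABLE TitleSuffix (
    Id     bigint       NOT NULL,  -- points back to t.Id
    Suffix varchar(255) NOT NULL
);
CREATE INDEX IX_TitleSuffix_Suffix ON TitleSuffix (Suffix);

INSERT INTO TitleSuffix (Id, Suffix)
SELECT t.Id, SUBSTRING(t.Title, n.n, 255)
FROM t
JOIN (SELECT TOP (255) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
      FROM sys.all_objects) AS n ON n.n <= LEN(t.Title);

-- '%abc%' against Title becomes 'abc%' against Suffix, which can seek the index
SELECT DISTINCT t.*
FROM t
JOIN TitleSuffix s ON s.Id = t.Id
WHERE s.Suffix LIKE 'abc%';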
If you want to use less space than Randy's answer and there is considerable repetition in your data, you can create an N-ary tree data structure where each edge is the next character, and hang each string and trailing substring in your data on it.
You number the nodes in depth-first order. Then you can create a table with up to 255 rows for each of your records, holding the Id of your record and the node id in your tree that matches the string or trailing substring. Then when you do a search, you find the node id that represents the string you are searching for (and all trailing substrings) and do a range search.
Sounds like you've ruled out all good alternatives.
You already know that your query
SELECT * FROM t WHERE TITLE LIKE '%abc%'
won't use an index; it will do a full table scan every time.
If you were sure that the string was at the beginning of the field, you could do
SELECT * FROM t WHERE TITLE LIKE 'abc%'
which would use an index on Title.
Are you sure full text search wouldn't help you here?
Depending on your business requirements, I've sometimes used the following logic:
Do a "begins with" query (LIKE 'abc%') first, which will use an index.
Depending on whether any rows are returned (or how many), conditionally move on to the "harder" search that will do the full scan (LIKE '%abc%').
It depends on what you need, of course, but I've used this in situations where I can show the easiest and most common results first, and only move on to the more difficult query when necessary.
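A minimal sketch of that staging in T-SQL (keyword hard-coded for illustration):

-- Try the index-friendly prefix search first
SELECT * FROM t WHERE Title LIKE 'abc%';

-- Fall back to the full scan only when the cheap query found nothing
IF @@ROWCOUNT = 0
    SELECT * FROM t WHERE Title LIKE '%abc%';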
You can add a calculated column to the table: TitleLength AS LEN(Title) PERSISTED. This stores the length of the Title column. Create an index on it.
Also add another calculated column: ReverseTitle AS REVERSE(Title) PERSISTED.
Now when someone searches for a keyword, check whether the length of the keyword equals TitleLength. If so, do an "=" search. If the keyword is shorter than TitleLength, do a LIKE: first Title LIKE 'abc%', then ReverseTitle LIKE 'cba%'. This is similar to Brad's approach: you run the next, more difficult query only if required.
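A sketch of those computed columns and the staged queries (index names are made up):

--*** Hypothetical sketch: persisted length and reversed-title columns
ALTER TABLE t ADD TitleLength AS LEN(Title) PERSISTED;
ALTER TABLE t ADD ReverseTitle AS REVERSE(Title) PERSISTED;
CREATE INDEX IX_t_TitleLength ON t (TitleLength);
CREATE INDEX IX_t_ReverseTitle ON t (ReverseTitle);

DECLARE @kw varchar(255) = 'abc';
-- Exact-length candidates reduce to an equality check
SELECT * FROM t WHERE TitleLength = LEN(@kw) AND Title = @kw;
-- Otherwise: prefix seek on Title, then prefix seek on the reversed title
SELECT * FROM t WHERE Title LIKE @kw + '%';
SELECT * FROM t WHERE ReverseTitle LIKE REVERSE(@kw) + '%';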
Also, if the 80-20 rule applies to your keywords/substrings (i.e. if most of the searches are for a minority of the keywords), then you can also consider some sort of caching. For example, say you find that many users search for the keyword "abc" and this search returns records with ids 20, 22, 24, and 25: you can store this in a separate table and have it indexed.
Now when someone searches for a new keyword, first look in this "cache" table to see whether the search was already performed by an earlier user. If so, there is no need to look again at the main table; simply return the results from the "cache" table.
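A sketch of such a cache table (hypothetical names; note a keyword with zero matches is never cached here, so it would be re-scanned each time):

--*** Hypothetical sketch: remember which Ids each keyword matched
CREATE TABLE KeywordCache (
    Keyword   varchar(255) NOT NULL,
    MatchedId bigint       NOT NULL,
    PRIMARY KEY (Keyword, MatchedId)
);

DECLARE @kw varchar(255) = 'abc';
SELECT MatchedId FROM KeywordCache WHERE Keyword = @kw;
IF @@ROWCOUNT = 0
BEGIN
    -- First time this keyword is seen: do the expensive scan and remember it
    INSERT INTO KeywordCache (Keyword, MatchedId)
    SELECT @kw, Id FROM t WHERE Title LIKE '%' + @kw + '%';

    SELECT MatchedId FROM KeywordCache WHERE Keyword = @kw;
END;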
You can also combine the above with SQL Server full-text search (assuming you have a valid reason not to use it outright). You could use full-text search first to shortlist the result set, and then run a SQL query against your table to get exact results, using the Ids returned by the full-text search as a parameter along with your keyword.
All this obviously assumes you have to use SQL. If not, you can explore something like Apache Solr.
You can create an indexed view, a newer SQL Server feature: index the column that you need to search and then use that view in your searches; that will give you faster results.
Use an ASCII character set and a clustered index on the char column.
The character set influences search performance because of the data size in both RAM and on disk; the bottleneck is often I/O.
Your column is at most 255 characters long, so you can use a normal index on your char field rather than full-text search, which is faster. Do not select unnecessary columns in your SELECT statement.
Lastly, add more RAM to the server and increase the cache size.
Do one thing: put a primary key on the specific column and index it in clustered form.
Then search using any method (wildcard, =, or anything else); it will search more optimally because the table is already clustered, so the engine knows where to find values (the column is already in sorted order).