We're currently investigating the load against our SQL server and looking at ways to alleviate it. During my post-secondary education, I was always told that, from a performance standpoint, it was cheaper to make SQL Server do the work. But is this true?
Here's an example:
SELECT ord_no FROM oelinhst_sql
This returns 783119 records in 14 seconds. The field is a char(8), but all of our order numbers are six digits long, so each has two leading blank characters. We typically trim this field, so I ran the following test:
SELECT LTRIM(ord_no) FROM oelinhst_sql
This returned the 783119 records in 13 seconds. I also tried one more test:
SELECT LTRIM(RTRIM(ord_no)) FROM oelinhst_sql
There is nothing to trim on the right; I was just trying to see if there was any overhead in the mere act of calling the function. It still returned in 13 seconds.
My manager was talking about moving things like string trimming out of the SQL and into the source code, but the test results suggest otherwise. My manager also says he heard somewhere that using SQL functions meant that indexes would not be used. Is there any truth to this either?
Only optimize code that you have proven to be the slowest part of your system. Your data so far indicates that SQL string manipulation functions are not affecting performance at all. Take this data to your manager.
If you use a function or type cast on a column in the WHERE clause, it can often prevent SQL Server from using indexes. This does not apply to transforming returned columns with functions.
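For example (using the ord_no column from the question; the literal order number below is made up), the first form can typically use an index on ord_no if one exists, while the second usually forces a scan because the function is applied to the column:

SELECT ord_no FROM oelinhst_sql WHERE ord_no = '  123456'

SELECT ord_no FROM oelinhst_sql WHERE LTRIM(ord_no) = '123456'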
It's typically user-defined functions (UDFs) that get a bad rap with regard to SQL performance, and they might be the source of the advice you're getting.
The reason for this is that you can build some pretty hairy functions that cause massive overhead, with the cost multiplied across every row they touch.
As you've found with RTRIM and LTRIM, this isn't a blanket reason to stop using all functions on the SQL side.
It somewhat depends on what is encompassed by "things like string trimming", but for string trimming at least I'd definitely let the database do that (there will be less network traffic as well). As for the indexes, they will still be used if your WHERE clause is just using the column itself (as opposed to a function of the column). Use of the indexes won't be affected whatsoever by using functions on the actual columns you're retrieving (just on how you're selecting the rows).
You may want to have a look at this for performance improvement suggestions: http://net.tutsplus.com/tutorials/other/top-20-mysql-best-practices/
As I said in my comment, reduce the data read per query and you will get a speed increase.
You said:
"our order numbers are six-digits long so each has two blank characters leading"
That makes me think you are storing numbers in a string; if so, why are you not using a numeric data type? The smallest integer type which will hold 6 digits is an INT (I'm assuming SQL Server), and that already saves you 4 bytes per order number. Over the number of rows you mention, that's quite a lot less data to read off disk and send over the network.
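A rough sketch of what the conversion could look like (table and column names are from the question; the new column name is made up, and you'd want to test this on a copy first and update any code that relies on the padded char(8) value):

ALTER TABLE oelinhst_sql ADD ord_no_int int NULL

UPDATE oelinhst_sql SET ord_no_int = CAST(LTRIM(ord_no) AS int)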
Fully optimise your database before looking to deal with the data outside of it; it's what a database server is designed to do, serve data.
As you found, it often pays to measure, but what I think your manager may have been referring to is something like this.
This is typically much faster
SELECT SomeFields FROM oelinhst_sql
WHERE
datetimeField > '1/1/2011'
and
datetimeField < '2/1/2011'
than this
SELECT SomeFields FROM oelinhst_sql
WHERE
Month(datetimeField) = 1
and
year(datetimeField) = 2011
even though the rows that are returned are the same
I wish to analyze the queries executed on a certain Redshift warehouse (not mine).
In order to do so I'm using a query with a join on stl_querytext and stl_query.
My question is: how come I'm also getting illegal queries (i.e. queries with invalid SQL syntax)?
When I tried it on my local Redshift I didn't see those. Also, I couldn't find relevant documentation.
Is this a configuration issue? And if I'm supposed to be getting those queries, is there a way to tell which ones are the illegal ones?
Thanks,
Nir.
So stl_querytext breaks long queries into parts identified by a sequence number. I hope you are reconstructing the parts into the original query as a first step. This can be done with the LISTAGG() function as long as the resulting query doesn't exceed the maximum text field size (about 320 parts).
Now, this is not enough to get valid SQL back in all cases, because you need to treat combining the parts differently depending on whether that section of the query is inside or outside a text string (i.e. whether whitespace is needed between parts or not).
I've done this exact process a number of times, so it is doable. I don't have a perfect process for the whitespace question, but I get close. Maybe others know the expression to get an exact recreation of the query from stl_querytext.
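A rough sketch of the reconstruction step (this assumes the combined text fits within LISTAGG's result size limit, and it does not handle the whitespace subtlety described above):

SELECT q.query,
       LISTAGG(qt.text) WITHIN GROUP (ORDER BY qt.sequence) AS full_sql
FROM stl_query q
JOIN stl_querytext qt ON qt.query = q.query
GROUP BY q.query;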
Yesterday we had a scenario where we had to get the type of a DB field and, based on that, write out the description of the field. Like:
Select ( Case DB_Type When 'I' Then 'Intermediate'
When 'P' Then 'Pending'
Else 'Basic'
End)
From DB_table
I suggested writing a DB function instead of this CASE statement, because that would be more reusable. Like:
Select dbo.GetTypeName(DB_Type)
from DB_table
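(For illustration, a minimal sketch of what dbo.GetTypeName might contain; the body below is an assumption, simply wrapping the same CASE expression:)

CREATE FUNCTION dbo.GetTypeName (@DB_Type char(1))
RETURNS varchar(20)
AS
BEGIN
    RETURN (CASE @DB_Type WHEN 'I' THEN 'Intermediate'
                          WHEN 'P' THEN 'Pending'
                          ELSE 'Basic' END)
END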
The interesting part is, one of our developers said using a database function would be inefficient, as database functions are slower than CASE statements. I searched the internet to find out which is the better approach in terms of efficiency, but unfortunately I found nothing that could be considered a satisfactory answer. Please enlighten me with your thoughts: which approach is better?
A UDF is always slower than a CASE statement.
Please refer to this article:
http://blogs.msdn.com/b/sqlserverfaq/archive/2009/10/06/performance-benefits-of-using-expression-over-user-defined-functions.aspx
The following article suggests when to use UDFs:
http://www.sql-server-performance.com/2005/sql-server-udfs/
Summary:
There is a large performance penalty paid when user-defined functions are used. This penalty shows up as poor query execution time when a query applies a UDF to a large number of rows, typically 1000 or more. The penalty is incurred because the SQL Server database engine must create its own internal cursor-like processing. It must invoke each UDF on each row. If the UDF is used in the WHERE clause, this may happen as part of filtering the rows. If the UDF is used in the select list, this happens when creating the results of the query to pass to the next stage of query processing.
It's the row-by-row processing that slows SQL Server down the most.
When using a scalar function (a function that returns one value), the contents of the function will be executed once per row, but the CASE statement will be evaluated across the entire set.
By operating against the entire set you allow the server to optimise your query more efficiently.
So the theory goes that if the same query is run both ways against a large dataset, the function should be slower. However, the difference may be trivial when operating against your data, so you should try both methods and test them to determine whether any performance trade-off is worth the increased utility of a function.
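One simple way to compare the two on your own data is to run both versions with timing statistics turned on (a sketch, reusing the names from the question):

SET STATISTICS TIME ON

SELECT (CASE DB_Type WHEN 'I' THEN 'Intermediate'
                     WHEN 'P' THEN 'Pending'
                     ELSE 'Basic' END)
FROM DB_table

SELECT dbo.GetTypeName(DB_Type)
FROM DB_table

SET STATISTICS TIME OFF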
Your developer is right. Functions will slow down your query.
https://sqlserverfast.com/?s=user+defined+ugly
Calling a function is like:
wrap the parts in paper
put them into a bag
carry it to the mechanic
let him unwrap it, do something, then wrap up the result
carry it back
use it
OK, at first glance, it seems that it must be more efficient to use SQLPrepare+SQLBindParameter+SQLExecute than to format a string (e.g. with CString::Format) and pass the complete query string to SQLExecDirect. If not, why would the second method (SQLPrepare+SQLBindParameter+SQLExecute) exist at all?
BUT... here is what I think: the driver has to, sooner or later (I suspect later, but anyway...), convert the parameters (that I feed it with SQLBindParameter) into a string representation, right? (Or maybe not?) So if I do this formatting in my application (printf-like formatting), will I have any loss in performance?
One thing I suspect is that when the connection is over the network, passing parameters as raw data and then formatting them at the server end might decrease network traffic compared to passing preformatted query strings, but let's ignore network traffic for a moment. Apart from that, is there any performance gain in using SQLPrepare+SQLBindParameter+SQLExecute instead of formatting the full query string in the application and then using SQLExecDirect?
For me, using SQLExecDirect is simpler and more convenient, so I need a good answer on whether (and when) I should opt for the other approach.
Important: if you say that the SQLPrepare+SQLBindParameter+SQLExecute approach will give better performance, I'd like to know how much! I don't mind theoretical assumptions, but I'd like to know when it is worth it in practice. My current use case is not very DB-intensive; I won't have more than 100 inserts/updates per second. Is it OK to use SQLExecDirect? In what scenarios - if ever - do I have to use SQLPrepare+SQLBindParameter+SQLExecute?
If you are inserting or updating with the same SQL (excluding parameters) repeatedly then SQLPrepare, SQLBindParameter and SQLExecute will be faster than SQLExecDirect every time. Consider:
SQLPrepare("insert into mytable (cola, colb) values(?,?);");
for (n = 0; n < 10000; n++) {
SQLBindParameter(1, n);
SQLBindParameter(2, n);
SQLExecute;
}
and
for (n = 0; n < 10000; n++) {
char sql[1000];
sprintf("insert into mytable (cola, colb) values(%d,%d)", n, n);
SQLExecDirect(sql);
}
In the first example, the statement is prepared once, and hence the DB engine only has to parse it once and work out an execution plan once. In the second example, the SQL and parameters are passed every time, and the SQL looks different every time, so it is parsed each time.
In addition, in the first example you can use arrays of parameters to pass multiple rows of parameters in one go - see SQL_ATTR_PARAMSET_SIZE.
See 3.1.2 Inserting data for a worked example and an indication of how much time you can save.
Ignore network traffic, you'll just be second guessing what happens under the hood in the driver.
ADDITION:
Regarding your description of what happens with parameters, where you seem to think the driver will convert them to strings: the other advantage of binding parameters is that you can provide them in one type and ask the driver to use them as another type. You may also come across a parameter type which cannot easily be represented as a string without adding some sort of conversion function, which could be avoided with a bound parameter.
Yes, it's a bad idea, and for two reasons:
Performance
SQLPrepare causes the SQL statement to be parsed, and depending on the SQL statement that can be time-consuming. If you're using a DB on another server, the statement might get sent to it for parsing. Even if the parsing takes only, say, 10% of the time it takes to execute your whole query, you save that time as soon as you execute the statement a second time. That may be the case when you're inserting multiple rows, or running the same "select" again.
Of course the SQL statement passed must always be the same static string. Some SQL frameworks may even do prepared statement caching for you; I don't know if ODBC does this. If you want real performance numbers, you have to measure for yourself - every query is different (and might even depend on the table contents, too).
SQL Injection
No matter where you say the data you're formatting with CString::Format (or any other method) comes from, you might be at risk of SQL injection. Even if you're using strings from your own sources today, sometime later you or someone else may change your code to accept data from outside, and then you're vulnerable to SQL injection. If you need more info about SQL injection, just search Stack Overflow; I'm sure there are some good questions about it.
I'm working with a relatively large SQL Server 2000 DB at the moment. It's 80 GB in size and has millions and millions of records.
I currently need to return a list of names that contain at least one of a series of illegal characters. By illegal characters I just mean an arbitrary list of characters defined by the customer. In the example below I use question mark, semicolon, period and comma as the illegal character list.
I was initially thinking to do a CLR function that worked with regular expressions, but as it's SQL server 2000, I guess that's out of the question.
At the moment I've done it like this:
select x from users
where
columnToBeSearched like '%?%' OR
columnToBeSearched like '%;%' OR
columnToBeSearched like '%.%' OR
columnToBeSearched like '%,%' OR
otherColumnToBeSearched like '%?%' OR
otherColumnToBeSearched like '%;%' OR
otherColumnToBeSearched like '%.%' OR
otherColumnToBeSearched like '%,%'
Now, I'm not a SQL expert by any means, but I get the feeling that the above query will be very inefficient. Doing 8 wildcard searches on a table with millions of records seems like it could slow the system down rather seriously. While it seems to work fine on test servers, I am getting the "this has to be completely wrong" vibe.
As I need to execute this script on a live production server eventually, I hope to achieve good performance, so as not to clog the system. The script might need to be expanded later on to include more illegal characters, but this is very unlikely.
To sum up: My aim is to get a list of records where either of two columns contain a customer-defined "illegal character". The database is live and massive, so I want a somewhat efficient approach, as I believe the above queries will be very slow.
Can anyone tell me the best way for achieving my result? Thanks!
/Morten
It doesn't get used much, but the LIKE operator accepts patterns in a similar (but much simplified) way to regex. This link is the MSDN page for it.
In your case you could simplify to (untested):
select x from users
where
columnToBeSearched like '%[?;.,]%' OR
otherColumnToBeSearched like '%[?;.,]%'
Also note that you can create the LIKE pattern as a variable, allowing for the customer defined part of your requirements.
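For example (a sketch; the variable name is made up):

DECLARE @illegal varchar(50)
SET @illegal = '%[?;.,]%'

SELECT x FROM users
WHERE columnToBeSearched LIKE @illegal
   OR otherColumnToBeSearched LIKE @illegal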
One other major optimization: If you've got an updated date (or timestamp) on the user row (for any audit history type of thing), then you can always just query rows updated since the last time you checked.
If this is a query that will be run repeatedly, you are probably better off creating an index for it. The syntax escapes me at the moment, but you could probably create a computed column (edit: probably a PERSISTED computed column) which is 1 if columnToBeSearched or otherColumnToBeSearched contain illegal characters, and 0 otherwise. Create an index on that column and simply select all rows where the column is 1. This assumes that the set of illegal characters is fixed for that database installation (I assume that that's what you mean by "specified by the customer"). If, on the other hand, each query might specify a different set of illegal characters, this won't work.
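A rough sketch of that idea (the column and index names are made up; check the SET-option and determinism requirements for indexing computed columns on your version, and note that the PERSISTED keyword itself only exists from SQL Server 2005 onwards):

ALTER TABLE users ADD HasIllegalChar AS
    (CASE WHEN columnToBeSearched LIKE '%[?;.,]%'
            OR otherColumnToBeSearched LIKE '%[?;.,]%'
          THEN 1 ELSE 0 END)

CREATE INDEX IX_users_HasIllegalChar ON users (HasIllegalChar)

SELECT x FROM users WHERE HasIllegalChar = 1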
By the way, if you don't mind the risk of reading uncommitted rows, you can run the query in a transaction with the isolation level READ UNCOMMITTED, so that you won't block other transactions.
You can try to partition your data horizontally and "split" your query into a number of smaller queries. For instance you can do:
SELECT x FROM users
WHERE users.ID BETWEEN 1 AND 5000
AND -- your filters on columnToBeSearched
Putting your results back together in one list may be a little inconvenient, but if it's a report you're only extracting once (or once in a while) it may be feasible.
I'm assuming ID is the primary key of users, or a column that has an index defined, which means SQL Server should be able to create an efficient execution plan where it evaluates users.ID BETWEEN 1 AND 5000 (fast) before trying to check the filters (which may be slow).
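A sketch of what the batched version could look like (the batch size and variable names are made up):

DECLARE @lo int, @max int
SELECT @max = MAX(ID) FROM users
SET @lo = 1

WHILE @lo <= @max
BEGIN
    SELECT x FROM users
    WHERE ID BETWEEN @lo AND @lo + 4999
      AND (columnToBeSearched LIKE '%[?;.,]%'
           OR otherColumnToBeSearched LIKE '%[?;.,]%')

    SET @lo = @lo + 5000
END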
Look up PATINDEX. It allows you to put in a set of characters: PATINDEX('%[._]%', ColumnName) returns 0, or the position of the first occurrence of an illegal character found in a given value. Hope this helps.
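For example, applied to the question's columns and illegal-character list (a sketch):

SELECT x FROM users
WHERE PATINDEX('%[?;.,]%', columnToBeSearched) > 0
   OR PATINDEX('%[?;.,]%', otherColumnToBeSearched) > 0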
For performance and simplicity reasons I would like to get the contents of a DATETIME column in my MySQL 3.x server as seconds (or any numeric type really - I just want to avoid all the apparent nastiness of timezones when using UNIX_TIMESTAMP() [the dates in my table are indeed from different locales so I'd rather not have any doubt as to whether or not some timezone-compensation weirdness is going on behind my back]).
The TO_SECONDS() function seemed ideal, until I found out that it only works on newer MySQL installations (upgrading is not an option)...
I thought about doing something like this:
SELECT (TO_DAYS(Timestamp)-730486)*86400+TIME_TO_SEC(Timestamp)
To manually calculate the number of seconds elapsed since 2000-01-01. But it seems like it might put unnecessary load on the server by forcing it to make a temporary table or something?
I could also just do:
SELECT TO_DAYS(Timestamp), TIME_TO_SEC(Timestamp)
and combine the results myself, but that makes it less simple on the code-side.
What's the best compromise? I'll be fetching a large number of rows (on the order of 10^6) per query, so both client- and server-side performance are not entirely unimportant.
Thanks.
(post edited to reduce confusion)
Firstly, just to make sure, the new field will be a BIGINT... correct?
Can you use explicit casting to prevent the overflow?
SELECT CAST(TO_DAYS(Timestamp)*86400 + TIME_TO_SEC(Timestamp) AS UNSIGNED INTEGER)
Or perhaps use an intermediate string before populating the new BIGINT field?
SELECT CAST(TO_DAYS(Timestamp)*86400 + TIME_TO_SEC(Timestamp) AS CHAR(11))