This is with PostgreSQL.
A column in a table contains string values with punctuation. The values are "aac", ".aaa", "aa_b", etc. When this column is specified in the ORDER BY clause, the order of the results is almost random. The strings starting with a period should appear at the top, but they don't; they appear somewhere in the middle.
Surprisingly, this behavior is seen with only one database. The same query works fine on a database on another host.
What could be the possible reason for this?
The "order by" (string comparison) behaviour depends on the cluster's locale.
First, check the EXPLAIN output and see how it's doing the sort.
If it's calling a user-defined comparison function, look at that function.
If it's walking an index, see if that index is using an incorrect sorting function (one that's not transitive or some such).
If EXPLAIN doesn't show anything odd, check the cluster's locale - perhaps it's doing the comparison using a locale that ignores certain characters.
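As a quick check, here is a minimal sketch (the table and column names are placeholders) for inspecting the locale and forcing a byte-wise comparison:
SHOW lc_collate;
SELECT datname, datcollate, datctype FROM pg_database;
-- Forcing the "C" collation sorts byte-wise, so the '.aaa' values should come
-- first (the per-expression COLLATE clause requires PostgreSQL 9.1 or later):
SELECT colname FROM mytable ORDER BY colname COLLATE "C";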
I came across some SQL in an application which had no space before the "ORDER BY" clause. I was surprised that this even works.
Given a table of numbers called [counter], where there is simply one column, counter_id, containing an incrementing list of integers, this SQL works fine in Microsoft SQL Server 2012:
select
*
FROM [counter] c
where c.counter_id = 1000ORDER by counter_id
This also works with strings, e.g.:
WHERE some_string = 'test'ORDER BY something
My question is, are there any potential pitfalls or dangers with this query? And conversely, are there any benefits? Other than saving, what, 8 bits of network traffic for that whitespace (which may well be a consideration in some applications)?
Let me explain the reason why this works with numbers and strings.
The reason is that identifiers cannot start with a digit, unless the name is escaped. Basically, the first thing that happens to a SQL query is tokenization. That is, the components of the query are broken into identifiers and keywords, which are then analyzed.
In SQL Server, keywords, identifiers, and function names (and so on) cannot start with a digit (unless the name is escaped, of course). So, when the tokenizer encounters a digit, it knows that it has a number. The number ends when a non-digit character is encountered. So, a sequence of characters such as 1000ORDER BY is easily turned into three tokens: 1000, ORDER, and BY.
Similarly, the first time that a single quote is encountered, it always represents a string literal. The string literal ends when the final single quote is encountered. The next set of characters represents another token.
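To illustrate, here is a hedged sketch reusing the [counter] table from the question (the variable name is made up): the trick works after numeric and string literals but not after names, because a name simply absorbs the letters that follow it.
-- Parses: an identifier cannot start with a digit, so 1000 and ORDER become
-- separate tokens.
SELECT * FROM [counter] c WHERE c.counter_id = 1000ORDER BY counter_id;
-- Fails: @idORDER is read as a single (undeclared) variable name, so the
-- missing space is not recovered here.
DECLARE @id INT = 1000;
SELECT * FROM [counter] c WHERE c.counter_id = @idORDER BY counter_id;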
Let me add that there is exactly zero reason to ever use these nuances. First, these rules are properties of SQL Server's tokenization and do not necessarily apply to other databases. Second, the purpose of SQL is for humans to be able to express queries. It is way, way more important that we be able to read them.
As jarlh mentioned, there may be a difference during scanning and parsing of the tokens, but the execution plan is created correctly either way, so there is no significant advantage or disadvantage.
When the parser examines characters, it checks for keywords, identifiers, and string constants, and matches the overall semantic and syntactic structure of the language. Since ORDER BY is a keyword and the SQL parser knows its possible syntactic locations in a query, it will interpret it accordingly without throwing any error. This is the reason why your ORDER BY does not throw an error.
Query:
SELECT StartDate, EndDate, RIGHT(Sector, 1 )
FROM Table1
ORDER BY Right(Sector, 1), StartDate
By looking at this, the query should order everything by sector, followed by the start date. This query had worked for quite a while, until yesterday it did not order properly: for some reason, Sector 2 came before Sector 1.
The data type for Sector is int, not null. After wrapping Sector in a TRIM function, it seems to work fine.
New Query:
SELECT StartDate, EndDate, RIGHT(Sector, 1 )
FROM Table1
ORDER BY Right(TRIM(Sector), 1), StartDate
Which I found really weird, since it's supposed to pick out only one character, so why are there leading spaces?
Is there an issue with using the RIGHT function on an int before converting the type? Or is it something else?
Thanks for the help everyone!
-Edit- The RIGHT function should return either 1, 2, 3, or 4; however, when ordering, 2 comes before 1.
To clarify, the column Sector contains an int value; we can determine its location by obtaining the last digit (which is what the previous coder did).
MS Access 2003 has a curious little feature (I can't speak for the other versions):
Make a simple query. Sort by Column A Ascending. Save the query.
Run the query. When you see the output, sort by Column A Descending using the toolbar option. Save & close.
Run the query again. Your new sort will have overridden the sort that you saved in the query.
I think you or someone else probably just opened the query out of curiosity, sorted by Sector Descending, and when prompted to save Design Changes, you chose Yes (even though technically you didn't make any). The easiest way I found to restore the original sort is to edit the query and save it.
You've got your data stored wrong if you need to sort on a subcharacter of a numeric field.
That said, in certain contexts, VBA functions reserve a space in string representations of numbers for the sign. A nonsensical example of this would be:
?Len("12345")
 5
Notice the space at the beginning of the output (where the - would be if the number returned by Len() could be negative). I thought this was a result of coercing a number to a string value, but that's not it, and I couldn't replicate the problem. But that would likely be the source of the problem, and, of course, trimming off the leading space would take care of the issue.
But that's two function calls for each line, and then you're sorting by it, and that means no use of indexes, so it's going to be slow relative to an ORDER BY that can use indexes. So, I'd conclude you have a schema error, in that you're giving meaning to a subpart of the data stored in the field.
It seems pretty obvious that you have a blank space at the end of the Sector field that the trim is removing.
I have a column of database names like so:
testdb_20091118_124925
testdb_20091119_144925
testdb_20091119_145925
etc...
Is there a more elegant way of returning only similar records than using this LIKE expression:
select * from sys.databases where name
LIKE 'testdb[_][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][_][0-9][0-9][0-9][0-9][0-9][0-9]'
No, no "elegant" solution, I'm afraid.
Furthermore, introducing functions, whether "native" or CLR, in the WHERE clause would prevent SQL Server from using indexes to resolve the predicate (it would have to scan the whole table, unless some other predicate helped in part).
A few things to notice:
the use of the underscore may be acceptable here, since the targeted values seem to follow a very regular pattern. However, the underscore, when used with LIKE, is itself a wildcard (matching one and exactly one character). If you truly want to specify an underscore, "escape" it by putting it in brackets, i.e. 'abc[_]def' will match 'abc_def' precisely, but not 'abcXdef' for example (see the snippet after this list).
the expression could be made a bit more selective and shorter with things like
'testdb_20[0-9][0-9][0-1][0-9][0-3][0-9][_][0-9][0-9][0-9][0-9][0-9][0-9]'
i.e. assuming dates will be in this century and ruling out days greater than 39, etc.
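As a small demonstration of the escaping point above (just a sketch using literal values):
-- Returns 1: the bare underscore is a wildcard and matches the X.
SELECT CASE WHEN 'abcXdef' LIKE 'abc_def' THEN 1 ELSE 0 END;
-- Returns 0: the bracketed underscore matches only a literal underscore.
SELECT CASE WHEN 'abcXdef' LIKE 'abc[_]def' THEN 1 ELSE 0 END;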
No, it is not possible.
By the way, you need to put your underscore inside brackets, because on its own it matches any single character.
Due to a weird request, I can't put null in a database if there is no value. I'm wondering what I can put in the stored procedure for nothing instead of null.
For example:
insert into blah (blah1) values (null)
Is there something like nothing or empty for "blah1" instead of using null?
I would push back on this bizarre request. That's exactly what NULL is for in SQL, to denote a missing or inapplicable value in a column.
Is the requester experiencing grief over SQL logic with NULL?
edit: Okay, I've read your reply with the extra detail about this job assignment (btw, generally you should edit your original question instead of posting more information in an answer).
You'll have to declare all columns as NOT NULL and designate a special value in the domain of that column's data type to signify "no value." The appropriate value to choose might be different on a case-by-case basis, e.g. zero may signify nothing in a person_age column, but it might have significance in an items_in_stock column.
You should document the no-value value for each column. But I suppose they don't believe in documentation either. :-(
Depends on the data type of the column. For numbers (integers, etc.) it could be zero (0), but for varchar it can be an empty string ('').
I agree with other responses that NULL is best suited for this because it transcends all data types denoting the absence of a value. Therefore, zero and empty string might serve as a workaround/hack but they are fundamentally still actual values themselves that might have business domain meaning other than "not a value".
(If only the SQL language supported a "Not Applicable" (N/A) value type that would serve as an alternative to NULL...)
Is null a valid value for whatever you're storing?
Use a sentinel value like INT32.MaxValue, an empty string, or "XXXXXXXXXX" and assume it will never be a legitimate value
Add a bit column 'Exists' that you populate with true at the same time you insert.
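Putting those two ideas together, a minimal sketch (blah and blah1 come from the question; the data type, the flag column name, and the sentinel choice are assumptions):
CREATE TABLE blah (
    blah1        INT NOT NULL DEFAULT 0,  -- 0 acts as the "no value" sentinel
    blah1_exists BIT NOT NULL DEFAULT 0   -- 1 only when blah1 holds a real value
);
INSERT INTO blah (blah1, blah1_exists) VALUES (0, 0);   -- "nothing"
INSERT INTO blah (blah1, blah1_exists) VALUES (42, 1);  -- a real value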
Edit: But yeah, I'll agree with the other answers that trying to change the requirements might be better than trying to solve the problem.
If you're using a varchar or equivalent field, then use the empty string.
If you're using a numeric field such as int then you'll have to force the user to enter data, else come up with a value that means NULL.
I don't envy you your situation.
There's a difference between NULLs as assigned values (e.g. inserted into a column) and NULLs as a SQL artifact (as for a field in a missing record of an OUTER JOIN, which might be a foreign concept to these users; lots of people use Access, or any database, just to maintain single-table lists). I wouldn't be surprised if naive users would prefer to use an alternative for assignments, and though repugnant, it should work OK. Just let them use whatever they want.
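To illustrate that second kind of NULL with a sketch (the tables and columns here are hypothetical), the NULL below is manufactured by the join rather than stored anywhere:
SELECT p.name, o.order_id
FROM people AS p
LEFT OUTER JOIN orders AS o ON o.person_id = p.person_id;
-- o.order_id is NULL for any person with no matching orders row.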
There is some validity to the requirement to not use NULL values. NULL values can cause a lot of headache when they are in a field that will be included in a JOIN or a WHERE clause or in a field that will be aggregated.
Some SQL implementations (such as MSSQL) place restrictions on NULLable fields in indexes and keys.
MSSQL especially behaves in unexpected ways when NULL is evaluated for equality. Does a NULL value in a PaymentDue field mean the same as zero when we search for records that are up to date? What if we have names in a table and somebody has no middle name? It is conceivable that either an empty string or a NULL could be stored, but how do we then get a comprehensive list of people that have no middle name?
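For that middle-name case, here is a sketch of the query you end up needing (table and column names are assumed); both predicates are required because = never matches NULL:
SELECT first_name, last_name
FROM people
WHERE middle_name = '' OR middle_name IS NULL;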
In general I prefer to avoid NULL values. If you cannot represent what you want to store using either a number (including zero) or a string (including the empty string as mentioned before) then you should probably look closer into what you are trying to store. Perhaps you are trying to communicate more than one piece of data in a single field.
I have a column containing the strings 'Operator (1)' and so on until 'Operator (600)' so far.
I want to get them numerically ordered and I've come up with
select colname from table order by
cast(replace(replace(colname,'Operator (',''),')','') as int)
which is very very ugly.
Better suggestions?
It's that, InStr()/SubString(), changing Operator(1) to Operator(001), storing the n in Operator(n) separately, or creating a computed column that hides the ugly string manipulation. What you have seems fine.
If you really have to leave the data in the format you have - and adding a numeric sort order column is the better solution - then consider wrapping the text manipulation up in a user defined function.
select colname from table order by dbo.udfSortOperator(colname)
It's less ugly and gives you some abstraction. There's additional overhead from the function call, but on a table containing low thousands of rows in a not-too-heavily-hit database server it's not a major concern. Make notes in the function to optimise later as required.
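A sketch of what such a function might look like (the name comes from the example above; the body just wraps the expression from the question, so treat the details as assumptions):
CREATE FUNCTION dbo.udfSortOperator (@colname VARCHAR(50))
RETURNS INT
AS
BEGIN
    -- Strip the 'Operator (' prefix and the closing ')' and convert to int.
    RETURN CAST(REPLACE(REPLACE(@colname, 'Operator (', ''), ')', '') AS INT);
END;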
My answer would be to change the problem. I would add an operatorNumber field to the table if that is possible. Change the update/insert routines to extract the number and store it. That way the string conversion hit is only once per record.
The ordering logic would require the string conversion every time the query is run.
Well, first define the meaning of that column. Is operator a name so you can justify using chars? Or is it a number?
If the field is a name, then you will use chars, and you would want to determine a fixed length. Pad all operator names with zeros on the left. Define naming rules for operators (e.g. no letters, or the codes you would use in a series like "A001").
An index will sort the physical data on the server, and a properly defined text naming scheme will sort them in a query. You would want both.
If the operator is a number, then you got the data type for that column wrong and needs to be changed.
Indexed computed column
If you find yourself ordering on or otherwise querying the operator column often, consider creating a computed column for its numeric value and adding an index on it. This will give you a computed, persisted column (which sounds like an oxymoron, but isn't).
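A minimal sketch of that idea (the table and index names are assumptions; colname and the expression come from the question):
ALTER TABLE MyTable
    ADD OperatorNumber AS CAST(REPLACE(REPLACE(colname, 'Operator (', ''), ')', '') AS INT) PERSISTED;
CREATE INDEX IX_MyTable_OperatorNumber ON MyTable (OperatorNumber);
-- The ORDER BY can then use the indexed, precomputed value:
SELECT colname FROM MyTable ORDER BY OperatorNumber;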