Here is my query to locate a single record in a table:
SELECT TOP 1 Tests.Release, Tests.Result FROM Tests WHERE
Tests.TestCaseID = 104209
ORDER BY Tests.Release
Tests.Release is a column containing alphanumeric values, which are not comparable using numerical comparison operators. I want to filter records in my query based on one known release string and whatever sorts above that record. If the release were a number I would write WHERE Tests.Release > 10. But again, my release is a complex string, which can only be sorted.
Update as requested:
Release
-------
A1B
A2B
A3B---^
A4B
A5B
The known release is A3B, so I need to look for my row among A2B and A1B.
You can still use comparison operators on a string. They follow the normal Access text collation rules as described here: http://support.microsoft.com/kb/129719
SELECT Tests.Release, Tests.Result
FROM Tests
WHERE Tests.TestCaseID = 104209 AND Tests.Release <='A1B2C2'
ORDER BY Tests.Release
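Applied to the sample data in the update (a sketch under the same collation assumptions), everything that sorts above the known release A3B can be fetched with a plain < comparison:
SELECT Tests.Release, Tests.Result
FROM Tests
WHERE Tests.TestCaseID = 104209 AND Tests.Release < 'A3B'
ORDER BY Tests.Release
For the sample list above, this would return the A1B and A2B rows.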
If your Tests.Release will not fall into a sortable historical list without complex manipulation, I would add a timestamp column and date each release and sort by that instead.
I am facing this issue:
Is there a way in PostgreSQL to put aggregated timestamp data into an array (for example using the array_agg function) and then perform a match on some condition?
I am doing something similar with LIKE on aggregated strings (using string_agg(column, ';')). But how do I perform something similar on timestamps?
So if the result were '{10.10.2021,20.12.2021,1.1.1996}' as timestamp_array, could I filter rows that have at least one array element that is after some input?
For example, ... WHERE 31.12.2021 > timestamp_array ... would not match the row above because there is no array element after 31.12.2021.
But if I query ... WHERE 31.12.1996 > timestamp_array ..., the row above would be matched (because at least one element of the array is in the given interval).
First, you would use standard date formats. Then you can use:
where '2021-12-31' > any (timestamp_array)
Here is a db<>fiddle to illustrate the idea.
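For context, a minimal sketch of how the aggregation and an ANY filter could fit together, assuming a hypothetical table events(id, ts):
SELECT id, array_agg(ts) AS timestamp_array
FROM events
GROUP BY id
HAVING timestamp '1996-12-31' < ANY (array_agg(ts));  -- keep groups with at least one ts after 1996-12-31
The HAVING line keeps only those groups where at least one aggregated timestamp is later than the given input.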
I would like to filter rows that have at least one array element that is after some input?
You can use the ANY construct as has been advised.
WHERE '1996-12-31'::timestamp < ANY ('{2021-10-10, 2021-12-20, 1996-01-01}'::timestamp[])
Has to be <, not >, obviously.
Your "timestamps" look a lot like dates - timestamp input accepts that, too.
But always use the recommended ISO 8601 format (as demonstrated), otherwise your input depends on the settings of the current session.
See:
IN vs ANY operator in PostgreSQL
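To illustrate that point, a small demonstration (a sketch; the literal and settings come from the PostgreSQL date-input rules, not from the question) of how the same ambiguous input is read differently per session:
SET datestyle = 'ISO, DMY';
SELECT '1/8/1999'::timestamp;  -- read as 1 August 1999
SET datestyle = 'ISO, MDY';
SELECT '1/8/1999'::timestamp;  -- read as 8 January 1999
The ISO form '1999-08-01' is unambiguous regardless of the DateStyle setting.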
But chances are, there is a much more efficient way. You speak of "aggregated timestamp data". It is typically much more efficient to check before aggregating, not least because that can use indexes, as opposed to your approach. Typically, EXISTS does the job. Something like:
SELECT ...
FROM tbl t
WHERE EXISTS (SELECT FROM tbl t1 WHERE t1.id = t.id AND t1.timestamp_column > '1996-12-31')
GROUP BY ...
Start a new question with details of your query to get a fitting solution.
I have 3 fields which contain only text. However, I want to add a calculated field which counts the number of commas in each of these 3 fields and displays it separately in an adjacent column. The snippet of SQL I use is shown below. How can I build the calculated field?
SELECT week, client_I, client_II, client_III
FROM quality_control_test;
Please advise!
Well, you can "count" the number of occurrences of a given character in a string by using this:
length(c) - length(replace(c,',',''))
I assume you can figure out how to leverage that for your own query ;)
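For example, applied to the query from the question (a sketch, assuming a dialect such as MySQL or PostgreSQL where LENGTH() and REPLACE() are available):
SELECT week,
       client_I,
       LENGTH(client_I) - LENGTH(REPLACE(client_I, ',', '')) AS client_I_commas,
       client_II,
       LENGTH(client_II) - LENGTH(REPLACE(client_II, ',', '')) AS client_II_commas,
       client_III,
       LENGTH(client_III) - LENGTH(REPLACE(client_III, ',', '')) AS client_III_commas
FROM quality_control_test;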
The user searches for a partial postcode such as 'RG20' which should then be displayed in a specific order. The query uses the MATCH AGAINST method in boolean mode where an example of the postcode in the database would be 'RG20 7TT' so it is able to find it.
At the same time it also matches against a list of other postcodes which are in its radius (which is a separate query).
I can't seem to find a way to order by a partial match, e.g.:
ORDER BY FIELD(postcode, 'RG20', 'RG14', 'RG18','RG17','RG28','OX12','OX11')
DESC, city DESC
Because it's not specifically looking for RG20 7TT, I don't think it can make a partial match.
I have tried SUBSTR (postcode, -4) and looked into left and right, but I haven't had any success using 'by field' and could not find another route...
Sorry this is a bit long winded, but I'm in a bit of a bind.
A UK postcode splits into 2 parts, the last section always being 3 characters and within my database there is a space between the two if that helps at all.
Although there is a DESC after the postcodes, I do need them to display in THAT particular order (RG20, RG14, then RG18, etc.). I'm unsure if specifying descending will remove the ordering or not.
Order By Case
When postcode Like 'RG20%' Then 1
When postcode Like 'RG14%' Then 2
When postcode Like 'RG18%' Then 3
When postcode Like 'RG17%' Then 4
When postcode Like 'RG28%' Then 5
When postcode Like 'OX12%' Then 6
When postcode Like 'OX11%' Then 7
Else 99
End Asc
, City Desc
You're on the right track, trimming the field down to its first four characters:
ORDER BY FIELD(LEFT(postcode, 4), 'RG20', 'RG14', ...),
-- or SUBSTRING(postcode FROM 1 FOR 4)
-- or SUBSTR(postcode, 1, 4)
Here you don't want DESC.
(If your result set contains postcodes whose prefixes do not appear in your FIELD() ordering list, you'll have a bit more work to do, since those records will otherwise appear before any explicitly ordered records you specify. Before 'RG20' in the example above.)
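One possible way to handle that (a sketch for MySQL, relying on FIELD() returning 0 for values not in the list) is to sort the unmatched rows last first, and only then by list position:
ORDER BY FIELD(LEFT(postcode, 4), 'RG20', 'RG14', 'RG18', 'RG17', 'RG28', 'OX12', 'OX11') = 0,
         FIELD(LEFT(postcode, 4), 'RG20', 'RG14', 'RG18', 'RG17', 'RG28', 'OX12', 'OX11'),
         city DESC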
If you want a completely custom sorting scheme, then I only see one way to do it...
Create a table to hold the values upon which to sort, and include a "sequence" or "sort_order" field. You can then join to this table and sort by the sequence field.
One note on the sequence field. It makes sense to create it as an int as... well, sequences are often ints :)
If there is any possibility of changing the sort order, you may want to consider making it alphanumeric... It is a lot easier to insert "5A" between "5" and "6" than it is to insert a number into a sequence of integers.
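A rough sketch of that approach (the table and column names here are hypothetical):
CREATE TABLE postcode_sort (
    prefix VARCHAR(4) PRIMARY KEY,
    sort_order INT NOT NULL
);

INSERT INTO postcode_sort (prefix, sort_order) VALUES
    ('RG20', 10), ('RG14', 20), ('RG18', 30), ('RG17', 40),
    ('RG28', 50), ('OX12', 60), ('OX11', 70);

SELECT a.*
FROM addresses a
LEFT JOIN postcode_sort s ON s.prefix = LEFT(a.postcode, 4)
ORDER BY COALESCE(s.sort_order, 9999), a.city DESC;
Leaving gaps in sort_order (10, 20, 30...) gives the same room to slot new prefixes in later that the alphanumeric suggestion is after.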
Another method I use is utilising the charindex function:
order by charindex(substr(postcode,1,4),'RG20RG14RG18...',1)
I think that's the syntax anyway, I'm just doing this in SAS at the moment so I've had to adapt from memory!
But essentially the sooner you hit your desired part of the string, the higher the rank.
If you're trying to rank on a large variety of postcodes then a case statement gets pretty hefty.
Suppose I want to order the records by a field (string data type) called STORY_LENGTH. This field is a multi-valued field, and I represent the multiple values using commas. For example, for record1 its value is "1", for record2 its value is "1,3", and for record3 its value is "1,2". Now, when I want to order the records according to STORY_LENGTH, the records are ordered like this: record1 > record3 > record2. It's clear that STORY_LENGTH's data type is string and ORDER BY ASC is ordering the values as strings. But here comes the problem. For example, when record4="10" and record5="2" and I try to order, it looks like record4 > record5, which obviously I don't want; that happens because as strings "2" > "10", and I am only using a string format because of the multiple values of the field.
So, can anybody help me out with this? I need a good idea for a fix.
Thanks
Multi-valued fields as you describe mean your data model is broken and should be normalized.
Once this is done, querying becomes much simpler.
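For example, a possible normalized layout (the table and column names are hypothetical) stores one value per row, so ordinary numeric ordering applies:
CREATE TABLE stories (
    story_id INT PRIMARY KEY
);

CREATE TABLE story_lengths (
    story_id INT NOT NULL,
    length_value INT NOT NULL,
    PRIMARY KEY (story_id, length_value)
);

SELECT s.story_id, MIN(l.length_value) AS min_length
FROM stories s
JOIN story_lengths l ON l.story_id = s.story_id
GROUP BY s.story_id
ORDER BY min_length;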
From what I've understood, you want to sort items by the second or the first number in comma-separated values stored in a VARCHAR field. The implementation depends on the database used; for example, in MySQL it would look like:
SELECT * FROM stories
ORDER BY CAST(COALESCE(SUBSTRING_INDEX(story_length, ',', -1), '0') AS SIGNED)
Yet it is generally not good to use such sorting, for performance reasons, as sorting would require scanning the whole table instead of using an index on the field.
Edit: After the edits it looks like you want to sort on the first value and ignore the value(s) after the comma. As, according to a comment above, changes in the database design are not an option, just use the following code for sorting:
SELECT * FROM stories
ORDER BY CAST(COALESCE(NULLIF(SUBSTRING_INDEX(story_length, ',', 1), ''), '0') AS SIGNED)
I have a varchar column in a database, and a requirement has come in that a user can enter a range from/to, e.g. ABC001 to ABC100.
I have the following query but feel it might not be strict enough to work out if any values within that range exist.
SELECT count(*) FROM MyTable where MyColumn between 'ABC001' and 'ABC005'
I have a feeling an ORDER BY should be used, or is there a better way to calculate the existence of values within an alphanumeric range?
No ORDER BY is required. That should be perfect.
If you want to speed up that operation you can create an index on it.
The ORDER BY operation is done at the end of query execution, so the same data will be retrieved either way.
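For instance (a sketch; the index name is arbitrary):
CREATE INDEX ix_mytable_mycolumn ON MyTable (MyColumn);
With such an index in place, the range condition on MyColumn can be answered with an index range scan rather than a full table scan.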
OP said:
or is there a better way to calculate the existence of values within an alphanumeric range
The best way would be:
SELECT count(*) FROM MyTable where MyColumn>='ABC001' and MyColumn<='ABC005'
I find most people can't remember if BETWEEN includes or excludes the "end points". By just always using >= and/or > and/or <= and/or < you have more clarity and flexibility.
Any ORDER BY would be applied to the resulting set of rows that meet the WHERE condition, and has nothing to do with the WHERE filtering. You can use it if you want the final result set in a particular order, but it will have no effect on which rows are included in the results.