How to omit columns for specific rows in SQL?

As an example:
select
    pre, sze, fdm_pre, val
from
    form_data_stage
where
    fdm_pre in (1, 2, 3, 4)
order by
    pre;
this will return values for all the columns pre, sze, fdm_pre, val for any of the fdm_pre values listed, i.e. (1,2,3,4). However, I only care about the pre and sze values when fdm_pre is 1.
I could write a query such as
select
    case when fdm_pre = 1 then pre else null end as pre,
    case when fdm_pre = 1 then sze else null end as sze,
    fdm_pre,
    val
from
    form_data_stage
where
    fdm_pre in (1, 2, 3, 4)
order by
    pre;
But, is there some standard way of dealing with this situation? Is it generally more efficient to return all the columns, even if they aren't used? Or, would it be better to do some conditional checking as in the second query? The pre and sze columns are integer values.

It's not efficient to return all the columns when they are not used, especially if the unused columns are massive in size (BLOB, CLOB, TEXT, ARRAY, etc.).
In your particular example the columns "not returned" are small ones (measured in bytes), so it won't really matter if you produce nulls instead.

You can build the query as a string (dynamic SQL) instead of this kind of static script. That way you put the desired columns into the string, and with an if statement you can decide which columns are shown and which are unnecessary.
I can provide a sample if you need one.
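For instance, a minimal sketch in T-SQL (assuming SQL Server; the @includeDetail flag and variable names here are illustrative, not from the question):

declare @cols nvarchar(100) = 'fdm_pre, val';
declare @includeDetail bit = 1;  -- hypothetical flag: include pre/sze only when wanted

if @includeDetail = 1
    set @cols = 'pre, sze, ' + @cols;

declare @sql nvarchar(400) =
    'select ' + @cols +
    ' from form_data_stage where fdm_pre in (1, 2, 3, 4) order by fdm_pre;';

exec sp_executesql @sql;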

I think keeping the query simple and clear to read is important. I would suggest you return all the relevant columns (those that might be useful) and deal with the two options of column usage on the app logic side.

Related

Efficient way to select one from each category - Rails

I'm developing a simple app to return a random selection of exercises, one for each bodypart.
bodypart is an indexed enum column on an Exercise model. DB is PostgreSQL.
The below achieves the result I want, but feels horribly inefficient (hitting the db once for every bodypart):
BODYPARTS = %w(legs core chest back shoulders).freeze

@exercises = BODYPARTS.map do |bp|
  Exercise.public_send(bp).sample
end.shuffle
So, this gives a random exercise for each bodypart, and mixes up the order at the end.
I could also store all exercises in memory and select from them; however, I imagine this would scale horribly (there are only a dozen or so seed records at present).
@exercises = Exercise.all

BODYPARTS.map do |bp|
  @exercises.select { |e| e[:bodypart] == bp }.sample
end.shuffle
Benchmarking these shows the select approach as the more effective on a small scale:
Queries: 0.072902 0.020728 0.093630 ( 0.088008)
Select: 0.000962 0.000225 0.001187 ( 0.001113)
MrYoshiji's answer: 0.000072 0.000008 0.000080 ( 0.000072)
My question is whether there's an efficient way to achieve this output, and, if so, what that approach might look like. Ideally, I'd like to keep this to a single db query.
Happy to compose this using ActiveRecord or directly in SQL. Any thoughts greatly appreciated.
From my comment, you should be able to do (thanks to PostgreSQL's DISTINCT ON):
Exercise.select('distinct on (bodypart) *')
.order('bodypart, random()')
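The SQL this generates is roughly the following (a sketch, assuming the default exercises table name):

SELECT DISTINCT ON (bodypart) *
FROM exercises
ORDER BY bodypart, random();

DISTINCT ON keeps the first row per bodypart after the rows within each group have been shuffled by random().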
Postgres' DISTINCT ON is very handy and performance is typically great, too - for many distinct bodyparts with few rows each. But for only a few distinct values of bodypart with many rows each (a big table - and your use case) there are far superior query techniques.
This will be massively faster in such a case:
SELECT e.*
FROM   unnest(enum_range(null::bodypart)) b(bodypart)
CROSS  JOIN LATERAL (
   SELECT *
   FROM   exercises
   WHERE  bodypart = b.bodypart
   -- ORDER BY ???  -- for a deterministic pick
   LIMIT  1  -- arbitrary pick!
   ) e;
Assuming that bodypart is the name of the enum as well as the table column.
enum_range is an enum support function that (quoting the manual):
Returns all values of the input enum type in an ordered array
I unnest it and run a LATERAL subquery for each value, which is very fast when supported with the right index. Detailed explanation for the query technique and the needed index (focus on chapter "2a. LATERAL join"):
Optimize GROUP BY query to retrieve latest record per user
For just an arbitrary row for each bodypart, a simple index on exercises(bodypart) does the job. But you can have a deterministic pick like "the latest entry" with the right multicolumn index and a matching ORDER BY clause and almost the same performance.
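For the random pick the question actually asks for, one possible variant (a sketch; it scans each bodypart's rows to shuffle them, so it is slower than an index-supported deterministic pick) is to order the LATERAL subquery by random():

SELECT e.*
FROM   unnest(enum_range(null::bodypart)) b(bodypart)
CROSS  JOIN LATERAL (
   SELECT *
   FROM   exercises
   WHERE  bodypart = b.bodypart
   ORDER  BY random()  -- random row per bodypart
   LIMIT  1
   ) e;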
Related:
Is it a bad practice to query pg_type for enums on a regular basis?
Select first row in each GROUP BY group?

Checking Range in Comma Separated Values [SQL Server 2008]

I have a table with following structure
ID  FirstName  LastName  CollectedNumbers
1   A          B         10,11,15,55
2   C          D         101,132,111
I want a boolean value based on the CollectedNumbers range, e.g. True if the CollectedNumbers are all between 1 and 100, False if any is over 100. Can anyone suggest the best way to accomplish this? The collected numbers won't always be sorted.
It so happens that you have a pretty simple way to see if values are 100 or over in the list. If such a value exists, then there are at least three characters between the commas. If the numbers are never more than 999, you could do:
select (case when ',' + CollectedNumbers + ',' not like '%,[0-9][0-9][0-9]%' then 1
             else 0
        end) as booleanflag
This happens to work for the break point of 100. It is obviously not a general solution. The best solution would be to use a junction table with one row per id and CollectedNumber.
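A sketch of that junction-table design (table and column names are illustrative); the flag then becomes a simple aggregate:

create table CollectedNumber (
    ID     int not null,  -- matches the ID of the main table
    Number int not null   -- one collected number per row
);

select ID,
       -- 1 when no number reaches 100, matching the LIKE check above
       (case when max(Number) < 100 then 1 else 0 end) as booleanflag
from   CollectedNumber
group  by ID;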
Just make a function in the database that returns true/false and converts the string values (10,11,15,55) into a table, then call that function in the select list of the query like this:
select ID, FirstName, LastName,
       dbo.fncCollectedNumbersResult(CollectedNumbers) as Result
from   yourTableName
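A sketch of what such a function could look like (the function name comes from the answer above; the split loop is just one possible implementation for SQL Server 2008, which has no built-in STRING_SPLIT):

create function dbo.fncCollectedNumbersResult (@list varchar(max))
returns bit
as
begin
    -- returns 1 when every number in the comma-separated list is at most 100
    declare @result bit = 1;
    declare @pos int;
    declare @token varchar(20);

    set @list = @list + ',';  -- sentinel comma so the loop also sees the last token
    while charindex(',', @list) > 0
    begin
        set @pos   = charindex(',', @list);
        set @token = left(@list, @pos - 1);
        if cast(@token as int) > 100  -- 'over 100' per the question
            set @result = 0;
        set @list = substring(@list, @pos + 1, len(@list));
    end

    return @result;
end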
I think the easiest thing you can do is build a C# function and use the built-in SQLCLR support to load it as a custom function you can then call.
Inside the C# function you can then parse and sort your numbers and use simple logic to return true/false.

What is MAX(DISTINCT x) in SQL?

I just stumbled over jOOQ's maxDistinct SQL aggregation function.
What does MAX(DISTINCT x) do differently from just MAX(x)?
maxDistinct and minDistinct were defined in order to keep consistency with the other aggregate functions, where having a distinct option actually makes a difference (e.g., countDistinct, sumDistinct).
Since the maximum (or minimum) calculated over the distinct values of a dataset is mathematically equivalent to the simple maximum (or minimum) of the same set, these functions are essentially redundant.
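For contrast, a quick sketch of a case where DISTINCT does change the result (using the values 1, 2, 2, 3, 3, 4, 5, 5 from the answer further down; the table name t is illustrative):

select max(ID)          as max_all,       -- 5
       max(distinct ID) as max_distinct,  -- also 5: no difference
       sum(ID)          as sum_all,       -- 25
       sum(distinct ID) as sum_distinct   -- 15: here DISTINCT matters
from   t;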
In short, there will be no difference. In the case of MySQL, this is even stated in the manual page:
Returns the maximum value of expr. MAX() may take a string argument;
in such cases, it returns the maximum string value. See Section 8.5.3,
“How MySQL Uses Indexes”. The DISTINCT keyword can be used to find the
maximum of the distinct values of expr, however, this produces the
same result as omitting DISTINCT.
The reason this is possible is to keep compatibility with other platforms. Internally, there will be no difference - MySQL will simply ignore the DISTINCT and will not do anything extra with the set of rows (i.e. it will not produce the distinct set first). For indexed columns the plan will be "Select tables optimized away" (thus reading one value from the index, not the table); for non-indexed columns, a full scan.
If I'm not wrong, there is no difference.
For a column:
ID
1
2
2
3
3
4
5
5
The output for both queries is the same: 5.
MAX(DISTINCT x)
// ID = 1,2,2,3,3,4,5,5
// DISTINCT = 1,2,3,4,5
// MAX = 5
// 1 row
and for
MAX(x)
// ID = 1,2,2,3,3,4,5,5
// MAX = 5
// 1 row
Theoretically, DISTINCT x reduces a set to its distinct elements, and the MAX operator selects the highest value from a set. Since the maximum of the distinct values equals the maximum of the whole set, in plain SQL there should be no difference between the two.

Custom SQL sort by

Use:
The user searches for a partial postcode such as 'RG20', which should then be displayed in a specific order. The query uses the MATCH ... AGAINST method in boolean mode; an example postcode in the database would be 'RG20 7TT', so the query is able to find it from the partial value.
At the same time it also matches against a list of other postcodes which are in its radius (which comes from a separate query).
I can't seem to find a way to order by a partial match, e.g.:
ORDER BY FIELD(postcode, 'RG20', 'RG14', 'RG18','RG17','RG28','OX12','OX11')
DESC, city DESC
Because it's not specifically looking for RG20 7TT, I don't think it can make a partial match.
I have tried SUBSTR(postcode, -4) and looked into LEFT and RIGHT, but I haven't had any success using them with ORDER BY FIELD and could not find another route...
Sorry this is a bit long winded, but I'm in a bit of a bind.
A UK postcode splits into two parts, the last section always being 3 characters, and within my database there is a space between the two, if that helps at all.
Although there is a DESC after the postcodes, I do need them to display in THAT particular order (RG20, RG14, then RG18, etc.). I'm unsure whether specifying descending will break that ordering or not.
Order By Case
             When postcode Like 'RG20%' Then 1
             When postcode Like 'RG14%' Then 2
             When postcode Like 'RG18%' Then 3
             When postcode Like 'RG17%' Then 4
             When postcode Like 'RG28%' Then 5
             When postcode Like 'OX12%' Then 6
             When postcode Like 'OX11%' Then 7
             Else 99
         End Asc,
         City Desc
You're on the right track, trimming the field down to its first four characters:
ORDER BY FIELD(LEFT(postcode, 4), 'RG20', 'RG14', ...),
-- or SUBSTRING(postcode FROM 1 FOR 4)
-- or SUBSTR(postcode, 1, 4)
Here you don't want DESC.
(If your result set contains postcodes whose prefixes do not appear in your FIELD() ordering list, you'll have a bit more work to do, since those records will otherwise appear before any explicitly ordered records you specify. Before 'RG20' in the example above.)
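One common MySQL workaround for that (a sketch, not part of the original answer): list the prefixes in reverse and sort descending, so unmatched prefixes, for which FIELD() returns 0, fall to the end:

ORDER BY FIELD(LEFT(postcode, 4),
               'OX11', 'OX12', 'RG28', 'RG17', 'RG18', 'RG14', 'RG20') DESC,
         city DESC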
If you want a completely custom sorting scheme, then I only see one way to do it...
Create a table to hold the values upon which to sort, and include a "sequence" or "sort_order" field. You can then join to this table and sort by the sequence field.
One note on the sequence field. It makes sense to create it as an int as... well, sequences are often ints :)
If there is any possibility of changing the sort order, you may want to consider making it alphanumeric... It is a lot easier to insert "5A" between "5" and "6" than it is to insert a number into a sequence of integers.
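A sketch of that lookup-table approach (table and column names are illustrative); the LEFT JOIN plus COALESCE mirrors the Else 99 in the CASE answer above:

create table postcode_sort (
    prefix     varchar(4) not null primary key,
    sort_order int not null
);

insert into postcode_sort (prefix, sort_order) values
    ('RG20', 1), ('RG14', 2), ('RG18', 3), ('RG17', 4),
    ('RG28', 5), ('OX12', 6), ('OX11', 7);

select t.*
from   your_table t
left   join postcode_sort s on s.prefix = left(t.postcode, 4)
order  by coalesce(s.sort_order, 99), t.city desc;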
Another method I use is utilising the charindex function:
order by charindex(substr(postcode,1,4),"RG20RG14RG18...",1)
I think that's the syntax anyway, I'm just doing this in SAS at the moment so I've had to adapt from memory!
But essentially the sooner you hit your desired part of the string, the higher the rank.
If you're trying to rank on a large variety of postcodes then a case statement gets pretty hefty.

How to handle string ordering in order by clause?

Suppose I want to order records by a field (string data type) called STORY_LENGTH. This field is multi-valued, and I represent the multiple values using commas. For example, record1's value is "1", record2's value is "1,3", and record3's value is "1,2". Now when I order the records by STORY_LENGTH, they come out as record1 > record3 > record2. It's clear that STORY_LENGTH is a string and ORDER BY ASC is ordering the values as strings. But here comes the problem: for example, when record4 = "10" and record5 = "2" and I try to order, record4 is placed before record5, which I obviously don't want, because as strings "2" > "10" even though numerically 2 < 10; I am only using a string format because the field holds multiple values.
So, anybody, can you help me out of this? I need some good idea to fix.
thanks
Multi-valued fields as you describe mean your data model is broken and should be normalized.
Once this is done, querying becomes much simpler.
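A sketch of what that normalization could look like (table and column names are illustrative):

-- one row per (story, length) value instead of a comma-separated string
create table story_lengths (
    story_id     int not null,
    story_length int not null
);

-- ordering then works on real integers, e.g. by each story's smallest value
select story_id
from   story_lengths
group  by story_id
order  by min(story_length);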
From what I've understood, you want to sort items by the second or first number in comma-separated values stored in a VARCHAR field. The implementation depends on the database used; for example, in MySQL it would look like:
SELECT * FROM stories
ORDER BY CAST(COALESCE(SUBSTRING_INDEX(story_length, ',', -1), '0') AS SIGNED)
Yet it is generally not good to use such sorting, for performance reasons: the sort requires scanning the whole table instead of using an index on the field.
Edit: After your edits it looks like you want to sort on the first value and ignore the value(s) after the comma. Since, according to a comment above, changes to the database design are not an option, just use the following code for sorting:
SELECT * FROM stories
ORDER BY CAST(COALESCE(NULLIF(SUBSTRING_INDEX(story_length, ',', 1), ''), '0') AS SIGNED)