Changing NULL's position in sorting - SQL

I am sorting a table. The fiddle can be found here.
CREATE TABLE test
(
field date NULL
);
INSERT INTO test VALUES
('2000-01-05'),
('2004-01-05'),
(NULL),
('2008-01-05');
SELECT * FROM test ORDER BY field DESC;
The results I get:
2008-01-05
2004-01-05
2000-01-05
(null)
However I need the results to be like this:
(null)
2008-01-05
2004-01-05
2000-01-05
So the NULL value is treated as if it is higher than any other value. Is it possible to do so?

The easiest approach is to add an extra sort key first:
ORDER BY CASE WHEN field IS NULL THEN 0 ELSE 1 END, field DESC
Or, you can try setting it to the maximum of its datatype:
ORDER BY COALESCE(field,'99991231') DESC
COALESCE/ISNULL work fine, provided you don't have "real" data using that same maximum value. If you do, and you need to distinguish them, use the first form.

Use an 'end of time' marker to replace NULLs:
SELECT * FROM test
ORDER BY ISNULL(field, '9999-01-01') DESC;
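Note that some engines (e.g. PostgreSQL and Oracle, though not SQL Server) support the standard NULLS FIRST modifier directly, which avoids the marker value altogether; a minimal sketch:
-- Standard SQL; supported by e.g. PostgreSQL and Oracle, but not SQL Server.
SELECT * FROM test ORDER BY field DESC NULLS FIRST;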

Be wary of queries that invoke per-row functions; they rarely scale well.
That may not be a problem for smaller data sets, but it will be if they become large. Monitor it by regularly testing your queries: database optimisation is only a set-and-forget operation if your data never changes (very rare).
Sometimes it's better to introduce an artificial primary sort column, such as with:
select 1 as art_id, mydate, col1, col2 from mytable where mydate is null
union all
select 2 as art_id, mydate, col1, col2 from mytable where mydate is not null
order by art_id, mydate desc
Then only use result_set["everything except art_id"] in your programs.
By doing that, you don't introduce (possibly) slow per-row functions; instead you rely on fast index lookups on the mydate column. And advanced execution engines can actually run the two queries concurrently, combining them once both have finished.
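A minimal sketch of the supporting index that approach assumes (the index name is illustrative):
-- An index on mydate lets both halves of the UNION ALL use fast lookups/range scans.
CREATE INDEX ix_mytable_mydate ON mytable (mydate);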

Related

Conditional offset on union select

Suppose we have a complex UNION SELECT from dozens of tables with different structures but similar field meanings:
SELECT a1.abc as field1,
       a1.bcd as field2,
       a1.date as order_date
FROM a1_table a1
UNION ALL
SELECT a2.def as field1,
       a2.fff as field2,
       a2.ts as order_date
FROM a2_table a2
UNION ALL ...
ORDER BY order_date
Notice also that the results as a whole are sorted by the "synthetic" field order_date.
This query returns a huge number of rows, and we want to work with pages from this set of rows. Each page is defined by two parameters:
page size
field2 value of last item from previous page
Most importantly, we cannot change the way a page is defined. I.e. it is not possible to use the row number or the date of the last item from the previous page: only the field2 value is acceptable.
The current paging algorithm is implemented in a rather ugly way:
1) the query above is wrapped in an additional select that adds a row_number() column, and then wrapped in a stored procedure union_wrapper which returns the appropriate
table (field1 ..., field2 character varying);
2) then a complex select is performed:
RETURN QUERY
with tmp as (
    select rownum, field1, field2
    from union_wrapper()
)
SELECT field1, field2
FROM tmp
WHERE rownum > (SELECT rownum
                FROM tmp
                WHERE field2 = last_field_id
                LIMIT 1)
LIMIT page_size
The problem is that we have to build the full union-select result in memory in order to detect the row number from which to cut the new page. This is quite slow and takes an unacceptably long time.
Is there any way to restructure these operations to significantly reduce the query's complexity and increase its speed?
And again: we cannot change the paging conditions and we cannot change the structure of the tables; only the way the rows are retrieved.
UPD: I also cannot use temp tables, because I'm working on a read replica of the database.
You have successfully maneuvered yourself into a tight spot. The query and its ORDER BY expression contradict your paging requirements.
ORDER BY order_date is not a deterministic sort order (there could be multiple rows with the same order_date) - which you need before you do anything else here. And field2 does not seem to be unique either. You need both: a deterministic sort order and a unique indicator for the page end / start. Ideally, the indicator matches the sort order. It could be (order_date, field2), with both columns defined NOT NULL and the combination UNIQUE. Your restriction "only field2 value is acceptable" contradicts your query.
That's all before thinking about how to get best performance ...
There are proven solutions with row values and multi-column indexes for paging:
Optimize query with OFFSET on large table
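For reference, a minimal sketch of the row-value (keyset) technique in PostgreSQL, assuming the paging restriction is relaxed so that both order_date and field2 of the last row on the previous page are available; last_date, last_field2 and page_size are placeholders:
-- Keyset pagination: relies on a deterministic, unique sort key (order_date, field2).
SELECT field1, field2, order_date
FROM union_wrapper()  -- or the underlying UNION ALL query
WHERE (order_date, field2) > (last_date, last_field2)
ORDER BY order_date, field2
LIMIT page_size;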
But drawing from a combination of multiple source tables complicates matters. Optimization depends on the details of your setup.
If you can't get the performance you need, your only remaining alternative is to materialize the query results somehow. Temp table, cursor, materialized view - the best tool depends on details of your setup.
Of course, general performance tuning might help, too.

TSQL query with TOP 1 + ORDER BY or max/min + GROUP BY?

I need to get the value and timestamp of the record with the max timestamp. The combination of value and timestamp is the primary key. It seems that there are two ways to get a max/min value. One query example uses TOP 1 + ORDER BY:
SELECT TOP 1
value, timestamp
FROM myTable
WHERE value = #value
ORDER BY timestamp DESC
Another one is by MAX() + GROUP BY:
SELECT value, max(timestamp)
FROM myTable
WHERE value = #value
GROUP BY value
Is the second one better than the first in terms of performance? I read one person's comment on my previous question that to sort n items the first is O(n²) and the second O(n). What about the case where I have an index on both value and timestamp?
If you don't have a composite index on (value, timestamp) then both will be poor, and probably equally poor at that.
With an index, they'll probably be the same thanks to the Query Optimiser.
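For reference, a minimal sketch of such a composite index (the index name is illustrative, not from the original):
-- Covers the WHERE on value and serves the ORDER BY timestamp DESC / MAX(timestamp).
CREATE INDEX IX_myTable_value_timestamp ON myTable (value, [timestamp] DESC);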
You can also quickly test for yourself by using these to see resources used:
SET STATISTICS IO ON
SET STATISTICS TIME ON
...but the best way is to use the Graphical Execution Plans
You should see huge differences in the IO + CPU with and without an index, especially for larger tables.
Note: You have a 3rd option
SELECT #value AS value, max(timestamp)
FROM myTable
WHERE value = #value
This will return a NULL when there are no rows, which does make it slightly different from the other two.
For anyone who finds this in a search and wants to know about Postgres (not applicable to OP), if the column is indexed the plans will be identical.

Faster 'select distinct thing_id,thing_name from table1' in oracle

I have this query:
select distinct id,name from table1
For a given ID, the name will always be the same. Both fields are indexed. There's no separate table that maps the id to the name. The table is very large (tens of millions of rows), so the query could take some time.
This query is very fast, since it's indexed:
select distinct name from table1
Likewise for this query:
select distinct id from table1
Assuming I can't get the database structure changed (a very safe assumption) what's a better way to structure the first query for performance?
Edit to add a sanitized desc of the table:
Name                           Null     Type
------------------------------ -------- ----------------------------
KEY                            NOT NULL NUMBER
COL1                           NOT NULL NUMBER
COL2                           NOT NULL VARCHAR2(4000 CHAR)
COL3                                    VARCHAR2(1000 CHAR)
COL4                                    VARCHAR2(4000 CHAR)
COL5                                    VARCHAR2(60 CHAR)
COL6                                    VARCHAR2(150 CHAR)
COL7                                    VARCHAR2(50 CHAR)
COL8                                    VARCHAR2(3 CHAR)
COL9                                    VARCHAR2(3 CHAR)
COLA                                    VARCHAR2(50 CHAR)
COLB                           NOT NULL DATE
COLC                           NOT NULL DATE
COLD                           NOT NULL VARCHAR2(1 CHAR)
COLE                           NOT NULL NUMBER
COLF                           NOT NULL NUMBER
COLG                                    VARCHAR2(600 CHAR)
ID                                      NUMBER
NAME                                    VARCHAR2(50 CHAR)
COLH                                    VARCHAR2(3 CHAR)

20 rows selected
[LATEST EDIT]
My ORIGINAL ANSWER regarding creating the appropriate index on (name,id) to replace the index on (name) is below. (That wasn't an answer to the original question, which disallowed any database changes.)
Here are statements that I have not yet tested. There's probably some obvious reason these won't work. I'd never actually suggest writing statements like this (at the risk of being drubbed thoroughly for such a ridiculous suggestion).
If these queries even return result sets, the result set will only resemble the result set from the OP query almost by accident, taking advantage of a quirky guarantee about the data that Don has provided us. These statements are NOT equivalent to the original SQL; they are designed for the special case described by Don.
select m1.id
, m2.name
from (select min(t1.rowid) as min_rowid
, t1.id
from table1 t1
where t1.id is not null
group by t1.id
) m1
, (select min(t2.rowid) as min_rowid
, t2.name from table1 t2
where t2.name is not null
group by t2.name
) m2
where m1.min_rowid = m2.min_rowid
order
by m1.id
Let's unpack that:
m1 is an inline view that gets us a list of distinct id values.
m2 is an inline view that gets us a list of distinct name values.
materialize the views m1 and m2
match the ROWID from m1 and m2 to match id with name
Someone else suggested the idea of an index merge. I had previously dismissed that idea: an optimizer plan that matches tens of millions of ROWIDs without eliminating any of them seemed too expensive.
With sufficiently low cardinality for id and name, and with the right optimizer plan:
select m1.id
, ( select m2.name
from table1 m2
where m2.id = m1.id
and rownum = 1
) as name
from (select t1.id
from table1 t1
where t1.id is not null
group by t1.id
) m1
order
by m1.id
Let's unpack that
m1 is an inline view that gets us a list of distinct id values.
materialize the view m1
for each row in m1, query table1 to get the name value from a single row (stopkey)
IMPORTANT NOTE
These statements are FUNDAMENTALLY different from the OP query. They are designed to return a DIFFERENT result set than the OP query. They happen to return the desired result set because of a quirky guarantee about the data. Don has told us that name is determined by id. (Is the converse true? Is id determined by name? Do we have a STATED GUARANTEE, not necessarily enforced by the database, but a guarantee that we can take advantage of?) For any ID value, every row with that ID value will have the same NAME value. (And are we also guaranteed the converse, that for any NAME value, every row with that NAME value will have the same ID value?)
If so, maybe we can make use of that information. If ID and NAME appear in distinct pairs, we only need to find one particular row. The "pair" is going to have a matching ROWID, which conveniently happens to be available from each of the existing indexes. What if we get the minimum ROWID for each ID, and the minimum ROWID for each NAME? Couldn't we then match the ID to the NAME based on the ROWID that contains the pair? I think it might work, given a low enough cardinality. (That is, if we're dealing with only hundreds of ROWIDs rather than tens of millions.)
[/LATEST EDIT]
[EDIT]
The question is now updated with information concerning the table, it shows that the ID column and the NAME column both allow for NULL values. If Don can live without any NULLs returned in the result set, then adding the IS NOT NULL predicate on both of those columns may enable an index to be used. (NOTE: in an Oracle (B-Tree) index, NULL values do NOT appear in the index.)
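A minimal sketch of that variant, assuming rows with NULLs may be dropped from the result:
-- The IS NOT NULL predicates may allow the (B-Tree) indexes to be used,
-- since NULL keys are not stored in them.
select distinct id, name
from table1
where id is not null
and name is not null;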
[/EDIT]
ORIGINAL ANSWER:
create an appropriate index
create index table1_ix3 on table1 (name, id);
Okay, that's not the answer to the question you asked, but it's the right answer to fixing the performance problem. (You specified no changes to the database, but in this case, changing the database is the right answer.)
Note that if you have an index defined on (name,id), then you (very likely) don't need an index on (name), since the optimizer will consider the leading name column in the other index.
(UPDATE: as someone more astute than I pointed out, I hadn't even considered the possibility that the existing indexes were bitmap indexes and not B-tree indexes...)
Re-evaluate your need for the result set... do you need to return id, or would returning name be sufficient?
select distinct name from table1 order by name;
For a particular name, you could submit a second query to get the associated id, if and when you needed it...
select id from table1 where name = :b1 and rownum = 1;
If you really need the specified result set, you can try some alternatives to see if the performance is any better. I don't hold out much hope for any of these:
select /*+ FIRST_ROWS */ DISTINCT id, name from table1 order by id;
or
select /*+ FIRST_ROWS */ id, name from table1 group by id, name order by name;
or
select /*+ INDEX(table1) */ id, min(name) from table1 group by id order by id;
UPDATE: as others have astutely pointed out, with this approach we're testing and comparing performance of alternative queries, which is a sort of hit or miss approach. (I don't agree that it's random, but I would agree that it's hit or miss.)
UPDATE: Tom suggests the ALL_ROWS hint. I hadn't considered that, because I was really focused on getting a query plan using an INDEX. I suspect the OP query is doing a full table scan, and it's probably not the scan that's taking the time; it's the sort unique operation (<10g) or hash operation (10gR2+) that takes the time. (Absent timed statistics and an event 10046 trace, I'm just guessing here.) But then again, maybe it is the scan, who knows; the high water mark on the table could be way out in a vast expanse of empty blocks.
It almost goes without saying that the statistics on the table should be up-to-date, and we should be using SQL*Plus AUTOTRACE, or at least EXPLAIN PLAN to look at the query plans.
But none of the suggested alternative queries really address the performance issue.
It's possible that hints will influence the optimizer to choose a different plan, basically satisfying the ORDER BY from an index, but I'm not holding out much hope for that. (I don't think the FIRST_ROWS hint works with GROUP BY; the INDEX hint may.) I can see the potential for such an approach in a scenario where there are gobs of data blocks that are empty and sparsely populated, and by accessing the data blocks via an index, significantly fewer data blocks might be pulled into memory... but that scenario would be the exception rather than the norm.
UPDATE: As Rob van Wijk points out, making use of the Oracle trace facility is the most effective approach to identifying and resolving performance issues.
Without the output of an EXPLAIN PLAN or SQL*Plus AUTOTRACE output, I'm just guessing here.
I suspect the performance problem you have right now is that the table data blocks have to be referenced to get the specified result set.
There's no getting around it: the query cannot be satisfied from just an index, since there isn't an index that contains both the NAME and ID columns with either the ID or NAME column as the leading column. The other two "fast" OP queries can be satisfied from an index alone, without needing to reference the rows (data blocks).
Even if the optimizer plan for the query was to use one of the indexes, it still has to retrieve the associated row from the data block, in order to get the value for the other column. And with no predicate (no WHERE clause), the optimizer is likely opting for a full table scan, and likely doing a sort operation (<10g). (Again, an EXPLAIN PLAN would show the optimizer plan, as would AUTOTRACE.)
I'm also assuming here (big assumption) that both columns are defined as NOT NULL.
You might also consider defining the table as an index organized table (IOT), especially if these are the only two columns in the table. (An IOT isn't a panacea; it comes with its own set of performance issues.)
You can try re-writing the query (unless that's a database change that is also verboten; in our database environments, we consider a query to be as much a part of the database as the tables and indexes).
Again, without a predicate, the optimizer will likely not use an index. There's a chance you could get the query plan to use one of the existing indexes to get the first rows returned quickly, by adding a hint; test a combination of:
select /*+ INDEX(table1) */ ...
select /*+ FIRST_ROWS */ ...
select /*+ ALL_ROWS */ ...
distinct id, name from table1;
distinct id, name from table1 order by id;
distinct id, name from table1 order by name;
id, name from table1 group by id, name order by id;
id, min(name) from table1 group by id order by id;
min(id), name from table1 group by name order by name;
With a hint, you may be able to influence the optimizer to use an index, and that may avoid the sort operation, but overall, it may take more time to return the entire result set.
(UPDATE: someone else pointed out that the optimizer might choose to merge two indexes based on ROWID. That's a possibility, but without a predicate to eliminate some rows, that's likely going to be a much more expensive approach (matching tens of millions of ROWIDs from two indexes), especially when none of the rows are going to be excluded on the basis of the match.)
But all that theorizing doesn't amount to squat without some performance statistics.
Absent altering anything else in the database, the only other hope (that I can think of) for speeding up the query is to make sure the sort operation is tuned so that the (required) sort can be performed in memory, rather than on disk. But that's not really the right answer. The optimizer may not be doing a sort operation at all; it may be doing a hash operation (10gR2+) instead, in which case that's what should be tuned. The sort operation is just a guess on my part, based on past experience with Oracle 7.3, 8, 8i and 9i.
A serious DBA is going to have more issue with you futzing with the SORT_AREA_SIZE and/or HASH_AREA_SIZE parameters for your session(s) than he will in creating the correct indexes. (And those session parameters are "old school" for versions prior to 10g automatic memory management magic.)
Show your DBA the specification for the result set, let the DBA tune it.
A query cannot be tuned by looking at it, or by randomly suggesting some equivalent queries, regardless of how well meant they are.
You, we, or the optimizer need to know statistics about your data. And then you can measure with tools like EXPLAIN PLAN, SQL Trace/tkprof, or even the simple AUTOTRACE tool from SQL*Plus.
Can you show us the output of this:
set serveroutput off
select /*+ gather_plan_statistics */ distinct id,name from table1;
select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));
And what does your entire table1 look like? Please show a DESCRIBE output.
Regards,
Rob.
"The table is very large (10 of millions of rows)"
If you can't change the database (add index etc). Then your query will have no choice but to read the entire table. So firstly, determine how long that takes (ie time the SELECT ID,NAME FROM TABLE1). You won't get it any quicker than that.
The second step it has to do is the DISTINCT. In 10g+ that should use a HASH GROUP BY. Prior to that it is a SORT operation. The former is quicker. If your database is 9i, then you MAY get an improvement by copying the 10 million rows into a 10g database and doing it there.
Alternatively, allocate gobs of memory (google ALTER SESSION SET SORT_AREA_SIZE). That may harm other processes on the database, but then your DBAs aren't giving you much option.
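A hedged example of that session setting (the value is illustrative; on 9i and later it only takes effect with manual workarea sizing):
ALTER SESSION SET workarea_size_policy = MANUAL;
ALTER SESSION SET sort_area_size = 104857600; -- 100 MB, illustrative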
You could try this:
select id, max(name) from table1 group by id
This uses the index on id for sure, but you'll have to test whether it performs fast.
Without wishing to indulge in the practice of throwing stuff at the wall until something sticks, try this:
select id, name from table1 group by id, name
I have vague memories of a GROUP BY being inexplicably quicker than a DISTINCT.
Why do you need to even have "name" in the clause if the name is always the same for a given id? (Never mind... you want the name, you aren't just checking for existence.)
SELECT name, id FROM table WHERE id in (SELECT DISTINCT id FROM table)?
Don't know if that helps...
Is id unique? If so, you could drop DISTINCT from the query. If not - maybe it needs a new name? Yeah, I know, can't change the schema...
You could try something like:
Select Distinct t1.id, t2.name
FROM (Select Distinct ID From table1) As T1
INNER JOIN table1 t2 on t1.id=t2.id
or:
Select distinct t1.id, t2.name from table1 t1
inner Join table1 t2 on t1.id=t2.id
Not sure if this will work out slower or faster than the original as I'm not completely understanding how your table is set up. If each ID will always have the same name, and ID is unique, I don't really see the point of the distinct.
Really try to work something out with the DBAs. Really. Attempt to communicate the benefits and ease their fears of degraded performance.
Got a development environment/database to test this stuff?
How timely must the data be?
How about a copy of the table already grouped by id and name with proper indexing? A batch job could be configured to refresh your new table once a night.
But if that doesn't work out...
How about exporting all of the id and name pairs to an alternate database where you can group and index to your benefit and leave the DBAs with all of their smug rigidness?
This may perform better. It assumes that, as you said, the name is always the same for a given id.
WITH id_list AS (SELECT DISTINCT id FROM table1)
SELECT id_list.id, (SELECT name FROM table1 WHERE table1.id = id_list.id AND rownum = 1)
FROM id_list;
If for a given id the same name is always returned, you can run the following:
SELECT did AS id,
       (SELECT name
        FROM table1
        WHERE id = did
        AND rownum = 1) AS name
FROM (
    SELECT DISTINCT id AS did
    FROM table1
    WHERE id IS NOT NULL
)
Both queries will use the index on id.
If you still need the NULL values, run this:
SELECT did AS id,
       (SELECT name
        FROM table1
        WHERE id = did
        AND rownum = 1) AS name
FROM (
    SELECT DISTINCT id AS did
    FROM table1
    WHERE id IS NOT NULL
)
UNION ALL
SELECT NULL, name
FROM table1
WHERE id IS NULL
AND rownum = 1
This will be less efficient, since the second query doesn't use indexes, but it will stop on the first NULL it encounters: if it's close to the beginning of the table, then you're lucky.
See the entry in my blog for performance details:
Distinct pairs

SQL Server UNION - What is the default ORDER BY Behaviour

If I have a few UNION Statements as a contrived example:
SELECT * FROM xxx WHERE z = 1
UNION
SELECT * FROM xxx WHERE z = 2
UNION
SELECT * FROM xxx WHERE z = 3
What is the default order by behaviour?
The test data I'm seeing essentially does not come back in the order specified above. I.e. the data is ordered, but I want to know what the rules of precedence are.
Another thing is that in this case xxx is a View. The view joins 3 different tables together to return the results I want.
There is no default order.
Without an Order By clause the order returned is undefined. That means SQL Server can bring them back in any order it likes.
EDIT:
Based on what I have seen, without an Order By, the order that the results come back in depends on the query plan. So if there is an index that it is using, the result may come back in that order but again there is no guarantee.
In regards to adding an ORDER BY clause:
This is probably elementary to most here, but I thought I'd add it.
Sometimes you don't want the results mixed, so you want the first query's results, then the second's, and so on. To do that I just add a dummy first column and order by that. Because of possible issues with forgetting to alias a column in unions, I usually use ordinals in the ORDER BY clause, not column names.
For example:
SELECT 1, * FROM xxx WHERE z = 'abc'
UNION ALL
SELECT 2, * FROM xxx WHERE z = 'def'
UNION ALL
SELECT 3, * FROM xxx WHERE z = 'ghi'
ORDER BY 1
The dummy ordinal column is also useful for times when I'm going to run two queries and I know only one is going to return any results. Then I can just check the ordinal of the returned results. This saves me from having to do multiple database calls and most empty resultset checking.
Just found the actual answer.
Because UNION removes duplicates, it performs a DISTINCT SORT over the concatenated UNION inputs (check out the execution plan).
To avoid the sort, use UNION ALL; note that it will then not remove duplicates either.
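For instance, a minimal sketch combining UNION ALL with an explicit ORDER BY on the contrived example above:
SELECT * FROM xxx WHERE z = 1
UNION ALL
SELECT * FROM xxx WHERE z = 2
UNION ALL
SELECT * FROM xxx WHERE z = 3
ORDER BY z; -- no distinct sort, and the order is now guaranteed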
If you care what order the records are returned, you MUST use an order by.
If you leave it out, it may appear organized (based on the indexes chosen by the query plan), but the results you see today may NOT be the results you expect, and it could even change when the same query is run tomorrow.
Edit: Some good, specific examples: (all examples are MS SQL server)
Dave Pinal's blog describes how two very similar queries can show a different apparent order, because different indexes are used:
SELECT ContactID FROM Person.Contact
SELECT * FROM Person.Contact
Conor Cunningham shows how the apparent order can change when the table gets larger (if the query optimizer decides to use a parallel execution plan).
Hugo Kornelis proves that the apparent order is not always based on primary key. Here is his follow-up post with explanation.
A UNION can be deceptive with respect to result set ordering because a database will sometimes use a sort method to provide the DISTINCT that is implicit in UNION, which makes it look like the rows are deliberately ordered. This doesn't apply to UNION ALL, for which there is no implicit distinct, of course.
However there are algorithms for the implicit distinct, such as Oracle's hash method in 10g+, for which no ordering will be applied.
As DJ says, always use an ORDER BY
It's very common to come across poorly written code that assumes table data is returned in insert order, and 95% of the time the coder gets away with it on many common databases (MSSQL, Oracle, MySQL) and is never aware that this is a problem. It is of course a complete fallacy, and it should always be corrected when it's come across; and always, without exception, use an ORDER BY clause yourself.

SQL find non-null columns

I have a table of time-series data of which I need to find all columns that contain at least one non-null value within a given time period. So far I am using the following query:
select max(field1),max(field2),max(field3),...
from series where t_stamp between x and y
Afterwards I check each field of the result if it contains a non-null value.
The table has around 70 columns and a time period can contain >100k entries.
I wonder if there is a faster way to do this (using only standard SQL).
EDIT:
Unfortunately, refactoring the table design is not an option for me.
The EXISTS operation may be faster since it can stop searching as soon as it finds any row that matches the criteria (unlike the MAX you are using). It depends on your data and how smart your SQL server is. If most of your columns have a high rate of non-null data, this method will find matching rows quickly and should run quickly. If your columns are mostly NULL values, then your method may be faster. I would give them both a shot, and see how each is optimized and how they run. Also keep in mind that performance may change over time if the distribution of your data changes significantly.
Also, I've only tested this on MS SQL Server. I haven't had to code strict ANSI compatible SQL in over a year, so I'm not sure that this is completely generic.
SELECT
CASE WHEN EXISTS (SELECT * FROM Series WHERE t_stamp BETWEEN #x AND #y AND field1 IS NOT NULL) THEN 1 ELSE 0 END AS field1,
CASE WHEN EXISTS (SELECT * FROM Series WHERE t_stamp BETWEEN #x AND #y AND field2 IS NOT NULL) THEN 1 ELSE 0 END AS field2,
...
EDIT: Just to clarify, the MAX method might be faster since it could determine those values with a single pass through the data. Theoretically, the method here could as well, and potentially with less than a full pass, but your optimizer may not recognize that all of the subqueries are related, so it might do separate passes for each. That still might be faster, but as I said it depends on your data.
It would be faster with a different table design:
create table series (fieldno integer, t_stamp date);
select distinct fieldno from series where t_stamp between x and y;
Having a table with 70 "similar" fields is not generally a good idea.
When you say "a faster way to do this", if you mean a faster way for the query to run, then yes, here's how to do it: break it out into one query per column:
select top 1 field1 from series where t_stamp between x and y and field1 is not null
select top 1 field2 from series where t_stamp between x and y and field2 is not null
select top 1 field3 from series where t_stamp between x and y and field3 is not null
This way, you won't be doing a table scan across the entire table to find the maximum value. Instead, the database engine will stop as soon as it finds a non-null value. Assuming your data isn't 99% nulls, this should give you faster execution - but at the expense of more programming time to set this up.
How about this... You query for a list of field names that you can iterate through.
select 'field1' as fieldname from series
where field1 is not null and t_stamp between x and y
UNION
select 'field2' from series
where field2 is not null and t_stamp between x and y
... etc
Then you have a recordset that will only contain the string names of the fields that are not null. Then you can loop over this recordset to build your real query as dynamic SQL and ignore fields that don't have any data. The select 'field2' branch will not return a row when no rows match the where clause.
Edit: I think I misread the question... this will give you all the rows with a non-null value. I'll leave it here in case it helps someone, but it's not the answer to your question. Thanks @Pax
I think you want to use COALESCE:
SELECT ... WHERE COALESCE(field1, field2, field3) IS NOT NULL
For a start, this is a very bad idea with standard SQL since not all DBMSs sort with NULLs last.
There are all sorts of tricky ways you could do this and most would be interminably slow.
I'd suggest you (sort of) normalize the database some more so that each of the columns is in a separate table, which would make the select easier, but that's probably not what you want.
After edit of question: if refactoring table design is not an option, your given solution is probably the best, especially if you have indexes on all the 70 columns.
Although that's likely to slow down inserts quite a bit, you may want to use a non-indexed table for maximum insert speed and transfer the data periodically (overnight?) to an indexed table which would run your selects at best speed (by avoiding a full table scan).
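A hedged sketch of that overnight transfer (the table names are illustrative, not from the original):
-- Move the staged rows into the indexed table, then clear the staging table;
-- wrap in a transaction or maintenance window as your engine allows.
INSERT INTO series_indexed SELECT * FROM series_staging;
TRUNCATE TABLE series_staging;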
select count(field1),count(field2),count(field3),...
from series where t_stamp between x and y
will tell you how many non-null values are in each column. Unfortunately, it's not much better than the way you're doing it now.
Try this:
SELECT CASE WHEN field1 IS NOT NULL THEN '' ELSE 'contains null' END AS field1_stat,
       CASE WHEN field2 IS NOT NULL THEN '' ELSE 'contains null' END AS field2_stat,
       ... for every field to be checked
FROM series
WHERE foo IN bar
GROUP BY CASE WHEN field1 IS NOT NULL THEN '' ELSE 'contains null' END,
         CASE WHEN field2 IS NOT NULL THEN '' ELSE 'contains null' END
         ... etc
This will give you a summary of the combinations of 'nulled' fields in the table.