Sequence number generation in DB2 SQL

I have table A, which contains 5 columns (col1 to col5) and 6 rows in total. I am using DB2 SQL.
Below is the data for col2:
A
A
test
testasfdla
Null
Null
Requirement:
If col2 contains NULL, I need to assign a sequence number starting with 1.
Expected output for col2:
A
A
test
testasfdla
1
2
I tried with ROW_NUMBER but did not get the required output.

Try this:
WITH T (C) AS
(
VALUES
  'A'
, 'A'
, 'test'
, 'testasfdla'
, CAST(NULL AS VARCHAR(20))   -- give the NULL rows an explicit type
, CAST(NULL AS VARCHAR(20))
)
-- non-NULL values pass through COALESCE unchanged; the NULL partition gets row numbers 1, 2, ...
SELECT COALESCE(C, CAST(ROW_NUMBER() OVER (PARTITION BY C) AS VARCHAR(10))) AS C
FROM T
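Applied to the original table, a minimal sketch might look like the one below (assuming the table is named A, that col2 is a VARCHAR column, and that col1 is a reasonable column to order the numbering by; adjust as needed):
SELECT col1,
       COALESCE(col2,
                CAST(ROW_NUMBER() OVER (PARTITION BY col2 ORDER BY col1) AS VARCHAR(10))) AS col2
FROM A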

SQL literal value that is alternative to NULL

Are there other special literal values besides NULL in SQL / PostgreSQL?
NULL is nice in that we can interpret NULL as the concept of "nothing" (i.e. missing, not available, not asked, not answered, etc.), and data columns of any type can have NULL values.
I would like another value that I can interpret as representing another concept (here the idea of "everything"), in the same result set.
Is there another special value that I can return in a query which, like NULL, doesn't cause a type conflict?
Basically anything that doesn't throw ERROR: For 'UNION', types varchar and numeric are inconsistent in this toy query:
select 1 as numeral, 'one' as name UNION ALL
select 2 as numeral, 'two' as name UNION ALL
select NULL as numeral, NULL as name UNION ALL
select -999 as numeral, -999 as name UNION ALL -- type conflict
select '?' as numeral, 'x' as name -- type conflict
Here,
-999 doesn't work as its type conflicts with varchar columns
'~' doesn't work as its type conflicts with numeric columns
NULL doesn't work as it is already needed to represent the "nothing" concept
More specifically, here's my actual case: counting combinations of values and also including "Overall" rows in the same query. Generally I won't know or control the types of columns A, B, C in advance. And A, B, or C might also have NULL values, which I would still want to count separately.
SELECT A, COUNT(*) FROM table GROUP BY 1
UNION ALL
SELECT ?, COUNT(*) FROM table GROUP BY 1
and get a result set like:
A      COUNT
NULL   2
1      3
2      5
3      10
(all)  20
SELECT B, COUNT(*) FROM table GROUP BY 1
UNION ALL
SELECT ?, COUNT(*) FROM table GROUP BY 1
and get a result set like:
B            COUNT
NULL         2
'Circle'     3
'Line'       5
'Triangle'   10
(all)        20
You can use the CAST function to convert the columns to VARCHAR so that everything is treated as a string.
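A minimal sketch of that idea against the A example (assuming the real table is called my_table; text is used here as PostgreSQL's string type):
SELECT CAST(A AS text) AS A, COUNT(*) FROM my_table GROUP BY 1
UNION ALL
SELECT '(all)', COUNT(*) FROM my_table;
This makes the UNION branches agree on types, but a true NULL in A is then hard to tell apart from a placeholder row, which is what the ROLLUP/GROUPING approach below addresses.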
NOTE: Thanks to the comments above, I should completely rephrase this question as "How to COUNT/GROUP BY with ROLLUP using multiple columns of mixed/arbitrary/unknown types, and differentiate true NULL values from ROLLUP placeholders?"
The correct answer, I believe, is provided by @a_horse_with_no_name: use ROLLUP with GROUPING.
Below is just me drafting that more completely with a revised example.
This toy example has an integer and a string:
WITH t AS (   -- "table" is a reserved word in PostgreSQL, so the CTE is named t
select 1 as numeral, 'one' as name UNION ALL
select 2 as numeral, 'two' as name UNION ALL
select 2 as numeral, 'two' as name UNION ALL
select NULL as numeral, NULL as name UNION ALL
select NULL as numeral, NULL as name UNION ALL
select NULL as numeral, NULL as name
)
select name, numeral, COUNT(*), GROUPING(name, numeral) AS grouping_id   -- PostgreSQL has GROUPING(), not GROUPING_ID()
FROM t
GROUP BY ROLLUP (name, numeral)
ORDER BY grouping_id, name, numeral ;
It returns the following result:
numeral  name  count  grouping_id  note
NULL     NULL  3      0            both are true NULLs as grouping is 0
1        one   1      0
2        two   2      0
NULL     NULL  3      1            first is a true NULL, second is a ROLLUP
1        NULL  1      1
2        NULL  2      1
NULL     NULL  6      3            both NULLs are ROLLUPs
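To also get the '(all)' labels from the desired output, GROUPING can drive a CASE expression. A sketch against the same toy data (the '(all)' label and the cast of numeral to text are choices made here, not requirements):
WITH t (numeral, name) AS (
VALUES (1, 'one'), (2, 'two'), (2, 'two'),
       (NULL, NULL), (NULL, NULL), (NULL, NULL)
)
SELECT CASE WHEN GROUPING(name) = 1 THEN '(all)' ELSE name END AS name,
       CASE WHEN GROUPING(numeral) = 1 THEN '(all)' ELSE CAST(numeral AS text) END AS numeral,
       COUNT(*) AS count
FROM t
GROUP BY ROLLUP (name, numeral);
A true NULL stays NULL in the output, while rolled-up positions show '(all)'.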

Rename category in the column in SQL Server

Here is the query:
select col1
from table
col1 contains these category values:
A
B
C
NULL
How can I rename the NULL category to D?
If you want to make the change permanent:
UPDATE table
SET col1 = 'D'
WHERE col1 IS NULL
From then on you can simply query with ...
SELECT col1
FROM table
... to get the desired result.
If there is more than one row having a NULL in col1, you need to filter by a unique key, preferably by the primary key (which every table should have by the way). Let's say you have a table like
id (PK) col1
--- ----
1 'A'
2 'B'
3 'C'
4 NULL
5 NULL
then you can fix it with
UPDATE table SET col1 = 'D' WHERE id = 4;
UPDATE table SET col1 = 'E' WHERE id = 5;
unless you can calculate the new value from another column, e.g.:
UPDATE table
SET col1 = UPPER(LEFT(name, 1))
Try this: the ISNULL() function is used to replace a NULL value with another value.
select isnull(col1,'D') as col1
from table
SQL Server uses ISNULL().
SELECT ISNULL(value_to_check, use_this_instead_if_valuetocheck_is_null)
For your code:
select ISNULL(col1, 'D') AS col_name
from table
However, this will happen across the board for this column. You can't use this to make a sequence, like D then E then F. Any NULL value you come across in this column will change to D.
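If you did need distinct replacements (D, then E, then F, ...) rather than a single D, one hedged sketch is to combine ROW_NUMBER with the id key from the earlier example (CHAR(67) is 'C', so the first NULL row gets 'D'; this only makes sense for a small number of NULL rows):
SELECT id,
       COALESCE(col1, CHAR(67 + CAST(ROW_NUMBER() OVER (PARTITION BY col1 ORDER BY id) AS int))) AS col1
FROM table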

Transpose row value to additional pre-defined field when value is less than specified value

I have a table that contains two identifying columns, a date, and a value. This value can be up to 100. Where [ID] and [DATE] are the same across subsequent rows and the values are less than 100 (which also means [ID_SECONDARY] is always different), I want a query to place each of these values in a column [VALUE_1]...[VALUE_N], along with the value description ([ID_SECONDARY] --> [VALUE_1_DESC]...[VALUE_N_DESC]). Ultimately each row should contain a unique [ID] and [DATE], plus an aggregation of the different [ID_SECONDARY] descriptions along with their values [VALUE_1]...[VALUE_N]. The number of unique [ID_SECONDARY] values will not surpass 4, but could be anywhere from 1 to 4.
My initial inclination is to approach this using a cursor, but am hopeful there is a better alternative.
The first image is a sample of the information provided in the table, the second image is the output I'm looking for. Any help is greatly appreciated.
As far as I can tell this is different from the various dynamic pivot posts out there because the columns are independent of the secondary ID and are fully dependent on the VALUE column to determine if the value itself belongs in columns 1-4.
Try this:
WITH a AS (
SELECT
  ID
, [DATE]
, ID_SECONDARY
, VALUE
, ROW_NUMBER() OVER (PARTITION BY ID, [DATE] ORDER BY ID_SECONDARY) AS RNUM   -- ordering by ID_SECONDARY makes the column assignment deterministic
FROM YourTable   -- the source table is not named in the question; substitute the real name
)
SELECT
  a.ID
, a.[DATE]
, MAX(CASE a.RNUM WHEN 1 THEN a.VALUE        ELSE NULL END) AS VALUE_1
, MAX(CASE a.RNUM WHEN 1 THEN a.ID_SECONDARY ELSE NULL END) AS VALUE_1_DESC
, MAX(CASE a.RNUM WHEN 2 THEN a.VALUE        ELSE NULL END) AS VALUE_2
, MAX(CASE a.RNUM WHEN 2 THEN a.ID_SECONDARY ELSE NULL END) AS VALUE_2_DESC
, MAX(CASE a.RNUM WHEN 3 THEN a.VALUE        ELSE NULL END) AS VALUE_3
, MAX(CASE a.RNUM WHEN 3 THEN a.ID_SECONDARY ELSE NULL END) AS VALUE_3_DESC
, MAX(CASE a.RNUM WHEN 4 THEN a.VALUE        ELSE NULL END) AS VALUE_4
, MAX(CASE a.RNUM WHEN 4 THEN a.ID_SECONDARY ELSE NULL END) AS VALUE_4_DESC
FROM a
GROUP BY a.ID, a.[DATE]

Test data for unique and not null

How can I quickly check whether the data in selected columns of the test table 'test_table' is unique and not null?
In summary: the input is a table name and a list of columns, and the expected output is a flag, e.g. 1 or 0.
The table is big, so unfortunately the check has to execute as fast as possible.
select 1 from dual
where exists (select col1, col2, col3, ... from table
              where col1 is not null and col2 is not null and col3 ...
              group by col1, col2, col3 ... having count(*) > 1)
This will return 1 when one of the conditions is true.
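A more complete sketch that folds both checks (NULLs and duplicates) into a single flag might look like this (Oracle syntax, assuming a table test_table and three checked columns col1..col3; adjust the column list as needed):
SELECT CASE WHEN violations = 0 THEN 1 ELSE 0 END AS is_unique_and_not_null
FROM (
  SELECT COUNT(*) AS violations
  FROM (
    -- rows with a NULL in any checked column
    SELECT 1 AS x FROM test_table
    WHERE col1 IS NULL OR col2 IS NULL OR col3 IS NULL
    UNION ALL
    -- column combinations that occur more than once
    SELECT 1 AS x FROM test_table
    GROUP BY col1, col2, col3
    HAVING COUNT(*) > 1
  )
)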
SELECT 1
FROM dual
WHERE EXISTS
(SELECT a, b FROM tab WHERE id=1
AND name='John'
AND (a IS NULL OR b IS NULL))
I changed your code and I have one question about it. Right now it requires that id is 1, the name is 'John', and any of the NULL checks is true; what I want is that id = 1, the name is 'John', and either of the columns has a NULL value.

How do I get the count of null value columns per row in a return set?

I'm looking for a query which will return an extra column at the end of my current query which is the count of all columns within that row which are NULL. For example:
Col 1   Col 2   Col 3
A       B       0
A       NULL    1
NULL    NULL    2
Is there a simple way to get this return set based on the row values rather than having to requery all the criteria which fetches the original rows?
Ugly solution:
select Col1, Col2,
case when Col1 is null then 1 else 0 end
+ case when Col2 is null then 1 else 0 end
as Col3
from (
select 'A' as Col1, 'B' as Col2
union select 'A', NULL
union select NULL, NULL
) z
This returns
Col1 Col2 Col3
NULL NULL 2
A NULL 1
A B 0
Oracle has a function NVL2() which makes this easy.
select col1,
col2,
col3,
...
NVL2(col1,0,1)
+NVL2(col2,0,1)
+NVL2(col3,0,1) coln
from whatever
select count(*) - count(ColumnName) as NumberOfNulls from yourTable
This returns the number of NULLs in a specific column. If you do this for every column, you can get that data.
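Done for every column in a single pass over the table, that idea looks like the following (hypothetical table and column names):
SELECT COUNT(*) - COUNT(col1) AS nulls_in_col1,
       COUNT(*) - COUNT(col2) AS nulls_in_col2,
       COUNT(*) - COUNT(col3) AS nulls_in_col3
FROM yourTable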
As in a similar post, SQL is not well suited to working across different columns within a row, but much better at working across rows.
I'd suggest turning the table into 'individual' facts about a row, e.g.
select <key>, col1 as value From aTable
UNION ALL
select <key>, col2 as value From aTable
UNION ALL
... and so on for the other columns to be summed (UNION ALL rather than UNION, so that duplicate (key, NULL) pairs are not collapsed).
This can be turned into a view i.e.
create view aView as (select as above).
Then the correct answer is just
select key, count(*)
from aView
where value is null
Group By key
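Concretely, that approach might look like the following sketch (assuming a key column id and three value columns; the names are placeholders):
create view aView as
select id, col1 as value From aTable
UNION ALL
select id, col2 From aTable
UNION ALL
select id, col3 From aTable;

select id, count(*) as null_count
from aView
where value is null
Group By id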
create table TEST
(
a VARCHAR2(10),
b VARCHAR2(10),
c VARCHAR2(10)
);
insert into TEST (a, b, c)
values ('jas', 'abhi', 'shail');
insert into TEST (a, b, c)
values (null, 'abhi', 'shail');
insert into TEST (a, b, c)
values ('jas', null, 'shail');
insert into TEST (a, b, c)
values ('jas', 'abhi', null);
insert into TEST (a, b, c)
values ('jas', 'abhi', 'abc|xyz');
insert into TEST (a, b, c)
values ('jas', 'abhi', 'abc|xyz');
insert into TEST (a, b, c)
values ('jas', 'abhi', 'abc|xyz');
insert into TEST (a, b, c)
values (null, 'abhi', 'abc|xyz');
commit;
select sum(nvl2(a,null,1)),sum(nvl2(b,null,1)),sum(nvl2(c,null,1)) from test
where a is null
or b is null
or c is null
order by 1,2,3
If there isn't a very good reason you need to do this in SQL, you could just loop through the result set in your application code and count the NULL values there.
The cost goes from n^n to n.
You can use a computed column:
CREATE TABLE testTable(
col1 nchar(10) NULL,
col2 nchar(10) NULL,
col3 AS (case when col1 IS NULL then (1) else (0) end+case when col2 IS NULL then (1) else (0) end)
)
It is not a pretty solution, but should work.
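For example (hypothetical values), inserting a couple of rows shows the computed count directly:
INSERT INTO testTable (col1, col2) VALUES (N'A', N'B');   -- col3 = 0
INSERT INTO testTable (col1, col2) VALUES (N'A', NULL);   -- col3 = 1
SELECT col1, col2, col3 FROM testTable;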
If you are dealing with a large number of columns and you expect many of them to be NULL, then you could use sparse columns (available since SQL Server 2008). They are optimised for NULL storage, and SQL Server can automatically generate an XML representation of each row of data in the table via a column set.
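A minimal sketch of such a table (hypothetical names; the XML column set is the part that exposes the sparse columns as XML):
CREATE TABLE testSparse (
    id      int IDENTITY PRIMARY KEY,
    col1    nvarchar(100) SPARSE NULL,
    col2    nvarchar(100) SPARSE NULL,
    allCols xml COLUMN_SET FOR ALL_SPARSE_COLUMNS
);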