Does a MINUS/EXCEPT command, or a code workaround analogous to it, exist for columns only? MINUS/EXCEPT works fine for rows, but what about columns?
Mask table (physically exists):

id         col1 col2 col3 col4 ... colN   comment
(doesn't   A    B    C    D    ... Z      -- alphabet, correct sequence
 matter)

All col[i] columns share the same data type.
Incoming select stream (not a physical table, but a table-shaped result of a complex joined select over other tables):

col1 col2 col3 col4 ... colN   comment
A    B    C    D    ... Z      -- alphabet, correct sequence
A    C    B    D    ... Z      -- incorrect
E    B    C    M    ... Z      -- incorrect
...
Z    Y    X    W    ... A      -- fully inverted, incorrect
Expected output, to be inserted into a physical table after applying the mask table to the select stream:

id        col1   col2   col3   col4   ... colN
(auto-    (null) (null) (null) (null) ... (null)
 gnrtd)   (null) C      B      (null) ... (null)
          E      (null) (null) M      ... (null)
          ...
          Z      Y      X      W      ... A
Please note: the alphabet is given just as an example; it is not the point of the question. SQL logic/commands are required: an analog of MINUS/EXCEPT, but for columns (DISTINCT? How, if the incoming select stream is the result of a complex joined select?).
What would the SQL code for this task be? Please advise.
Another way to do it without CASE statements:
Setup
CREATE TABLE mask (
  col1 TEXT,
  col2 TEXT,
  col3 TEXT,
  col4 TEXT,
  col5 TEXT
);
INSERT INTO mask SELECT 'A', 'B', 'C', 'D', 'E';

CREATE TABLE your_stream (
  col1 TEXT,
  col2 TEXT,
  col3 TEXT,
  col4 TEXT,
  col5 TEXT
);
INSERT INTO your_stream
VALUES
  ('A', 'B', 'C', 'D', 'E'),
  ('A', 'C', 'B', 'D', 'E'),
  ('E', 'B', 'C', 'M', 'E');
Query
SELECT
  NULLIF(s.col1, m.col1) AS col1,
  NULLIF(s.col2, m.col2) AS col2,
  NULLIF(s.col3, m.col3) AS col3,
  NULLIF(s.col4, m.col4) AS col4,
  NULLIF(s.col5, m.col5) AS col5
FROM your_stream s
CROSS JOIN mask m;
Result
| col1 | col2 | col3 | col4 | col5 |
| ---- | ---- | ---- | ---- | ---- |
| null | null | null | null | null |
| null | C | B | null | null |
| E | null | null | M | null |
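If the rows should then be written to the physical destination table from the question, the same expressions can feed an INSERT ... SELECT. A minimal sketch, assuming a destination table whose id column is auto-generated (the table name destination is an assumption):

-- id is assumed to be serial/identity, so it is omitted from the column list
INSERT INTO destination (col1, col2, col3, col4, col5)
SELECT
  NULLIF(s.col1, m.col1),
  NULLIF(s.col2, m.col2),
  NULLIF(s.col3, m.col3),
  NULLIF(s.col4, m.col4),
  NULLIF(s.col5, m.col5)
FROM your_stream s
CROSS JOIN mask m;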
I don't see what the connection to a set operation like EXCEPT would be.
Anyway, this is how you could proceed:
INSERT INTO destination (col1, col2, col3, ...)
SELECT CASE WHEN incoming_col1 <> mask.col1
            THEN incoming_col1
       END,
       CASE WHEN incoming_col2 <> mask.col2
            THEN incoming_col2
       END,
       ...
FROM (<your complex joined select>) AS incoming
CROSS JOIN mask;
Related
I'm looking for the best solution for the scenario below, in DB2.
I have data in a single-column table, and I want to split the data in each row into individual columns.
My input table:
Data
A012356TEST12501
M012385749635201
N012385749635201
B012356TEST12501
A022356TEST12501
M022385749635201
N022385749635201
B022356TEST12501
I want to move the data in the above table to multiple tables, i.e. the data will be split across 4 different tables. For each table, I have the column lengths to use when substringing.
Table_A
col1 col2 col3 col4 col5
A 01 2356 TEST 12501
A 02 2356 TEST 12501
Table_M
col1 col2 col3 col4 col5
M 012 3857 49635 201
M 022 2385 74963 201
Similarly for Table_N and Table_B.
You have no choice in that case; you have to use SUBSTR(), a fairly CPU-intensive string function when run on lots of data:
WITH
-- your input, in a WITH CLAUSE
data_t(data_s) AS (
SELECT 'A012356TEST12501' FROM sysibm.sysdummy1
UNION ALL SELECT 'M012385749635201' FROM sysibm.sysdummy1
UNION ALL SELECT 'N012385749635201' FROM sysibm.sysdummy1
UNION ALL SELECT 'B012356TEST12501' FROM sysibm.sysdummy1
UNION ALL SELECT 'A022356TEST12501' FROM sysibm.sysdummy1
UNION ALL SELECT 'M022385749635201' FROM sysibm.sysdummy1
UNION ALL SELECT 'N022385749635201' FROM sysibm.sysdummy1
UNION ALL SELECT 'B022356TEST12501' FROM sysibm.sysdummy1
)
SELECT
SUBSTR(data_s, 1,1) AS col1
, SUBSTR(data_s, 2,2) AS col2
, SUBSTR(data_s, 4,4) AS col3
, SUBSTR(data_s, 8,5) AS col4
, SUBSTR(data_s,13,3) AS col5
FROM data_t;
-- out col1 | col2 | col3 | col4 | col5
-- out ------+------+------+-------+------
-- out A | 01 | 2356 | TEST1 | 250
-- out M | 01 | 2385 | 74963 | 520
-- out N | 01 | 2385 | 74963 | 520
-- out B | 01 | 2356 | TEST1 | 250
-- out A | 02 | 2356 | TEST1 | 250
-- out M | 02 | 2385 | 74963 | 520
-- out N | 02 | 2385 | 74963 | 520
-- out B | 02 | 2356 | TEST1 | 250
But if you get that data from an ASCII file, you could also:
CREATE TABLE splitup (
col1 CHAR(1)
, col2 SMALLINT
, col3 SMALLINT
, col4 CHAR(5)
, col5 SMALLINT
);
LOAD FROM your_in_file OF ASC MODIFIED BY STRIPTBLANKS RECLEN=16
METHOD L (
1 1
, 2 3
, 4 7
, 8 12
,13 16
)
INSERT INTO splitup (
col1
, col2
, col3
, col4
, col5
);
If your four tables have the same number of columns, all of the same data types, you could consider an INSERT into a UNION ALL view of the four (row-organized) tables, each constrained on COL1.
However, that is probably not the case (e.g. Col4 in your example looks like a CHAR in Table_A and an INTEGER or DECIMAL in Table_B).
The simple solution is just 4 INSERT statements, with a SUBSTR for each column and a WHERE SUBSTR(Data,1,1) = 'A', = 'N', etc. for each target table.
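A minimal sketch of one of those four INSERTs, following the Table_A layout from the question (the source table name input_table is an assumption; Data is the single input column):

-- Rows whose first character is 'A' go to Table_A, split at fixed positions
-- (1 / 2-3 / 4-7 / 8-11 / 12-16 per the Table_A example)
INSERT INTO Table_A (col1, col2, col3, col4, col5)
SELECT SUBSTR(Data, 1, 1)
     , SUBSTR(Data, 2, 2)
     , SUBSTR(Data, 4, 4)
     , SUBSTR(Data, 8, 4)
     , SUBSTR(Data, 12, 5)
FROM input_table
WHERE SUBSTR(Data, 1, 1) = 'A';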
I have a PSQL table
+--------+------+------+------+
| Col1 | Col2 | Col3 | Col4 |
+--------+------+------+------+
| 001 | 00A | 00B | 001 |
| 001001 | 00A | 00B | 001 |
| 002 | 00X | 00Y | 002 |
| 002002 | 00X | 00Y | 002 |
+--------+------+------+------+
I have the following PSQL query:
select *
from my_table
where (Col1 = '001' or Col4 = '001')
and Col2 = '00A'
order by Col3 asc;
I get the first two rows.
What happens here is that both sides of the OR condition match. I need only one of the OR conditions to apply: if the first condition (Col1 = '001001') is true, then the next condition should not be evaluated.
I need to select only the 2nd row (| 001001 | 00A | 00B | 001 |).
I have built another query using EXCEPT:
select *
from my_table
where (Col1 = '001' or Col4 = '001')
and Col2 = '00A'
except (select *
from my_table
where Col1 != '001'
and Col2 = '00A')
order by Col3 asc
limit 1;
I would like to know if there is a more elegant query for this job.
Your explanation is confusing, as you say col1 = '001001' in one place but use '001' in the query. But I presume you want a hierarchy of comparisons, returning the highest-priority row per group (col2, col3, col4). Use DISTINCT ON. Change the conditions in whichever way you like to return the appropriate row.
SELECT DISTINCT ON (col2, col3, col4) *
FROM my_table WHERE col2 = '00A'
ORDER BY col2,
col3,
col4,
CASE
WHEN col1 = '001001' THEN 1
WHEN col4 = '001' THEN 2
END;
Does this give you what you want?
select *
from my_table
where (Col1 = '001' and Col2 != '00A')
or ((Col1 is null or Col1 = '') and Col4 = '001' and Col2 = '00A')
order by Col3 asc;
I need to merge a table with ID and various bit flags like this
-----------------
a1 | x | | x |
-----------------
a1 | | x | |
-----------------
a1 | | | |
-----------------
b2 | x | | |
-----------------
b2 | | | |
-----------------
c3 | x | x | x |
into this form:
-----------------
a1 | x | x | x |
-----------------
b2 | x | | |
-----------------
c3 | x | x | x |
The problem is that the data are joined by a kind of option ID; each option has a unique ID which is joined with a1, b2. When I try to SELECT with DISTINCT, I get the results from table number 1. I can do it with subqueries in the SELECT, but that is a really weak solution for performance reasons.
Do you have any idea how to select and combine all these flags into a single row?
Use aggregation:
select col1, max(col2), max(col3), max(col4)
from table_name
group by col1
For the given result set you can use MIN and GROUP BY:
SELECT
tbl.Col
, MIN(tbl.Col1) Col1
, MIN(tbl.Col2) Col2
, MIN(tbl.Col3) Col3
FROM #table tbl
GROUP BY tbl.Col
However, if the flag columns contain empty strings rather than NULLs, use MAX(); otherwise MIN() returns the empty strings:
SELECT
tbl.Col
, MAX(tbl.Col1) Col1
, MAX(tbl.Col2) Col2
, MAX(tbl.Col3) Col3
FROM #table tbl
GROUP BY tbl.Col
For example:
CREATE TABLE #table
(
    Col VARCHAR(50),
    Col1 VARCHAR(50),
    Col2 VARCHAR(50),
    Col3 VARCHAR(50)
)
INSERT INTO #table
(
Col,
Col1,
Col2,
Col3
)
VALUES
( 'a1', -- Col - varchar(50)
'x', -- Col1 - varchar(50)
Null, -- Col2 - varchar(50)
'x' -- Col3 - varchar(50)
)
, ('a1', NULL, 'x', null)
, ('a1', NULL, 'x', null)
, ('b2', 'x', null, null)
, ('b2', null, null, null)
, ('c3', 'x', 'x', 'x')
SELECT
tbl.Col
, MIN(tbl.Col1) Col1
, MIN(tbl.Col2) Col2
, MIN(tbl.Col3) Col3
FROM #table tbl
GROUP BY tbl.Col
OUTPUT:
Col  Col1  Col2  Col3
a1   x     x     x
b2   x     NULL  NULL
c3   x     x     x
You want aggregation:
select col1, max(col2), max(col3), max(col4)
from your_table t
group by col1;
This assumes the blank values are NULL.
The general solution for such a situation is to simply aggregate and either use MIN or MAX on the columns.
SQL Server's data type BIT, however, is quirky. It's a little like a BOOLEAN, but not a real boolean. It is a little like a very limited numeric type, but it isn't really a numeric type either. And there are simply no aggregate functions for this data type. In standard SQL you'd have ANY and EVERY for the BOOLEAN type. In PostgreSQL you have BIT_OR and BIT_AND for BIT and BOOL_OR and BOOL_AND for BOOLEAN. SQL Server has nothing.
So convert your columns to a numeric type before using MIN (which would be a bitwise AND) or MAX (which would be a bitwise OR) on it. E.g.
select
id,
max(bit1 + 0) as bit1agg,
max(bit2 + 0) as bit2agg,
max(bit3 + 0) as bit3agg
from mytable
group by id
order by id;
You can also use CAST or CONVERT instead, of course.
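For instance, the same aggregation with an explicit CAST (a sketch using the same hypothetical mytable and bit1..bit3 columns as above):

select
  id,
  max(cast(bit1 as tinyint)) as bit1agg,
  max(cast(bit2 as tinyint)) as bit2agg,
  max(cast(bit3 as tinyint)) as bit3agg
from mytable
group by id
order by id;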
I currently have a few unpivot queries that yield about 2,000 rows each. I need to take the results of those queries and put them in a new table, matching on a key.
Query Example:
Select DeviceSlot
FROM tbl1
unpivot(
    DeviceSlot
    For col in(
        col1,
        col2,
        col3
    )
) AS [Unpivot]
Now I need to match the results from the query and insert them into a new table with about 20,000 rows.
Pseudo-Code for this:
Insert Into tbl2(DeviceSlot)
Select DeviceSlot
FROM tbl1
unpivot(
    DeviceSlot
    For col in(
        col1,
        col2,
        col3
    )
) AS Unpivot2
Where tbl1.key = tbl2.key
I've been pretty confused on how to do this, and I apologize if it is not clear.
I also have another unpivot query doing the same thing for different columns.
Not sure what you are asking for. When unpivoting to "normalize" data, the wanted "key" is typically derived during the unpivot; for example, below the id column of the original table is repeated in the unpivoted data to serve as a foreign key for some new table.
MS SQL Server 2014 Schema Setup:
CREATE TABLE Table1
([id] int, [col1] varchar(2), [col2] varchar(2), [col3] varchar(2))
;
INSERT INTO Table1
([id], [col1], [col2], [col3])
VALUES
(1, 'a', 'b', 'c'),
(2, 'aa', 'bb', 'cc')
;
Query 1:
select id as table1_fk, colheading, colvalue
from (
select * from table1
) t
unpivot (
colvalue for colheading in (col1, col2, col3)
) u
Results:
| table1_fk | colheading | colvalue |
|-----------|------------|----------|
| 1 | col1 | a |
| 1 | col2 | b |
| 1 | col3 | c |
| 2 | col1 | aa |
| 2 | col2 | bb |
| 2 | col3 | cc |
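To cover the insert part of the question, the unpivoted rows (key included) can then be written to the target table directly. A sketch, assuming the target has columns matching the result above (the tbl2 column names here are placeholders):

-- Hypothetical target: tbl2(table1_fk, colheading, DeviceSlot)
INSERT INTO tbl2 (table1_fk, colheading, DeviceSlot)
SELECT id, colheading, colvalue
FROM (
    SELECT * FROM table1
) t
UNPIVOT (
    colvalue FOR colheading IN (col1, col2, col3)
) u;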
I have three columns in a table.
Requirement: the values of col2 and col3 should make up col1.
Below is the table I have right now, which needs to be changed.
col1 col2 col3
AB football
AB football
ER driving
ER driving
TR city
TR city
Below is the table it needs to be changed to:
col1 col2 col3
AB_football_1 AB football
AB_football_2 AB football
ER_driving_1 ER driving
ER_driving_2 ER driving
TR_city_1 TR city
TR_city_2 TR city
As you can see in col1, it should take col2, then an underscore, then col3, then an underscore, then a number that increments per combination of col2 and col3 values.
Can this be approached within a CREATE, SELECT, or INSERT statement, or a trigger? If so, any tips would be appreciated.
Try this:
SELECT col2
       || '_'
       || col3
       || '_'
       || rank col1,
       col2,
       col3
FROM (SELECT col2,
             col3,
             ROW_NUMBER()
               OVER (PARTITION BY col2, col3
                     ORDER BY col2) rank
      FROM my_table)
Output
+---------------+------+----------+
| COL1 | COL2 | COL3 |
+---------------+------+----------+
| AB_football_1 | AB | football |
| AB_football_2 | AB | football |
| ER_driving _1 | ER | driving |
| ER_driving _2 | ER | driving |
| TR_city _1 | TR | city |
| TR_city _2 | TR | city |
+---------------+------+----------+
/* table is */
col1       col2   col3
           test   123

/* Try this query */
UPDATE `demo`
SET `col1` = concat(col2, '_', col3)

/* Output will be */
col1       col2   col3
test_123   test   123
This is easy to do (in a SELECT) using the row_number() window function, something like this:
select
col2 ||'_'|| col3 ||'_'|| row_number() over(partition by col2, col3 order by col2) as col1,
col2,
col3
from t
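If the result should be materialized rather than just selected, the same expression can feed an INSERT. A sketch, where target_table is a placeholder name for wherever the rows should go:

-- target_table is hypothetical; col1 receives the generated value
insert into target_table (col1, col2, col3)
select
  col2 ||'_'|| col3 ||'_'|| row_number() over(partition by col2, col3 order by col2) as col1,
  col2,
  col3
from t;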