MS SQL Server multiple column sort independently - sql

To get things out of the way: due to the structure I'm working with, I'm stuck trying to sort some columns of data in SQL in a way that is evading me.
The problem is that I need multiple sets of two columns sorted independently. For example, I have something like this:
Name | Val1 | Name | Val2 | Name | Val3
A | 2 | A | 1 | A | 3
B | 1 | B | 3 | B | 2
C | 3 | C | 2 | C | 1
and I need the table sorted so that the highest of each value comes first:
Name | Val1 | Name | Val2 | Name | Val3
C | 3 | B | 3 | A | 3
A | 2 | C | 2 | B | 2
B | 1 | A | 1 | C | 1
I don't seem to know how to organise this using ROW_NUMBER(). Through long searches I have managed to separate out individual columns for ordering, but I don't know how to keep two columns linked together while the other pairs sort independently. Can anyone help?
EDIT:
The data is extrapolated from one table after calculations have been done for the values.
So say I have my table of:
Name | Val1 | Val2 | Val3 |
A | 2 | 1 | 3 |
B | 1 | 3 | 2 |
C | 3 | 2 | 1 |
The names and values here are just used as examples; the real values differ wildly.
So from that table of final results I need to get the results in a format where, for each individual value column, the name with the highest value is on top:
SELECT Name AS N1,
       Val1,
       Name AS N2,
       Val2
       etc
EDIT: Example:
Name1|Units|Name2|Units| Name3|Units
AF |218 |AF |0.83 | AF |1.04
AD |172 |AD |0.49 | AD |1.05
AF |116 |AF |0.87 | AF |1.06
AF |324 |AF |0.84 | AF |1.10

If I understand your question correctly, consider the following approach:
CREATE TABLE #NameValue (
    Name varchar(10),
    Val1 int,
    Val2 int,
    Val3 int
);

INSERT INTO #NameValue
VALUES
    ('A', 102, 201, 303),
    ('B', 101, 203, 302),
    ('C', 103, 202, 301);

WITH nv1 AS (
    SELECT Name, Val1, ROW_NUMBER() OVER (ORDER BY Val1 DESC) AS RN1
    FROM #NameValue
),
nv2 AS (
    SELECT Name, Val2, ROW_NUMBER() OVER (ORDER BY Val2 DESC) AS RN2
    FROM #NameValue
),
nv3 AS (
    SELECT Name, Val3, ROW_NUMBER() OVER (ORDER BY Val3 DESC) AS RN3
    FROM #NameValue
)
SELECT
    nv1.Name AS Name1, nv1.Val1,
    nv2.Name AS Name2, nv2.Val2,
    nv3.Name AS Name3, nv3.Val3
FROM nv1
LEFT JOIN nv2 ON (nv1.RN1 = nv2.RN2)
LEFT JOIN nv3 ON (nv1.RN1 = nv3.RN3);
Output:
Name1 Val1 Name2 Val2 Name3 Val3
C 103 B 203 A 303
A 102 C 202 B 302
B 101 A 201 C 301

Related

copy one table to another table with different columns

I have a TableA with columns (id, name, A, B, C, p_id).
I want to convert TableA to TableB; TableB's columns are (id, name, alphabets, alphabets_value, p_id).
Records in TableA:
id | name | A | B | C | p_id
1 | xyz | a | b | | 1
2 | opq | a`| b`| c`| 1
Expected in TableB:
u_id | id | name | alphabets | alphabets_value | p_id
1 | 1 | xyz | A | a | 1
2 | 1 | xyz | B | b | 1
3 | 2 | opq | A | a` | 1
4 | 2 | opq | B | b` | 1
5 | 2 | opq | C | c` | 1
I want the TableB output; I'm currently using Microsoft SQL Server.
This is an unpivot, probably most easily explained by a UNION ALL:
SELECT id, name, 'A' as alphabets, a as alphabets_value, p_id FROM TableA
UNION ALL
SELECT id, name, 'B' as alphabets, b as alphabets_value, p_id FROM TableA
UNION ALL
SELECT id, name, 'C' as alphabets, c as alphabets_value, p_id FROM TableA
You can then use a WHERE clause to remove the NULLs from this, and ROW_NUMBER to give yourself a fake u_id:
SELECT ROW_NUMBER() OVER (ORDER BY id, alphabets) as u_id, x.*
FROM
(
    SELECT id, name, 'A' as alphabets, a as alphabets_value, p_id FROM TableA
    UNION ALL
    SELECT id, name, 'B' as alphabets, b as alphabets_value, p_id FROM TableA
    UNION ALL
    SELECT id, name, 'C' as alphabets, c as alphabets_value, p_id FROM TableA
) x
WHERE
    x.alphabets_value IS NOT NULL
Once you have the result set you want, using INSERT INTO, UPDATE ... FROM or MERGE to get it into TableB is quite trivial.
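For example, a minimal INSERT sketch along those lines (assuming TableB already exists and its u_id is an IDENTITY column that fills itself in; adjust to your actual schema):
INSERT INTO TableB (id, name, alphabets, alphabets_value, p_id)
SELECT x.id, x.name, x.alphabets, x.alphabets_value, x.p_id
FROM
(
    SELECT id, name, 'A' as alphabets, a as alphabets_value, p_id FROM TableA
    UNION ALL
    SELECT id, name, 'B' as alphabets, b as alphabets_value, p_id FROM TableA
    UNION ALL
    SELECT id, name, 'C' as alphabets, c as alphabets_value, p_id FROM TableA
) x
WHERE x.alphabets_value IS NOT NULL;   -- u_id is assumed to be generated by the table, so it isn't listed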

Unexpected effect of filtering on result from crosstab() query with multiple values

I have a crosstab() query similar to the one in my previous question:
Unexpected effect of filtering on result from crosstab() query
The common case is to filter the extra1 field with multiple values: extra1 IN (value1, value2, ...). For each value included in the extra1 filter, I have added an ordering expression like (extra1 <> valueN), as appears in the above-mentioned post. The resulting query is as follows:
SELECT *
FROM crosstab(
'SELECT row_name, extra1, extra2..., another_table.category, value
FROM table t
JOIN another_table ON t.field_id = another_table.field_id
WHERE t.field = certain_value AND t.extra1 IN (val1, val2, ...) --> more values
ORDER BY row_name ASC, (extra1 <> val1), (extra1 <> val2)', ... --> more ordering expressions
'SELECT category_name FROM category_name WHERE field = certain_value'
) AS ct(extra1, extra2...)
WHERE extra1 = val1; --> condition on the result
The first value of extra1 included in the ordering expression, value1, gets the correct resulting rows. However, the following ones, value2, value3..., get the wrong number of results, with fewer rows each. Why is that?
UPDATE:
Giving this as our source table (table t):
+----------+--------+--------+------------------------+-------+
| row_name | Extra1 | Extra2 | another_table.category | value |
+----------+--------+--------+------------------------+-------+
| Name1 | 10 | A | 1 | 100 |
| Name2 | 11 | B | 2 | 200 |
| Name3 | 12 | C | 3 | 150 |
| Name2 | 11 | B | 3 | 150 |
| Name3 | 12 | C | 2 | 150 |
| Name1 | 10 | A | 2 | 100 |
| Name3 | 12 | C | 1 | 120 |
+----------+--------+--------+------------------------+-------+
And this as our category table:
+-------------+--------+
| category_id | value |
+-------------+--------+
| 1 | Cat1 |
| 2 | Cat2 |
| 3 | Cat3 |
+-------------+--------+
Using the CROSSTAB, the idea is to get a table like this:
+----------+--------+--------+------+------+------+
| row_name | Extra1 | Extra2 | cat1 | cat2 | cat3 |
+----------+--------+--------+------+------+------+
| Name1 | 10 | A | 100 | 100 | |
| Name2 | 11 | B | | 200 | 150 |
| Name3 | 12 | C | 120 | 150 | 150 |
+----------+--------+--------+------+------+------+
The idea is to be able to filter the resulting table so that I only get rows whose Extra1 column has the value 10 or 11, as follows:
+----------+--------+--------+------+------+------+
| row_name | Extra1 | Extra2 | cat1 | cat2 | cat3 |
+----------+--------+--------+------+------+------+
| Name1 | 10 | A | 100 | 100 | |
| Name2 | 11 | B | | 200 | 150 |
+----------+--------+--------+------+------+------+
The problem is that in my query I get different result sizes for Extra1 = 10 and Extra1 = 11. With (Extra1 <> 10) I can get the correct result size for that value, but not for the value 11.
Here is a fiddle demonstrating the problem in more detail:
https://dbfiddle.uk/?rdbms=postgres_11&fiddle=5c401f7512d52405923374c75cb7ff04
All "extra" columns are copied from the first row of the group (as pointed out in my previous answer)
While you filter with:
.... WHERE extra1 = 'val1';
...it makes no sense to add more ORDER BY expressions on the same column. Only rows that have at least one extra1 = 'val1' in their source group survive.
From your various comments, I guess you might want to see all distinct existing values of extra - within the set filtered in the WHERE clause - for the same unixdatetime. If so, aggregate before pivoting. Like:
SELECT *
FROM crosstab(
   $$
   SELECT unixdatetime, x.extras, c.name, s.value
   FROM (
      SELECT unixdatetime, array_agg(extra) AS extras
      FROM (
         SELECT DISTINCT unixdatetime, extra
         FROM   source_table s
         WHERE  extra IN (1, 2)  -- condition moves here
         ORDER  BY unixdatetime, extra
         ) sub
      GROUP BY 1
      ) x
   JOIN source_table s USING (unixdatetime)
   JOIN category_table c ON c.id = s.gausesummaryid
   ORDER BY 1
   $$
 , $$SELECT unnest('{trace1,trace2,trace3,trace4}'::text[])$$
   ) AS final_result (unixdatetime int
                    , extras int[]
                    , trace1 numeric
                    , trace2 numeric
                    , trace3 numeric
                    , trace4 numeric);
Aside: advice given in the following related answer about the 2nd function parameter applies to your case as well:
PostgreSQL crosstab doesn't work as desired
I demonstrate a static 2nd parameter query above. While we're at it, you don't need to join to category_table at all. The same result, a bit shorter and faster:
SELECT *
FROM crosstab(
   $$
   SELECT unixdatetime, x.extras, s.gausesummaryid, s.value
   FROM (
      SELECT unixdatetime, array_agg(extra) AS extras
      FROM (
         SELECT DISTINCT unixdatetime, extra
         FROM   source_table
         WHERE  extra IN (1, 2)  -- condition moves here
         ORDER  BY unixdatetime, extra
         ) sub
      GROUP BY 1
      ) x
   JOIN source_table s USING (unixdatetime)
   ORDER BY 1
   $$
 , $$SELECT unnest('{923,924,926,927}'::int[])$$
   ) AS final_result (unixdatetime int
                    , extras int[]
                    , trace1 numeric
                    , trace2 numeric
                    , trace3 numeric
                    , trace4 numeric);
db<>fiddle here - added my queries at the bottom of your fiddle.

SQL Server - Return different values based on row count

I have two tables. Table 1 is the target table; I've provided the required values in idCode1 - idCode3.
Table 2 is the source; each idBill will have one or more idCode. If there are two rows representing 2 unique idCode values, then I want to insert into idCode1 and idCode2 respectively.
I was thinking of a CASE statement where I could test for the number of idCode values and then insert the first value into idCode1, the second into idCode2, etc. When I tried a bunch of CASE, WHEN, EXISTS, COUNT, etc., it would always return 2 rows if there were 2 idCode values, and the idCode would only be inserted into idCode1. The end result must be a single row in table 1 for each idBill, with however many idCode values that idBill has inserted into idCode1, 2, 3.
Sorry I couldn’t post the picture as I don’t have enough points. Here is a rough pipe delimited example of it:
| idTable1 | idBill | idCode1 | idCode2 | idCode3 |
| 1 | 1234 | A1 | A2 | |
| 2 | 1235 | E3 | E2 | A1 |
| idTable2 | idBill | codeId |
| 10 | 1234 | A1 |
| 20 | 1234 | A2 |
| 30 | 1235 | E3 |
| 40 | 1235 | E2 |
| 50 | 1235 | A1 |
Hopefully this makes sense. Thanks so much!
You can use conditional aggregation:
select s.idbill,
       max(case when seqnum = 1 then s.codeid end) as codeid1,
       max(case when seqnum = 2 then s.codeid end) as codeid2,
       max(case when seqnum = 3 then s.codeid end) as codeid3
into target
from (select s.*, row_number() over (partition by idbill order by idtable2) as seqnum
      from source s
     ) s
group by s.idbill;
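Mapped onto the column names from the question, a sketch might look like the following (the source table's name isn't given, so dbo.Table2 below is a stand-in; SELECT ... INTO creates a new table, so swap it for an INSERT INTO ... SELECT if table 1 already exists):
select t2.idBill,
       max(case when seqnum = 1 then t2.codeId end) as idCode1,
       max(case when seqnum = 2 then t2.codeId end) as idCode2,
       max(case when seqnum = 3 then t2.codeId end) as idCode3
into dbo.Table1_new        -- hypothetical target name; adjust to your real table 1
from (select s.*,
             row_number() over (partition by s.idBill order by s.idTable2) as seqnum
      from dbo.Table2 s    -- stand-in name for the source table
     ) t2
group by t2.idBill;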

How to rewrite a LEFT JOIN

Please, consider the following query:
create table lt (id1 int, val1 string);
insert into lt VALUES (1, "one"), (2, "two"), (3, "three");
create table rt (id2 int, val2 string);
insert into rt VALUES (2, "two"), (3, "three"), (4, "four");
select * from lt left join rt on id1=id2;
+-----+-------+------+-------+
| id1 | val1 | id2 | val2 |
+-----+-------+------+-------+
| 1 | one | NULL | NULL |
| 2 | two | 2 | two |
| 3 | three | 3 | three |
+-----+-------+------+-------+
For this specific example I can rewrite the LEFT JOIN as an INNER JOIN plus a query that gets all IDs that are not in the "rt" table:
select lt.*, NULL as id2, NULL as val2 from lt where id1 not in (select id2 from rt)
union all
select * from lt join rt on id1=id2;
+-----+-------+------+-------+
| id1 | val1 | id2 | val2 |
+-----+-------+------+-------+
| 1 | one | NULL | NULL |
| 2 | two | 2 | two |
| 3 | three | 3 | three |
+-----+-------+------+-------+
Both queries give the same result for this example. But is this generally true? Can I rewrite any LEFT JOIN in this fashion (or maybe there is a shorter way)?
You can try the query below -
DEMO
select val1, NULL as id2, NULL as val2 from lt where id1 not in (select id2 from rt)
union
select val1, id1, val1 from lt where id1 in (select id2 from rt)
OUTPUT:
val1 id2 val2
one
two 2 two
three 3 three
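One caveat for the general case: NOT IN matches nothing at all if the subquery returns any NULL, so when rt.id2 can be NULL a NOT EXISTS variant of the same rewrite is the safer form. A sketch using the same lt/rt tables as above:
-- unmatched rows; NOT EXISTS is NULL-safe, unlike NOT IN
select lt.*, NULL as id2, NULL as val2
from lt
where not exists (select 1 from rt where rt.id2 = lt.id1)
union all
-- matched rows
select lt.*, rt.id2, rt.val2
from lt
join rt on lt.id1 = rt.id2;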

Oracle SQL foreach

I have this table:
+------+------+------+------+
| User | Val1 | Val2 | Val3 |
+------+------+------+------+
| Usr1 | v4 | a | x |
+------+------+------+------+
| Usr2 | v4 | c | y |
+------+------+------+------+
| Usr3 | v6 | b | z |
+------+------+------+------+
| Usr4 | v5 | d | z |
+------+------+------+------+
| Usr5 | v4 | c | z |
+------+------+------+------+
The values in Val1 and Val2 aren't static (over time it's possible to have Val1 = v6, v7, etc. and Val2 = f, g, h, etc.).
So, I need to obtain this result:
Name | Number
v4 | 3
a | 1
c | 2
v6 | 1
b | 1
v5 | 1
d | 1
where Name is the value from Val1 or Val2, and Number is the count of its occurrences.
If I were in a functional programming language I could use a foreach operator...
Is there any solution to do this in ONE query with SQL for an Oracle DB?
EDIT:
Is it possible in PL/SQL?
select "Val1" as "name", "Val2", count(0) as "number"
from your_table
group by "Val1", rollup("Val2")
order by "Val1", GROUPING("Val2") desc, "Val2"
fiddle
The way you do this in SQL is:
select val1,
       (case when grouping(val2) = 1 then 'Total' else val2 end) as val2,
       count(*) as "Number"
from t
group by rollup(val1, val2)
This doesn't do exactly what you want in terms of output. Remember though that SQL tables and result sets have well-defined columns. Final output for reporting purposes is often done in the application.
with t (col1, col2) as
(
  select 'v4','a' from dual union all
  select 'v4','c' from dual union all
  select 'v6','b' from dual union all
  select 'v5','d' from dual union all
  select 'v4','c' from dual
)
select decode(grouping(col2),1,col1,col2) col, count(*)
from t
group by rollup(col1, col2)
having grouping(col1)*grouping(col2) = 0
order by col1, grouping(col2) desc, col2;
CO COUNT(*)
-- ----------
v4 3
a 1
c 2
v5 1
d 1
v6 1
b 1