Tricky SQL ordering

I have a table on Oracle 11g that looks like this:
col_1 | col_2     | col_3
------+-----------+------
1     | 111222001 | A
2     | 111222001 | B
3     | 111222002 | A
4     | 111222002 | B
5     | 111555001 | B
6     | 111555003 | A
7     | 111555003 | B
I want to order it to get this:
col_1 | col_2     | col_3
------+-----------+------
2     | 111222001 | B
4     | 111222002 | B
1     | 111222001 | A
3     | 111222002 | A
5     | 111555001 | B
7     | 111555003 | B
6     | 111555003 | A
The logic behind it: notice how the col_2 values are made up of three triplets, 111-222-333.
Within each group, I want to order by the third triplet (111-222-"333") and get the entries that have col_3 = 'B' first, then those that have col_3 = 'A'.
When the second triplet (111-"222"-333) changes, the same pattern starts over.
Thanks in advance. I figured out a way to do it, but it's really ugly; maybe someone can come up with a way to do it beautifully.

Select *
from table
order by col_3 desc, col_2, col_1;

Hi, I don't know why I didn't see it, but it was quite simple. Sorry for troubling you.
select col_1, col_2, col_3 from table_name
order by substr(col_2, 1, 6), col_3 desc, substr(col_2, 7, 3);
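If col_3 ever holds values other than 'A' and 'B', relying on the descending alphabetical sort may stop matching the intended order; a CASE expression spells the priority out explicitly. A sketch against the same table_name:

select col_1, col_2, col_3
from table_name
order by substr(col_2, 1, 6),                                    -- first two triplets (111-222)
         case col_3 when 'B' then 0 when 'A' then 1 else 2 end,  -- 'B' rows before 'A' rows
         substr(col_2, 7, 3);                                    -- then the third triplet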

Related

Select value from different row based on value of certain column

For example, I have data like this:
date | col_1 | Col_2 | Col_3 | Col_4
---------------------------------------------
20021 | 1 | a | null | a
20022 | 2 | a | null | a
20023 | 3 | a | null | a
20024 | 4 | a | 4.5 | a
20031 | 1 | a | 11 | b
20032 | 2 | a | 2 | b
20033 | 3 | a | 9 | b
20034 | 4 | a | 11 | b
What I need is: when the value in Col_3 is null and Col_1 is not 4,
then select the value of Col_3 from the row where Col_1 = 4 that has the same year and Col_4.
I tried using this case statement:
select col_2, date, col_1, col_4,
case when col_3 is null and col_1 != 1
then (select col_3 from table s where s.date = 4
and s.col_1= seg.col_1 and s.col_4= seg.col_4
and left(s.date,4) = left(seg.date,4))
else seg.col_3
end as col_3
from table seg
but for some reason it's not doing what I need it to do
I need it to change the results of the table above to become like this:
date | col_1 | Col_2 | Col_3 | Col_4
---------------------------------------------
20021 | 1 | a | 4.5 | a
20022 | 2 | a | 4.5 | a
20023 | 3 | a | 4.5 | a
20024 | 4 | a | 4.5 | a
20031 | 1 | a | 11 | b
20032 | 2 | a | 2 | b
20033 | 3 | a | 9 | b
20034 | 4 | a | 11 | b
Maybe use an OUTER APPLY (or CROSS APPLY, if you can guarantee quarter 4 is always there) to make the quarter-4 row available per year and per col_4, then just use it where you have to.
select col_2, date, col_1, col_4,
case when col_3 is null and col_1 != 4
then backfill.col_3
else seg.col_3
end as col_3
from table seg
outer apply (select col_3 from table where col_1 = 4
             and left(date,4) = left(seg.date,4) and col_4 = seg.col_4) backfill
If you'd rather stick with CASE, this should work.
case when col_3 is null and col_1 != 4
then (select col_3 from table s where s.col_1 = 4
and s.col_4 = seg.col_4
and left(s.date,4) = left(seg.date,4))
else seg.col_3
end as col_3
Just use window functions and coalesce():
select date, col_1, col_2,
       coalesce(col_3,
                max(case when col_1 = 4 then col_3 end) over (partition by col_4)
               ) as col_3,
       col_4
from t;
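If the same col_4 value could repeat across years, it may be safer to partition on the year prefix as well. A sketch of that variant (assuming date is stored as text in year-plus-quarter form such as '20021'):

select date, col_1, col_2, col_4,
       coalesce(col_3,
                max(case when col_1 = 4 then col_3 end)
                    over (partition by left(date, 4), col_4)  -- quarter-4 value per year/col_4 group
               ) as col_3
from t;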

Over Partition to find duplicates and remove them based on criteria SQL

I hope everyone is doing well. I have a dilemma that I cannot quite figure out.
I am trying to find a unique value for a field that is not a duplicate.
For example:
Table 1
|Col1 | Col2| Col3 |
| 123 | A | 1 |
| 123 | A | 2 |
| 12 | B | 1 |
| 12 | B | 2 |
| 12 | C | 3 |
| 12 | D | 4 |
| 1 | A | 1 |
| 2 | D | 1 |
| 3 | D | 1 |
Col1 is the field that would have the duplicate values. Col2 is the owner of the value in Col1. Col3 uses ROW_NUMBER() OVER (PARTITION BY ...) to number the rows in ascending order.
The goal I am trying to accomplish is to remove the value in Col1 if it is not truly unique when looking at Col2.
Example:
Col1 has the value 123 and Col2 has the value A. Although there are two instances of 123 being owned by A, I can determine that it is indeed unique.
Now look at the rows where Col1 has the value 12, with Col2 values of B, C, and D.
Value 12 is associated with three different owners, thus eliminating 12 from our result list.
So in the end I would like to see a result table such as this:
|Col1 | Col2|
| 123 | A |
| 1 | A |
| 2 | D |
| 3 | D |
To summarize, I would like to first use the partition numbers to identify whether the value in Col1 is repeated. From there I want to verify that the values in Col2 are the same. If so, the Col1/Col2 pair remains as one single entry. However, if the values in Col2 do not match, all records for that Col1 value are removed.
I will provide my query if needed.
Update**
I failed to mention that table 1 is the result of inner joining two tables.
So Col1 comes from table a and Col2 comes from table b.
The values in table a (the Colx codes) are hard to interpret, so I had to make sense of them and assign proper name values via table b.
The join query I used to combine the two is:
Select a.Col1, b.Col2 FROM Table_a a INNER JOIN Table_b b on a.Colx = b.Colx
Update**
Table a:
|Col1 | Colx| Col3 |
| 123 | SMS | 1 |
| 123 | S9W | 2 |
| 12 | NAV | 1 |
| 12 | NFR | 2 |
| 12 | ABC | 3 |
| 12 | DEF | 4 |
| 1 | SMS | 1 |
| 2 | DEF | 1 |
| 3 | DES | 1 |
Table b:
|Colx | Col2|
| SMS | A |
| S9W | A |
| DEF | D |
| DES | D |
| NAV | B |
| NFR | B |
| ABC | C |
Above are sample data for both tables that get joined in order to create the first table displayed in this body.
Thank you all so much!
The NOT EXISTS operator can be used to do this task:
SELECT distinct Col1, Col2
FROM table t
WHERE NOT EXISTS (
    SELECT 1 FROM table t1
    WHERE t.Col1 = t1.Col1 AND t.Col2 <> t1.Col2
)
If I understand correctly, you want:
select col1, min(col2)
from t
group by col1
having min(col2) = max(col2);
I think the third column is confusing you. It doesn't seem to play any role in the logic you want.
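Given the update, the same check can also be applied directly on top of the join of the two tables. A sketch (Table_a and Table_b are placeholder names for the joined tables):

select a.Col1, min(b.Col2) as Col2
from Table_a a
inner join Table_b b on a.Colx = b.Colx
group by a.Col1
having min(b.Col2) = max(b.Col2);   -- keep a Col1 value only when all of its owners agree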

SQL Count across columns

I know that this table structure is horrible and that I should look into database normalization, but this is what I have to work with at the moment.
I need to find the most common number across the two columns where one of them holds a specific id (in my example, 3). The two columns of a row will never hold the same value.
Query
SELECT Col1, Col2 FROM scores WHERE Col1 = 3 OR Col2 = 3
Result
+------+------+
| Col1 | Col2 |
+------+------+
| 1 | 3 |
| 3 | 1 |
| 2 | 3 |
| 6 | 3 |
| 3 | 7 |
| 3 | 9 |
| 2 | 3 |
| 5 | 3 |
+------+------+
I'm hoping to get a result like this (I don't need a count for 3, since it's the ID, but it can be included):
+-------+-------+
| Value | Count |
+-------+-------+
| 1 | 2 |
| 2 | 2 |
| 5 | 1 |
| 6 | 1 |
| 7 | 1 |
| 9 | 1 |
+-------+-------+
I've tried a few things, such as UNION and nested SELECTs, but that doesn't seem to solve this.
Any suggestions?
If you want a count of the values where the OTHER column is 3, then a UNION would work like this:
SELECT value, COUNT(*) AS theCount
FROM (
    SELECT col1 AS value
    FROM scores
    WHERE col2 = 3
    UNION ALL
    SELECT col2
    FROM scores
    WHERE col1 = 3
) T
GROUP BY value
ORDER BY value;
One way is using case:
SELECT
case Col1 when 3 then Col2 else Col1 end,
count(*)
FROM scores
WHERE Col1 = 3 OR Col2 = 3
Group by
case Col1 when 3 then Col2 else Col1 end;
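Since the stated goal is the most common number, either version can simply be sorted by the count. For example, building on the CASE approach (a sketch; the row-limiting clause, LIMIT or TOP, depends on your database):

select case Col1 when 3 then Col2 else Col1 end as val,
       count(*) as cnt
from scores
where Col1 = 3 or Col2 = 3
group by case Col1 when 3 then Col2 else Col1 end
order by cnt desc;   -- the first row holds the most common value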

Copying a SQL Server table and adding and rearranging columns

I know that if I want to make a copy of a SQL Server table, I can write a query akin to this:
SELECT *
INTO NewTable
FROM OldTable
But what if I wanted to take the contents of OldTable that may look like this:
| Column1 | Column2 | Column3 |
|---------|---------|---------|
| 1 | 2 | 3 |
| 4 | 5 | 6 |
| 7 | 8 | 9 |
and make a copy of that table but have the new table look like this:
| Column1 | Column3 | Column2 | Column4 | Column5 |
|--------- |--------- |--------- |--------- |--------- |
| 1 | 3 | 2 | 10 | 11 |
| 4 | 6 | 5 | 12 | 13 |
| 7 | 9 | 8 | 14 | 15 |
So now I've swapped Columns 2 and 3 and added Column4 and Column5. I don't need the query to populate the new columns with that data; just the bare columns.
It's a matter of modifying your select statement. SELECT * takes only the columns from the source table, in their order. You want something different - so SELECT it.
SELECT * INTO NewTable
FROM OldTable
->
SELECT Column1, Column3, Column2, ' ' AS Column4, ' ' AS Column5
INTO NewTable
FROM OldTable
This gives you very little flexibility over how the table's columns are specced, indexes and such, so it's probably better to do this another way (a proper CREATE TABLE), but if you need quick and dirty, I suppose...
You can just name the columns:
Select
[Column1], [Column3], [Column2], Cast(null as bigint) as [Column4], 0 as [Column5]
Into CopyTable
From YourTable
Just like any query, it is always preferable to use the Column names and avoid using *.
You can then add any value as [ColumnX] in the select.
You can use a cast to get the type you want in the new table.
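If you want full control over the column types, nullability, and any indexes (the "properly CREATE TABLE" route mentioned above), a sketch could look like this; the data types are assumptions, since the question doesn't state them:

CREATE TABLE NewTable (
    Column1 int,
    Column3 int,
    Column2 int,
    Column4 int NULL,   -- assumed type; adjust to whatever Column4 should eventually hold
    Column5 int NULL
);

INSERT INTO NewTable (Column1, Column3, Column2)
SELECT Column1, Column3, Column2
FROM OldTable;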

Using SWITCH() to split data from a column into distinct columns, with associated data in each row

I'm not quite sure how to properly phrase the question, but I am basically trying to develop an SQL query that SELECTs information from this table:
-------------------
| id | Val | Date |
|----|-----|------|
| 1 | A | 10/9 |
| 1 | B | 3/14 |
| 2 | A | 1/6 |
| 3 | A | 4/4 |
| 4 | B | 7/12 |
| 5 | A | 8/6 |
-------------------
And produces a table that looks like this:
------------------------------------------------
| id | Val_1 | Val_1_Date | Val_2 | Val_2_Date |
|----|-------|------------|-------|-------------
| 1 | A | 10/9 | B | 3/14 |
| 2 | A | 1/6 | | |
| 3 | A | 4/4 | | |
| 4 | | | B | 7/12 |
| 5 | A | 8/6 | | |
------------------------------------------------
I have already developed a query that pulls the values from the Val field out into distinct columns:
SELECT * FROM
(
SELECT id, MAX(SWITCH( val='A', 'A')) as Val_1,
MAX(SWITCH( val='B', 'B')) as Val_2
FROM table1 GROUP BY id
)a
WHERE Val_1 IS NULL OR Val_2 IS NULL;
How would I expand on this to pull out their associated dates?
(I am using SWITCH() instead of CASE WHEN because I am using a driver similar to that of MS Access.)
Thanks!
I think the following should work:
select id,
       MAX(SWITCH(val='A', 'A')) as Val_1,
       MAX(SWITCH(val='A', Date)) as Val_1_Date,
       MAX(SWITCH(val='B', 'B')) as Val_2,
       MAX(SWITCH(val='B', Date)) as Val_2_Date
FROM table1
GROUP BY id
I'd rather not use SWITCH, so here is a query that does what you want without it. This also answers your previous question.
Select distinct table1.ID, tableA.Val as Val_1, tableA.Date as Val_1_Date,
       tableB.Val as Val_2, tableB.Date as Val_2_Date
FROM table1
left outer join table1 as tableA on table1.id = tableA.id and tableA.Val = 'A'
left outer join table1 as tableB on table1.id = tableB.id and tableB.Val = 'B'
You can use ISNULL if that is preferred. This works because the base query selects a distinct set of IDs, and the two self-joins pull in the A and B values. When writing selects with this method, make sure that you put tableA.Val = 'A' in the join conditions and not in the WHERE clause; having tableA.Val = 'A' in the WHERE clause would filter out all of the NULLs.
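For example, if you would rather show blanks than NULLs, the outer columns can be wrapped; a sketch assuming an ISNULL(value, replacement) function like SQL Server's is available (in Access proper the equivalent would be Nz), and that Val and Date are text columns:

Select distinct table1.ID,
       ISNULL(tableA.Val, '')  as Val_1,
       ISNULL(tableA.Date, '') as Val_1_Date,
       ISNULL(tableB.Val, '')  as Val_2,
       ISNULL(tableB.Date, '') as Val_2_Date
FROM table1
left outer join table1 as tableA on table1.id = tableA.id and tableA.Val = 'A'
left outer join table1 as tableB on table1.id = tableB.id and tableB.Val = 'B'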