I have a table structure with columns similar to the following:
ID | line | value
1 | 1 | 10
1 | 2 | 5
2 | 1 | 6
3 | 1 | 7
3 | 2 | 4
Ideally, I'd like to pull the following:
ID | value
1 | 5
2 | 6
3 | 4
One solution would be to do something like the following:
select a.ID, a.value
from myTable a
inner join (select id, max(line) as line from myTable group by id) b
    on a.id = b.id and a.line = b.line
Given the size of the table and that this is just a part of a larger pull, I'd like to see if there's a more elegant / simpler way of pulling this directly.
This is a task for OLAP functions:
select *
from myTable a
qualify
rank() -- assign a rank for each id
over (partition by id
order by line desc) = 1
This might return multiple rows per id if several rows share the same max line. If you want only one of them, add another column to the ORDER BY to make it unique, or switch to row_number, which returns exactly one row per id (an indeterminate one if there are ties).
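For the exact output asked for (only ID and value), a minimal sketch of that row_number variant, using the table and column names from the question:

select ID, value
from myTable
qualify
   row_number()
   over (partition by ID
         order by line desc) = 1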
I want to take all the rows of a second table, whose ids start at some higher number, and update each row of the original table with a one-to-one match from that second table.
For example:
normal
id | fk_test_id
----------------
1 | null
2 | null
3 | null
starts_after
id |
----
12 |
13 |
14 |
What UPDATE can I use to make normal look like this:
id | fk_test_id
----------------
1 | 12
2 | 13
3 | 14
I tried:
UPDATE normal SET fk_test_id = starts_after.id FROM starts_after; which just joins every row to the first row of starts_after.
UPDATE normal SET fk_test_id = (SELECT id FROM starts_after ORDER BY random() LIMIT 1); where the subquery only executes once.
Filtering the subquery by which fk_test_id values are already taken, but it only executes against the pre-update data.
If the records in starts_after have a well-defined order, you can use the query below:
update normal n
set fk_test_id = tmp.id
from (select id,
             row_number() over (order by id) as row_number
      from starts_after) tmp
where tmp.row_number = n.id;  -- relies on normal.id being 1, 2, 3, ...
I ordered starts_after by id (ascending) and numbered the records with row_number:
id | row_number
----------------
12 | 1
13 | 2
14 | 3
After that, I join the two tables and update the records.
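If normal.id is not guaranteed to be a gapless sequence starting at 1, a sketch (same table names, and still assuming both tables should be matched in id order) that numbers both tables and joins on the row number instead:

update normal n
set fk_test_id = s.id
from (select id, row_number() over (order by id) as rn from normal) x
join (select id, row_number() over (order by id) as rn from starts_after) s
  on s.rn = x.rn
where n.id = x.id;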
I have a table like:
ID | Val | Kind
----------------------
1 | a | 2
2 | b | 1
3 | c | 4
3 | c | 33
and I need to fetch one row per id in Oracle SQL.
Any ideas?
You can use row_number() to enumerate the rows. For an arbitrary row:
select t.*
from (select t.*,
             row_number() over (partition by id order by id) as seqnum  -- ordering by the partition key is a no-op, so the chosen row is arbitrary
      from t  -- t stands for your table
     ) t
where seqnum = 1;
As I point out in a comment, though, the filter only matters for ids that appear more than once; in the sample data that is only id 3.
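If you want a deterministic row instead of an arbitrary one, say the row with the smallest Kind per id, a sketch under the same assumption that the table is called t:

select ID, Val, Kind
from (select t.*,
             row_number() over (partition by ID order by Kind) as seqnum
      from t
     ) t
where seqnum = 1;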
After some transformation, I have the result of a cross join (of tables a and b) that I want to do some analysis on. The table looks like this:
+-----+------+------+------+------+-----+------+------+------+------+
| id | 10_1 | 10_2 | 11_1 | 11_2 | id | 10_1 | 10_2 | 11_1 | 11_2 |
+-----+------+------+------+------+-----+------+------+------+------+
| 111 | 1 | 0 | 1 | 0 | 222 | 1 | 0 | 1 | 0 |
| 111 | 1 | 0 | 1 | 0 | 333 | 0 | 0 | 0 | 0 |
| 111 | 1 | 0 | 1 | 0 | 444 | 1 | 0 | 1 | 1 |
| 112 | 0 | 1 | 1 | 0 | 222 | 1 | 0 | 1 | 0 |
+-----+------+------+------+------+-----+------+------+------+------+
The ids in the first column are different from the ids in the sixth column.
In a row are always two different IDs that are matched with each other. The other columns always have either 0 or 1 as a value.
I am now trying to find out how many values (meaning both have a "1" in 10_1, 10_2, etc.) two IDs have in common on average, but I don't really know how to do so.
I was trying something like this as a start:
SELECT SUM(CASE WHEN a."10_1" = 1 AND b."10_1" = 1 THEN 1 END)
But this would obviously only count how often two ids have 10_1 in common. I could write something like this, for example, to cover the different columns:
SELECT SUM(CASE WHEN (a."10_1" = 1 AND b."10_1" = 1)
                  OR (a."10_2" = 1 AND b."10_2" = 1) OR [...] THEN 1 END)
To count in general how often two IDs have at least one thing in common. But this would of course also count pairs that have two or more things in common. Plus, I would also like to know how often two IDs have two things, three things, etc. in common.
One "problem" in my case is also that I have ~30 columns I want to look at, so I can hardly write out every possible combination by hand.
Does anyone know how I can approach my problem in a better way?
Thanks in advance.
Edit:
A possible result could look like this:
+-----------+---------+
| in_common | count |
+-----------+---------+
| 0 | 100 |
| 1 | 500 |
| 2 | 1500 |
| 3 | 5000 |
| 4 | 3000 |
+-----------+---------+
With the codes as column names, you're going to have to write some code that explicitly references each column name. To keep that to a minimum, you could write those references in a single union statement that normalizes the data, such as:
-- "your_table" is a placeholder for the wide source table
select id, '10_1' as code from your_table where "10_1" = 1
union
select id, '10_2' as code from your_table where "10_2" = 1
union
select id, '11_1' as code from your_table where "11_1" = 1
union
select id, '11_2' as code from your_table where "11_2" = 1;
This needs to be modified to include whatever additional columns you need to link up different IDs. For the purpose of this illustration, I assume the following data model:
create table p (
id integer not null primary key,
sex character(1) not null,
age integer not null
);
create table t1 (
id integer not null,
code character varying(4) not null,
constraint pk_t1 primary key (id, code)
);
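As an aside, a sketch of how the union above could populate t1 (your_table is still just a placeholder for the wide source table, and only four of the ~30 code columns are shown; p would be loaded separately from wherever sex and age live):

insert into t1 (id, code)
select id, '10_1' from your_table where "10_1" = 1
union all
select id, '10_2' from your_table where "10_2" = 1
union all
select id, '11_1' from your_table where "11_1" = 1
union all
select id, '11_2' from your_table where "11_2" = 1;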
Though your data evidently does not currently resemble this structure, normalizing your data into a form like this would allow you to apply the following solution to summarize your data in the desired form.
select
in_common,
count(*) as count
from (
select
count(*) as in_common
from (
select
a.id as a_id, a.code,
b.id as b_id, b.code
from
(select p.*, t1.code
from p left join t1 on p.id=t1.id
) as a
inner join (select p.*, t1.code
from p left join t1 on p.id=t1.id
) as b on b.sex <> a.sex and b.age between a.age-10 and a.age+10
where
a.id < b.id
and a.code = b.code
) as c
group by
a_id, b_id
) as summ
group by
in_common;
The proposed solution first takes one step back from the cross-join table, since the identical column names are a nuisance. Instead, we take the ids from the two tables and put them in a CTE. The following query produces the result wanted in the question. It assumes table_a and table_b from the question are the same table, called tbl, but this assumption is not needed: tbl can be replaced by table_a and table_b in the two sub-SELECTs. It looks complicated and uses a JSON trick (row_to_json/json_each) to flatten the columns, but it works:
WITH idtable AS (
SELECT a.id as id_1, b.id as id_2 FROM
-- put cross join of table a and table b here
)
SELECT in_common,
count(*)
FROM
(SELECT idtable.*,
sum(CASE
WHEN meltedR.value::text=meltedL.value::text THEN 1
ELSE 0
END) AS in_common
FROM idtable
JOIN
(SELECT tbl.id,
b.*
FROM tbl, -- change here to table_a
json_each(row_to_json(tbl)) b -- and here too
WHERE KEY<>'id' ) meltedL ON (idtable.id_1 = meltedL.id)
JOIN
(SELECT tbl.id,
b.*
FROM tbl, -- change here to table_b
json_each(row_to_json(tbl)) b -- and here too
WHERE KEY<>'id' ) meltedR ON (idtable.id_2 = meltedR.id
AND meltedL.key = meltedR.key)
GROUP BY idtable.id_1,
idtable.id_2) tt
GROUP BY in_common ORDER BY in_common;
The output here looks like this:
in_common | count
-----------+-------
2 | 2
3 | 1
4 | 1
(3 rows)
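Note that the CASE above counts positions where the two rows simply have equal values, which includes matching zeroes. If you only want positions where both rows have a 1 (as described in the question), a hedged variation: replace the sum(CASE ...) expression in the query above with

sum(CASE
        WHEN meltedL.value::text = '1'
         AND meltedR.value::text = '1' THEN 1
        ELSE 0
    END) AS in_common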
I want to create a view in my database, based on these three tables:
I would like to select the rows in table3 that have the highest value in Weight, among rows that have the same value in Count.
Then I want them grouped by Category_ID and ordered by Date, so that if two rows in table3 are otherwise identical, I get the newest one.
Let me give you an example:
Table1
ID | Date | UserId
1 | 2015-01-01 | 1
2 | 2015-01-02 | 1
Table2
ID | table1_ID | Category_ID
1 | 1 | 1
2 | 2 | 1
Table3
ID | table2_ID | Count | Weight
1 | 1 | 5 | 10
2 | 1 | 5 | 20 <-- count is 5 and weight is highest
3 | 1 | 3 | 40
4 | 2 | 5 | 10
5 | 2 | 3 | 40 <-- newest of the two equal rows
Then the result should be row 2 and 5 from table 3.
PS: I'm doing this in MS SQL Server.
PPS: I'm sorry if the title is not appropriate, but I did not know how to formulate a good one.
SELECT
*
FROM
(
SELECT
t3.*
,RANK() OVER (PARTITION BY [Count] ORDER BY [Weight] DESC, Date DESC) highest
FROM TABLE3 t3
INNER JOIN TABLE2 t2 ON t2.Id = t3.Table2_Id
INNER JOIN TABLE1 t1 ON t1.Id = t2.Table1_Id
) t
WHERE t.Highest = 1
This partitions by the Count (which must be the same). Then it determines which row has the highest weight. If two or more of them have the same highest weight, it takes the one with the most recent date first.
You can use the RANK() analytic function here: give the rows a rank and then choose rank 1 within each group.
Something like
select *
from (select ID, table2_ID, Count, Weight,
             RANK() OVER (PARTITION BY Count ORDER BY Weight DESC) as Highest
             -- breaking a Weight tie by the newest Date would also need joins to table2 and table1, as in the query above
      from table3)
where Highest = 1;
This is Oracle syntax; if you are not using Oracle, look up the equivalent for your database, which should be almost the same.
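Since the question mentions MS SQL Server, a hedged adaptation of the same idea (SQL Server requires an alias on the derived table; Count and Weight are bracketed as in the earlier query):

SELECT *
FROM (SELECT ID, table2_ID, [Count], [Weight],
             RANK() OVER (PARTITION BY [Count] ORDER BY [Weight] DESC) AS Highest
      FROM table3) t
WHERE t.Highest = 1;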
I am attempting to select distinct (last updated) rows from a table in my database. I am trying to get the last updated row for each "Sub" section. However, I cannot find a way to achieve this.
The table looks like:
ID | Name |LastUpdated | Section | Sub |
1 | Name1 | 2013-04-07 16:38:18.837 | 1 | 1 |
2 | Name2 | 2013-04-07 15:38:18.837 | 1 | 2 |
3 | Name3 | 2013-04-07 12:38:18.837 | 1 | 1 |
4 | Name4 | 2013-04-07 13:38:18.837 | 1 | 3 |
5 | Name5 | 2013-04-07 17:38:18.837 | 1 | 3 |
What I am trying to get my SQL statement to do is return rows:
1, 2, and 5.
They are distinct for Sub, and each is the most recent row for its Sub.
I have tried:
SELECT DISTINCT Sub, LastUpdated, Name
FROM TABLE
WHERE LastUpdated = (SELECT MAX(LastUpdated) FROM TABLE WHERE Section = 1)
This only returns the single most recently updated row, which makes sense.
I have googled what I am trying to do and checked relevant posts on here, but have not managed to find one that really answers it.
You can use the row_number() window function to assign numbers for each partition of rows with the same value of Sub. Using order by LastUpdated desc, the row with row number one will be the latest row:
select *
from (
select row_number() over (
partition by Sub
order by LastUpdated desc) as rn
, *
from YourTable
) as SubQueryAlias
where rn = 1
Wouldn't it be enough to use group by?
SELECT Sub, MAX(LastUpdated), MIN(Name) FROM TABLE WHERE Section = 1 GROUP BY Sub
Note, though, that MIN(Name) is not guaranteed to come from the row with the latest LastUpdated.
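To pull the full matching row per Sub with a plain GROUP BY, one option is to join the grouped result back to the table; a sketch reusing the YourTable placeholder from the row_number() answer:

SELECT t.*
FROM YourTable t
INNER JOIN (SELECT Sub, MAX(LastUpdated) AS LastUpdated
            FROM YourTable
            WHERE Section = 1
            GROUP BY Sub) m
    ON m.Sub = t.Sub AND m.LastUpdated = t.LastUpdated;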