SQL query: group by and select the maximum absolute value

"Table1" structure is as shown below:
Source table: table1
Player_NAME || Player_NUMBER || Client_name || Client_country || Player_country || Rating
GERALD      || A1234         || BENFIELD    || IND            || IND            || 76
GERALD      || A6578         || ROTFIELD    || USA            || USA            || 64
KUMAR       || P1234         || LFV         || ARG            || ARG            || -24
KUMAR       || P5678         || JEURASIN    || ARG            || TUR            || -32
KUMAR       || P0101         || ARGENIA     || ARG            || POL            || -16
ANDREW      || R1234         || GENMAD      || GER            || GER            || 23
I need to select records from "Table1" and copy them to "Table2".
A record from table1 qualifies under the following condition:
If a player has multiple client_names or multiple client_countrys, select the record with the maximum Rating, comparing by absolute value. If a rating is negative, take its absolute value for the comparison: e.g. for ratings -10 and -34 the absolute values are 10 and 34, and 34 is the greater, so the -34 record wins.
For example, KUMAR has 3 different client_names and client_countrys, so for KUMAR the record with rating -32 should be selected, because abs(-32) = 32 is the largest absolute value.
Below is the expected output:
Player_NAME || Player_NUMBER || Client_name || Client_country || Player_country || Rating
GERALD      || A1234         || BENFIELD    || IND            || IND            || 76
KUMAR       || P5678         || JEURASIN    || ARG            || TUR            || -32
ANDREW      || R1234         || GENMAD      || GER            || GER            || 23
Destination table: table2
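
For readers who want to reproduce the examples, here is a minimal setup sketch. The column types are assumptions (the question does not give the actual schema), and the multi-row INSERT syntax may vary slightly by DBMS:

-- Hypothetical schema; adjust types to match the real tables.
CREATE TABLE table1 (
    Player_NAME    varchar(50),
    Player_NUMBER  varchar(10),
    Client_name    varchar(50),
    Client_country char(3),
    Player_country char(3),
    Rating         int
);

INSERT INTO table1 VALUES
('GERALD', 'A1234', 'BENFIELD', 'IND', 'IND',  76),
('GERALD', 'A6578', 'ROTFIELD', 'USA', 'USA',  64),
('KUMAR',  'P1234', 'LFV',      'ARG', 'ARG', -24),
('KUMAR',  'P5678', 'JEURASIN', 'ARG', 'TUR', -32),
('KUMAR',  'P0101', 'ARGENIA',  'ARG', 'POL', -16),
('ANDREW', 'R1234', 'GENMAD',   'GER', 'GER',  23);

-- Table2 is assumed to have the same structure.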

You can try something like this:
INSERT INTO Table2
(
    Player_Name,
    Player_Number,
    Client_Name,
    Client_country,
    Player_country,
    Rating
)
SELECT
    Player_Name,
    Player_Number,
    Client_Name,
    Client_country,
    Player_country,
    Rating
FROM
(
    SELECT t.*,
           MAX(ABS(Rating)) OVER (PARTITION BY Player_Name) AS max_abs_rating
    FROM table1 t
) t
WHERE ABS(Rating) = max_abs_rating;

If your DBMS supports analytic functions, you can utilize ROW_NUMBER:
select Player_Name, Player_Number, Client_Name,
       Client_country, Player_country, Rating   -- all columns but rn
from
(
    select t.*,
           row_number()
           over (partition by Player_Name
                 order by abs(Rating) desc) as rn
    from table1 t
) as dt
where rn = 1;
Otherwise use a Correlated Subquery:
select *
from table1 as t1
where abs(rating) =
      ( select max(abs(rating))
        from table1 as t2
        where t1.player_name = t2.player_name
      )
If there are multiple rows with the same max(abs(rating)), #1 will pick one of them arbitrarily, while #2 will return all of them.
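If you need #1 to return a deterministic row on ties, extend the ORDER BY with a tie-breaker; a sketch (Player_Number is an arbitrary choice of tie-breaker):

select Player_Name, Player_Number, Client_Name,
       Client_country, Player_country, Rating
from
(
    select t.*,
           row_number()
           over (partition by Player_Name
                 order by abs(Rating) desc,
                          Player_Number) as rn   -- tie-breaker: lowest number wins
    from table1 t
) as dt
where rn = 1;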

I guess this query will work:
select max(abs(Rating))
from Table1
group by Player_NAME
To insert data into Table2, you can do it like so:
INSERT INTO Table2 (
    Player_Name,
    Player_Number,
    Client_Name,
    Client_country,
    Player_country,
    Rating
)
SELECT
    t1.Player_Name,
    t1.Player_Number,
    t1.Client_Name,
    t1.Client_country,
    t1.Player_country,
    t1.Rating
FROM Table1 t1
INNER JOIN (
    SELECT
        Player_NAME,
        MAX(ABS(Rating)) AS Rating
    FROM Table1
    GROUP BY Player_NAME
) t2 ON t2.Player_NAME = t1.Player_NAME AND ABS(t1.Rating) = t2.Rating

Related

Query select with next id order

I have a table with ID and NextID like this:
MainID || NextID
1 || 2
2 || 3
3 || 5
4 || 6
5 || 4
6 || ...
... || ...
What I want to achieve is to select the data in chain order, like this:
MainID || NextID
1 || 2
2 || 3
3 || 5
5 || 4
4 || 6
6 || ...
... || ...
What I've tried is a simple query like:
SELECT * FROM [table] ORDER BY NextID
but of course it didn't meet my needs.
I have an idea to create a temp table and fill it with a loop, but it takes too much time to complete:
WHILE @NextID IS NOT NULL
BEGIN
    INSERT INTO #ordered (MainID, NextID)   -- #ordered: the temp table mentioned above
    SELECT MainID, NextID
    FROM [table]
    WHERE MainID = @NextID;

    SELECT @NextID = NextID FROM [table] WHERE MainID = @NextID;  -- advance to the next node
END
Can anyone help me?
Thanks
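
For reference, a minimal setup sketch matching the sample data (the table name yourtable follows the answers below; the final 6 -> NULL row is an assumption based on the "..." in the question, and any further elided rows are omitted):

CREATE TABLE yourtable (MainID int PRIMARY KEY, NextID int NULL);
INSERT INTO yourtable (MainID, NextID) VALUES
(1, 2), (2, 3), (3, 5), (4, 6), (5, 4), (6, NULL);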
A recursive CTE will return rows in the order the nodes are visited:
with t as (
    select f.*, right('00000000' + cast(f.MainID as varchar(max)), 9) as path
    from yourtable f
    where MainID = 1
    union all
    select f.*, t.path + '->' + right('00000000' + cast(f.MainID as varchar(max)), 9)
    from t
    join yourtable f on t.NextID = f.MainID
)
select *
from t
order by path
db<>fiddle
where MainID = 1 is an arbitrary start. You may also wish to start with:
where not exists (select 1 from yourtable f2 where f2.Nextid = f.MainId)
Edit: added an explicit ORDER BY.
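
A lighter variant of the same idea orders by recursion depth instead of a string path. A sketch; it assumes the chain is a single unbranched list and uses the not-exists start shown above:

with t as (
    -- anchor: the row no other row points to
    select f.MainID, f.NextID, 1 as lvl
    from yourtable f
    where not exists (select 1 from yourtable f2 where f2.NextID = f.MainID)
    union all
    -- follow the chain, counting depth
    select f.MainID, f.NextID, t.lvl + 1
    from t
    join yourtable f on t.NextID = f.MainID
)
select MainID, NextID
from t
order by lvl;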
For this particular case you may use a right join with some ordering:
select t2.*
from some_table t1
right join some_table t2
  on t1.main_id = t2.next_id
order by case when t2.next_id is null then 9999999
              else t2.main_id + t2.next_id end;
The 9999999 in the ORDER BY is there to push the last row (6, null) to the end of the output.
Good luck adapting the query to your real data.

How to find the changes that happened between rows?

I have two tables that I need to find the differences between.
What's required is a summary table of which fields have changed (ignoring id columns). I don't know in advance which columns have changed.
e.g. Source table [fields that have changed are {name}, {location}; {id} is ignored]
id || name || location || description
1 || aaaa || ddd || abc
2 || bbbb || eee || abc
e.g. Output Table [outputting {name}, {location} as they have changed]
Table_name || Field_changed || field_was || field_now
Source table || name || aaaa || bbbb
Source table || location || ddd || eee
I have tried to use lag(), but that only gives me the columns I selected. Eventually I'd want to see the changes in all columns, as I am not sure which columns have changed.
Also, please note that the table has about 150 columns, so one of the biggest issues is how to find the ones that changed.
As a single row can contain multiple changes, and each change needs to appear as its own row in the result, I have created a query that handles each column separately:
WITH DATAA(ID, NAME, LOCATION, DESCRIPTION)
AS
(SELECT 1, 'aaaa', 'ddd', 'abc' FROM DUAL UNION ALL
SELECT 2, 'bbbb', 'eee', 'abc' FROM DUAL),
-- YOUR QUERY WILL START FROM HERE
CTE AS (SELECT NAME,
LAG(NAME,1) OVER (ORDER BY ID) PREV_NAME,
LOCATION,
LAG(LOCATION,1) OVER (ORDER BY ID) PREV_LOCATION,
DESCRIPTION,
LAG(DESCRIPTION,1) OVER (ORDER BY ID) PREV_DESCRIPTION
FROM DATAA)
--
SELECT
'Source table' AS TABLE_NAME,
FIELD_CHANGED,
FIELD_WAS,
FIELD_NOW
FROM
(
SELECT
'Name' AS FIELD_CHANGED,
PREV_NAME AS FIELD_WAS,
NAME AS FIELD_NOW
FROM
CTE
WHERE
NAME <> PREV_NAME
UNION ALL
SELECT
'location' AS FIELD_CHANGED,
PREV_LOCATION AS FIELD_WAS,
LOCATION AS FIELD_NOW
FROM
CTE
WHERE
LOCATION <> PREV_LOCATION
UNION ALL
SELECT
'description' AS FIELD_CHANGED,
PREV_DESCRIPTION AS FIELD_WAS,
DESCRIPTION AS FIELD_NOW
FROM
CTE
WHERE
DESCRIPTION <> PREV_DESCRIPTION
);
Output:
DEMO
Cheers!!
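
With ~150 columns, writing those LAG pairs and UNION ALL branches by hand is tedious. A hedged sketch that generates the UNION ALL branches from Oracle's user_tab_columns dictionary view; it assumes a real table (here called MY_TABLE, a hypothetical name) and that the CTE has been extended to expose a PREV_ twin for every column. Run it, then execute its output after removing the trailing UNION ALL:

SELECT 'SELECT ''' || column_name || ''' AS FIELD_CHANGED'
    || ', PREV_' || column_name || ' AS FIELD_WAS'
    || ', ' || column_name || ' AS FIELD_NOW'
    || ' FROM CTE WHERE ' || column_name || ' <> PREV_' || column_name
    || ' UNION ALL' AS branch
FROM user_tab_columns
WHERE table_name = 'MY_TABLE'    -- hypothetical: your real source table
  AND column_name <> 'ID'        -- skip the ignored id column
ORDER BY column_id;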

Oracle SQL: query with hierarchical group by

I have a table like:
ID Ent || NAME     || FUNCTION
1      || Dupuis   || Signatory
1      || Daturt   || Decision Maker
1      || Nobel    || (Null)
2      || Karl     || Decision Maker
2      || Titi     || (Null)
3      || Cloves   || (Null)
3      || Cardigan || (Null)
I want to get the most "important" person per ID, following a pre-established hierarchy (Signatory > Decision Maker > (Null)).
So the expected result is:
ID Ent || NAME     || FUNCTION
1      || Dupuis   || Signatory
2      || Karl     || Decision Maker
3      || Cardigan || (Null)
For the 3rd ID, I don't care which person is selected.
I work in Oracle with extremely limited rights; SELECT is about all I can do.
I have a workaround, but it is extremely ugly and I am not satisfied with it:
(SELECT "ID Ent" max (NOM), max (FUNCTION)
FROM table
WHERE FUNCTION = 'Signatory' GROUP BY "ID Ent")
UNION
(SELECT "ID Ent" max (NOM), max (FUNCTION)
FROM table
WHERE FUNCTION = 'Decision Maker'
AND "ID Ent" not in (SELECT "ID Ent" FROM table WHERE FUNCTION = 'Signatory')
GROUP BY "ID Ent")
UNION
(SELECT "ID Ent" max (NOM), max (FUNCTION)
FROM table
WHERE FUNCTION = 'Decision Maker'
AND "ID Ent" not in (SELECT "ID Ent" in FUNCTION FROM table WHERE ('Signatory', 'Decision Maker'))
GROUP BY "ID Ent");
Do you have a better way to do this?
I would approach this using analytic functions:
select t.Id, t.Name, t.Function
from (select t.*,
row_number() over (partition by id
order by (case when function = 'Signatory' then 1
when function = 'Decision Maker' then 2
else 3
end)
) as seqnum
from table t
) t
where seqnum = 1;
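Oracle can also do this with a plain GROUP BY via its FIRST/LAST aggregate. A sketch using the same ranking expression (column and table names as in the question; note that FUNCTION and TABLE are reserved words in Oracle and would need quoting if those are the literal names):

select "ID Ent",
       max(name) keep (dense_rank first
           order by case when function = 'Signatory'      then 1
                         when function = 'Decision Maker' then 2
                         else 3
                    end) as name,
       max(function) keep (dense_rank first
           order by case when function = 'Signatory'      then 1
                         when function = 'Decision Maker' then 2
                         else 3
                    end) as function
from table
group by "ID Ent";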

Convert row data to new columns based on common ID value

I have an MS Access database with a list of product IDs for each client ID.
Currently, the table is set up as follows:
CLI_ID || PRODUCT_ID
963506 || 49001608
968286 || 49001645
987218 || 00048038
987218 || 49001401
999999 || 9999999
999999 || 9999998
999999 || 9999997
999999 || 9999996
I would like to transpose the data to look like this:
CLI_ID || PRODUC1  || PRODUC2  || PRODUC3  || PRODUC4
963506 || 49001608
968286 || 49001645
987218 || 00048038 || 49001401
999999 || 9999999  || 9999998  || 9999997  || 9999996
Some clients have more than the 4 products shown above, so I would like the query to expand automatically by counting the number of product IDs for each client ID.
I took the code from ChrisPadgham's post and modified it to meet my needs. Here is what I have so far, but it never gets further than PRODUC1:
TRANSFORM Last([PRODUCT_ID]) AS Details
SELECT Source_Table.CLI_ID
FROM Source_Table
GROUP BY Source_Table.CLI_ID
PIVOT "PRODUC" & (DCount("[PRODUCT_ID]","[Source_Table]", "[PRODUCT_ID]<>" &
[PRODUCT_ID] & " AND [CLI_ID]=" & [CLI_ID]) +1);
Any help would be appreciated!
Try adding a per-client item counter to your data (examples found here).
It looks like you're going to need to create a temp table, as Access can't keep track of internal pointers when nesting the queries.
Create a temp table:
SELECT t1.CLI_ID, t1.PRODUCT_ID, (SELECT COUNT(*)
FROM Source_Table t2
WHERE t2.CLI_ID = t1.CLI_ID
AND t2.PRODUCT_ID <= t1.PRODUCT_ID
) AS PROD_COUNT INTO TEMP_CLI_PROD
FROM Source_Table AS t1
GROUP BY t1.CLI_ID, t1.PRODUCT_ID;
Then have your pivot table reference the temp table.
TRANSFORM Last(TEMP_CLI_PROD.PRODUCT_ID) AS LastOfPRODUCT_ID
SELECT TEMP_CLI_PROD.CLI_ID
FROM TEMP_CLI_PROD
GROUP BY TEMP_CLI_PROD.CLI_ID
PIVOT "PRODUCT " & TEMP_CLI_PROD.PROD_COUNT;
Output:
CLI_ID  PRODUCT 1  PRODUCT 2  PRODUCT 3  PRODUCT 4
963506  49001608
968286  49001645
987218  00048038   49001401
999999  9999996    9999997    9999998    9999999
Jeff's answer is essentially correct in its approach, although a temporary table is not strictly required:
TRANSFORM First(PRODUCT_ID) AS whatever
SELECT CLI_ID
FROM
(
SELECT t1.CLI_ID, t1.PRODUCT_ID, 'PRODUCT_' & COUNT(*) AS XtabColumn
FROM
Source_Table AS t1
INNER JOIN
Source_Table AS t2
ON t1.CLI_ID = t2.CLI_ID
AND t1.PRODUCT_ID >= t2.PRODUCT_ID
GROUP BY t1.CLI_ID, t1.PRODUCT_ID
)
GROUP BY CLI_ID
PIVOT XtabColumn
returns
CLI_ID PRODUCT_1 PRODUCT_2 PRODUCT_3 PRODUCT_4
------ --------- --------- --------- ---------
963506 49001608
968286 49001645
987218 00048038 49001401
999999 9999996 9999997 9999998 9999999

PostgreSQL convert columns to rows? Transpose?

I have a PostgreSQL function (or table) which gives me the following output:
Sl.no username Designation salary etc..
1 A XYZ 10000 ...
2 B RTS 50000 ...
3 C QWE 20000 ...
4 D HGD 34343 ...
Now I want the Output as below:
Sl.no 1 2 3 4 ...
Username A B C D ...
Designation XYZ RTS QWE HGD ...
Salary 10000 50000 20000 34343 ...
How to do this?
SELECT
    unnest(array['Sl.no', 'username', 'Designation', 'salary']) AS "Columns",
    unnest(array[sl_no::text, username, designation, salary::text]) AS "Values"
FROM view_name
ORDER BY "Columns"
Reference: convertingColumnsToRows
Basing my answer on a table of the form:
CREATE TABLE tbl (
sl_no int
, username text
, designation text
, salary int
);
Each row results in a new column to return. With a dynamic return type like this, it's hardly possible to make this completely dynamic with a single call to the database. Demonstrating solutions with two steps:
Generate query
Execute generated query
Generally, this is limited by the maximum number of columns a table can hold. So not an option for tables with more than 1600 rows (or fewer). Details:
What is the maximum number of columns in a PostgreSQL select query
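As a side note, in psql 9.6 or later the two steps can be chained: end the generator query with \gexec instead of a semicolon and psql executes every row of its output as a new statement. A generic sketch, unrelated to the specific queries below:

-- \gexec runs each result row as its own SQL statement
SELECT format('SELECT count(*) FROM %I', tablename)
FROM pg_tables
WHERE schemaname = 'public' \gexec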
Postgres 9.4+
Dynamic solution with crosstab()
Use the first one you can. Beats the rest.
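Note that crosstab() is not built in; it ships with the additional tablefunc module, which has to be installed once per database:

CREATE EXTENSION IF NOT EXISTS tablefunc;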
SELECT 'SELECT *
FROM crosstab(
$ct$SELECT u.attnum, t.rn, u.val
FROM (SELECT row_number() OVER () AS rn, * FROM '
|| attrelid::regclass || ') t
, unnest(ARRAY[' || string_agg(quote_ident(attname)
|| '::text', ',') || '])
WITH ORDINALITY u(val, attnum)
ORDER BY 1, 2$ct$
) t (attnum bigint, '
|| (SELECT string_agg('r'|| rn ||' text', ', ')
FROM (SELECT row_number() OVER () AS rn FROM tbl) t)
|| ')' AS sql
FROM pg_attribute
WHERE attrelid = 'tbl'::regclass
AND attnum > 0
AND NOT attisdropped
GROUP BY attrelid;
Operating with attnum instead of actual column names. Simpler and faster. Join the result to pg_attribute once more or integrate column names like in the pg 9.3 example.
Generates a query of the form:
SELECT *
FROM crosstab(
$ct$
SELECT u.attnum, t.rn, u.val
FROM (SELECT row_number() OVER () AS rn, * FROM tbl) t
, unnest(ARRAY[sl_no::text,username::text,designation::text,salary::text]) WITH ORDINALITY u(val, attnum)
ORDER BY 1, 2$ct$
) t (attnum bigint, r1 text, r2 text, r3 text, r4 text);
This uses a whole range of advanced features. Just too much to explain.
Simple solution with unnest()
One unnest() can now take multiple arrays to unnest in parallel.
SELECT 'SELECT * FROM unnest(
''{sl_no, username, designation, salary}''::text[]
, ' || string_agg(quote_literal(ARRAY[sl_no::text, username::text, designation::text, salary::text])
|| '::text[]', E'\n, ')
|| E') \n AS t(col,' || string_agg('row' || sl_no, ',') || ')' AS sql
FROM tbl;
Result:
SELECT * FROM unnest(
  '{sl_no, username, designation, salary}'::text[]
, '{1,A,XYZ,10000}'::text[]
, '{2,B,RTS,50000}'::text[]
, '{3,C,QWE,20000}'::text[]
, '{4,D,HGD,34343}'::text[])
AS t(col,row1,row2,row3,row4);
db<>fiddle here
Old sqlfiddle
Postgres 9.3 or older
Dynamic solution with crosstab()
Completely dynamic, works for any table. Provide the table name in two places:
SELECT 'SELECT *
FROM crosstab(
''SELECT unnest(''' || quote_literal(array_agg(attname))
|| '''::text[]) AS col
, row_number() OVER ()
, unnest(ARRAY[' || string_agg(quote_ident(attname)
|| '::text', ',') || ']) AS val
FROM ' || attrelid::regclass || '
ORDER BY generate_series(1,' || count(*) || '), 2''
) t (col text, '
|| (SELECT string_agg('r'|| rn ||' text', ',')
FROM (SELECT row_number() OVER () AS rn FROM tbl) t)
|| ')' AS sql
FROM pg_attribute
WHERE attrelid = 'tbl'::regclass
AND attnum > 0
AND NOT attisdropped
GROUP BY attrelid;
Could be wrapped into a function with a single parameter ...
Generates a query of the form:
SELECT *
FROM crosstab(
'SELECT unnest(''{sl_no,username,designation,salary}''::text[]) AS col
, row_number() OVER ()
, unnest(ARRAY[sl_no::text,username::text,designation::text,salary::text]) AS val
FROM tbl
ORDER BY generate_series(1,4), 2'
) t (col text, r1 text,r2 text,r3 text,r4 text);
Produces the desired result:
col r1 r2 r3 r4
-----------------------------------
sl_no 1 2 3 4
username A B C D
designation XYZ RTS QWE HGD
salary 10000 50000 20000 34343
Simple solution with unnest()
SELECT 'SELECT unnest(''{sl_no, username, designation, salary}''::text[]) AS col
     , ' || string_agg('unnest('
       || quote_literal(ARRAY[sl_no::text, username::text, designation::text, salary::text])
       || '::text[]) AS row' || sl_no, E'\n     , ') AS sql
FROM tbl;
Slow for tables with more than a couple of columns.
Generates a query of the form:
SELECT unnest('{sl_no, username, designation, salary}'::text[]) AS col
     , unnest('{1,A,XYZ,10000}'::text[]) AS row1
     , unnest('{2,B,RTS,50000}'::text[]) AS row2
     , unnest('{3,C,QWE,20000}'::text[]) AS row3
     , unnest('{4,D,HGD,34343}'::text[]) AS row4
Same result.
If (like me) you need this information from a bash script, note there is a simple command-line switch for psql that tells it to output table columns as rows:
psql mydbname -x -A -F= -c "SELECT * FROM foo WHERE id=123"
The -x option is the key to getting psql to output columns as rows.
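For illustration, adapted to the sample table tbl from above (data as in the question), the output is roughly one column=value pair per line:

psql mydbname -x -A -F= -c "SELECT * FROM tbl WHERE sl_no=1"
sl_no=1
username=A
designation=XYZ
salary=10000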
I have a simpler approach than the ones Erwin showed above, which worked for me with Postgres (and I think it should work with all major relational databases that support the SQL standard).
You can simply use UNION instead of crosstab:
SELECT text 'a' AS "text" UNION SELECT 'b';
text
------
a
b
(2 rows)
Of course this depends on your use case. If you know beforehand which fields you need, you can take this approach even for querying different tables, e.g.:
SELECT 'My first metric' as name, count(*) as total from first_table UNION
SELECT 'My second metric' as name, count(*) as total from second_table
name | Total
------------------|--------
My first metric | 10
My second metric | 20
(2 rows)
It's a more maintainable approach, IMHO. Look at this page for more information: https://www.postgresql.org/docs/current/typeconv-union-case.html
There is no proper way to do this in plain SQL or PL/pgSQL.
It will be way better to do this in the application that gets the data from the DB.