Merge several columns from two tables into one row per id in the first table - SQL

Suppose there are two tables, A and B, with the structure below:
Table A:
| id |
|----|
| 1  |
| 2  |
Table B:
| col_1 | col_2 |
|-------|-------|
| m     | q     |
| n     | w     |
How can I get the result C below with SQL?
| id | col_1 | col_2 | col_1 | col_2 |
|----|-------|-------|-------|-------|
| 1  | m     | q     | n     | w     |
| 2  | m     | q     | n     | w     |
Every row in Table B is related to each id in Table A. After combining the two tables into Table C, each time the id in Table C changes (that id comes from Table A), the rows from Table B are repeated for it, so building the final Table C requires some calculation to fill each of its values (col_1, col_2, col_1, col_2).

As far as I understand, you want to take all rows from table B and attach them as columns to each id from table A.
I think this is impossible with just a single query (I don't know whether a stored procedure could solve it), but I have an approach that may help (I tested it on MySQL).
SELECT
`a`.`id`,
GROUP_CONCAT(`b`.`key`) AS `keys`,
GROUP_CONCAT(`b`.`value`) AS `values`
FROM `a`, `b`
GROUP BY `a`.`id` ASC;
As a result we have:
| id | keys | values |
|----|------|--------|
| 1  | m,n  | 3,4    |
| 2  | m,n  | 3,4    |
The first key in the keys column and the first value in the values column refer to the first row of table b; the second entries refer to the second row, and so on. This way you will just need to split on the ',' delimiter with some server-side code.
I searched for the PostgreSQL equivalent of MySQL's GROUP_CONCAT and found that STRING_AGG should do the same job.
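For reference, a minimal PostgreSQL sketch of the same idea (assuming a b table with key and value columns as in the result above; the cast to text is there because STRING_AGG works on text, and "values" is quoted because it is a reserved word):
SELECT
    a.id,
    STRING_AGG(b.key, ',')         AS keys,
    STRING_AGG(b.value::text, ',') AS "values"
FROM a
CROSS JOIN b
GROUP BY a.id
ORDER BY a.id;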
Hope it helps!

As long as you know in advance how many distinct key values can appear in B (and there are not too many), this should work:
select A.id, Once.key k1, Once.value v1, Twice.key k2, Twice.value v2
from A,(select * from B where B.key='m') Once,
(select * from B where B.key='n') Twice;
EDIT: This is the result obtained with the above query:
| ID | K1 | V1 | K2 | V2 |
|----|----|----|----|----|
| 1  | m  | 3  | n  | 4  |
| 2  | m  | 3  | n  | 4  |
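If the distinct keys are known in advance, conditional aggregation is another common way to get the same pivoted shape. This is only a sketch that reuses the key/value column names from the answers above, not the original poster's col_1/col_2 schema:
select A.id,
       max(case when B.key = 'm' then B.key   end) as k1,
       max(case when B.key = 'm' then B.value end) as v1,
       max(case when B.key = 'n' then B.key   end) as k2,
       max(case when B.key = 'n' then B.value end) as v2
from A cross join B
group by A.id;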

Related

Comparing aggregated columns to non-aggregated columns to remove matches

I have two separate tables from two different databases that are performing a matching check.
If the values match, I want them out of the result set. The first table (A) has multiple entries whose symbols match the symbol column in the second table (B).
The entries in table B, if added up, should ideally equal the value of one of the matching rows in A.
The tables look like the ones below when queried separately.
Underneath the tables is what my query currently looks like. I thought that if I grouped the columns by symbol I could use the SUM of B to match the value of A, which would remove those entries. However, because I am summing from B and not from A, the A value doesn't count as an aggregated column, so it must be included in the GROUP BY, which prevents the summing from working the way I want it to.
How can I run this query so that the values in B are all summed up and then, if they match the symbol/value of any of the entries in A, are excluded from the result set?
Table A
| Symbol | Value |
|--------|-------|
| A | 1000 |
| A | 1000 |
| B | 1440 |
| B | 1440 |
| C | 1235 |
Table B
| Symbol | Value |
|--------|-------|
| A | 750 |
| A | 250 |
| B | 24 |
| B | 1416 |
| C | 1874 |
SELECT DBA.A, DBB.B
FROM DatabaseA DBA
INNER JOIN DatabaseB DBB on DBA.Symbol = DBB.Symbol
and DBA.Value != DBB.Value
group by DBA.Symbol, DBB.Symbol, DBB.Value
having SUM(DBB.Value) != DBA.Value
order by Symbol, Value
Edited to add ideal results
Table C
| SymbolB| ValueB| SymbolA | ValueA |
|--------|-------|---------|--------|
| C | 1874 | C | 1235 |
Wherever B adds up to A, remove both. If they don't add up, leave the numbers in the result set.
I would use a CTE: this common table expression sums table B per symbol, and then I join table A and table B on symbol.
WITH tDBB as (
    SELECT DBB.Symbol, SUM(DBB.Value) as total
    FROM tableB as DBB
    GROUP BY DBB.Symbol
)
SELECT distinct DBB.Symbol as SymbolB, DBB.Value as ValueB,
       DBA.Symbol as SymbolA, DBA.Value as ValueA
FROM tableA as DBA
INNER JOIN tableB as DBB on DBA.Symbol = DBB.Symbol
WHERE DBA.Symbol in (Select Symbol from tDBB)
  AND NOT DBA.Value in (Select total from tDBB)
Result:
|symbolB |valueB |SymbolA |ValueA |
|--------|-------|--------|-------|
| C | 1874 | C | 1235 |
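A variation on the same idea (just a sketch, reusing the hypothetical tableA/tableB names from the answer above) is to join the per-symbol totals back to table A and keep only the symbols whose total differs from A's value, so each symbol is compared against its own sum rather than against the list of all totals:
WITH totals AS (
    SELECT Symbol, SUM(Value) AS total
    FROM tableB
    GROUP BY Symbol
)
SELECT b.Symbol AS SymbolB, b.Value AS ValueB,
       a.Symbol AS SymbolA, a.Value AS ValueA
FROM tableA AS a
JOIN totals AS t ON t.Symbol = a.Symbol AND t.total <> a.Value
JOIN tableB AS b ON b.Symbol = a.Symbol;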
with t3 as (
select symbol
,sum(value) as value
from t2
group by symbol
)
select *
from t3 join t on t.symbol = t3.symbol and t.value != t3.value
| symbol | value | Symbol | Value |
|--------|-------|--------|-------|
| C      | 1874  | C      | 1235  |

Lookup row in another table that has the maximum value among all rows that are related to the current record

There are two tables, say A and B. I want to create a calculated column in A with the following data from B. For a given row i in A, I want the ID of that row in B that has the maximum value among all rows that are related to row i.
For example:
Table A:
| ID |
|----|
| 1  |
| 2  |
Table B:
| ID | A_ID | Value |
|----|------|-------|
| x  | 1    | 100   |
| y  | 1    | 200   |
| x  | 2    | 400   |
| y  | 2    | 300   |
Desired result:
Table A:
| ID | B_ID |
|----|------|
| 1  | y    |
| 2  | x    |
I hope this is clear. A SQL statement like the following one would do the job.
update A set B_ID = (select B.ID from B where B.A_ID = A.ID order by Value desc limit 1)
The closest I got so far was with LOOKUPVALUE, but it gave me the value of the global MAX, instead of the MAX within the relevant window.
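The question mentions LOOKUPVALUE, so the answer below is in DAX; purely as an illustration of the logic, here is a plain-SQL sketch (assuming a dialect with window functions and the table/column names from the example above) that ranks B's rows per A_ID by Value and keeps the top one:
SELECT a.ID,
       b.ID AS B_ID
FROM A AS a
JOIN (
    SELECT ID, A_ID,
           ROW_NUMBER() OVER (PARTITION BY A_ID ORDER BY Value DESC) AS rn
    FROM B
) AS b
  ON b.A_ID = a.ID
 AND b.rn = 1;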
Here is a solution:
=SELECTCOLUMNS(
    FILTER(B, B[Value] = CALCULATE(MAX(B[Value])) &&
              A[ID] = B[A_ID]),
    "some name", [ID])

Insert data into a table, checking on each insert whether there is a match with the table's existing values

I need to insert data from one table into another, but the insert must look at the receiving table to determine whether there is already a match, and if there is, not insert the new data.
So, I have the following tables (NODE_ID refers to the values in NODE1 and NODE2; think of arcs as lines with two nodes each):
Table A:
| ARC | NODE1 | NODE2 | STATE |
|-----|-------|-------|-------|
| x   | 1     | 2     | A     |
| y   | 2     | 3     | A     |
| z   | 3     | 4     | B     |
Table B:
| NODE_ID | VALUE |
|---------|-------|
| 1       | N     |
| 2       | N     |
| 3       | N     |
| 4       | N     |
And I want the following result, which relates NODE_ID with the arcs and writes the value of STATE from the arcs table into the result table, with only one row per node (otherwise I would end up with more than one row for the same node):
Table C (result):
| NODE_ID | STATE    |
|---------|----------|
| 1       | A        |
| 2       | A        |
| 3       | A (or B) |
I tried to do this with a CASE statement, with EXISTS, IF, NVL2() and so on in the SELECT, but have had no result so far.
Any idea how I could write this query?
Thank you very much for your help.
OK guys, I am editing my message to explain how I finally did it. I have also changed my first message a little to make it clearer to understand, because we had problems with that.
So finally I used this query, which @mathguy introduced to me:
merge into Table_C c
using (select distinct b.NODE_ID as nodes, a.STATE
       from Table_A a, Table_B b
       where (b.NODE_ID = a.NODE1 or b.NODE_ID = a.NODE2)) s
on (s.nodes = c.NODE_ID)
when not matched then
  insert (NODE_ID, STATE)
  values (s.nodes, s.STATE)
That's all
This can be done with insert, but often when you update one table with values from another, the merge statement is more powerful (more flexible).
merge into table_c c
using ( select arc, min(state) as state from table_a group by arc ) s
on (s.arc = c.node_id)
when not matched then insert (node_id, state)
values (s.arc, s.state)
;
Thanks to @Boneist and @ThorstenKettner for pointing out several syntax errors (now fixed).
If table C does not yet exist, use a CREATE TABLE ... AS SELECT statement:
create table c as select arc as node_id, state from a;
In case there can be duplicate arc (not shown in your sample) you'd need aggregation:
create table c as select arc as node_id, min(state) as state from a group by arc;
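For completeness, here is a sketch of the insert-based alternative mentioned above, using the same hypothetical table names; it inserts only the nodes that are not already present in table_c, taking one state per node:
insert into table_c (node_id, state)
select t.node_id, min(t.state)
from (
    select b.node_id, a.state
    from table_a a
    join table_b b
      on b.node_id = a.node1 or b.node_id = a.node2
) t
where not exists (select 1 from table_c c where c.node_id = t.node_id)
group by t.node_id;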

SQL/PostgreSQL: How to select limited amount of rows of different types based on limits stored in a different table?

I have a table (table 1) where the first column is the key and the second column contains elements of different types. In table 1 there are three types (A, B, and C), but the actual database has many more types.
Table 1. A minimal example.
| _KEY | attribute |
|------|-----------|
| k1   | A         |
| k2   | A         |
| k3   | B         |
| k4   | C         |
| k5   | C         |
From table 1, I am interested in retrieving only a limited number of elements of each type. The allowed number of elements of a given type is provided by table 2, in which the element type is the key of the table (_Element).
To clarify: the number of elements of type A to obtain from table 1 in this minimal example is 1. Likewise, for type B it is 2 and for type C it is 1.
Table 2. Limits of items to obtain for each type in table 1.
| _Element | Limit |
|----------|-------|
| A        | 1     |
| B        | 2     |
| C        | 1     |
Finally, the elements should be retrieved from table 1 from top to bottom.
Thanks for any help and/or pointers.
P.S.
For the above minimal example, the expected output would be
| Key | Attribute |
|-----|-----------|
| k1  | A         |
| k3  | B         |
| k4  | C         |
since there only exists one element with attribute C in this particular minimal example. Note that if there had existed, say, 5 elements of type C, then the following table would have been obtained instead (since the allowed number of C elements is 2):
| Key | Attribute |
|-----|-----------|
| k1  | A         |
| k3  | B         |
| k4  | C         |
| k5  | C         |
You can always do it with a union.
select top (SELECT Limit FROM Table2 WHERE _Element = 'A') * from Table1
WHERE attribute = 'A'
UNION ALL
select top (SELECT Limit FROM Table2 WHERE _Element = 'B') * from Table1
WHERE attribute = 'B'
UNION ALL
select top (SELECT Limit FROM Table2 WHERE _Element = 'C') * from Table1
WHERE attribute = 'C'
Or using row_number:
with cte as (SELECT _Key,
                    attribute,
                    ROW_NUMBER() OVER (Partition by attribute Order by _Key ASC) as rowno
             From Table1)
SELECT * FROM cte
LEFT JOIN Table2 on Table2._Element = cte.attribute
WHERE rowno <= Limit
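Since the question targets PostgreSQL, where TOP is not available, a LATERAL join is another way to express a per-type limit. This is only a sketch using the table and column names from the example (the limit column has to be quoted because LIMIT is a reserved word):
SELECT t1._key, t1.attribute
FROM table2 t2
CROSS JOIN LATERAL (
    SELECT _key, attribute
    FROM table1
    WHERE table1.attribute = t2._element
    ORDER BY _key
    LIMIT t2."limit"
) t1
ORDER BY t1._key;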
I truly like the power of PostgreSQL arrays. So
select
table2._element,
unnest((array_agg(table1._key order by table1._key desc)[1:table2.limit])) as _key
from
table1 join table2 on (table1.attribute = table2._element)
group by
table2._element, table2.limit
where in the second field of the query:
array_agg(table1._key order by table1._key desc) - collects values into an array in the specified order (note that order by table1._key desc is just an example; you may skip it or specify another ordering),
(...)[1:table2.limit] - returns array elements from 1 to table2.limit,
unnest(...) - unwraps previous result to rows.

SQLite - select the newest row with a certain field value

I have an SQLite question which essentially boils down to the following problem.
id | key | data
1 | A | x
2 | A | x
3 | B | x
4 | B | x
5 | A | x
6 | A | x
New data is appended to the end of the table with an auto-incremented id.
Now, I want to create a query which returns the latest row for each key, like this:
id | key | data
4 | B | x
6 | A | x
I've tried some different queries but I have been unsuccessful. How do you select only the latest rows for each "key" value in the table?
Use this SQL query:
select * from tbl where id in (select max(id) from tbl group by key);
You could split the main task into two subroutines.
First retrieve all id/key values, then get the id for the latest value of the A and B keys.
Now you can easily write a query to get the latest value for A and B, because you have the id values for both the A and B keys.
SELECT *
FROM mytable
JOIN
( SELECT MAX(id) AS maxid
FROM mytable
GROUP BY "key"
) AS grp
ON grp.maxid = mytable.id
Side note: it's best not to use reserved words like key as identifiers (for tables, fields, etc.)
Without nested SELECTs or JOINs, but only if the field determining "newest" is the primary key (e.g. autoincrement):
SELECT * FROM table GROUP BY key DESC;
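A related SQLite-specific sketch: SQLite documents that when the select list contains a bare MAX() (or MIN()) aggregate, the other bare columns are taken from the row holding that maximum, so something like the following should also return the newest row per key (the extra max_id column is just a side effect of the trick):
SELECT id, "key", data, MAX(id) AS max_id
FROM tbl
GROUP BY "key";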