Add entries of one table as rows of another table - SQL

I have two tables
table a:
ID VALUE_z
1 41
2 32
3 51
table b:
ID TYPE z
1 a 10
1 b 15
1 c 20
2 a 12
2 b 8
2 c 5
3 a 21
3 b 4
3 c 2
I want to add the rows from table a into table b, with TYPE set to 'z' and VALUE taken from VALUE_z, matched on ID. The result should look like this:
table result:
ID TYPE VALUE
1 a 10
1 b 15
1 c 20
1 z 41
2 a 12
2 b 8
2 c 5
2 z 32
3 a 21
3 b 4
3 c 2
3 z 51

Try the following, using an INSERT INTO ... SELECT statement:
-- append one row per ID from tableA, using the literal 'z' as the TYPE
insert into tableB
select ID, 'z', VALUE_z
from tableA
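If the same reshaping were done in pandas rather than SQL, a rough equivalent might look like the sketch below; the DataFrame names a and b and the VALUE column name are assumptions for illustration, not part of the original question:

import pandas as pd

# hypothetical frames mirroring table a and table b
a = pd.DataFrame({'ID': [1, 2, 3], 'VALUE_z': [41, 32, 51]})
b = pd.DataFrame({'ID': [1, 1, 1, 2, 2, 2, 3, 3, 3],
                  'TYPE': list('abcabcabc'),
                  'VALUE': [10, 15, 20, 12, 8, 5, 21, 4, 2]})

# reshape table a into table b's layout (TYPE fixed to 'z'), then append
extra = a.rename(columns={'VALUE_z': 'VALUE'}).assign(TYPE='z')[['ID', 'TYPE', 'VALUE']]
result = pd.concat([b, extra]).sort_values(['ID', 'TYPE']).reset_index(drop=True)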


pandas: get top n including the duplicates of a sorted column

I have some data like the table below, sorted by the score column and then by the cat column:
score cat
18 B
18 A
17 A
16 B
16 A
15 B
14 B
13 A
12 A
10 B
9 B
I want to get the top 5 scores, including duplicates, and also add the rank, i.e.:
rank score cat
1 18 B
1 18 A
2 17 A
3 16 B
3 16 A
4 15 B
5 14 B
How can I get this using pandas?
Since the DataFrame is already ordered, try factorize:
# factorize() assigns 0, 1, 2, ... to each distinct score in order of appearance,
# so tied scores share the same code; +1 makes it a 1-based rank
df['rnk'] = df.score.factorize()[0] + 1
out = df[df['rnk'] <= 5]
out
score cat rnk
0 18 B 1
1 18 A 1
2 17 A 2
3 16 B 3
4 16 A 3
5 15 B 4
6 14 B 5
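If the frame is not guaranteed to be pre-sorted, a dense rank gives the same result without relying on row order; a small sketch that rebuilds the sample data:

import pandas as pd

df = pd.DataFrame({'score': [18, 18, 17, 16, 16, 15, 14, 13, 12, 10, 9],
                   'cat':   list('BAABABBAABB')})

# dense ranking: tied scores share a rank and the next distinct score
# gets the next integer, so ties do not create gaps in the rank
df['rnk'] = df['score'].rank(method='dense', ascending=False).astype(int)
out = df[df['rnk'] <= 5]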

Replace values of duplicated rows with first record in pandas?

Input
df
id label
a 1
b 2
a 3
a 4
b 2
b 3
c 1
c 2
d 2
d 3
Expected
df
id label
a 1
b 2
a 1
a 1
b 2
b 2
c 1
c 1
d 2
d 2
For id a the label value becomes 1, and for id b it becomes 2, because 1 and 2 are the first label values recorded for a and b.
What I tried: I referred to this post, but it still does not solve it.
Update the column with transform('first'):
# broadcast the first label seen for each id onto every row with that id
df['lb2'] = df.groupby('id').label.transform('first')
df
Out[87]:
id label lb2
0 a 1 1
1 b 2 2
2 a 3 1
3 a 4 1
4 b 2 2
5 b 3 2
6 c 1 1
7 c 2 1
8 d 2 2
9 d 3 2
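To overwrite the label column itself, as in the expected output, the same transform can be assigned straight back to it; a minimal sketch rebuilding the frame from the question:

import pandas as pd

df = pd.DataFrame({'id': list('abaabbccdd'),
                   'label': [1, 2, 3, 4, 2, 3, 1, 2, 2, 3]})

# replace every label with the first label observed for its id
df['label'] = df.groupby('id')['label'].transform('first')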

pandas groupby apply optimizing a loop

For the following data:
index bond stock investor_bond investor_stock
0 1 2 A B
1 1 2 A E
2 1 2 A F
3 1 2 B B
4 1 2 B E
5 1 2 B F
6 1 3 A A
7 1 3 A E
8 1 3 A G
9 1 3 B A
10 1 3 B E
11 1 3 B G
12 2 4 C F
13 2 4 C A
14 2 4 C C
15 2 5 B E
16 2 5 B B
17 2 5 B H
Bond 1 has two investors, A and B. Stock 2 has three investors, B, E and F. For each investor pair (investor_bond, investor_stock), we want to filter the row out if the two investors have ever invested in the same bond or stock.
For example, the pair (B, F) at index 5 should be filtered out because both of them invested in stock 2.
Sample output should be like:
index bond stock investor_bond investor_stock
11 1 3 B G
So far I have tried using two loops.
A1 = A1.groupby('bond').apply(lambda x: x[~x.investor_stock.isin(x.bond)]).reset_index(drop=True)
stock_list = A1.groupby(['bond', 'stock']).apply(lambda x: x.investor_stock.unique()).reset_index()
stock_list = stock_list.rename(columns={0: 's'})
stock_list = stock_list.groupby('bond').apply(lambda x: list(x.s)).reset_index()
stock_list = stock_list.rename(columns={0: 's'})
A1 = pd.merge(A1, stock_list, on='bond', how='left')
A1['in_out'] = False
for j in range(0, len(A1)):
    for i in range(0, len(A1.s[j])):
        A1['in_out'] = A1.in_out | (
            A1.investor_bond.isin(A1.s[j][i]) & A1.investor_stock.isin(A1.s[j][i]))
    print(j)
The loop is running forever due to the data size, and I am seeking a faster way.
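One way to avoid the row-by-row loop, sketched here under the assumption that "invested in the same bond/stock" means the two investors appear together in the investor set of any single bond or any single stock, is to precompute one asset-membership set per investor and then test each pair once:

import pandas as pd

# the sample frame from the question
A1 = pd.DataFrame({
    'bond':           [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2],
    'stock':          [2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 4, 4, 4, 5, 5, 5],
    'investor_bond':  list('AAABBBAAABBBCCCBBB'),
    'investor_stock': list('BEFBEFAEGAEGFACEBH'),
})

# one set of (asset kind, asset id) per investor; bonds and stocks are tagged
# so that bond 2 and stock 2 count as different assets
investor_assets = {}
for bond_id, members in A1.groupby('bond')['investor_bond'].agg(set).items():
    for inv in members:
        investor_assets.setdefault(inv, set()).add(('bond', bond_id))
for stock_id, members in A1.groupby('stock')['investor_stock'].agg(set).items():
    for inv in members:
        investor_assets.setdefault(inv, set()).add(('stock', stock_id))

# keep a row only if the two investors share no asset at all
def shares_asset(row):
    return bool(investor_assets[row['investor_bond']] & investor_assets[row['investor_stock']])

out = A1[~A1.apply(shares_asset, axis=1)]
print(out)   # keeps only index 11: bond 1, stock 3, B, G

On this sample the result matches the expected output, and the per-row work is a set intersection instead of a scan over every group, so it scales much better than the nested loop.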

Merge same SQL database values in table and relating table

Sure, it could be solved one by one, for each value in the specified column, but I was looking for a more elegant way, handled by one SQL query.
Consider the following mini database:
Two material tables:
Table MatA:
ID NomCom_ID ProFo_ID
1 1 1
2 2 2
Table MatB:
ID NomCom_ID ProFo_ID
1 1 2
2 2 2
One note table:
Table Note:
ID Val MatTab
1 S6 A
2 T1 A
3 W10 A
4 W12 A
5 S5 B
6 T1 B
7 G10 B
8 T1 XXX
9 W10 XXX
10 G8 XXX
And one table for relation between notes and Materials:
Table RelateNoteMat:
ID Mat_ID Note_ID
1 1 1
2 1 2
3 2 2
4 2 3
5 1 8
6 1 10
At a later time it becomes clear that MatTab 'XXX' is actually MatTab 'A',
so I want to update table Note and table RelateNoteMat like this:
Table Note:
ID Val MatTab
1 S6 A
2 T1 A
3 W10 A
4 W12 A
5 S5 B
6 T1 B
7 G10 B
8 G8 A
And the updated relation table between notes and materials:
Table RelateNoteMat:
ID Mat_ID Note_ID
1 1 1
2 1 2
3 2 2
4 2 3
5 1 8
Is this possible with one SQL query?

See if all records in the same group are of accepted types

Consider the following table. Each document (id) belongs to a group (group_id).
-----------------------
id group_id value
-----------------------
1 1 A
2 1 B
3 1 D
4 2 A
5 2 B
6 3 C
7 4 A
8 4 B
9 4 B
10 4 B
11 4 C
12 5 A
13 5 A
14 5 A
15 6 B
16 6 NULL
17 6 NULL
18 6 D
19 7 NULL
20 8 B
1/ Each document has a value of NULL, A, B, C or D.
2/ If the documents in the same group all have either A or B as their value, the group is completed.
3/ In that case, the desired output would read:
---------------------
group_id completed
---------------------
1 0 <== because document 3 = D
2 1 <== all documents have either A or B as a value
3 0 <== only one document in the group, value C
4 1 <== all documents have either A or B as a value
5 1 <== all documents have value A
6 0 <== because of NULL values and value D
7 0 <== NULL
8 1 <== only one document, value B
Is it possible to query this result set?
As I am not very experienced in SQL, any help would be appreciated!
Try this: the inner CASE counts only the 'A'/'B' values in a group (NULLs and other values fall through to NULL and are not counted), so the group is completed exactly when that count equals COUNT(*):
SELECT [group_id],
CASE
WHEN Count(CASE WHEN [value] IN ( 'A', 'B' ) THEN 1 END) = Count(*) THEN 1
ELSE 0
END AS COMPLETED
FROM yourtable
GROUP BY [group_id]
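For comparison, the same completeness check could be sketched in pandas; the frame below is a small toy stand-in for yourtable, not the full table from the question:

import pandas as pd

# toy stand-in for yourtable
df = pd.DataFrame({
    'group_id': [1, 1, 1, 2, 2, 3],
    'value':    ['A', 'B', 'D', 'A', 'B', None],
})

# isin() is False for None/NaN, so NULL values block completion,
# mirroring the CASE expression in the SQL answer
completed = (df['value'].isin(['A', 'B'])
               .groupby(df['group_id'])
               .all()
               .astype(int)
               .reset_index(name='completed'))
print(completed)   # group 1 -> 0 (has D), group 2 -> 1, group 3 -> 0 (NULL)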