How can I use an index from a dataset without merging all datasets? - pandas

My datasets:
df1:
Country  Year  Dem
Turkiye  1993  152
Spain    1993  223
Spain    1994  166
USA      1993  123
df2 (it has many more columns but I only want to use "Index"):
Country  Year  Index
Turkiye  1993  0
Spain    1993  1
Spain    1994  1
USA      1993  1

Use pandas.merge as shown below:
df1.merge(df2[['Country', 'Year', 'Index']], on=['Country', 'Year'], how='left')
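A minimal runnable sketch with the sample frames from the question (the left merge keeps every row of df1 and pulls in only the "Index" column):

```python
import pandas as pd

df1 = pd.DataFrame({
    'Country': ['Turkiye', 'Spain', 'Spain', 'USA'],
    'Year': [1993, 1993, 1994, 1993],
    'Dem': [152, 223, 166, 123],
})
df2 = pd.DataFrame({
    'Country': ['Turkiye', 'Spain', 'Spain', 'USA'],
    'Year': [1993, 1993, 1994, 1993],
    'Index': [0, 1, 1, 1],
})

# left merge on the (Country, Year) pair; extra df2 columns are never pulled in
out = df1.merge(df2[['Country', 'Year', 'Index']], on=['Country', 'Year'], how='left')
print(out)
```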

Related

Pandas - add row with inverted values based on condition

In a dataframe like this:
...
match team opponent venue
233 3b0345fb Brazil Argentina Home
234 3b2357fb Argentina Brazil Away
427 3b0947fb England Poland Home
...
how can I select one dataframe slice, based on a column value (df[df['team']=='England']), like this:
...
match team opponent venue
559 4a3eae2f England Poland Home
...
And add inverted rows of that slice to the original dataframe, swapping 'Home' and 'Away', ending up with:
...
match team opponent venue
233 3b0345fb Brazil Argentina Home
234 3b2357fb Argentina Brazil Away
559 3b0947fb England Poland Home
560 3b0947fb Poland England Away
...
Note: This slice should contain n rows and produce n inverted rows.
You can use:
df2 = df[df['team'].eq('England')].copy()
df2[['team', 'opponent']] = df2[['opponent', 'team']]
df2['venue'] = df2['venue'].map({'Home': 'Away', 'Away': 'Home'})
out = pd.concat([df, df2])
print(out)
Output:
match team opponent venue
233 3b0345fb Brazil Argentina Home
234 3b2357fb Argentina Brazil Away
427 3b0947fb England Poland Home
427 3b0947fb Poland England Away
If you want to invert all:
df2 = df.copy()
df2[['team', 'opponent']] = df2[['opponent', 'team']]
df2['venue'] = df2['venue'].map({'Home': 'Away', 'Away': 'Home'})
out = pd.concat([df, df2])
Output:
match team opponent venue
233 3b0345fb Brazil Argentina Home
234 3b2357fb Argentina Brazil Away
427 3b0947fb England Poland Home
233 3b0345fb Argentina Brazil Away
234 3b2357fb Brazil Argentina Home
427 3b0947fb Poland England Away
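If you want the appended rows to get fresh sequential positions instead of repeating the sliced rows' original labels (the question's expected output shows new row numbers), you can pass `ignore_index=True` to `concat`; a self-contained sketch of the same approach:

```python
import pandas as pd

df = pd.DataFrame({
    'match': ['3b0345fb', '3b2357fb', '3b0947fb'],
    'team': ['Brazil', 'Argentina', 'England'],
    'opponent': ['Argentina', 'Brazil', 'Poland'],
    'venue': ['Home', 'Away', 'Home'],
})

# slice, swap team/opponent, and flip the venue as in the answer above
df2 = df[df['team'].eq('England')].copy()
df2[['team', 'opponent']] = df2[['opponent', 'team']]
df2['venue'] = df2['venue'].map({'Home': 'Away', 'Away': 'Home'})

# ignore_index=True renumbers the combined frame 0..n-1
out = pd.concat([df, df2], ignore_index=True)
print(out)
```

Note that this renumbers the whole result, not just the appended rows.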

SQL command to find out how many players scored how much

I have a table like this:
country  gender  player   score  year  ID
Germany  male    Michael  14     1990  1
Austria  male    Simon    13     1990  2
Germany  female  Mila     16     1990  3
Austria  female  Simona   15     1990  4
This is a table in the database. It covers 70 countries around the world, with player names and gender, and shows how many goals each player scored in which year. The years run from 1990 to 2015, so the table is large. I would like to know how many goals all female players and all male players from Germany scored from 2010 to 2015; that is, the total score of German male players and of German female players for every year from 2010 to 2015, using SQLite.
I expect this output:
country  gender  score  year
Germany  male    114    2010
Germany  female  113    2010
Germany  male    110    2011
Germany  female  111    2011
Germany  male    119    2012
Germany  female  114    2012
Germany  male    119    2013
Germany  female  114    2013
Germany  male    129    2014
Germany  female  103    2014
Germany  male    109    2015
Germany  female  104    2015
SELECT
    country,
    gender,
    year,
    SUM(score) AS score
FROM <table_name>
WHERE country = 'Germany'
  AND year BETWEEN 2010 AND 2015
GROUP BY 1, 2, 3
This filters on the country and the years you are interested in, then sums the total score per group with GROUP BY.
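A runnable sketch of the query using Python's built-in sqlite3 module, with a hypothetical table name `scores` and a few made-up rows (the real table has many more):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute("""CREATE TABLE scores
               (country TEXT, gender TEXT, player TEXT,
                score INTEGER, year INTEGER, ID INTEGER)""")
rows = [
    ('Germany', 'male',   'Michael', 14, 2010, 1),
    ('Germany', 'male',   'Lukas',   10, 2010, 2),
    ('Germany', 'female', 'Mila',    16, 2010, 3),
    ('Austria', 'male',   'Simon',   13, 2010, 4),  # excluded by country filter
    ('Germany', 'male',   'Michael',  9, 2009, 5),  # excluded by year filter
]
con.executemany('INSERT INTO scores VALUES (?, ?, ?, ?, ?, ?)', rows)

# one summed row per (country, gender, year) group
result = con.execute("""
    SELECT country, gender, year, SUM(score) AS score
    FROM scores
    WHERE country = 'Germany' AND year BETWEEN 2010 AND 2015
    GROUP BY 1, 2, 3
    ORDER BY year, gender
""").fetchall()
print(result)
```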

How can I have the same index for two different columns where the columns do not have unique values?

df1:
name_d   name_o  year
Turkiye  Italy   1990
Turkiye  Italy   1991
Turkiye  Italy   1993
Spain    Italy   1990
Spain    Italy   1991
Spain    Japan   1990
df2:
country_name  year  v2x_regime
Spain         1990  0
Turkiye       1990  0
Italy         1990  1
Turkiye       1991  1
Spain         1991  1
Italy         1991  1
Expected result:
name_o  v2x_regime_name_o  name_d   v2x_regime_name_d  year
Italy   1                  Turkiye  1                  1990
Basically I would like to know the regime type of each country for each year. Since this is bilateral data, there are two columns that contain country names, so for each year I would like to have the index for both the name_o column and the name_d column.
Does this work for you?:
df = df1.merge(df2, left_on=['name_o','year'], right_on=['country_name','year'], how='left')
df = df.merge(df2, left_on=['name_d', 'year'], right_on=['country_name', 'year'], how='left')
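A runnable sketch of the double merge with the sample frames; the `suffixes` argument is chosen here to approximate the expected column names (the pandas default would be `_x`/`_y`):

```python
import pandas as pd

df1 = pd.DataFrame({
    'name_d': ['Turkiye', 'Spain'],
    'name_o': ['Italy', 'Italy'],
    'year': [1990, 1990],
})
df2 = pd.DataFrame({
    'country_name': ['Spain', 'Turkiye', 'Italy'],
    'year': [1990, 1990, 1990],
    'v2x_regime': [0, 0, 1],
})

# merge once per country column; overlapping columns from the second merge
# get the suffixes, and the redundant country_name columns are dropped
out = (df1
       .merge(df2, left_on=['name_o', 'year'],
              right_on=['country_name', 'year'], how='left')
       .merge(df2, left_on=['name_d', 'year'],
              right_on=['country_name', 'year'], how='left',
              suffixes=('_name_o', '_name_d'))
       .drop(columns=['country_name_name_o', 'country_name_name_d']))
print(out)
```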

Pandas removing rows with incomplete time series in panel data

I have a dataframe along the lines of the below:
Country1 Country2 Year
1 Italy Greece 2000
2 Italy Greece 2001
3 Italy Greece 2002
4 Germany Italy 2000
5 Germany Italy 2002
6 Mexico Canada 2000
7 Mexico Canada 2001
8 Mexico Canada 2002
9 US France 2000
10 US France 2001
11 Greece Italy 2000
12 Greece Italy 2001
I want to keep only the rows in which there are observations for the entire time series (2000-2002). So, the end result would be:
Country1 Country2 Year
1 Italy Greece 2000
2 Italy Greece 2001
3 Italy Greece 2002
4 Mexico Canada 2000
5 Mexico Canada 2001
6 Mexico Canada 2002
One idea is to reshape with crosstab, test which rows have no 0 values with DataFrame.ne and DataFrame.all, convert the index to a DataFrame with MultiIndex.to_frame, and finally get the filtered rows with DataFrame.merge:
df1 = pd.crosstab([df['Country1'], df['Country2']], df['Year'])
df = df.merge(df1.index[df1.ne(0).all(axis=1)].to_frame(index=False))
print (df)
Country1 Country2 Year
0 Italy Greece 2000
1 Italy Greece 2001
2 Italy Greece 2002
3 Mexico Canada 2000
4 Mexico Canada 2001
5 Mexico Canada 2002
Or if you need to test a specific range, it is possible to compare sets in GroupBy.transform:
r = set(range(2000, 2003))
df = df[df.groupby(['Country1', 'Country2'])['Year'].transform(lambda x: set(x) == r)]
print (df)
Country1 Country2 Year
1 Italy Greece 2000
2 Italy Greece 2001
3 Italy Greece 2002
6 Mexico Canada 2000
7 Mexico Canada 2001
8 Mexico Canada 2002
One option is to pivot the data, drop null rows and reshape back; this only works if the combination of Country* and Year is unique (in the sample data it is):
(df.assign(dummy=1)
   .pivot(index=['Country1', 'Country2'], columns='Year')
   .dropna()
   .stack()
   .drop(columns='dummy')
   .reset_index()
)
Country1 Country2 Year
0 Italy Greece 2000
1 Italy Greece 2001
2 Italy Greece 2002
3 Mexico Canada 2000
4 Mexico Canada 2001
5 Mexico Canada 2002

Series with count larger than certain number

Given this code in iPython
df1 = df["BillingContactCountry"].value_counts()
df1
I get
United States 4138
Germany 1963
United Kingdom 732
Switzerland 528
Australia 459
Canada 369
Japan 344
France 303
Netherlands 285
I want to get a series with count larger than 303, what should I do?
You need boolean indexing:
print (df1[df1 > 303])
United States 4138
Germany 1963
United Kingdom 732
Switzerland 528
Australia 459
Canada 369
Japan 344
Name: BillingContactCountry, dtype: int64
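The same filter can also be chained directly onto `value_counts` with a callable passed to `loc`; a small sketch with a made-up series (the cutoff 2 stands in for the question's 303):

```python
import pandas as pd

s = pd.Series(['US'] * 5 + ['DE'] * 3 + ['FR'] * 1)

# value_counts sorts descending; the lambda keeps only counts above the cutoff
top = s.value_counts().loc[lambda c: c > 2]
print(top)
```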