I want to join two data sources, orders and customers:
orders is an SQL Server table:
orderid | customerid | orderdate  | ordercost
--------|------------|------------|----------
12000   | 1500       | 2008-08-09 | 38610
and customers is a csv file:
customerid,first_name,last_name,starting_date,ending_date,country
1500,Sian,Read,2008-01-07,2010-01-07,Greenland
I want to join these two tables in my Python application, so I wrote the following code:
import pypyodbc
import pandas as pd

# Connect to SQL Server with the pypyodbc library
connection = pypyodbc.connect("connection string here")
cursor = connection.cursor()
cursor.execute("SELECT * FROM orders")
result = cursor.fetchall()
# Convert the result to a pandas DataFrame
df1 = pd.DataFrame(result, columns=['orderid', 'customerid', 'orderdate', 'ordercost'])
# Read the CSV file
df2 = pd.read_csv(customer_csv)
# Merge the two DataFrames
merged = pd.merge(df1, df2, on='customerid', how='inner')
print(merged[['first_name', 'country']])
I expect
first_name | country
-----------|--------
Sian | Greenland
But I get an empty result.
When I run this code with two DataFrames that both come from CSV files, it works fine. Any help?
Thanks.
I think the problem is that the customerid column has a different dtype in each DataFrame, so no rows match.
You need to convert both columns to int, or both to str:
df1['customerid'] = df1['customerid'].astype(int)
df2['customerid'] = df2['customerid'].astype(int)
Or:
df1['customerid'] = df1['customerid'].astype(str)
df2['customerid'] = df2['customerid'].astype(str)
It is also possible to omit how='inner', because it is the default value of merge:
merged = pd.merge(df1, df2, on='customerid')
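To illustrate the mismatch, here is a minimal made-up reproduction (the dtypes and values are assumptions, not taken from the question's actual data):
import pandas as pd

# Hypothetical frames mirroring the question: the SQL result holds customerid
# as a string, while read_csv infers an integer
df1 = pd.DataFrame({'customerid': ['1500'], 'ordercost': [38610]})
df2 = pd.DataFrame({'customerid': [1500], 'country': ['Greenland']})
print(df1['customerid'].dtype, df2['customerid'].dtype)  # object int64

# Cast both key columns to the same dtype, then merge
df1['customerid'] = df1['customerid'].astype(int)
print(pd.merge(df1, df2, on='customerid', how='inner'))  # one matching row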
An empty DataFrame from pd.merge means you don't have any matching values across the two frames. Have you checked the type of the data? Use
df1['customerid'].dtype
to check.
As well as converting after importing (as suggested in the other answer), you can also tell pandas what dtype you want when you read the CSV:
df2 = pd.read_csv(customer_csv, dtype={'customerid': str})
I have two data frames which look like df1 and df2 below and I want to create df3 as shown.
I could do this using a left join to get all the rows into one dataframe and then a numpy.where to see whether they match or not (a rough sketch of that approach is shown after the reproduction code below).
I could get what I want, but I feel there should be a more elegant way of doing this that eliminates renaming columns, reshuffling columns in the dataframe, and then using np.where.
Is there a better way to do this?
code to reproduce dataframes:
import pandas as pd
df1=pd.DataFrame({'product':['apples','bananas','oranges','pineapples'],'price':[1,2,3,7],'quantity':[5,7,11,4]})
df2=pd.DataFrame({'product':['apples','bananas','oranges'],'price':[2,2,4],'quantity':[5,7,13]})
df3=pd.DataFrame({'product':['apples','bananas','oranges'],'price_df1':[1,2,3],'price_df2':[2,2,4],'price_match':['No','Yes','No'],'quantity':[5,7,11],'quantity_df2':[5,7,13],'quantity_match':['Yes','Yes','No']})
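For reference, a sketch of the merge-then-np.where approach described above might look like this (the suffixes are assumptions chosen to match df3):
import numpy as np

# Left join on product, keeping each frame's price/quantity columns apart
cmp = df1.merge(df2, on='product', how='left', suffixes=('_df1', '_df2'))
# Flag whether the two frames agree in each pair of columns
cmp['price_match'] = np.where(cmp['price_df1'] == cmp['price_df2'], 'Yes', 'No')
cmp['quantity_match'] = np.where(cmp['quantity_df1'] == cmp['quantity_df2'], 'Yes', 'No')
Note that with how='left' the pineapples row stays (with NaN in the _df2 columns); the target df3 shown above corresponds to an inner join.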
An elegant way to do your task is to:
generate "partial" DataFrames from each source column,
and then concatenate them.
The first step is to define a function to join 2 source columns and append a "match" column:
import numpy as np

def myJoin(s1, s2):
    # Inner-join the two Series (as one-column frames) and flag whether they match
    rv = s1.to_frame().join(s2.to_frame(), how='inner',
                            lsuffix='_df1', rsuffix='_df2')
    rv[s1.name + '_match'] = np.where(rv.iloc[:, 0] == rv.iloc[:, 1], 'Yes', 'No')
    return rv
Then, from df1 and df2, generate 2 auxiliary DataFrames setting product as the index:
wrk1 = df1.set_index('product')
wrk2 = df2.set_index('product')
And the final step is:
result = pd.concat([myJoin(wrk1[col], wrk2[col]) for col in wrk1.columns], axis=1) \
    .reset_index()
Details:
for col in wrk1.columns - generates names of columns to join.
myJoin(wrk1[col], wrk2[col]) - generates the partial result for this column from
both source DataFrames.
[…] - a list comprehension, collecting the above partial results in a list.
pd.concat(…) - concatenates these partial results into the final result.
reset_index() - converts the index (product names) into a regular column.
For your source data, the result is:
product price_df1 price_df2 price_match quantity_df1 quantity_df2 quantity_match
0 apples 1 2 No 5 5 Yes
1 bananas 2 2 Yes 7 7 Yes
2 oranges 3 4 No 11 13 No
I am a beginner. I have two dataframes in pandas, and I would like to identify the changes from the original dataframe to the new one.
Rows: products
Columns: demand for future periods
dataframe differences could be: new rows, deleted rows, and changed demand.
Ideally I would make a heatmap (showing the changes), but I'm stuck and unsure whether I have to iterate over the rows or not.
A record in a dataframe is:
ProductId | demand_Month1 | demand_Month2 | demand_Month3 ... MonthX
This data is updated monthly. I would like to generate the following table:
productID | old - new (demand) ... for each month.
Both dataframes contain demand data for the same months.
import pandas as pd

def dataframe_difference(df1: pd.DataFrame, df2: pd.DataFrame, which=None):
    """Find rows which are different between two DataFrames."""
    comparison_df = df1.merge(
        df2,
        indicator=True,
        how='outer'
    )
    if which is None:
        # Keep rows that appear in only one of the frames
        diff_df = comparison_df[comparison_df['_merge'] != 'both']
    else:
        # Keep rows from the requested side ('left_only', 'right_only' or 'both')
        diff_df = comparison_df[comparison_df['_merge'] == which]
    diff_df.to_csv('data/diff.csv')
    return diff_df
Look at this for a start
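A quick usage sketch (the frames, columns and values below are invented just to show the call):
old = pd.DataFrame({'ProductId': [1, 2, 3], 'demand_Month1': [10, 20, 30]})
new = pd.DataFrame({'ProductId': [1, 2, 4], 'demand_Month1': [10, 25, 40]})

# Rows that appear in only one frame (new, deleted, or changed-demand rows);
# note the function also writes data/diff.csv, so a data/ directory must exist
print(dataframe_difference(old, new))
# Only rows that exist in the old frame but not in the new one
print(dataframe_difference(old, new, which='left_only'))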
I have one dataframe - df_similar_strings, which looks like this:
|---------------------|
| string_values |
|---------------------|
| ['catish', 'cat'] |
|---------------------|
| ['doggo', 'dogy'] |
|---------------------|
and the other one - df_source:
|-----------------------------|------------------|
| values | key_value |
|-----------------------------|------------------|
| ['catish', 'cat', 'cat-'] | cat |
|-----------------------------|------------------|
| ['doggo', 'dogy', 'dog'] | dog |
|-----------------------------|------------------|
I would like to join those data frames based on the columns string_values and values, so that at least one value matches.
I have no idea how to do this since the columns are nested as arrays.
Hey, you just need to cast your lists to tuples and then try merging. Since lists are unhashable, the merge operation can't be applied to them. Try this:
df_source["values"] = df_source["values"].apply(tuple)
Do the same with the other dataframe and then merge with pd.merge.
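A sketch of that approach, assuming the frames and column names from the question (note that merging on tuples only matches rows whose whole lists are identical):
import pandas as pd

# Cast the list columns to hashable tuples
df_similar_strings['string_values'] = df_similar_strings['string_values'].apply(tuple)
df_source['values'] = df_source['values'].apply(tuple)
# Merge where the whole tuples are identical
merged = pd.merge(df_similar_strings, df_source,
                  left_on='string_values', right_on='values')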
You can solve it by first taking the cartesian product of your two dataframes and then dropping from that dataframe all rows which don't have any shared value.
For simplicity, I assume the columns in both datasets have the same name ("values"). I also assume the lists don't have repeated values (each value appears once).
from collections import Counter

def find_duplicates(arr):
    # Values that appear twice in the concatenated list are shared by both rows
    return [item for item, count in Counter(arr).items() if count == 2]

# Add a constant key to both frames to build the cartesian product
df1['key'] = 1
df2['key'] = 1
cartes_prod_df = df1.merge(df2, on=['key'], how='outer').drop(columns=['key'])

# Keep only row pairs whose lists share at least one value
duplicate_values = (cartes_prod_df.values_x + cartes_prod_df.values_y).apply(find_duplicates)
merged_df = cartes_prod_df[duplicate_values.apply(lambda x: len(x) > 0)]
I've used a little trick to build the cartesian product (adding the constant key column); duplicate_values then holds, for each row pair, the values that appear twice in the concatenated array (built with the + operator), i.e. the values shared by both lists.
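As a side note, on pandas 1.2 or newer the helper key column isn't needed, because a cross merge builds the cartesian product directly:
cartes_prod_df = df1.merge(df2, how='cross')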
UPDATE
In order to supply a full example, here's an example of df1 and df2:
d1 = {'values': [['A','B'],['B','C'],['D']],'otherkey':[1,2,3]}
d2 = {'values': [['A'],['B'],['A','C'],['D']],'otherkey':[4,5,3,6]}
df1 = pd.DataFrame(d1)
df2 = pd.DataFrame(d2)
Now, merged_df gives the following output (only the row pairs that share at least one value survive):
   values_x  otherkey_x values_y  otherkey_y
0    [A, B]           1      [A]           4
1    [A, B]           1      [B]           5
2    [A, B]           1   [A, C]           3
5    [B, C]           2      [B]           5
6    [B, C]           2   [A, C]           3
11      [D]           3      [D]           6
How do I get a merged data frame from two data frames that share a common column, such that only the rows with a common value in that column end up in the merged data frame?
I have 5000 rows in df1 in this format:
director_name actor_1_name actor_2_name actor_3_name movie_title
0 James Cameron CCH Pounder Joel David Moore Wes Studi Avatar
1 Gore Verbinski Johnny Depp Orlando Bloom Jack Davenport Pirates of the Caribbean: At World's End
2 Sam Mendes Christoph Waltz Rory Kinnear Stephanie Sigman Spectre
and 10000 rows of df2 as
movieId genres movie_title
1 Adventure|Animation|Children|Comedy|Fantasy Toy Story
2 Adventure|Children|Fantasy Jumanji
3 Comedy|Romance Grumpier Old Men
4 Comedy|Drama|Romance Waiting to Exhale
The common column 'movie_title' has shared values, and based on them I want to keep all rows where 'movie_title' is the same in both frames. The other rows should be deleted.
Any help/suggestion would be appreciated.
Note: I already tried
pd.merge(dfinal, df1, on='movie_title')
and the output comes out as just one row:
director_name actor_1_name actor_2_name actor_3_name movie_title movieId title genres
and on how ="outer"/"left", "right", I tried all and didn't get any row after dropping NaN although many common coloumn do exist.
You can use pd.merge:
import pandas as pd
pd.merge(df1, df2, on="movie_title")
Only rows are kept for which common keys are found in both data frames. In case you want to keep all rows from the left data frame and only add values from df2 where a matching key is available, you can use how="left":
pd.merge(df1, df2, on="movie_title", how="left")
We can merge two DataFrames in several ways. The most common way in Python is to use the merge operation in pandas.
import pandas as pd
dfinal = df1.merge(df2, on="movie_title", how='inner')
For merging based on columns of different dataframes, you may specify the left and right column names, especially in case of ambiguity when the same column has two different names, say 'movie_title' and 'movie_name'.
dfinal = df1.merge(df2, how='inner', left_on='movie_title', right_on='movie_name')
If you want to be even more specific, you may read the documentation of the pandas merge operation.
If you want to merge two DataFrames so that only the values common to both data frames appear in the result, then do an inner merge.
import pandas as pd
merged_Frame = pd.merge(df1, df2, on='movie_title', how='inner')
I come from an Excel background but I love pandas and it has truly made me more efficient. Unfortunately, I probably carry over some bad habits from Excel. I have three large files (between 2 million and 13 million rows each) which contain data on interactions which could be tied together, unfortunately, there is no unique key connecting the files. I am literally concatenating (Excel formula) 3 fields into one new column on all three files.
Three columns exist on each file, which I combine together (the other fields would be, for example, the reason for the interaction on one file, the score on another file, and some other data on the third file, all of which I would like to tie back to a certain agentID):
Date | CustomerID | AgentID
I edit my date format to be uniform on each file:
df['Date'] = pd.to_datetime(df['Date'], errors='coerce')
df['Date'] = df['Date'].apply(lambda x: x.date().strftime('%Y-%m-%d'))
Then I create a unique column (well, as unique as I can get it... sometimes the same customer interacts with the same agent on the same date, but this should be quite rare):
df['Unique'] = df['Date'].astype(str) + df['CustomerID'].astype(str) + df['AgentID'].astype(str)
I do the same steps for df2 and then:
combined = pd.merge(df, df2, how = 'left', on = 'Unique')
I typically send that to a new csv in case something crashes, gzip it, then read it again and do the same process again with the third file.
final = pd.merge(combined, df2, how = 'left', on = 'Unique')
As you can see, this takes time. I have to format the dates on each and then turn them into text, create an object column which adds to the filesize, and (due to the raw data issues themselves) drop duplicates so I don't accidentally inflate numbers. Is there a more efficient workflow for me to follow?
Instead of using on = 'Unique':
combined = pd.merge(df, df2, how = 'left', on = 'Unique')
you can pass a list of columns to the on keyword parameter:
combined = pd.merge(df, df2, how='left', on=['Date', 'CustomerID', 'AgentID'])
Pandas will correctly merge rows based on the triplet of values from the 'Date', 'CustomerID', 'AgentID' columns. This is safer (see below) and easier than building the Unique column.
For example,
import pandas as pd
import numpy as np
np.random.seed(2015)
df = pd.DataFrame({'Date': pd.to_datetime(['2000-1-1','2000-1-1','2000-1-2']),
'CustomerID':[1,1,2],
'AgentID':[10,10,11]})
df2 = df.copy()
df3 = df.copy()
L = len(df)
df['ABC'] = np.random.choice(list('ABC'), L)
df2['DEF'] = np.random.choice(list('DEF'), L)
df3['GHI'] = np.random.choice(list('GHI'), L)
df2 = df2.iloc[[0,2]]
combined = df
for x in [df2, df3]:
    combined = pd.merge(combined, x, how='left', on=['Date', 'CustomerID', 'AgentID'])
yields
In [200]: combined
Out[200]:
AgentID CustomerID Date ABC DEF GHI
0 10 1 2000-1-1 C F H
1 10 1 2000-1-1 C F G
2 10 1 2000-1-1 A F H
3 10 1 2000-1-1 A F G
4 11 2 2000-1-2 A F I
A cautionary note:
Concatenating the CustomerID and the AgentID to create a Unique ID could be problematic, particularly if neither has a fixed-width format.
For example, if CustomerID = '12' and AgentID = '34', then (ignoring the date, which causes no problem since it does have a fixed width) Unique would be '1234'. But if CustomerID = '1' and AgentID = '234', then Unique would again equal '1234'. So the Unique IDs may be mixing entirely different customer/agent pairs.
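A throwaway check of that collision (the IDs below are made up):
# Two different customer/agent pairs collapse to the same concatenated key
key_a = '2015-01-01' + '12' + '34'   # CustomerID='12', AgentID='34'
key_b = '2015-01-01' + '1' + '234'   # CustomerID='1',  AgentID='234'
print(key_a == key_b)  # True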
PS. It is a good idea to parse the date strings into date-like objects:
df['Date'] = pd.to_datetime(df['Date'], errors='coerce')
Note that if you use
combined = pd.merge(combined, x, how='left', on=['Date','CustomerID', 'AgentID'])
it is not necessary to convert any of the columns back to strings.
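For instance, a rough sketch that keeps the dates as datetimes throughout (df, df2 and df3 stand for the three files; dt.normalize drops the time-of-day so the dates line up across files):
for frame in (df, df2, df3):
    frame['Date'] = pd.to_datetime(frame['Date'], errors='coerce').dt.normalize()

combined = df.merge(df2, how='left', on=['Date', 'CustomerID', 'AgentID']) \
             .merge(df3, how='left', on=['Date', 'CustomerID', 'AgentID'])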