How to iterate over a list of csv files and compile files with common filenames into a single csv as multiple columns - pandas

I am currently iterating through a list of csv files and want to combine the csv files with common filename strings into a single csv file, merging the data from each new csv file in as a set of two new columns. I am having trouble with the final part: the append command adds the data as rows at the bottom of the csv. I have tried pd.concat, but must be going wrong somewhere. Any help would be much appreciated.
**Note: the code uses Python 2, purely for compatibility with the software I am using; a Python 3 solution is welcome if it translates.
Here is the code I'm currently working with:
rb_headers = ["OID_RB", "Id_RB", "ORIG_FID_RB", "POINT_X_RB", "POINT_Y_RB"]
for i in coords:
    if fnmatch.fnmatch(i, '*RB_bank_xycoords.csv'):
        df = pd.read_csv(i, header=0, names=rb_headers)
        df2 = df[::-1]
        # Export the inverted RB csv file as a new csv to the original folder, overwriting the original
        df2.to_csv(bankcoords+i, index=False)

# Iterate through csvs to combine those with similar key strings in their filenames and merge them into a single csv
files_of_interest = {}
forconc = []
for filename in coords:
    if filename[-4:] == '.csv':
        key = filename[:39]
        files_of_interest.setdefault(key, [])
        files_of_interest[key].append(filename)

for key in files_of_interest:
    buff_df = pd.DataFrame()
    for filename in files_of_interest[key]:
        buff_df = buff_df.append(pd.read_csv(filename))
    files_of_interest[key] = buff_df

redundant_headers = ["OID", "Id", "ORIG_FID", "OID_RB", "Id_RB", "ORIG_FID_RB"]
outdf = buff_df.drop(redundant_headers, axis=1)

If you only want to merge everything into one file:
paths_list = ['path1', 'path2', ...]
dfs = [pd.read_csv(f, header=None, sep=";") for f in paths_list]
dfs = pd.concat(dfs, ignore_index=True)
dfs.to_csv(...)
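Since the original question wanted the second file's data added as new columns rather than rows, a side-by-side concat with axis=1 may be closer to the goal. A minimal sketch, with made-up frames standing in for the two csv reads:

```python
import pandas as pd

# Two frames standing in for a pair of csv files sharing a key string in
# their names; in practice these would come from pd.read_csv on the matches
left = pd.DataFrame({"POINT_X": [1.0, 2.0], "POINT_Y": [3.0, 4.0]})
right = pd.DataFrame({"POINT_X_RB": [5.0, 6.0], "POINT_Y_RB": [7.0, 8.0]})

# axis=1 places the second frame's columns alongside the first, aligning on
# row index; reset_index guards against misaligned indices from file reads
combined = pd.concat([left.reset_index(drop=True),
                      right.reset_index(drop=True)], axis=1)
```

With axis=0 (the default) the rows would be stacked underneath instead, which is the behaviour the question describes as the problem.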

Related

How to add new file to dataframe

I have a folder where CSV files are stored. At a certain interval, a new CSV file (same format) is added to the folder.
I need to detect the new file and add its contents to a data frame.
My current code reads all CSV files at once and stores them in a dataframe, but the dataframe should be updated with the contents of the new CSV whenever a new file is added to the folder.
import os
import glob
import pandas as pd
os.chdir(r"C:\Users\XXXX\CSVFILES")
extension = 'csv'
all_filenames = [i for i in glob.glob('*.{}'.format(extension))]
#combine all files in the list
df = pd.concat([pd.read_csv(f) for f in all_filenames ])
Let's say you have a path into your folder where new csv are downloaded:
path_csv = r"C:\........\csv_folder"
I assume that your dataframe (the one you want to append to) is created and that you load it into your script (you have probably updated it before, saved to some csv in another folder). Let's assume you do this:
path_saved_df = r"C:/..../saved_csv"  # the folder where you saved the previously read csv:s
filename = "my_old_files.csv"
df_old = pd.read_csv(path_saved_df + '/' + filename, sep="<your separator>")  # e.g. sep=";"
Then, to only read the latest addition of a csv to the folder in path you simply do the following:
list_of_csv = glob.glob(path_csv + "\\*.csv")
latest_csv = max(list_of_csv, key=os.path.getctime)  # max ensures you only read the latest file
new_file = pd.read_csv(latest_csv, sep="<your separator>", encoding="iso-8859-1")  # change encoding if you need to
Your new dataframe is then
new_df = pd.concat([df_old, new_file], ignore_index=True)
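The steps above can be condensed into a small helper; the function name and default separator are illustrative:

```python
import glob
import os
import pandas as pd

def append_latest_csv(df_old, folder, sep=";"):
    """Read only the newest CSV in `folder` and append its rows to df_old."""
    candidates = glob.glob(os.path.join(folder, "*.csv"))
    latest = max(candidates, key=os.path.getctime)  # newest by creation time
    new_rows = pd.read_csv(latest, sep=sep)
    return pd.concat([df_old, new_rows], ignore_index=True)
```

Calling this on a schedule (or from a filesystem watcher) keeps the frame up to date without re-reading the whole folder each time.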

Exporting Multiple log files data to single Excel using Pandas

How do I export multiple dataframes to a single Excel file? I'm not talking about merging or combining. I just want a specific range of lines from multiple log files compiled into a single Excel sheet. I already wrote some code but I am stuck:
import pandas as pd
import glob
import os
from openpyxl.workbook import Workbook
file_path = "C:/Users/HP/Desktop/Pandas/MISC/Log Source"
read_files = glob.glob(os.path.join(file_path,"*.log"))
for files in read_files:
    logs = pd.read_csv(files, header=None).loc[540:1060, :]
    print(logs)
    logs.to_excel("LBS.xlsx")
When I do this, I only get data from the first log.
Appreciate your recommendations. Thanks !
You are saving logs inside the loop, so the Excel file is overwritten on every iteration. What you want is to build a list of dataframes, combine them all, and then save the result to Excel.
file_path = "C:/Users/HP/Desktop/Pandas/MISC/Log Source"
read_files = glob.glob(os.path.join(file_path, "*.log"))
dfs = []
for file in read_files:
    log = pd.read_csv(file, header=None).loc[540:1060, :]
    dfs.append(log)
logs = pd.concat(dfs)
logs.to_excel("LBS.xlsx")
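As a follow-on, if you also need to know which log each row came from after concatenating, a hypothetical variant tags every row with its source filename first:

```python
import os
import pandas as pd

def combine_logs(frames_by_name):
    """frames_by_name: {file path: DataFrame}. Tags each row with the
    source file's basename before stacking everything into one frame."""
    tagged = [df.assign(source=os.path.basename(name))
              for name, df in frames_by_name.items()]
    return pd.concat(tagged, ignore_index=True)
```

The extra `source` column makes it easy to trace any line in the final sheet back to the log it was extracted from.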

Read multiple csv's and concat to multiple dataframes based on filenames python

I have a list of csv's with the same columns. Here is how the list looks:
C:/Users/foo/bar/January01.csv
C:/Users/foo/bar/February01.csv
C:/Users/foo/bar/March01.csv
C:/Users/foo/bar/January02.csv
C:/Users/foo/bar/March02.csv
I want something like this: all csv's that start with January should have their data copied into a January dataframe, and likewise for the other months.
Can anyone help me with this?
You can first iterate through your directory to find all the months you have, then make a second pass appending the dataframes, and finally save them:
import os
import pandas as pd

dir_name = ...  # your dir
months = set()
for file in os.listdir(dir_name):
    months.add(file[:-6])  # "January01.csv" -> "January"
month_df = {month: pd.DataFrame() for month in months}
for file in os.listdir(dir_name):
    month_df[file[:-6]] = month_df[file[:-6]].append(pd.read_csv(os.path.join(dir_name, file)))
for month in month_df.keys():
    month_df[month].to_csv(month + '.csv', index=False)
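An alternative sketch that avoids repeated DataFrame.append (removed in pandas 2.0) by grouping the paths first and calling pd.concat once per month; the function name and the leading-letters month key are assumptions based on the filenames shown:

```python
import os
import re
from collections import defaultdict
import pandas as pd

def split_by_month(dir_name):
    """Group csv paths by the leading alphabetic part of the filename
    ("January01.csv" -> "January"), then concatenate each group once."""
    groups = defaultdict(list)
    for file in os.listdir(dir_name):
        if file.endswith(".csv"):
            month = re.match(r"[A-Za-z]+", file).group(0)
            groups[month].append(os.path.join(dir_name, file))
    # a single concat per month avoids the quadratic cost of repeated appends
    return {m: pd.concat(map(pd.read_csv, paths), ignore_index=True)
            for m, paths in groups.items()}
```

Each value in the returned dict can then be written out with `to_csv` exactly as in the answer above.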

Writing a pandas dataframe to a csv file and renaming on a for loop

I have a script that reads a SQL db into pandas data frames, which are then concatenated in a loop to form one dataframe. I need to write this second data frame to a csv file and name it from a list of IDs.
I am using pd.to_csv to write the file and os.rename to change the name.
for X, df in d.iteritems():
    newdf = pd.concat(d)
for X in newdf:
    export_csv = newdf.to_csv(r'/Users/uni/Desktop/corrindex+id/X.csv', index=False, header=None)
for X in NAMES:
    os.rename('X.csv', X)
This is the code that concatenates the data frames together.
In the third loop, NAMES = 'rt35' but in the future this will be a list of similar names.
I expect to get a file named rt35.csv. However I either get r.csv or X.csv and this error:
OSError: [Errno 2] No such file or directory
The files are writing correctly, the only issue is the name.
In your code, X is inside a string, so Python treats it as a literal character rather than a variable. You should do it like this:
export_csv = newdf.to_csv(r'/Users/uni/Desktop/corrindex+id/{}.csv'.format(X), index=False, header=None)
Same here:
for X in NAMES:
    os.rename(X + '.csv', X)

How to skip duplicate headers in multiple CSV files having identical columns and merge as one big data frame

I have copied 34 CSV files with identical columns into google colab and am trying to merge them into one big data frame. However, each CSV has a duplicate header which needs to be skipped.
The actual header will be skipped while concatenating anyway, since my CSV files have identical columns, correct?
dfs = [pd.read_csv(path.join('/content/drive/My Drive/', x), skiprows=1) for x in os.listdir('/content/drive/My Drive/') if path.isfile(path.join('/content/drive/My Drive/', x))]
df = pd.concat(dfs)
The above code throws the error below:
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe2 in position 1: invalid continuation byte
The code below works for sample files, but I need an efficient way to skip the duplicate headers and merge everything into one data frame. Please suggest.
df1=pd.read_csv("./Aug_0816.csv",skiprows=1)
df2=pd.read_csv("./Sep_0916.csv",skiprows=1)
df3=pd.read_csv("./Oct_1016.csv",skiprows=1)
df4=pd.read_csv("./Nov_1116.csv",skiprows=1)
df5=pd.read_csv("./Dec_1216.csv",skiprows=1)
dfs=[df1,df2,df3,df4,df5]
df=pd.concat(dfs)
Have you considered using glob from the standard library?
Try this
path = '/content/drive/My Drive/'
os.chdir(path)
allFiles = glob.glob("*.csv")
dfs = [pd.read_csv(f, header=None, error_bad_lines=False) for f in allFiles]
# or if you know the specific delimiter for your csv
# dfs = [pd.read_csv(f, header=None, delimiter='yourdelimiter') for f in allFiles]
df = pd.concat(dfs)
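The UnicodeDecodeError in the original attempt usually means the files are not UTF-8; passing an explicit encoding to read_csv (latin-1 here is a guess; use whatever the files actually use) together with skiprows=1 handles both the decode failure and the duplicate header. A sketch, with an illustrative helper name:

```python
import glob
import os
import pandas as pd

def merge_folder(path, encoding="latin-1"):
    """Read every csv under `path`, skipping the duplicate first line
    of each file, and stack them into one frame."""
    files = glob.glob(os.path.join(path, "*.csv"))
    frames = [pd.read_csv(f, skiprows=1, encoding=encoding) for f in files]
    return pd.concat(frames, ignore_index=True)
```

With skiprows=1, the second line of each file becomes the header, so identical columns line up automatically in the concat.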
Try this: a fairly generic script for concatenating n csv files in a given path that share a common file name format.
def get_merged_csv(flist, **kwargs):
    return pd.concat([pd.read_csv(f, **kwargs) for f in flist], ignore_index=True)

path = r"C:\Users\Jyotsna\Documents"
fmask = os.path.join(path, 'Detail**.csv')
df = get_merged_csv(glob.glob(fmask), index_col=None)
df.head()
If you want to skip some fixed rows and/or columns in each of the files before concatenating, edit that line accordingly:
    return pd.concat([pd.read_csv(f, skiprows=4, usecols=range(9), **kwargs) for f in flist], ignore_index=True)