How to quickly read csv files and append them into a single pandas data frame - pandas

I am trying to read 4684 csv files from a folder; each file consists of 2000 rows and 102 columns and is about 418 kB in size. I am reading and appending them one by one using the code below.
df2 = pd.DataFrame()
for file in allFiles:
    df = pd.read_csv(file, index_col=None, header=None)
    df2 = df2.append(df)
This takes 4 to 5 hours to read all 4684 files and append them into one dataframe. Is there any way to make this process faster? I am using an i7 with 32 GB of RAM.
Thanks
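A common way to speed this up is to read each file into its own dataframe and concatenate them all in a single call, since appending inside a loop copies the accumulated data on every iteration. A minimal sketch, assuming allFiles is the same list of file paths as above:

import pandas as pd

# Read every file first, then concatenate once at the end; this avoids the
# repeated copying that df2.append(df) does on each loop iteration.
frames = [pd.read_csv(f, index_col=None, header=None) for f in allFiles]
df2 = pd.concat(frames, ignore_index=True)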

Related

How can I process a large parquet file from spark in numpy/pandas?

I'm posting this for the pandas, numpy and spark tags because I'm not really sure of the best approach to solve this problem within those three systems.
I have a large parquet file that a downstream process is having trouble opening because it exceeds the system's memory (~63 GB in memory if opened at once). I was writing the file like this:
FULL_MAIN.write.mode("overwrite").parquet(PATH+"/FULL_MAIN.parquet")
but the file was too big, so I tried to break it into smaller chunks:
split_factor = [.1,.1,.1,.1,.1,.1,.1,.1,.1,.1]
FULL_MAIN_RDD1,FULL_MAIN_RDD2,FULL_MAIN_RDD3,FULL_MAIN_RDD4,FULL_MAIN_RDD5, FULL_MAIN_RDD6,FULL_MAIN_RDD7,FULL_MAIN_RDD8,FULL_MAIN_RDD9,FULL_MAIN_RDD10 = FULL_MAIN.randomSplit(split_factor)
FULL_MAIN_RDD1.write.mode("overwrite").parquet(PATH+"/FULL_MAIN_RDD1.parquet")
FULL_MAIN_RDD2.write.mode("overwrite").parquet(PATH+"/FULL_MAIN_RDD2.parquet")
...
The problem with this approach is that there are other dataframes whose rows I need to keep aligned with this one, and the random split breaks that alignment.
So my two questions are:
Is there a way to split multiple dataframes into relatively equal parts when I don't have any row numbers or a numeric counter for each row in my dataset?
Is there a way to read parquet files in batches in pandas or numpy? This would basically solve my problem on the downstream system. I can't figure out how to open the parquet in batches (I've tried opening it in pandas, splitting the rows, and saving each piece as its own file, but loading the dataframe crashes my system). I am not sure if it's possible without exceeding memory.
The Parquet file format supports row groups. Install pyarrow and set row_group_size when creating the parquet file:
df.to_parquet("filename.parquet", row_group_size=10000, engine="pyarrow")
Then you can read it group by group (or read only a specific group):
import pyarrow.parquet as pq

pq_file = pq.ParquetFile("filename.parquet")
n_groups = pq_file.num_row_groups
for grp_idx in range(n_groups):
    df = pq_file.read_row_group(grp_idx, use_pandas_metadata=True).to_pandas()
    process(df)
If you don't have control over how the parquet file is created, you are still able to read only part of the file at a time:
pq_file = pq.ParquetFile("filename.parquet")
batch_size = 10000  # records per batch
batches = pq_file.iter_batches(batch_size, use_pandas_metadata=True)  # batches is a generator
for batch in batches:
    df = batch.to_pandas()
    process(df)
I am not sure if you have Spark available. If you want to provide the downstream process with smaller chunks of the file, you can repartition to a desired number of chunks and rewrite the parquet file.
You can change the repartition number as per your need.
df = spark.read.parquet('filename.parquet')
df.repartition(200).write.mode('overwrite').parquet('targetPath')

Create Dataframe in Pandas - Out of memory error while reading Parquet files

I have a Windows 10 machine with 8 GB RAM and 5 cores.
I have created a parquet file compressed with gzip. The size of the file after compression is 137 MB.
When I try to read the parquet file with pandas, Dask, and Vaex, I get memory errors:
Pandas :
df = pd.read_parquet("C:\\files\\test.parquet")
OSError: Out of memory: realloc of size 3915749376 failed
Dask:
import dask.dataframe as dd
df = dd.read_parquet("C:\\files\\test.parquet").compute()
OSError: Out of memory: realloc of size 3915749376 failed
Vaex:
df = vaex.open("C:\\files\\test.parquet")
OSError: Out of memory: realloc of size 3915749376 failed
Since pandas/Python is meant to be efficient and a 137 MB file is a modest size, are there any recommended ways to create dataframes efficiently? Libraries like Vaex and Dask claim to be very efficient.
For a single machine, I would recommend Vaex with the HDF5 file format. The data resides on the hard disk, so you can work with bigger data sets. There is a built-in function in vaex that will read and convert a bigger csv file into the HDF5 file format:
df = vaex.from_csv('./my_data/my_big_file.csv', convert=True, chunk_size=5_000_000)
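On subsequent runs the converted file can be opened directly; vaex typically writes it next to the original with an .hdf5 suffix appended (the exact path below assumes that convention):

import vaex

# Memory-maps the converted HDF5 file instead of loading it into RAM.
df = vaex.open('./my_data/my_big_file.csv.hdf5')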
Dask is optimized for distributed systems: you read the big file in chunks and then scatter them among worker machines.
It is entirely possible for a 137 MB parquet file to expand to 4 GB in memory, due to the efficient compression and encoding in parquet. You may have some options at load time; please show your schema. Are you using fastparquet or pyarrow?
Since all of the engines you are trying to use are capable of loading one "row group" at a time, I suspect you only have one row group, so splitting won't work. You could load only a selection of columns to save memory, if that accomplishes your task (all the loaders support this).
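For example, a minimal sketch of selecting columns at load time (the column names are hypothetical placeholders):

import pandas as pd

# Only the listed columns are read and decoded, which can cut memory use
# substantially; "col_a" and "col_b" are hypothetical column names.
df = pd.read_parquet("C:\\files\\test.parquet", columns=["col_a", "col_b"])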
Check that you are using the latest version of pyarrow; updating has helped me a few times.
pip install -U pyarrow
pip install pyarrow==0.15.0 worked for me.

How to read a no header csv with variable length csv using pandas

I have a csv file which has no header row, and each line has a variable number of fields.
Each record can have up to 398 fields, and I want to keep only 256 fields in my dataframe, as those are the only fields I need to process.
Below is a slim version of the file.
1,2,3,4,5,6
12,34,45,65
34,34,24
In the above I would like to keep only 3 fields (analogous to the 256 above) from each line while calling read_csv.
I tried the below
import pandas as pd
df = pd.read_csv('sample.csv',header=None)
I get the following error, as pandas uses the first line to infer the number of columns.
File "pandas/_libs/parsers.pyx", line 2042, in pandas._libs.parsers.raise_parser_error
pandas.errors.ParserError: Error tokenizing data. C error: Expected 5 fields in line 4, saw 10
The only solution I can think of is using
names = ['column1','column2','column3','column4','column5','column6']
while creating the data frame.
But the real files can be up to 50 MB, and I don't want to do that because it takes a lot of memory; I am running this on AWS Lambda, which would incur more cost. I have to process a large number of files daily.
My question is: can I create a dataframe using just the slimmer 256 fields while reading the csv itself? Can that be my step one?
I am very new to pandas, so kindly bear with my ignorance. I looked for a solution for a long time but could not find one.
# only 3 columns
df = pd.read_csv('sample.csv', header=None, usecols=range(3))
print(df)
# 0 1 2
# 0 1 2 3
# 1 12 34 45
# 2 34 34 24
So just change the range value to the number of fields you need.
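Applied to the real files described above, where a later record may be longer than the first one, one way to avoid the tokenizing error without typing out 398 column names is to generate them and still keep only the first 256 fields; a sketch, with the file name as a placeholder:

import pandas as pd

# names=range(398) tells the parser the maximum possible record width, so
# longer lines no longer raise "Expected N fields"; usecols keeps only the
# first 256 fields, and shorter records are padded with NaN.
df = pd.read_csv('real_file.csv', header=None,
                 names=range(398), usecols=range(256))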

How can I read and manipulate large csv files in Google Colaboratory while not using all the RAM?

I am trying to import and manipulate compressed .csv files (each about 500 MB in compressed form) in Google Colaboratory. There are 7 files. Using pandas.read_csv(), I "use all the available RAM" after just 2 files are imported and have to restart my runtime.
I have searched forever on here looking for answers and have tried all the ones I came across, but none work. I have the files in my google drive and am mounted to it.
How can I read all of the files and manipulate them without using all the RAM? I have 12.72 GB of RAM and 358.27 GB of disk.
Buying more RAM isn't an option.
To solve my problem, I created 7 cells (one for each data file). Within each cell I read the file, manipulated it, saved what I needed, then deleted everything:
import pandas as pd
import gc
df = pd.read_csv('Google drive path', compression = 'gzip')
filtered_df = df.query('my query condition here')
filtered_df.to_csv('new Google drive path', compression = 'gzip')
del df
del filtered_df
gc.collect()
After processing all 7 files, each about 500 MB, with a combined size of 7,000,000 rows by 100 columns, my RAM usage has stayed under 1 MB.
Just using del didn't free up enough RAM; I had to call gc.collect() afterwards in each cell.
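If even a single file is too large to filter in one piece, the same filter-and-save step can be done in chunks; a sketch reusing the placeholder paths and query from the cell above, and assuming the filtered rows themselves fit in RAM:

import pandas as pd

# Stream the compressed csv in pieces so only one chunk is fully loaded at
# a time, keeping just the filtered rows from each chunk.
pieces = []
for chunk in pd.read_csv('Google drive path', compression='gzip',
                         chunksize=500_000):
    pieces.append(chunk.query('my query condition here'))

filtered_df = pd.concat(pieces, ignore_index=True)
filtered_df.to_csv('new Google drive path', compression='gzip')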

Pandas reading csv into hdfstore thrashes, creates huge file

As a test, I'm trying to read a small 25 MB csv file using pandas.HDFStore:
store = pd.HDFStore('file.h5', mode='w')
for chunk in pd.read_csv('file.csv', chunksize=50000):
    store.append('df', chunk)
store.close()
It causes my computer to thrash, and when it finally completes, file.h5 is 6.7 GB. I don't know what is causing the file size to balloon: when I look at the store afterwards, the only thing in there is the small dataframe.
If I read the csv in without chunking and then add it to the store, I have no problems.
Update 1:
I'm running Anaconda with Python 2.7.6, HDF5 1.8.9, numpy 1.8.0, PyTables 3.1.0, pandas 0.13.1, on Ubuntu 12.04.
The data is proprietary, so I can't post the chunk information online. I do have some mixed types. It still crashes if I try to read everything in as object.
Update 2:
I dropped all the columns with mixed types and I'm still getting the same issue. I have some very large text columns, if that makes any difference.
Update 3:
The problem seems to be loading the dataframe into the HDFStore. I drastically reduced the size of my file but kept one of my very wide columns (1259 characters). Whereas the csv file is 878.6 kB, the HDFStore is 53 MB. Is PyTables unable to handle very wide columns? Is there a threshold above which I should truncate?
The wide object columns are definitely the problem. My solution has been to truncate the object columns while reading them in. If I truncate to a width of 20 characters, the h5 file is only about twice as large as the csv file. However, if I truncate to 100 characters, the h5 file is about 6 times larger.
I include my code below as an answer, but if anyone has any idea how to reduce this size disparity without having to truncate so much text, I'd be grateful.
import pandas as pd

def truncateCol(ser, width=100):
    # Truncate long object (string) columns so PyTables doesn't reserve a
    # huge fixed-width string slot for every row.
    if ser.dtype == object:
        ser = ser.str[:width] if ser.str.len().max() > width else ser
    return ser

store = pd.HDFStore(filepath, 'w')
for chunk in pd.read_csv(f, chunksize=5000, sep='\t',
                         na_values="null", error_bad_lines=False):
    chunk = chunk.apply(truncateCol)
    store.append(table, chunk)
store.close()
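One option that may reduce the size disparity without truncating so aggressively is to enable PyTables compression on the store, since the fixed-width string slots that inflate the file tend to compress well; a sketch of the same loop with compression turned on (untested against this particular data):

import pandas as pd

# complevel/complib enable blosc compression for everything appended to
# this store; repetitive fixed-width string columns usually shrink a lot.
store = pd.HDFStore(filepath, mode='w', complevel=9, complib='blosc')
for chunk in pd.read_csv(f, chunksize=5000, sep='\t',
                         na_values="null", error_bad_lines=False):
    store.append(table, chunk)
store.close()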