Merge large pickle files and save as dta - pandas

I want to merge several large pickle files (around 500 MB each, compression=gzip) and save the result as a .dta file. I am using the following code:
df10 = pd.read_pickle("10.pickle.gzip", compression="gzip")
df11 = pd.read_pickle("11.pickle.gzip", compression="gzip")
new_df = pd.concat([df10, df11], ignore_index=True)
for col in new_df.columns:
    new_df[col] = new_df[col].astype(str)
new_df.to_stata("myfile.dta", version=119)
The script is killed by zsh when it tries to save the file.
I want to merge several such files, save the result as a .dta file, and then read it in Stata. Any advice on how to do this would be great.
Thanks in advance.
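For reference, a minimal sketch of a lower-memory variant of the same steps. The explicit del/gc.collect, the select_dtypes filter (casting only object columns instead of every column), and write_index=False are my own assumptions, not part of the original code:

import gc
import pandas as pd

frames = [pd.read_pickle(name, compression="gzip")
          for name in ["10.pickle.gzip", "11.pickle.gzip"]]
new_df = pd.concat(frames, ignore_index=True)

# Drop the intermediate frames once concatenated so only one copy stays in memory.
del frames
gc.collect()

# Cast only the object columns to str instead of inflating every numeric column.
for col in new_df.select_dtypes(include="object").columns:
    new_df[col] = new_df[col].astype(str)

new_df.to_stata("myfile.dta", version=119, write_index=False)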

Related

Merging multiple files in Pig

I have several files (around 10 files) which I would like to merge together in Pig:
Student01.txt
Student02.txt
...
Student10.txt
I am aware that I could merge two datasets together by:
data = UNION Student01, Student02;
Is there any way that I could iterate over a loop to merge the dataset from Student01 to Student10?
Assuming the files are in the same format, the LOAD command allows you to read all of them if you provide it a directory or a glob.
From the docs:
The input data to the load can be a file, a directory or a glob
Example
STUDENTS = LOAD '/path/to/students/Student*.txt' USING PigStorage();

pandas.read_csv of a gzip file within a zipped directory

I would like to use pandas.read_csv to open a gzip file (.asc.gz) within a zipped directory (.zip). Is there an easy way to do this?
This code doesn't work:
csv = pd.read_csv(r'C:\folder.zip\file.asc.gz')  # can't find the file
This code does work (however, it requires me to unzip the folder, which I want to avoid because my dataset currently contains thousands of zipped folders):
csv = pd.read_csv(r'C:\folder\file.asc.gz')
Is there an easy way to do this? I have tried using a combination of zipfile.ZipFile and read_csv, but have been unsuccessful (I think partly because this is an ASCII file as well).
Maybe the following might help.
df = pd.read_csv('filename.gz', compression='gzip')
OR
import gzip
file = gzip.open('filename.gz', 'rb')
content = file.read()
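For the original question (a .gz member inside a .zip, without extracting the archive first), a sketch along these lines may work; the archive and member names here are hypothetical:

import gzip
import zipfile
import pandas as pd

# Hypothetical paths/names; adjust to the real archive and member.
with zipfile.ZipFile(r"C:\folder.zip") as zf:
    with zf.open("file.asc.gz") as member:
        with gzip.GzipFile(fileobj=member) as gz:
            df = pd.read_csv(gz)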

Importing a *random* csv file from a folder into pandas

I have a folder with several csv files whose names are numbers between 100 and 400 (e.g. 142.csv, 278.csv, etc.). Not every number between 100 and 400 has an associated file; for example, there is no 143.csv. I want to write a loop that imports 5 random files into separate dataframes in pandas instead of manually searching for and typing out the file names over and over. Any ideas to get me started?
You can use glob and read all the csv files in the directory.
import glob
import numpy as np
import pandas as pd

files = glob.glob('*.csv')
# replace=False avoids picking the same file twice
random_files = np.random.choice(files, 5, replace=False)
dataframes = []
for fp in random_files:
    dataframes.append(pd.read_csv(fp))
This picks 5 random files from the directory and reads each one into its own dataframe.
Hope this answers your question.

How can I read many large .7z files containing many CSV files?

I have many .7z files, each containing many large CSV files (more than 1 GB). How can I read these in Python (especially into pandas and Dask data frames)? Should I change the compression format to something else?
I believe you should be able to open the file using
import lzma
import pandas as pd

with lzma.open("myfile.7z", "r") as f:
    df = pd.read_csv(f, ...)
This is, strictly speaking, meant for the xz file format, but may work for 7z also. If not, you will need to use libarchive.
For use with Dask, you can do the above for each file with dask.delayed.
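A minimal dask.delayed sketch along those lines (the file names are hypothetical, and it assumes, as above, that each archive decompresses to a single CSV stream):

import lzma

import dask
import dask.dataframe as dd
import pandas as pd

@dask.delayed
def load_one(path):
    # One pandas DataFrame per compressed file.
    with lzma.open(path, "rt") as f:
        return pd.read_csv(f)

files = ["part1.7z", "part2.7z"]  # hypothetical file names
ddf = dd.from_delayed([load_one(p) for p in files])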
dd.read_csv directly also allows you to specify storage_options={'compression': 'xz'}; however, random access within a file is likely to be inefficient at best, so you should add blocksize=None to force one partition per file:
df = dd.read_csv('myfiles.*.7z', storage_options={'compression': 'xz'},
                 blocksize=None)

Compressing StringIO data to read with pandas?

I have been using pandas' pd.read_sql_query to read a decent chunk of data into memory each day in order to process it (adding columns, calculations, etc. to about 1 GB of data). This has caused my computer to freeze a few times, though, so today I tried using psql to create a .csv file. I then compressed that file (.xz) and read it with pandas.
Overall, it was a lot smoother, and it made me think about automating the process. Is it possible to skip saving a .csv.xz file and instead copy the data directly to memory, ideally while still compressing it?
from io import StringIO

buf = StringIO()
from_curs = from_conn.cursor()
from_curs.copy_expert(
    "COPY table where row_date = '2016-10-17' TO STDOUT WITH CSV HEADER", buf
)
# is it possible to compress this?
buf.seek(0)
# read the buf with pandas to process it
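One possibility, sketched under the assumption that buf above holds the full CSV text, is to gzip it into an in-memory BytesIO and have pandas decompress it on read. Note the uncompressed text still exists briefly before it is freed, so this only reduces how long the full CSV sits in memory:

import gzip
import io
import pandas as pd

# Assumes `buf` already holds the CSV text written by copy_expert above.
compressed = io.BytesIO()
with gzip.GzipFile(fileobj=compressed, mode="wb") as gz:
    gz.write(buf.getvalue().encode("utf-8"))

# The uncompressed StringIO can now be dropped to free memory.
del buf

compressed.seek(0)
with gzip.GzipFile(fileobj=compressed, mode="rb") as gz:
    df = pd.read_csv(gz)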