Loading a Single Series from a Pickled DataFrame in Pandas

After saving a Pandas DataFrame with df.to_pickle(file_name), it can be loaded with df = pd.read_pickle(file_name). But sometimes you may only want to load one Series at a time, and loading the entire DataFrame is inefficient. Is there a way to load just a single Series from a pickled DataFrame?

This is not possible: a pickle file stores the whole serialized DataFrame as a single object, so pandas has to deserialize all of it before any column can be accessed. You can read a single column from other file types (e.g. HDF5, CSV), but not from a serialized pickle file.
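As a rough sketch of that difference (the file names and the "price" column are made up, and the HDF5 example assumes the store was written with format='table'):
import pandas as pd
# CSV and table-format HDF5 can hand back just the requested column
s_csv = pd.read_csv("data.csv", usecols=["price"])["price"]
s_hdf = pd.read_hdf("data.h5", "df", columns=["price"])["price"]
# a pickle always deserializes the whole DataFrame first
s_pkl = pd.read_pickle("data.pkl")["price"]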

Related

What is the difference between a pandas DataFrame and reading CSV files line by line?

How do I determine which method to use at the beginning of a project?
Pandas makes data manipulation very easy: you can replace, fill, or remove nulls or specific values, it can handle large datasets, and it provides many functions and methods you can apply to your data.
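As a rough sketch of the difference (the file name "data.csv" and the "amount" column are made up):
import csv
import pandas as pd

# reading line by line: each row is a dict of strings, conversions are manual
with open("data.csv", newline="") as f:
    total = sum(float(row["amount"]) for row in csv.DictReader(f))

# pandas: the whole file becomes a typed DataFrame with vectorized operations
df = pd.read_csv("data.csv")
total = df["amount"].sum()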

Save output in CSV without losing previous data on that CSV in pandas dataframe

I'm doing sentiment analysis of Twitter data. For this work I've made several datasets in CSV format, one per month. After preprocessing each dataset individually, I want to save them all into one single CSV file. But when I write the code below using a pandas DataFrame:
df.to_csv('dataset.csv', index=False)
It removes the previous data (rows) from that file. Is there any way to keep the previous data in that file as well, so that I can merge all the data together? Thank you.
It's not entirely clear what you want from your question, so this is just a guess, but something like this might be what you're looking for. If you keep assigning dataframes to df, then new data will overwrite the old data. Try assigning them to differently named dataframes such as df1 and df2. Then you can merge them.
import pandas as pd
# vertically merge the multiple dataframes and assign to a new variable
df = pd.concat([df1, df2])
# save the combined dataframe
df.to_csv('my_dataset.csv', index=False)
In Python you can open a file for appending with the built-in open() function and the mode 'a':
open("file", 'a')
The 'a' means "append", so lines are added at the end of the file instead of overwriting it.
You can pass the same mode to the pandas.DataFrame.to_csv() method.
e.g:
import pandas as pd
# code where you get data and return df
df.to_csv("file", mode='a')
@thehand0: Your code works, but it's inefficient, so your script will take longer to run.
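One caveat worth adding as a rough sketch (the file name and columns are made up): to_csv in append mode writes the header row on every call, so it is common to emit the header only when the file does not already exist:
import os
import pandas as pd

out = "dataset.csv"
df = pd.DataFrame({"text": ["example tweet"], "label": [1]})  # stand-in for one month's preprocessed data
# append, writing the header row only if the file does not exist yet
df.to_csv(out, mode="a", index=False, header=not os.path.exists(out))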

Is there a Pandas DataFrame implementation that lazily loads records from a table in an HDF5 file?

I am trying to convert millions of existing HDF5 files to Parquet format. The problem is that neither the input nor the output fits in memory, so I need a way to process the input data (tables in an HDF5 file) in chunks and somehow have a Pandas DataFrame that lazily loads these chunks while the fastparquet write function reads from it.
Pandas' read_hdf() and HDFStore's select do take chunksize as a parameter, but they do not return a usable dataframe. Without the chunksize parameter the program runs out of memory because Pandas loads the whole dataset into memory.
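As a rough sketch of what chunked processing could look like here (file names are placeholders, and it assumes the HDF5 data was stored with format='table' so chunked reads are possible; fastparquet's write with append=True adds row groups to an existing file):
import pandas as pd
from fastparquet import write

first = True
for chunk in pd.read_hdf("input.h5", "data", chunksize=100_000):
    # write the first chunk as a new Parquet file, then append row groups
    write("output.parquet", chunk, append=not first)
    first = False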

How to concat multiple pandas dataframes into one dask dataframe larger than memory?

I am parsing tab-delimited data to create tabular data, which I would like to store in an HDF5.
My problem is that I have to aggregate the data into one format and then dump it into HDF5. The data is ~1 TB in size, so I naturally cannot fit it into RAM. Dask might be the best way to accomplish this task.
If I were parsing my data into one pandas dataframe, I would do this:
import pandas as pd
import csv
csv_columns = ["COL1", "COL2", "COL3", "COL4",..., "COL55"]
readcsvfile = csv.reader(csvfile)
total_df = pd.DataFrame() # create empty pandas DataFrame
for i, line in enumerate(readcsvfile):
    # parse line into a dictionary of field:value pairs, "dictionary_line"
    # save the dictionary as a pandas dataframe
    df = pd.DataFrame(dictionary_line, index=[i])  # one line of tabular data
    total_df = pd.concat([total_df, df])  # grows into one big dataframe
Using dask to do the same task, it appears users should try something like this:
import pandas as pd
import csv
import dask.dataframe as dd
import dask.array as da
csv_columns = ["COL1", "COL2", "COL3", "COL4",..., "COL55"] # define columns
readcsvfile = csv.reader(csvfile) # read in file, if csv
# somehow define empty dask dataframe total_df = dd.Dataframe()?
for i, line in enumerate(readcsvfile):
    # parse line into a dictionary of field:value pairs, "dictionary_line"
    # save the dictionary as a pandas dataframe
    df = pd.DataFrame(dictionary_line, index=[i])  # one line of tabular data
    total_df = da.concatenate([total_df, df])  # creates one big dataframe
After creating a ~TB dataframe, I will save it into HDF5.
My problem is that total_df does not fit into RAM, and must be saved to disk. Can dask dataframe accomplish this task?
Should I be trying something else? Would it be easier to create an HDF5 from multiple dask arrays, i.e. each column/field a dask array? Maybe partition the dataframes among several nodes and reduce at the end?
EDIT: For clarity, I am actually not reading directly from a csv file. I am aggregating, parsing, and formatting tabular data. So, readcsvfile = csv.reader(csvfile) is used above for clarity/brevity, but it's far more complicated than reading in a csv file.
Dask.dataframe handles larger-than-memory datasets through laziness. Appending concrete data to a dask.dataframe will not be productive.
If your data can be handled by pd.read_csv
The pandas.read_csv function is very flexible. You say above that your parsing process is very complex, but it might still be worth looking into the options for pd.read_csv to see if it will still work. The dask.dataframe.read_csv function supports these same arguments.
In particular, if the concern is that your data is separated by tabs rather than commas, this isn't an issue at all: Pandas supports a sep='\t' keyword, along with a few dozen other options.
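As a small sketch of that point (the file name is a placeholder):
import dask.dataframe as dd
# dask splits the file into partitions and only parses them as needed
df = dd.read_csv("myfile.tsv", sep="\t", blocksize=25_000_000)
print(df.head())  # only the first partition is read to produce this preview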
Consider dask.bag
If you want to operate on textfiles line-by-line then consider using dask.bag to parse your data, starting as a bunch of text.
import dask.bag as db
b = db.read_text('myfile.tsv', blocksize=10000000) # break into 10MB chunks
records = b.str.split('\t').map(parse)
df = records.to_dataframe(columns=...)
Write to HDF5 file
Once you have dask.dataframe try the .to_hdf method:
df.to_hdf('myfile.hdf5', '/df')

Pandas DataFrame chunks: writing a DataFrame generator object to_csv

I'm reading a large amount of data from a database via pd.read_sql(...chunksize=10000) which generates a df generator object.
While I can still work with that dataframe by merging it with pd.merge(df, df2, ...), some functions are no longer available, such as df.to_csv(...)
What is the best way to handle that? How can I write such a dataframe to a CSV? Do I need to iterate over it manually?
You can either process each chunk individually, or combine them using e.g. pd.concat to operate on all chunks as a whole.
Individually, you would indeed iterate over the chunks like so:
for chunk in pd.read_sql(...chunksize=10000):
    # process chunk
To combine them, you can use a list comprehension:
df = pd.concat([chunk for chunk in pd.read_sql(...chunksize=10000)])
# process df
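If even the concatenated result is too large to hold in memory, here is a rough sketch of writing chunk by chunk instead (query and conn stand in for whatever you already pass to read_sql, and the output name is made up):
import pandas as pd

first = True
for chunk in pd.read_sql(query, conn, chunksize=10000):
    # write the header only once, then append the remaining chunks
    chunk.to_csv("output.csv", mode="w" if first else "a", header=first, index=False)
    first = False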