I have some CSV files which take a while to load as DataFrames into my workspace. Is there a fast and easy tool to convert them to pickle so that they load faster?
After you load the data using Pandas, use the following:
import pandas as pd
df.to_pickle('/Drive Path/df.pkl')  # save the DataFrame df to df.pkl
df1 = pd.read_pickle('/Drive Path/df.pkl')  # load df.pkl back into the DataFrame df1
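For example, a minimal sketch of the whole round trip (the data.csv / data.pkl filenames are just placeholders):
import pandas as pd

df = pd.read_csv('data.csv')     # slow: parse the CSV once
df.to_pickle('data.pkl')         # cache it as a pickle

df = pd.read_pickle('data.pkl')  # fast: later runs load the pickle instead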
I tried to convert a Spark DataFrame to pandas in a Databricks notebook with PySpark. It takes forever to run. Is there a better way to do this? There are more than 600,000 rows.
df_PD = sparkDF.toPandas()
Can you try the pandas API on Spark (pyspark.pandas) instead? Convert the Spark DataFrame to a pandas-on-Spark DataFrame first; it stays distributed until you explicitly collect it:
import pyspark.pandas as ps

psdf = sparkDF.pandas_api()  # pandas-on-Spark DataFrame, still distributed
df_PD = psdf.to_pandas()     # collects everything to the driver
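If you do need a plain pandas DataFrame on the driver, enabling Arrow-based conversion often speeds up toPandas() considerably (this assumes Spark 3.x with pyarrow installed):
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")
df_PD = sparkDF.toPandas()  # conversion now goes through Arrow where possible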
I want to read a csv file into a pandas dataframe but I get an error when executing the code below:
filepath = "https://drive.google.com/file/d/1bUTjF-iM4WW7g_Iii62Zx56XNTkF2-I1/view"
df = pd.read_csv(filepath)
df.head(5)
To retrieve data from Google Drive, you first need to identify the file ID.
import pandas as pd
url='https://drive.google.com/file/d/0B6GhBwm5vaB2ekdlZW5WZnppb28/view?usp=sharing'
file_id=url.split('/')[-2]
dwn_url='https://drive.google.com/uc?id=' + file_id
df = pd.read_csv(dwn_url)
print(df.head())
Try the following code snippet to read the CSV from Google Drive into the pandas DataFrame:
import pandas as pd
url = "https://drive.google.com/uc?id=1bUTjF-iM4WW7g_Iii62Zx56XNTkF2-I1"
df = pd.read_csv(url)
df.head(5)
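If you handle share links of this form regularly, a small helper (the name drive_csv_url is just illustrative) can extract the file ID for you:
import pandas as pd

def drive_csv_url(share_url):
    # '.../file/d/<file_id>/view...' -> direct-download URL
    file_id = share_url.split('/d/')[1].split('/')[0]
    return 'https://drive.google.com/uc?id=' + file_id

df = pd.read_csv(drive_csv_url(
    'https://drive.google.com/file/d/1bUTjF-iM4WW7g_Iii62Zx56XNTkF2-I1/view'))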
I have a dictionary like this:
d = {'Caps': 'cap_list', 'Term': 'unique_tokens', 'LocalFreq': 'local_freq_list','CorpusFreq': 'corpus_freq_list'}
I want to create a dask dataframe from it. How do I do it? Normally, in Pandas, it can easily be turned into a DataFrame with:
df = pd.DataFrame({'Caps': cap_list, 'Term': unique_tokens, 'LocalFreq': local_freq_list,
'CorpusFreq': corpus_freq_list})
Should I first load into a bag and then convert from bag to ddf?
If your data fits in memory then I encourage you to use Pandas instead of Dask Dataframe.
If for some reason you still want to use Dask dataframe then I would convert things to a Pandas dataframe and then use the dask.dataframe.from_pandas function.
import dask.dataframe as dd
import pandas as pd
df = pd.DataFrame(...)
ddf = dd.from_pandas(df, npartitions=20)
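Applied to the dictionary in the question (assuming cap_list, unique_tokens, local_freq_list and corpus_freq_list are equal-length lists), that would look roughly like:
pdf = pd.DataFrame({'Caps': cap_list, 'Term': unique_tokens,
                    'LocalFreq': local_freq_list, 'CorpusFreq': corpus_freq_list})
ddf = dd.from_pandas(pdf, npartitions=20)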
But there are many cases where this will be slower than just using Pandas well.
I am pulling a dataset out of a MATLAB mat file which is of HDF5 format as shown below:
matfile = 'C:\\....\\dataStuff.mat'
f = h5py.File(matfile, 'r')
data = f['/' + stuff + '/data/'].value
df = pd.DataFrame(data) # How do I create a Dask DF instead from data?
How do I do the same thing, but create a Dask DataFrame from the data instead of using Pandas?
The below function gives me an error:
ddf = dd.read_hdf(matfile, 'key')
the HDF5 class H5T_COMPOUND is not supported yet
I could just convert the Pandas DF into a Dask DF as shown below, but I would like to skip that step, which takes another 2 minutes, and instead pull the HDF5 data directly into a Dask DataFrame, as I did with Pandas.
df = dd.from_pandas(df, npartitions=3) # What I don't want to do
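One possible approach (a sketch, assuming the dataset behind f['/' + stuff + '/data/'] is a plain numeric 2-D array rather than an H5T_COMPOUND type) is to wrap the h5py dataset in a dask array and build the DataFrame from that, so nothing is read into memory up front:
import h5py
import dask.array as da
import dask.dataframe as dd

f = h5py.File(matfile, 'r')
dset = f['/' + stuff + '/data/']                            # h5py dataset, not loaded yet
arr = da.from_array(dset, chunks=(100_000, dset.shape[1]))  # chunk along the rows
ddf = dd.from_dask_array(arr)                               # lazily backed by the HDF5 file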
I am parsing tab-delimited data to create tabular data, which I would like to store in an HDF5.
My problem is that I have to aggregate the data into one format and then dump it into HDF5. This is ~1 TB of data, so I naturally cannot fit it into RAM. Dask might be the best way to accomplish this task.
If I were parsing my data into one pandas dataframe, I would do this:
import pandas as pd
import csv
csv_columns = ["COL1", "COL2", "COL3", "COL4",..., "COL55"]
readcsvfile = csv.reader(csvfile)
total_df = pd.DataFrame() # create empty pandas DataFrame
for i, line in enumerate(readcsvfile):
    # parse line into a dictionary of field:value pairs, "dictionary_line"
    # save that dictionary as a one-row pandas dataframe
    df = pd.DataFrame(dictionary_line, index=[i])  # one line of tabular data
    total_df = pd.concat([total_df, df])  # creates one big dataframe
Using dask to do the same task, it appears users should try something like this:
import pandas as pd
import csv
import dask.dataframe as dd
import dask.array as da
csv_columns = ["COL1", "COL2", "COL3", "COL4",..., "COL55"] # define columns
readcsvfile = csv.reader(csvfile) # read in file, if csv
# somehow define empty dask dataframe total_df = dd.Dataframe()?
for i, line in enumerate(readcsvfile):
    # parse line into a dictionary of field:value pairs, "dictionary_line"
    # save that dictionary as a one-row pandas dataframe
    df = pd.DataFrame(dictionary_line, index=[i])  # one line of tabular data
    total_df = da.concatenate([total_df, df])  # creates one big dataframe
After creating the ~1 TB dataframe, I will save it into HDF5.
My problem is that total_df does not fit into RAM, and must be saved to disk. Can dask dataframe accomplish this task?
Should I be trying something else? Would it be easier to create an HDF5 from multiple dask arrays, i.e. each column/field a dask array? Maybe partition the dataframes among several nodes and reduce at the end?
EDIT: For clarity, I am actually not reading directly from a csv file. I am aggregating, parsing, and formatting tabular data. So, readcsvfile = csv.reader(csvfile) is used above for clarity/brevity, but it's far more complicated than reading in a csv file.
Dask.dataframe handles larger-than-memory datasets through laziness. Appending concrete data to a dask.dataframe will not be productive.
If your data can be handled by pd.read_csv
The pandas.read_csv function is very flexible. You say above that your parsing process is very complex, but it might still be worth looking into the options for pd.read_csv to see if it will still work. The dask.dataframe.read_csv function supports these same arguments.
In particular, if the concern is that your data is separated by tabs rather than commas, this isn't an issue at all. Pandas supports a sep='\t' keyword, along with a few dozen other options.
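For example, if the raw files really are tab-delimited text, something along these lines may already get you most of the way (the file pattern and header handling here are assumptions):
import dask.dataframe as dd

# read every tab-separated file lazily and in parallel
ddf = dd.read_csv('data/*.tsv', sep='\t', names=csv_columns, header=None)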
Consider dask.bag
If you want to operate on text files line by line, then consider using dask.bag to parse your data, starting from a bag of raw text lines.
import dask.bag as db
b = db.read_text('myfile.tsv', blocksize=10000000) # break into 10MB chunks
records = b.str.split('\t').map(parse)
df = records.to_dataframe(columns=...)
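Here, parse stands for whatever turns one already-split line into a record; a purely illustrative version might be:
def parse(fields):
    # fields is the list produced by splitting one line on '\t'
    return dict(zip(csv_columns, fields))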
Write to HDF5 file
Once you have a dask.dataframe, try the .to_hdf method:
df.to_hdf('myfile.hdf5', '/df')
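If writing ~1 TB into a single file becomes a bottleneck, to_hdf also accepts a globstring so that each partition is written to its own file (the filename pattern below is arbitrary):
df.to_hdf('myfile.*.hdf5', '/df')  # '*' is replaced per partition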