Multiprocessing on dataset from pyspark returns JVM error - pandas

I need to run some clustering algorithms in parallel in a Jupyter notebook. The clustering function I want to parallelize works fine when multithreading or when run on its own, but it raises
raise Py4JError("{0} does not exist in the JVM".format(name))
when I try multiprocessing. I don't have much experience with multiprocessing; what could I be doing wrong?
Code for clustering:
def clustering(ID, df):
    pandas_df = df.select("row", "features", "type") \
        .where(df.type == ID).toPandas()
    print("process " + str(ID) + ": preparing data for clustering")
    feature_series = pandas_df["features"].apply(lambda x: x.toArray())
    objs = [pandas_df, pd.DataFrame(feature_series.tolist())]
    t_df = pd.concat(objs, axis=1)
    print("process " + str(ID) + ": initiating clustering")
    c = ...  # clustering algo here
    print("process " + str(ID) + " DONE!")
    return
Code for multiprocessing:
import multiprocessing as mp

k = 4
if __name__ == '__main__':
    pl = []
    for i in range(0, k):
        print("sending process:", i)
        process = mp.Process(target=clustering, args=(i, df))
        pl.append(process)
        process.start()
    for process in pl:
        print("waiting for join from process")
        process.join()

The error was caused by the subprocesses not being able to access the same memory (in which the PySpark dataframe resided).
Solved by partitioning the dataset first, putting the access to the PySpark dataframe in a separate function, like so:
pandas_df = df.select("row", "features", "type") \
    .where(df.type == ID).toPandas()
and then running the clustering on the separate pandas dataframes.
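For illustration, a minimal sketch of that pattern (assuming df is the PySpark dataframe; cluster_pandas and the range of type IDs are placeholders, and only plain pandas objects are handed to the child processes):
import multiprocessing as mp
import pandas as pd

def cluster_pandas(ID, pandas_df):
    # pure pandas work only; no SparkSession or other JVM-backed objects in the child
    feature_series = pandas_df["features"].apply(lambda x: x.toArray())
    t_df = pd.concat([pandas_df, pd.DataFrame(feature_series.tolist())], axis=1)
    # ... clustering algo here ...

if __name__ == '__main__':
    k = 4
    # do all Spark work in the parent process, before any forking
    partitions = [
        df.select("row", "features", "type").where(df.type == i).toPandas()
        for i in range(k)
    ]
    jobs = [mp.Process(target=cluster_pandas, args=(i, p))
            for i, p in enumerate(partitions)]
    for j in jobs:
        j.start()
    for j in jobs:
        j.join()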

Related

concatenate results after multiprocessing

I have a function that builds a data frame by running multiprocessing over a df.
Suppose my df has 10 rows; the function processor then processes each of the 10 rows separately. What I want is to concatenate all the outputs of processor into one data frame.
import sys
import multiprocessing
from multiprocessing import Pool

import numpy as np
import pandas as pd

def processor(dff):
    """
    reading data from a data frame and doing all sorts of data manipulation
    for multiprocessing
    """
    return dff

def main(infile, mdebug):
    global debug
    debug = mdebug
    try:
        lines = sum(1 for line in open(infile))
    except Exception as err:
        print("Error {} opening file: {}".format(err, infile))
        sys.exit(2000)
    if debug >= 2:
        print(infile)
    try:
        dff = pd.read_csv(infile)
    except Exception as err:
        print("Error {}, opening file: {}".format(err, infile))
        sys.exit(2000)
    df_split = np.array_split(dff, (lines + 1))
    cores = multiprocessing.cpu_count()
    cores = 64
    # pool = Pool(cores)
    pool = Pool(lines - 1)
    for n, frame in enumerate(pool.imap(processor, df_split), start=1):
        if frame is not None:
            frame.to_csv('{}'.format(n))
    pool.close()
    pool.join()

if __name__ == "__main__":
    args = parse_args()
    # print("Debug is: {}".format(args.debug))
    if args.debug >= 1:
        print("Running in debug mode: {}".format(args.debug))
    main(infile=args.infile, mdebug=args.debug)
You can use either the DataFrame constructor or pd.concat to solve your problem; which one is appropriate depends on details of your code that you haven't included.
Here's a more complete example:
from multiprocessing import Pool

import numpy as np
import pandas as pd

# create dummy dataset
dff = pd.DataFrame(np.random.rand(101, 5), columns=list('abcde'))

# process data
with Pool() as pool:
    result = pool.map(processor, np.array_split(dff, 7))

# put it all back together in one dataframe
result = pd.concat(result)
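And for the other route mentioned above: if each worker returned a plain record (say, a dict of per-chunk summary values) rather than a DataFrame, the DataFrame constructor would do the reassembly. A sketch with a hypothetical summarize helper:
def summarize(chunk):
    # hypothetical worker: returns a plain dict per chunk instead of a DataFrame
    return {'rows': len(chunk), 'mean_a': chunk['a'].mean()}

with Pool() as pool:
    records = pool.map(summarize, np.array_split(dff, 7))

summary = pd.DataFrame(records)  # one row per chunk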

Correct way of passing dataframe to ray

I am trying to do the simplest thing with Ray, but no matter what I do it just never releases memory and fails.
The use case is simply:
read parquet files to DF -> pass to pool of actors -> make changes to DF -> return DF
@ray.remote
class Main_func:
    def calculate(self, data):
        # do some things with the DF
        return data.copy(deep=True)  # <- one of many attempts to fix the problem, but didn't work

cpus = 24
actors = []
for _ in range(cpus):
    actors.append(Main_func.remote())

from ray.util import ActorPool
pool = ActorPool(actors)

import os
arr = os.listdir("/some/files")

def to_ray():
    try:
        filename = arr.pop(0)
        pf = ParquetFile("/some/files/" + filename)
        df = pf.to_pandas()
        pool.submit(lambda a, v: a.calculate.remote(v), df.copy(deep=True))
    except Exception as e:
        print(e)

for _ in range(cpus):
    to_ray()

while True:
    res = pool.get_next_unordered()
    write('./temp/' + random_filename, res, compression='GZIP')
    del res
    to_ray()
I have tried other ways of doing the same thing, manually submitting rather than using the map command, but whatever I do it always locks up memory and fails after a few hundred dataframes.
Does each task need to preserve state across different files? If not, Ray's task abstraction should simplify things:
import os

import pandas as pd
import ray

ray.init()

@ray.remote
def read_and_write(path):
    df = pd.read_parquet(path)
    # ... do things
    df.to_parquet("./temp/...")

arr = os.listdir("/some/files")
results = ray.get([read_and_write.remote(path) for path in arr])
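If memory pressure is the real issue, a common pattern is to cap the number of in-flight tasks with ray.wait instead of submitting everything at once. A rough sketch (the limit of 24 and the paths are placeholders):
import os
import pandas as pd
import ray

ray.init()

@ray.remote
def read_and_write(path):
    df = pd.read_parquet(path)
    # ... do things ...
    df.to_parquet("./temp/" + os.path.basename(path))

paths = ["/some/files/" + f for f in os.listdir("/some/files")]
in_flight = []
for p in paths:
    if len(in_flight) >= 24:  # bound memory by limiting concurrent tasks
        done, in_flight = ray.wait(in_flight, num_returns=1)
        ray.get(done)         # surface any worker exception
    in_flight.append(read_and_write.remote(p))
ray.get(in_flight)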

dask how to define a custom (time fold) function that operates in parallel and returns a dataframe with a different shape

I am trying to implement a time-fold function to be 'map'ed onto various partitions of a Dask dataframe, which in turn changes the shape of the dataframe in question (or alternatively produces a new dataframe with the altered shape). This is how far I have gotten. The result 'res' returned on compute is a list of 3 delayed objects. When I try to compute each of them in a loop (last two lines of code), this results in a "TypeError: 'DataFrame' object is not callable". After going through the examples for map_partitions, I also tried altering the input DF in place inside the function, with no return value, which causes a similar TypeError with NoneType. What am I missing?
Also, looking at the visualization (attached), I feel there is a need to reduce the individually computed (folded) partitions into a single DF. How do I do this?
#! /usr/bin/env python

# Start dask scheduler and workers
# dask-scheduler &
# dask-worker --nthreads 1 --nprocs 6 --memory-limit 3GB localhost:8786 --local-directory /dev/shm &

from dask.distributed import Client
from dask.delayed import delayed
import pandas as pd
import numpy as np
import dask.dataframe as dd
import math

foldbucketsecs = 30
periodicitysecs = 15
secsinday = 24 * 60 * 60
chunksizesecs = 60  # 1 minute
numts = 5
start = 1525132800  # 01/05
end = 1525132800 + (3 * 60)  # 3 minutes

c = Client('127.0.0.1:8786')

def fold(df, start, bucket):
    return df

def reduce_folds(df):
    return df

def load(epoch):
    idx = []
    for ts in range(0, chunksizesecs, periodicitysecs):
        idx.append(epoch + ts)
    d = np.random.rand(chunksizesecs // periodicitysecs, numts)
    ts = []
    for i in range(0, numts):
        tsname = "ts_%s" % (i)
        ts.append(tsname)
        gts.append(tsname)
    res = pd.DataFrame(index=idx, data=d, columns=ts, dtype=np.float64)
    res.index = pd.to_datetime(arg=res.index, unit='s')
    return res

gts = []
load(start)
cols = len(gts)
idx1 = pd.DatetimeIndex(start=start, freq=('%sS' % periodicitysecs), end=start+periodicitysecs, dtype='datetime64[s]')
meta = pd.DataFrame(index=idx1[:0], data=[], columns=gts, dtype=np.float64)
dfs = [delayed(load)(fn) for fn in range(start, end, chunksizesecs)]
from_delayed = dd.from_delayed(dfs, meta, 'sorted')
nfolds = int(math.ceil((end - start) / foldbucketsecs))
cprime = nfolds * cols
gtsnew = []
for i in range(0, cprime):
    gtsnew.append("ts_%s,fold=%s" % (i % cols, i // cols))
idx2 = pd.DatetimeIndex(start=start, freq=('%sS' % periodicitysecs), end=start+foldbucketsecs, dtype='datetime64[s]')
meta = pd.DataFrame(index=idx2[:0], data=[], columns=gtsnew, dtype=np.float64)
folded_df = from_delayed.map_partitions(delayed(fold)(from_delayed, start, foldbucketsecs), meta=meta)
result = c.submit(reduce_folds, folded_df)
c.gather(result).visualize(filename='/usr/share/nginx/html/svg/df4.svg')
res = c.gather(result).compute()
for f in res:
    f.compute()
Never mind, it was my fault: instead of wrapping my function in delayed, I simply passed it to the map_partitions call like so, and it worked.
folded_df = from_delayed.map_partitions(fold, start, foldbucketsecs, nfolds, meta=meta)
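For the second part of the question (combining the folded partitions back into a single dataframe), a minimal sketch, assuming fold returns one pandas dataframe per partition:
folded_df = from_delayed.map_partitions(fold, start, foldbucketsecs, nfolds, meta=meta)
result = folded_df.compute()  # one concatenated pandas DataFrame
# or stay lazy and collapse everything into a single dask partition instead:
single_partition = folded_df.repartition(npartitions=1)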

Pyspark Streaming application stucks during a batch processing

I have a PySpark application that loads data from Kinesis and saves it to S3.
Each batch's processing time is quite stable, but then it can get stuck.
How can I figure out why this happens?
Code sample:
columns = [x.name for x in schema]
Event = Row(*[x[0] for x in columns])

def get_spark_session_instance(sparkConf):
    if ("sparkSessionSingletonInstance" not in globals()):
        globals()["sparkSessionSingletonInstance"] = SparkSession \
            .builder \
            .config(conf=sparkConf) \
            .getOrCreate()
    return globals()["sparkSessionSingletonInstance"]

def creating_func():
    def timing(message):
        print('timing', str(datetime.utcnow()), message)

    def process_game(df, game, time_part):
        # s3
        df.write.json("{}/{}/{}/{}".format(path_prefix, game, 'group_1', time_part),
                      compression="gzip", timestampFormat="yyyy-MM-dd'T'HH:mm:ss.SSS")
        timing('{}_grop_1'.format(game))
        df[df['group'] == 2] \
            .write.json("{}/{}/{}/{}".format(path_prefix, game, 'group_2', time_part),
                        compression="gzip", timestampFormat="yyyy-MM-dd'T'HH:mm:ss.SSS")
        timing('{}_grop_2'.format(game))
        # database
        df[df['group'] == 3].select(*db_columns) \
            .write.jdbc(db_connection_string, table="test.{}group_3".format(game), mode='append',
                        properties=db_connection_propetries)
        timing('{}_db'.format(game))

    def event_to_row(event):
        event_dict = json.loads(event)
        event_dict['json_data'] = event_dict.get('json_data') and json.dumps(
            event_dict.get('json_data'))
        return Event(*[event_dict.get(x) for x in columns])

    def process(rdd):
        if not rdd.isEmpty():
            spark_time = datetime.utcnow().strftime('%Y/%m/%d/%H/%M%S_%f')
            rows_rdd = rdd.map(event_to_row)
            spark = get_spark_session_instance(rdd.context.getConf())
            df = spark.createDataFrame(data=rows_rdd, schema=schema)
            df = df.withColumn("ts", df["ts"].cast(TimestampType())) \
                .withColumn("processing_time", lit(datetime.utcnow()))
            df.cache()
            print('timing -----------------------------')
            process_game(df[df['app_id'] == 1], 'app_1', spark_time)
            process_game(df[df['app_id'] == 2], 'app_2', spark_time)

    sc = SparkContext.getOrCreate()
    ssc = StreamingContext(sc, 240)
    kinesis_stream = KinesisUtils.createStream(
        ssc, sys.argv[2], 'My-stream-name', "kinesis.us-east-1.amazonaws.com",
        'us-east-1', InitialPositionInStream.TRIM_HORIZON, 240, StorageLevel.MEMORY_AND_DISK_2)
    kinesis_stream.repartition(16 * 3).foreachRDD(process)
    ssc.checkpoint(checkpoint_prefix + sys.argv[1])
    return ssc

if __name__ == '__main__':
    print('timing', 'cast ts', str(datetime.utcnow()))
    ssc = StreamingContext.getActiveOrCreate(checkpoint_prefix + sys.argv[1], creating_func)
    ssc.start()
    ssc.awaitTermination()
[Screenshots attached: Streaming web UI and batch details]
Identify the process taking the time, then use kill -QUIT or jstack to get its stack trace. Look in the source for possible delays, and consider where you can increase log4j logging for more information.
Does the delay increase with the amount of data written? If so, that's the usual "rename is really a copy" problem S3 has.
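On the Python side, one cheap way to check that is to time each write explicitly and compare against the batch's input size; a sketch reusing the question's timing helper (time is from the standard library):
import time

def timed_write(df, path, label):
    # measure each S3 write so slow batches can be attributed to a specific output
    t0 = time.time()
    df.write.json(path, compression="gzip",
                  timestampFormat="yyyy-MM-dd'T'HH:mm:ss.SSS")
    timing('{} took {:.1f}s'.format(label, time.time() - t0))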

How to use the PyPy as the notebook interpreter?

I have a script that extracts data from some CSV files and bifurcates the data into different Excel files. I am using IPython for that, and I am sure it uses CPython as the default interpreter.
But the script is taking too much time for the whole process to finish. Can someone please explain how to run this script with PyPy, as I have heard it is much faster than CPython?
Script is something like this:
import pandas as pd
import xlsxwriter as xw
import csv
import pymsgbox as py

file1 = "vDashOpExel_Change_20150109.csv"
file2 = "vDashOpExel_T3Opened_20150109.csv"
path = r"C:\Users\Abhishek\Desktop\Pandas Anlaysis"

def uniq(words):
    seen = set()
    for word in words:
        l = word.lower()
        if l in seen:
            continue
        seen.add(l)
        yield word

def files(file_name):
    df = pd.read_csv(path + '\\' + file_name, sep=',', encoding='utf-16')
    final_frame = df.dropna(how='all')
    file_list = list(uniq(list(final_frame['DOEClient'])))
    return file_list, final_frame

def fill_data(f_list, frame1=None, frame2=None):
    if f_list is not None:
        for client in f_list:
            writer = pd.ExcelWriter(path + '\\' + 'Accounts' + '\\' + client + '.xlsx', engine='xlsxwriter')
            if frame1 is not None:
                data1 = frame1[frame1.DOEClient == client]  # Filter the Data
                data1.to_excel(writer, 'Change', index=False, header=True)  # Importing the Data to Excel File
            if frame2 is not None:
                data2 = frame2[frame2.DOEClient == client]  # Filter the Data
                data2.to_excel(writer, 'Opened', index=False, header=True)  # Importing the Data to Excel File
    else:
        py.alert('Please enter the First Parameter !!!', 'Error')

list1, frame1 = files(file1)
list2, frame2 = files(file2)
final_list = set(list1 + list2)