I always get a dead kernel when using pd.read_parquet(), no matter the file size - pandas

As you can guess from the title, the kernel always dies when using pd.read_parquet(). I already tried it with different file sizes, but it won't work.
Here is the code. (I am using Jupyter (without Anaconda, because it always takes too long to start) with Python 3.7 on an i5 with 16 GB RAM.)
from pathlib import PurePath, Path
import pandas as pd

data_dir = "../../../Adv_Fin_ML_Exercises-master/Adv_Fin_ML_Exercises-master/data"

# write the parquet file
outfp = PurePath(data_dir + '/interim/IVE_tickbidask.parq')
#df = df.head(10)
df.to_parquet(outfp)

# read it back (this is where the kernel dies)
infp = PurePath(data_dir + '/interim/IVE_tickbidask.parq')
df = pd.read_parquet(data_dir + '/interim/IVE_tickbidask.parq')
cprint(df)
What can I do to make it work?

I had the same problem; adding engine='fastparquet' worked for me. Otherwise it defaults to engine='pyarrow', and that seems to make the kernel die.
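For reference, a minimal sketch of that fix applied to the question's code (it reuses the data_dir defined above; fastparquet must be installed, e.g. with pip install fastparquet):
import pandas as pd

df = pd.read_parquet(data_dir + '/interim/IVE_tickbidask.parq',
                     engine='fastparquet')  # instead of the default pyarrow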

Related

error: (-215:Assertion failed) !_src.empty() in function 'cvtColor' while using OpenCV 4.2 with swift [duplicate]

I am trying to do a basic colour conversion in Python, but I can't seem to get past the error below. I have re-installed Python and OpenCV, and tried with both Python 3.4.3 (the latest) and Python 2.7 (which is on my Mac).
I installed OpenCV using Python's package manager, opencv-python.
Here is the code that fails:
frame = cv2.imread('frames/frame%d.tiff' % count)
frame_HSV = cv2.cvtColor(frame, cv2.COLOR_RGB2HSV)
This is the error message:
cv2.error: OpenCV(3.4.3) /Users/travis/build/skvark/opencv-python/opencv/modules/imgproc/src/color.cpp:181: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
This error happens because the image didn't load properly, so the problem is with the previous line, cv2.imread. My suggestions (a combined sketch follows) are:
check that the image exists in the path you give
check that the count variable holds a valid number
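A minimal sketch of both checks; count = 0 is just a stand-in for whichever frame index you are processing:
import os
import cv2

count = 0  # stand-in for whichever frame index you are on
path = 'frames/frame%d.tiff' % count
assert os.path.exists(path), 'No such file: ' + path  # a bad count fails here
frame = cv2.imread(path)
frame_HSV = cv2.cvtColor(frame, cv2.COLOR_RGB2HSV)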
If anyone is experiencing this same problem when reading a frame from a webcam:
Verify whether your webcam is being used by another task, and close that task. This will solve the problem.
I spent some time on this error before I realized my camera was live in a Google Hangouts group. Also, make sure your webcam drivers are up to date.
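If you want the failure to be explicit rather than an assertion inside cvtColor, a minimal sketch is:
import cv2

cap = cv2.VideoCapture(0)
if not cap.isOpened():  # camera busy in another app, or a driver problem
    raise RuntimeError('Could not open the webcam')
ok, frame = cap.read()
if not ok or frame is None:  # the grab failed, so the frame would be empty
    raise RuntimeError('Could not read a frame from the webcam')
cap.release()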
I kept getting this error too:
Traceback (most recent call last):
File "face_detector.py", line 6, in <module>
gray_img=cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
cv2.error: OpenCV(4.1.0) C:\projects\opencv-python\opencv\modules\imgproc\src\color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cv::cvtColor'
My cv2.cvtColor(...) was working fine with \photo.jpg but not with \news.jpg. I finally realized that when working with Python on Windows, those escape characters will get you every time! My "bad" photo was failing because its file name began with "n": Python took the \n as an escape character, and OpenCV couldn't find the file!
Solution:
Prefix file names in Windows Python with r"..." (a raw string), as in
cv2.imread(r".\images\news.jpg")
If the path is correct and the name of the image is OK, but you are still getting the error, use:
from skimage import io
img = io.imread(file_path)
instead of:
cv2.imread(file_path)
The function imread loads an image from the specified file and returns it. If the image cannot be read (because of a missing file, improper permissions, or an unsupported or invalid format), the function returns an empty matrix (Mat::data==NULL).
Check that the image exists in the path, and verify the image extension (.jpg or .png).
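In the Python bindings, that empty matrix shows up as imread returning None, so a minimal guard (the file name here is a placeholder) looks like:
import cv2

file_path = 'image.png'  # placeholder: your image path
img = cv2.imread(file_path)
if img is None:  # missing file, bad permissions, or unsupported format
    raise FileNotFoundError('cv2 could not read: ' + file_path)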
Check whether it is a jpg, png, or bmp file that you are providing, and write the extension accordingly.
Another thing which might be causing this is a 'weird' character in your file and directory names. All umlauts (äöå) and other accented characters (éóâ, etc.) should be removed from the file and folder names. I've had this same issue several times because of such characters.
Most probably there is an error in loading the image; try checking the directory again. Print the image to confirm whether it actually loaded or not.
In my case, the image was incorrectly named. Check that the image exists, and try:
import numpy as np
import cv2

img = cv2.imread('image.png', 0)
cv2.imshow('image', img)
cv2.waitKey(0)  # needed so the window actually renders
I've been in the same situation as well; my case was because of Korean letters in the path. After I removed the Korean letters from the folder name, it worked.
Alternatively, put
# -*- coding: utf-8 -*-
or something like that in the first line to make Python understand Korean (or whatever your language is); then, in my case, it worked even with Korean characters in the path.
So it seems to be something about the path or its characters. The other answers are saying similar things. Hope you solve it!
I had the same problem and it turned out that my image names included special characters (e.g. château.jpg), which could not bet handled by cv2.imread. My solution was to make a temporary copy of the file, renaming it e.g. temp.jpg, which could be loaded by cv2.imread without any problems.
Note: I did not compare the performance of shutil.copy2 versus other options, so there is probably a better/faster way to make a temporary copy.
import shutil, sys, os, dlib, glob, cv2
for f in glob.glob(os.path.join(myfolder_path, "*.jpg")):
    shutil.copy2(f, myfolder_path + 'temp.jpg')
    img = cv2.imread(myfolder_path + 'temp.jpg')
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    os.remove(myfolder_path + 'temp.jpg')
If there are only a few files with special characters, renaming can also be done as an exception, e.g.
for f in glob.glob(os.path.join(myfolder_path, "*.jpg")):
    try:
        img = cv2.imread(f)
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    except:
        shutil.copy2(f, myfolder_path + 'temp.jpg')
        img = cv2.imread(myfolder_path + 'temp.jpg')
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        os.remove(myfolder_path + 'temp.jpg')
In my case it was a permission issue. I had to chmod a+wrx the image, and then it worked.
Please note that the error is in the cv2.imread() call: give the right path to the image. First, check whether your system loads the image at all with a simple cv2.imread() and inspect the result. After that, see this code for face detection:
import numpy as np
import cv2
cascPath = "/Users/mayurgupta/opt/anaconda3/lib/python3.7/site-packages/cv2/data/haarcascade_frontalface_default.xml"
eyePath = "/Users/mayurgupta/opt/anaconda3/lib/python3.7/site-packages/cv2/data/haarcascade_eye.xml"
smilePath = "/Users/mayurgupta/opt/anaconda3/lib/python3.7/site-packages/cv2/data/haarcascade_smile.xml"
face_cascade = cv2.CascadeClassifier(cascPath)
eye_cascade = cv2.CascadeClassifier(eyePath)
smile_cascade = cv2.CascadeClassifier(smilePath)
img = cv2.imread('WhatsApp Image 2020-04-04 at 8.43.18 PM.jpeg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x,y,w,h) in faces:
    img = cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
    roi_gray = gray[y:y+h, x:x+w]
    roi_color = img[y:y+h, x:x+w]
    eyes = eye_cascade.detectMultiScale(roi_gray)
    for (ex,ey,ew,eh) in eyes:
        cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)
cv2.imshow('img',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Here, cascPath, eyePath, and smilePath should hold the right actual paths to the Haar cascade files, which are picked up from lib/python3.7/site-packages/cv2/data.
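If you would rather not hard-code the site-packages path, recent opencv-python wheels expose the bundled cascade directory; a small sketch, assuming such a wheel:
import cv2

# cv2.data.haarcascades is the directory holding the bundled cascade files
cascPath = cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'
face_cascade = cv2.CascadeClassifier(cascPath)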
Your code can't find the image, or can't handle the image's name, as the error message indicates.
Solution:
import cv2
import numpy as np
import matplotlib.pyplot as plt
img = cv2.imread('哈哈.jpg')  # solution: img = cv2.imread('haha.jpg')
print(img)
If anyone is experiencing this same problem when reading a frame from a webcam (with code like frame = cv2.VideoCapture(0)) and working in a Jupyter Notebook, you may try the following:
ensure that previously tried code is not still running, and restart the Jupyter Notebook kernel
put frame = cv2.VideoCapture(0) in its own separate cell (previous code in the cell above, following code in the cell below)
run all the code above the cell containing frame = cv2.VideoCapture(0)
run the cell whose only code is frame = cv2.VideoCapture(0), and before executing any further cells, make sure the asterisk on the left side of this particular cell disappears and an execution number appears instead; only then continue
now you can execute the rest of your code, as your camera input should no longer be empty :-)
At the end, close your program and restart the kernel to prepare it for another run.
As #shaked litbak said, this error arose on my first use of the ASCII-generator: I naively thought I just had to add my file to the ./data directory and it would load automatically. Instead, I had to pass the desired file path with the --input option.
I checked my image file path and it was correct, and I made sure there were no corrupt images. The problem was my Mac: it sometimes keeps a hidden file called .DS_Store in the same directory as the image files, and cv2 had a problem with that file. I solved the problem by deleting .DS_Store.
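If you iterate over a directory yourself, one sketch for skipping such hidden files (the folder name is a placeholder):
import os

folder = 'images'  # placeholder folder name
# os.listdir also returns hidden entries such as .DS_Store, so filter them out
files = [f for f in os.listdir(folder) if not f.startswith('.')]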
I also encountered this type of error:
error: OpenCV(4.1.2) /io/opencv/modules/imgproc/src/color.cpp:182: error: (-215:Assertion failed) !_src.empty() in function 'cvtColor'
The solution was to load the image properly. Since the file path mentioned was wrong, the images were not loaded, and hence it threw this error. Check the path of the image, or, if you are uploading an image through Colab or Drive, make sure that the image is present there.
I encountered the problem when trying to load an image from a non-ASCII path. If I simply use imread to load the image, I only get None.
Here is my solution:
import cv2
import numpy as np
path = r'D:\map\上海地图\abc.png'
image = cv2.imdecode(np.fromfile(path, dtype=np.uint8), cv2.IMREAD_UNCHANGED)
A similar thing happens when I save an image to a non-ASCII path: it is silently not saved, without any warning. Here is what I did:
import cv2
import numpy as np
path = r'D:\map\上海地图\abc.png'
cv2.imencode('.png', image)[1].tofile(path)
path = os.path.join(raw_folder, folder, file)
print('[DEBUG] path:', path)
img = cv2.imread(path)  # read the image from the path
if img is None:  # check whether the image exists in the path you give
    print('Wrong path:', path)
else:  # it loaded, so continue with the remaining steps
    img = cv2.resize(img, dsize=(128,128))
    pixels.append(img)
The solution is to add './' before the name of the image before reading it.
Try downgrading OpenCV. In the Python shell (in cmd):
>>> import cv2
>>> cv2.__version__
After checking the version, in cmd:
pip uninstall opencv-python
and after uninstalling, install an older version:
pip install opencv-python==3.4.8.29

Pyspark: Serialized task exceeds max allowed. Consider increasing spark.rpc.message.maxSize or using broadcast variables for large values

I'm doing calculations on a cluster, and at the end, when I ask for summary statistics on my Spark dataframe with df.describe().show(), I get an error:
Serialized task 15:0 was 137500581 bytes, which exceeds max allowed: spark.rpc.message.maxSize (134217728 bytes). Consider increasing spark.rpc.message.maxSize or using broadcast variables for large values
In my Spark configuration I already tried to increase the aforementioned parameter:
spark = (SparkSession
    .builder
    .appName("TV segmentation - dataprep for scoring")
    .config("spark.executor.memory", "25G")
    .config("spark.driver.memory", "40G")
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.maxExecutors", "12")
    .config("spark.driver.maxResultSize", "3g")
    .config("spark.kryoserializer.buffer.max.mb", "2047mb")
    .config("spark.rpc.message.maxSize", "1000mb")
    .getOrCreate())
I also tried to repartition my dataframe using:
dfscoring=dfscoring.repartition(100)
but I still keep getting the same error.
My environment: Python 3.5, Anaconda 5.0, Spark 2.
How can I avoid this error?
I was in the same trouble, and then I solved it. The cause is spark.rpc.message.maxSize, whose default is 128 MB. You can change it when launching a Spark client; I work in PySpark and set the value to 1024, so I write:
pyspark --master yarn --conf spark.rpc.message.maxSize=1024
That solved it.
I had the same issue and it wasted a day of my life that I am never getting back. I am not sure why this is happening, but here is how I made it work for me.
Step 1: Make sure that PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.
It turned out that Python on the workers (2.6) was a different version than on the driver (3.6). Check that the environment variables PYSPARK_PYTHON and PYSPARK_DRIVER_PYTHON are correctly set.
I fixed it by simply switching my kernel from Python 3 Spark 2.2.0 to Python Spark 2.3.1 in Jupyter. You may have to set it up manually; here is how to make sure your PySpark is set up correctly: https://mortada.net/3-easy-steps-to-set-up-pyspark.html
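For illustration, a minimal sketch of setting the two variables before the SparkSession is created (the interpreter path is an assumption for your machine):
import os

os.environ['PYSPARK_PYTHON'] = '/usr/bin/python3'         # executors
os.environ['PYSPARK_DRIVER_PYTHON'] = '/usr/bin/python3'  # driver
# ...then build the SparkSession as usual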
Step 2: If that doesn't work, try working around it.
The kernel switch worked for DataFrames that I hadn't added any columns to (spark_df -> pandas_df -> back_to_spark_df), but it didn't work on the DataFrames where I had added 5 extra columns. So what I tried, and what worked, was the following:
# 1. Select only the new columns:
df_write = df[['hotel_id','neg_prob','prob','ipw','auc','brier_score']]
# 2. Convert this DF into Spark DF:
df_to_spark = spark.createDataFrame(df_write)
df_to_spark = df_to_spark.repartition(100)
df_to_spark.registerTempTable('df_to_spark')
# 3. Join it to the rest of your data:
final = df_to_spark.join(data,'hotel_id')
# 4. Then write the final DF.
final.write.saveAsTable('schema_name.table_name',mode='overwrite')
Hope that helps!
I had the same problem but using Watson Studio. My solution was:
from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession

sc.stop()
configura = SparkConf().set('spark.rpc.message.maxSize', '256')
sc = SparkContext.getOrCreate(conf=configura)
spark = SparkSession.builder.getOrCreate()
I hope it helps someone...
I faced the same issue while converting a Spark DataFrame to a pandas DataFrame. I am working on Azure Databricks; first, check the value currently set in the Spark config:
spark.conf.get("spark.rpc.message.maxSize")
Then we can increase the value:
spark.conf.set("spark.rpc.message.maxSize", "500")
For those folks who are looking for an AWS Glue, PySpark-script-based way of doing this, the below code snippet might be useful:
from awsglue.context import GlueContext
from pyspark.context import SparkContext
from pyspark import SparkConf

# SparkConf can be used directly with its .set method
myconfig = SparkConf().set('spark.rpc.message.maxSize', '256')
sc = SparkContext(conf=myconfig)
glueContext = GlueContext(sc)
..
..

Test if notebook is running on Google Colab

How can I test if my notebook is running on Google Colab?
I need this test because obtaining/unzipping my training data works differently on my laptop than on Colab.
Try importing google.colab:
try:
    import google.colab
    IN_COLAB = True
except ImportError:
    IN_COLAB = False
Or just check whether it's in sys.modules:
import sys
IN_COLAB = 'google.colab' in sys.modules
For environments using IPython
If you are sure that the script will be run using IPython, which is the most typical usage, there is also the possibility of checking which IPython interpreter is used. I think it is a little bit clearer, and you don't have to import any module.
if 'google.colab' in str(get_ipython()):
    print('Running on CoLab')
else:
    print('Not running on CoLab')
If you need to do it multiple times you might want to assign a variable so you don't have to repeat the str(get_ipython()).
RunningInCOLAB = 'google.colab' in str(get_ipython())
RunningInCOLAB is True if run in a Google Colab notebook.
For environments not using IPython
In this case you have to check that IPython is in use at all, assuming that Colab will always use IPython:
RunningInCOLAB = 'google.colab' in str(get_ipython()) if hasattr(__builtins__,'__IPYTHON__') else False
You can check an environment variable like this:
import os
if 'COLAB_GPU' in os.environ:
    print("I'm running on Colab")
Actually, you can print out os.environ to check what's associated with Colab and then test for that key, as in the sketch below.
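For example, a quick sketch for seeing which keys are available:
import os

# dump every Colab-related variable, then pick a stable key to test for
print({k: v for k, v in os.environ.items() if 'COLAB' in k})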
Improved solution for all Python environments
None of the other answers given here worked for me, and I was not using IPython, so I checked the environment variables Colab sets; based on those, the following is best for checking the environment:
import os

if os.getenv("COLAB_RELEASE_TAG"):
    print("Running in Colab")
else:
    print("NOT in Colab")
In a %%bash cell, use:
%%bash
[[ ! -e /colabtools ]] && exit # Continue only if running on Google Colab
# Do Colab-only stuff here
Or the Python equivalent:
import os
if os.path.exists('/colabtools'):
# do stuff

Parallelizing apply function in pandas taking longer than expected

I have a simple cleaner function which removes special characters from a dataframe (and does other preprocessing). My dataset is huge, and I want to make use of multiprocessing to improve performance. My idea was to break the dataset into chunks and run this cleaner function in parallel on each of them.
I used the dask library and also the multiprocessing module from Python. However, the application seems stuck and takes longer than running on a single core.
This is my code:
import numpy as np
import pandas as pd
from multiprocessing import Pool

def parallelize_dataframe(df, func):
    df_split = np.array_split(df, num_partitions)
    pool = Pool(num_cores)
    df = pd.concat(pool.map(func, df_split))
    pool.close()
    pool.join()
    return df

def process_columns(data):
    for i in data.columns:
        data[i] = data[i].apply(cleaner_func)
    return data

mydf2 = parallelize_dataframe(mydf, process_columns)
I can see from the resource monitor that all cores are being used, but, as I said before, the application is stuck.
P.S. I ran this on Windows Server 2012 (where the issue happens). Running this code in a Unix environment, I actually saw some benefit from the multiprocessing library.
Thanks in advance.

Is there any 2D plotting library compatible with pypy?

I am a heavy user of Jupyter Notebook and, lately, I have been running it with pypy instead of CPython to get extra speed. It works perfectly, but I miss matplotlib so much. Is there any decent 2D plotting library compatible with pypy and Jupyter Notebook? I don't need fancy stuff; scatter, line and bar plots would be more than enough.
Bokeh works fairly well with pypy. The only problem I have encountered is linked to the use of numpy.datetime64, which is not yet supported by pypy. Fortunately, it is enough to monkey-patch bokeh/core/properties.py and bokeh/util/serialization.py to pass in the case of a datetime64 reference.
I did it this way:
bokeh/core/properties.py
...
try:
    import numpy as np
    datetime_types += (np.datetime64,)
except:
    pass
...
and
bokeh/util/serialization.py
...
# Check for astype failures (putative Numpy < 1.7)
try:
    dt2001 = np.datetime64('2001')
    legacy_datetime64 = (dt2001.astype('int64') ==
                         dt2001.astype('datetime64[ms]').astype('int64'))
except:
    legacy_datetime64 = False
...
And I managed to get nice-looking plots in Jupyter using pypy.