Blurry XGBClassifier tree plot

I have trained an XGBClassifier called model and then plotted the tree as follows:
import matplotlib.pyplot as plt
from xgboost import plot_tree

plot_tree(model)
plt.show(dpi=1200)
The resulting plot is really blurry.
Does anyone know how to improve the quality of that plot?
I have tried to include dpi=1200 (see code above) but that doesn't make any difference.
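A minimal sketch of one possible fix (not from the original thread): plt.show() does not use a dpi argument, so the resolution has to be set on the figure itself, either by enlarging it or by saving it with an explicit dpi. The figure size, file name, and dpi value below are arbitrary choices:
import matplotlib.pyplot as plt
from xgboost import plot_tree

# Draw the tree on a large axes so the node text stays legible
fig, ax = plt.subplots(figsize=(30, 15))
plot_tree(model, ax=ax)

# dpi is honored by the figure/savefig, not by plt.show
fig.savefig("tree.png", dpi=300)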

Related

How to save an image that has been visualized/generated by a Keras model?

I am using a Detecto model to visualize an image. Basically, I pass an image to this model and it draws a boundary line across the detected object and displays the visualized image.
from keras.preprocessing.image import load_img
from keras.preprocessing.image import save_img
from keras.preprocessing.image import img_to_array
from detecto import core, utils, visualize
image = utils.read_image('retina_model/4.jpg')
model = core.Model()
labels, boxes, scores = model.predict_top(image)
img=visualize.show_labeled_image(image, boxes,)
Now, I am trying to convert this visualized image into a NumPy array, using the line below:
img_array = img_to_array(img)
It gives the error:
Unsupported Image Shape
All I want is to display the visualized image, which is the output of this model, on my website. The plan is to convert the image into a NumPy array and then save it using the line below:
save_img('image1.jpg', img_array)
So I was planning to download this visualized image (the output of the model) so that I can display it on my website. If there is some other way to achieve this, please let me know.
Detecto's documentation says that utils.read_image() already returns a NumPy array.
But you are passing the return value of visualize.show_labeled_image() to Keras' img_to_array(img).
Looking at the Detecto source code of visualize.show_labeled_image(), it has no return statement, so it returns None by default. So I think your problem is that you are not passing a valid image to img_to_array(img), but None.
I don't think the call to img_to_array(img) is needed, because you already have the image as a NumPy array. But note that, according to Detecto's documentation, utils.read_image() is "Equivalent to using OpenCV’s cv2.imread function and converting from BGR to RGB format". Make sure that's what you want.
You can visit the official GitHub repo of Detecto and look at detecto/visualize.py to find the show_labeled_image() function. It uses matplotlib to plot the image with bounding boxes; you can modify that code in your own file to save the plot using plt.savefig().
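A minimal sketch of what that could look like, adapted from the approach used in detecto/visualize.py rather than copied from it; the output file name and the box/label styling here are arbitrary, and it assumes the boxes come back in (xmin, ymin, xmax, ymax) order as Detecto returns them:
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from detecto import core, utils

image = utils.read_image('retina_model/4.jpg')   # already an RGB NumPy array
model = core.Model()
labels, boxes, scores = model.predict_top(image)

fig, ax = plt.subplots()
ax.imshow(image)
for box, label in zip(boxes, labels):
    xmin, ymin, xmax, ymax = box.tolist()
    # draw the bounding box and its label on top of the image
    ax.add_patch(patches.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin,
                                   fill=False, edgecolor='red', linewidth=2))
    ax.text(xmin, ymin, label, color='red')
ax.axis('off')
fig.savefig('image1.jpg', bbox_inches='tight')   # this file can then be served on the website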

Making ROC curves with results from cross_validate?

I am running 5-fold cross-validation with a random forest, as follows:
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate
forest = RandomForestClassifier(n_estimators=100, max_depth=8, max_features=6)
cv_results = cross_validate(forest, X, y, cv=5, scoring=scoring)
However, I want to plot the ROC curves for the 5 outputs on one graph. The documentation only provides an example of plotting the ROC curve with cross-validation when specifically using StratifiedKFold (see the documentation here: https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc_crossval.html#sphx-glr-auto-examples-model-selection-plot-roc-crossval-py).
I tried tweaking the code to make it work for cross_validate, but to no avail.
How do I make a ROC curve with the 5 results from the cross_validate output being plotted on a single graph?
Thanks in advance
cross_validate is a model validation tool rather than a splitter class. You need to choose the splitter class that is right for you; you are probably after KFold. Something like this:
from sklearn.model_selection import KFold
cv = KFold(n_splits=5)
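A minimal sketch of how the per-fold ROC curves could then be drawn on one graph (not from the original answer); it assumes X and y are NumPy arrays with a binary target (for pandas objects, index with .iloc instead), and the shuffle/random_state settings are arbitrary:
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import auc, roc_curve
from sklearn.model_selection import KFold

forest = RandomForestClassifier(n_estimators=100, max_depth=8, max_features=6)
cv = KFold(n_splits=5, shuffle=True, random_state=0)

fig, ax = plt.subplots()
for i, (train_idx, test_idx) in enumerate(cv.split(X, y)):
    forest.fit(X[train_idx], y[train_idx])
    # probability of the positive class for the held-out fold
    probas = forest.predict_proba(X[test_idx])[:, 1]
    fpr, tpr, _ = roc_curve(y[test_idx], probas)
    ax.plot(fpr, tpr, label="fold {} (AUC = {:.2f})".format(i, auc(fpr, tpr)))
ax.plot([0, 1], [0, 1], "k--", label="chance")
ax.set_xlabel("False positive rate")
ax.set_ylabel("True positive rate")
ax.legend()
plt.show()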

Draw an ordinary plot with the same style as in plt.hist(histtype='step')

The method plt.hist() in pyplot has a way to create a 'step-like' plot style when calling
plt.hist(data, histtype='step')
but the 'ordinary' methods that plot raw data without processing (plt.plot(), plt.scatter(), etc.) apparently do not have style options to obtain the same result. My goal is to plot a given set of points using that style, without making a histogram of these points.
Is that achievable with standard library methods for plotting a given 2-D set of points?
I also think that there is at least one hack (generating a fake distribution whose histogram would equal our data) and a 'low-level' solution of drawing each segment manually, but neither of these approaches seems favorable.
Maybe you are looking for drawstyle="steps".
import numpy as np; np.random.seed(42)
import matplotlib.pyplot as plt
data = np.cumsum(np.random.randn(10))
plt.plot(data, drawstyle="steps")
plt.show()
Note that this is slightly different from histograms, because the lines do not go to zero at the ends.
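For reference, a small sketch of the other step variants (not part of the original answer): drawstyle="steps" is an alias for "steps-pre", while "steps-mid" and "steps-post" change where the jump occurs relative to each data point:
import numpy as np; np.random.seed(42)
import matplotlib.pyplot as plt
data = np.cumsum(np.random.randn(10))
plt.plot(data, drawstyle="steps-pre", label="steps-pre (alias 'steps')")
plt.plot(data, drawstyle="steps-mid", label="steps-mid")
plt.plot(data, drawstyle="steps-post", label="steps-post")
plt.legend()
plt.show()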

Difference between matplotlib.contourf and matlab.contourf() - odd sharp edges in matplotlib

I am a recent migrant from Matlab to Python and have recently worked with NumPy and Matplotlib. I recoded one of my Matlab scripts, which uses Matlab's contourf function, into Python using matplotlib's corresponding contourf function. I managed to replicate the output in Python, except that the contourf plots are not exactly the same, for a reason that is unknown to me.
When I run the contourf function in matplotlib, I get an otherwise nice figure, but it has sharp edges on the contour levels at the top and bottom, which should not be there (see Figure 1 below, matplotlib output). When I export the arrays I used in Python to Matlab (i.e. exactly the same data set that was used to generate the matplotlib contourf plot) and use Matlab's contourf function, I get a slightly different output, without those sharp contour-level edges (see Figure 2 below, Matlab output). I used the same number of levels in both figures. In Figure 3 I have made a scatter plot of the same data, which shows that there are no such sharp edges in the data as the contourf plot suggests (I added contour lines just for reference).
An example data set can be downloaded through the Dropbox link given below. It contains three txt files: X, Y, Z. Each of them is a 500x500 array, which can be used directly with contourf(), i.e. plt.contourf(X, Y, Z, ...). The code I used was
plt.contourf(X,Y,Z,10, cmap=plt.cm.jet)
plt.contour(X,Y,Z,10,colors='black', linewidths=0.5)
plt.axis('equal')
plt.axis('off')
Does anyone have an idea why this happens? I would appreciate any insight on this!
Cheers,
Jussi
Below are the details of my setup:
Python 3.7.0
IPython 6.5.0
matplotlib 2.2.3
Matplotlib output
Matlab output
Matplotlib-scatter
Link to data set
The confusing thing about the Matlab plot is that its colorbar shows many more levels than there actually are in the plot. Hence you don't see the actual intervals that are contoured.
You would achieve the same result in matplotlib by choosing 12 instead of 11 levels.
import numpy as np
import matplotlib.pyplot as plt
X, Y, Z = [np.loadtxt("data/roundcontourdata/{}.txt".format(i)) for i in list("XYZ")]
levels = np.linspace(Z.min(), Z.max(), 12)
cntr = plt.contourf(X,Y,Z,levels, cmap=plt.cm.jet)
plt.contour(X,Y,Z,levels,colors='black', linewidths=0.5)
plt.colorbar(cntr)
plt.axis('equal')
plt.axis('off')
plt.show()
So in conclusion, both plots are correct and show the same data; it's just that the automatically chosen levels differ. This can be circumvented by choosing custom levels, depending on the desired visual appearance.

MNIST Matplotlib: showing color

I am exploring the MNIST dataset which is a collection of gray-scale handwritten digit images. I am using Matplotlib to plot random images from the dataset:
import matplotlib.pyplot as plt

plt.subplot(221)
plt.imshow(X_train[1],cmap='gray')
plt.subplot(222)
plt.imshow(X_train[100])
plt.subplot(223)
plt.imshow(X_train[4559])
plt.subplot(224)
plt.imshow(X_train[50000])
plt.show()
My question is: why do the images come up as colored when I don't explicitly set cmap='gray'?
Shouldn't they all appear as grayscale images by default, since that is their true nature?
This is because, by default, imshow() uses 'viridis' as its colormap, so single-channel (grayscale) data is mapped to colors instead of shades of gray.
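A small sketch of the two usual ways to get grayscale rendering (not part of the original answer); it assumes X_train is the MNIST array from the question:
import matplotlib.pyplot as plt

# Option 1: pass the colormap explicitly for each call
plt.imshow(X_train[100], cmap='gray')
plt.show()

# Option 2: change the default colormap for the whole session
plt.rcParams['image.cmap'] = 'gray'
plt.imshow(X_train[100])  # rendered in grayscale without an explicit cmap
plt.show()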