I have a function (get_plot()) from a library that plots and returns an object of type matplotlib.axes._subplots.AxesSubplot. How can I generate four of these plots and add them to a matplotlib subplot grid?
import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({'x': [1, 2, 3, 4, 5]})

def get_plot(data):
    return data.plot()  # stand-in for the library call; returns an AxesSubplot

fig, axes = plt.subplots(nrows=2, ncols=2)
axes[0][0].plt(get_plot(df))
axes[0][1].plt(get_plot(df))
axes[1][0].plt(get_plot(df))
axes[1][1].plt(get_plot(df))
This throws an error:
AttributeError: 'AxesSubplot' object has no attribute 'plt'
Note: the function get_plot() comes from a library which I do not want to modify.
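For reference, a minimal sketch of the usual grid approach, assuming the plotting call accepts a target Axes (pandas' DataFrame.plot does, via its ax keyword; a library get_plot() that does not expose such an argument cannot be redirected this way):

import pandas as pd
import matplotlib.pyplot as plt

df = pd.DataFrame({'x': [1, 2, 3, 4, 5]})

fig, axes = plt.subplots(nrows=2, ncols=2)
for ax in axes.flat:
    df.plot(ax=ax)  # draw directly into the grid cell instead of re-adding a returned Axes
plt.show()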
The ConnectionPatch is a useful way to draw a line between two points on two different axes (demo). Is it possible to use this class when one (or both) of the axes is a Cartopy GeoAxes? A related answer suggests a workaround, but I would prefer to avoid this.
I cannot answer your question about using that class. But if you are interested in plotting lines between two different Cartopy GeoAxes, or between a Matplotlib Axes and a GeoAxes, that can be achieved with some coordinate transformation. Here is runnable code and the output plot. I have written comments within the code to explain the important steps.
For further information about coordinate systems and transformations:
Cartopy: https://scitools.org.uk/cartopy/docs/latest/tutorials/understanding_transform.html
Matplotlib: https://matplotlib.org/3.2.1/tutorials/advanced/transforms_tutorial.html (Cartopy is built on top of Matplotlib, so the related Matplotlib material applies as well)
import cartopy
import cartopy.mpl.geoaxes
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
fig, ax = plt.subplots()
fig.set_size_inches([8,8]) # 9,6; 8,9; 8,3 all OK
# Plot simple line on main axes
ax.plot([4,5,3,1,2])
p1 = [0.5,3.0] # Bangkok text location
p2 = [0.5,2.75] # Himalaya text location
# Plot texts (Bangkok, Himalaya) on the main axes
ax.text(*p1, "Bangkok", ha='right')
ax.text(*p2, "Himalaya", ha='right')
# Plotting on the upper-right (UR) inset map (cartopy) on the main axes (ax)
bkk_lon, bkk_lat = 100, 13 # Bangkok
hml_lon, hml_lat = 83.32, 29.22 # Everest peak
# Create cartopy geoaxes inset axes as part of the main axes 'ax'
axins = inset_axes(ax, width="40%", height="30%", loc="upper right",
axes_class = cartopy.mpl.geoaxes.GeoAxes,
axes_kwargs = dict(map_projection = cartopy.crs.PlateCarree()))
# Set map limits on that axes (for Thailand)
llx, lly = 95, 0
urx, ury = 110, 25
axins.set_xlim((llx, urx))
axins.set_ylim((lly, ury))
# Plot coastlines
axins.add_feature(cartopy.feature.COASTLINE)
# Plot line across the inset map, LL to UR; OK
#ll_p, ur_p = [llx,urx], [lly,ury]
#axins.plot(ll_p, ur_p, "r--")
axins.plot(bkk_lon, bkk_lat, 'ro', transform=cartopy.crs.PlateCarree()) # OK!
# Create another inset map on the main axes (ax)
axins2 = inset_axes(ax, width="40%", height="30%", loc="lower left",
axes_class = cartopy.mpl.geoaxes.GeoAxes,
axes_kwargs = dict(map_projection = cartopy.crs.PlateCarree()))
# Set map limits on that axes (second inset map)
llx2, lly2 = -60, -20
urx2, ury2 = 120, 90
axins2.set_xlim((llx2, urx2))
axins2.set_ylim((lly2, ury2))
axins2.add_feature(cartopy.feature.COASTLINE)
# Plot line from UK to BKK, OK
#p21, p22 = [0, 100], [40, 13]
#axins2.plot(p21, p22, "r--")
# Plot blue dot at Himalaya
axins2.plot(hml_lon, hml_lat, "bo")
plt.draw() # Do this to get updated position
# Do coordinate transformation to get BKK, HML locations in display coordinates
# from axins_data_xy to dp_xy
dpxy_bkk_axins = axins.transData.transform((bkk_lon, bkk_lat)) # get display coordinates
# from axins2_data_xy to dp_xy
dpxy_hml_axins2 = axins2.transData.transform((hml_lon, hml_lat)) # get display coordinates
# Do coordinate transformation to get BKK, HML locations in data coordinates of the main axes 'ax'
# from both dp_xy to main_ax_data
ur_bkk = ax.transData.inverted().transform( dpxy_bkk_axins )
ll_hml = ax.transData.inverted().transform( dpxy_hml_axins2 )
# Prep coordinates for line connecting BKK to HML
xs = ur_bkk[0], ll_hml[0]
ys = ur_bkk[1], ll_hml[1]
ax.plot(xs, ys, 'g--') # from Bkk to Himalaya of different inset maps
# Plot lines from texts (on main axes) to locations on maps
ax.plot([p1[0], ur_bkk[0]], [p1[1], ur_bkk[1]], 'y--')
ax.plot([p2[0], ll_hml[0]], [p2[1], ll_hml[1]], 'y--')
# Set cartopy inset background invisible
axins.background_patch.set_visible(False)
axins2.background_patch.set_visible(False)
plt.show()
The output plot shows the line plot on the main axes with the two Cartopy inset maps, a green dashed line connecting Bangkok and the Himalaya across the two insets, and yellow dashed lines from the text labels to the map locations.
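For reference, the standard ConnectionPatch pattern on two plain Matplotlib axes looks like the sketch below (following the linked demo); whether it behaves correctly when axesA or axesB is a Cartopy GeoAxes is exactly the open part of the question:

import matplotlib.pyplot as plt
from matplotlib.patches import ConnectionPatch

fig, (ax1, ax2) = plt.subplots(1, 2)
xy = (0.5, 0.5)
# connect (0.5, 0.5) in ax2's data coordinates to the same point in ax1's
con = ConnectionPatch(xyA=xy, coordsA="data", axesA=ax2,
                      xyB=xy, coordsB="data", axesB=ax1,
                      linestyle="--", color="green")
ax2.add_artist(con)
plt.show()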
I am a bit lost on the best approach for adding labels to my markers with a seaborn relplot. I see in the matplotlib documentation that there is an axes.text() method, which looks like the right approach, but this method doesn't appear to exist here. Does seaborn behave differently from matplotlib in this sense? What would the right approach be?
Error:
AttributeError: 'numpy.ndarray' object has no attribute 'text'
Code:
line_minutes_asleep = sns.relplot(
    x="sleep_date",
    y="minutes_asleep",
    kind="line",
    data=df,
    height=10,  # make the plot 10 units high
    aspect=3
)
x = df.sleep_date
y = df.minutes_asleep
names = df.minutes_asleep
print(line_minutes_asleep.axes.text())
relplot returns a FacetGrid, which is a figure containing several subplots. The .axes property of a FacetGrid is a 2D ndarray of Axes objects, so you need FacetGrid.axes[i,j] to get a reference to an individual subplot.
If you want to write something in the first subplot (axes[0,0]), at the position x,y=(20,5), you would need to do:
import seaborn as sns
sns.set(style="ticks")
tips = sns.load_dataset("tips")
g = sns.relplot(x="total_bill", y="tip", hue="day", data=tips)
g.axes[0,0].text(20,5,"this is a text")
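To label the individual markers (what the question actually asks for), you can loop over the data and call text() on that Axes. A hedged sketch continuing the snippet above; the offset and the number of labelled points are illustrative:

ax = g.axes[0, 0]
for _, row in tips.head(10).iterrows():
    # annotate each of the first ten points with its tip value
    ax.text(row["total_bill"], row["tip"] + 0.05, "{:.2f}".format(row["tip"]), fontsize=8)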
I resorted to using the cloud training workflow. Given the product I got, I would have expected to drop directly into the code I already have working with other tflite models, but the cloud-produced model doesn't work: I get "index out of range" when asking for interpreter.get_tensor parameters.
Here is my code, basically a modified example, where I can ingest a video and produce a video with results.
import argparse
import os
import cv2
import numpy as np
import sys
import importlib.util
# Define and parse input arguments
parser = argparse.ArgumentParser()
parser.add_argument('--modeldir', help='Folder the .tflite file is located in',
required=True)
parser.add_argument('--graph', help='Name of the .tflite file, if different than detect.tflite',
default='model.tflite')
# default='/tmp/detect.tflite')
parser.add_argument('--labels', help='Name of the labelmap file, if different than labelmap.txt',
default='dict.txt')
# default='/tmp/coco_labels.txt')
parser.add_argument('--threshold', help='Minimum confidence threshold for displaying detected objects',
default=0.5)
parser.add_argument('--video', help='Name of the video file',
default='test.mp4')
parser.add_argument('--edgetpu', help='Use Coral Edge TPU Accelerator to speed up detection',
action='store_true')
args = parser.parse_args()
MODEL_NAME = args.modeldir
GRAPH_NAME = args.graph
LABELMAP_NAME = args.labels
VIDEO_NAME = args.video
min_conf_threshold = float(args.threshold)
use_TPU = args.edgetpu
# Import TensorFlow libraries
# If tensorflow is not installed, import interpreter from tflite_runtime, else import from regular tensorflow
# If using Coral Edge TPU, import the load_delegate library
pkg = importlib.util.find_spec('tensorflow')
pkg = True  # NOTE: this overrides the check above and always takes the tensorflow branch below
if pkg is None:
    from tflite_runtime.interpreter import Interpreter
    if use_TPU:
        from tflite_runtime.interpreter import load_delegate
else:
    from tensorflow.lite.python.interpreter import Interpreter
    if use_TPU:
        from tensorflow.lite.python.interpreter import load_delegate

# If using Edge TPU, assign filename for Edge TPU model
if use_TPU:
    # If user has specified the name of the .tflite file, use that name, otherwise use default 'edgetpu.tflite'
    if (GRAPH_NAME == 'detect.tflite'):
        GRAPH_NAME = 'edgetpu.tflite'
# Get path to current working directory
CWD_PATH = os.getcwd()
# Path to video file
VIDEO_PATH = os.path.join(CWD_PATH,VIDEO_NAME)
# Path to .tflite file, which contains the model that is used for object detection
PATH_TO_CKPT = os.path.join(CWD_PATH,MODEL_NAME,GRAPH_NAME)
# Path to label map file
PATH_TO_LABELS = os.path.join(CWD_PATH,MODEL_NAME,LABELMAP_NAME)
# Load the label map
with open(PATH_TO_LABELS, 'r') as f:
    labels = [line.strip() for line in f.readlines()]

# Have to do a weird fix for label map if using the COCO "starter model" from
# https://www.tensorflow.org/lite/models/object_detection/overview
# First label is '???', which has to be removed.
if labels[0] == '???':
    del(labels[0])
# Load the Tensorflow Lite model.
# If using Edge TPU, use special load_delegate argument
if use_TPU:
    interpreter = Interpreter(model_path=PATH_TO_CKPT,
                              experimental_delegates=[load_delegate('libedgetpu.so.1.0')])
    print(PATH_TO_CKPT)
else:
    interpreter = Interpreter(model_path=PATH_TO_CKPT)

interpreter.allocate_tensors()
# Get model details
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
height = input_details[0]['shape'][1]
width = input_details[0]['shape'][2]
floating_model = (input_details[0]['dtype'] == np.float32)
input_mean = 127.5
input_std = 127.5
# Open video file
video = cv2.VideoCapture(VIDEO_PATH)
imW = video.get(cv2.CAP_PROP_FRAME_WIDTH)
imH = video.get(cv2.CAP_PROP_FRAME_HEIGHT)
out = cv2.VideoWriter('output.avi', cv2.VideoWriter_fourcc(
'M', 'J', 'P', 'G'), 10, (1920, 1080))
while video.isOpened():
    # Acquire frame and resize to expected shape [1xHxWx3]
    ret, frame = video.read()
    if not ret:
        break  # end of video (without this check, cvtColor fails on the final, empty read)
    frame_rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    frame_resized = cv2.resize(frame_rgb, (width, height))
    input_data = np.expand_dims(frame_resized, axis=0)

    # Normalize pixel values if using a floating model (i.e. if model is non-quantized)
    if floating_model:
        input_data = (np.float32(input_data) - input_mean) / input_std

    # Perform the actual detection by running the model with the image as input
    interpreter.set_tensor(input_details[0]['index'], input_data)
    interpreter.invoke()

    # Retrieve detection results
    boxes = interpreter.get_tensor(output_details[0]['index'])[0]   # Bounding box coordinates of detected objects
    classes = interpreter.get_tensor(output_details[1]['index'])[0] # Class index of detected objects
    scores = interpreter.get_tensor(output_details[2]['index'])[0]  # Confidence of detected objects
    print(boxes)
    print(classes)
    print(scores)
    #num = interpreter.get_tensor(output_details[3]['index'])[0]  # Total number of detected objects (inaccurate and not needed)

    # Loop over all detections and draw detection box if confidence is above minimum threshold
    for i in range(len(scores)):
        if ((scores[i] > min_conf_threshold) and (scores[i] <= 1.0)):
            # Get bounding box coordinates and draw box
            # Interpreter can return coordinates that are outside of image dimensions, need to force them to be within image using max() and min()
            ymin = int(max(1, (boxes[i][0] * imH)))
            xmin = int(max(1, (boxes[i][1] * imW)))
            ymax = int(min(imH, (boxes[i][2] * imH)))
            xmax = int(min(imW, (boxes[i][3] * imW)))
            cv2.rectangle(frame, (xmin, ymin), (xmax, ymax), (10, 255, 0), 4)

            # Draw label
            object_name = labels[int(classes[i])]  # Look up object name from "labels" array using class index
            label = '%s: %d%%' % (object_name, int(scores[i] * 100))  # Example: 'person: 72%'
            labelSize, baseLine = cv2.getTextSize(label, cv2.FONT_HERSHEY_SIMPLEX, 0.7, 2)  # Get font size
            label_ymin = max(ymin, labelSize[1] + 10)  # Make sure not to draw label too close to top of window
            cv2.rectangle(frame, (xmin, label_ymin - labelSize[1] - 10),
                          (xmin + labelSize[0], label_ymin + baseLine - 10),
                          (255, 255, 255), cv2.FILLED)  # Draw white box to put label text in
            cv2.putText(frame, label, (xmin, label_ymin - 7), cv2.FONT_HERSHEY_SIMPLEX,
                        0.7, (0, 0, 0), 2)  # Draw label text

    # All the results have been drawn on the frame, so it's time to display it.
    cv2.imshow('Object detector', frame)
    #output_rgb = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
    out.write(frame)

    # Press 'q' to quit
    if cv2.waitKey(1) == ord('q'):
        break

# Clean up
video.release()
out.release()
cv2.destroyAllWindows()
Here is what the print statements should look like when using the canned tflite model:
[32. 76. 56. 76. 0. 61. 74. 0. 0. 0.]
[0.609375 0.48828125 0.44921875 0.44921875 0.4140625 0.40234375
0.37890625 0.3125 0.3125 0.3125 ]
[[-0.01923192 0.17330796 0.747546 0.8384144 ]
[ 0.01866053 0.5023282 0.39603746 0.6143299 ]
[ 0.01673795 0.47382414 0.34407628 0.5580931 ]
[ 0.11588445 0.78543806 0.8778869 1.0039229 ]
[ 0.8106107 0.70675755 1.0080075 0.89248717]
[ 0.84941524 0.06391776 1.0006479 0.28792098]
[ 0.05543692 0.53557926 0.40413857 0.62823087]
[ 0.07051808 -0.00938512 0.8822515 0.28100258]
[ 0.68205094 0.33990026 0.9940187 0.6020821 ]
[ 0.08010477 0.01998334 0.6011186 0.26135433]]
Here is the error when presented with the cloud created model:
File "tflite_vid.py", line 124, in <module>
classes = interpreter.get_tensor(output_details[1]['index'])[0] # Class index of detected objects
IndexError: list index out of range
So I would kindly ask that someone explain either how to develop a TFLite model with TF2 and Python, or how to get the cloud to generate a usable TFLite model. Please, oh please, do not point me in a direction that entails wandering through Internet examples unless they are the actual gospel on how to do this.
The error is at output_details[1]: the [1] is the list index that is out of range. Your model may have only one output, but the code tries to access a second one.
For more about using the Python inference API, please refer to https://www.tensorflow.org/lite/guide/inference#load_and_run_a_model_in_python for guidance.
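A quick way to confirm what the model actually exposes is to print the output details before indexing into them; a minimal sketch:

# List every output tensor the interpreter reports; if this prints a single
# entry, output_details[1] and output_details[2] will raise IndexError.
for i, detail in enumerate(interpreter.get_output_details()):
    print(i, detail['name'], detail['shape'], detail['dtype'])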
I've managed to make a set of subplots using hist2d and ImageGrid with the code below:
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import ImageGrid

fig = plt.figure(figsize=(20, 60))
grid = ImageGrid(fig, 111, nrows_ncols=(1, 3), axes_pad=0.25)
for soa, ax in zip(soalist, grid):
    # grab my data from pandas DataFrame...
    samps = allsubs[allsubs['soa'] == soa]
    x, y = samps['x'], samps['y']
    # calls hist2d and returns the Image returned by hist2d
    img = gazemap(x, y, ax, std=True, mean=True)
    ax.set_title("{0} ms".format(soa * 1000))
# attempt to show a colorbar for that image
grid.cbar_axes[-1].colorbar(img)
plt.show()  # threw this in for good measure, but doesn't help!
I get no explicit error (which is good, because I passed an Image to colorbar), but my colorbar does not appear. What gives?
Okay, I fixed it!
All I had to do was pass the cbar_mode and cbar_location kwargs to ImageGrid!
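For anyone hitting the same problem, a minimal sketch of what that looks like (synthetic data; cbar_mode="single" gives one shared colorbar and cbar_location picks its side):

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1 import ImageGrid

fig = plt.figure(figsize=(12, 4))
grid = ImageGrid(fig, 111, nrows_ncols=(1, 3), axes_pad=0.25,
                 cbar_mode="single", cbar_location="right", cbar_pad=0.1)
for ax in grid:
    img = ax.imshow(np.random.rand(10, 10))
# with cbar_mode="single" there is one colorbar axes, shared by all panels
grid.cbar_axes[0].colorbar(img)
plt.show()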