How to retrieve (x,y) coordinates corresponding to FACEMESH_FACE_OVAL in results returned by mediapipe Holistic? - mediapipe

How can I retrieve the series of (x,y) coordinates corresponding to FACEMESH_FACE_OVAL from the results returned by mediapipe holistic? I can see some information about FACEMESH_FACE_OVAL at https://github.com/google/mediapipe/blob/master/mediapipe/python/solutions/face_mesh_connections.py and have read much of the code at https://github.com/google/mediapipe/tree/master/mediapipe/python/solutions but am stuck.
import mediapipe as mp
import cv2
mp_holistic = mp.solutions.holistic
frame = cv2.imread(color_frame) # color_frame is the name of a color image file
imageRGB = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
holistic = mp_holistic.Holistic(model_complexity=2)
results = holistic.process(imageRGB)
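For reference, FACEMESH_FACE_OVAL (in the face_mesh_connections module linked above) is a frozenset of (start_index, end_index) landmark pairs rather than a list of points, so the step I'm unsure about is going from those index pairs to results.face_landmarks. A minimal sketch of what I think should work, assuming results.face_landmarks is not None (this just collects the unique indices; walking the (start, end) pairs in order would be needed to trace the oval as a contour):
from mediapipe.python.solutions.face_mesh_connections import FACEMESH_FACE_OVAL

# FACEMESH_FACE_OVAL is a frozenset of (start, end) landmark-index pairs,
# so collect the unique indices and look each one up in the landmark list.
oval_indices = sorted({idx for pair in FACEMESH_FACE_OVAL for idx in pair})
h, w = frame.shape[:2]
if results.face_landmarks is not None:
    oval_xy = [(results.face_landmarks.landmark[i].x * w,
                results.face_landmarks.landmark[i].y * h)
               for i in oval_indices]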

Related

xarray : how to stack several pcolormesh figures above a map?

For an ML project I'm currently working on, I need to verify whether the trained data are good or not.
Let's say that I'm "splitting" the sky into several altitude grids (let's take 3 values for the moment) for a given region (let's say, Europe).
One grid could be the signal reception strength (RSSI), another one the signal quality (RSRQ).
Each cell of the grid is therefore a rectangle, and it holds the mean value of each measurement (i.e. RSSI or RSRQ) performed in that area.
I have hundreds of millions of data points.
In the code below, I know how to draw a coloured mesh with xarray for each altitude: I just use xr.plot.pcolormesh(lat, lon, the_data_set); that's fine.
But this will only give me a "flat" figure like this:
RSSI value at 3 different altitudes
I need to draw all the pcolormesh() of a dataset for each altitude in such way that:
1: I can have the map at the bottom
2: Each pcolormesh() is stacked and "displayed" at its altitude
3: I need to add a 3d scatter plot for testing my trained data
4: Need to be interactive as I have to zoom in areas
For 2 and 3 above, I managed to do something using plt and cartopy:
(screenshot of the plt/cartopy result)
But the plt/cartopy combination is not as interactive as plotly, and plotly doesn't have the pcolormesh functionality.
And in any case, I still don't know how to "stack" the pcolormesh results that I did get above.
I've been digging through the Internet for a few days but I didn't find anything that satisfies all my criteria.
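For point 2, the closest thing I can imagine in plotly is one flat go.Surface per altitude, coloured through surfacecolor (a rough sketch on fake data, not validated at scale, and it does not yet address the base map of point 1):
import numpy as np
import plotly.graph_objects as go

# One flat Surface per altitude, coloured by the data for that altitude.
# lon2d/lat2d are 2-D coordinate grids, data3d has shape (n_alt, ny, nx) -- fake data here.
lon2d, lat2d = np.meshgrid(np.linspace(-15, 30, 10), np.linspace(35, 65, 8))
altitudes = [1000, 3000, 5000]
data3d = np.random.randint(-120, -60, size=(len(altitudes), *lon2d.shape))

fig = go.Figure()
for k, alt in enumerate(altitudes):
    fig.add_trace(go.Surface(
        x=lon2d, y=lat2d,
        z=np.full_like(lon2d, alt, dtype=float),   # flat plane at this altitude
        surfacecolor=data3d[k],                    # colour the plane by the data
        colorscale="Viridis", opacity=0.7, showscale=(k == 0)))
fig.show()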
What I did to get my pcolormesh:
import numpy as np
import xarray as xr
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
class super_data():
    def __init__(self, lon_bound, lat_bound, alt_bound, x_points, y_points, z_points):
        self.lon_bound = lon_bound
        self.lat_bound = lat_bound
        self.alt_bound = alt_bound
        self.x_points = x_points
        self.y_points = y_points
        self.z_points = z_points
        self.lon, self.lat, self.alt = np.meshgrid(np.linspace(self.lon_bound[0], self.lon_bound[1], self.x_points),
                                                   np.linspace(self.lat_bound[0], self.lat_bound[1], self.y_points),
                                                   np.linspace(self.alt_bound[0], self.alt_bound[1], self.z_points))
        self.this_xr = xr.Dataset(
            coords={'lat': (('latitude', 'longitude', 'altitude'), self.lat),
                    'lon': (('latitude', 'longitude', 'altitude'), self.lon),
                    'alt': (('latitude', 'longitude', 'altitude'), self.alt)})

    def add_data_array(self, ds_name, ds_min, ds_max):
        def create_temp_data(ds_min, ds_max):
            data = np.random.randint(ds_min, ds_max, size=self.y_points * self.x_points)
            return data

        temp_data = []
        # Create "z_points" number of layers in the z axis
        for i in range(self.z_points):
            temp_data.append(create_temp_data(ds_min, ds_max))
        data = np.concatenate(temp_data)
        data = data.reshape(self.z_points, self.x_points, self.y_points)
        self.this_xr[ds_name] = (("altitude", "longitude", "latitude"), data)

    def plot(self, dataset, extent=None, plot_center=False):
        # Lay the per-altitude plots out on a (roughly) square grid of subplots
        if np.sqrt(self.z_points) == np.floor(np.sqrt(self.z_points)):
            side_size = int(np.sqrt(self.z_points))
        else:
            side_size = int(np.floor(np.sqrt(self.z_points) + 1))
        fig = plt.figure()
        i_ax = 1
        for i in range(side_size):
            for j in range(side_size):
                if i_ax < self.z_points + 1:
                    this_dataset = self.this_xr[dataset].sel(altitude=i_ax - 1)
                    # Initialize figure with subplots
                    ax = fig.add_subplot(side_size, side_size, i_ax, projection=ccrs.PlateCarree())
                    i_ax += 1
                    ax.coastlines()
                    this_dataset.plot.pcolormesh('lon', 'lat', ax=ax, infer_intervals=True, alpha=0.5)
                else:
                    break
        plt.tight_layout()
        plt.show()


if __name__ == "__main__":
    # Wanted coverage:
    lons = [-15, 30]
    lats = [35, 65]
    alts = [1000, 5000]

    xarr = super_data(lons, lats, alts, 10, 8, 3)

    # Add some fake data
    xarr.add_data_array("RSSI", -120, -60)
    xarr.add_data_array("pressure", 700, 1013)

    xarr.plot("RSSI", 0)
Thanks for your help.

How to add text to folium choropleth maps

I am using sample data from geopandas for this example. But basically what I want to do is have a choropleth map with text displayed on it by default.
Currently, I have used tooltip='BoroCode' to display the text I need; however, I don't want the text displayed only when you hover over the area. I'd like it to be displayed all the time.
import folium
import branca.colormap as cmp
import geopandas

path_to_data = geopandas.datasets.get_path("nybb")
gdf = geopandas.read_file(path_to_data)

# create map
m = folium.Map(location=[40.730610, -73.935242])

# add boroughs
gdf.explore(
    m=m,
    column="BoroName",
    scheme="naturalbreaks",
    legend=True,
    k=10,
    cmap='winter',
    legend_kwds=dict(),
    name="boroughs",
    tooltip='BoroCode'
)
folium.LayerControl().add_to(m)
m
I don't have much experience with geopandas, but in this case gdf.explore() doesn't seem to be able to add annotations, so you can add markers with HTML-formatted text, set through an icon, on the folium side. The coordinate system of the geopandas data is not in a format that folium can use, so it is converted first. Then the center point of each borough is obtained. A warning will be displayed because this center point may not be exactly correct; I think the way to avoid the warning is to use the true center point.
import folium
from folium.features import DivIcon
import branca.colormap as cmp
import geopandas

path_to_data = geopandas.datasets.get_path("nybb")
gdf = geopandas.read_file(path_to_data)
gdf = gdf.to_crs(epsg=4326)
gdf['centroid'] = gdf.centroid
gdf['lat'] = gdf['centroid'].map(lambda p: p.y)
gdf['lon'] = gdf['centroid'].map(lambda p: p.x)

m = folium.Map(location=[40.730610, -73.935242])

for i, row in gdf.iterrows():
    folium.map.Marker(
        [row['lat'], row['lon']],
        icon=DivIcon(
            icon_size=(100, 24),
            icon_anchor=(0, 0),
            html=f'<div style="font-size:16px; color:white;">{row["BoroName"]}</div>',
        )
    ).add_to(m)

# add boroughs
gdf.explore(
    m=m,
    column="BoroName",
    scheme="naturalbreaks",
    legend=True,
    k=10,
    cmap='winter',
    legend_kwds=dict(),
    name="boroughs",
    tooltip='BoroName'
)
folium.LayerControl().add_to(m)
m
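As a side note on the warning about the center point: one way that should avoid it (an assumption, using EPSG:2263 only as an example of a projected CRS suitable for New York) is to compute the centroids in a projected CRS and convert them back to EPSG:4326 before building the markers:
# compute centroids in a projected CRS, then convert the points back for folium
gdf['centroid'] = gdf.to_crs(epsg=2263).centroid.to_crs(epsg=4326)
gdf['lat'] = gdf['centroid'].map(lambda p: p.y)
gdf['lon'] = gdf['centroid'].map(lambda p: p.x)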

How to get intel realsense D435i camera serial numbers from frames for multiple cameras?

I have initialized one pipeline for two cameras and I am getting color and depth images from it.
The problem is that I cannot find the camera serial numbers for the corresponding frames, so I cannot determine which camera captured which frames.
Below is my code:
import pyrealsense2 as rs
import numpy as np
import cv2
import logging
import time

# Configure depth and color streams...
pipeline_1 = rs.pipeline()
config_1 = rs.config()
config_1.enable_device('938422072752')
config_1.enable_device('902512070386')
config_1.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config_1.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)

# Start streaming from both cameras
pipeline_1.start(config_1)

try:
    while True:
        # Camera 1
        # Wait for a coherent pair of frames: depth and color
        frames_1 = pipeline_1.wait_for_frames()
        depth_frame_1 = frames_1.get_depth_frame()
        color_frame_1 = frames_1.get_color_frame()
        if not depth_frame_1 or not color_frame_1:
            continue

        # Convert images to numpy arrays
        depth_image_1 = np.asanyarray(depth_frame_1.get_data())
        color_image_1 = np.asanyarray(color_frame_1.get_data())

        # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
        depth_colormap_1 = cv2.applyColorMap(cv2.convertScaleAbs(depth_image_1, alpha=0.5), cv2.COLORMAP_JET)

        # Camera 2
        # Wait for a coherent pair of frames: depth and color
        frames_2 = pipeline_1.wait_for_frames()
        depth_frame_2 = frames_2.get_depth_frame()
        color_frame_2 = frames_2.get_color_frame()
        if not depth_frame_2 or not color_frame_2:
            continue

        # Convert images to numpy arrays
        depth_image_2 = np.asanyarray(depth_frame_2.get_data())
        color_image_2 = np.asanyarray(color_frame_2.get_data())

        # Apply colormap on depth image (image must be converted to 8-bit per pixel first)
        depth_colormap_2 = cv2.applyColorMap(cv2.convertScaleAbs(depth_image_2, alpha=0.5), cv2.COLORMAP_JET)

        # Stack all images horizontally
        images = np.hstack((color_image_1, depth_colormap_1, color_image_2, depth_colormap_2))

        # Show images from both cameras
        cv2.namedWindow('RealSense', cv2.WINDOW_NORMAL)
        cv2.imshow('RealSense', images)
        cv2.waitKey(20)
finally:
    pipeline_1.stop()
How can I find the camera serial numbers after wait_for_frames() to determine which camera captured the depth and color images?
I adapted your code and combined it with the C++ example posted by nayab to compose the following code, which grabs the color images (only) of multiple RealSense cameras and stacks them horizontally:
import pyrealsense2 as rs
import numpy as np
import cv2
import logging
import time

# The context encapsulates all of the devices and sensors, and provides some additional functionalities.
realsense_ctx = rs.context()
connected_devices = []

# get serial numbers of connected devices:
for i in range(len(realsense_ctx.devices)):
    detected_camera = realsense_ctx.devices[i].get_info(rs.camera_info.serial_number)
    connected_devices.append(detected_camera)

pipelines = []
configs = []
for i in range(len(realsense_ctx.devices)):
    pipelines.append(rs.pipeline())  # one pipeline for each device
    configs.append(rs.config())      # one config for each device
    configs[i].enable_device(connected_devices[i])
    configs[i].enable_stream(rs.stream.color, 1920, 1080, rs.format.bgr8, 30)
    pipelines[i].start(configs[i])

try:
    while True:
        images = []
        for i in range(len(pipelines)):
            print("waiting for frame at cam", i)
            frames = pipelines[i].wait_for_frames()
            color_frame = frames.get_color_frame()
            images.append(np.asanyarray(color_frame.get_data()))

        # Stack all images horizontally
        image_composite = images[0]
        for i in range(1, len(images)):
            image_composite = np.hstack((image_composite, images[i]))

        # Show images from all cameras
        cv2.namedWindow('RealSense', cv2.WINDOW_NORMAL)
        cv2.imshow('RealSense', image_composite)
        cv2.waitKey(20)
finally:
    for i in range(len(pipelines)):
        pipelines[i].stop()
This will look for the connected devices and find the serial numbers.
They are saved in a list and you can use them to start the available cameras.
# Configure depth and color streams...
realsense_ctx = rs.context()
connected_devices = []
for i in range(len(realsense_ctx.devices)):
    detected_camera = realsense_ctx.devices[i].get_info(rs.camera_info.serial_number)
    connected_devices.append(detected_camera)
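Since each pipeline above was started with exactly one serial number, a simple way to know which camera a frame set came from is to keep the pipeline-to-serial association, for example (a sketch based on the lists built above):
# index i of `pipelines` corresponds to index i of `connected_devices`
serial_by_pipeline = dict(zip(range(len(pipelines)), connected_devices))

for i in range(len(pipelines)):
    frames = pipelines[i].wait_for_frames()
    color_frame = frames.get_color_frame()
    # frames retrieved from pipelines[i] belong to the camera with this serial:
    print("frame from camera", serial_by_pipeline[i])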

how to save figure in vis_bbox without white background, when plotting with matplotlib?

I'm trying to save the image after the vis_bbox prediction with its original image dimensions.
My code:
from PIL import Image, ImageChops
import cv2
import matplotlib.pyplot as plt
# utils, vis_bbox, voc_colormap and model are defined elsewhere (not shown in the question)

img = utils.read_image('/home/ubuntu/ui.jpg', color=True)
bboxes, labels, scores = model.predict([img])
bbox, label, score = bboxes[0], labels[0], scores[0]
colors = voc_colormap(label + 1)
bccd_labels = ('cell', 'cell')
vis_bbox(img, bbox, label_names=bccd_labels, instance_colors=colors, alpha=0.9, linewidth=1.0)
plt.axis("off")
plt.savefig("/home/ubuntu/ins.jpg")
When saving, it saves the image with a white background and the default size (432×288).
I need to save the predicted image from vis_bbox with the original dimensions (1300×1300).
Any suggestions would be helpful!
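Not a confirmed fix, but a common matplotlib pattern for saving without the white border and at the source resolution is to create a figure whose single axes fills the whole canvas, and draw into that axes (the dpi value and passing ax to vis_bbox are assumptions worth checking for your version):
import matplotlib.pyplot as plt

dpi = 100                     # assumed; any value works as long as figsize matches
height, width = 1300, 1300    # original image dimensions

fig = plt.figure(figsize=(width / dpi, height / dpi), dpi=dpi)
ax = fig.add_axes([0, 0, 1, 1])   # axes spanning the full figure, no margins
ax.set_axis_off()
vis_bbox(img, bbox, label_names=bccd_labels, instance_colors=colors,
         alpha=0.9, linewidth=1.0, ax=ax)
plt.savefig("/home/ubuntu/ins.jpg", dpi=dpi)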

Plotting Lat/Long Points Using Basemap

I am trying to plot points on a map using matplotlib and Basemap, where the points represent the lat/long for specific buildings. My map does indeed plot the points, but puts them in the wrong location. When I use the same data and do the same thing using Bokeh, instead of matplotlib and basemap, I get the correct plot.
Here is the CORRECT result in Bokeh:
Bokeh Version
And here is the INCORRECT result in Basemap:
Basemap Version
I have seen discussion elsewhere on StackOverflow that suggested this might be related to the fact that plot() "shifts" the longitude somehow. I've tried the suggestion from there, which was to include the line:
lons, lats = m.shiftdata(long, lat)
and then use the shifted data. That didn't have any visible impact.
My full sample code which generates both of the plots in Basemap and Bokeh is here:
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
import pandas as pd
from bokeh.plotting import figure, show
from bokeh.sampledata.us_states import data as states
from bokeh.models import ColumnDataSource, Range1d

# read in data to use for plotted points
buildingdf = pd.read_csv('buildingdata.csv')
lat = buildingdf['latitude'].values
long = buildingdf['longitude'].values

# determine range to print based on min, max lat and long of the data
margin = .2  # buffer to add to the range
lat_min = min(lat) - margin
lat_max = max(lat) + margin
long_min = min(long) - margin
long_max = max(long) + margin

# create map using BASEMAP
m = Basemap(llcrnrlon=long_min,
            llcrnrlat=lat_min,
            urcrnrlon=long_max,
            urcrnrlat=lat_max,
            lat_0=(lat_max - lat_min)/2,
            lon_0=(long_max - long_min)/2,
            projection='merc',
            resolution='h',
            area_thresh=10000.,
            )
m.drawcoastlines()
m.drawcountries()
m.drawstates()
m.drawmapboundary(fill_color='#46bcec')
m.fillcontinents(color='white', lake_color='#46bcec')

# convert lat and long to map projection coordinates
lons, lats = m(long, lat)

# plot points as red dots
m.scatter(lons, lats, marker='o', color='r')
plt.show()

# create map using Bokeh
source = ColumnDataSource(data=dict(lat=lat, lon=long))

# get state boundaries
state_lats = [states[code]["lats"] for code in states]
state_longs = [states[code]["lons"] for code in states]

p = figure(
    toolbar_location="left",
    plot_width=1100,
    plot_height=700,
)

# limit the view to the min and max of the building data
p.y_range = Range1d(lat_min, lat_max)
p.x_range = Range1d(long_min, long_max)
p.xaxis.visible = False
p.yaxis.visible = False
p.xgrid.grid_line_color = None
p.ygrid.grid_line_color = None

p.patches(state_longs, state_lats, fill_alpha=0.0,
          line_color="black", line_width=2, line_alpha=0.3)
p.circle(x="lon", y="lat", source=source, size=4.5,
         fill_color='red',
         line_color='grey',
         line_alpha=.25)
show(p)
I don't have enough reputation points to post a link to the data or to include it here.
In the basemap plot the scatter points are hidden behind the fillcontinents. Removing the two lines
#m.drawmapboundary(fill_color='#46bcec')
#m.fillcontinents(color = 'white',lake_color='#46bcec')
would show you the points. Because this might be undesired, the best solution would be to place the scatter on top of the rest of the map by using the zorder argument.
m.scatter(lons, lats, marker = 'o', color='r', zorder=5)
Here is the complete code (and I would like to ask you to include this kind of runnable minimal example with hardcoded data the next time you ask a question, as it saves everyone a lot of work inventing the data themselves):
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
import pandas as pd
import io

u = u"""latitude,longitude
42.357778,-71.059444
39.952222,-75.163889
25.787778,-80.224167
30.267222, -97.763889"""

# read in data to use for plotted points
buildingdf = pd.read_csv(io.StringIO(u), delimiter=",")
lat = buildingdf['latitude'].values
lon = buildingdf['longitude'].values

# determine range to print based on min, max lat and lon of the data
margin = 2  # buffer to add to the range
lat_min = min(lat) - margin
lat_max = max(lat) + margin
lon_min = min(lon) - margin
lon_max = max(lon) + margin

# create map using BASEMAP
m = Basemap(llcrnrlon=lon_min,
            llcrnrlat=lat_min,
            urcrnrlon=lon_max,
            urcrnrlat=lat_max,
            lat_0=(lat_max - lat_min)/2,
            lon_0=(lon_max - lon_min)/2,
            projection='merc',
            resolution='h',
            area_thresh=10000.,
            )
m.drawcoastlines()
m.drawcountries()
m.drawstates()
m.drawmapboundary(fill_color='#46bcec')
m.fillcontinents(color='white', lake_color='#46bcec')

# convert lat and lon to map projection coordinates
lons, lats = m(lon, lat)

# plot points as red dots
m.scatter(lons, lats, marker='o', color='r', zorder=5)
plt.show()