I import a .graphml file with
G = nx.read_graphml("file.graphml")
It's a graph that shows fuzzing status, and the nodes carry attributes like FuncSize, Coverage, etc.
How can I visualize the graph so that, for example, FuncSize affects node size and Coverage affects node color?
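A minimal sketch of one way to do this with networkx and matplotlib (an assumption on my part: the attributes are numeric, or at least parseable as floats, and named exactly FuncSize and Coverage on every node):
import networkx as nx
import matplotlib.pyplot as plt

G = nx.read_graphml("file.graphml")

# Pull the per-node attributes into lists ordered like G.nodes
sizes = [float(G.nodes[n].get("FuncSize", 1)) for n in G.nodes]
coverage = [float(G.nodes[n].get("Coverage", 0)) for n in G.nodes]

pos = nx.spring_layout(G)
nodes = nx.draw_networkx_nodes(G, pos, node_size=sizes, node_color=coverage, cmap=plt.cm.viridis)
nx.draw_networkx_edges(G, pos)
nx.draw_networkx_labels(G, pos, font_size=8)
plt.colorbar(nodes, label="Coverage")  # draw_networkx_nodes returns a mappable
plt.show()
You may want to rescale FuncSize (e.g. multiply it by a constant) so the node sizes stay in a readable range.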
I'm new to opencv and I'm trying to detect person through cv2.findContours with morphological transformation of the video. Here is the code snippet..
import numpy as np
import imutils
import cv2 as cv
import time
cap = cv.VideoCapture(0)
avg = None  # running average of the background
while cap.isOpened():
    ret, frame = cap.read()
    #frame = imutils.resize(frame, width=700,height=100)
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    gray = cv.GaussianBlur(gray, (21, 21), 0)
    if avg is None:
        # initialise the running average with the first frame
        avg = gray.copy().astype("float")
        continue
    cv.accumulateWeighted(gray, avg, 0.5)
    mask2 = cv.absdiff(gray, cv.convertScaleAbs(avg))
    mask = cv.absdiff(gray, cv.convertScaleAbs(avg))
    contours0, hierarchy = cv.findContours(mask2, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)
    for cnt in contours0:
        .
        .
        .
The rest of the code has the logic of a contour passing a line and incrementing the count.
The problem I'm encountering is that cv.findContours detects every movement/change in the frame (including the person). What I want is for cv.findContours to detect only the person and not any other movement. I know that person detection can be achieved with a Haar cascade, but is there any way I can implement the detection using cv2.findContours?
If not, is there a way I can still do a morphological transformation and detect people? The project I'm working on requires filtering out noise and much of the background to detect the person and increment its count on passing the line.
I will show you two options to do this.
First, the method I mentioned in the comments, which uses YOLO to detect humans:
Use saliency to detect the standout parts of the video
Apply K-Means Clustering to cluster the objects into individual clusters.
Apply background subtraction and erosion or dilation (or both; it depends on the video, so try them all and see which one does the best job). A sketch of this step follows the list.
Crop the objects
Send the cropped objects to Yolo
If the class name is a pedestrian or human then draw the bounding boxes on them.
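Here is a minimal sketch of the background-subtraction and morphology step only (an assumption on my part: OpenCV's MOG2 subtractor is acceptable; the input path and the kernel size/iterations are placeholders to tune per video):
import cv2 as cv
import numpy as np

cap = cv.VideoCapture("video.mp4")  # hypothetical input
subtractor = cv.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)
kernel = np.ones((5, 5), np.uint8)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    fg_mask = subtractor.apply(frame)
    # Erode to remove small specks of noise, then dilate to restore the surviving blobs.
    fg_mask = cv.erode(fg_mask, kernel, iterations=1)
    fg_mask = cv.dilate(fg_mask, kernel, iterations=2)
    cv.imshow("foreground", fg_mask)
    if cv.waitKey(30) & 0xFF == ord('q'):
        break
cap.release()
cv.destroyAllWindows()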
Second, using OpenCV's built-in pedestrian detection, which is much easier:
Convert the frames to grayscale
Use pedestrian_cascade.detectMultiScale() on the grey frames.
Draw a bounding box over each pedestrian
The second method is much simpler, but it depends on what is expected of you for this project. A minimal sketch of it is shown below.
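This sketch assumes the haarcascade_fullbody.xml cascade that ships with opencv-python is the pedestrian detector you want; swap in another cascade file if not:
import cv2 as cv

pedestrian_cascade = cv.CascadeClassifier(cv.data.haarcascades + 'haarcascade_fullbody.xml')
cap = cv.VideoCapture(0)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
    # Detect pedestrians in the grayscale frame
    pedestrians = pedestrian_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
    for (x, y, w, h) in pedestrians:
        cv.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv.imshow('pedestrians', frame)
    if cv.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv.destroyAllWindows()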
When reading the documentation for TFX, especially the parts related to pre-processing of the data, I get the impression that the pipeline design is more appropriate for categorical features.
I wanted to know whether TFX could also be used for pipelines involving images.
Yes, TFX can also be used for pipelines involving images.
Regarding pre-processing the data specifically: as far as I know, there are no built-in image functions in Tensorflow Transform.
But the transformations can be built from TensorFlow ops; for example, image augmentation can be done using tf.image, and so on.
Sample code for a transformation of image data using Tensorflow Transform, which scales the value of each pixel to [0, 1] by dividing it by 255, is shown below:
def preprocessing_fn(inputs):
    """Preprocess input columns into transformed columns."""
    # Since we are modifying some features and leaving others unchanged, we
    # start by setting `outputs` to a copy of `inputs`.
    outputs = inputs.copy()
    # Scale the pixel values to [0, 1].
    # NUMERIC_FEATURE_KEYS holds the names of the columns of pixel values.
    for key in NUMERIC_FEATURE_KEYS:
        outputs[key] = tf.divide(outputs[key], 255)
    outputs[LABEL_KEY] = outputs[LABEL_KEY]
    return outputs
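If you do want an actual color-to-grayscale conversion, that can also be expressed with a plain tf.image op inside preprocessing_fn. A minimal sketch (the column names IMAGE_KEY and LABEL_KEY are made up, and the image column is assumed to already hold decoded [height, width, 3] tensors):
import tensorflow as tf

IMAGE_KEY = "image"  # hypothetical column of decoded RGB images
LABEL_KEY = "label"

def preprocessing_fn(inputs):
    outputs = inputs.copy()
    # Normalize to [0, 1], then convert RGB to a single grayscale channel.
    image = tf.cast(outputs[IMAGE_KEY], tf.float32) / 255.0
    outputs[IMAGE_KEY] = tf.image.rgb_to_grayscale(image)
    outputs[LABEL_KEY] = outputs[LABEL_KEY]
    return outputs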
I am using the below code to generate a graph with two clusters with four nodes each
For some reason, when I render the graph, the clusters do not show up.
What am I doing wrong?
import pygraphviz as pgv
A=pgv.AGraph(bgcolor="#cccccc",layout='neato')
A.add_edge('R1','R2')
A.add_edge('R2','R3')
A.add_edge('R3','R4')
A.add_edge('R4','R5')
A.add_edge('R5','R6')
A.add_subgraph(['R1','R2','R3','R4'], 'pbd01')
A.add_subgraph(['R5','R6','R7','R8'], 'pbd02')
A.write('cluster.dot')
A.draw('Topology.png', prog='neato')
I believe there are two problems:
The 'neato' rendering engine does not support clustering
By convention, rendering engines that do support clustering require that the subgraph name starts with 'cluster'
The following code / image was produced with the 'dot' engine and correctly clusters the nodes:
import pygraphviz as pgv
A=pgv.AGraph(bgcolor="#cccccc",layout='dot')
A.add_edge('R1','R2')
A.add_edge('R2','R3')
A.add_edge('R3','R4')
A.add_edge('R4','R5')
A.add_edge('R5','R6')
A.add_subgraph(['R1','R2','R3','R4'], name='cluster_pbd01')
A.add_subgraph(['R5','R6','R7','R8'], name='cluster_pbd02')
A.write('cluster.dot')
A.draw('Topology.png', prog='dot')
Topology.png (rendered output showing the two clusters)
Is there a way to place patterns into selected areas on an imshow graph? To be precise, I need to make it so that, in addition to the numerical-data-carrying colored squares, I also have different patterns in other squares indicate different failure modes for the experiment (and also generate a key explaining the meaning of these different patterns). An example of a pattern that would be useful would be various types of crosshatches. I need to be able to do this without disrupting the main color-numerical data relationship on the graph.
Due to the back-end I am working within for the GUI containing the graph, I cannot utilize patches (they fail to pickle and make it from the back-end to the front-end via the multiprocessing package). I was wondering if anyone knew of another way to do this.
grid = np.ma.array(grid, mask=np.isnan(grid))
ax.imshow(grid, interpolation='nearest', aspect='equal', vmax=private.vmax, vmin=private.vmin)
# Up to here works fine and draws the graph showing only the data, with white
# spaces for any point that failed
if show_fail and faildat != []:
    faildat = faildat[np.lexsort((faildat[:, yind], faildat[:, xind]))]
    fails = []
    for i in range(len(faildat)):  # gives coordinates with failures as (x, y)
        fails.append((faildat[i, 1], faildat[i, 0]))
    for F in fails:
        ax.FUNCTION NEEDED HERE
ax.minorticks_off()
ax.set_xticks(range(len(placex)))
ax.set_yticks(range(len(placey)))
ax.set_xticklabels(placex)
ax.set_yticklabels(placey, rotation=0)
ax.colorbar()
ax.show()
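One possible direction, sketched under the assumption that Line2D artists (unlike patches) survive your pickling path, is to overlay marker symbols on the failed cells with ax.plot; with imshow the cell centres sit at integer coordinates, so the (x, y) pairs collected in fails can be plotted directly:
# Sketch: overlay 'x' markers on the failed cells instead of patch-based hatching.
if fails:
    fx, fy = zip(*fails)
    ax.plot(fx, fy, linestyle='none', marker='x', markersize=12,
            color='black', label='failed')
    ax.legend(loc='upper right')
Different marker shapes (e.g. 'x', '+', '*') can then stand in for the different failure modes, with the legend acting as the key.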
How can I do a basic face alignment on a 2-dimensional image with the assumption that I have the position/coordinates of the mouth and eyes.
Is there any algorithm that I could implement to correct the face alignment on images?
Face (or image) alignment refers to aligning one image (or face in your case) with respect to another (or a reference image/face). It is also referred to as image registration. You can do that using either appearance (intensity-based registration) or key-point locations (feature-based registration). The second category stems from image motion models where one image is considered a displaced version of the other.
In your case the landmark locations (e.g. the two eyes and the mouth) provide a good reference set for straightforward feature-based registration. Assuming you have the locations of a set of points in both 2D images, x_1 and x_2, you can estimate a similarity transform (rotation, translation, scaling), i.e. a planar 2D transform S that maps x_1 to x_2. You can additionally add reflection to that, though for faces this will most likely be unnecessary.
Estimation can be done by forming the normal equations and solving a linear least-squares (LS) problem for the system x_1 = S x_2 using linear regression. A 2D similarity transform has 4 unknown parameters (1 rotation, 2 translation, 1 scaling), so 2 point correspondences (each contributing 2 equations) are enough to solve for it; with 3 landmarks the system is overdetermined and the LS solution averages out localization error. The solution can be obtained through the Direct Linear Transform (e.g. by applying SVD or a matrix pseudo-inverse). For cases with a sufficiently large number of reference points (e.g. automatically detected ones), a RANSAC-type method can be used for point filtering and outlier removal (though this is not your case here).
After estimating S, apply image warping on the second image to get the transformed grid (pixel) coordinates of the entire image 2. The transform will change pixel locations but not their appearance. Unavoidably some of the transformed regions of image 2 will lie outside the grid of image 1, and you can decide on the values for those null locations (e.g. 0, NaN etc.).
For more details: R. Szeliski, "Image Alignment and Stitching: A Tutorial" (Section 4.3 "Geometric Registration")
In OpenCV see: Geometric Image Transformations, e.g. cv::getRotationMatrix2D, cv::getAffineTransform and cv::warpAffine. Note though that you should estimate and apply a similarity transform (a special case of an affine) in order to preserve angles and shapes.
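A small Python/OpenCV sketch of this feature-based approach; the landmark coordinates below are made up for illustration, and cv2.estimateAffinePartial2D is used because it fits exactly a similarity transform (rotation, uniform scale, translation):
import cv2
import numpy as np

# Hypothetical (x, y) landmarks: left eye, right eye, mouth centre
ref_pts = np.float32([[38, 40], [70, 40], [54, 75]])   # reference face
img_pts = np.float32([[42, 48], [75, 44], [60, 80]])   # face to be aligned

# Estimate the similarity transform mapping img_pts onto ref_pts
M, _ = cv2.estimateAffinePartial2D(img_pts, ref_pts)

img = cv2.imread("face.jpg")
aligned = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
cv2.imwrite("face_aligned.jpg", aligned)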
For the face there is a lot of variability in the feature points, so it won't be possible to fit all feature points perfectly with just an affine transform. The only way to align all the points perfectly is to warp the image given the points: basically you can triangulate the image using the points and apply an affine warp to each triangle to get a warped image in which all the points are aligned.
Face alignment can be handled based on just the eye positions.
OpenCV, Dlib and MTCNN all offer face and eye detection. The Python framework deepface wraps those methods and offers an out-of-the-box detection and alignment function.
Its detectFace function applies detection and alignment in the background.
#!pip install deepface
from deepface import DeepFace
backends = ['opencv', 'ssd', 'dlib', 'mtcnn']
DeepFace.detectFace("img.jpg", detector_backend = backends[0])
Alternatively, you can apply detection and alignment manually.
import matplotlib.pyplot as plt
from deepface.commons import functions
img = functions.load_image("img.jpg")
backends = ['opencv', 'ssd', 'dlib', 'mtcnn']
detected_face = functions.detect_face(img = img, detector_backend = backends[3])
plt.imshow(detected_face)
aligned_face = functions.align_face(img = img, detector_backend = backends[3])
plt.imshow(aligned_face)
processed_img = functions.detect_face(img = aligned_face, detector_backend = backends[3])
plt.imshow(processed_img)
There's a section Aligning Face Images in OpenCV's Face Recognition guide:
http://docs.opencv.org/trunk/modules/contrib/doc/facerec/facerec_tutorial.html#aligning-face-images
The script aligns given images at the eyes. It's written in Python, but should be easy to translate to other languages. I know of a C# implementation by Sorin Miron:
http://code.google.com/p/stereo-face-recognition/