networkx position (biggest) nodes in the middle of a graph - matplotlib

I've been creating graphs with the networkx package and everything works fine. I would like to make the graphs even better by placing the bigger nodes in the middle of the graph, but the layout functions from networkx do not seem to do the job. The node sizes represent degree (the more connected a node is, the bigger it is drawn).
Is there any way to lay out these graphs so that the bigger nodes are positioned in the middle? It does not have to be automated; I could also manually choose the nodes and give them the middle position, but I cannot find how to do that either.
If this is not possible with networkx or something else, is there any way to do it with Gephi or Cytoscape? I had trouble with Gephi in that it does not import the graph the same way I see it in my Jupyter notebook (the colors and the node and edge sizes are not imported).
To summarize: I want to put the bigger nodes in the middle of my graph, but I don't mind how I get it done (with networkx, matplotlib or whatever).
Unfortunately I cannot provide my actual graphs, but here is an example which could look like one of my graphs; it is a directed weighted graph.
import networkx as nx
import matplotlib.pyplot as plt

G = nx.gnp_random_graph(15, 0.2, directed=True)
d = dict(G.degree(weight='weight'))
d = {k: v/10 for k, v in d.items()}
weights = [G[u][v].get('weight', 1) for u, v in G.edges()]  # was undefined in the original snippet
edge_size = [(float(i)/sum(weights))*100 for i in weights]
node_size = [(v*1000) for v in d.values()]
nx.draw(G, width=edge_size, node_size=node_size)
plt.show()

There are several options:
import networkx as nx
import matplotlib.pyplot as plt

G = nx.gnp_random_graph(15, 0.2, directed=True)
node_degree = dict(G.degree(weight='weight'))

# A) Precompute node positions, and then manually override some node positions.
node_positions = nx.spring_layout(G)
node_positions[0] = (0.5, 0.5) # by default, networkx plots on a canvas with the origin at (0, 0) and a width and height of 1; (0.5, 0.5) is hence the center
nx.draw(G, pos=node_positions, node_size=[100 * node_degree[node] for node in G])
plt.show()

# B) Use netgraph to draw the graph and then drag the nodes around with the mouse.
from netgraph import InteractiveGraph # pip install netgraph
plot_instance = InteractiveGraph(G, node_size=node_degree)
plt.show()

# C) Modify the Fruchterman-Reingold algorithm to include a gravitational force
#    that pulls nodes with a large "mass" towards the center.
#    This is left as an exercise to the interested reader (i.e. very non-trivial).
Edit: option C is non-trivial but also very do-able.
Here is my stab at it.
#!/usr/bin/env python
# coding: utf-8
"""
FR layout but with an additional gravitational pull towards a gravitational center.
The pull is proportional to the mass of the node.
"""
import numpy as np
import matplotlib.pyplot as plt
import warnings

# pip install netgraph
from netgraph._main import BASE_SCALE
from netgraph._utils import (
    _get_unique_nodes,
    _edge_list_to_adjacency_matrix,
)
from netgraph._node_layout import (
    _is_within_bbox,
    _get_temperature_decay,
    _get_fr_repulsion,
    _get_fr_attraction,
    _rescale_to_frame,
    _handle_multiple_components,
    _reduce_node_overlap,
)

DEBUG = False


@_handle_multiple_components
def get_fruchterman_reingold_newton_layout(edges,
                                           edge_weights         = None,
                                           k                    = None,
                                           g                    = 1.,
                                           scale                = None,
                                           origin               = None,
                                           gravitational_center = None,
                                           initial_temperature  = 1.,
                                           total_iterations     = 50,
                                           node_size            = 0,
                                           node_mass            = 1,
                                           node_positions       = None,
                                           fixed_nodes          = None,
                                           *args, **kwargs):
"""Modified Fruchterman-Reingold node layout.
Uses a modified Fruchterman-Reingold algorithm [Fruchterman1991]_ to compute node positions.
This algorithm simulates the graph as a physical system, in which nodes repell each other.
For connected nodes, this repulsion is counteracted by an attractive force exerted by the edges, which are simulated as springs.
Unlike the original algorithm, there is an additional attractive force pulling nodes towards a gravitational center, in proportion to their masses.
Parameters
----------
edges : list
The edges of the graph, with each edge being represented by a (source node ID, target node ID) tuple.
edge_weights : dict
Mapping of edges to edge weights.
k : float or None, default None
Expected mean edge length. If None, initialized to the sqrt(area / total nodes).
g : float or None, default 1.
Gravitational constant that sets the magnitude of the gravitational pull towards the center.
origin : tuple or None, default None
The (float x, float y) coordinates corresponding to the lower left hand corner of the bounding box specifying the extent of the canvas.
If None is given, the origin is placed at (0, 0).
scale : tuple or None, default None
The (float x, float y) dimensions representing the width and height of the bounding box specifying the extent of the canvas.
If None is given, the scale is set to (1, 1).
gravitational_center : tuple or None, default None
The (float x, float y) coordinates towards which nodes experience a gravitational pull.
If None, the gravitational center is placed at the center of the canvas defined by origin and scale.
total_iterations : int, default 50
Number of iterations.
initial_temperature: float, default 1.
Temperature controls the maximum node displacement on each iteration.
Temperature is decreased on each iteration to eventually force the algorithm into a particular solution.
The size of the initial temperature determines how quickly that happens.
Values should be much smaller than the values of `scale`.
node_size : scalar or dict, default 0.
Size (radius) of nodes.
Providing the correct node size minimises the overlap of nodes in the graph,
which can otherwise occur if there are many nodes, or if the nodes differ considerably in size.
node_mass : scalar or dict, default 1.
Mass of nodes.
Nodes with higher mass experience a larger gravitational pull towards the center.
node_positions : dict or None, default None
Mapping of nodes to their (initial) x,y positions. If None are given,
nodes are initially placed randomly within the bounding box defined by `origin` and `scale`.
If the graph has multiple components, explicit initial positions may result in a ValueError,
if the initial positions fall outside of the area allocated to that specific component.
fixed_nodes : list or None, default None
Nodes to keep fixed at their initial positions.
Returns
-------
node_positions : dict
Dictionary mapping each node ID to (float x, float y) tuple, the node position.
References
----------
.. [Fruchterman1991] Fruchterman, TMJ and Reingold, EM (1991) ‘Graph drawing by force‐directed placement’,
Software: Practice and Experience
"""
    # This is just a wrapper around `_fruchterman_reingold_newton`, which implements (the loop body of) the algorithm proper.
    # This wrapper handles the initialization of variables to their defaults (if not explicitly provided),
    # and checks inputs for self-consistency.
    assert len(edges) > 0, "The list of edges has to be non-empty."

    if origin is None:
        if node_positions:
            minima = np.min(list(node_positions.values()), axis=0)
            origin = np.min(np.stack([minima, np.zeros_like(minima)], axis=0), axis=0)
        else:
            origin = np.zeros((2))
    else:
        # ensure that it is an array
        origin = np.array(origin)

    if scale is None:
        if node_positions:
            delta = np.array(list(node_positions.values())) - origin[np.newaxis, :]
            maxima = np.max(delta, axis=0)
            scale = np.max(np.stack([maxima, np.ones_like(maxima)], axis=0), axis=0)
        else:
            scale = np.ones((2))
    else:
        # ensure that it is an array
        scale = np.array(scale)

    assert len(origin) == len(scale), \
        "Arguments `origin` (d={}) and `scale` (d={}) need to have the same number of dimensions!".format(len(origin), len(scale))
    dimensionality = len(origin)

    if gravitational_center is None:
        gravitational_center = origin + 0.5 * scale
    else:
        # ensure that it is an array
        gravitational_center = np.array(gravitational_center)

    if fixed_nodes is None:
        fixed_nodes = []

    connected_nodes = _get_unique_nodes(edges)

    if node_positions is None: # assign random starting positions to all nodes
        node_positions_as_array = np.random.rand(len(connected_nodes), dimensionality) * scale + origin
        unique_nodes = connected_nodes
    else:
        # 1) check input dimensionality
        dimensionality_node_positions = np.array(list(node_positions.values())).shape[1]
        assert dimensionality_node_positions == dimensionality, \
            "The dimensionality of values of `node_positions` (d={}) must match the dimensionality of `origin`/ `scale` (d={})!".format(dimensionality_node_positions, dimensionality)

        is_valid = _is_within_bbox(list(node_positions.values()), origin=origin, scale=scale)
        if not np.all(is_valid):
            error_message = "Some given node positions are not within the data range specified by `origin` and `scale`!"
            error_message += "\n\tOrigin : {}, {}".format(*origin)
            error_message += "\n\tScale : {}, {}".format(*scale)
            error_message += "\nThe following nodes do not fall within this range:"
            for ii, (node, position) in enumerate(node_positions.items()):
                if not is_valid[ii]:
                    error_message += "\n\t{} : {}".format(node, position)
            error_message += "\nThis error can occur if the graph contains multiple components but some or all node positions are initialised explicitly (i.e. node_positions != None)."
            raise ValueError(error_message)

        # 2) handle discrepancies in nodes listed in node_positions and nodes extracted from edges
        if set(node_positions.keys()) == set(connected_nodes):
            # all starting positions are given;
            # no superfluous nodes in node_positions;
            # nothing left to do
            unique_nodes = connected_nodes
        else:
            # some node positions are provided, but not all
            for node in connected_nodes:
                if not (node in node_positions):
                    warnings.warn("Position of node {} not provided. Initializing to random position within frame.".format(node))
                    node_positions[node] = np.random.rand(2) * scale + origin

            unconnected_nodes = []
            for node in node_positions:
                if not (node in connected_nodes):
                    unconnected_nodes.append(node)
                    fixed_nodes.append(node)
                    # warnings.warn("Node {} appears to be unconnected. The current node position will be kept.".format(node))

            unique_nodes = connected_nodes + unconnected_nodes

        node_positions_as_array = np.array([node_positions[node] for node in unique_nodes])

    total_nodes = len(unique_nodes)

    if isinstance(node_size, (int, float)):
        node_size = node_size * np.ones((total_nodes))
    elif isinstance(node_size, dict):
        node_size = np.array([node_size[node] if node in node_size else 0. for node in unique_nodes])

    if isinstance(node_mass, (int, float)):
        node_mass = node_mass * np.ones((total_nodes))
    elif isinstance(node_mass, dict):
        node_mass = np.array([node_mass[node] if node in node_mass else 0. for node in unique_nodes])

    adjacency = _edge_list_to_adjacency_matrix(
        edges, edge_weights=edge_weights, unique_nodes=unique_nodes)

    # Forces in FR are symmetric.
    # Hence we need to ensure that the adjacency matrix is also symmetric.
    adjacency = adjacency + adjacency.transpose()

    if fixed_nodes:
        is_mobile = np.array([False if node in fixed_nodes else True for node in unique_nodes], dtype=bool)

        mobile_positions = node_positions_as_array[is_mobile]
        fixed_positions = node_positions_as_array[~is_mobile]

        mobile_node_sizes = node_size[is_mobile]
        fixed_node_sizes = node_size[~is_mobile]

        mobile_node_masses = node_mass[is_mobile]
        fixed_node_masses = node_mass[~is_mobile]

        # reorder adjacency
        total_mobile = np.sum(is_mobile)
        reordered = np.zeros((adjacency.shape[0], total_mobile))
        reordered[:total_mobile, :total_mobile] = adjacency[is_mobile][:, is_mobile]
        reordered[total_mobile:, :total_mobile] = adjacency[~is_mobile][:, is_mobile]
        adjacency = reordered
    else:
        is_mobile = np.ones((total_nodes), dtype=bool)

        mobile_positions = node_positions_as_array
        fixed_positions = np.zeros((0, 2))

        mobile_node_sizes = node_size
        fixed_node_sizes = np.array([])

        mobile_node_masses = node_mass
        fixed_node_masses = np.array([])

    if k is None:
        area = np.prod(scale)
        k = np.sqrt(area / float(total_nodes))

    temperatures = _get_temperature_decay(initial_temperature, total_iterations)

    # --------------------------------------------------------------------------------
    # main loop

    for ii, temperature in enumerate(temperatures):
        candidate_positions = _fruchterman_reingold_newton(mobile_positions, fixed_positions,
                                                           mobile_node_sizes, fixed_node_sizes,
                                                           adjacency, temperature, k,
                                                           mobile_node_masses, fixed_node_masses,
                                                           gravitational_center, g)
        is_valid = _is_within_bbox(candidate_positions, origin=origin, scale=scale)
        mobile_positions[is_valid] = candidate_positions[is_valid]

    # --------------------------------------------------------------------------------
    # format output

    node_positions_as_array[is_mobile] = mobile_positions

    if np.all(is_mobile):
        node_positions_as_array = _rescale_to_frame(node_positions_as_array, origin, scale)

    node_positions = dict(zip(unique_nodes, node_positions_as_array))

    return node_positions
def _fruchterman_reingold_newton(mobile_positions, fixed_positions,
                                 mobile_node_radii, fixed_node_radii,
                                 adjacency, temperature, k,
                                 mobile_node_masses, fixed_node_masses,
                                 gravitational_center, g):
    """Inner loop of modified Fruchterman-Reingold layout algorithm."""

    combined_positions = np.concatenate([mobile_positions, fixed_positions], axis=0)
    combined_node_radii = np.concatenate([mobile_node_radii, fixed_node_radii])

    delta = mobile_positions[np.newaxis, :, :] - combined_positions[:, np.newaxis, :]
    distance = np.linalg.norm(delta, axis=-1)

    # alternatively: (hack adapted from igraph)
    if np.sum(distance==0) - np.trace(distance==0) > 0: # i.e. if off-diagonal entries in distance are zero
        warnings.warn("Some nodes have the same position; repulsion between the nodes is undefined.")
        rand_delta = np.random.rand(*delta.shape) * 1e-9
        is_zero = distance <= 0
        delta[is_zero] = rand_delta[is_zero]
        distance = np.linalg.norm(delta, axis=-1)

    # subtract node radii from distances to prevent nodes from overlapping
    distance -= mobile_node_radii[np.newaxis, :] + combined_node_radii[:, np.newaxis]

    # prevent distances from becoming less than zero due to overlap of nodes
    distance[distance <= 0.] = 1e-6 # 1e-13 is numerical accuracy, and we will be taking the square shortly

    with np.errstate(divide='ignore', invalid='ignore'):
        direction = delta / distance[..., None] # i.e. the unit vector

    # calculate forces
    repulsion = _get_fr_repulsion(distance, direction, k)
    attraction = _get_fr_attraction(distance, direction, adjacency, k)
    gravity = _get_gravitational_pull(mobile_positions, mobile_node_masses, gravitational_center, g)

    if DEBUG:
        r = np.median(np.linalg.norm(repulsion, axis=-1))
        a = np.median(np.linalg.norm(attraction, axis=-1))
        g = np.median(np.linalg.norm(gravity, axis=-1))
        print(r, a, g)

    displacement = attraction + repulsion + gravity

    # limit maximum displacement using temperature
    displacement_length = np.linalg.norm(displacement, axis=-1)
    displacement = displacement / displacement_length[:, None] * np.clip(displacement_length, None, temperature)[:, None]

    mobile_positions = mobile_positions + displacement

    return mobile_positions
def _get_gravitational_pull(mobile_positions, mobile_node_masses, gravitational_center, g):
    delta = gravitational_center[np.newaxis, :] - mobile_positions
    direction = delta / np.linalg.norm(delta, axis=-1)[:, np.newaxis]
    # the pull is proportional to the mass relative to the mean mass, so nodes
    # heavier than average are pulled inward and lighter nodes are pushed outward
    magnitude = mobile_node_masses - np.mean(mobile_node_masses)
    return g * magnitude[:, np.newaxis] * direction
if __name__ == '__main__':

    import networkx as nx
    from netgraph import Graph

    G = nx.gnp_random_graph(15, 0.2, directed=True)

    node_degree = dict(G.degree(weight='weight'))

    node_positions = get_fruchterman_reingold_newton_layout(
        list(G.edges()),
        node_size={node : BASE_SCALE * degree for node, degree in node_degree.items()},
        node_mass=node_degree, g=2
    )

    Graph(G, node_layout=node_positions, node_size=node_degree)
    plt.show()

Related

Non Linear MPC optimization of a 2 dimensional drone

I am trying to simulate a drone on a 2-dimensional lunar surface. The drone can apply thrust along the z-axis of its body, and it can change its body angle from -90 degrees to +90 degrees.
The first planned acceleration in the y direction that the MPC function gives is a negative value that exceeds the lunar accel_g, which I set to 1.635 m/s^2; thus, the drone cancels out the initial velocity very quickly. This should not happen, since I set the body-angle constraints such that the thrust should never be able to reduce the vertical velocity: the drone's vertical velocity should be reduced only by lunar gravity. I cannot find what is wrong with the code.
Also: is there a way I can apply rotation to the marker of the plot? I want to change the cross marker so that it can represent the changes in attitude.
function run_mpc(initial_position, initial_velocity, initial_angle)

    model = Model(Ipopt.Optimizer)

    Δt = 0.1
    num_time_steps = 20      # Change this -> Affects Optimization
    max_acceleration_Thr = 3 # Max Thrust / Mass
    max_pitch_angle = 90
    accel_g = 1.635          # 1/6 of Earth g
    des_pos = [-1, 0]

    @variables model begin
        position[1:2, 1:num_time_steps]
        velocity[1:2, 1:num_time_steps]
        acceleration[1:2, 1:num_time_steps]
        -max_pitch_angle <= angle[1:num_time_steps] <= max_pitch_angle
        0 <= accel_Thr[1:num_time_steps] <= max_acceleration_Thr
    end

    # Dynamics constraints
    @NLconstraint(model, [i=2:num_time_steps, j=[1]], acceleration[j, i] == accel_Thr[i-1]*sind(angle[i-1]))
    @NLconstraint(model, [i=2:num_time_steps, j=[2]], acceleration[j, i] == (accel_Thr[i-1]*cosd(angle[i-1]))-accel_g)
    @NLconstraint(model, [i=2:num_time_steps, j=1:2],
        velocity[j, i] == velocity[j, i - 1] + (acceleration[j, i - 1]) * Δt)
    @NLconstraint(model, [i=2:num_time_steps, j=1:2],
        position[j, i] == position[j, i - 1] + velocity[j, i - 1] * Δt)

    # Cost function: minimize final position and final velocity
    # For moving to [-2,0] with min. vertical velocity:
    # sum(([-2,0]-position[:, end]).^2) + sum(velocity[[2], end].^2)
    @NLobjective(model, Min,
        100 * sum((des_pos[i]-position[i, num_time_steps])^2 for i in 1:2) + sum(velocity[i, num_time_steps]^2 for i in 1:2))

    # Initial conditions:
    @NLconstraint(model, [i=1:2], position[i, 1] == initial_position[i])
    @NLconstraint(model, [i=1:2], velocity[i, 1] == initial_velocity[i])
    @NLconstraint(model, angle[1] == initial_angle)

    optimize!(model)
    return value.(position), value.(velocity), value.(acceleration), value.(angle[2:end])
end;
begin
    # The robot's starting position and velocity
    q = [1.0, 0.0]
    v = [-2.0, 2.0]
    ang = 45
    Δt = 0.1

    # Recording Position, Acceleration, Attitude, Planned Positions
    qs_x = []
    qs_y = []
    as_x = []
    as_y = []
    angs = []
    q_plans = []
    u_plans = []

    anim = @animate for i in 1:90 # This determines the number of MPC runs
        # Plot the current position & attitude
        plot(label = "Drone", [q[1]], [q[2]], marker=(:rect, 10), xlim=(-2, 2), ylim=(-2, 2))
        plot!(label = "Body Axis", [q[1]], [q[2]], marker=(:cross, 18, :grey))
        push!(qs_x, q[1])
        push!(qs_y, q[2])

        # Run the MPC control optimization
        q_plan, v_plan, u_plan, ang_plan = run_mpc(q, v, ang)

        # Draw the planned future states from the MPC optimization
        plot!(label = "Opt. Path", q_plan[1, :], q_plan[2, :], linewidth=5, arrow=true, c=:orange)

        # Draw the planned acceleration
        plot!(label = "Opt. Accel", u_plan[1, 1:2], u_plan[2, 1:2], linewidth=3, arrow=true, c=:red)

        # Save acceleration & angle data to csv
        u = u_plan[:, 1]
        push!(as_x, u[1])
        push!(as_y, u[2])
        push!(angs, ang)
        push!(u_plans, u_plan)

        # Apply the planned acceleration & attitude and simulate one step in time
        global ang = ang_plan[1]
        global v += u * Δt
        global q += v * Δt
    end
    gif(anim, "~/Downloads/NLmpc_angle.gif", fps=60)
end
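
On the marker-rotation sub-question: in matplotlib (the library this page is tagged with), a marker can be rotated by passing a tuple marker (numsides, style, angle); style 2 draws an asterisk, so (4, 2, angle) is a cross rotated by the given number of degrees. A minimal sketch, assuming the attitude angle is available in degrees (whether Plots.jl exposes the same option depends on the backend):
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
for x, angle in zip(range(5), (0, 15, 30, 45, 60)):
    # tuple marker (numsides, 2, angle): a 4-spoke asterisk, i.e. a cross,
    # rotated by `angle` degrees
    ax.plot([x], [0], marker=(4, 2, angle), markersize=18, color='grey')
ax.set_xlim(-1, 5)
plt.show()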

Find 7 vertices of a box using openCV

I don't know if this question has been asked here before. If yes, then I'm sorry.
I have a box positioned so that its height, width, and length faces are all visible. I understand the steps to get vertices; however, most of the examples on the net only describe how to get 4 vertices from a 2D plane. So my question is: how can we get 7 vertices (like the picture above) and handle them in numpy? How do we differentiate between upper points and lower points?
I will be using Python to determine this.
Here's my attempt to get the 8 corners of the 3d rectangle. I masked on the saturation channel of the HSV color space since that separates out white.
I used findContours to get the contour of the box and then used approxPolyDP to get a six-point approximation (the six visible corners).
From there I approximated the two "hidden" corners via a parallelogram approximation. For each point I looked two points behind and created a fourth point that would make a parallelogram with that side. I then took the centroid of these parallelogram points to guess the corner. I hoped that taking the centroid of the points would help even out the error between the parallelogram assumption and the perspective warping, but it did a poor job.
If you need a better approximation there are probably ways to estimate the perspective warping to get the corners.
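One such way, sketched below (an editorial aside under a pinhole-camera assumption, with hypothetical corner coordinates): each family of parallel box edges meets at a vanishing point in the image, and a hidden corner lies on the lines joining its visible neighbours to those vanishing points. In homogeneous coordinates both the line through two points and the intersection of two lines are cross products, so numpy suffices.
import numpy as np

def line(p, q):
    # line through two 2D points, in homogeneous coordinates
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    # intersection of two homogeneous lines; w == 0 means they are parallel
    x, y, w = np.cross(l1, l2)
    return np.array([x / w, y / w])

# hypothetical labels for visible corners, to be filled in from `approx`:
# t0..t3 go around the top face; f0, f1 are the visible bottom corners
# adjacent to the hidden one
t0, t1, t2, t3 = (100, 50), (300, 60), (320, 180), (90, 170)
f0, f1 = (110, 320), (330, 330)

# vanishing points of the two families of top-face edges
v1 = intersect(line(t0, t1), line(t3, t2))
v2 = intersect(line(t0, t3), line(t1, t2))

# if the box edge from f0 to the hidden corner is parallel to the t0-t1 edges,
# and the edge from f1 to it is parallel to the t0-t3 edges, then:
hidden = intersect(line(f0, v1), line(f1, v2))
print(hidden)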
import cv2
import numpy as np
import random

def tup(point):
    return (int(point[0]), int(point[1]))

# load image
img = cv2.imread("box.jpg")

# reduce size to fit on screen
scale = 0.25
h, w = img.shape[:2]
h = int(scale*h)
w = int(scale*w)
img = cv2.resize(img, (w,h))
copy = np.copy(img)

# convert to hsv
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
h, s, v = cv2.split(hsv)

# make mask
mask = cv2.inRange(s, 30, 255)

# dilate and erode to get rid of small holes
kernel = np.ones((5,5), np.uint8)
mask = cv2.dilate(mask, kernel, iterations=1)
mask = cv2.erode(mask, kernel, iterations=1)

# contours # OpenCV 3.4; in OpenCV 2 or 4 findContours returns (contours, _)
_, contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
contour = contours[0] # just take the first one

# approx until 6 points
num_points = 999999
step_size = 0.01
percent = step_size
while num_points >= 6:
    # get number of points
    epsilon = percent * cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, epsilon, True)
    num_points = len(approx)

    # increment
    percent += step_size

# step back and get the points
# there could be more than 6 points if our step size misses it
percent -= step_size * 2
epsilon = percent * cv2.arcLength(contour, True)
approx = cv2.approxPolyDP(contour, epsilon, True)

# draw contour
cv2.drawContours(img, [approx], -1, (0,0,200), 2)

# draw points
for point in approx:
    point = point[0] # drop extra layer of brackets
    center = (int(point[0]), int(point[1]))
    cv2.circle(img, center, 4, (150, 200, 0), -1)

# do parallelogram approx to get the two "hidden" corners to complete our 3d rectangle
proposals = []
size = len(approx)
for a in range(size):
    # get points backwards
    two = approx[a - 2][0]
    one = approx[a - 1][0]
    curr = approx[a][0]

    # get vector from one -> two
    dx = two[0] - one[0]
    dy = two[1] - one[1]
    hidden = [curr[0] + dx, curr[1] + dy]
    proposals.append([hidden, curr, a, two])

    # debug draw
    c = np.copy(copy)
    cv2.circle(c, tup(two), 4, (255, 0, 0), -1)
    cv2.circle(c, tup(one), 4, (0,255,0), -1)
    cv2.circle(c, tup(curr), 4, (0,0,255), -1)
    cv2.circle(c, tup(hidden), 4, (255,255,0), -1)
    cv2.line(c, tup(two), tup(one), (0,0,200), 1)
    cv2.line(c, tup(curr), tup(hidden), (0,0,200), 1)
    cv2.imshow("Mark", c)
    cv2.waitKey(0)

# draw proposals
for point in proposals:
    point = point[0]
    center = (point[0], point[1])
    cv2.circle(img, center, 4, (200, 100, 0), -1)

# group points and sum up points
hidden_corners = [[0,0], [0,0]]
for point in proposals:
    # get index and update hidden corners
    index = point[2] % 2
    pos = point[0]
    hidden_corners[index][0] += pos[0]
    hidden_corners[index][1] += pos[1]

# divide to get centroid
hidden_corners[0][0] /= 3.0
hidden_corners[0][1] /= 3.0
hidden_corners[1][0] /= 3.0
hidden_corners[1][1] /= 3.0

# draw new points
for point in proposals:
    # unpack
    pos = point[0]
    parent = point[1]
    index = point[2] % 2
    source = point[3]

    # draw
    color = [random.randint(0, 150) for a in range(3)]
    cv2.line(img, tup(hidden_corners[index]), tup(parent), (0,0,200), 2)
    cv2.line(img, tup(pos), tup(parent), color, 1)
    cv2.line(img, tup(pos), tup(source), color, 1)
    cv2.circle(img, tup(hidden_corners[index]), 4, (200, 200, 0), -1)

# show
cv2.imshow("Image", img)
cv2.imshow("Mask", mask)
cv2.waitKey(0)

Unravel Index loops forever

I am doing some work using image processing and sparse coding. Problem is, the following code works only on some images.
Here is the image that it works perfectly on:
And here is the image that it loops forever on:
Here is the code:
import cv2
import numpy as np
import networkx as nx
from preproc import Preproc
# From https://github.com/vicariousinc/science_rcn/blob/master/science_rcn/learning.py
def sparsify(bu_msg, suppress_radius=3):
    """Make a sparse representation of the edges by greedily selecting features from the
    output of preprocessing layer and suppressing overlapping activations.

    Parameters
    ----------
    bu_msg : 3D numpy.ndarray of float
        The bottom-up messages from the preprocessing layer.
        Shape is (num_feats, rows, cols)
    suppress_radius : int
        How many pixels in each direction we assume this filter
        explains when included in the sparsification.

    Returns
    -------
    frcs : see train_image.
    """
    frcs = []
    img = bu_msg.max(0) > 0
    while True:
        r, c = np.unravel_index(img.argmax(), img.shape)
        print(r, c)
        if not img[r, c]:
            break
        frcs.append((bu_msg[:, r, c].argmax(), r, c))
        img[r - suppress_radius:r + suppress_radius + 1,
            c - suppress_radius:c + suppress_radius + 1] = False
    return np.array(frcs)


if __name__ == '__main__':
    # note: cv2.imread cannot read URLs; download the two images locally first
    img = cv2.imread('https://i.stack.imgur.com/Nb08A.png', 0)
    img2 = cv2.imread('https://i.stack.imgur.com/2MW93.png', 0)
    prp = Preproc()
    bu_msg = prp.fwd_infer(img)
    frcs = sparsify(bu_msg)
and the accompanying preprocessing code:
"""A pre-processing layer of the RCN model. See Sec S8.1 for details.
"""
import numpy as np
from scipy.ndimage import maximum_filter
from scipy.ndimage.filters import gaussian_filter
from scipy.signal import fftconvolve
class Preproc(object):
"""
A simplified preprocessing layer implementing Gabor filters and suppression.
Parameters
----------
num_orients : int
Number of edge filter orientations (over 2pi).
filter_scale : float
A scale parameter for the filters.
cross_channel_pooling : bool
Whether to pool across neighboring orientation channels (cf. Sec S8.1.4).
Attributes
----------
filters : [numpy.ndarray]
Kernels for oriented Gabor filters.
pos_filters : [numpy.ndarray]
Kernels for oriented Gabor filters with all-positive values.
suppression_masks : numpy.ndarray
Masks for oriented non-max suppression.
"""
def __init__(self,
num_orients=16,
filter_scale=2.,
cross_channel_pooling=False):
self.num_orients = num_orients
self.filter_scale = filter_scale
self.cross_channel_pooling = cross_channel_pooling
self.suppression_masks = generate_suppression_masks(filter_scale=filter_scale,
num_orients=num_orients)
def fwd_infer(self, img, brightness_diff_threshold=18.):
"""Compute bottom-up (forward) inference.
Parameters
----------
img : numpy.ndarray
The input image.
brightness_diff_threshold : float
Brightness difference threshold for oriented edges.
Returns
-------
bu_msg : 3D numpy.ndarray of float
The bottom-up messages from the preprocessing layer.
Shape is (num_feats, rows, cols)
"""
filtered = np.zeros((len(self.filters),) + img.shape, dtype=np.float32)
for i, kern in enumerate(self.filters):
filtered[i] = fftconvolve(img, kern, mode='same')
localized = local_nonmax_suppression(filtered, self.suppression_masks)
# Threshold and binarize
localized *= (filtered / brightness_diff_threshold).clip(0, 1)
localized[localized < 1] = 0
if self.cross_channel_pooling:
pooled_channel_weights = [(0, 1), (-1, 1), (1, 1)]
pooled_channels = [-np.ones_like(sf) for sf in localized]
for i, pc in enumerate(pooled_channels):
for channel_offset, factor in pooled_channel_weights:
ch = (i + channel_offset) % self.num_orients
pos_chan = localized[ch]
if factor != 1:
pos_chan[pos_chan > 0] *= factor
np.maximum(pc, pos_chan, pc)
bu_msg = np.array(pooled_channels)
else:
bu_msg = localized
# Setting background to -1
bu_msg[bu_msg == 0] = -1.
return bu_msg
#property
def filters(self):
return get_gabor_filters(
filter_scale=self.filter_scale, num_orients=self.num_orients, weights=False)
#property
def pos_filters(self):
return get_gabor_filters(
filter_scale=self.filter_scale, num_orients=self.num_orients, weights=True)
def get_gabor_filters(size=21, filter_scale=4., num_orients=16, weights=False):
"""Get Gabor filter bank. See Preproc for parameters and returns."""
def _get_sparse_gaussian():
"""Sparse Gaussian."""
size = 2 * np.ceil(np.sqrt(2.) * filter_scale) + 1
alt = np.zeros((int(size), int(size)), np.float32)
alt[int(size // 2), int(size // 2)] = 1
gaussian = gaussian_filter(alt, filter_scale / np.sqrt(2.), mode='constant')
gaussian[gaussian < 0.05 * gaussian.max()] = 0
return gaussian
gaussian = _get_sparse_gaussian()
filts = []
for angle in np.linspace(0., 2 * np.pi, num_orients, endpoint=False):
acts = np.zeros((size, size), np.float32)
x, y = np.cos(angle) * filter_scale, np.sin(angle) * filter_scale
acts[int(size / 2 + y), int(size / 2 + x)] = 1.
acts[int(size / 2 - y), int(size / 2 - x)] = -1.
filt = fftconvolve(acts, gaussian, mode='same')
filt /= np.abs(filt).sum() # Normalize to ensure the maximum output is 1
if weights:
filt = np.abs(filt)
filts.append(filt)
return filts
def generate_suppression_masks(filter_scale=4., num_orients=16):
"""
Generate the masks for oriented non-max suppression at the given filter_scale.
See Preproc for parameters and returns.
"""
size = 2 * int(np.ceil(filter_scale * np.sqrt(2))) + 1
cx, cy = size // 2, size // 2
filter_masks = np.zeros((num_orients, size, size), np.float32)
# Compute for orientations [0, pi), then flip for [pi, 2*pi)
for i, angle in enumerate(np.linspace(0., np.pi, num_orients // 2, endpoint=False)):
x, y = np.cos(angle), np.sin(angle)
for r in range(1, int(np.sqrt(2) * size / 2)):
dx, dy = round(r * x), round(r * y)
if abs(dx) > cx or abs(dy) > cy:
continue
filter_masks[i, int(cy + dy), int(cx + dx)] = 1
filter_masks[i, int(cy - dy), int(cx - dx)] = 1
filter_masks[num_orients // 2:] = filter_masks[:num_orients // 2]
return filter_masks
def local_nonmax_suppression(filtered, suppression_masks, num_orients=16):
"""
Apply oriented non-max suppression to the filters, so that only a single
orientated edge is active at a pixel. See Preproc for additional parameters.
Parameters
----------
filtered : numpy.ndarray
Output of filtering the input image with the filter bank.
Shape is (num feats, rows, columns).
Returns
-------
localized : numpy.ndarray
Result of oriented non-max suppression.
"""
localized = np.zeros_like(filtered)
cross_orient_max = filtered.max(0)
filtered[filtered < 0] = 0
for i, (layer, suppress_mask) in enumerate(zip(filtered, suppression_masks)):
competitor_maxs = maximum_filter(layer, footprint=suppress_mask, mode='nearest')
localized[i] = competitor_maxs <= layer
localized[cross_orient_max > filtered] = 0
return localized
The problem I found was that np.unravel_index returns all the positions of features for the first image, whereas it only returns (0, 0) continuously for the second. My hypothesis is that it is either a problem with the preprocessing code or a bug in the np.unravel_index function itself, but I am not too sure.
Okay, so it turns out there is an underlying problem with the argmax-based loop over the image. I rewrote the sparsification script to not use argmax and it works exactly the same. It should now work with any image.
def sparsify(bu_msg, suppress_radius=3):
    """Make a sparse representation of the edges by greedily selecting features from the
    output of preprocessing layer and suppressing overlapping activations.

    Parameters
    ----------
    bu_msg : 3D numpy.ndarray of float
        The bottom-up messages from the preprocessing layer.
        Shape is (num_feats, rows, cols)
    suppress_radius : int
        How many pixels in each direction we assume this filter
        explains when included in the sparsification.

    Returns
    -------
    frcs : see train_image.
    """
    frcs = []
    img = bu_msg.max(0) > 0
    for (r, c), _ in np.ndenumerate(img):
        if img[r, c]:
            frcs.append((bu_msg[:, r, c].argmax(), r, c))
            img[r - suppress_radius:r + suppress_radius + 1,
                c - suppress_radius:c + suppress_radius + 1] = False
    return np.array(frcs)
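
A plausible root cause, for what it's worth (an editorial reading of the original loop, not part of the answer above): when the strongest feature lies within suppress_radius of the image border, r - suppress_radius is negative, and a negative slice start counts from the end of the array, making the suppression window empty. That pixel is then never cleared, so argmax keeps returning the same position, e.g. (0, 0), forever. Clamping the slice start would fix the original while-loop:
import numpy as np

img = np.zeros((10, 10), dtype=bool)
img[0, 0] = True
# a negative slice start selects from the end of the array, so this window is empty
print(img[0 - 3:0 + 4, 0 - 3:0 + 4].size)  # 0 -> the True pixel is never suppressed

# clamped suppression window for the original while-loop
r = c = 0
suppress_radius = 3
img[max(r - suppress_radius, 0):r + suppress_radius + 1,
    max(c - suppress_radius, 0):c + suppress_radius + 1] = False
print(img[0, 0])  # False -> the loop can terminate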

High Eigen values always for Edge detection

I am trying to understand the Harris detector, using the explanation here. As per that explanation, I understand that the two eigenvalues of the matrix A indicate what a patch contains: both small for a flat region, one large for an edge, and both large for a corner.
However, when I try to calculate them, the eigenvalues are always high. Below is my main image, from which I extract parts to calculate the eigenvalues.
For a flat area with no visible features, I get this distribution (on the right-most plot), which is good, but the eigenvalues are large:
260935.70201362, 434796.29798638
For a linear edge, I also get high eigenvalues: 16290305.45393251 567780.54606749
For a corner, high values are expected, but now I am doubtful whether these high values are correct, given the above cases:
8958127.80563239 10986758.19436761
Here is my method, translated from the matlab code here. It's the vals value I get directly from numpy's linear algebra library.
import cv2
import numpy as np
from numpy import linalg
from scipy import signal
import matplotlib.pyplot as plt

def plot_derivatives_1(img_rgb, mode=1):
    '''
    img_rgb = image in rgb color space (3 channeled)
    '''
    img_1c = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
    if mode == 1:  # method 1 derivative
        Ix = cv2.Sobel(img_1c, cv2.CV_64F, 1, 0, ksize=3)
        Iy = cv2.Sobel(img_1c, cv2.CV_64F, 0, 1, ksize=3)
    else:
        # another method of derivatives
        dx = np.array([
            [-1, 0, 1],
            [-1, 0, 1],
            [-1, 0, 1]
        ])
        dy = np.transpose(dx)
        Ix = signal.convolve2d(img_1c, dx, mode='valid')
        Iy = signal.convolve2d(img_1c, dy, mode='valid')
    Ix, Iy = Ix.astype(np.float64), Iy.astype(np.float64)  # else gaussian blur later is failing

    # yet to solve why we need A and eigen outputs
    A = np.array([
        [np.sum(Ix*Ix), np.sum(Ix*Iy)],
        [np.sum(Ix*Iy), np.sum(Iy*Iy)]
    ])
    vals, V = linalg.eig(A)
    lamb = vals/np.max(vals)
    print('lambda values:{}'.format(vals))

    fig, ax = plt.subplots(1, 4, figsize=(20, 5))
    ax[0].imshow(img_rgb); ax[0].set_title('Input Image')
    ax[1].imshow(Ix, cmap='gray'); ax[1].set_title(r'$I_x = \dfrac{\partial I}{\partial x}$')
    ax[2].imshow(Iy, cmap='gray'); ax[2].set_title(r'$I_y = \dfrac{\partial I}{\partial y}$')
    ax[3].scatter(Ix, Iy); ax[3].set_xlim([-200, 200]); ax[3].set_ylim([-200, 200])
    ax[3].set_aspect('equal'); ax[3].set_title('Derivatives Distribution')
    ax[3].set_xlabel('Ix'); ax[3].set_ylabel('Iy')
    ax[3].axvline(x=0, color='r'); ax[3].axhline(y=0, color='r')
    plt.tight_layout(); plt.show()
    return Ix, Iy
A sample call for one case (shown here for the corner):
img = cv2.imread(SRC_FOLDER + 'checkersandbooksmall_sample_6.jpg')
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
Ix, Iy = plot_derivatives_1(img_rgb, mode=1)
I use a Jupyter notebook, and the code is just built up as I try to understand the concept.
What am I doing wrong that I always get high eigenvalues, in all cases?
The sample images used for the above cases can be found here.
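
For context (an editorial note, not from the original post): the entries of A are unnormalized sums of gradient products over every pixel in the patch, so their absolute magnitudes grow with patch size, image contrast, and noise; only the relative structure of the two eigenvalues is meaningful. With the numbers quoted above, the edge patch has λ1/λ2 ≈ 29, while the flat and corner patches sit near 1.7 and 1.2, which matches the Harris picture once the raw scale is ignored. A minimal sketch, where flat_thresh is a hypothetical, image-dependent threshold on the per-pixel eigenvalue magnitude:
import numpy as np

def classify_patch(A, n_pixels, flat_thresh=100.0):
    """Rough flat/edge/corner decision from a whole-patch structure tensor A."""
    lam2, lam1 = np.sort(np.linalg.eigvalsh(A))  # ascending, so lam1 >= lam2
    # normalize by patch area so patches of different sizes are comparable
    if lam1 / n_pixels < flat_thresh:
        return 'flat'
    if lam1 / max(lam2, 1e-12) > 10:  # one dominant gradient direction
        return 'edge'
    return 'corner'

# usage, given A and Ix from plot_derivatives_1:
# print(classify_patch(A, Ix.size))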

Avoiding optimization pitfalls when modeling an ordinal predicted variable in PyMC3

I am trying to model an ordinal predicted variable using PyMC3 based on the approach in chapter 23 of Doing Bayesian Data Analysis. I would like to determine a good starting value using find_MAP, but am receiving an optimization error.
The model:
import pymc3 as pm
import numpy as np
import theano
import theano.tensor as tt


# Some helper functions
def cdf(x, location=0, scale=1):
    epsilon = np.array(1e-32, dtype=theano.config.floatX)

    location = tt.cast(location, theano.config.floatX)
    scale = tt.cast(scale, theano.config.floatX)

    div = tt.sqrt(2 * scale ** 2 + epsilon)
    div = tt.cast(div, theano.config.floatX)

    erf_arg = (x - location) / div
    return .5 * (1 + tt.erf(erf_arg + epsilon))


def percent_to_thresh(idx, vect):
    return 5 * tt.sum(vect[:idx + 1]) + 1.5


def full_thresh(thresh):
    idxs = tt.arange(thresh.shape[0] - 1)
    thresh_mod, updates = theano.scan(fn=percent_to_thresh,
                                      sequences=[idxs],
                                      non_sequences=[thresh])
    return tt.concatenate([[-1 * np.inf, 1.5], thresh_mod, [6.5, np.inf]])


def compute_ps(thresh, location, scale):
    f_thresh = full_thresh(thresh)
    return cdf(f_thresh[1:], location, scale) - cdf(f_thresh[:-1], location, scale)


# Generate data
real_ps = [0.05, 0.05, 0.1, 0.1, 0.2, 0.3, 0.2]
data = np.random.choice(7, size=1000, p=real_ps)

# Run model
with pm.Model() as model:
    mu = pm.Normal('mu', mu=4, sd=3)
    sigma = pm.Uniform('sigma', lower=0.1, upper=70)
    thresh = pm.Dirichlet('thresh', a=np.ones(5))
    cat_p = compute_ps(thresh, mu, sigma)
    results = pm.Categorical('results', p=cat_p, observed=data)

with model:
    start = pm.find_MAP()
    trace = pm.sample(2000, start=start)
When running this, I receive the following error:
Applied interval-transform to sigma and added transformed sigma_interval_ to model.
Applied stickbreaking-transform to thresh and added transformed thresh_stickbreaking_ to model.
Traceback (most recent call last):
File "cm_net_log.v1-for_so.py", line 53, in <module>
start = pm.find_MAP()
File "/usr/local/lib/python3.5/site-packages/pymc3/tuning/starting.py", line 133, in find_MAP
specific_errors)
ValueError: Optimization error: max, logp or dlogp at max have non-finite values. Some values may be outside of distribution support. max: {'thresh_stickbreaking_': array([-1.04298465, -0.48661088, -0.84326554, -0.44833646]), 'sigma_interval_': array(-2.220446049250313e-16), 'mu': array(7.68422528308479)} logp: array(-3506.530143064723) dlogp: array([ 1.61013190e-06, nan, -6.73994118e-06,
-6.93873894e-06, 6.03358122e-06, 3.18954680e-06])Check that 1) you don't have hierarchical parameters, these will lead to points with infinite density. 2) your distribution logp's are properly specified. Specific issues:
My questions:
How can I determine why dlogp is nan at certain points?
Is there a different way that I can express this model to avoid dlogp being nan?
Also worth noting:
This model runs fine if I don't find_MAP and use a Metropolis sampler. However, I'd like to have the flexibility of using other samplers as this model becomes more complex.
I have a suspicion that the issue is due to the relationship between the thresholds and the normal distribution, but I don't know how to disentangle them for the optimization.
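
Regarding question 1, one way to localize the problem (an editorial sketch, not from the original posts): evaluate each variable's log-probability at a concrete point. Recent PyMC3 versions provide Model.check_test_point(), which reports the per-variable logp at the test point; the same idea can be applied to the point from the error message (the transformed variable names depend on your PyMC3 version).
# a sketch, assuming the `model` defined above
with model:
    # per-variable logp at the test point; a nan or -inf row
    # identifies the offending variable
    print(model.check_test_point())

    # the same idea at a point of interest, e.g. with mu set to the
    # value reported in the optimization error
    bad_point = model.test_point.copy()
    bad_point['mu'] = np.array(7.68422528308479)
    for rv in model.basic_RVs:
        print(rv.name, rv.logp(bad_point))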
Regarding question 2: I expressed the model for the ordinal predicted variable (single group) differently; I used the Theano @as_op decorator for a function that calculates the probabilities for the outcomes. That also explains why I cannot use find_MAP() or gradient-based samplers: Theano cannot calculate a gradient for the custom function. (http://pymc-devs.github.io/pymc3/notebooks/getting_started.html#Arbitrary-deterministics)
import numpy as np
import pymc3 as pm
import theano.tensor as tt
from scipy.stats import norm
from theano.compile.ops import as_op

# Number of outcomes (df is the DataFrame of ordinal outcomes)
nYlevels = df.Y.cat.categories.size

thresh = [k + .5 for k in range(1, nYlevels)]
thresh_obs = np.ma.asarray(thresh)
thresh_obs[1:-1] = np.ma.masked

@as_op(itypes=[tt.dvector, tt.dscalar, tt.dscalar], otypes=[tt.dvector])
def outcome_probabilities(theta, mu, sigma):
    out = np.empty(nYlevels)
    n = norm(loc=mu, scale=sigma)
    out[0] = n.cdf(theta[0])
    out[1] = np.max([0, n.cdf(theta[1]) - n.cdf(theta[0])])
    out[2] = np.max([0, n.cdf(theta[2]) - n.cdf(theta[1])])
    out[3] = np.max([0, n.cdf(theta[3]) - n.cdf(theta[2])])
    out[4] = np.max([0, n.cdf(theta[4]) - n.cdf(theta[3])])
    out[5] = np.max([0, n.cdf(theta[5]) - n.cdf(theta[4])])
    out[6] = 1 - n.cdf(theta[5])
    return out

with pm.Model() as ordinal_model_single:
    theta = pm.Normal('theta', mu=thresh, tau=np.repeat(.5**2, len(thresh)),
                      shape=len(thresh), observed=thresh_obs, testval=thresh[1:-1])
    mu = pm.Normal('mu', mu=nYlevels/2.0, tau=1.0/(nYlevels**2))
    sigma = pm.Uniform('sigma', nYlevels/1000.0, nYlevels*10.0)
    pr = outcome_probabilities(theta, mu, sigma)
    y = pm.Categorical('y', pr, observed=df.Y.cat.codes.as_matrix())
http://nbviewer.jupyter.org/github/JWarmenhoven/DBDA-python/blob/master/Notebooks/Chapter%2023.ipynb