Interpolating lines of a polygon

Let's suppose we have 5 (x, y) points that form a closed loop, i.e. a polygon. How can I interpolate or upsample some points so that the polygon has a more rounded look instead of sharp straight lines between points? E.g. see the image: what I have is on the left and what I want is on the right.
A simple MATLAB example is as follows:
xv = [0 2 3 2.5 1 0];
yv = [1 0 2 3.5 3.5 1];
plot(xv, yv)
xlim([-1 4])
ylim([-2 5])
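One common approach (a sketch using SciPy, not an answer taken from the original thread) is to fit a periodic spline through the vertices and evaluate it on a dense parameter grid; splprep/splev are real SciPy functions, and the point data is just the sample from the question:

import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import splprep, splev

xv = [0, 2, 3, 2.5, 1, 0]
yv = [1, 0, 2, 3.5, 3.5, 1]

# Fit an interpolating (s=0), periodic (per=True) B-spline through the
# closed polygon, then sample it densely for a smooth, round-ish outline.
tck, _ = splprep([xv, yv], s=0, per=True)
u_fine = np.linspace(0, 1, 200)
x_smooth, y_smooth = splev(u_fine, tck)

plt.plot(xv, yv, 'o--', label='original polygon')
plt.plot(x_smooth, y_smooth, '-', label='smoothed')
plt.legend()
plt.show()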


How to create an adjacency matrix from VTK / STL file?

I have a .vtk mesh with N points and F polygonal (triangle) faces, and I'd like to build an N x N adjacency matrix to represent the connectivity between the points.
I've tried mesh.GetLines().GetData(); however, this returns an empty array. I've also tried mesh.GetPolys().GetData(), which gives a flat array of 4 x F elements.
From inspecting the .vtk file, I know that each face is given as 3, point1, point2, point3, where I assume the leading 3 indicates that the faces are triangular. From here it is possible to create the adjacency matrix by iterating through the list, but I'd like to know whether there are any built-in VTK functions that can do the job for me.
I also have the mesh in .stl format, if that helps.
Thanks
It is possible to create the adjacency matrix by iterating over the VTK polygon faces and setting a 1 in an initially empty matrix for every edge connection, like so:
import numpy as np
from vtk.util.numpy_support import vtk_to_numpy

# `mesh` is the vtkPolyData read from the .vtk file
coords = vtk_to_numpy(mesh.GetPoints().GetData())   # (N, 3) point coordinates
polygons = vtk_to_numpy(mesh.GetPolys().GetData())  # flat: 3, p1, p2, p3, ...
adj = np.zeros((len(coords), len(coords)))
print('ADJACENCY MATRIX SHAPE:', adj.shape)
for i in range(0, len(polygons), 4):
    line = polygons[i:i + 4]  # 3, point1, point2, point3
    face = line[1:]           # point1, point2, point3
    n1, n2, n3 = face[0], face[1], face[2]
    adj[n1, n2] = 1
    adj[n2, n1] = 1
    adj[n1, n3] = 1
    adj[n3, n1] = 1
    adj[n2, n3] = 1
    adj[n3, n2] = 1
Alternatively, if using .stl files, this can be done with the trimesh and networkx packages, like so:
import networkx
import trimesh

mesh = trimesh.load_mesh('mesh.stl')
adj = networkx.adjacency_matrix(trimesh.graph.vertex_adjacency_graph(mesh))
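Note that networkx.adjacency_matrix returns a SciPy sparse matrix; if you need a dense N x N array, call .toarray() on the result.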

Merge masked selection of array with original array

I'm facing a problem with an assignment at the moment.
I have an array which contains 400 2D points, i.e. an array of shape 400 x 2.
Then I have a mask that selects m points (rows) that I want to compute some changes on.
As per the assignment, I'm supposed to store the points that I want to change in an array of shape m x 2.
I then make my changes on this resulting array. But after the changes, I want to insert the new computed values into my original array at the original indices, and I have no clue how to do that.
So I basically have:
orig (400 x 2)
mask (400 x 1) (boolean mask selecting the rows to edit)
change (m x 2) (just the changes I want to add)
changed (m x 2) (the original values plus the change, with a factor applied)
How do I transform my change or changed arrays with the mask so that I can add/insert the changes into my original array?
Look at this example with 4 rows.
The principle is that the same mask that extracts rows from orig can also be used to write the sub-array back to its original place.
import numpy as np
x = np.array([[1,2],[3,4],[5,6],[7,8]])
print(x)
mask_ix = np.array([True,False, True, False])
masked = x[mask_ix,:]
masked = masked * 10 # the change
print(masked)
x[mask_ix] = masked # write the changed rows back into x at the masked positions
print(x)
x = [[1 2]
[3 4]
[5 6]
[7 8]]
masked = [[10 20]
[50 60]]
x = [[10 20]
[ 3 4]
[50 60]
[ 7 8]]
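Applied to the shapes in the question, a minimal sketch might look like this (the data, the mask, and the factor are made up for illustration):

import numpy as np

rng = np.random.default_rng(0)
orig = rng.random((400, 2))              # 400 2D points
mask = rng.random(400) < 0.1             # boolean mask selecting m rows
change = rng.random((mask.sum(), 2))     # the (m, 2) changes to apply
factor = 0.5                             # hypothetical scaling factor

changed = orig[mask] + factor * change   # (m, 2): original values + change
orig[mask] = changed                     # write back at the original indices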

Change linear interpolation in Matplotlib

I have data that looks like this:
sv_length status right_edge left_edge
0 (0.999, 3.0] 0.142857 3.000000e+00 9.990000e-01
1 (3.0, 6.0] 0.125000 6.000000e+00 3.000000e+00
2 (6.0, 11.3] 0.153846 1.130000e+01 6.000000e+00
3 (11.3, 18.375] 0.964286 1.837500e+01 1.130000e+01
4 (18.375, 28.0] 0.965517 2.800000e+01 1.837500e+01
When I plot it, like so:
sbn.lineplot(x=binned['right_edge'], y=binned['status'])
plt.xlim([1, 100])
plt.xscale("log")
I get the following:
There is clearly linear interpolation between the values in "right_edge" (i.e. the slanted edges). How can I make it so that the plot shows right-most interpolation (i.e. extending each value to the left until the previous point is reached, thus producing horizontal lines)?
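One way to get horizontal steps (a sketch, not an answer from the original thread) is matplotlib's drawstyle option; 'steps-pre' holds each y value constant to the left of its x position, which matches the behaviour described above. seaborn's lineplot forwards extra keyword arguments to matplotlib's plot, so it should accept the same option:

import matplotlib.pyplot as plt

# 'steps-pre': the interval (x[i-1], x[i]] takes the value y[i],
# i.e. each value extends to the left until the previous point.
plt.plot(binned['right_edge'], binned['status'], drawstyle='steps-pre')
plt.xlim([1, 100])
plt.xscale('log')
plt.show()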

Bernoulli random number generator

I cannot understand how the Bernoulli random number generator used in NumPy works and would like some explanation of it. For example:
np.random.binomial(size=3, n=1, p= 0.5)
Results:
[1 0 0]
n = number of trials
p = probability of occurrence
size = number of experiments
How are the generated numbers/results of "0" or "1" determined?
Update:
I created a Restricted Boltzmann Machine which always produces the same results despite being "random" across multiple code executions. The randomness is seeded using
np.random.seed(10)
import numpy as np

np.random.seed(10)

def sigmoid(u):
    return 1 / (1 + np.exp(-u))

def gibbs_vhv(W, hbias, vbias, x):
    f_s = sigmoid(np.dot(x, W) + hbias)
    h_sample = np.random.binomial(size=f_s.shape, n=1, p=f_s)
    f_u = sigmoid(np.dot(h_sample, W.transpose()) + vbias)
    v_sample = np.random.binomial(size=f_u.shape, n=1, p=f_u)
    return [f_s, h_sample, f_u, v_sample]

def reconstruction_error(f_u, x):
    cross_entropy = -np.mean(
        np.sum(
            x * np.log(sigmoid(f_u)) + (1 - x) * np.log(1 - sigmoid(f_u)),
            axis=1))
    return cross_entropy

X = np.array([[1, 0, 0, 0]])

# Weights to hidden
W = np.array([[-3.85, 10.14, 1.16],
              [6.69, 2.84, -7.73],
              [1.37, 10.76, -3.98],
              [-6.18, -5.89, 8.29]])
hbias = np.array([1.04, -4.48, 2.50])          # 3 biases for 3 neurons in hidden
vbias = np.array([-6.33, -1.68, -1.25, 3.45])  # 4 biases for 4 neurons in input

k = 2
v_sample = X
for i in range(k):
    [f_s, h_sample, f_u, v_sample] = gibbs_vhv(W, hbias, vbias, v_sample)
    start = v_sample
    if i < 2:
        print('f_s:', f_s)
        print('h_sample:', h_sample)
        print('f_u:', f_u)
        print('v_sample:', v_sample)
        print(v_sample)
        print('iter:', i, ' h:', h_sample, ' x:', v_sample,
              ' entropy:%.3f' % reconstruction_error(f_u, v_sample))
Results:
[[1 0 0 0]]
f_s: [[ 0.05678618 0.99652957 0.97491304]]
h_sample: [[0 1 1]]
f_u: [[ 0.99310473 0.00139984 0.99604968 0.99712837]]
v_sample: [[1 0 1 1]]
[[1 0 1 1]]
iter: 0 h: [[0 1 1]] x: [[1 0 1 1]] entropy:1.637
f_s: [[ 4.90301318e-04 9.99973278e-01 9.99654440e-01]]
h_sample: [[0 1 1]]
f_u: [[ 0.99310473 0.00139984 0.99604968 0.99712837]]
v_sample: [[1 0 1 1]]
[[1 0 1 1]]
iter: 1 h: [[0 1 1]] x: [[1 0 1 1]] entropy:1.637
I am asking how the algorithm works to produce the numbers. – WhiteSolstice
Non-technical explanation
If you pass n=1 to the Binomial distribution, it is equivalent to the Bernoulli distribution. In this case the function can be thought of as simulating coin flips. size=3 tells it to flip the coin three times, and p=0.5 makes it a fair coin with equal probability of heads (1) or tails (0).
The result [1 0 0] means the coin came down once with heads and twice with tails facing up. This is random, so running it again could give a different sequence, like [1 1 0], [0 1 0], or maybe even [1 1 1]. Although you cannot get an equal number of 1s and 0s in three flips, over many runs heads and tails come up equally often on average.
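To see this concretely, here is a small demonstration (the seed is arbitrary, chosen only to make the output reproducible):

import numpy as np

np.random.seed(0)                              # arbitrary seed
print(np.random.binomial(size=3, n=1, p=0.5))  # three coin flips, e.g. [1 1 1]

# Over many flips the fraction of heads approaches p = 0.5.
many = np.random.binomial(size=100_000, n=1, p=0.5)
print(many.mean())                             # close to 0.5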
Technical explanation
NumPy implements random number generation in C. The source code for the Binomial distribution can be found here. Two different algorithms are actually implemented.
If n * p <= 30 it uses inverse transform sampling.
If n * p > 30, the BTPE algorithm of Kachitvichyanukul and Schmeiser (1988) is used. (The publication is not freely available.)
I think both methods, but certainly the inverse transform sampling, depend on a random number generator to produce uniformly distributed random numbers. NumPy internally uses a Mersenne Twister pseudo-random number generator. The uniform random numbers are then transformed into the desired distribution.
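For the n=1 (Bernoulli) case, inverse transform sampling reduces to a simple threshold test. The following is a minimal sketch of the idea, not NumPy's actual implementation:

import numpy as np

def bernoulli_inverse_transform(p, size, seed=None):
    # Draw uniform numbers on [0, 1) and threshold them at p:
    # u < p happens with probability exactly p, giving a 1; otherwise 0.
    rng = np.random.default_rng(seed)
    u = rng.random(size)
    return (u < p).astype(int)

print(bernoulli_inverse_transform(0.5, size=3))  # e.g. [1 0 0]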
A Binomially distributed random variable has two parameters n and p, and can be thought of as the distribution of the number of heads obtained when flipping a biased coin n times, where the probability of getting a head at each flip is p. (More formally it is a sum of independent Bernoulli random variables with parameter p).
For instance, if n=10 and p=0.5, one could simulate a draw from Bin(10, 0.5) by flipping a fair coin 10 times and summing the number of times that the coin lands heads.
In addition to the n and p parameters described above, np.random.binomial takes an additional size parameter. If size=1, np.random.binomial computes a single draw from the Binomial distribution. If size=k for some integer k, k independent draws from the same Binomial distribution will be computed. size can also be a tuple of integers, in which case a whole np.array of that shape will be filled with independent draws from the Binomial distribution.
Note that the Binomial distribution is a generalisation of the Bernoulli distribution - in the case that n=1, Bin(n,p) has the same distribution as Ber(p).
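A few illustrative draws (the output varies from run to run):

import numpy as np

print(np.random.binomial(n=10, p=0.5, size=5))      # five draws of Bin(10, 0.5)
print(np.random.binomial(n=1, p=0.5, size=(2, 3)))  # a 2 x 3 array of Ber(0.5) draws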
For more information about the binomial distribution see: https://en.wikipedia.org/wiki/Binomial_distribution

tf.nn.sparse_softmax_cross_entropy_with_logits - labels without one hot encoding in tensorflow

I am trying to understand how tf.nn.sparse_softmax_cross_entropy_with_logits works.
Description says:
A common use case is to have logits of shape [batch_size, num_classes]
and labels of shape [batch_size]. But higher dimensions are supported.
So it suggests that we can feed labels in raw form, for example [1, 2, 3].
Now, since all computations are done per batch, I believe the following is possible:
In all cases, assume a batch size of two.
Case 1 (with one batch):
logit:
0.4 0.2 0.4
0.3 0.3 0.4
corresponding labels:
2
3
I am guessing labels might be coded as
[1 0 0]
[0 1 0]
Case 2 (with another batch):
logit:
0.4 0.2 0.4
0.3 0.3 0.4
corresponding labels:
1
2
I am guessing labels might be coded as follows (I do not see what prevents us from this coding, unless tensorflow keeps track of how it coded labels before):
[1 0 0]
[0 1 0]
So we have two different codings. Is it safe to assume that tensorflow keeps the coding consistent from batch to batch?
There is no real coding happening. The labels are just the position of the 1 in the corresponding one-hot vector:
0 -> [1, 0, 0]
1 -> [0, 1, 0]
2 -> [0, 0, 1]
This "coding" will be used in every batch.