Is the numpy sum method superfluous in this code?

I am reading a book and think I have found an error in the code below:
import numpy as np

def relu(x):
    return (x > 0) * x

def relu2dev(x):
    return (x > 0)

street_lights = np.array([[1, 0, 1], [0, 1, 1], [0, 0, 1], [1, 1, 1]])
walk_stop = np.array([[1, 1, 0, 0]]).T
alpha = 0.2
hidden_size = 4
weights_0_1 = 2 * np.random.random((3, hidden_size)) - 1
weights_1_2 = 2 * np.random.random((hidden_size, 1)) - 1

for it in range(60):
    layer_2_error = 0
    for i in range(len(street_lights)):
        layer_0 = street_lights[i:i+1]
        layer_1 = relu(np.dot(layer_0, weights_0_1))
        layer_2 = np.dot(layer_1, weights_1_2)
        layer_2_delta = (layer_2 - walk_stop[i:i+1])
        # -> layer_2_delta's shape is (1,1), so why np.sum?
        layer_2_error += np.sum((layer_2_delta) ** 2)
        layer_1_delta = layer_2_delta.dot(weights_1_2.T) * relu2dev(layer_1)
        weights_1_2 -= alpha * layer_1.T.dot(layer_2_delta)
        weights_0_1 -= alpha * layer_0.T.dot(layer_1_delta)
    if (it % 10 == 9):
        print("Error: " + str(layer_2_error))
The spot in question is marked with the # -> comment:
layer_2_delta's shape is (1,1), so why use np.sum here? I think np.sum can be removed, but I'm not quite sure, since the code comes from a book.

As you say, layer_2_delta has a shape of (1,1). This means it is a two-dimensional array with one element: layer_2_delta = np.array([[X]]). However, layer_2_error is a scalar. So you can get the scalar from the array either by selecting the value at the first index (layer_2_delta[0,0]) or by summing all the elements (which in this case is just the one). As the book seems to use "sum of squared errors", it is natural to keep the notation of squaring each element in the array and then adding them all up (for instructional purposes): this is more general (e.g., for cases where the layer has more than one element) than the index approach. But you're right, there are other ways to do this :).
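A quick illustration of the two equivalent ways to extract the scalar (the value here is made up):

import numpy as np

layer_2_delta = np.array([[0.25]])        # shape (1, 1), as in the loop above

err_by_index = layer_2_delta[0, 0] ** 2   # pick the single element, then square it
err_by_sum = np.sum(layer_2_delta ** 2)   # square every element, then sum them all

assert np.isclose(err_by_index, err_by_sum)  # both give the same scalar here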

OR-Tools CP-SAT Solver: Channeling Constraint dependent on x

I am trying to add the following constraints to my model. My problem: the function g() expects x as a binary numpy array, so the result arr_a depends on the current value of x at every step of the optimization!
Afterwards, I want the max of this array times x to be smaller than 50.
How can I add this constraint dynamically, so that arr_a is always correctly calculated from the current value of x at each iteration, while telling the model to keep the constraint arr_a * x <= 50? Currently I get an error when adding the constraint to the model, because g() expects x as a numpy array to calculate arr_a, arr_b and arr_c (g uses np.where(x == 1) in its calculation).
# Init model
from ortools.sat.python import cp_model
model = cp_model.CpModel()

# Declare the variables
x = []
for i in range(self.ds.n_banks):
    x.append(model.NewIntVar(0, 1, "x[%i]" % (i)))

# Add bool vars
a = model.NewBoolVar('a')

arr_a, arr_b, arr_c = g(df1, df2, df3, x)
model.Add((arr_a.astype('int32') * x).max() <= 50).OnlyEnforceIf(a)
model.Add((arr_a.astype('int32') * x).max() > 50).OnlyEnforceIf(a.Not())
Afterwards I add the objective function, which naturally also depends on x.
model.Minimize(target(x))
def target(x):
    arr_a, arr_b, arr_c = g(df1, df2, df3, x)
    return (3 * arr_b * x + 2 * arr_c * x).sum()
EDIT:
My problem changed a bit and I managed to get it to work without errors. Nevertheless, I noticed that the constraint is never actually met! My self-defined function is a highly non-linear function that expects the indices where x == 1 and where x == 0 and returns a numpy array. It is also not possible to rebuild it with the predefined functions of the SAT solver.
# Init model
model = cp_model.CpModel()

# Declare the variables
x = [model.NewIntVar(0, 1, "x[%i]" % (i)) for i in range(66)]

# Add hints
[model.AddHint(x[i], np.random.choice(2, 1, p=[0.4, 0.6])[0]) for i in range(66)]

open_elements = [model.NewBoolVar("open_elements[%i]" % (i)) for i in range(66)]
closed_elements = [model.NewBoolVar("closed_elements[%i]" % (i)) for i in range(66)]

# Open indices as bool vars
for i in range(66):
    model.Add(x[i] == 1).OnlyEnforceIf(open_elements[i])
    model.Add(x[i] != 1).OnlyEnforceIf(open_elements[i].Not())
    model.Add(x[i] != 1).OnlyEnforceIf(closed_elements[i])
    model.Add(x[i] == 1).OnlyEnforceIf(closed_elements[i].Not())

model.Add((self_defined_function(np.where(open_elements), np.where(closed_elements), some_array).astype('int32') * x - some_vector).all() <= 0)
Even when I apply a simpler function, it will not work properly.
model.Add((self_defined_function(x, some_array).astype('int32') * x - some_vector).all() <= 0)
I also tried the following:
arr_indices_open = []
arr_indices_closed = []
for i in range(66):
    if open_elements[i] == True:
        arr_indices_open.append(i)
    else:
        arr_indices_closed.append(i)

# Final constraint
arr_ = self_defined_function(arr_indices_open, arr_indices_closed, some_array)[0].astype('int32')
for i in range(66):
    model.Add(arr_[i] * x[i] <= some_other_vector[i])
A minimal example of the self-defined function, with which I simply try to say that n_closed shall be smaller than 10. Even that condition is not met by the solver:
def self_defined_function(arr_indices_closed):
    return len(arr_indices_closed)

arr_ = self_defined_function(arr_indices_closed)
for i in range(66):
    model.Add(arr_ < 10)
I'm not sure I fully understand the question, but generally, if you want to optimize a function g(x), you'll have to implement it using the solver's primitives (docs).
It's easier when your calculation coincides with an existing solver function, e.g. if you're trying to calculate a linear expression, but it can get harder when the calculation is more complex. However, I believe that's the only way.
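For example, the minimal "n_closed shall be smaller than 10" requirement from the question can be stated entirely with CP-SAT primitives by summing the closed_elements Booleans instead of calling the external function; a sketch under that assumption:

from ortools.sat.python import cp_model

model = cp_model.CpModel()
n = 66

x = [model.NewBoolVar("x[%i]" % i) for i in range(n)]
closed_elements = [model.NewBoolVar("closed_elements[%i]" % i) for i in range(n)]

# Channel closed_elements[i] <=> (x[i] == 0)
for i in range(n):
    model.Add(x[i] == 0).OnlyEnforceIf(closed_elements[i])
    model.Add(x[i] == 1).OnlyEnforceIf(closed_elements[i].Not())

# "Number of closed elements is smaller than 10", expressed with solver primitives.
model.Add(sum(closed_elements) < 10)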

Crop sides of a numpy array by w elements (where w may be zero)

I'd like to generalize the following to allow the number w of cropped elements to possibly be zero:
import numpy as np

a = np.arange(42).reshape(6, 7)
w = 1  # Number of elements to crop on each side.
print(a[w:-w, w:-w])
And also in this generalization to arbitrary dimensions:
def crop(array, width):
    width = np.broadcast_to(width, array.ndim)
    return array[tuple(slice(w, -w) for w in width)]
The problem is that a slice with stop=0 returns no elements.
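A quick check of that edge case:

import numpy as np

a = np.arange(42).reshape(6, 7)
w = 0
print(a[w:-w].shape)  # (0, 7): -0 is just 0, so the slice becomes a[0:0] and keeps nothing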
What is the most elegant solution? Is there any existing library function?
It would be wonderful if numpy.pad would allow negative values for cropping, but apparently it does not.
It may be necessary to write a custom function:
def crop_array(array, width) -> np.ndarray:
    array = np.asarray(array)
    width = np.broadcast_to(width, array.ndim)
    assert np.all(width >= 0)
    return array[tuple((slice(None) if w == 0 else slice(w, -w)) for w in width)]
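A quick usage check, assuming the crop_array function above:

import numpy as np

a = np.arange(42).reshape(6, 7)
print(crop_array(a, 1).shape)       # (4, 5): one element cropped from every edge
print(crop_array(a, 0).shape)       # (6, 7): width 0 leaves the array untouched
print(crop_array(a, (2, 0)).shape)  # (2, 7): per-axis widths also work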

Shortest rotation between two vectors not working as expected

import numpy as np
from scipy.spatial.transform import Rotation as R

def signed_angle_between_vecs(target_vec, start_vec, plane_normal=None):
    start_vec = np.array(start_vec)
    target_vec = np.array(target_vec)
    start_vec = start_vec / np.linalg.norm(start_vec)
    target_vec = target_vec / np.linalg.norm(target_vec)
    if plane_normal is None:
        arg1 = np.dot(np.cross(start_vec, target_vec), np.cross(start_vec, target_vec))
    else:
        arg1 = np.dot(np.cross(start_vec, target_vec), plane_normal)
    arg2 = np.dot(start_vec, target_vec)
    return np.arctan2(arg1, arg2)

world_frame_axis = input_rotation_object.apply(canonical_axis)
angle = signed_angle_between_vecs(canonical_axis, world_frame_axis)
axis_angle = np.cross(world_frame_axis, canonical_axis) * angle
C = R.from_rotvec(axis_angle)
transformed_world_frame_axis_to_canonical = C.apply(world_frame_axis)
I am trying to align world_frame_axis to canonical_axis by performing a rotation around the normal vector generated by the cross product between the two vectors, using the signed angle between the two axes.
However, this code does not work. If you start with some arbitrary rotation as input_rotation_object you will see that transformed_world_frame_axis_to_canonical does not match canonical_axis.
What am I doing wrong?
I'm not a Python coder so I might be wrong, but this looks suspicious:
start_vec = start_vec/np.linalg.norm(start_vec)
From the name I would expect that np.linalg.norm already normalizes the vector, so the line should be:
start_vec = np.linalg.norm(start_vec)
and all the similar lines too ...
Also, the atan2 operands don't look right to me. I would do this (in pseudo-math):
a = start_vec / |start_vec |  // normalized start
b = target_vec / |target_vec| // normalized end
u = a                         // normalized first axis of the plane
v = cross(u, b)
v = cross(v, u)
v = v / |v|                   // normalized second axis of the plane, perpendicular to u
dx = dot(u, b)                // target vector in 2D, aligned to start
dy = dot(v, b)
ang = atan2(dy, dx)
Beware that ang might be negated (depending on your notation); if that is the case, either add a minus sign or reverse the operand order of the cross product from cross(u,v) to cross(v,u). You can also do a sanity check by comparing the result to the unsigned angle:
ang' = acos(dot(a,b))
In absolute value they should be the same (+/- rounding error).
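A rough NumPy translation of that pseudocode, as a sketch only: it assumes 3D vectors that are not parallel (so the cross product is non-zero) and has not been tested against the original setup.

import numpy as np

def signed_angle(start_vec, target_vec):
    a = np.asarray(start_vec, dtype=float)
    b = np.asarray(target_vec, dtype=float)
    a /= np.linalg.norm(a)                  # normalized start
    b /= np.linalg.norm(b)                  # normalized end
    u = a                                   # first in-plane axis
    v = np.cross(np.cross(u, b), u)         # second in-plane axis, perpendicular to u
    v /= np.linalg.norm(v)
    dx = np.dot(u, b)                       # b expressed in the (u, v) 2D frame
    dy = np.dot(v, b)
    return np.arctan2(dy, dx)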

How to use maxout activation function in tensorflow?

I want to use the maxout activation function in TensorFlow, but I don't know which function to use.
I sent a pull request for maxout, here is the link:
https://github.com/tensorflow/tensorflow/pull/5528
Code is as follows:
import tensorflow as tf

def maxout(inputs, num_units, axis=None):
    shape = inputs.get_shape().as_list()
    if axis is None:
        # Assume that channel is the last dimension
        axis = -1
    num_channels = shape[axis]
    if num_channels % num_units:
        raise ValueError('number of features({}) is not a multiple of num_units({})'
                         .format(num_channels, num_units))
    shape[axis] = -1
    shape += [num_channels // num_units]
    outputs = tf.reduce_max(tf.reshape(inputs, shape), -1, keep_dims=False)
    return outputs
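A hypothetical usage sketch (assuming TensorFlow 1.x and a fully known static input shape, since the reshape above uses the static shape list):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[32, 12])  # batch of 32, 12 features
y = maxout(x, num_units=4)                      # max over groups of 3 features -> shape (32, 4)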
I don't think there is a built-in maxout activation, but there is nothing stopping you from making it yourself. You could do something like the following.
with tf.variable_scope('maxout'):
    layer_input = ...
    layer_output = None
    for i in range(n_maxouts):
        W = tf.get_variable('W_%d' % i, (n_input, n_output))
        b = tf.get_variable('b_%d' % i, (n_output,))
        y = tf.matmul(layer_input, W) + b
        if layer_output is None:
            layer_output = y
        else:
            layer_output = tf.maximum(layer_output, y)
Note that this is code I just wrote in my browser, so there may be syntax errors, but you should get the general idea: you simply perform a number of linear transforms and take the maximum across all of them.
How about this code?
This seems to work in my test.
def max_out(input_tensor, output_size):
    shape = input_tensor.get_shape().as_list()
    if shape[1] % output_size == 0:
        return tf.transpose(tf.reduce_max(tf.split(input_tensor, output_size, 1), axis=2))
    else:
        raise ValueError("Output size or input tensor size is not fine. Please check it. The remainder needs to be zero.")
I referred to the diagram on the following page.
From version 1.4 on you can use tf.contrib.layers.maxout.
Maxout is a layer that computes N*M outputs for an N*1 input and then returns the maximum value across the columns, i.e. the final output also has shape N*1. Basically, it uses multiple linear fits to mimic a complex function.
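A small NumPy illustration of that idea (plain NumPy, just to show the grouping and max, not TensorFlow code):

import numpy as np

pre_acts = np.random.randn(5, 12)             # batch of 5, 12 pre-activation features
num_units = 4                                 # desired output width (12 must be divisible by 4)
grouped = pre_acts.reshape(5, num_units, -1)  # (5, 4, 3): 3 linear pieces per output unit
out = grouped.max(axis=-1)                    # (5, 4): keep the strongest piece per unit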

Storing plot objects in a list

I asked this question yesterday about storing a plot within an object. I tried implementing the first approach (aware that I did not specify that I was using qplot() in my original question) and noticed that it did not work as expected.
library(ggplot2)  # add ggplot2

string = "C:/example.pdf"  # Setup pdf
pdf(string, height = 6, width = 9)

x_range <- range(1, 50)  # Specify range

# Create a list to hold the plot objects.
pltList <- list()
pltList[]

for (i in 1:16) {
  # Organise data
  y = (1:50) * i * 1000  # Get y col
  x = (1:50)             # Get x col
  y = log(y)             # Use natural log

  # Regression
  lm.0 = lm(formula = y ~ x)                # Make linear model
  inter = summary(lm.0)$coefficients[1, 1]  # Get intercept
  slop = summary(lm.0)$coefficients[2, 1]   # Get slope

  # Make plot name
  pltName <- paste('a', i, sep = '')

  # Make plot object
  p <- qplot(
    x, y,
    xlab = "Radius [km]",
    ylab = "Services [log]",
    xlim = x_range,
    main = paste("Sample", i)
  ) + geom_abline(intercept = inter, slope = slop, colour = "red", size = 1)

  print(p)
  pltList[[pltName]] = p
}

# Close the PDF file
dev.off()
I have used sample numbers in this case so the code runs if it is just copied. I did spend a few hours puzzling over this but I cannot figure out what is going wrong. It writes the first set of pdfs without problem, so I have 16 pdfs with the correct plots.
Then when I use this piece of code:
string = "C:/test_tabloid.pdf"
pdf(string, height = 11, width = 17)
grid.newpage()
pushViewport( viewport( layout = grid.layout(3, 3) ) )
vplayout <- function(x, y){viewport(layout.pos.row = x, layout.pos.col = y)}
counter = 1
# Page 1
for (i in 1:3){
for (j in 1:3){
pltName <- paste( 'a', counter, sep = '' )
print( pltList[[pltName]], vp = vplayout(i,j) )
counter = counter + 1
}
}
dev.off()
the result I get is the last linear model line (abline) on every graph, but the data does not change. When I check my list of plots, it seems that all of them have been overwritten by the most recent plot (with the exception of the abline object).
A less important secondary question is how to generate a multi-page PDF with several plots on each page, but the main goal of my code is to store the plots in a list that I can access at a later date.
Ok, so if your plot command is changed to
p <- qplot(data = data.frame(x = x, y = y),
           x, y,
           xlab = "Radius [km]",
           ylab = "Services [log]",
           xlim = x_range,
           ylim = c(0, 10),
           main = paste("Sample", i)
) + geom_abline(intercept = inter, slope = slop, colour = "red", size = 1)
then everything works as expected. Here's what I suspect is happening (although Hadley could probably clarify things). When ggplot2 "saves" the data, what it actually does is save a data frame, and the names of the parameters. So for the command as I have given it, you get
> summary(pltList[["a1"]])
data: x, y [50x2]
mapping: x = x, y = y
scales: x, y
faceting: facet_grid(. ~ ., FALSE)
-----------------------------------
geom_point:
stat_identity:
position_identity: (width = NULL, height = NULL)
mapping: group = 1
geom_abline: colour = red, size = 1
stat_abline: intercept = 2.55595281266726, slope = 0.05543539319091
position_identity: (width = NULL, height = NULL)
However, if you don't specify a data parameter in qplot, all the variables get evaluated in the current scope, because there is no attached (read: saved) data frame.
data: [0x0]
mapping: x = x, y = y
scales: x, y
faceting: facet_grid(. ~ ., FALSE)
-----------------------------------
geom_point:
stat_identity:
position_identity: (width = NULL, height = NULL)
mapping: group = 1
geom_abline: colour = red, size = 1
stat_abline: intercept = 2.55595281266726, slope = 0.05543539319091
position_identity: (width = NULL, height = NULL)
So when the plot is generated the second time around, rather than using the original values, it uses the current values of x and y.
I think you should use the data argument in qplot, i.e., store your vectors in a data frame.
See Hadley's book, Section 4.4:
The restriction on the data is simple: it must be a data frame. This is restrictive, and unlike other graphics packages in R. Lattice functions can take an optional data frame or use vectors directly from the global environment. ...
The data is stored in the plot object as a copy, not a reference. This has two
important consequences: if your data changes, the plot will not; and ggplot2 objects are entirely self-contained so that they can be save()d to disk and later load()ed and plotted without needing anything else from that session.
There is a bug in your code concerning list subscripting. It should be
pltList[[pltName]]
not
pltList[pltName]
Note:

> class(pltList[1])
[1] "list"

pltList[1] is a list containing the first element of pltList.

> class(pltList[[1]])
[1] "ggplot"

pltList[[1]] is the first element of pltList.
For your second question: Multi-page pdfs are easy -- see help(pdf):
onefile: logical: if true (the default) allow multiple figures in one
file. If false, generate a file with name containing the
page number for each page. Defaults to ‘TRUE’.
For your main question, I don't understand if you want to store the plot inputs in a list for later processing, or the plot outputs. If it is the latter, I am not sure that plot() returns an object you can store and retrieve.
Another suggestion regarding your second question would be to use either Sweave or Brew as they will give you complete control over how you display your multi-page pdf.
Have a look at this related question.