Using OpenSCAD, I have a module that, like cube(), has a size parameter that can be a single value or a vector of three values. Ultimately, I want a vector of three values.
If the caller passes a single value, I'd like all three values of the vector to be the same. I don't see anything in the language documentation about detecting the type of an argument. So I came up with this hack:
module my_cubelike_thing(size=1) {
    dimensions = concat(size, size, size);
    width = dimensions[0];
    length = dimensions[1];
    height = dimensions[2];
    // ... use width, length, and height ...
}
When size is a single value, the result of the concat is exactly what I want: three copies of the value.
When size is a three-value vector, the result of the concat is a nine-value vector (e.g. concat([1,2,3],[1,2,3],[1,2,3]) yields [1,2,3,1,2,3,1,2,3]), and my code just ignores the last six values.
It works, but only because what I want in the single-value case is to replicate the value. Is there a general way to switch on the argument type and do different things depending on that type?
If size can only be a single value or a vector of three values, the type can be detected with the help of the special value undef:
a = [3, 5, 8];
// a = 5;
if (a[0] == undef) {
    dimensions = concat(a, a, a);
    // do something
    cube(size=dimensions, center=false);
}
else {
    dimensions = a;
    // do something
    cube(size=dimensions, center=false);
}
But assignments are only valid in the scope in which they are defined (see the OpenSCAD documentation). So each subtree needs a lot of duplicated code, and I would prefer to validate the type of size in an external script (e.g. Python 3) and write the variable assignments as OpenSCAD code to a file, which can then be included in the OpenSCAD file. Here is my short test code:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import os

# size = 20
size = [20, 15, 10]

if type(size) == int:
    dimensions = [size, size, size]
elif type(size) == list:
    dimensions = size
else:
    # if other types are possible
    pass

with open('variablen.scad', 'w') as wObj:
    for i, v in enumerate(['l', 'w', 'h']):
        wObj.write('{} = {};\n'.format(v, dimensions[i]))

os.system('openscad ./typeDef.scad')
The content of variablen.scad:
l = 20;
w = 15;
h = 10;
and typeDef.scad can look like this:
include <./variablen.scad>;
module my_cubelike_thing() {
    linear_extrude(height=h, center=false) square([l, w]);
}
my_cubelike_thing();
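Alternatively, the undef test can be folded into a single conditional assignment inside OpenSCAD itself, which sidesteps the scope restriction and the external script entirely. A minimal sketch, assuming size is either a number or a 3-value vector (recent OpenSCAD releases also provide is_num() and is_list() for the same test):
module my_cubelike_thing(size=1) {
    // indexing a plain number yields undef, so this picks the right form
    dimensions = size[0] == undef ? [size, size, size] : size;
    width = dimensions[0];
    length = dimensions[1];
    height = dimensions[2];
    cube([width, length, height], center=false);
}
my_cubelike_thing(5);         // single value
my_cubelike_thing([5, 3, 2]); // three-value vector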
I am reading a book and found what I think is an error, shown below:
import numpy as np

def relu(x):
    return (x > 0) * x

def relu2dev(x):
    return (x > 0)

street_lights = np.array([[1,0,1],[0,1,1],[0,0,1],[1,1,1]])
walk_stop = np.array([[1,1,0,0]]).T
alpha = 0.2
hidden_size = 4
weights_0_1 = 2*np.random.random((3,hidden_size))-1
weights_1_2 = 2*np.random.random((hidden_size,1))-1

for it in range(60):
    layer_2_error = 0
    for i in range(len(street_lights)):
        layer_0 = street_lights[i:i+1]
        layer_1 = relu(np.dot(layer_0,weights_0_1))
        layer_2 = np.dot(layer_1,weights_1_2)
        layer_2_delta = (layer_2-walk_stop[i:i+1])
        # -> layer_2_delta's shape is (1,1), so why np.sum?
        layer_2_error += np.sum((layer_2_delta)**2)
        layer_1_delta = layer_2_delta.dot(weights_1_2.T) * relu2dev(layer_1)
        weights_1_2 -= alpha * layer_1.T.dot(layer_2_delta)
        weights_0_1 -= alpha * layer_0.T.dot(layer_1_delta)
    if (it % 10 == 9):
        print("Error: " + str(layer_2_error))
The place in question is marked with the # -> comment:
layer_2_delta's shape is (1,1), so why would one use np.sum? I think np.sum can be removed, but I'm not quite sure, since it comes from a book.
As you say, layer_2_delta has a shape of (1,1). This means it is a 2 dimensional array with one element: layer_2_delta = np.array([[X]]). However, layer_2_error is a scalar. So you can get the scalar from the array by either selecting the value at the first index (layer_2_delta[0,0]) or by summing all the elements (which in this case is just the one). As the book seems to use "sum of square errors", it seems natural to keep the notation which is square each element in array and then add all of these up (for instruction purposes): this would be more general (e.g., to cases where the layer has more than one element) than the index approach. But you're right, there could be other ways to do this :).
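For instance, both extraction routes agree on a 1x1 array (a quick numpy check):
import numpy as np

layer_2_delta = np.array([[0.3]])      # shape (1, 1)
by_index = (layer_2_delta ** 2)[0, 0]  # pick the single element
by_sum = np.sum(layer_2_delta ** 2)    # sum-of-squares form
assert by_index == by_sum              # identical for a 1x1 array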
I need the indices (as numpy array) of the rows matching a given condition in a table (with billions of rows) and this is the line I currently use in my code, which works, but is quite ugly:
indices = np.array([row.nrow for row in the_table.where("foo == 42")])
It also takes half a minute, and I'm sure that the list creation is one of the reasons why.
I could not find an elegant solution yet, and I'm still struggling with the PyTables docs, so does anybody know a magical way to do this more beautifully and maybe also a bit faster? Maybe there is a special query keyword I am missing, since I have the feeling that PyTables should be able to return the matched rows' indices as a numpy array.
tables.Table.get_where_list() gives the indices of the rows matching a given condition.
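So the line from the question should collapse to something like:
indices = the_table.get_where_list("foo == 42")
which returns the matching row coordinates directly, without building a Python list first.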
I read the source of PyTables; where() is implemented in Cython, but it still seems not fast enough. Here is a more involved method that can speed it up:
Create some data first:
from tables import *
import numpy as np

class Particle(IsDescription):
    name = StringCol(16)      # 16-character string
    idnumber = Int64Col()     # signed 64-bit integer
    ADCcount = UInt16Col()    # unsigned short integer
    TDCcount = UInt8Col()     # unsigned byte
    grid_i = Int32Col()       # 32-bit integer
    grid_j = Int32Col()       # 32-bit integer
    pressure = Float32Col()   # float (single-precision)
    energy = Float64Col()     # double (double-precision)

h5file = open_file("tutorial1.h5", mode="w", title="Test file")
group = h5file.create_group("/", 'detector', 'Detector information')
table = h5file.create_table(group, 'readout', Particle, "Readout example")
particle = table.row
for i in range(1001000):
    particle['name'] = 'Particle: %6d' % (i)
    particle['TDCcount'] = i % 256
    particle['ADCcount'] = (i * 256) % (1 << 16)
    particle['grid_i'] = i
    particle['grid_j'] = 10 - i
    particle['pressure'] = float(i*i)
    particle['energy'] = float(particle['pressure'] ** 4)
    particle['idnumber'] = i * (2 ** 34)
    # Insert a new particle record
    particle.append()
table.flush()
h5file.close()
Read the column in chunks, append the indices to a list, and finally concatenate the list into an array. You can adjust the chunk size to fit your memory:
h5file = open_file("tutorial1.h5")
table = h5file.get_node("/detector/readout")

size = 10000  # chunk size
col = "energy"
buf = np.zeros(size, dtype=table.coldtypes[col])  # reusable read buffer

res = []
for start in range(0, table.nrows, size):
    length = min(size, table.nrows - start)
    data = table.read(start, start + length, field=col, out=buf[:length])
    tmp = np.where(data > 10000)[0]
    tmp += start  # convert chunk-local indices to global row numbers
    res.append(tmp)
res = np.concatenate(res)
h5file.close()
I am trying to write a Java method which returns true if a point (x, y) is on a line segment, and false if not.
I tried this:
public static boolean OnDistance(MyLocation a, MyLocation b, MyLocation queryPoint) {
    double value = java.lang.Math.signum((a.mLongitude - b.mLongitude) * (queryPoint.mLatitude - a.mLatitude)
            - (b.mLatitude - a.mLatitude) * (queryPoint.mLongitude - a.mLongitude));
    double compare = 1;
    if (value == compare) {
        return true;
    }
    return false;
}
but it doesn't work.
I am not a Java coder, so I'll stick to the math behind it... For starters, let's assume you are on a plane (not on a sphere's surface). I would use vector math, so let:
a, b - the line endpoints
q - the queried point
c = q - a - the queried point's direction vector
d = b - a - the line direction vector
Use the dot product for parameter extraction:
t = dot(c,d) / (|d|*|d|)
t is the line parameter in <0,1>; if it is out of that range, q is not within the segment.
Here |c| = sqrt(c.x*c.x + c.y*c.y) is the size of a vector, and dot(c,d) = c.x*d.x + c.y*d.y is the scalar (dot) product.
Now compute the corresponding point on the line:
e = a + (t*d)
e is the closest point to q on the line ab.
Compute the perpendicular distance of q from ab:
l = |q-e|;
If l > threshold then q is not on the line ab, else it is. The threshold is the maximum distance from the line you still accept as being on the segment. There is no need to take the square root of l; the threshold constant can be squared instead, for speed.
If you combine all of this into a single equation, some things simplify (I hope I did not make a silly math mistake):
l = |(q-a) - (b-a)*(dot(q-a,b-a)/|b-a|^2)|;
return (l <= threshold);
or
l = |c - (d*dot(c,d)/|d|^2)|;
return (l <= threshold);
As you can see, with the squared threshold we do not even need sqrt for this :)
[Notes]
If you need a spherical or ellipsoidal surface instead, then you need to specify more closely which it is and what the semi-axes are. The line becomes an arc/curve and needs some corrections that depend on the shape of the surface; see:
Projecting a point onto a path
but it can also be done by approximation, and perhaps by a binary search for point e; see:
my approx class in C++
The vector math used can be found here at the end:
Understanding 4x4 homogenous transform matrices
Here is a 3D C++ implementation (with different names):
double distance_point_axis(double *p, double *p0, double *dp)
{
    int i;
    double l, d, q[3];
    for (i=0;i<3;i++) q[i]=p[i]-p0[i];               // q = p-p0
    for (l=0.0,i=0;i<3;i++) l+=dp[i]*dp[i];          // l = |dp|^2
    for (d=0.0,i=0;i<3;i++) d+=q[i]*dp[i];           // d = dot(q,dp)
    if (l<1e-10) d=0.0; else d/=l;                   // d = dot(q,dp)/|dp|^2
    for (i=0;i<3;i++) q[i]-=dp[i]*d;                 // q = q - dp*dot(q,dp)/|dp|^2
    for (l=0.0,i=0;i<3;i++) l+=q[i]*q[i]; l=sqrt(l); // l = |q|
    return l;
}
Here p0[3] is any point on the axis and dp[3] is the direction vector of the axis; p[3] is the queried point you want the distance to the axis for.
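Since the question asked for Java, here is a rough 2D port of the same idea (a sketch, untested, using the MyLocation fields from the question and a caller-chosen threshold):
public static boolean onSegment(MyLocation a, MyLocation b, MyLocation q, double threshold) {
    double dx = b.mLongitude - a.mLongitude;  // d = b - a
    double dy = b.mLatitude - a.mLatitude;
    double cx = q.mLongitude - a.mLongitude;  // c = q - a
    double cy = q.mLatitude - a.mLatitude;
    double dd = dx * dx + dy * dy;            // |d|^2
    double t = (dd < 1e-10) ? 0.0 : (cx * dx + cy * dy) / dd; // dot(c,d)/|d|^2
    if (t < 0.0 || t > 1.0) return false;     // projection falls outside a..b
    double px = cx - t * dx;                  // perpendicular component of c
    double py = cy - t * dy;
    return px * px + py * py <= threshold * threshold; // squared-distance test
}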
I'm trying to calculate the average Luminance of an RGB image. To do this, I find the luminance of each pixel i.e.
L(r,g,b) = X*r + Y*g + Z*b (some linear combination).
And then find the average by summing up luminance of all pixels and dividing by width*height.
To speed this up, I'm using pyopencl.reduction.ReductionKernel
The array I pass to it is a single-dimension numpy array, so it works just like the example given.
import Image
import numpy as np
im = Image.open('image_00000001.bmp')
data = np.asarray(im).reshape(-1) # so data is a single dimension list
# data.dtype is uint8, data.shape is (w*h*3, )
I want to incorporate the following code from the example into it, i.e. I would change the datatype and the type of the arrays I'm passing. This is the example:
a = pyopencl.array.arange(queue, 400, dtype=numpy.float32)
b = pyopencl.array.arange(queue, 400, dtype=numpy.float32)
krnl = ReductionKernel(ctx, numpy.float32, neutral="0",
reduce_expr="a+b", map_expr="x[i]*y[i]",
arguments="__global float *x, __global float *y")
my_dot_prod = krnl(a, b).get()
Except, my map_expr will work on each pixel and convert each pixel to its luminance value.
And reduce expr remains the same.
The problem is that it works on each element in the array, and I need it to work on each pixel, which is 3 consecutive elements at a time (RGB).
One solution is to have three different arrays, one for R, one for G and one for B, which would work, but is there another way?
Edit: I changed the program to illustrate the char4 usage instead of float4:
import numpy as np
import pyopencl as cl
import pyopencl.array as cl_array

deviceID = 0
platformID = 0
workGroup = (1, 1)
N = 10
testData = np.zeros(N, dtype=cl_array.vec.char4)

dev = cl.get_platforms()[platformID].get_devices()[deviceID]
ctx = cl.Context([dev])
queue = cl.CommandQueue(ctx)
mf = cl.mem_flags
Data_In = cl.Buffer(ctx, mf.READ_WRITE, testData.nbytes)

prg = cl.Program(ctx, """
__kernel void Pack_Cmplx( __global char4* Data_In, int N)
{
    int gid = get_global_id(0);

    //Data_In[gid] = 1; // This would change all components to one
    Data_In[gid].x = 1; // changing a single component
    Data_In[gid].y = 2;
    Data_In[gid].z = 3;
    Data_In[gid].w = 4;
}
""").build()

prg.Pack_Cmplx(queue, (N, 1), workGroup, Data_In, np.int32(N))
cl.enqueue_copy(queue, testData, Data_In)
print(testData)
I hope it helps.
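To tie this back to the reduction: the per-pixel luminance can also be computed directly in the map_expr over the flat (w*h*3,) array, without splitting the channels, by weighting each element according to its channel. An untested sketch, using Rec. 709 weights as stand-ins for the question's X, Y, Z:
import numpy as np
import pyopencl as cl
import pyopencl.array as cl_array
from pyopencl.reduction import ReductionKernel

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

rgb = cl_array.to_device(queue, data)  # data: the flat uint8 array from the question
n_pixels = data.shape[0] // 3

# i runs over every byte; i % 3 selects the R, G or B weight
krnl = ReductionKernel(ctx, np.float32, neutral="0",
    reduce_expr="a+b",
    map_expr="(i%3==0 ? 0.2126f : (i%3==1 ? 0.7152f : 0.0722f)) * x[i]",
    arguments="__global const uchar *x")

avg_luminance = krnl(rgb).get() / n_pixels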
I asked this question yesterday about storing a plot within an object. I tried implementing the first approach (aware that I did not specify that I was using qplot() in my original question) and noticed that it did not work as expected.
library(ggplot2) # add ggplot2
string = "C:/example.pdf" # Setup pdf
pdf(string,height=6,width=9)
x_range <- range(1,50) # Specify Range
# Create a list to hold the plot objects.
pltList <- list()
for (i in 1:16) {
    # Organise data
    y = (1:50) * i * 1000  # Get y col
    x = (1:50)             # Get x col
    y = log(y)             # Use natural log
    # Regression
    lm.0 = lm(formula = y ~ x)               # Make linear model
    inter = summary(lm.0)$coefficients[1,1]  # Get intercept
    slop = summary(lm.0)$coefficients[2,1]   # Get slope
    # Make plot name
    pltName <- paste('a', i, sep = '')
    # Make plot object
    p <- qplot(
        x, y,
        xlab = "Radius [km]",
        ylab = "Services [log]",
        xlim = x_range,
        main = paste("Sample", i)
    ) + geom_abline(intercept = inter, slope = slop, colour = "red", size = 1)
    print(p)
    pltList[[pltName]] = p
}
# close the PDF file
dev.off()
I have used sample numbers in this case so the code runs if it is just copied. I did spend a few hours puzzling over this but I cannot figure out what is going wrong. It writes the first set of pdfs without problem, so I have 16 pdfs with the correct plots.
Then when I use this piece of code:
string = "C:/test_tabloid.pdf"
pdf(string, height = 11, width = 17)
grid.newpage()
pushViewport( viewport( layout = grid.layout(3, 3) ) )
vplayout <- function(x, y){viewport(layout.pos.row = x, layout.pos.col = y)}
counter = 1
# Page 1
for (i in 1:3) {
    for (j in 1:3) {
        pltName <- paste('a', counter, sep = '')
        print(pltList[[pltName]], vp = vplayout(i, j))
        counter = counter + 1
    }
}
dev.off()
the result I get is the last linear model line (abline) on every graph, but the data does not change. When I check my list of plots, it seems that all of them become overwritten by the most recent plot (with the exception of the abline object).
A less important secondary question was how to generate a multi-page pdf with several plots on each page, but the main goal of my code was to store the plots in a list that I could access at a later date.
Ok, so if your plot command is changed to
p <- qplot(data = data.frame(x = x, y = y),
           x, y,
           xlab = "Radius [km]",
           ylab = "Services [log]",
           xlim = x_range,
           ylim = c(0, 10),
           main = paste("Sample", i)
) + geom_abline(intercept = inter, slope = slop, colour = "red", size = 1)
then everything works as expected. Here's what I suspect is happening (although Hadley could probably clarify things). When ggplot2 "saves" the data, what it actually does is save a data frame, and the names of the parameters. So for the command as I have given it, you get
> summary(pltList[["a1"]])
data: x, y [50x2]
mapping: x = x, y = y
scales: x, y
faceting: facet_grid(. ~ ., FALSE)
-----------------------------------
geom_point:
stat_identity:
position_identity: (width = NULL, height = NULL)
mapping: group = 1
geom_abline: colour = red, size = 1
stat_abline: intercept = 2.55595281266726, slope = 0.05543539319091
position_identity: (width = NULL, height = NULL)
However, if you don't specify a data parameter in qplot, all the variables get evaluated in the current scope, because there is no attached (read: saved) data frame.
data: [0x0]
mapping: x = x, y = y
scales: x, y
faceting: facet_grid(. ~ ., FALSE)
-----------------------------------
geom_point:
stat_identity:
position_identity: (width = NULL, height = NULL)
mapping: group = 1
geom_abline: colour = red, size = 1
stat_abline: intercept = 2.55595281266726, slope = 0.05543539319091
position_identity: (width = NULL, height = NULL)
So when the plot is generated the second time around, rather than using the original values, it uses the current values of x and y.
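A quick way to see this evaluation behavior (a minimal sketch):
x <- 1:10
y <- x^2
p <- qplot(x, y)   # no data argument: x and y are looked up when the plot prints
y <- rep(0, 10)
print(p)           # draws the current (zeroed) y, not the values at creation time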
I think you should use the data argument in qplot, i.e., store your vectors in a data frame.
See Hadley's book, Section 4.4:
The restriction on the data is simple: it must be a data frame. This is restrictive, and unlike other graphics packages in R. Lattice functions can take an optional data frame or use vectors directly from the global environment. ...
The data is stored in the plot object as a copy, not a reference. This has two
important consequences: if your data changes, the plot will not; and ggplot2 objects are entirely self-contained so that they can be save()d to disk and later load()ed and plotted without needing anything else from that session.
There is a bug in your code concerning list subscripting. It should be
pltList[[pltName]]
not
pltList[pltName]
Note:
class(pltList[1])
[1] "list"
pltList[1] is a list containing the first element of pltList.
class(pltList[[1]])
[1] "ggplot"
pltList[[1]] is the first element of pltList.
For your second question: Multi-page pdfs are easy -- see help(pdf):
onefile: logical: if true (the default) allow multiple figures in one
file. If false, generate a file with name containing the
page number for each page. Defaults to ‘TRUE’.
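For instance (a small sketch; with onefile = FALSE the file name should contain a C-style page counter):
pdf("plots%03d.pdf", onefile = FALSE)  # writes plots001.pdf, plots002.pdf, ...
for (pltName in names(pltList)) print(pltList[[pltName]])
dev.off()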
For your main question, I don't understand if you want to store the plot inputs in a list for later processing, or the plot outputs. If it is the latter, I am not sure that plot() returns an object you can store and retrieve.
Another suggestion regarding your second question would be to use either Sweave or Brew, as they will give you complete control over how you display your multi-page pdf.
Have a look at this related question.