I'm trying to graph a series of planes as a solid object in Mathematica. I first tried to use the RangePlot3D options as well as the fill options to graph the 3D volume, but was unable to get a working result.
The graphic I'm trying to create will show the deviation between the z value and the radius from the origin over a 3D cuboid. The current equation I'm using is this:
Plot3D[Evaluate[{Sqrt[(C[1])^2 + x^2 + y^2]} /.
C[1] -> Range[6378100, 6379120]], {x, -1000000,
1000000}, {y, -1000000, 1000000}, AxesLabel -> Automatic]
(The output for a more manageable range looks as follows.)
Here C[1] is the original z-value of each plane, and the result of this equation is z + (r - z) for any point on the x,y plane.
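Spelled out (just restating the formula above, with r denoting the distance from the origin to the point (x, y, C1)):
$$z_{\text{plotted}} \;=\; \sqrt{C_1^2 + x^2 + y^2} \;=\; C_1 + \left(r - C_1\right), \qquad r = \sqrt{C_1^2 + x^2 + y^2},$$
so each plane at height C1 is raised by its deviation r - C1 from the original height.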
However, this method is incredibly inefficient. Because this will be used to model large objects with original z-values of >6,000,000 and heights above 1000, Mathematica is unable to graph thousands of planes and display them responsively.
Additionally, because the Range of C[1] only includes integer values, there are discontinuities between the planes.
Is there a way to rewrite this using different Mathematica functionality that will generate a 3D plot that is both a reasonable load on my system and a smooth object?
Second, what can I do to improve performance? When computing the above input for >30 min, Mathematica was only utilizing about 30% CPU and 4 GB of RAM, with a light load on my graphics card as well. This is only about twice as much as Chrome is using right now on my system.
I attempted to enable CUDALink, but it wouldn't enable properly. Would this offer a performance boost for this type of processing?
For reference, my system build is:
16 GB RAM
Intel i7 4770K running at stock settings
Nvidia GeForce 760GTX
256 GB Samsung SSD
Plotting a million planes and hoping that becomes a 3D solid seems unlikely to succeed.
Perhaps you could adapt something like this:
Show[Plot3D[{Sqrt[6^2+x^2+y^2], Sqrt[20^2+x^2+y^2]}, {x, -10, 10}, {y, -10, 10},
AxesLabel -> Automatic, PlotRange -> {{-15, 15}, {-15, 15}, All}],
Graphics3D[{
Polygon[Join[
Table[{x, -10, Sqrt[6^2 + x^2 + (-10)^2]}, {x, -10, 10, 1}],
Table[{x, -10, Sqrt[20^2 + x^2 + (-10)^2]}, {x, 10, -10, -1}]]],
Polygon[Join[
Table[{-10, y, Sqrt[6^2 + (-10)^2 + y^2]}, {y, -10, 10, 1}],
Table[{-10, y, Sqrt[20^2 + (-10)^2 + y^2]}, {y, 10, -10, -1}]]],
Polygon[Join[
Table[{x, 10, Sqrt[6^2 + x^2 + 10^2]}, {x, -10, 10, 1}],
Table[{x, 10, Sqrt[20^2 + x^2 + 10^2]}, {x, 10, -10, -1}]]],
Polygon[Join[
Table[{10, y, Sqrt[6^2 + 10^2 + y^2]}, {y, -10, 10, 1}],
Table[{10, y, Sqrt[20^2 + 10^2 + y^2]}, {y, 10, -10, -1}]]]}]]
What that does is plot the top and bottom surfaces and then construct four polygons, each connecting the top and bottom surface along one side. One caution: if you look very, very closely you will see that, because they are polygons, the edges of the four faces are made up of short straight segments rather than true curves, so they do not join your two surfaces perfectly; there can be tiny gaps or tiny overlaps. This may or may not make any difference for your application.
That graphic displays in a fraction of a second on a machine that is a fraction of yours.
Mathematica does not automatically parallelize computations onto multiple cores.
CUDA programming is a considerably bigger challenge than turning the link on.
If you can simply define each face of your solid and combine them with Show then
I think you will have a much greater chance of success.
Another way:
xyrange = 10
cmin = 6
cmax = 20
RegionPlot3D[
Abs[x] < xyrange && Abs[y] < xyrange &&
cmin^2 < z^2 - ( x^2 + y^2) < cmax^2 ,
{x, -1.2 xyrange, 1.2 xyrange}, {y, -1.2 xyrange, 1.2 xyrange},
{z, cmin, Sqrt[ cmax^2 + 2 xyrange^2]}, MaxRecursion -> 15,
PlotPoints -> 100]
This is nowhere near as fast as Bill's approach, but it may be useful if you plot a more complicated region. Note that RegionPlot3D does not work for your original example because the volume is too small compared to the plot range.
I have a similar question to Writing an Exodus II file programmatically. The linked question still has no accepted answer.
I am performing numerical simulations in MATLAB. My output consists of polyhedra in 3d space and I know the displacement for every vertex in terms of a displacement vector for a finite number of time steps. I would like to visualize the displacement as an animation in Paraview. On YouTube I found a tutorial on animations for the can.ex2 example in Paraview.
Therefore, I would like to export my data (i.e. initial vertex positions + displacement for each time step) to Exodus 2 format or similar. If possible, I would like to avoid any existing library and write the file myself in MATLAB. I was already successful with .vtk/.vti/.obj/... writers for other parts of my project.
Can someone recommend a comprehensive description of how .ex2 files should be written and/or code that I can orient myself on? Unfortunately, I was not very successful in my own research. I am also open to suggestions of similar file formats that would be sufficient for my plans.
Edit
Because I was asked for a code example:
% Vertices of unit triangle contained in (x,y)-plane
initialVertexPos = [0, 1, 0;
                    0, 0, 1;
                    0, 0, 0];
nVertices = size(initialVertexPos, 2);
% Linear displacement of all vertices along z-axis
nTimeSteps = 10;
disp_x = zeros(nTimeSteps, nVertices);
disp_y = zeros(nTimeSteps, nVertices);
disp_z = repmat(linspace(0,1,nTimeSteps)', 1, nVertices);
% Position of vertex kVertex at time step kTime is given by
% initialVertexPos(:,kVertex) + [disp_x(kTime,kVertex); disp_y(kTime,kVertex); disp_z(kTime,kVertex)]
I have been having some difficulty in displaying the results from my lmer model within ggplot2. I am specifically interested in displaying predicted regression lines on top of observed data. The lmer model I am running on this (speech) data is here below:
lmer.declination <- lmer(zlogF0_m60~Center.syll*Tone + (1|Trial) + (1+Tone|Speaker) + (1|Utterance.num), data=data)
The dependent variable here is fundamental frequency (F0), normalized and averaged across the middle 60% of a syllable. The fixed effects are syllable number (Center.syll), counted backwards from the end of a sentence (e.g. -2 is the 3rd last syllable in the sentence). The data here is from a lexical tone language, so the Tone (all low tone /1/, all mid tone /3/, and all high tone /4/) is a discrete fixed effect. The experimental questions are whether F0 falls across the sentences for this language, if so, by how much, and whether tone matters. It was a bit difficult for me to think of a way to produce a toy data set here, but the data can be downloaded here (a 437K file).
In order to extract the model fits, I used the effects package and converted the output to a data frame.
ex <- Effect(c("Center.syll","Tone"),lmer.declination)
ex.df <- as.data.frame(ex)
I plot the data using ggplot2, with the following code:
t.plot <- ggplot(data, aes(factor(Center.syll), zlogF0_m60, group=Tone, color=Tone)) + stat_summary(fun.data = mean_cl_boot, geom = "smooth") + ylab("Normalized log(F0)") + xlab("Syllable number") + ggtitle("F0 change across utterances with identical level tones, medial 60% of vowel") + geom_pointrange(data=ex.df, mapping=aes(x=Center.syll, y=fit, ymin=lower, ymax=upper)) + theme_bw()
t.plot
This produces the following plot:
Predicted trajectories and observed trajectories
The predicted values appear to the left of the observed data, not overlaid on the data itself. Whatever I try, I cannot get them to overlap with the observed data. I would ideally like to have a single line drawn rather than a pointrange, but when I attempted to use geom_line, the default was for the line to connect from the upper bound of one point to the lower bound of the next (not at the median/midpoint). Thank you for your help.
(Edit: As the OP pointed out, he did in fact include a link to his data set. My apologies for implying that he didn't.)
First of all, you will have much better luck getting a helpful response if you provide a minimal, complete, and verifiable example (MVCE). Look here for information on how to best do that for R specifically.
Lacking your actual data to work with, I believe your problem is that you're factoring the x-axis for the stat_summary, but not for the geom_pointrange. I mocked up a toy example from the plot you linked to in order to demonstrate:
dat1 <- data.frame(x=c(-6:0, -5:0, -4:0),
y=c(-0.25, -0.5, -0.6, -0.75, -0.8, -0.8, -1.5,
0.5, 0.45, 0.4, 0.2, 0.1, 0,
0.5, 0.9, 0.7, 0.6, 1.1),
z=c(rep('a', 7), rep('b', 6), rep('c', 5)))
dat2 <- data.frame(x=dat1$x,
y=dat1$y + runif(18, -0.2, 0.2),
z=dat1$z,
upper=dat1$y + 0.3 + runif(18, -0.1, 0.1),
lower=dat1$y - 0.3 + runif(18, -0.1, 0.1))
Now, the following call gives me a result similar to the graph you linked to:
ggplot(dat1, aes(factor(x), # note x being factored here
y, group=z, color=z)) +
geom_line() + # (this is a place-holder for your stat_summary)
geom_pointrange(data=dat2,
mapping=aes(x=x, # but x not being factored here
y=y, ymin=lower, ymax=upper))
However, if I remove the factoring of the initial x value, I get the line and the point ranges overlaid:
ggplot(dat1, aes(x, # no more factoring here
y, group=z, color=z)) +
geom_line() +
geom_pointrange(data=dat2,
mapping=aes(x=x, y=y, ymin=lower, ymax=upper))
Note that I still get the overlaid result if I factor both of the x-axes. The two simply have to be consistent.
Again, I can't stress enough how much it helps this entire process if you provide code we can copy/paste into an R session and see what you're seeing. Hopefully this helps you out, but it all goes more smoothly (and quickly) if you help us help you.
I'm trying to use object_detection from the TensorFlow library to detect colored squares. For every image in the train/eval dataset, I should have the information about the bounding box coordinates (with the origin in the top left corner), defined by 4 floating point numbers [ymin, xmin, ymax, xmax]. Now, let's suppose background_image is a completely white 300 x 300 px image. The code of my image generator looks like this (pseudocode):
new_image = background_image.copy()
rand_x, rand_y = random_coordinates(new_image)
for i in range(rand_x, rand_x + 100):
    for j in range(rand_y, rand_y + 100):
        new_image[i][j] = color(red)
...so now we have a 300 x 300 px image of a red 100 x 100 px square on a white background. The question is: should my bounding box contain only the red pixels, [rand_x, rand_y, rand_x + 100, rand_y + 100], or should it include a "white frame", like [rand_x - 5, rand_y - 5, rand_x + 105, rand_y + 105]? Or maybe it does not matter? After 15 h of training and evaluating (with bounding box coordinates = [rand_x, rand_y, rand_x + 100, rand_y + 100]), TensorBoard shows me something like this:
TensorBoard reports that precision is about 0.1.
I understand that after only 1100 steps the results should not be breathtaking. I just want to rule out potential inaccuracies caused by a mistake on my part.
Ideally, you want your predicted boxes to perfectly overlap the ground truth boxes.
This means that if A = [y_min, x_min, y_max, x_max] is the ground truth box, you want B (the predicted box) to be equal to A, i.e. A = B.
During the training phase it is perfectly normal that your predictions are "around" the ground truth and there is no perfect match.
In reality, even during the test phase (at the end of training), A = B is difficult to achieve, because no classifier/regressor is perfect.
In short: your predictions look fine. With more training epochs you'll probably get better results.
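If you want to put a number on how well B matches A, the usual measure is the intersection over union (IoU) of the two boxes, which is what detection metrics such as mAP are typically based on. A minimal sketch in Python, using the same [y_min, x_min, y_max, x_max] convention (the helper name iou is just for illustration):

def iou(a, b):
    # intersection-over-union of two boxes given as [y_min, x_min, y_max, x_max]
    inter_h = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    inter_w = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = inter_h * inter_w
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou([10, 10, 110, 110], [10, 10, 110, 110]))  # identical boxes -> 1.0
print(iou([10, 10, 110, 110], [15, 15, 115, 115]))  # shifted by 5 px -> ~0.82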
I have a question regarding a code snippet which I found in a book.
The author creates two categories of sample points. Next, the author fits a model and plots the SVC model onto the "blobs".
This is the code snippet:
# imports assumed by the snippet
from sklearn.datasets import make_blobs
from sklearn.svm import SVC
import matplotlib.pyplot as plt
import numpy as np

# create 50 separable points
X, y = make_blobs(n_samples=50, centers=2,
                  random_state=0, cluster_std=0.60)
# fit the support vector classifier model
clf = SVC(kernel='linear')
clf.fit(X, y)
# plot the data
fig, ax = plt.subplots(figsize=(8, 6))
point_style = dict(cmap='Paired', s=50)
ax.scatter(X[:, 0], X[:, 1], c=y, **point_style)
# format plot (format_plot is a helper defined elsewhere, not a matplotlib function)
format_plot(ax, 'Input Data')
ax.axis([-1, 4, -2, 7])
# Get contours describing the model
xx = np.linspace(-1, 4, 10)
yy = np.linspace(-2, 7, 10)
xy1, xy2 = np.meshgrid(xx, yy)
Z = np.array([clf.decision_function([t])
for t in zip(xy1.flat, xy2.flat)]).reshape(xy1.shape)
line_style = dict(levels = [-1.0, 0.0, 1.0],
linestyles = ['dashed', 'solid', 'dashed'],
colors = 'gray', linewidths=1)
ax.contour(xy1, xy2, Z, **line_style)
The result is the following:
My question now is: why do we create xx and yy as well as xy1 and xy2? After all, we want to show the SVC "function" for the X and y data, and if we pass xy1 and xy2 as well as Z (which is also created from xy1 and xy2) to the meshgrid function to plot the meshgrid, there is no connection to the data with which the SVC model was trained... is there?
Can anybody please explain this to me, or recommend how to solve this more easily?
Thanks for your answers.
I'll start with short broad answers. ax.contour() is just one way to plot the separating hyperplane and its "parallel" planes. You can certainly plot it by calculating the plane, like this example.
To answer your last question, in my opinion it's already a relatively simple (in math and logic) and easy (in coding) way to plot your model. And it is especially useful when your separating hyperplane is not mathematically easy to describe (such as polynomial and RBF kernel for non-linear separation), like this example.
To address your second question and comments, and to answer your first question: yes, you're right, xx, yy, xy1, xy2 and Z all have a very limited connection to your (simulated blobs of) data. They are used for drawing the hyperplanes that describe your model.
That should answer your questions. But please allow me to give some more details here, in case others are not as familiar with the topic as you are. The only connection between your data and xx, yy, xy1, xy2, Z is:
xx, yy, xy1 and xy2 sample an area surrounding the simulated data. Specifically, the simulated data is centered around 2; xx sets a limit between (-1, 4) and yy sets a limit between (-2, 7). One can check the "meshgrid" with ax.scatter(xy1, xy2).
Z is calculated for all sample points in the "meshgrid". It is the normalized distance from each sample point to the separating hyperplane, and it supplies the levels for the contour plot.
ax.contour then uses the "meshgrid" and Z to plot contour lines. Here are some key points:
xy1 and xy2 are both 2-D arrays specifying the (x, y) coordinates of the surface; they list the sample points in the area row by row.
Z is a 2-D array with the same shape as xy1 and xy2. It defines the level at each point so that the program can "understand" the shape of the 3-dimensional surface.
levels = [-1.0, 0.0, 1.0] indicates that there are 3 curves (lines in this case) to draw, at the corresponding levels. In relation to SVC, level 0 is the separating hyperplane, and levels -1 and 1 are very close (they differ by a ζi) to the maximum-margin separating hyperplane.
linestyles = ['dashed', 'solid', 'dashed'] indicates that the separating hyperplane is drawn as a solid line and the two planes on either side are drawn as dashed lines.
Edit (in response to the comment):
Mathematically, the decision function should be a sign function telling us whether a point is class 0 or 1, as you said. However, when you check the values in Z, you will find they are continuous data. decision_function(X) works in such a way that the sign of the value indicates the classification, while the absolute value is the "distance of the samples X to the separating hyperplane", which reflects (in a sense) the confidence/significance of the predicted classification. This is critical to plotting the model. If Z were categorical, you would get contour lines that fill an area like a mesh rather than a single contour line. It would look like the colormesh in the example, but you won't see that with ax.contour(), since that is not correct behavior for a contour plot.
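To see the difference concretely, here is a small check you can run with the same make_blobs/SVC setup as in your snippet (the sample variable is just mine for illustration):

from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=50, centers=2, random_state=0, cluster_std=0.60)
clf = SVC(kernel='linear').fit(X, y)

sample = X[:3]
print(clf.predict(sample))            # categorical output: class labels (0 or 1)
print(clf.decision_function(sample))  # continuous output: signed distance to the hyperplane
# the sign gives the predicted class; values with |d| >= 1 lie on or beyond the margin (levels -1 and 1)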
I am trying to find a function h(r) that minimises a functional H(h) by a very simple gradient descent algorithm. The result of H(h) is a single number. (Basically, I have a field configuration in space and I am trying to minimise the energy due to this field). The field is discretized in space, and the gradient used is the derivative of H with respect to the field's value at each discrete point in space. I am doing this on Mathematica, but I think the code is easy to follow for non-users of Mathematica.
The function Hamiltonian takes a vector of field values, the spacing of points d, and the number of points imax, and gives the value of energy. EderhSym is the function that gives a table of values for the derivatives at each point. I wrote the derivative function manually to save computation time. (The details of these two functions are probably irrelevant for the question).
Hamiltonian[hvect_, d_, imax_] :=
Sum[(i^2/2)*d*(hvect[[i + 1]] - hvect[[i]])^2, {i, 1, imax - 1}] +
Sum[i^2*d^3*Potential[hvect[[i]]], {i, 1, imax}]
EderhSym[hvect_,d_,imax_]:=Join[{0},Table[2 i^2 d hvect[[i]]-i(i+1)d hvect[[i+1]]
-i(i-1)d hvect[[i-1]]+i^2 d^3 Vderh[hvect[[i]]], {i, 2, imax - 1}], {0}]
The code below shows a single iteration of gradient descent. hvect1 is some starting configuration that I have guessed using physical principles.
Ederh = EderhSym[hvect1, d, imax];
hvect1 = hvect1 - StepSize*Ederh;
The problem is that I am getting random spikes in the derivative table. The spikes keep growing until there is an overflow. I have tried changing the step size, I have tried using moving averages, low pass filters, Gaussian filters etc. I still get spikes that cause an overflow. Has anyone encountered this? Is it a problem with the way I am setting up gradient descent?
Aside - I am testing my gradient descent code as I will have to adapt it to a different multivariable Hamiltonian where I do this to find a saddle point instead: (n is an appropriately chosen small number, 10 in my case)
For[c = 1, c <= n, c++,
Ederh = EderhSym[hvect1, \[CapitalDelta], imax];
hvect = hvect - \[CapitalDelta]\[Tau]*Ederh;
];
Ederh = EderhSym[hvect1, \[CapitalDelta], imax];
hvect1 = hvect1 + n \[CapitalDelta]\[Tau]*Ederh;
Edit: It seems to be working with a much smaller step size than I had previously tried (although convergence is slow). I believe that was the problem, but I do not see why the divergences are localised at particular points.
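One plausible reason for where the spikes show up, treating the quadratic part of the energy as dominant near the minimum (a standard stability estimate, not specific to this exact Hamiltonian): for $H(h) \approx \tfrac{1}{2}\, h^{\mathsf T} A h$ the update
$$h \;\leftarrow\; h - \eta\, \nabla H(h) \;=\; (I - \eta A)\, h$$
diverges as soon as the step size $\eta$ exceeds $2/\lambda_{\max}(A)$. Here the coefficients multiplying $(h_{i+1} - h_i)^2$ grow like $i^2 d$, so $\lambda_{\max}$ is dominated by the largest $i$; those components are the first to oscillate and overflow, which would make the spikes appear at particular (large-$i$) grid points, and the safe StepSize shrinks roughly like $1/(i_{\max}^2 d)$.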