I have a Voronoi diagram that contains 4 sites:
I am trying to find bounds B that divide my Voronoi regions into areas that are equal, or as close to equal as possible. The only requirement is that B has a constant aspect ratio c; in other words, the width of B divided by its height always equals c (c = width/height).
The images here are an example. I am looking for a general solution that works on any 4 sites. I plan to use this in real-time software with constantly changing sites, so it is preferred that the solution not require a huge number of iterations.
Is there any algorithm that solves this? So far I have tried:
Lloyd relaxation, which is used to find equal-area regions, but it modifies the sites.
Reinforcement learning, but I could not get anything relevant out of it.
I managed to solve it for 3 sites, but that approach did not scale to 4 sites.
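To make the problem concrete: B can be parameterized as (x, y, height) with width = c * height, and "how unequal are the four clipped region areas" becomes the objective to minimize. Here is a minimal sketch of that formulation in Python (my own illustration, not an established algorithm; it approximates the areas on a sample grid, so it is iterative and approximate):

import numpy as np
from scipy.optimize import minimize

def area_spread(params, sites, c, n=64):
    x, y, h = params
    if h <= 0:
        return np.inf
    w = c * h  # constant aspect ratio: width / height == c
    # sample a grid covering the candidate bounds
    gx, gy = np.meshgrid(np.linspace(x, x + w, n), np.linspace(y, y + h, n))
    pts = np.column_stack([gx.ravel(), gy.ravel()])
    # nearest-site labels approximate the Voronoi regions clipped to B
    d = np.linalg.norm(pts[:, None, :] - sites[None, :, :], axis=2)
    counts = np.bincount(d.argmin(axis=1), minlength=len(sites))
    areas = counts * (w * h) / (n * n)
    return np.var(areas)  # zero when all four areas are equal

sites = np.array([[0.0, 0.0], [3.0, 1.0], [1.0, 4.0], [4.0, 3.0]])
c = 16.0 / 9.0
res = minimize(area_spread, x0=[-1.0, -1.0, 6.0], args=(sites, c),
               method="Nelder-Mead")
print(res.x)  # x, y, height of the fitted bounds; width = c * height

The grid-sampled objective is piecewise constant, hence the derivative-free Nelder-Mead; a coarser grid trades accuracy for speed, which may matter for the real-time requirement.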
Are there any options or requirements that make the CGAL::Polygon_mesh_slicer output come out in the correct order?
E.g. I've loaded a mesh and converted it to a CGAL::Surface_mesh, then used the slicer on that mesh to get a list of polylines. The problem is that these polylines are not in any order, CW or CCW.
To be more precise, the output polylines are not consecutive. Here is a slice of a cube, viewed from the top:
o---1--o
|      |
2      3
|      |
o--4---o
I would expect the output to be something like 1->2->4->3 or the reverse, but I get more or less 1->4->2->3.
As stated here, "Each resulting polyline P is oriented such that for two consecutive points p and q in P, the normal vector of the face(s) containing the segment pq, the vector pq, and the orthogonal vector of plane is a direct orthogonal basis. The normal vector of each face is chosen to point on the side of the face where its sequence of vertices is seen counterclockwise."
So the orientation of the polylines depends on the orientation of the plane and of the mesh faces.
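If the goal is simply a consecutive loop regardless of the order the segments come back in, a generic endpoint-matching pass is enough. A minimal sketch (written in Python for brevity, since the chaining logic is language-agnostic; with CGAL you would apply the same idea to the polyline endpoints):

def chain_segments(segments):
    # segments: list of (p, q) point pairs, e.g. ((0, 0), (1, 0))
    remaining = list(segments)
    first = remaining.pop()
    path = [first[0], first[1]]
    while remaining:
        for i, (p, q) in enumerate(remaining):
            if p == path[-1]:
                path.append(q)
            elif q == path[-1]:
                path.append(p)
            else:
                continue
            remaining.pop(i)
            break
        else:
            break  # disconnected input: no segment continues the path
    return path

# edges of the cube slice, given in the "wrong" order 1 -> 4 -> 2 -> 3
top, bottom = ((0, 1), (1, 1)), ((1, 0), (0, 0))
left, right = ((0, 1), (0, 0)), ((1, 1), (1, 0))
print(chain_segments([top, bottom, left, right]))

With floating-point coordinates you would compare endpoints within a tolerance instead of with ==.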
I downloaded the following graph-cut code:
https://github.com/shaibagon/GCMex
I compiled the MEX files and ran the code on the predefined image it ships with (an RGB image).
I want to improve the image segmentation results.
I have a probability map of the image whose dimensions are (width, height, 5): five probability distributions over the image, stacked together, each relating to one of the classes.
My problem is knowing which parts of the code I should change according to this probability map.
I want to define the data and smoothness terms based on my application.
My questions are:
1) Has anyone adapted the code to a different energy function? (I want to change the unary and pairwise formulations.)
2) I have a stack of 3D images. I want to define a 6-neighborhood system: 4 neighbors in the current slice and the other two in the two adjacent slices. In which function and part of the code can I make these changes?
Thanks
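For reference, the two constructions being asked about are largely independent of GCMex itself. GCMex is MATLAB, but the idea carries over directly; here is a sketch in Python with numpy (all names are hypothetical, and the probability map is assumed to be normalized per pixel):

import numpy as np

def unary_from_probabilities(prob, eps=1e-10):
    # prob: (W, H, K) per-pixel class probabilities; the usual data term
    # is the negative log-likelihood of each label at each pixel
    return -np.log(np.clip(prob, eps, 1.0)).reshape(-1, prob.shape[-1])

def six_neighborhood_pairs(w, h, d):
    # index pairs for a (W, H, D) volume: 4 in-slice neighbors plus the
    # two voxels at the same (x, y) in the adjacent slices
    idx = np.arange(w * h * d).reshape(w, h, d)
    pairs = []
    for axis in range(3):  # x and y within a slice, z across slices
        a = idx.take(np.arange(idx.shape[axis] - 1), axis=axis).ravel()
        b = idx.take(np.arange(1, idx.shape[axis]), axis=axis).ravel()
        pairs.append(np.column_stack([a, b]))
    return np.vstack(pairs)  # each undirected neighbor pair listed once

Roughly, the first array plays the role of the unary cost matrix and the second lists the entries of the sparse pairwise matrix, but check the repository's README for the exact shapes GCMex expects.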
I am doing linear regression with multiple variables. In my data I have n = 143 features and m = 13000 training examples. Some of my features are continuous variables (area, year, number of rooms), but I also have categorical variables (district, color, type). So far I have visualized some of my features against the predicted price. For example, here is the plot of area against predicted price:
Since area is a continuous variable, I had no trouble visualizing the data. But now I want to somehow visualize the dependency of my categorical variables (such as district) on the predicted price.
For categorical variables I used one-hot (dummy) encoding.
For example, data of this kind:

District
---------
DistrictA
DistrictB
DistrictC

turned into this format:

DistrictA  DistrictB  DistrictC
1          0          0
0          1          0
0          0          1
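(For reference, this is exactly the encoding that, e.g., pandas produces; a minimal sketch assuming pandas, which the question does not state is in use:)

import pandas as pd

df = pd.DataFrame({"District": ["DistrictA", "DistrictB", "DistrictC"]})
# columns come out as District_DistrictA, District_DistrictB, ...
print(pd.get_dummies(df, columns=["District"], dtype=int))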
If I were using ordinal encoding for districts this way:
DistrictA - 1
DistrictB - 2
DistrictC - 3
DistrictD - 4
DistrictE - 5
I could plot these values against the predicted price quite easily by putting 1-5 on the X axis and price on the Y axis.
But I used dummy coding, and now I do not know how to show (visualize) the dependency between the price and the categorical variable 'District' represented as a series of zeros and ones.
How can I make a plot showing a regression line of districts against predicted price when using dummy coding?
If you just want to know how much the different districts influence your prediction you can take a look at the trained coefficients directly. A high theta indicates that that district increases the price.
If you want to plot this, one possible way is to make a scatter plot with the x coordinate depending on which district is set.
Something like this (untested):
plt.scatter(0, predict(data[data["DistrictA"] == 1]))
plt.scatter(1, predict(data[data["DistrictB"] == 1]))
And so on.
(Possibly you need to provide an x vector of the same size as the filtered data vector.)
It looks even better if you can add a slight random perturbation to the x coordinate.
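Fleshed out, that idea looks like this (a runnable sketch; the district names, the loop bounds, and the random numbers standing in for a real model's predictions are all hypothetical):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
districts = ["DistrictA", "DistrictB", "DistrictC", "DistrictD", "DistrictE"]

for x, name in enumerate(districts):
    # in real use: predicted prices of the rows whose dummy column `name` is 1
    predicted = rng.normal(loc=100 + 20 * x, scale=10, size=50)
    jitter = rng.uniform(-0.15, 0.15, size=predicted.size)  # spread the column
    plt.scatter(x + jitter, predicted, s=8, label=name)

plt.xticks(range(len(districts)), districts, rotation=45)
plt.ylabel("predicted price")
plt.tight_layout()
plt.show()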
I'm trying to implement a geometry templating engine. One of the parts is taking a prototypical polygonal mesh and aligning an instantiation with some points in the larger object.
So, the problem is this: given 3d point positions for some (perhaps all) of the verts in a polygonal mesh, find a scaled rotation that minimizes the difference between the transformed verts and the given point positions. I also have a centerpoint that can remain fixed, if that helps. The correspondence between the verts and the 3d locations is fixed.
I'm thinking this could be done by solving for the coefficients of a transformation matrix, but I'm a little unsure how to build the system to solve.
An example of this is a cube. The prototype would be the unit cube, centered at the origin, with vert indices:
4----5
|\    \
| 6----7
| |    |
0 |  1 |
 \|    |
  2----3
An example of the vert locations to fit:
v0: 1.243,2.163,-3.426
v1: 4.190,-0.408,-0.485
v2: -1.974,-1.525,-3.426
v3: 0.974,-4.096,-0.485
v5: 1.974,1.525,3.426
v7: -1.243,-2.163,3.426
So, given that prototype and those points, how do I find the single scale factor, and the rotation about x, y, and z that will minimize the distance between the verts and those positions? It would be best for the method to be generalizable to an arbitrary mesh, not just a cube.
Assuming you have all points and their correspondences, you can fine-tune your match by solving the least squares problem:
minimize Norm(T*V-M)
where T is the 3x3 transformation matrix you are looking for, V are the vertices to fit, and M are the vertices of the prototype; Norm refers to the Frobenius norm. M and V are 3xN matrices in which each column is a 3-vector holding a vertex of the prototype and the corresponding vertex of the fitting vertex set. The transformation matrix that minimizes the mean squared error is then M*transpose(V)*inverse(V*transpose(V)). The resulting matrix will in general not be orthogonal (you wanted one with no shear), so you can solve a matrix Procrustes problem to find the nearest orthogonal matrix with the SVD.
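Here is a sketch of that recipe in numpy (my own illustration; note it maps the prototype onto the target positions, the opposite direction from the equations above, and it assumes both point sets have already been centered on the fixed centerpoint):

import numpy as np

def fit_scaled_rotation(P, Q):
    # best scale s and rotation R minimizing ||s * R @ P - Q||_F,
    # with P (prototype) and Q (targets) as 3xN column matrices
    H = Q @ P.T                            # 3x3 cross-covariance
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(U @ Vt))     # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt                         # nearest rotation (Procrustes)
    s = np.trace(np.diag(S) @ D) / np.sum(P * P)  # optimal uniform scale
    return s, R

# toy check: recover a known similarity transform from clean correspondences
rng = np.random.default_rng(1)
P = rng.normal(size=(3, 8))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1                     # make it a proper rotation
Q = 2.5 * R_true @ P
s, R = fit_scaled_rotation(P, Q)
print(np.round(s, 6), np.allclose(R, R_true))  # 2.5 True

Only the corresponded vertices (v0, v1, v2, v3, v5, v7 in the cube example) need to appear as columns of P and Q.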
Now, if you don't know which given points will correspond to which prototype points, the problem you want to solve is called surface registration. This is an active field of research. See for example this paper, which also covers rigid registration, which is what you're after.
If you want to create a mesh on an arbitrary 3D geometry, this is not the way it's typically done.
You should look at octree mesh generation techniques. You'll have better success if you work with a true 3D primitive, which means tetrahedra instead of cubes.
If your geometry is a 3D body, all you'll have is a surface description to start with. Determining "optimal" interior points isn't meaningful, because you don't have any. You'll want them to be arranged in such a way that the tetrahedra inside aren't too distorted, but that's the best you'll be able to do.