So I was reading a document about displacement mapping and surface blending and came across this equation, which is supposed to be an alpha-blending equation:

v = (w_1*v_1 + ... + w_n*v_n) / (w_1 + ... + w_n)

where v_1, ..., v_n is supposed to be the value vector and w_1, ..., w_n the weight vector (that is how the document describes it).
My interpretation of this equation is the following: with n being the number of surfaces we are trying to blend together, the value vectors represent, as the name says, the value (probably colour-related?) of each surface, and the weight vector describes the preference given to each surface (so the higher the weight, the more we see the colour of that one surface after the blend). The multiplication and division part is something I do not fully understand (I am just interpreting it as the 'it just works like that' part of the equation).
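Writing that reading out as a made-up two-surface example: with values v_1 = 1, v_2 = 0 and weights w_1 = 3, w_2 = 1, the blend gives (3*1 + 1*0) / (3 + 1) = 0.75, which suggests the division by the weight sum just normalizes the weights so they act like fractions summing to one.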
I couldn't find any similar equation anywhere so far, so I figure that either I didn't search deeply enough or I am not understanding something that is supposed to be very obvious. I want to make sure that I fully understand this equation before reading further in the document, which builds on this idea.
I am trying to extract the camera matrix from the essential matrix. I found some answers about this:
determine camera rotation and translation matrix from essential matrix
Rotation and Translation from Essential Matrix incorrect
In these answers, they suggest using newE, where [U,S,V] = svd(E) and newE = U*diag(1,1,0)*Vt. I don't understand why I need to use newE. As far as I know, singular values are unique, so changing the singular values to diag(1,1,0) seems to turn E into a completely different matrix.
I read 'Multiple View Geometry in Computer Vision' as well, but it just refers to the ideal case, i.e., that the singular values are (1,1,0). I didn't find the reason for using newE.
Can anyone please explain why people use newE?
If I understand your question correctly: since your source data (and thus E) is usually noisy real-world data, using diag(1,1,0) constrains the matrix to be of the correct scale and rank, algebraically enforcing the geometric constraints.
Wikipedia also has a nice section explaining this.
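In NumPy terms the whole projection is a couple of lines; a minimal sketch of the same idea (the function name is my own):

    import numpy as np

    def enforce_essential_constraints(E):
        # SVD of the noisy essential matrix estimate
        U, S, Vt = np.linalg.svd(E)
        # An ideal essential matrix has singular values (s, s, 0);
        # substituting (1, 1, 0) enforces rank 2 and fixes the overall
        # scale, which is unrecoverable from image data anyway
        return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

The result is a valid essential matrix close to the noisy one, so the subsequent decomposition into rotation and translation behaves as the textbook derivation assumes.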
Does anyone have any ideas on how to implement a Monte Carlo integration simulator in VB.NET?
I have looked around the internet with no luck.
Any code, or ideas as to how to start, would be of help.
Well, I guess we are talking about a two-dimensional problem. I assume you have a polygon whose area you want to calculate.
1) First you need a function to check if a point is inside the polygon.
2) Now you define an area of known size around the polygon.
3) Now you generate random points inside your known area; some of them will land in your polygon, some will land outside. Count them!
4) Now you have two ratios: first, the ratio of all points to the points inside your polygon; second, the ratio of the known area around your polygon to the unknown area of the polygon.
5) These two ratios are the same --> you can calculate the area of your polygon! (Area of polygon = points in your polygon / all your points * size of known area.)
Example: 3 points hit the polygon out of 20 points "shot" into a known area of 4 m², so the area of the polygon is approximately 3/20 * 4 m² = 0.6 m².
NOTE: This area is only an approximation! The more points you have, the better the approximation gets.
You can of course implement a fancy way to display this in your VB program. Was this what you needed? Is my assumption about the polygon correct? Do you need help with the "point inside polygon" algorithm?
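To make the recipe concrete, here is a minimal sketch in Python (the structure ports directly to VB.NET; the ray-casting point-in-polygon test is one common choice, and all names are my own):

    import random

    def point_in_polygon(x, y, polygon):
        # Ray casting: count how often a horizontal ray from (x, y)
        # crosses a polygon edge; an odd count means "inside"
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > y) != (y2 > y):
                if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                    inside = not inside
        return inside

    def monte_carlo_area(polygon, x_min, x_max, y_min, y_max, shots=100000):
        # "Shoot" random points into the known rectangle and count hits
        hits = sum(
            point_in_polygon(random.uniform(x_min, x_max),
                             random.uniform(y_min, y_max), polygon)
            for _ in range(shots))
        known_area = (x_max - x_min) * (y_max - y_min)
        return hits / shots * known_area

    # A unit square should come out close to 1.0:
    # print(monte_carlo_area([(0, 0), (1, 0), (1, 1), (0, 1)], -1, 2, -1, 2))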
There is nothing specific to VB.NET in this problem, except maybe for the choice of a random number generator from the library.
Numerically solving integrals of a function f(x_1,...,x_n) by uniform sampling can become infeasible (in acceptable time) for high dimensions n, because the number of sample points needed for a given sampling distance grows exponentially with the dimension of the problem. The fundamental idea of Monte Carlo integration is to replace the uniform sampling of the variables x_1,...,x_n with random sampling, taking n random numbers per sample. With these samples you estimate the integral; the more samples, the better the estimate. And the major benefit of MC integration is that you can use standard statistical methods to estimate the error of your result.
So, how to start: implement integration by uniform sampling of the integration space, then switch to random sampling and add error estimation.
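A minimal sketch of that plan in Python (the names are my own; the same logic translates almost line by line to VB.NET):

    import math
    import random

    def mc_integrate(f, lower, upper, samples=100000):
        # Integrate f over the box [lower_i, upper_i] in n dimensions
        # by averaging f at random points; the box volume rescales the
        # average into the integral estimate.
        volume = 1.0
        for lo, hi in zip(lower, upper):
            volume *= hi - lo
        total = total_sq = 0.0
        for _ in range(samples):
            x = [random.uniform(lo, hi) for lo, hi in zip(lower, upper)]
            fx = f(x)
            total += fx
            total_sq += fx * fx
        mean = total / samples
        variance = total_sq / samples - mean * mean
        # Standard error of the estimate shrinks like 1/sqrt(samples)
        return volume * mean, volume * math.sqrt(variance / samples)

    # Example: the integral of x^2 + y^2 over the unit square is 2/3
    # estimate, error = mc_integrate(lambda x: x[0]**2 + x[1]**2, [0, 0], [1, 1])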
I have some data that tells me the amount of hours water is available for particular towns.
You can see it here
I want to train a Multilayer Perceptron on that data to take a set of coordinates and indicate the approximate number of hours for which that coordinate will have water.
Does this make sense?
If so, am I correct in saying there have to be two input layers? One for lat and one for long. And the output layer should be the number of hours.
Would love some guidance.
I would solve that differently:
Just create an ArrayList of WaterInfo:
WaterInfo contains lat,lon, waterHours.
Then, for a given coordinate, search for the closest WaterInfo in the list.
Since you do not have many elements, just do a brute-force search to find the closest.
You can further optimize by finding the three closest WaterInfo points and calculating the weighted average of their waterHours. As the weight, use the inverse of the air distance from the current position to each WaterInfo position, so that closer points count more.
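A minimal sketch of this lookup in Python (the haversine formula for the air distance and all names are my choices):

    import math

    def air_distance_km(lat1, lon1, lat2, lon2):
        # Haversine great-circle ("air") distance in kilometres
        r = 6371.0
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = (math.sin(dp / 2) ** 2
             + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
             * math.sin(dl / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    def estimate_water_hours(lat, lon, water_infos, k=3):
        # water_infos: list of (lat, lon, waterHours) tuples.
        # Brute force: sort by distance, keep the k closest, then take
        # an inverse-distance-weighted average of their waterHours.
        closest = sorted(
            water_infos,
            key=lambda w: air_distance_km(lat, lon, w[0], w[1]))[:k]
        num = den = 0.0
        for w_lat, w_lon, hours in closest:
            d = air_distance_km(lat, lon, w_lat, w_lon)
            weight = 1.0 / max(d, 1e-9)  # closer points count more
            num += weight * hours
            den += weight
        return num / den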
To answer your question:
"Does this makes sense"?
From the goal to get a working solution: NO!
Ask yourself why you want to use an MLP for this task.
Further, I doubt that using two layers for lat/long makes sense.
A coordinate (lat/lon) is one point in the world, so it should be one layer in the model. You can convert the lat/lon coordinate to a cell identifier: span a grid over Brazil with a cell width of 10 or 50 km; now convert a lat/long coordinate to a cellId, like E4 on a chess board, by calculating one integer value representing the cell. (There are other ways to get a unique number, too; choose one you like.)
Now you have a model geoCellID -> waterHours, which better represents the real-world situation.
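A rough sketch of such a conversion (the bounding box for Brazil and the flat 111 km-per-degree approximation are assumptions for illustration):

    def cell_id(lat, lon, cell_km=50.0):
        # Map a lat/lon coordinate to one integer cell identifier,
        # like E4 on a chess board, over a grid spanning Brazil
        lat_min, lon_min, lon_max = -34.0, -74.0, -34.0
        km_per_degree = 111.0  # coarse approximation
        cells_per_row = int((lon_max - lon_min) * km_per_degree / cell_km) + 1
        row = int((lat - lat_min) * km_per_degree / cell_km)
        col = int((lon - lon_min) * km_per_degree / cell_km)
        return row * cells_per_row + col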
Basically, I have a set of up to 100 co-ordinates, along with the desired tangents to the curve at the first and last point.
I have looked into various methods of curve fitting, by which I mean an algorithm which takes the input data points and tangents and outputs the equation of the curve, such as the Gaussian method and interpolation, but I really struggled to understand them.
I am not asking for code (if you choose to give it, that's acceptable though :) ); I am simply looking for help with this algorithm. It will eventually be converted to Objective-C for an iPhone app, if that changes anything.
EDIT:
I know the order of all of the points. They are not too close together, so passing through all points is necessary, i.e. interpolation (unless anyone can suggest something else). And as far as I know, an algebraic curve is what I'm looking for. This is all being done on a 2D plane, by the way.
I'd recommend considering cubic splines. There is some explanation and code to calculate them in plain C in the Numerical Recipes book (chapter 3.3).
Most interpolation methods originally work with functions: given a set of x and y values, they compute a function which produces a y value for every x value, meeting the specified constraints. As a function can only ever compute a single y value for every x value, such a curve cannot loop back on itself.
To turn this into a real 2D setup, you want two functions which compute x resp. y values based on some parameter that is conventionally called t. So the first step is computing t values for your input data. You can usually get a good approximation by summing Euclidean distances: think of a polyline connecting all your points with straight segments. Then the parameter is the distance along this line for every input pair.
So now you have two interpolation problems: one to compute x from t and the other to compute y from t. You can formulate each as a spline interpolation, e.g. using cubic splines. That gives you a large system of linear equations which you can solve iteratively up to the desired precision.
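A minimal sketch of exactly this setup, using SciPy's CubicSpline with clamped boundary conditions so that the prescribed end tangents are honoured (the function name is my own):

    import numpy as np
    from scipy.interpolate import CubicSpline

    def parametric_spline(points, start_tangent, end_tangent):
        # points: (n, 2) array; tangents: (dx/dt, dy/dt) at the ends
        pts = np.asarray(points, dtype=float)
        # Chord-length parameterization: t is the cumulative Euclidean
        # distance along the polyline through the points
        seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
        t = np.concatenate(([0.0], np.cumsum(seg)))
        # Two clamped cubic splines: x(t) and y(t), with the first
        # derivatives fixed to the given tangents at both ends
        sx = CubicSpline(t, pts[:, 0],
                         bc_type=((1, start_tangent[0]), (1, end_tangent[0])))
        sy = CubicSpline(t, pts[:, 1],
                         bc_type=((1, start_tangent[1]), (1, end_tangent[1])))
        return sx, sy, t

    # Sampling the curve:
    # sx, sy, t = parametric_spline([(0, 0), (1, 2), (3, 1)], (1, 0), (0, -1))
    # ts = np.linspace(t[0], t[-1], 200)
    # xs, ys = sx(ts), sy(ts)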
The result of a spline interpolation will be a piecewise description of a suitable curve. If you wanted a single equation, a Lagrange interpolation would fit that bill, but the result might have odd twists and turns for many sets of input data.
I want to find the best rototranslation matrix between two sets of points.
The second set of points is the same as the first, but rotated, translated and affected by noise.
I tried to use the least squares method, but the solution is obviously only something close to a rotation matrix, with an incompatible structure (for example, where I should get a value that represents the cosine of an angle, I could get a value > 1).
I've searched for the constrained least squares method, but it seems to me that the constraints of a rototranslation matrix cannot be expressed in this form.
In this PDF I've stated the problem more formally:
http://dl.dropbox.com/u/3185608/minquad_en.pdf
Thank you for the help.
The short answer: what you will need here is "Principal Component Analysis".
Apply it to both sets of points, centered at their respective centers of mass. The PCA will effectively give you a rotation matrix for each set, aligned to that set's principal components. Multiplying the inverse of the first set's matrix by the second set's rotation gives you a matrix that takes the old (centered) set to the new one. Translations to and from the centers of mass can similarly be composed with the rotation to create a homogeneous matrix that maps the one set to the other.
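A rough sketch of that idea in Python (be aware that principal axes carry sign and order ambiguities, so a plain PCA alignment can fail for some configurations; the SVD-based method in the next answer handles this robustly):

    import numpy as np

    def pca_rotation(A, B):
        # A, B: (n, d) arrays of points, one point per row.
        # Align each centered cloud to its principal axes and compose
        # the two frames into one rotation estimate.
        Ac = A - A.mean(axis=0)
        Bc = B - B.mean(axis=0)
        _, _, vt_a = np.linalg.svd(Ac, full_matrices=False)  # axes of A
        _, _, vt_b = np.linalg.svd(Bc, full_matrices=False)  # axes of B
        # Maps A's principal frame onto B's (column-vector convention)
        return vt_b.T @ vt_a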
The book PRINCE, Simon J. D. Computer Vision: Models, Learning, and Inference. Cambridge University Press, 2012 gives, in Appendix "B.4 Reparameterization", some info about how to constrain a matrix to be a rotation matrix.
It seems to me that your problem also has a solution based on SVD: see the Kabsch algorithm, also described by Olga Sorkine-Hornung and Michael Rabinovich in Least-Squares Rigid Motion Using SVD and, more practically, by Nghia Kien Ho in Finding Optimal Rotation and Translation Between Corresponding 3D Points.
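For completeness, a minimal NumPy sketch of that SVD solution (the function name is my own):

    import numpy as np

    def kabsch(P, Q):
        # Find rotation R and translation t minimizing
        # sum_i || R @ P[i] + t - Q[i] ||^2  (Kabsch algorithm)
        P = np.asarray(P, dtype=float)
        Q = np.asarray(Q, dtype=float)
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        H = (P - cp).T @ (Q - cq)  # cross-covariance matrix
        U, S, Vt = np.linalg.svd(H)
        # Flip the last axis if the result would be a reflection
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        D = np.diag([1.0] * (P.shape[1] - 1) + [d])
        R = Vt.T @ D @ U.T
        t = cq - R @ cp
        return R, t

Unlike the unconstrained least-squares fit, R here is orthogonal with determinant +1 by construction, so values like a "cosine" greater than 1 cannot occur.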