Camera matrix from essential matrix

I am trying to extract the camera matrix from an essential matrix. I found some answers about this:
determine camera rotation and translation matrix from essential matrix
Rotation and Translation from Essential Matrix incorrect
In these answers, they suggest using newE, where [U,S,V] = svd(E) and newE = U*diag(1,1,0)*Vt. I don't understand why I need to use newE. As far as I know, the singular values of a matrix are unique, so replacing them with diag(1,1,0) seems to turn E into a completely different matrix.
I also read 'Multiple View Geometry in Computer Vision', but it only treats the ideal case, i.e., singular values of (1,1,0). I didn't find the reason for using newE.
Can anyone explain why people use newE?

If I understand your question correctly: since your source data (and thus E) is usually noisy real-world data, using diag(1,1,0) constrains the matrix to be of the correct scale and rank, algebraically enforcing the geometric constraints of an essential matrix.
Wikipedia also has a nice section explaining this.
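As a minimal NumPy sketch (assuming E is an estimated 3x3 essential matrix; the function name is mine, not from any of the linked answers):

import numpy as np

def enforce_essential(E):
    # Decompose the noisy estimate.
    U, S, Vt = np.linalg.svd(E)
    # Replace the noisy singular values with the ideal (1, 1, 0); the
    # overall scale is irrelevant because E is only defined up to scale.
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt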

Related

How is this alpha-blending equation to be understood?

So I was reading a document about displacement mapping and surface blending and came across this equation, which is supposed to be an alpha-blending equation:
where v1,...,vn is the value vector and w1,...,wn the weight vector (that is how the document describes them).
My interpretation of this equation: with n being the number of surfaces we are trying to blend together, the value vector represents, as the name says, the value of each surface (probably color-related?), and the weight vector describes the preference given to each surface (so the higher a weight, the more we would see that surface's color after the blend). The multiplication and division part is what I do not fully understand (I just interpret it as the 'it just works like that' part of the equation).
I couldn't find any similar equation anywhere so far, so either I didn't search deeply enough or I am not understanding something that is supposed to be very obvious. I want to make sure I fully understand this equation before reading further in the document, which builds on this idea.
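For reference, that interpretation matches the standard normalized weighted average; the document's equation is not reproduced above, so the following form is an assumption based on the description:

v = \frac{\sum_{i=1}^{n} w_i \, v_i}{\sum_{i=1}^{n} w_i}

Under that reading, the multiplications weight each surface's value by its preference, and the division renormalizes so the weights behave as if they summed to 1.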

How do I plug distance data into scipy's agglomerative clustering methods?

So, I have a set of texts I'd like to do some clustering analysis on. I've computed the Normalized Compression Distance between every pair of texts, and now I basically have a complete graph with weighted edges that looks something like this:
text1, text2, 0.539
text2, text3, 0.675
I'm having tremendous difficulty figuring out the best way to plug this data into scipy's hierarchical clustering methods. I can probably convert the distance data into a table like the one on this page. How can I format this data so that it can easily be plugged into scipy's HAC code?
You're on the right track with converting the data into a table like the one on the linked page (a redundant distance matrix). According to the documentation, you should be able to pass that directly into scipy.cluster.hierarchy.linkage or a related function, such as scipy.cluster.hierarchy.single or scipy.cluster.hierarchy.complete. The related functions explicitly specify how distance between clusters should be calculated. scipy.cluster.hierarchy.linkage lets you specify whichever method you want, but defaults to single link (i.e. the distance between two clusters is the distance between their closest points). All of these methods will return a multidimensional array representing the agglomerative clustering. You can then use the rest of the scipy.cluster.hierarchy module to perform various actions on this clustering, such as visualizing or flattening it.
However, there's a catch. As of the time this question was written, you couldn't actually use a redundant distance matrix, despite the fact that the documentation says you can. Based on the fact that the github issue is still open, I don't think this has been resolved yet. As pointed out in the answers to the linked question, you can get around this issue by passing the complete distance matrix into the scipy.spatial.distance.squareform function, which will convert it into the format which is actually accepted (a flat array containing the upper-triangular portion of the distance matrix, called a condensed distance matrix). You can then pass the result to one of the scipy.cluster.hierarchy functions.
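A minimal sketch of that pipeline, assuming pairwise distances in the question's "text1, text2, 0.539" form (the third pair and the 0.6 threshold below are made up for illustration):

import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

pairs = [("text1", "text2", 0.539),
         ("text2", "text3", 0.675),
         ("text1", "text3", 0.812)]

# Build the redundant (square, symmetric) distance matrix.
labels = sorted({name for a, b, _ in pairs for name in (a, b)})
index = {name: i for i, name in enumerate(labels)}
dist = np.zeros((len(labels), len(labels)))
for a, b, d in pairs:
    dist[index[a], index[b]] = dist[index[b], index[a]] = d

# squareform converts it to the condensed form that linkage accepts.
condensed = squareform(dist)
Z = linkage(condensed, method="single")

# Example: flatten the hierarchy at a distance threshold of 0.6.
clusters = fcluster(Z, t=0.6, criterion="distance")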

Multiscale morphological dilation and erosion

Can anyone please specify what is meant by multiscale morphological filtering? I understand the basic concepts of dilation and erosion, but in multiscale filtering a scaled structuring function is used. What does the term 'scaled' mean?
More relevant information can be found in the linked paper. I want to apply this structuring element in Matlab code but cannot do so. Can anyone help me?
Here the multiscale operator is described as:
F(x; s1, s2) = (f ⊖ s1) ⊕ s2
where f(x) is the original function, s1(x) and s2(x) are the structuring functions, ⊖ denotes erosion, and ⊕ denotes dilation. Apparently, erosion and dilation with different scales can filter positive and negative noise more effectively. This operation satisfies the four quantification principles of morphological filters. (from the paper)
This operator is known in the morphology community as an Alternating Sequential Filter, which performs filtering using an alternating series of dilations and erosions (or openings and closings) of increasing radii on the same image. The series of radii for the given structuring function can be chosen based on the structure of the object/detail to be extracted or filtered. Note that two different structuring elements, s1 and s2, are used to set different scales for the erosions and dilations. This Matlab thread discusses how to test it.
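As a rough illustration in Python (a sketch only; the flat square structuring elements and scipy.ndimage calls are my assumptions, not the paper's exact s1 and s2):

import numpy as np
from scipy import ndimage

def alternating_sequential_filter(image, max_radius):
    # Alternate openings and closings with structuring elements of
    # increasing size; openings suppress positive (bright) noise,
    # closings suppress negative (dark) noise.
    out = image.astype(float)
    for r in range(1, max_radius + 1):
        size = 2 * r + 1  # the structuring element grows with each scale
        out = ndimage.grey_opening(out, size=size)
        out = ndimage.grey_closing(out, size=size)
    return out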

How to plot a Pearson correlation given a time series?

I am using the code from this website http://blog.chrislowis.co.uk/2008/11/24/ruby-gsl-pearson.html to compute a Pearson correlation between two time series, like so:
require 'gsl'

pearson_correlation = GSL::Stats::correlation(
  GSL::Vector.alloc(first_metrics),
  GSL::Vector.alloc(second_metrics)
)
This returns a number such as -0.2352461593569471.
I'm currently using the highcharts library and am feeding it two sets of timeseries data. Given that I have a finite time series for both sets, can I do something with this number (-0.2352461593569471) to create a third time series showing the slope of this curve? If anyone can point me in the right direction I'd really appreciate it!
No, correlation doesn't tell you anything about the slope of the line of best fit. It just tells you approximately how much of the variability in one variable (or one time series, in this case) can be explained by the other. There is a reasonably good description here: http://www.graphpad.com/support/faqid/1141/.
How you deal with the data in your specific case is highly dependent on what you're trying to achieve. Are you trying to show that variable X causes variable Y? If so, you could start by dropping the time-series-ness, and just treat the data as paired values, and use linear regression. If you're trying to find a model of how X and Y vary together over time, you could look at multivariate linear regression (I'm not very familiar with this, though).
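For the paired-values route, a small Python sketch of what linear regression gives you (the question uses Ruby/GSL, but the idea is the same; the sample arrays are made up):

import numpy as np
from scipy.stats import linregress

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # hypothetical series A
y = np.array([2.1, 1.9, 1.4, 1.6, 1.1])   # hypothetical series B

result = linregress(x, y)
print(result.slope)        # slope of the line of best fit
print(result.rvalue)       # Pearson correlation coefficient
print(result.rvalue ** 2)  # share of variance explained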

Constrained least squares for rototranslation

I want to find the best rototranslation matrix between two sets of points.
The second set of points is the same as the first, but rotated, translated, and affected by noise.
I tried to use the least squares method, but obviously the solution is usually only similar to a rotation matrix, with an incompatible structure (for example, where I should get a value that represents the cosine of an angle, I can get a value > 1).
I've searched for constrained least squares methods, but it seems to me that the constraints of a rototranslation matrix cannot be expressed in that form.
In this PDF I've stated the problem more formally:
http://dl.dropbox.com/u/3185608/minquad_en.pdf
Thank you for the help.
The short answer: what you will need here is Principal Component Analysis.
Apply it to both sets of points, centered at their respective centers of mass. The PCA effectively gives you, for each set, a rotation matrix aligned with that set's principal components. Multiplying the new set's rotation by the inverse of the original set's rotation yields a matrix that takes the old (centered) set to the new one. The translations to and from the two centroids can similarly be composed with this rotation to create a homogeneous matrix that maps the one set to the other.
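A hedged sketch of that idea (the example points are made up, and note that PCA axes carry sign and ordering ambiguities, so this is fragile on real data; the SVD method mentioned below is more robust):

import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(100, 3))                 # hypothetical original points
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])             # 90-degree rotation about z
Q = P @ Rz.T + np.array([5.0, -2.0, 1.0])     # rotated and translated copy

def pca_axes(X):
    # Columns are the principal axes of the centered point set.
    _, vecs = np.linalg.eigh(np.cov((X - X.mean(axis=0)).T))
    return vecs

# Compose the alignments: map the old set's axes onto the new set's axes.
R = pca_axes(Q) @ pca_axes(P).T
t = Q.mean(axis=0) - R @ P.mean(axis=0)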
The book Prince, Simon J. D., Computer Vision: Models, Learning, and Inference, Cambridge University Press, 2012, gives, in Appendix B.4 "Reparameterization", some information about how to constrain a matrix to be a rotation matrix.
It seems to me that your problem also has a solution based on SVD: see the Kabsch algorithm, also described by Olga Sorkine-Hornung and Michael Rabinovich in
Least-Squares Rigid Motion Using SVD and, more practically, by Nghia Kien Ho in Finding Optimal Rotation and Translation Between Corresponding 3D Points.
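A compact NumPy sketch of the SVD (Kabsch) solution, assuming P and Q are N x 3 arrays of corresponding points (the function and variable names are mine):

import numpy as np

def rigid_transform(P, Q):
    # Center both point sets at their centroids.
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    P0, Q0 = P - cP, Q - cQ
    # SVD of the 3x3 cross-covariance matrix.
    U, _, Vt = np.linalg.svd(P0.T @ Q0)
    # Correct for a possible reflection so that det(R) = +1,
    # guaranteeing a pure rotation rather than a mirror.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t  # so that Q is approximately P @ R.T + t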