How to label x-axis with reciprocal relationship in linePlotImageDisplay? - dm-script

I have a 1D array, and if I simply display it, the x-axis is labeled by the pixel index (e.g. 1, 2, 3, 4, ...). Now I want the x-axis to show 1, 1/2, 1/3, 1/4 instead. I know imageSetDimensionScale can scale the x-axis by a constant factor, but how do I calibrate the x-axis with such a reciprocal relationship?

You can't.
LinePlots only support linear scaling factors. The 'best' you can do is to specify that the units are reciprocal while still using the linear scale, i.e. make a graph like the one sketched below.
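DM-script itself can't produce such an axis, so purely as an illustration of the idea (in Python/matplotlib, not DM-script), this is what "linear scale, reciprocal tick labels" looks like; the tick positions and data here are made up:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.ticker import FixedLocator, FixedFormatter

data = np.random.rand(100)           # stand-in for the 1D array
x = np.arange(1, len(data) + 1)      # pixel index 1, 2, 3, ...

fig, ax = plt.subplots()
ax.plot(x, data)
# The axis itself stays linear in the pixel index; only the labels
# are written as reciprocals.
ticks = [1, 2, 3, 4, 10, 50, 100]
labels = ['1'] + [f'1/{t}' for t in ticks[1:]]
ax.xaxis.set_major_locator(FixedLocator(ticks))
ax.xaxis.set_major_formatter(FixedFormatter(labels))
ax.set_xlabel('reciprocal units (1/px)')
plt.show()
```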

Related

Finding the relative shift and rotation that gives the maximum overlap for two point-clouds

I have two point clouds in 3D Euclidean space, and I want to find the relative rotation and translation between them that gives the most overlap (or, in other words, that minimizes the Wasserstein distance, or any other optimal-transport metric, between the point clouds).
I know that for translation one would simply align the centers of mass. Rotation, however, doesn't seem very intuitive to me.
What I have tried:
My approach was to compute the inertia tensor and rotate the point clouds to align their principal axes (see the sketch below). However, this is numerically unstable for point clouds with high degrees of symmetry.
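For reference, a minimal numpy sketch of that centering + principal-axes idea (not a robust method, as noted above: eigenvector signs remain ambiguous, and symmetric clouds break it):

```python
import numpy as np

def principal_axes_align(src, dst):
    """Crudely align point cloud src (N x 3) to dst (M x 3): match the
    centers of mass, then rotate src's principal axes onto dst's."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    # Columns of P and D are the principal axes (eigenvectors of the
    # covariance / inertia tensor), sorted by eigenvalue.
    _, P = np.linalg.eigh(np.cov(src_c.T))
    _, D = np.linalg.eigh(np.cov(dst_c.T))
    R = D @ P.T                      # maps src axes onto dst axes
    if np.linalg.det(R) < 0:         # make it a proper rotation
        D[:, 0] = -D[:, 0]
        R = D @ P.T
    return src_c @ R.T + dst.mean(axis=0)
```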

Is the IOU in Tensorflow Object Detection API wrong?

I just dug a bit through the Tensorflow Object Detection API code, especially the eval_util part, as I wanted to implement the COCO metrics.
But I noticed that the metrics are calculated solely from the bounding boxes, whose coordinates are normalized to [0, 1].
No aspect ratios or absolute coordinates are used.
So doesn't this mean that the intersection-over-union values calculated from these results are incorrect?
Let's take a 200x100 pixel image as an example.
If a box were off by 20 px to the left, that would be 0.1 in normalized coordinates.
But if it were off by 20 px toward the top, that would be 0.2 in normalized coordinates.
Doesn't that mean that being off toward the top penalizes the score more heavily than being off to the side?
I believe the predicted coordinates are resized to the absolute image coordinates in the eval binary.
But the other thing I would say is that IOU is scale-invariant, in the sense that if you scale two boxes by some factor, they will still have the same IOU overlap. As an example, if we scale by 2 in the x direction and by 3 in the y direction:
If A = (x1, y1, x2, y2) and B = (u1, v1, u2, v2), then
IOU(A, B) = IOU((2*x1, 3*y1, 2*x2, 3*y2), (2*u1, 3*v1, 2*u2, 3*v2))
What this means is that evaluating in normalized coordinates should give the same result as evaluating in absolute coordinates.
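A quick sanity check of that invariance claim (a stand-alone sketch, not the API's own IOU implementation):

```python
def iou(a, b):
    """IOU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def scaled(box, sx, sy):
    x1, y1, x2, y2 = box
    return (sx * x1, sy * y1, sx * x2, sy * y2)

A = (10, 10, 60, 40)
B = (30, 20, 80, 50)
print(iou(A, B))                               # 0.25
print(iou(scaled(A, 2, 3), scaled(B, 2, 3)))   # 0.25 -- unchanged
```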

Computing Minkowski Difference For Circles and Convex Polygons

I need to implement a Minkowski sum function that can return the Minkowski sum of either two circles, two convex polygons, or a circle and a convex polygon. I found this thread that explained how to do this for convex polygons, but I'm not sure how to do it for a circle and a polygon. Also, how would I even represent the answer? I'd like the algorithm to run in O(n) time, but beggars can't be choosers.
Circle + Circle is trivial -- just add the center points and add the radii. Circle + ConvexPoly is nearly as simple: move each segment perpendicularly outward by the circle's radius, and connect adjacent segments with circular arcs centered at the original polygon's vertices. Then translate the whole thing by the circle's center point.
As for how you represent the answer: Well, it depends on what you want to do with it. You could convert it to a NURBS if you just want to draw it with a vector drawing library. You could approximate the circular arcs with polylines if you just want a polygonal approximation. Or you might store it as is -- "this polygon, expanded by such-and-such a radius". That would be the best choice for things like raycasting, for instance. Or as a compromise, you could connect adjacent segments linearly instead of with circular arcs, and store it as the union of the (new) convex polygon and a list of circles at the vertices.
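To make the circle cases and the "store it as is" representation concrete, here is a minimal Python sketch (the type names are invented for illustration):

```python
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class Circle:
    center: Point
    radius: float

@dataclass
class RoundedPolygon:
    """A convex polygon expanded outward by radius: polygon (+) disc."""
    vertices: List[Point]    # counter-clockwise
    radius: float

def sum_circle_circle(a: Circle, b: Circle) -> Circle:
    # Minkowski sum of two discs: add the centers, add the radii.
    cx = a.center[0] + b.center[0]
    cy = a.center[1] + b.center[1]
    return Circle((cx, cy), a.radius + b.radius)

def sum_circle_poly(c: Circle, poly: List[Point]) -> RoundedPolygon:
    # Translate the polygon by the circle's center and keep the radius
    # symbolically instead of materializing offset segments and arcs.
    moved = [(x + c.center[0], y + c.center[1]) for x, y in poly]
    return RoundedPolygon(moved, c.radius)
```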
Oh, about ConvexPoly + ConvexPoly. That's the trickiest one, but still straightforward. The basic idea is that you take the list of edge vectors for each polygon (starting from some particular extremal point, like the point on each polygon with the lowest X coordinate), then merge the two lists together, keeping the result sorted by angle. Sum the two starting points, then apply each vector from the merged list in turn to produce the remaining points (see the sketch below).
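A sketch of that edge-vector merge in Python, following the well-known O(n + m) formulation; polygons are assumed to be counter-clockwise vertex lists with no repeated endpoint:

```python
def reorder(poly):
    """Rotate the vertex list so it starts at the lowest, then leftmost point."""
    pos = min(range(len(poly)), key=lambda i: (poly[i][1], poly[i][0]))
    return poly[pos:] + poly[:pos]

def minkowski_sum(P, Q):
    """Minkowski sum of two convex CCW polygons, in O(len(P) + len(Q))."""
    P, Q = reorder(P), reorder(Q)
    P, Q = P + P[:2], Q + Q[:2]      # duplicate vertices for cyclic edges
    result, i, j = [], 0, 0
    while i < len(P) - 2 or j < len(Q) - 2:
        result.append((P[i][0] + Q[j][0], P[i][1] + Q[j][1]))
        # Cross product of the two current edge vectors decides which
        # edge has the smaller polar angle and is consumed next.
        cross = ((P[i + 1][0] - P[i][0]) * (Q[j + 1][1] - Q[j][1])
                 - (P[i + 1][1] - P[i][1]) * (Q[j + 1][0] - Q[j][0]))
        if cross >= 0 and i < len(P) - 2:
            i += 1
        if cross <= 0 and j < len(Q) - 2:
            j += 1
    return result

# Example: a square plus a triangle.
square = [(0, 0), (2, 0), (2, 2), (0, 2)]
triangle = [(0, 0), (1, 0), (0, 1)]
print(minkowski_sum(square, triangle))
# [(0, 0), (3, 0), (3, 2), (2, 3), (0, 3)]
```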

Does matplotlib autoscale have a default minimum tick size? Can this be changed?

I am using pyplot.scatter(x_coords, y_coords) to plot some points. When the points have very fine granularity, the tick spacing is not scaled below 0.0002 as it should be.
I have tried ax.autoscale(tight=True), but the result did not change. Is there a way to autoscale my axes when the points have such a small spread, without manually finding and setting the axis limits?
These two graphs illustrate my problem. Both are generated by the same code, just with different data sets. The values along the y-axis of the lower graph are not all 0 -- they are spread out on the order of 10^-9.
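For what it's worth, a minimal reproduction of the setup described (the data here is invented to mimic the ~10^-9 spread), plus one workaround that may help -- forcing scientific tick labels without the offset:

```python
import numpy as np
import matplotlib.pyplot as plt

x_coords = np.arange(10)
y_coords = np.random.uniform(-1e-9, 1e-9, size=10)   # tiny spread

fig, ax = plt.subplots()
ax.scatter(x_coords, y_coords)
ax.autoscale(tight=True)
# Ask for scientific notation with no offset, so the 1e-9 spread shows
# up in the tick labels instead of collapsing to what looks like all-0.
ax.ticklabel_format(axis='y', style='sci', scilimits=(0, 0), useOffset=False)
plt.show()
```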

How to depict multidimensional vectors on a two-dimensional plot?

I have a set of vectors in a multidimensional space (possibly several thousand dimensions). In this space I can calculate the distance between two vectors (as the cosine of the angle between them, if it matters). What I want is to visualize these vectors while preserving these distances. That is, if vector a is closer to vector b than to vector c in the multidimensional space, it must also be closer to b than to c on the 2-dimensional plot. Is there any kind of diagram that can clearly depict this?
I don't think so. Imagine any two-dimensional picture of a tetrahedron: there is no way to depict its four vertices in two dimensions with equal distances from each other. So you will have a hard time depicting more than three n-dimensional vectors in two dimensions while conserving their mutual distances.
(But right now I can't think of a rigorous proof.)
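For the tetrahedron case at least, a short argument works:

```latex
% Four points with pairwise distance d cannot lie in a plane:
% any three of them form an equilateral triangle of side d, and the only
% planar point equidistant from its three vertices is the circumcenter, at
r = \frac{d}{\sqrt{3}} \neq d,
% so no fourth coplanar point can be at distance d from all three.
```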
Update:
OK, second idea, maybe it's dumb: try to find clusters of closely associated objects/texts, then calculate the center (mean vector) of each cluster. This reduces the problem space. First find a 2D arrangement of the clusters that preserves their relative distances. Then insert the individual vectors, accounting only for their relative distances within a cluster and their distances to the centers of the two or three closest clusters (a rough sketch follows below).
This approach will be OK for a large number of vectors, but it will not be accurate: there will always be somewhat similar vectors that end up in distant places.
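A rough sketch of that two-stage idea in Python (KMeans and MDS from scikit-learn are my choice of tools here, not something prescribed above, and the within-cluster placement is deliberately crude):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import MDS

def two_stage_layout(vectors, n_clusters=10, seed=0):
    """Embed high-dimensional row vectors in 2D: lay out the cluster
    centers first, then scatter each vector around its cluster's
    2D position."""
    km = KMeans(n_clusters=n_clusters, random_state=seed).fit(vectors)
    # Step 1: a 2D arrangement of the cluster centers that tries to
    # preserve their mutual distances.
    centers_2d = MDS(n_components=2, random_state=seed).fit_transform(
        km.cluster_centers_)
    # Step 2: place each vector at its true distance from its own center,
    # in an arbitrary direction (only the within-cluster radius is kept).
    rng = np.random.default_rng(seed)
    coords = np.zeros((len(vectors), 2))
    for i, v in enumerate(vectors):
        c = km.labels_[i]
        r = np.linalg.norm(v - km.cluster_centers_[c])
        theta = rng.uniform(0.0, 2.0 * np.pi)
        coords[i] = centers_2d[c] + r * np.array([np.cos(theta), np.sin(theta)])
    return coords
```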