OxyPlot HeatMapSeries: make the origin start from the top left instead

For the 2D array passed in as Data for a HeatMapSeries in the OxyPlot library, the element at [0,0] is rendered at the bottom-left corner.
How do I go about manipulating the data or the HeatMapSeries so that the [0,0] element starts from the top-left corner instead?
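For what it's worth, the generic data-side workaround is simply to reverse the array along the axis that is rendered bottom-to-top before handing it to the series, so the values from index 0 end up drawn at the top. A sketch of the idea in Python/NumPy (purely illustrative, not OxyPlot's API):

import numpy as np

data = np.arange(12).reshape(3, 4)   # element [0, 0] would be drawn at the bottom left
flipped = data[::-1, :]              # reverse the rows: old row 0 becomes the last row

# The renderer still draws the last index at the top, but that slot now holds
# the values that used to live in row 0, so they appear at the top left.
print(flipped[-1, 0] == data[0, 0])  # True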

Related

How to display 2D bar graph with number of counts on Y-axis and month/day on x-axis in LabVIEW?

I want to plot a 2D bar graph in LabVIEW showing the total number of counts on the Y-axis and month/day (abbreviated) on the X-axis. How can I do it?
Drop an XY graph.
Go to the plot properties and set Bar Plots to have the style you want and Interpolation to be just points.
If you open the context help window and hover over the terminal for the graph, you can see the data types it supports. You want the cluster which has a 1D array for the X values (timestamps of a time in the specific day) and a 1D array for the Y values. Generate your data in that format and wire it into the graph.
Right click the X scale and select Formatting.... In the properties dialog, set the format to be absolute time and to only show the day and the month.
Run the VI and you should have your graph.

Line Profile Diagonal

When you make a line profile of all x-values or all y-values the extraction from each pixel is clear. But when you take a line profile along a diagonal, how does DM choose which pixels to use in the one dimensional readout?
Not really a scripting question, but I'm rather certain that it uses bi-linear interpolation between the grid points along the drawn line. (And if perpendicular integration is enabled, the interpolated values are integrated across the line's width.) It's the same interpolation you would get when rotating an image.
In fact, you can think of it as a rotate-image (bi-linearly interpolated) with a 'cut-out' afterwards, potentially summed/projected onto the new X-axis.
Here is an example
Assume we have a 5 x 4 image, which gives the grid as shown below.
I'm drawing top-left corners to indicate the coordinate-system pixel convention used in DigitalMicrograph, where
(x/y)=(0/0) is the top-left corner of the image
Now extract a LineProfile from (1/1) to (4/3). I have highlighted the pixels for those coordinates.
Note that a line drawn from the corners seems to be shifted by half a pixel from what feels 'natural', but that is a consequence of the top-left-corner convention. I think this is why a LineProfile marker is shown shifted compared to e.g. LineAnnotations.
In general, this top-left-corner convention makes schematics with 'pixels' seem counter-intuitive. It is easier to think of the image simply as a grid with values at points at the given coordinates than as square pixels.
Now the maths.
The exact profile has a length of sqrt(dX^2 + dY^2) = sqrt(3^2 + 2^2) = sqrt(13) ≈ 3.61.
As we can only have profiles with an integer number of channels, we actually extract a LineProfile of length = 4, i.e. we round up.
The angle of the profile is given by the arc-tangent of dY over dX, i.e. atan(2/3) ≈ 33.7°.
So to extract the profile, we 'rotate' the grid by that angle - done by bilinear interpolation - and then extract the profile as a grid of size 4 x 1.
This means the 'values' in the profile come from four sample points along the line, each of which is a bi-linearly interpolated value from the four closest grid points of the original image.
In case the LineProfile is averaged over a certain width W, you do the same thing but:
extract a 2D grid of size L x W centered symmetrically over the line, i.e. the grid is shifted by (W-1)/2 perpendicular to the profile direction
sum the values along W
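Here is a rough sketch of that bilinear sampling in plain Python/NumPy (not DM-script; the function name, the unit-spaced sampling convention, and the clipping at the border are my assumptions for illustration):

import numpy as np

def line_profile(img, x0, y0, x1, y1):
    # Number of channels: round the exact length sqrt(dX^2 + dY^2) up to an integer
    length = int(np.ceil(np.hypot(x1 - x0, y1 - y0)))
    # Sample positions spread along the drawn line
    xs = np.linspace(x0, x1, length)
    ys = np.linspace(y0, y1, length)
    # Bilinear interpolation from the four closest grid points of each sample
    ix = np.clip(np.floor(xs).astype(int), 0, img.shape[1] - 2)
    iy = np.clip(np.floor(ys).astype(int), 0, img.shape[0] - 2)
    fx, fy = xs - ix, ys - iy
    top = (1 - fx) * img[iy, ix] + fx * img[iy, ix + 1]
    bottom = (1 - fx) * img[iy + 1, ix] + fx * img[iy + 1, ix + 1]
    return (1 - fy) * top + fy * bottom

img = np.arange(20, dtype=float).reshape(4, 5)   # stand-in for the 5 x 4 example image
print(line_profile(img, 1, 1, 4, 3))             # profile from (1/1) to (4/3), 4 values

A width-averaged profile would repeat the same sampling for W parallel lines shifted perpendicular to the profile direction and sum them.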

How to change the anchor point from the top-left corner of a transformation matrix to the bottom-left corner?

Say, I have an image on an HTML page.
I apply an affine transformation to the image using CSS3 matrix function.
It looks like:
img#myimage {
transform: matrix(a, b, c, d, tx, ty);
/* use -webkit-transform, -moz-transform etc. */
}
The origin of an HTML page is the top-left corner and the y-axis is inverted.
I'm trying to put the same image in an environment (cocos2d) where the origin is the bottom-left corner and the y-axis is upright.
To get the same result in the other environment, I need to transform the origin somehow and reflect that in the resulting CGAffineTransform.
It would be great if I can get some help with the matrix math that goes here. (I'm not so good with matrices.)
The following formula works for converting a position from CSS3 to cocos2d:
cocos2dY = screenHeight - css3Y - objectHeight
Explanation:
To make the origin for the cocos2d environment the same as for the CSS3 environment, we only have to add the screen height to the cocos2d body's y coordinate.
E.g. the screen size is (100, 100) and the body is a point object. If you place it at (0, 0) in CSS3, it sits at the top-left corner. If we add the screen height to the y coordinate for cocos2d, the object is placed at (0, 100), which is the top-left corner for cocos2d as well.
To make the coordinates the same, since the y axis is inverted, we have to subtract the CSS3 y coordinate from the screen height for cocos2d. Suppose we place the same point object from the previous example at (0, 10) in CSS3; we would place it at (0, 100 - 10) in cocos2d, which is the same position on the screen.
Since our body will NOT always be a point object, we also have to take care of its anchor point. Suppose the body's height is 20 and we place it at (0, 10) in CSS3; it is then positioned from its top-left corner and extends downwards because the y axis is inverted.
Hence we also have to subtract the body's total height from the screen height and the y coordinate, placing it at (0, 100 - 10 - 20), which puts the body at the same place in the cocos2d environment.
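A minimal sketch of that position conversion (plain Python for illustration; the function and parameter names are made up, not a cocos2d API):

def css3_to_cocos2d_y(css_y, object_height, screen_height):
    # cocos2d's origin is at the bottom-left with y pointing up, so flip the
    # y axis and subtract the object's height (CSS3 positions from the top edge).
    return screen_height - css_y - object_height

print(css3_to_cocos2d_y(10, 20, 100))   # 70, i.e. (0, 100 - 10 - 20) from the example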
I hope I am correct and clear :)

depth image based rendering

I have to implement depth-image-based rendering. Given a 2D image and a depth map, the algorithm will generate a virtual view - what the scene would look like if the camera were placed in a different position. I wrote this function, where V is the matrix with the pixels of the 2D view, D holds the pixels of the depth map, and camerashift is a parameter.
Z=1.1-D./255; is a normalization. I tried to follow these instructions:
For each pixel in the depth map, compute the disparity that results from the depth. For each pixel in the source 2D image, find a new location for it in the virtual view: old location + disparity of that specific pixel.
The function doesn't work very well. What's wrong?
function [virtualView] = renderViews(V, D, camerashift)
Z = 1.1 - D./255;                   % normalise the depth map
[M, N] = size(Z);
for m = 1:M
    for n = 1:N
        d = camerashift / Z(m,n);   % disparity derived from the depth
        shift = round(abs(d));
        V2(m,n) = V(m+shift, n);    % sample the source pixel shifted by the disparity
    end
end
imshow(V2)
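For comparison, a rough Python/NumPy sketch of the quoted instructions (not a drop-in fix for the MATLAB function; the horizontal shift direction and the border clamping are my assumptions): compute a disparity from the normalised depth and move each source pixel along its row, keeping indices inside the image.

import numpy as np

def render_view(V, D, camerashift):
    # V: greyscale source image, D: depth map with values 0..255
    Z = 1.1 - D / 255.0                        # same normalisation as above
    M, N = V.shape
    virtual = np.zeros_like(V)
    for m in range(M):
        for n in range(N):
            disparity = int(round(abs(camerashift / Z[m, n])))
            n_new = min(N - 1, n + disparity)  # clamp to the right border
            virtual[m, n_new] = V[m, n]        # old location + disparity
    return virtual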

Moving x or y ticks in Matplotlib up

I have a numpy array with random values. I have plotted the values in the array using imshow() so that each element shows as a grey-scale square. The problem is that the labels (0, 1, 2, etc.) start at the bottom corner. I would like to move them along a bit so they are centred underneath each square. Is there a straightforward way of doing this?
Just found http://matplotlib.sourceforge.net/examples/pylab_examples/image_interp.html, and the most straightforward way is just to use grid(True). Woohoo!
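For reference, a minimal sketch (the random array and figure set-up are made up): with imshow's default extent each cell's centre sits at an integer coordinate, so putting the ticks at 0, 1, 2, ... centres the labels under the squares, and grid(True) then draws lines through those same positions.

import numpy as np
import matplotlib.pyplot as plt

data = np.random.rand(5, 5)
fig, ax = plt.subplots()
ax.imshow(data, cmap='gray', interpolation='nearest')

# With the default extent, cell centres lie at integer coordinates, so integer
# tick positions place each label under the middle of a square.
ax.set_xticks(np.arange(data.shape[1]))
ax.set_yticks(np.arange(data.shape[0]))
ax.grid(True)
plt.show()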