Convert Index To UI in Ocean

What is the difference between the index in Ocean environment and index in User environment?
Why should one use Convert Index To/From UI?

Let's assume in this case that we have a grid oriented such that I increments from south to north and J increments from west to east, i.e. the standard upper-right Cartesian orientation. By convention, when you access a pillar grid in Ocean via an Index3, the minimum possible index (0, 0) is at the lower-left corner of the grid. In this case the UI index and the Ocean index for the grid align.
Now, if you had another grid where the I axis incremented from north to south (J axis unchanged), the lower-left corner of that grid would be at I = maximum, J = 0. Ocean, however, would still return this index as (0, 0).
Ocean's convention is that the origin of the grid, (0, 0), is at the lower-left corner.
Ocean provides the methods ConvertIndexToUI and ConvertIndexFromUI to convert between the grid as the user has described it and the Ocean convention. This means that if you are performing an operation where the indexing matters, you should first call the appropriate conversion method.
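A minimal sketch of what such a conversion does, purely illustrative (the actual Ocean API is .NET, and `index_to_ui` is a hypothetical helper name), for a grid whose I and/or J axis was defined running in the opposite direction:

```python
def index_to_ui(i, j, imax, jmax, i_reversed=False, j_reversed=False):
    """Map an Ocean-convention index (origin at the lower-left corner)
    to the user-facing (UI) index for a grid whose I and/or J axis was
    defined in the reversed direction. imax/jmax are the maximum index
    values along each axis."""
    ui_i = imax - i if i_reversed else i
    ui_j = jmax - j if j_reversed else j
    return ui_i, ui_j
```

Note that the mapping is its own inverse: applying it twice returns the original index, which is why a single pair of To/From methods suffices.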

Line Profile Diagonal

When you make a line profile of all x-values or all y-values the extraction from each pixel is clear. But when you take a line profile along a diagonal, how does DM choose which pixels to use in the one dimensional readout?
Not really a scripting question, but I'm rather certain that it uses bilinear interpolation between the grid points along the drawn line. (And if perpendicular integration is enabled, it does so as an integral.) It's the same interpolation you would get for a "rotate" image.
In fact, you can think of it as a rotated image (bilinearly interpolated) with a 'cut-out' afterwards, potentially summed/projected onto the new X axis.
Here is an example.
Assume we have a 5 x 4 image, which gives the grid shown below.
I'm drawing top-left corners to indicate the coordinate-system pixel convention used in DigitalMicrograph, where (x/y) = (0/0) is the top-left corner of the image.
Now extract a LineProfile from (1/1) to (4/3). I have highlighted the pixels for those coordinates.
Note that a line drawn from the corners seems to be shifted by half a pixel from what feels 'natural', but that is a consequence of the top-left-corner convention. I think this is why a LineProfile marker is shown shifted compared to, e.g., LineAnnotations.
In general, this top-left-corner convention makes schematics with 'pixels' seem counter-intuitive. It is easier to think of the image simply as a grid with values at points at the given coordinates rather than as square pixels.
Now the maths.
The exact profile has a length of L = sqrt(dX^2 + dY^2) = sqrt(3^2 + 2^2) = sqrt(13), which is approximately 3.61.
As we can only have profiles with an integer number of channels, we actually extract a LineProfile of length = 4, i.e. we round up.
The angle of the profile is given by the arctangent of dY and dX: atan2(2, 3), which is approximately 33.7 degrees.
So to extract the profile, we 'rotate' the grid by that angle - done by bilinear interpolation - and then extract the profile as grid of size 4 x 1:
This means the 'values' in the profile are taken from four sample points along the line, each of which is bilinearly interpolated from the four closest grid points of the original image.
In case the LineProfile is averaged over a certain width W, you do the same thing but:
extract a 2D grid of size L x W centered symmetrically over the line, i.e. the grid is shifted by (W-1)/2 perpendicular to the profile direction, and
sum the values along W
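The procedure above (round the length up, step along the line at the given angle, sample by bilinear interpolation) can be sketched as follows. This is an illustration of the technique, not DM's actual implementation; `bilinear` and `line_profile` are hypothetical helper names, and edge-case behavior (e.g. exact integer lengths) may differ in DM:

```python
import math

def bilinear(img, x, y):
    """Sample an image (list of rows, img[y][x]) at a fractional (x, y)
    position by bilinear interpolation between the four nearest grid points."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def line_profile(img, p0, p1):
    """Extract a 1D profile from p0 to p1: round the exact length up to
    an integer number of channels, then sample at unit spacing along the
    line direction."""
    dx, dy = p1[0] - p0[0], p1[1] - p0[1]
    length = math.ceil(math.hypot(dx, dy))   # sqrt(13) -> 4 channels
    theta = math.atan2(dy, dx)               # ~33.7 degrees in the example
    ux, uy = math.cos(theta), math.sin(theta)
    return [bilinear(img, p0[0] + k * ux, p0[1] + k * uy)
            for k in range(length)]
```

On an image whose values form a plane (e.g. value = x + y), bilinear interpolation is exact, which makes the sketch easy to sanity-check.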

Get lateral and longitudinal position of vehicles

I need to get relative lateral coordinates (the distance to the vehicle from one side of the lane, i.e. Px) of vehicles.
I know that SUMO provides absolute x,y coordinates and distance traveled (Py).
Is there a way to get Px information at each timestep directly like Py?
This information is part of the raw dump (or netstate dump, see https://sumo.dlr.de/wiki/Simulation/Output/RawDump), but only if the sublane model is active. It is given as an absolute deviation from the center line of the lane (and is always 0 if the sublane model is not active).
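A small sketch of pulling the lateral offsets out of such a dump with the standard library XML parser. The excerpt below is a made-up minimal example, and the attribute name `posLat` is an assumption; check the attribute names in the dump produced by your SUMO version:

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal netstate-dump excerpt; with the sublane model
# active, each <vehicle> carries a lateral deviation from the lane's
# center line (attribute name assumed to be "posLat" here).
sample = """
<netstate>
  <timestep time="0.00">
    <edge id="e1">
      <lane id="e1_0">
        <vehicle id="veh0" pos="12.30" posLat="0.85" speed="10.0"/>
      </lane>
    </edge>
  </timestep>
</netstate>
"""

def lateral_positions(xml_text):
    """Collect (time, vehicle id, longitudinal pos, lateral pos) tuples
    from a netstate dump; lateral pos defaults to 0 when absent."""
    root = ET.fromstring(xml_text)
    rows = []
    for step in root.iter("timestep"):
        t = float(step.get("time"))
        for veh in step.iter("vehicle"):
            rows.append((t, veh.get("id"),
                         float(veh.get("pos")),
                         float(veh.get("posLat", "0"))))
    return rows
```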

Compute direction of gravity vector

As I understand it, to compute the gravity vector it is not sufficient to compute the normal to the ellipsoid; we need to compute the normal to the geoid?
But how does one compute the normal to the geoid, and how is the geoid defined?
The Wikipedia article says it is represented by spherical harmonics.
You do not need to know anything about geoids to compute the vector of gravity.
Newtonian gravity is dominated by the monopole term, so to a very good approximation it acts between the centers of mass of the two objects; the detailed shapes of the objects contribute only small corrections. Let's assume you have two objects 1 and 2, and the coordinates of the objects' centers of mass are:
r_1 = (x_1,y_1,z_1) and
r_2 = (x_2,y_2,z_2)
The direction of gravitational force on object 1 from object 2 is then simply the difference between the vectors (gravity is always attractive):
r = r_2 - r_1 = (x_2 - x_1, y_2 - y_1, z_2 - z_1)
If object 1 is something sitting on the surface of Earth and you are looking for the normal force caused by the Earth's surface pushing back on the bottom of object 1, that normal force vector is given by the normal to the surface at that point.
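The difference-and-normalize step can be sketched in a few lines (`gravity_direction` is an illustrative helper name):

```python
import math

def gravity_direction(r1, r2):
    """Unit vector of the gravitational force on object 1 from object 2:
    the normalized difference r = r2 - r1 (gravity is attractive, so the
    force on 1 points toward 2)."""
    r = tuple(b - a for a, b in zip(r1, r2))
    norm = math.sqrt(sum(c * c for c in r))
    return tuple(c / norm for c in r)
```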

JTS with lat/lon

I have some spatial data whose coordinates are all lat/lon pairs (with about 10 decimal digits of precision); it's stored in a database as WGS84 data. Some of the data is represented as polygons, each the union of smaller polygons whose boundaries are stored. I also have a number of points from which I build line segments (just 2 points in each segment) that I later use for intersection tests with the polygons.
I'm using a SpatialIndex to improve my queries, so I insert the envelopes of all polygons into a tree (tested with both QuadTree and STRtree). Then I connect two points into a line segment and use its envelope to query the tree for possible intersections. The problem is that I get pretty much all the polygons as a result, which is clearly wrong. To give you some idea about the real scale of my data: I have about 100 polygons that cover the whole of North America, and each line covers a very, very small part of a single polygon. Ideally, I would expect no more than 2 polygons as a result.
I'm using JTS for this calculation and I'm aware that it's not really suited for spherical data, so can you suggest another library/tool to achieve the desired behaviour, or possibly a workaround (for example, projecting before using JTS)?
If you only have North America, just rotate the earth by 90 degrees so that Alaska is no longer on the far east. (Fun fact: Alaska is the northernmost, westernmost, and easternmost state of the U.S.) Then your rectangles should be okay.
There are a number of non-trivial cases though when working with spherical data. Depending on how your data is defined, your polygon borders may actually be bent lines, instead of straight lines. Consider this screenshot of Google Ingress: https://lh4.ggpht.com/S_9jrMqf08JfIbr7DgUDH96rvXMK4wOGtaSKYPGCruXv2HE4oeRuEaQIDIywMgH4198=h900
I read somewhere that the mismatch of the "fog" texture and the green line visible in the left field is due to the two drawing functions using different approximations. One is always a straight line, whereas the other follows the curvature of the earth. If you have a large field (polygon!), the error becomes worse.
"Intersection" becomes a tricky term when your data consists of non-straight lines on the surface of a sphere, unfortunately; and a "straight" line on the surface of earth will often yield an arctan type curve in latlon coordinates.
Projections: these can help, but mostly when your data is local. UTM projections are pretty good, but you need at least 9 UTM zones to cover North America without Alaska. As long as your data is within one UTM zone, projecting the data into that zone and then working in 2D Euclidean space should work well. But if it gets larger than this, you may need to stitch different projections together, and that is really messy, too.
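One way to implement the "rotate the earth" trick before handing coordinates to JTS is to remap longitudes from [-180, 180) into [0, 360), so a dataset straddling the antimeridian stays contiguous and its envelopes stop spanning the whole map. A sketch (`remap_lon` is a hypothetical helper; apply it consistently to both polygons and query segments):

```python
def remap_lon(lon):
    """Map a longitude from [-180, 180) into [0, 360) so that data
    crossing the antimeridian (e.g. the Aleutian Islands) stays
    contiguous instead of wrapping around the whole map."""
    return lon % 360.0
```

With this remapping, west-coast Alaska (around -150 degrees) and the far Aleutians (around +170 degrees) end up about 40 degrees apart, rather than at opposite ends of the coordinate range.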

Quartz scales sprite vertical range but not horizontal when going to full-screen mode / increasing window size

I have created a Quartz composition for use in a Mac OS program as part of my interface.
I am relying on the fact that within a composition, sprite movement (a text bullet point in my case) is limited in both the X and Y axes to a minimum of -1 and a maximum of +1.
When I scale up the window / make my window full screen, I find that the horizontal (X) axis remains the same, with -1 being my far-left point and +1 being my far-right point. However, the vertical (Y) axis changes: in full-screen mode it goes from -0.7 to +0.7.
This scaling is screwing with my calculations. Is there any way to get the application to keep the scale at -1 to +1 for both the horizontal and vertical axes? Or is there a way of determining the upper and lower limits?
Appreciate any help/pointers.
Quartz Composer viewer Y limits are usually -0.75 to 0.75, but it's only a matter of aspect ratio. The X limits are always -1 to 1; the Y limits depend on them through the aspect ratio.
You might want to dynamically assign custom width and height variables, capturing the context bounds size. For example:
double myWidth = context.bounds.size.width;
double myHeight = context.bounds.size.height;
Where "context" is your viewer context object.
If you're working directly with the QC viewer: use the Rendering Destination Dimensions patch, which gives you the width and the height. Divide the height by 2 to get the upper Y limit, then multiply the result by -1 to get the other side.
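Following the aspect-ratio relationship above (X fixed at -1..1, Y extent scaling with height/width), a small sketch of computing the Y limits from pixel dimensions. This assumes the dimensions are in pixels; if your dimensions are already in QC units (where the X span is 2 units wide), the upper limit is simply height / 2 as described above:

```python
def qc_y_limits(pixel_width, pixel_height):
    """In Quartz Composer units the X axis always spans -1..1, so the
    visible Y extent scales with the window's aspect ratio."""
    half = pixel_height / pixel_width   # e.g. 600/800 = 0.75 for 4:3
    return (-half, half)
```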