I am currently working on a game in Godot which involves rendering countries on a globe. I have very little prior experience with Godot, but have experimented with it in the past.
I am using this data from Natural Earth for country borders, and have successfully gotten it to display on the globe using a line mesh. The data was originally in shapefile format, but I converted it to GeoJSON using mapshaper.org.
The data basically boils down to a list of points given in latitude and longitude, which I then converted into 3d points and created a mesh using SurfaceTool.
I am having trouble generating an actual surface for the mesh, however. Firstly, I am unable to find a built-in function to generate a triangle mesh from this data. I have looked into numerous solutions, including using the built-in Mesh.PRIMITIVE_TRIANGLE_FAN format, which doesn't work on concave shapes.
I have looked into triangulation algorithms such as Delaunay triangulation, but have had little success implementing them.
My current plan is to generate a triangle mesh using the 2d data (x,y = longitude,latitude), and project it onto the surface of the sphere. In order to produce a curved surface, I will include the vertices of the sphere itself in the mesh (example).
I would like to know how to go about constructing a triangle mesh from this data. In essence, I need an algorithm that can do the following things:
Create a triangle mesh from a concave polygon (country border)
Connect the mesh to a series of points within this polygon
Allow for holes within the polygon (for lakes, etc.)
Here is an example of the result I am looking for.
Again, I am quite new to Godot and I am probably over-complicating things. If there is an easier way to go about rendering countries on a globe, please let me know.
This is my current code:
extends Node

export var radius = 1
export var path = "res://data/countries.json"

func coords(uv):
    return (uv - Vector2(0.5, 0.5)) * 360

func uv(coords):
    return (coords / 360) + Vector2(0.5, 0.5)

func sphere(coords, r):
    var angles = coords / 180 * 3.14159
    return Vector3(
        r * cos(angles.y) * sin(angles.x),
        r * sin(angles.y),
        r * cos(angles.y) * cos(angles.x)
    )

func generate_mesh(c):
    var mesh = MeshInstance.new()
    var material = SpatialMaterial.new()
    material.albedo_color = Color(randf(), randf(), randf(), 1.0)
    var st = SurfaceTool.new()
    st.begin(Mesh.PRIMITIVE_LINE_STRIP)
    for h in c:
        var k = sphere(h, radius)
        st.add_normal(k / radius)
        st.add_uv(uv(h))
        st.add_vertex(k)
    st.index()
    mesh.mesh = st.commit()
    mesh.set_material_override(material)
    return mesh

func load_data():
    var file = File.new()
    file.open(path, file.READ)
    var data = JSON.parse(file.get_as_text()).result
    file.close()
    for feature in data.features:
        var geometry = feature.geometry
        var properties = feature.properties
        if geometry.type == "Polygon":
            for body in geometry.coordinates:
                var coordinates = []
                for coordinate in body:
                    coordinates.append(Vector2(coordinate[0], coordinate[1]))
                add_child(generate_mesh(coordinates))
        if geometry.type == "MultiPolygon":
            for polygon in geometry.coordinates:
                for body in polygon:
                    var coordinates = []
                    for coordinate in body:
                        coordinates.append(Vector2(coordinate[0], coordinate[1]))
                    add_child(generate_mesh(coordinates))

func _ready():
    load_data()
What about using the Geometry.triangulate_polygon() method to triangulate the polygon? It takes a PoolVector2Array of 2D points and returns a PoolIntArray of triangle indices (empty if the triangulation fails).
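For example, something along these lines (a rough sketch against Godot 3.x that reuses the sphere(), uv() and radius from your script; generate_filled_mesh is just an illustrative name, and holes in the polygon would still need separate handling):

func generate_filled_mesh(c):
    # Triangulate the 2D outline (longitude/latitude ring) first.
    var indices = Geometry.triangulate_polygon(PoolVector2Array(c))
    if indices.size() == 0:
        return null # triangulation failed (degenerate or self-intersecting outline)
    var st = SurfaceTool.new()
    st.begin(Mesh.PRIMITIVE_TRIANGLES)
    for h in c:
        var k = sphere(h, radius) # project each 2D point onto the sphere
        st.add_normal(k / radius)
        st.add_uv(uv(h))
        st.add_vertex(k)
    for i in indices:
        st.add_index(i) # add the indices in reverse order if the faces come out inside-out
    var mesh = MeshInstance.new()
    mesh.mesh = st.commit()
    return mesh

Note that triangulate_polygon only connects the outline points you give it, so for a curved surface you would still have to subdivide the triangles (or include the sphere vertices, as you planned) before projecting.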
I am making a space game in Godot, and whenever my ship is a large distance away from (0,0,0), it shakes violently every time I move the camera or the ship. Here is my code for moving the ship:
extends KinematicBody

export var default_speed = 500000
export var max_speed = 5000
export var acceleration = 100
export var pitch_speed = 1.5
export var roll_speed = 1.9
export var yaw_speed = 1.25
export var input_response = 8.0

var velocity = Vector3.ZERO
var forward_speed = 0
var vert_speed = 0
var pitch_input = 0
var roll_input = 0
var yaw_input = 0
var alt_input = 0
var system = "System1"

func _ready():
    look_at(get_parent().get_node("Star").translation, Vector3.UP)

func get_input(delta):
    if Input.is_action_pressed("boost"):
        max_speed = 299792458
        acceleration = 100
    else:
        max_speed = default_speed
        acceleration = 100
    if Input.is_action_pressed("throttle_up"):
        forward_speed = lerp(forward_speed, max_speed, acceleration * delta)
    if Input.is_action_pressed("throttle_down"):
        forward_speed = lerp(forward_speed, 0, acceleration * delta)
    pitch_input = lerp(pitch_input, Input.get_action_strength("pitch_up") - Input.get_action_strength("pitch_down"), input_response * delta)
    roll_input = lerp(roll_input, Input.get_action_strength("roll_left") - Input.get_action_strength("roll_right"), input_response * delta)
    yaw_input = lerp(yaw_input, Input.get_action_strength("yaw_left") - Input.get_action_strength("yaw_right"), input_response * delta)

func _physics_process(delta):
    get_input(delta)
    transform.basis = transform.basis.rotated(transform.basis.z, roll_input * roll_speed * delta)
    transform.basis = transform.basis.rotated(transform.basis.x, pitch_input * pitch_speed * delta)
    transform.basis = transform.basis.rotated(transform.basis.y, yaw_input * yaw_speed * delta)
    transform.basis = transform.basis.orthonormalized()
    velocity = -transform.basis.z * forward_speed * delta
    move_and_collide(velocity * delta)

func _on_System1_area_entered(area):
    print(area, area.name)
    system = "E"

func _on_System2_area_entered(area):
    print(area, area.name)
    system = "System1"
Is there any way to prevent this from happening?
First of all, I want to point out that this is not a problem unique to Godot, although other engines have automatic fixes for it.
This happens because the precision of floating point numbers decreases as you move away from the origin. In other words, the gap between one representable floating point number and the next becomes wider.
The issue is covered in more detail over on the game development sister site:
Why loss of floating point precision makes rendered objects vibrate?
Why does the resolution of floating point numbers decrease further from an origin?
What's the largest "relative" level I can make using float?
Why would a bigger game move the gameworld around the Player instead of just moving a player within a gameworld?
Moving player inside of moving spaceship?
Spatial Jitter problem in large unity project
Godot uses single precision. Support for double precision has been added in Godot 4, but that only reduces the problem; it does not eliminate it.
The general solution is to warp everything in such a way that the player is near the origin again. So, let us do that.
We will need a reference to the node we want to keep near the origin. I'm going to assume which node it is does not change during gameplay.
export var target_node_path:NodePath
onready var _target_node:Spatial = get_node(target_node_path)
And we will need a reference to the world we need to move. I'm also assuming it does not change, and that the node we want to keep near the origin is a child of it, directly or indirectly.
export var world_node_path:NodePath
onready var _world_node:Node = get_node(world_node_path)
And we need a maximum distance at which we perform the shift:
export var max_distance_from_origin:float = 10000.0
We will not move the world itself, but its children.
func _process(_delta: float) -> void:
    var target_origin := _target_node.global_transform.origin
    if target_origin.length() < max_distance_from_origin:
        return

    for child in _world_node.get_children():
        var spatial := child as Spatial
        if spatial != null:
            spatial.global_translate(-target_origin)
Now, something I have not seen discussed is what happens with physics objects. The concern is that the physics server might still try to move them from their old position (in practice this is only a problem with RigidBody), overwriting what we did.
So, if that is a problem, we can handle physics objects with a teleport. For example:
func _process(_delta: float) -> void:
    var target_origin := _target_node.global_transform.origin
    if target_origin.length() < max_distance_from_origin:
        return

    for child in _world_node.get_children():
        var spatial := child as Spatial
        if spatial != null:
            var body_transform := spatial.global_transform
            var new_transform := Transform(
                body_transform.basis,
                body_transform.origin - target_origin
            )
            spatial.global_transform = new_transform
            var physics_body := spatial as PhysicsBody # Check for RigidBody instead?
            if physics_body != null:
                PhysicsServer.body_set_state(
                    physics_body.get_rid(),
                    PhysicsServer.BODY_STATE_TRANSFORM,
                    new_transform
                )
But be aware that the above code does not consider any physics objects deeper in the scene tree.
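If you need to cover that case, one option (a rough sketch of my own, not part of the original answer) is to also walk the subtree of each shifted child and push the already-shifted global transform of any nested PhysicsBody to the physics server, by calling a helper like this right after setting spatial.global_transform:

func _teleport_nested_bodies(node: Node) -> void:
    for child in node.get_children():
        var physics_body := child as PhysicsBody
        if physics_body != null:
            # The subtree has already been shifted along with its ancestor Spatial,
            # so global_transform is the new one; we only tell the physics server.
            PhysicsServer.body_set_state(
                physics_body.get_rid(),
                PhysicsServer.BODY_STATE_TRANSFORM,
                physics_body.global_transform
            )
        _teleport_nested_bodies(child)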
I'm trying to implement a leaning mechanic in a game that I'm building. To do that I want to set one variable to act as the default rotation degrees (ideally x = 0, y = 0, and z = 0), and one for the rotation degrees of a character that is leaning to the right (ideally x = 0.6, y = 0, and z = 0).
Here's my code (for context, this script is attached to a Spatial node called UpperBody):
extends Spatial

const LEAN_LERP = 5

export var default_degrees : Vector3
export var leaning_degrees : Vector3

func _process(delta):
    if Input.is_action_pressed("LeanRight"):
        transform.origin = transform.origin.linear_interpolate(leaning_degrees, LEAN_LERP * delta)
    else:
        transform.origin = transform.origin.linear_interpolate(default_degrees, LEAN_LERP * delta)

    if Input.is_action_pressed("LeanLeft"):
        transform.origin = transform.origin.linear_interpolate(-leaning_degrees, LEAN_LERP * delta)
    else:
        transform.origin = transform.origin.linear_interpolate(default_degrees, LEAN_LERP * delta)
As you can see, I have both default_degrees and leaning_degrees typed as Vector3 instead of the (currently unknown) equivalent for rotational degrees.
My question is this: how do I set a variable to contain rotational degrees?
Thanks.
There is no dedicated type for Euler angles. Instead you would use … drum roll … Vector3.
In fact, if you look at the rotation_degrees property, you will find it is defined as a Vector3.
That, of course, isn't the only way to represent rotations/orientations. Ultimately, the Transform has two parts:
A Vector3 called origin which represents the translation.
A Basis called basis which represents the rest of the transformation (rotation, scaling and reflection, and shear or skewing).
A Basis can be thought of as a trio of Vector3, each representing one of the axes of the coordinate system. Another way to think of a Basis is as a 3 by 3 matrix.
Thus whatever you use to represent rotations or orientations will ultimately be converted to a Basis (and then either replace or be composed with the Basis of the transform).
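For illustration (nothing specific to the leaning code, just the decomposition described above):

var t := transform
var translation_part := t.origin # Vector3: where the node is
var basis_part := t.basis        # Basis: rotation, scale, reflection, shear
var local_x_axis := t.basis.x    # one of the three Vector3 axes stored in the Basis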
Now, you want to interpolate the rotations, right? Euler angles aren't good for interpolation. Instead you could interpolate:
Transformations (Transform, using Transform.interpolate_with).
Bases (Basis, using Basis.slerp).
Quaternions (Quat, using Quat.slerp).
On the other hand, Euler angles are good for input. In this particular case that means it is relatively easy to wrap your head around what the numbers mean, compared to writing any of these in the inspector.
Thus, we have two avenues:
Convert Euler angles to either Transform, Basis or Quat.
Find an easy way to input a Transform, Basis or Quat.
Euler angle to Quat
The Quat has a constructor that takes a vector for Euler angles. The catch is that it is Euler angles in radians. So we need to convert degrees to radians (which we can do with deg2rad). Like this:
var target_quat := Quat(
    Vector3(
        deg2rad(degrees.x),
        deg2rad(degrees.y),
        deg2rad(degrees.z)
    )
)
Alternatively, you could do this:
var target_quat := Quat(degrees * PI / 180.0)
We also need to get the current quaternion from the transform:
var current_quat := transform.basis.get_rotation_quat()
Interpolate them:
var new_quat := current_quat.slerp(target_quat, LEAN_LERP * delta)
And replace the quat:
transform = Transform(
    Basis(new_quat).scaled(transform.basis.get_scale()),
    transform.origin
)
The above line assumes the transformation is only rotation, scaling, and translation. If we want to keep skewing, we can do this:
transform = Transform(
    Basis(new_quat) * Basis(current_quat).inverse() * transform.basis,
    transform.origin
)
The explanation for that is in the below section.
Notice we ended up converting the Quat to a Basis. So perhaps we are better off avoiding quaternions entirely.
Euler angle to Basis
The Basis class also has a constructor that works like the one we found in Quat. So we can do this:
var target_basis := Basis(degrees * PI / 180.0)
The catch this time is that Basis does not only represent rotation. So if we do that, we are losing scaling (and any other transformation the Basis has). We can preserve the scaling like this:
target_basis = target_basis.scaled(transform.basis.get_scale())
Ah, of course, the current Basis is this:
var current_basis := transform.basis
We interpolate like this:
var new_basis := current_basis.slerp(target_basis, LEAN_LERP * delta)
And we replace the Basis like this:
transform.basis = new_basis
To be honest, I'm not happy with the above approach. I'll show you a way to have the Basis you interpolate be only for rotation (so it can preserve any skewing the original Basis had, not only its scale), but it is a little more involved. Let us start here again:
var target_rotation := Basis(degrees * PI / 180.0)
And we will not scale that, instead we want to get a Basis that is only the rotation of the current one. We can do that by going from Basis to Quat and back:
var current_rotation := Basis(transform.basis.get_rotation_quat())
We interpolate the same way as before:
var new_rotation := current_rotation.slerp(target_rotation, LEAN_LERP * delta)
But to replace the Basis we want to keep everything about the old Basis that wasn't the rotation. In other words we are going to:
Take the Basis:
transform.basis
Remove its rotation (i.e. compose it with the inverse of its rotation):
Basis(transform.basis.get_rotation_quat()).inverse() * transform.basis
Which is the same as:
current_rotation.inverse() * transform.basis
And apply the new rotation:
new_rotation * current_rotation.inverse() * transform.basis
And that is what we set:
transform.basis = new_rotation * current_rotation.inverse() * transform.basis
I have tested to make sure the composition order is correct. And, yes, the code for preserving skewing with Quat that I showed above is based on this.
Euler angle to Transform
The way to create a Transform from Euler angles is via a Basis:
var target_transform := Transform(Basis(degrees * PI / 180.0), Vector3.ZERO)
We could preserve scale and translation with this approach:
var target_transform := Transform(
    Basis(degrees * PI / 180.0).scaled(transform.basis.get_scale()),
    transform.origin
)
If you want to interpolate translation at the same time, you can set your target position instead of transform.origin.
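For example, with a hypothetical target_position (a Vector3 of your choosing, not something from the original code):

var target_transform := Transform(
    Basis(degrees * PI / 180.0).scaled(transform.basis.get_scale()),
    target_position # hypothetical target translation, used instead of transform.origin
)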
The current transform is, of course:
var current_transform := transform
We interpolate them like this:
var new_transform = current_transform.interpolate_with(target_transform, LEAN_LERP * delta)
And we can set that:
transform = new_transform
If we inline these variables, we have this:
transform = transform.interpolate_with(target_transform, LEAN_LERP * delta)
If you want to preserve skewing, use the Basis approach.
Alternative to Euler angles for input
We have found out that interpolating transforms is actually very easy. Is there a way to easily input a Transform? Rhetorical question. We can add some Position3D nodes to the scene, position and rotate them (and even scale them, even though a Position3D has no size), and then use the Transform from them.
We can make the Position3D nodes children of your Spatial (which is somewhat odd, but don't think too hard about it), or siblings. Regardless, the idea is that we are going to take the transform from these Position3D nodes and use it to interpolate the transform of your Spatial. It is the same code as before:
transform = transform.interpolate_with(position.transform, LEAN_LERP * delta)
In fact, while we are at it, why not have three Position3D:
The lean left target.
The lean right target.
The default target.
Then you pick which target to use depending on input, and interpolate to that:
extends Spatial

const LEAN_LERP = 5

onready var left_target:Position3D = get_node(…)
onready var right_target:Position3D = get_node(…)
onready var default_target:Position3D = get_node(…)

func _process(delta):
    var left := Input.is_action_pressed("LeanLeft")
    var right := Input.is_action_pressed("LeanRight")

    var target := default_target
    if left and not right:
        target = left_target
    if right and not left:
        target = right_target

    transform = transform.interpolate_with(target.transform, LEAN_LERP * delta)
Put the node paths where I left ....
Ok, Ok, here is one of the Euler angles versions:
extends Spatial

const LEAN_LERP = 5

export var default_degrees : Vector3
export var leaning_degrees : Vector3

func _process(delta):
    var left := Input.is_action_pressed("LeanLeft")
    var right := Input.is_action_pressed("LeanRight")

    var degrees := default_degrees
    if left and not right:
        degrees = -leaning_degrees
    if right and not left:
        degrees = leaning_degrees

    var target_rotation := Basis(degrees * PI / 180.0)
    var current_rotation := Basis(transform.basis.get_rotation_quat())
    var new_rotation := current_rotation.slerp(target_rotation, LEAN_LERP * delta)
    transform.basis = new_rotation * current_rotation.inverse() * transform.basis
I am working on converting Velox files (HDF5) to .dm3 files, using Tore Niermann's plugin (gms_plugin_hdf5) to read strings. Annotations stored in the HDF5 file also need to be transferred to the .dm3 file. The HDF5 image may be rotated by an arbitrary angle, but the annotation position coordinates read from the HDF5 file correspond to the unrotated image.
I found that the annotations don't move when the image is rotated, so I have to recalculate the position coordinates for every annotation. That isn't convenient for annotations such as boxes or ovals. I also need to extract the maximum area when rotating images, so the image size changes with the rotation angle. Is there a better solution for rotating the annotations? Thanks.
Here is a sample function from my script. I didn't attach all of it because it's quite long.
image GetAnnotations(Taggroup names, string filename, string name, Taggroup Annotations, Image VeloxImg, number Angle)
{
    number i, j, imagex, imagey, xscale, yscale
    String Displaypath, AnnotationStr, DisplayStr, units
    taggroup attr = NewTagList()
    getsize(VeloxImg, imagex, imagey)
    number centerx=imagex/2
    number centery=imagey/2
    getscale(veloximg, xscale, yscale)
    units=getunitstring(veloximg)
    component imgdisp=imagegetimagedisplay(VeloxImg, 0)
    For (j=0; j<TagGroupCountTags(Annotations); ++j)
    {
        TagGroupGetIndexedTagAsString(Annotations, j, AnnotationStr)
        string AnnotPath=h5_read_string_dataset(filename, AnnotationStr)
        string AnnotDataPath=GetValueFromLongStr(AnnotPath, "dataPath\": \"", "\"")
        AnnotDataPath=ReplaceStr(AnnotDataPath, "\\/", "\/")
        string AnnotLabel=GetValueFromLongStr(AnnotPath, "label\": \"", "\"")
        string AnnotDrawPath=h5_read_string_dataset(filename, AnnotDataPath)
        image img := RealImage( "", 4, 1, 1 )
        TagGroup AnnoTag=alloc(MetaStr2TagGroup).ParseText2ImageTag(AnnotDrawPath, img )
        deleteimage(img)
        string AnnotDrawType=TagGroupGetTagLabel(AnnoTag,0)
        //AnnoTag.TagGroupOpenBrowserWindow( "AnnotationsTag", 0 )
        if (AnnotDrawType=="arrow")
        {
            number p1_x,p1_y,p2_x,p2_y
            TagGroupGetTagAsNumber(AnnoTag, "arrow:p1:x", p1_x)
            TagGroupGetTagAsNumber(AnnoTag, "arrow:p1:y", p1_y)
            TagGroupGetTagAsNumber(AnnoTag, "arrow:p2:x", p2_x)
            TagGroupGetTagAsNumber(AnnoTag, "arrow:p2:y", p2_y)
            //VeloxImg.CreateArrowAnnotation( p1y, p1x, p2y, p2x )
            number p1_x_new=(p1_x-0.5)*cos(Angle)+(p1_y-0.5)*sin(Angle)+0.5
            number p1_y_new=-(p1_x-0.5)*sin(Angle)+(p1_y-0.5)*cos(Angle)+0.5
            number p2_x_new=(p2_x-0.5)*cos(Angle)+(p2_y-0.5)*sin(Angle)+0.5
            number p2_y_new=-(p2_x-0.5)*sin(Angle)+(p2_y-0.5)*cos(Angle)+0.5
            result(p1_x+" "+p1_y+" new "+p1_x_new+" "+p2_y_new+"\n")
            component arrowAnno=newarrowannotation(p1_y_new*imagey, p1_x_new*imagex, p2_y_new*imagey, p2_x_new*imagex)
            arrowAnno.ComponentSetForegroundColor( 1, 0 , 0 )
            arrowAnno.ComponentSetDrawingMode( 2 )
            imgdisp.ComponentAddChildAtEnd( arrowAnno )
        }
Not directly answering your question, but maybe nevertheless of interest to you:
While GMS does not support the rotation of annotations (Rect, Oval, Text, ImageDisplay...) it does support a rotation property for ROIs. So maybe you can just use rect-ROIs and oval-ROIs instead of annotations in your application.
Example (never mind that I did the shift computation wrongly):
image test := realImage("Test",4,512,512)
test= abs(sin(6*PI()*icol/iwidth))*abs(cos(4*PI()*irow/iheight*iradius/150))
test.showimage()
imageDisplay disp = test.ImageGetImageDisplay(0)
ROI box = NewROI()
box.RoiSetRectangle(46,88,338,343)
box.RoiSetVolatile(0)
ROI oval = NewROI()
oval.ROISetOval(221,226,287,254)
oval.RoiSetVolatile(0)
disp.ImageDisplayAddROI(box)
disp.ImageDisplayAddROI(oval)
number rot_deg = 8
image rot := test.rotate( pi()/180*rot_deg)
rot.ShowImage()
imageDisplay disp_rot = rot.ImageGetImageDisplay(0)
ROI box_rot = box.ROIClone()
ROI oval_rot = oval.ROIClone()
disp_rot.ImageDisplayAddROI(box_rot)
disp_rot.ImageDisplayAddROI(oval_rot)
number shift_x = (rot.ImageGetDimensionSize(0)-test.ImageGetDimensionSize(0)) / 2
number shift_y = (rot.ImageGetDimensionSize(1)-test.ImageGetDimensionSize(1)) / 2
number t,l,b,r
box_rot.ROIGetRectangle(t,l,b,r)
box_rot.ROISetRectangle(t+shift_y,l+shift_x,b+shift_y,r+shift_x)
oval_rot.ROIGetOval(t,l,b,r)
oval_rot.ROISetOval(t+shift_y,l+shift_x,b+shift_y,r+shift_x)
box_rot.ROISetRotationAngle( rot_deg )
oval_rot.ROISetRotationAngle( rot_deg )
If I understood you correctly, then your source data (HDF5) stores the image (2D array?) plus a rotation angle, but the annotations in the coordinate system of the (not rotated) image? How is the source-data displayed in the original software then? (Is it showing a rotated rectangle-image?)
GMS does not support rotating imageDisplays (as objects) and consequently does not support rotated annotations either. The coordinate systems are always screen-axis-aligned and orthogonal. Hence the need for interpolation when "rotating" images: the data values are re-computed for the new grid.
If you don't need the annotations to be adjustable after your import, one potential thing you could do would be to create an "as displayed" image after import, prior to rotation, and then rotate the image with the annotations "burnt in".
This is obviously only good for creating "final display images" though.
image before := realImage("Test",4,512,512)
before = abs(sin(6*PI()*icol/iwidth))*abs(cos(4*PI()*irow/iheight*iradius/150))
before.showimage()
before.ImageGetImageDisplay(0).ComponentAddChildAtEnd(NewArrowAnnotation(30,80,60,430))
before.ImageGetImageDisplay(0).ComponentAddChildAtEnd(NewArrowAnnotation(30,80,206,240))
before.ImageGetImageDisplay(0).ComponentAddChildAtEnd(NewOvalAnnotation(206,220,300,256))
// Create as-displayed image
Number t,l,b,r,ofx,ofy,scx,scy
before.ImageGetOrCreateImageDocument().ImageDocumentGetViewExtent(t,l,b,r)
before.ImageGetOrCreateImageDocument().ImageDocumentGetViewToWindowTransform(ofx,ofy,scx,scy)
image asShown := before.ImageGetOrCreateImageDocument().ImageDocumentCreateRGBImageFromDocument(round((r-l)*scx),round((b-t)*scy),0,0)
asShown.ShowImage()
number angle_deg = 8
image rotated := asShown.Rotate( angle_deg/180*PI() )
rotated.ShowImage()
I am working with the Kinect and reading through the DepthWithColor-D3D example. It has some code that I don't understand yet.
// loop over each row and column of the color
for (LONG y = 0; y < m_colorHeight; ++y)
{
    LONG* pDest = (LONG*)((BYTE*)msT.pData + msT.RowPitch * y);
    for (LONG x = 0; x < m_colorWidth; ++x)
    {
        // calculate index into depth array
        int depthIndex = x/m_colorToDepthDivisor + y/m_colorToDepthDivisor * m_depthWidth;

        // retrieve the depth to color mapping for the current depth pixel
        LONG colorInDepthX = m_colorCoordinates[depthIndex * 2];
        LONG colorInDepthY = m_colorCoordinates[depthIndex * 2 + 1];
How are the values of colorInDepthX and colorInDepthY in the code above calculated?
colorInDepthX and colorInDepthY are a mapping between the depth and color images so that they will align. Because the Kinect's cameras are slightly offset from each other, their fields of view are not lined up perfectly.
m_colorCoordinates is defined at the top of the file as such:
m_colorCoordinates = new LONG[m_depthWidth*m_depthHeight*2];
This is a single-dimension array representing a 2-dimensional image; it is populated just above the code block you posted in your question:
// Get of x, y coordinates for color in depth space
// This will allow us to later compensate for the differences in location, angle, etc between the depth and color cameras
m_pNuiSensor->NuiImageGetColorPixelCoordinateFrameFromDepthPixelFrameAtResolution(
    cColorResolution,
    cDepthResolution,
    m_depthWidth*m_depthHeight,
    m_depthD16,
    m_depthWidth*m_depthHeight*2,
    m_colorCoordinates
);
As described in the comment, this runs a calculation provided by the SDK to map the color and depth coordinates onto each other. The result is placed inside of m_colorCoordinates.
colorInDepthX and colorInDepthY are simply values within the m_colorCoordinates array that are being acted upon in the current cycle of the loop. They are not "calculated", per se, but just point to what already exists in m_colorCoordinates.
The function that handles the mapping between color and depth images is explained in the Kinect SDK at MSDN. Here is a direct link:
http://msdn.microsoft.com/en-us/library/jj663856.aspx
I have a collection of latitudes and longitudes and I'll be grabbing sets of these and want to draw a polygon based on them.
The datasets won't just be the outline, so I need an algorithm to establish which points make up the outline of a polygon containing all the latitudes and longitudes supplied. This polygon needs to be flexible, so it can be concave if the points dictate that.
Any help would be appreciated.
** UPDATE **
Sorry, should have put more detail.
My code below produces a horrible looking polygon. As explained in my first post, I want to create a nice concave or convex polygon based on the latlngs provided.
Just need a way of plotting the outer latlngs.
Apologies if this is still asking too much but thought it was worth one last try.
function initialize() {
    var myLatLng = new google.maps.LatLng(51.407431, -0.727142);
    var myOptions = {
        zoom: 12,
        center: myLatLng,
        mapTypeId: google.maps.MapTypeId.TERRAIN
    };
    var map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);
    var bermudaTriangle;
    var map = new google.maps.Map(document.getElementById("map_canvas"), myOptions);

    var triangleCoords = [
        new google.maps.LatLng(51.392692, -0.740358),
        new google.maps.LatLng(51.400618, -0.742469),
        new google.maps.LatLng(51.40072, -0.72418),
        new google.maps.LatLng(51.400732, -0.743817),
        new google.maps.LatLng(51.401258, -0.743386),
        new google.maps.LatLng(51.401264, -0.741445),
        new google.maps.LatLng(51.401443, -0.725555),
        new google.maps.LatLng(51.401463, -0.744042),
        new google.maps.LatLng(51.402281, -0.739059)
    ];

    var minX = triangleCoords[0].lat();
    var maxX = triangleCoords[0].lat();
    var minY = triangleCoords[0].lng();
    var maxY = triangleCoords[0].lng();

    for (var i = 1; i < triangleCoords.length; i++) {
        if (triangleCoords[i].lat() < minX) minX = triangleCoords[i].lat();
        if (triangleCoords[i].lat() > maxX) maxX = triangleCoords[i].lat();
        if (triangleCoords[i].lng() < minY) minY = triangleCoords[i].lng();
        if (triangleCoords[i].lng() > maxY) maxY = triangleCoords[i].lng();
    }

    // Construct the polygon
    bermudaTriangle = new google.maps.Polygon({
        paths: triangleCoords,
        strokeColor: "#FF0000",
        strokeOpacity: 0.8,
        strokeWeight: 2,
        fillColor: "#FF0000",
        fillOpacity: 0.35
    });
    bermudaTriangle.setMap(map);
}
Your problem is not defined precisely enough: with a given set of points, you may end up with many different polygons if you do not add a constraint other than 'create a nice concave or convex polygon'.
Even a simple example shows that:
imagine a triangle ABC, and let D be the center of this triangle. What output would you expect for the set of points {A,B,C,D}?
ABC, since D is inside?
or the ADBCA polygon?
or the ABDCA polygon?
or the ABCDA polygon?
Now if you say 'well, D is in the center, it's obvious we should discard D', let D move closer and closer to, say, the AB segment. At what point do you decide the best output is ABC rather than ADBCA?
So you have to add constraints to be able to build an algorithm: if you cannot decide by yourself for the above {A,B,C,D} example, how could a computer? :-) For example, if you call AvgD the average distance between points, you could add the constraint that no segment of your outer polygon should be longer than 1.2*AvgD (or, better, Alpha*AvgD, and you try your algorithm with different values of Alpha).
To solve your issue, I would first use a classical hull algorithm to get the outer convex polygon (which is deterministic), then break down the segments that are 'too long' (according to the constraint(s) you want), pulling more and more inner points into the outline until all constraints are satisfied. Something like 'digging holes' into the convex polygon.
'Breaking down' a too-long segment can be done in quite different manners. One is to search for the not-yet-in-the-outline point nearest to the midpoint of the segment. Another would be to choose the point with the lowest radius relative to the current segment... Once you have your new point, break the segment in two, update your list of too-long segments, and repeat until you're done (or until you reach a 'satisfactory' average length for the remaining long segments, or ...).
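To make the idea concrete, here is a rough, untested sketch of that "convex hull, then dig holes" approach (my addition, not part of the original answer). It is written in GDScript like the Godot code earlier on this page, but the idea is language-agnostic; the alpha factor and the use of the average distance between consecutive points are assumptions you would tune:

func concave_outline(points: Array, alpha: float = 1.2) -> Array:
    # Start from the convex hull of all points (deterministic).
    var hull := Array(Geometry.convex_hull_2d(PoolVector2Array(points)))
    # Scale AvgD: average distance between consecutive input points (an assumption).
    var avg_d := 0.0
    for i in range(points.size() - 1):
        avg_d += points[i].distance_to(points[i + 1])
    avg_d /= max(points.size() - 1, 1)
    # Repeatedly split hull edges longer than alpha * AvgD by pulling in
    # the interior point closest to the edge's midpoint.
    var changed := true
    while changed:
        changed = false
        for i in range(hull.size()):
            var a: Vector2 = hull[i]
            var b: Vector2 = hull[(i + 1) % hull.size()]
            if a.distance_to(b) <= alpha * avg_d:
                continue
            var mid := (a + b) * 0.5
            var best = null
            for p in points:
                if hull.has(p):
                    continue
                if best == null or mid.distance_to(p) < mid.distance_to(best):
                    best = p
            if best == null:
                continue # no interior points left to pull in
            hull.insert(i + 1, best) # break edge a-b into a-best and best-b
            changed = true
            break
    return hull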
Good luck!