I have an array of x,y location points. I don't know how to use them because they're not long/lat.
For example: X=217338, Y=703099
How can I use them with the iPhone SDK, and with which framework?
Thanks in advance!
First you need to know which format your values are in.
If they are not lon/lat they can be anything like meters or inches or half arm lengths or even normalized doughnut holes.
In any case you need to come up with a conversion method, because MapKit only understands geo coordinates (long/lat).
Once you have clarified that, you should take a look at the Location Awareness Programming Guide from Apple. There are also some other good sources for MapKit material, such as raywenderlich.com.
Without knowing what is represented by those values, there isn't really anything you can do with them. Assuming you can convert them to Latitude/Longitude values, this is how you'd be able to center your map at that (X, Y) coordinate:
//Import the <MapKit/MapKit.h> and <CoreLocation/CoreLocation.h> framework
//and then this will go in your implementation file:
CLLocationCoordinate2D coord = CLLocationCoordinate2DMake(xConvertedToLat, yConvertedToLong);
//Set the region your map will display centered on the above coord and spanning 250m on x-axis and 250 on y-axis
MKCoordinateRegion region = MKCoordinateRegionMake(coord, 250, 250);
//You should have a MKMapView object
[myMapView setRegion:region animated:YES];
You can repeat this for each object in your array, but since each call re-centers the map, you will only end up seeing the region around the last (x, y) coordinate you set.
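If the goal is to show all of the points rather than re-center repeatedly, a minimal sketch like the following could drop a pin per converted point instead. This assumes a hypothetical coordinateFromX:y: helper that performs your X/Y-to-lat/long conversion, and points boxed as NSValue objects:
CLLocationCoordinate2D lastCoord = kCLLocationCoordinate2DInvalid;
for (NSValue *value in pointsArray) {
    CGPoint xy = [value CGPointValue];
    // Hypothetical conversion helper -- replace with whatever turns your X/Y into lat/long.
    CLLocationCoordinate2D coord = [self coordinateFromX:xy.x y:xy.y];
    MKPointAnnotation *pin = [[MKPointAnnotation alloc] init];
    pin.coordinate = coord;
    [myMapView addAnnotation:pin];
    lastCoord = coord;
}
// Optionally center on the last converted point.
if (CLLocationCoordinate2DIsValid(lastCoord)) {
    [myMapView setRegion:MKCoordinateRegionMakeWithDistance(lastCoord, 250, 250) animated:YES];
}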
First, before I go on, I have read through: SceneKit painting on texture with texture coordinates which seems to suggest I'm on the right track.
I have a complex SCNGeometry representing a hexasphere. It's rendering really well, and with a full 60fps on all of my test devices.
At the moment, all of the hexagons are being rendered with a single material, because, as I understand it, every SCNMaterial I add to my geometry adds another draw call, which I can't afford.
Ultimately, I want to be able to color each of the almost 10,000 hexagons individually, so adding another material for each one is not going to work.
I had been planning to limit the color range to (say) 100 colors and then move hexagons between different geometries, each with its own colored material, but that won't work because an SCNGeometry's set of vertices is immutable once created.
So, my current thought/plan is to use a shader modifier as suggested by @rickster in the above-mentioned question to somehow modify the color of individual hexagons (or sets of 4 triangles).
The thing is, I sort of understand the Apple doco referred to, but I don't understand how to provide the shader with what I think must essentially be an array of colour information, somehow indexed so that the shader knows which triangles to give what colors.
The code I have now that creates the geometry reads as follows:
NSData *indiceData = [NSData dataWithBytes:oneMeshIndices length:sizeof(UInt32) * indiceIndex];
SCNGeometryElement *oneMeshElement =
[SCNGeometryElement geometryElementWithData:indiceData
primitiveType:SCNGeometryPrimitiveTypeTriangles
primitiveCount:indiceIndex / 3
bytesPerIndex:sizeof(UInt32)];
[oneMeshElements addObject:oneMeshElement];
SCNGeometrySource *oneMeshNormalSource =
[SCNGeometrySource geometrySourceWithNormals:oneMeshNormals count:normalIndex];
SCNGeometrySource *oneMeshVerticeSource =
[SCNGeometrySource geometrySourceWithVertices:oneMeshVertices count:vertexIndex];
SCNGeometry *oneMeshGeom =
[SCNGeometry geometryWithSources:[NSArray arrayWithObjects:oneMeshVerticeSource, oneMeshNormalSource, nil]
elements:oneMeshElements];
SCNMaterial *mat1 = [SCNMaterial material];
mat1.diffuse.contents = [UIColor greenColor];
oneMeshGeom.materials = @[mat1];
SCNNode *node = [SCNNode nodeWithGeometry:oneMeshGeom];
If someone can shed some light on how to provide the shader with a way to color each triangle indexed by the indices in indiceData, that would be fantastic.
EDIT
I've tried looking at providing the shader with a texture as a container for color information that would be indexed by the VertexID however it seems that SceneKit doesn't make the VertexID available. My thought was to provide this texture (actually just an array of bytes, 1 per hexagon on the hexasphere), via the SCNMaterialProperty class and then, in the shader, pull out the appropriate byte, based on the vertex number. That byte would be used to index an array of fixed colors and the resultant color for each vertex would then give the desired result.
Without a VertexID, this idea won't work, unless there is some other, similarly useful piece of data...
EDIT 2
Perhaps I am stubborn. I've been trying to get this to work, and as an experiment I created an image that is basically a striped rainbow and wrote the following shader, thinking it would basically colour my sphere with the rainbow.
It doesn't work. The entire sphere is drawn using the colour in the top left corner of the image.
My shaderModifier code is:
#pragma arguments
sampler2D colorMap;
uniform sampler2D colorMap;
#pragma body
vec4 color = texture2D(colorMap, _surface.diffuseTexcoord);
_surface.diffuse.rgba = color;
and I apply this using the code:
SCNMaterial *mat1 = [SCNMaterial material];
mat1.locksAmbientWithDiffuse = YES;
mat1.doubleSided = YES;
mat1.shaderModifiers = @{SCNShaderModifierEntryPointSurface :
@"#pragma arguments\nsampler2D colorMap;\nuniform sampler2D colorMap;\n#pragma body\nvec4 color = texture2D(colorMap, _surface.diffuseTexcoord);\n_surface.diffuse.rgba = color;"};
SCNMaterialProperty *colorMap = [SCNMaterialProperty materialPropertyWithContents:[UIImage imageNamed:@"rainbow.png"]];
[mat1 setValue:colorMap forKeyPath:@"colorMap"];
I had thought that the _surface.diffuseTexcoord would be appropriate but I'm beginning to think I need to somehow map that to a coordinate in the image by knowing the dimensions of the image and interpolating somehow.
But if this is the case, what units are _surface.diffuseTexcoord in? How do I know the min/max range of this so that I can map it to the image?
Once again, I'm hoping someone can steer me in the right direction if these attempts are wrong.
EDIT 3
OK, so I know I'm on the right track now. I've realised that by using _surface.normal instead of _surface.diffuseTexcoord, I can treat it as a latitude/longitude on my sphere and map that to an x,y in the image, and I now see the hexagons being coloured based on the colour in the colorMap. However, no matter what I do (so far), the normal angles seem to be fixed in relation to the camera position, so when I move the camera to look at a different point of the sphere, the colorMap doesn't rotate with it.
Here is the latest shader code:
#pragma arguments
sampler2D colorMap;
uniform sampler2D colorMap;
#pragma body
float x = ((_surface.normal.x * 57.29577951) + 180.0) / 360.0;
float y = 1.0 - ((_surface.normal.y * 57.29577951) + 90.0) / 180.0;
vec4 color = texture2D(colorMap, vec2(x, y));
_output.color.rgba = color;
ANSWER
So I solved the problem. It turned out that there was no need for a shader to achieve my desired results.
The answer was to use a mappingChannel to provide the geometry with a set of texture coordinates for each vertex. These texture coordinates are used to pull color data from the appropriate texture (it all depends on how you set up your material).
So, whilst I did manage to get a shader working, there were performance issues on older devices, and using a mappingChannel was much much better, working at 60fps on all devices now.
I did find, though, that despite the documentation saying a mapping channel is a series of CGPoint objects, that wouldn't work on 64-bit devices because CGPoint uses doubles instead of floats there.
I needed to define my own struct:
typedef struct {
float x;
float y;
} MyPoint;
MyPoint oneMeshTextureCoordinates[vertexCount];
and then having built up an array of these, one for each vertex, I then created the mappingChannel source as follows:
SCNGeometrySource *textureMappingSource =
[SCNGeometrySource geometrySourceWithData:
[NSData dataWithBytes:oneMeshTextureCoordinates
length:sizeof(MyPoint) * vertexCount]
semantic:SCNGeometrySourceSemanticTexcoord
vectorCount:vertexCount
floatComponents:YES
componentsPerVector:2
bytesPerComponent:sizeof(float)
dataOffset:0
dataStride:sizeof(MyPoint)];
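For completeness, the material side of this approach looks roughly like the sketch below. This is an assumption about how the colour texture might be wired up rather than the project's actual code; paletteImage is a hypothetical UIImage holding the colour data, and the nearest filtering / clamped wrapping are there to stop adjacent palette entries from bleeding into each other:
SCNMaterial *mat = [SCNMaterial material];
mat.diffuse.contents = paletteImage; // sampled via the per-vertex texture coordinates above
mat.diffuse.minificationFilter = SCNFilterModeNearest;
mat.diffuse.magnificationFilter = SCNFilterModeNearest;
mat.diffuse.wrapS = SCNWrapModeClamp;
mat.diffuse.wrapT = SCNWrapModeClamp;
oneMeshGeom.materials = @[mat];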
EDIT:
In response to a request, here is a project that demonstrates how I use this. https://github.com/pkclsoft/HexasphereDemo
I have read through the Nutiteq API Reference in depth and haven't found any built-in methods to get the pixel representation of a longitude and latitude on the device. There is nothing under the existing Projections, so I don't know how I could overcome this issue.
What I want is to draw a circle for my actual GPS location like this,
NOT like the n-vertex polygon in HelloMap3D.
Getting the pixels for lat, lon and radius at a given zoom level under a given projection is the challenge, because the rest would be calls like this:
...
canvas.drawCircle(longitudeInPixel, latitudeInPixel, radiusInPixel, this.paintStroke); // <- for the blue circumference
canvas.drawCircle(longitudeInPixel, latitudeInPixel, radiusInPixel, this.paintFill); // <- for the blue translucent circle
...
So, how could I turn lat, lon and radius into their pixel representation under Nutiteq?
I thank you all in advance.
MapView has a worldToScreen() method for this; see the Map Calculations page in the Nutiteq Android demo project wiki.
I read the Apple docs:
"A map point is an x and y value on the Mercator map projection"
A point is a graphical unit associated with the coordinate system of a UIView
What is the logical difference between a CGPoint and an MKMapPoint?
I obviously need CGPoint to display something on the screen.
So why does MapKit need MKMapPoint?
The fact that both the CGPoint and MKMapPoint structs happen to store two floating-point values named x and y is irrelevant.
They are given different names because they logically deal with different coordinate systems, transformations, ranges and scales.
A 2D world map needs a large, fixed coordinate system that allows a latitude and longitude to be converted to a fixed point on the map regardless of what portion is currently being displayed on the screen.
The range of MKMapPoint values is large, since they need to represent the world's coordinates at a high enough resolution (well beyond screen sizes).
However, you don't exactly need to care about the actual values of an MKMapPoint. Occasionally, you may need to convert a CLLocationCoordinate2D to an MKMapPoint (or the other way around) but you don't need to worry about those values nor should you store them (the docs recommend not doing this since the internal projection calculations to convert a latitude and longitude to a 2D projection may change between iOS releases).
Your usage of an MKMapPoint is only on the basis that you are dealing with the map's 2D projection independent of the device's screen size or what portion of the map is currently displaying.
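As a small illustration of the round trip (the coordinate values here are arbitrary examples):
CLLocationCoordinate2D coord = CLLocationCoordinate2DMake(51.5074, -0.1278);
MKMapPoint mapPoint = MKMapPointForCoordinate(coord);            // lat/long -> projected map point
CLLocationCoordinate2D back = MKCoordinateForMapPoint(mapPoint); // and back again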
I obviously need CGPoint to display something on the screen.
Yes but when adding annotations or overlays, you generally deal with CLLocationCoordinate2D values and let the map view do the conversion as needed.
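For example, an overlay is specified purely in geographic terms and the map view handles every screen-space conversion internally (coord and mapView here are assumed to come from your own code):
MKCircle *circle = [MKCircle circleWithCenterCoordinate:coord radius:250.0]; // radius in metres
[mapView addOverlay:circle];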
MKMapPoint is a geographical point: a latitude and longitude projectively converted onto the map's 2D plane. On the screen you have some bounded view containing your mapView, and you need to convert your geographical position (coord) to a CGPoint in your mapView:
CLLocationCoordinate2D coord;
coord.latitude = location.latitude.doubleValue;
coord.longitude = location.longitude.doubleValue;
MKMapPoint point = MKMapPointForCoordinate(coord); // coordinate -> point on the 2D map projection
CGPoint cgpoint = [mapView convertCoordinate:coord toPointToView:mapView]; // coordinate -> on-screen point in the map view
They both specify a map center and how big the box is.
So why use both?
Some functions in MKMapView use one and some use the other:
- (MKCoordinateRegion)regionThatFits:(MKCoordinateRegion)region
- (MKMapRect)mapRectThatFits:(MKMapRect)mapRect edgePadding:(UIEdgeInsets)insets
What's their difference?
More importantly, which one should we use to set the region we see?
There is no regionThatFits:edgePadding: by the way.
An MKCoordinateRegion is defined using degree coordinates of type CLLocationCoordinate2D, which represent the latitude and longitude of a point on the surface of the globe.
An MKMapRect represents a flat rectangle on the map's 2D projection, defined with map point (x, y) values rather than degrees.
You can use functions such as MKCoordinateRegionForMapRect to do the conversions for you; see http://developer.apple.com/library/ios/#documentation/MapKit/Reference/MapKitFunctionsReference/Reference/reference.html
And to answer your final question, you would use MKCoordinateRegion which will define what region of the globe's surface you want to see and by definition it will set your zoom level.
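A short sketch of that usage (coordinates are arbitrary example values; mapView is assumed to be your MKMapView):
// Show roughly 1 km around a coordinate; the map view picks the zoom level.
CLLocationCoordinate2D center = CLLocationCoordinate2DMake(48.8566, 2.3522);
MKCoordinateRegion region = MKCoordinateRegionMakeWithDistance(center, 1000, 1000);
[mapView setRegion:[mapView regionThatFits:region] animated:YES];
// If you already have an MKMapRect (e.g. the current visible rect), you can convert it:
MKCoordinateRegion fromRect = MKCoordinateRegionForMapRect(mapView.visibleMapRect);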
I am using Core Plot in my app, and I want to display an annotation over a plot symbol. I haven't found any code for this in the sample projects of the latest 0.9 version of Core Plot. After some research I have come to this point:
- (void)scatterPlot:(CPTScatterPlot *)plot plotSymbolWasSelectedAtRecordIndex:(NSUInteger)index
{
CPTLayerAnnotation *annot = [[CPTLayerAnnotation alloc]initWithAnchorLayer:graph];
CPTBorderedLayer *logoLayer = [[[CPTBorderedLayer alloc] initWithFrame:CGRectMake(10, 10, 100, 50)] autorelease];
CPTFill *fillImage = [CPTFill fillWithImage:[CPTImage imageForPNGFile:@"whatEver!"]];
logoLayer.fill = fillImage;
annot.contentLayer = logoLayer;
annot.rectAnchor=CPTRectAnchorTop;
[graph addAnnotation:annot];
}
But it's obviously not working... Can anybody help me?
My goal is to get an annotation over the selected plot symbol, similar to annotations in MKMapView.
Update
It is a date plot, just to clarify things, and it works with time intervals since 2001 on the x-axis.
There are several examples of this in the Core Plot example apps. The gradient scatter plot in the Plot Gallery app (and several other apps as well) uses this method to attach a text label to the selected point. The point selection demo in the Mac version of CPTTestApp uses a second scatter plot to draw crosshairs over the selected point.
Remember to set the plotSymbolMarginForHitDetection property on the scatter plot, too. The default is 0, which means you have to hit the center of the point exactly to register a touch.
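For example (assuming scatterPlot is your CPTScatterPlot instance):
// Allow touches within 10 points of a symbol's centre to register as a selection.
scatterPlot.plotSymbolMarginForHitDetection = 10.0;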
There are two types of annotation in Core Plot. A CPTLayerAnnotation is anchored to a given Core Animation layer (the graph in your case). A CPTPlotSpaceAnnotation is anchored to a plot space coordinate (== data coordinate). Your comment below makes it sound like you want to use a plot space annotation instead of a layer annotation.
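A rough sketch of the plot-space approach follows, under the assumption that the delegate can look up the x/y values for the selected index from its own data source (xValueAtIndex: and yValueAtIndex: are hypothetical accessors, and ARC is assumed):
- (void)scatterPlot:(CPTScatterPlot *)plot plotSymbolWasSelectedAtRecordIndex:(NSUInteger)index
{
    CPTMutableTextStyle *style = [CPTMutableTextStyle textStyle];
    style.color = [CPTColor whiteColor];
    CPTTextLayer *textLayer = [[CPTTextLayer alloc] initWithText:@"Selected" style:style];

    // Anchor the annotation at the data coordinates of the touched symbol.
    NSNumber *x = [self xValueAtIndex:index]; // hypothetical accessor into your data
    NSNumber *y = [self yValueAtIndex:index]; // hypothetical accessor into your data
    CPTPlotSpaceAnnotation *annotation =
        [[CPTPlotSpaceAnnotation alloc] initWithPlotSpace:graph.defaultPlotSpace
                                          anchorPlotPoint:[NSArray arrayWithObjects:x, y, nil]];
    annotation.contentLayer = textLayer;
    annotation.displacement = CGPointMake(0.0, 20.0); // float it above the symbol

    [graph.plotAreaFrame.plotArea addAnnotation:annotation];
}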