Search for specific aspects on a map at a certain location - objective-c

How would I go about writing a function that interacts with Apple's MapKit so that, given a specific location (latitude/longitude), the program searches for certain features within a 5-10 km radius on the map?
For example: given the geo-location of an airport, search for runways within a radius of that airport, then place vectors marking each runway for viewing purposes, along with its specific location.
How would I go about programming something like that?

Just reading through the sample code, this is what you need:
// confine the map search area to the user's current location
MKCoordinateRegion newRegion;
newRegion.center.latitude = self.userLocation.latitude;
newRegion.center.longitude = self.userLocation.longitude;
// setup the area spanned by the map region:
// we use the delta values to indicate the desired zoom level of the map,
// (smaller delta values corresponding to a higher zoom level)
//
newRegion.span.latitudeDelta = 0.112872;
newRegion.span.longitudeDelta = 0.109863;
And the reference.
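The snippet above hard-codes the span deltas; for a 5-10 km search radius you can derive them from the radius instead. A rough sketch of the conversion (plain Python here, spherical-Earth approximation; the example coordinate is made up):

```python
import math

def region_span(lat_deg, radius_m):
    """Approximate latitude/longitude deltas that cover a circle of
    radius_m metres centred at lat_deg (spherical-Earth estimate)."""
    meters_per_deg_lat = 111_320.0  # ~length of one degree of latitude
    lat_delta = 2 * radius_m / meters_per_deg_lat
    # degrees of longitude shrink with the cosine of the latitude
    lon_delta = 2 * radius_m / (meters_per_deg_lat * math.cos(math.radians(lat_deg)))
    return lat_delta, lon_delta

# e.g. a 10 km radius around a hypothetical airport at 44.88 N
lat_d, lon_d = region_span(44.88, 10_000)
```

In MapKit itself, MKCoordinateRegionMakeWithDistance(center, 2 * radius, 2 * radius) performs this conversion for you, and the resulting region can be handed to an MKLocalSearchRequest together with a query string.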


Unity: Texture2D ReadPixels for specific Display

Unity has had support for multiple display outputs for a while now (up to 8).
With the ReadPixels function, you can specify an area to read from, and an origin coordinate. But I cannot specify a display number to perform the read on.
I need to be able to read pixels from a specific display (1-8) with a specific area and origin point.
How can I do this, please?
You can achieve ReadPixels for a specific screen/display. You have to do the following:
Before I start, I assume you have a number of cameras, each rendering to a different display. The cameras must not have a RenderTexture attached to them in order to output to a display.
Define a function which does the following:
Assign the desired camera a temporary RenderTexture
Set RenderTexture.active = *temporary render texture* so the currently active RenderTexture is the temporary one you just created
Call Render() on the camera so it draws into that RenderTexture
Use ReadPixels to read pixels into a temporary Texture2D using an appropriate Rect; this reads from the currently active RenderTexture
Call Apply() on the Texture2D
Set RenderTexture.active and the camera's targetTexture back to null
The idea is that ReadPixels works on the currently Active RenderTexture.
The code should look something like this:
outputCam.targetTexture = outputCamRenderTexture;
RenderTexture.active = outputCamRenderTexture;
outputCam.Render ();
tempResidualTex.ReadPixels (screenRect, 0, 0);
tempResidualTex.Apply ();
RenderTexture.active = null;
outputCam.targetTexture = null;

GIS data files converting each address to lat/lon in dbf shape data

I need lat/lon from GIS data.
I have data files from
http://www.mngeo.state.mn.us/chouse/land_own_property.html
given in the format of
.dbf, .prj, .sbn, .sbx, .shp, and .shx
in the .dbf I see
PIN, Shape_area, Shape_len
PARC_CODE Parcel Polygon to Parcel Point numeric 2
and PIN Relationship Code
and in the .prj
PROJCS["NAD_1983_UTM_Zone_15N",GEOGCS["GCS_North_American_1983",DATUM["D_North_American_1983",SPHEROID["GRS_1980",6378137.0,298.257222101]],PRIMEM["Greenwich",0.0],UNIT["Degree",0.0174532925199433]],PROJECTION["Transverse_Mercator"],PARAMETER["False_Easting",500000.0],PARAMETER["False_Northing",0.0],PARAMETER["Central_Meridian",-93.0],PARAMETER["Scale_Factor",0.9996],PARAMETER["Latitude_Of_Origin",0.0],UNIT["Meter",1.0]]
I also know the polygon and point counts for each county:
County      Polygons  Points
Anoka       129139    129138
Carver      38134     38133
Dakota      135925    150294
Hennepin    422976    446623
Ramsey      149169    168233
Scott       55191     55191
Washington  98915     103915
and I know the bounding coordinates
-94.012
-92.732
45.415
44.471
There seem to be tons of software applications for GIS
http://en.wikipedia.org/wiki/List_of_geographic_information_systems_software
but what do I need to do?
I want the lat/lon of every house.
Is there a library that will do this for me?
What data do I need?
I think you need to install a GIS application. You can try the open-source QGIS.
Firstly, your data is not in long/lat (geographic) coordinates. The .prj part of the shapefile (yes, the .dbf, .prj, .sbn, .sbx, .shp, and .shx files with the same name together form one shapefile) says that the data is in the projected coordinate system NAD 1983 UTM Zone 15N. So you need to transform your data to a geographic system. You can easily do this in a GIS, or programmatically with the proj.4 library. (In QGIS, add the shapefile to the project, select it in the table of contents, right-click, and choose "Save as..."; it will ask you for the target coordinate system.) Note that you need to decide which geographic coordinates you want, because your data is in the North American Datum (NAD 1983), while the most common worldwide is now WGS 1984.
Secondly, in the GIS you will see whether your data are really points or polygons. (If your houses are polygons you will need to get their centroids; in QGIS, menu Vector - Geometry Tools - Polygon Centroids.)
Finally, when you really have your houses as points in geographic coordinates, you can extract their coordinates, for example using the advice from these questions: Get list of coordinates for points in a layer and How do I calculate the latitude and longitude of points using QGIS.
Besides that, there is a good library for working with GIS vector data, OGR, which can be used from many programming languages.
The file extensions above show that the files are in ESRI Shapefile format. In Java you could use the GeoTools libraries to read them.
The example below shows the first lines; search the Internet for a more complete example.
// init shapefile
File shpFile = new File(fileName);
if (!shpFile.exists()) {
    LOGGER.fatal(fileName + " does not exist");
}
Map<String, URL> connect = new HashMap<String, URL>();
FeatureCollection collection = null;
FeatureIterator iterator = null;
try {
    connect.put("url", shpFile.toURI().toURL());
    DataStore dataStore = DataStoreFinder.getDataStore(connect);
    String typeName = dataStore.getTypeNames()[0];
"I want the lat, lon of every house" suggests that what you want to do is the process called geocoding. There are services you can use for that, some free (for limited uses) some not. You could start by looking at the List of geocoding systems to get an idea of where to start. I don't think you want to start by learning GIS or shapefiles, other than to extract the addresses you are trying to geocode.
You could estimate the lat/lon of each house by computing the centroid of each parcel. You could more roughly estimate it by calculating the centroid of the bounding rectangle of each parcel. Either of those would require extracting the parcel coordinates. If you are doing that for every house in Minnesota you will be processing lots of data. A geocoding service would be cheaper. The Census Geocoder might help.
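The two centroid estimates mentioned above can be sketched in plain Python. This assumes the parcel ring has already been extracted (e.g. via OGR) and reprojected to lon/lat; the vertices are just (x, y) pairs:

```python
def polygon_centroid(ring):
    """Centroid of a simple (non-self-intersecting) polygon via the
    shoelace formula. `ring` is a list of (x, y) vertices; the ring
    may be open or have its first vertex repeated at the end."""
    area2 = 0.0  # twice the signed area
    cx = cy = 0.0
    n = len(ring)
    for i in range(n):
        x0, y0 = ring[i]
        x1, y1 = ring[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        area2 += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    return cx / (3 * area2), cy / (3 * area2)

def bbox_center(ring):
    """Rougher estimate: centre of the bounding rectangle."""
    xs = [p[0] for p in ring]
    ys = [p[1] for p in ring]
    return (min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2
```

For convex, roughly rectangular parcels the two estimates agree closely; they diverge for L-shaped or irregular parcels, where only the shoelace centroid stays inside-ish the polygon.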

Zoom MKMapview to closest two MKAnnotations for current user location

What is the best way to zoom an MKMapview to closest two or three MKAnnotations for current user location?
I have a list of GPS coordinates (328 to be precise) loaded from a plist, every point is an annotation on the map. I'd like to limit the view to the two nearest annotation points around the user's current location.
Roughly, the steps would be:
Find current location, convert to MKMapPoint
Iterate your list of annotations, using MKMetersBetweenMapPoints to find distance from current location
Save 2 or 3 smallest distances
Use the largest of these three distances to make a region using MKCoordinateRegionMakeWithDistance
Center the map on current location
Zoom to the region using [mapView setRegion:region animated:TRUE]
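The distance-sorting step above can be sketched with a plain haversine function (Python, made-up coordinates; in MapKit you would use MKMetersBetweenMapPoints instead):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = p2 - p1
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest(user, annotations, k=3):
    """The k annotations closest to `user`, plus the largest of those
    distances (use ~2x that radius to build the map region)."""
    ranked = sorted(annotations, key=lambda p: haversine_m(*user, *p))
    picked = ranked[:k]
    radius = haversine_m(*user, *picked[-1])
    return picked, radius
```

With 328 points a linear scan like this is plenty fast; the resulting radius maps onto MKCoordinateRegionMakeWithDistance(userLocation, 2 * radius, 2 * radius).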

CATIA-CAA CATIVisu

Hi, I need the flow to read the visualisation details from a CATIA V5R18 Part file.
Visualisation details like:
1. No. of vertices
2. No. of triangles
3. No. of strips
4. No. of fans
5. No. of normals
6. Bounding sphere centre and radius
I have read these details from .cgr files using CAT3DRep/CATRep/CATSurfacicRep...
But I am not able to read the same for .CATPart files.
From a .CATPart, with the help of CATIVisu, I got a CAT3DBagRep type when I queried from PartFeatures, but to get the visualisation details I need a CATSurfacicRep.
Can anyone help?
What interface should I query, and from where should I query it?
Well, information about the mesh (triangles, strips, fans, etc.) is only carried by leaf Reps, like CAT3DSurfacicRep.
For complex files like CATPart or CATProduct, where you have a hierarchy of geometries, there's also a hierarchy of Reps. CAT3DBagRep is the class that allows building this hierarchy, as it has children Reps (which can of course be also CAT3DBagReps).
One solution may be to recursively explore this Rep hierarchy from the root CAT3DBagRep you get. The method to get the children Reps of a CAT3DBagRep is:
list<CATRep> *GetChildren();
You can go down the Rep tree until you get Reps of the expected type, like CATSurfacicRep. You may find many of them depending on your model.
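The recursion itself is simple; here is a generic sketch (Python, with BagNode and LeafNode as hypothetical stand-ins for CAT3DBagRep and CATSurfacicRep; the real CAA code would call CAT3DBagRep::GetChildren() and use C++ type checks instead of isinstance):

```python
class LeafNode:
    """Stand-in for a leaf Rep (e.g. CATSurfacicRep) carrying mesh data."""
    def __init__(self, name):
        self.name = name

class BagNode:
    """Stand-in for CAT3DBagRep: only holds child Reps."""
    def __init__(self, children):
        self.children = children

def collect_leaves(rep, out=None):
    """Depth-first walk down the Rep tree, gathering the leaf Reps."""
    if out is None:
        out = []
    if isinstance(rep, BagNode):
        for child in rep.children:
            collect_leaves(child, out)
    else:
        out.append(rep)
    return out
```

In the real traversal each BagNode level would also contribute its GetMatrix() transform, composed down the path, so that leaf coordinates can be brought into a common frame.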
When retrieving the mesh coordinates, normals and bounding element, please take into account that they are given in local Rep coordinates. A CAT3DBagRep carries positioning and orientation information (used when you position CATProducts, for example). This is returned by the following CAT3DBagRep method:
const CAT4x4Matrix * GetMatrix() const;
Depending on your scenario/model, you may need to take this positioning information into account.

How do you measure how far the map moved?

In my mapView:regionDidChangeAnimated method I'm making a call to find places on the map but I only want to make the call if the map has moved a significant amount.
Here is scenario:
User moves the map, or the map loads
HTTP call to find places
Add places to the map.
PROBLEM! The user taps an annotation, opening the callout bubble; it's close to the edge, so the map moves. Since the loading of data is tied to the map-move event, the marker disappears and is re-added.
How should I watch both the span and the center point for change?
@Scott Thanks for the visibleMapRect idea. This is what I have working so far; it still needs to account for zooming in and out.
MKMapRect newRect = _mapView.visibleMapRect;
MKMapRect oldRect = currentRect;
double leftBoundary = newRect.origin.x - (newRect.size.width / 4);
double rightBoundary = newRect.origin.x + (newRect.size.width / 4);
double topBoundary = newRect.origin.y - (newRect.size.height / 4);
double bottomBoundary = newRect.origin.y + (newRect.size.height / 4);
NSLog(@"Origin x %f, y %f", oldRect.origin.x, oldRect.origin.y);
NSLog(@"Boundaries left %f, top %f, right %f, bottom %f", leftBoundary, topBoundary, rightBoundary, bottomBoundary);
if (oldRect.origin.x < leftBoundary || oldRect.origin.x > rightBoundary || oldRect.origin.y < topBoundary || oldRect.origin.y > bottomBoundary) {
    [self loadLocations];
    currentRect = newRect;
}
Hmm. It sounds like you're refreshing your map by removing all annotations, and then (re-)displaying all annotations that fall within the visibleMapRect – and that solving this problem may require a more nuanced approach to updating the map.
One approach might be to use MKMapRectIntersection to identify the overlap between your "old" and "new" visibleMapRects, and (if there is one) exclude the annotations in this region from being removed or re-added. Or, you could calculate the L-shaped area that's scrolling on-screen and only make an HTTP call for data within that region.
Or you could just check whether the map's "old" center is still within visibleMapRect, and if so arbitrarily decide that the map hasn't moved a significant amount. That may leave you with areas on-screen that should have annotations but don't, though.
Or, finally, you could just store the coordinate of the user-selected annotation, and if that coordinate is still on-screen after the map moves, find it and re-select the annotation.
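The overlap-based "significant move" test can be sketched with plain axis-aligned rectangles (Python, rects as (x, y, width, height) tuples; MKMapRectIntersection plays this role in MapKit, and the 0.75 threshold is an arbitrary choice):

```python
def intersection(a, b):
    """Overlap of two (x, y, w, h) rects, or None if they don't touch."""
    x = max(a[0], b[0])
    y = max(a[1], b[1])
    w = min(a[0] + a[2], b[0] + b[2]) - x
    h = min(a[1] + a[3], b[1] + b[3]) - y
    return (x, y, w, h) if w > 0 and h > 0 else None

def moved_significantly(old, new, threshold=0.75):
    """Reload only if less than `threshold` of the new rect is still
    covered by the old one."""
    inter = intersection(old, new)
    if inter is None:
        return True
    covered = (inter[2] * inter[3]) / (new[2] * new[3])
    return covered < threshold
```

Because the test uses areas rather than just the origin, it also behaves sensibly when the user zooms: a deep zoom-in or zoom-out shrinks the covered fraction and triggers a reload, while the small callout-induced pan does not.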