I was trying to obtain a GPX file with some coordinates by drawing on an OpenLayers map with an ArcGIS basemap.
When I draw the polyline and create the GPX, if I open it in Google Earth, what I see is not what I drew before: the line is totally different from the original and not positioned where I drew it.
I know it's a projection problem. I've tried transforming the geometry object from Mercator to geographic, and also getting the geographic coordinates directly from the map coordinates, but nothing worked.
I tried to set "spatialReference" to 4326 and then to 3857, but nothing changes.
I'm going to use that .gpx file on a GPS device. Next week I'll go to the Svalbard islands, and I need some GPS tracks to get around Longyearbyen by snowmobile; there is no sign of life outside the town, so I must be prepared. When I'm there I'll adjust the output for the device they rent to me, but for now I need the .gpx file to contain approximately the right coordinates.
I'm getting these coordinates from the map:
lat: 61.22582068741976
lon: 4.684820015391338
when I'm expecting instead something around lat 78 and lon 15.
This is some of the code I use to create the map (I'm not pasting the code that I know isn't responsible for my problem):
var initialExtent = new esri.geometry.Extent({"xmin":505615.5801124362,"ymin":8678955.717187276,"xmax":525935.6207525175,"ymax":8689168.654279819,"spatialReference":{"wkid":32633,"latestWkid":32633}});
map = new esri.Map("map", { extent: initialExtent, logo: false });
basemapURL = "http://geodata.npolar.no/ArcGIS/rest/services/inspire1/NP_TopoSvalbard_U33_CHL/MapServer/";
map.addLayer(new esri.layers.ArcGISTiledMapServiceLayer(basemapURL));
Here I'm using wkid 32633, which is the default for that map; I tried changing it to other known ones, but nothing happened.
And now the code I use to get the coordinates:
dojo.connect(tb, "onDrawEnd", getCoords);
function getCoords(geo) {
    var r = confirm("Save track?"); // original prompt was Italian: "Salvare tracciato?"
    if (r == true) {
        // Convert the drawn geometry from (assumed) Web Mercator to lon/lat
        geo = esri.geometry.webMercatorToGeographic(geo);
        for (var path = 0; path < geo.paths.length; path++) {
            for (var pt = 0; pt < geo.paths[path].length; pt++) {
                // Each vertex is [x, y], i.e. [lon, lat] after the transform
                tra += '<wpt lat="' + geo.paths[path][pt][1] + '" lon="' + geo.paths[path][pt][0] + '"></wpt>';
            }
        }
    }
}
"tra" is a variable that stores all the markup I'll insert into the GPX file with another function.
The webMercatorToGeographic function transforms the map coordinates to geographic ones.
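(Note: webMercatorToGeographic assumes the map coordinates really are Web Mercator, wkid 102100/3857. The map itself reports what it is actually using, which here is the UTM extent's 32633, and that mismatch would explain exactly this kind of offset:)
console.log(map.spatialReference.wkid); // logs 32633 for the map above, not 3857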
Thanks to Devdatta Tengshe on GIS StackExchange I got what I needed:
Use the proj4js library in your application. To do this, follow these steps:
Download the latest library from https://github.com/proj4js/proj4js/releases and include it in your HTML file with a script tag.
You'll need to define the projections before you can use it. You can do this by using the following lines of code:
var UTM33N = "+proj=utm +zone=33 +ellps=WGS84 +datum=WGS84 +units=m +no_defs";
var GCS84 = "+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs";
When you need to transform, you can use the following line:
var transformed = proj4(UTM33N, GCS84, [x, y]);
where x & y are the coordinates in your given projection. As output you'll get an array with two elements: the longitude and the latitude.
That worked fine.
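For completeness, here is a sketch of how the proj4 call slots into the getCoords handler above (assuming the map coordinates really are UTM zone 33N, as the wkid 32633 extent suggests):
var UTM33N = "+proj=utm +zone=33 +ellps=WGS84 +datum=WGS84 +units=m +no_defs";
var GCS84 = "+proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs";

function getCoords(geo) {
    if (confirm("Save track?")) {
        for (var path = 0; path < geo.paths.length; path++) {
            for (var pt = 0; pt < geo.paths[path].length; pt++) {
                // Transform each UTM 33N vertex to WGS84 [lon, lat]
                var ll = proj4(UTM33N, GCS84, geo.paths[path][pt]);
                tra += '<wpt lat="' + ll[1] + '" lon="' + ll[0] + '"></wpt>';
            }
        }
    }
}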
Related
I'm using the Vision framework to detect face landmarks, and it's working fine, but I need to transform the face landmarks (nose, eyes), and for that I need the nose and eye positions in frame coordinates, since the landmarks are drawn using VNFaceLandmarkRegion2D points.
Please let me know how to convert VNFaceLandmarkRegion2D points to frame coordinates so I can get the location in the view for the transformation, or suggest any other way to transform the face landmarks.
This code from Joshua Newnham solves your problem.
func getTransformedPoints(landmark: VNFaceLandmarkRegion2D,
                          faceRect: CGRect,
                          imageSize: CGSize) -> [CGPoint] {
    // Map each normalized landmark point into image coordinates,
    // flipping the y axis (Vision uses a bottom-left origin)
    return landmark.normalizedPoints.map { (np) -> CGPoint in
        return CGPoint(
            x: faceRect.origin.x + np.x * faceRect.size.width,
            y: imageSize.height - (np.y * faceRect.size.height + faceRect.origin.y))
    }
}
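For context, a minimal sketch of how this helper might be called from a Vision face-landmarks request (the observation and imageSize values are assumptions from your request handler; VNImageRectForNormalizedRect is part of Vision):
let faceRect = VNImageRectForNormalizedRect(observation.boundingBox,
                                            Int(imageSize.width),
                                            Int(imageSize.height))
if let nose = observation.landmarks?.nose {
    let nosePoints = getTransformedPoints(landmark: nose,
                                          faceRect: faceRect,
                                          imageSize: imageSize)
    // nosePoints are now in image coordinates with a top-left origin
}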
As a newbie, this is what I could find to get the face marks as CGPoint values:
First I converted the selected image to a CIImage
Then I used a faceDetector on the image
Finally I parsed the image for each face, in case it has more than one
Code:
// A CIDetector configured for faces (this declaration was implicit in the original)
let faceDetector = CIDetector(ofType: CIDetectorTypeFace, context: nil,
                              options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
let chosenPicture = CIImage(data: (self.selectedimage.image?.tiffRepresentation)!)
let selectedFace = faceDetector?.features(in: chosenPicture!, options: [CIDetectorSmile: true])
for person in selectedFace as! [CIFaceFeature] {
    let p1LeftEye = person.leftEyePosition
    let p1RightEye = person.rightEyePosition
    let p1Mouth = person.mouthPosition
}
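Note that CIFaceFeature positions use Core Image's bottom-left origin; to treat them as frame coordinates with a top-left origin, you typically flip the y value. A sketch (inside the loop above, using the CIImage's extent for the height):
let imageHeight = chosenPicture!.extent.height
let mouthInFrame = CGPoint(x: p1Mouth.x, y: imageHeight - p1Mouth.y)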
I'm a newbie to the Intel RealSense SDK and to coding in Visual Studio 2017 (C or C++) for the Intel RealSense camera D435.
In my example I have the following:
static rs2::frameset current_frameset;
auto color = current_frameset.get_color_frame();
frame = cvQueryFrame(color);
I get an error on line 3: "cannot convert 'rs2::video_frame' to 'CvCapture'".
I haven't been able to find a solution to this issue; it's proving difficult and has resulted in more errors.
Does anyone know how I can overcome this problem?
Thanks for the help!
cvQueryFrame accepts a CvCapture instance and is used to retrieve frames from a camera. In LibRS, the frame you retrieve can be used directly; you don't have to fetch it again. Attached is a snippet of the OpenCV example in LibRS; you can refer to the complete code here:
// Includes needed for this snippet to compile:
#include <librealsense2/rs.hpp>
#include <opencv2/opencv.hpp>

rs2::colorizer color_map; // colorizes depth frames (used below)
rs2::pipeline pipe;
// Start streaming with default recommended configuration
pipe.start();
using namespace cv;
const auto window_name = "Display Image";
namedWindow(window_name, WINDOW_AUTOSIZE);
while (waitKey(1) < 0 && cvGetWindowHandle(window_name))
{
rs2::frameset data = pipe.wait_for_frames(); // Wait for next set of frames from the camera
rs2::frame depth = color_map(data.get_depth_frame());
// Query frame size (width and height)
const int w = depth.as<rs2::video_frame>().get_width();
const int h = depth.as<rs2::video_frame>().get_height();
// Create OpenCV matrix of size (w,h) from the colorized depth data
Mat image(Size(w, h), CV_8UC3, (void*)depth.get_data(), Mat::AUTO_STEP);
// Update the window with new data
imshow(window_name, image);
}
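If what you actually need is the color stream rather than colorized depth, the same pattern applies. A sketch, explicitly requesting BGR8 so the buffer matches OpenCV's channel order (the resolution and frame rate here are assumptions):
rs2::config cfg;
cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);
rs2::pipeline pipe;
pipe.start(cfg);

rs2::frameset data = pipe.wait_for_frames();
rs2::video_frame color = data.get_color_frame();

// Wrap the raw frame buffer in a cv::Mat without copying
cv::Mat color_image(cv::Size(color.get_width(), color.get_height()),
                    CV_8UC3, (void*)color.get_data(), cv::Mat::AUTO_STEP);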
I have the SVG file of a particular state and have successfully converted the SVG coordinates into a jVectorMap, but I don't know how to identify the bbox coordinates of the projected region. Can anyone help me identify it?
Assuming your map has been instantiated in my_map, get a map reference:
var mapObj = $('#my_map').vectorMap('get', 'mapObject');
You can get the bbox of a region in the following way:
onRegionOver: function (evt, code) {
    // "code" is the hovered region's code; query its SVG shape's bounding box
    var bb = mapObj.regions[code].element.shape.getBBox();
    console.log(bb);
}
You will get for example:
SVGRect
height:88.74008178710938
width:99.780029296875
x:705.31005859375
y:312.38995361328125
Is this what you need?
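For reference, a minimal init sketch showing where the handler goes (the container id and the map name 'my_state_map' are placeholders for your converted SVG map):
$('#my_map').vectorMap({
    map: 'my_state_map', // placeholder: the name you registered your converted SVG under
    onRegionOver: function (evt, code) {
        var mapObj = $('#my_map').vectorMap('get', 'mapObject');
        var bb = mapObj.regions[code].element.shape.getBBox();
        console.log(code, bb.x, bb.y, bb.width, bb.height);
    }
});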
I am creating a small game in the Unity game engine, and the map for the game is generated from a 2D tilemap. The tilemap contains so many tiles, though, that it is very hard for a device like a phone to render them all, so the frame rate drops. The map is completely static, in that the only moving things in the game are the main character sprite and the camera following it. The map itself has no moving objects and is very simple; there must be a way to render only the needed sections of it, or perhaps render the map just once.
All I have discovered from researching the topic is that a good way might be to use the Unity Mesh class to turn the tilemap into a mesh. I could not figure out how to do this with a 2D tilemap, and I could not see how it would benefit the render time anyway, but if anyone could point me in the right direction for rendering large 2D tilemaps, that would be fantastic. Thanks.
Tile system:
To make the tile map work, I put every individual tile as a prefab in my prefab folder, with the attributes changed for 2D box colliders and scaled size. I assign each tile prefab a certain color on the RGB scale, and then import a PNG file that has the corresponding colors of the prefabs where I want them, like this:
I then wrote a script that places each prefab where its associated color is. It would look like this for one tile:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Map : MonoBehaviour {

    private int levelWidth;
    private int levelHeight;

    public Transform block13;
    private Color[] tileColors;
    public Color block13Color;
    public Texture2D levelTexture;
    public PlayerMobility playerMobility;

    // Use this for initialization
    void Start () {
        levelWidth = levelTexture.width;
        levelHeight = levelTexture.height;
        loadLevel ();
    }

    // Update is called once per frame
    void Update () {
    }

    void loadLevel () {
        tileColors = new Color[levelWidth * levelHeight];
        tileColors = levelTexture.GetPixels ();
        // Walk the texture pixel by pixel and spawn the prefab whose
        // color matches each pixel (only block13 is shown here)
        for (int y = 0; y < levelHeight; y++) {
            for (int x = 0; x < levelWidth; x++) {
                if (tileColors [x + y * levelWidth] == block13Color) {
                    Instantiate (block13, new Vector3 (x, y), Quaternion.identity);
                }
            }
        }
    }
}
This results in a map that looks like this when used with all the code (I took out the code for the other prefabs to save space):
You can instantiate tiles that are in range of the camera and destroy tiles that are not. There are several ways to do this, but first make sure that what's consuming your resources is in fact the large number of tiles and not something else.
One way is to create an empty parent gameObject for the tiles (right click in "Hierarchy" > "Create Empty"), then attach a script to this parent. This script has a reference to the camera (tell me if you need help with that), calculates the distance between each tile and the camera, and instantiates the tile if the distance is less than some value, otherwise destroys the instance (if it's there).
It has to do this in the Update function to check the distances every frame, or you can use coroutines to do fewer checks (more efficient).
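A minimal sketch of that parent script (using SetActive to toggle tiles rather than Instantiate/Destroy, which achieves the same culling more cheaply; the range value is an assumption you would tune):
using UnityEngine;

public class TileCuller : MonoBehaviour {
    public Camera targetCamera;      // assign in the Inspector
    public float visibleRange = 15f; // world units; tune for your camera

    void Update () {
        Vector3 camPos = targetCamera.transform.position;
        // Assumes every tile is a direct child of this object
        foreach (Transform tile in transform) {
            // Compare squared distances to avoid a square root per tile
            float sqrDist = (tile.position - camPos).sqrMagnitude;
            tile.gameObject.SetActive(sqrDist < visibleRange * visibleRange);
        }
    }
}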
Another way is to attach a script to the camera that holds an array of all tile instances and checks their distances from the camera in the same way. Only do this if you have exactly one large tilemap, because it would be hard to reuse the script with more than one.
Also, you can calculate the distance between the tile and the character sprite instead of the camera; pick whichever is more convenient.
If after doing the above you still get frame drops, you can zoom in the camera to include fewer tiles in its range, but you'd have to recalculate the distances then.
I am struggling to solve this problem.
In my scene, I have a camera that looks at the center of mass of an object, and some buttons that set the camera position to a particular view (front view, back view, ...) along an invisible sphere of constant radius that surrounds the object.
When I click a button, I would like the camera to move from its start position to the end position along the sphere's surface, and to keep pointing at the object's center of mass while it moves.
Does anyone have a clue how to achieve this?
Thanks for the help!
If you are happy to use basic trigonometry, then in your initialisation section you could do this:
var cameraAngle = 0;                   // current angle along the orbit
var orbitRange = 100;                  // orbit radius
var orbitSpeed = 2 * Math.PI / 180;    // 2 degrees per frame, in radians
var desiredAngle = 90 * Math.PI / 180; // target angle, set by your buttons
...
camera.position.set(orbitRange,0,0);
camera.lookAt(myObject.position);
Then in your render/animate section you could do this:
if (cameraAngle >= desiredAngle) { orbitSpeed = 0; } // >= rather than ==: the increments may never hit the exact value
else {
    cameraAngle += orbitSpeed;
    camera.position.x = Math.cos(cameraAngle) * orbitRange;
    camera.position.y = Math.sin(cameraAngle) * orbitRange;
    camera.lookAt(myObject.position); // keep the object's center fixed in view
}
Of course, your buttons would modify what the desiredAngle is (0°, 90°, 180° or 270°, presumably), you need to rotate around the correct plane (I am rotating around the XY plane above), and you can play with orbitRange and orbitSpeed until you are happy.
You can also modify orbitSpeed as the camera moves along the orbit path, speeding up and slowing down at various cameraAngles for a smoother ride. This process is called 'tweening', and you could search on 'tween' or 'tweening' if you want to know more. I think Three.js has tweening support but I have never looked into it.
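If you do go the tweening route, a library such as tween.js keeps it short. A sketch, assuming tween.js is loaded alongside Three.js and reusing the variables above:
var state = { angle: cameraAngle };
new TWEEN.Tween(state)
    .to({ angle: desiredAngle }, 1500) // 1.5 s transition
    .easing(TWEEN.Easing.Quadratic.InOut)
    .onUpdate(function () {
        camera.position.x = Math.cos(state.angle) * orbitRange;
        camera.position.y = Math.sin(state.angle) * orbitRange;
        camera.lookAt(myObject.position);
    })
    .start();

// and in your render/animate loop:
TWEEN.update();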
Oh, also remember to set your camera's far property to be greater than orbitRange, or you will only see the front half of your object and, depending on what it is, that might look weird.