Augmented Reality with React Native (Points of Interest over the camera)

I'm working on an application for Android & iOS to show points of interest over the camera. ARKit & ARCore have poor device compatibility nowadays.
Could you recommend a library to do this? If it comes with an example, even better! I know Viro Media, but I don't understand how to do this using that library.
I don't want 3D models, just markers over the camera, similar to the attached image.

To do this with Viro React -- and in AR in general -- the trick is to recognize that there are two coordinate systems:
The local coordinate system of your device, which we'll call 'AR space'. In Viro, this is centered at the user's initial position when the application starts, and is in meters.
Geographic coordinates (latitude and longitude).
To position the overlays, you have to convert your content from geographic coordinates into AR space. This is a two-step process. First project the spherical geographic coordinates onto a 2D plane -- the Web Mercator is great for this. Then translate the projected coordinates by the device's initial projected position.
The device's initial projected position can be derived by projecting its initial geographic position. In Viro React, you can use the Geolocation module to grab this when the user starts the app.
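For illustration, here is a minimal sketch of that projection step in TypeScript (the function names are mine, not part of Viro, and it assumes the AR session starts with the camera facing roughly north; the bearing correction is covered next):
// Web Mercator projection (EPSG:3857): lat/lon in degrees -> meters.
const EARTH_RADIUS = 6378137; // WGS84 equatorial radius, meters

function latLonToMercator(lat: number, lon: number): { x: number; y: number } {
  const x = EARTH_RADIUS * (lon * Math.PI) / 180;
  const y = EARTH_RADIUS * Math.log(Math.tan(Math.PI / 4 + (lat * Math.PI) / 360));
  return { x, y };
}

// Position of a POI in AR space, relative to the device's initial position.
// Viro AR space: +x is to the camera's right, -z is straight ahead at start.
function poiToARPosition(
  poiLat: number, poiLon: number,
  deviceLat: number, deviceLon: number
): [number, number, number] {
  const poi = latLonToMercator(poiLat, poiLon);
  const device = latLonToMercator(deviceLat, deviceLon);
  const east = poi.x - device.x;   // meters east of the device
  const north = poi.y - device.y;  // meters north of the device
  return [east, 0, -north];        // y = 0 keeps the marker at eye level
}
Keep in mind Web Mercator stretches distances away from the equator; if exact distances matter, you can scale the offsets by cos(latitude).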
Finally, you'll need to do a similar transformation for the user's bearing: converting from compass direction to device orientation in AR space.
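A hedged sketch of that bearing correction (again, the names are mine; headingDeg is the compass heading of the camera at startup, with 0° = north and 90° = east):
// Rotate an (east, north) offset in meters into AR space when the camera
// did not start out facing true north.
function rotateByHeading(east: number, north: number, headingDeg: number): [number, number, number] {
  const theta = (headingDeg * Math.PI) / 180;
  const x = east * Math.cos(theta) - north * Math.sin(theta);    // camera-right component
  const z = -(north * Math.cos(theta) + east * Math.sin(theta)); // camera-forward is -z
  return [x, 0, z];
}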
For this to work well you'll likely have to figure out how to handle inaccurate geolocation lookups (e.g. what happens if the location retrieved from the device is inaccurate), and may also have to account for drift: over time the two coordinate systems may start to fall out of sync.
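One simple, hedged way to damp jittery location fixes is to low-pass filter the projected position before using it (the helper below is illustrative only, not a Viro API):
// Exponential moving average over the projected (Mercator) device position.
let smoothed: { x: number; y: number } | null = null;

function smoothPosition(raw: { x: number; y: number }, alpha = 0.2): { x: number; y: number } {
  smoothed = smoothed === null
    ? raw
    : {
        x: smoothed.x + alpha * (raw.x - smoothed.x),
        y: smoothed.y + alpha * (raw.y - smoothed.y),
      };
  return smoothed;
}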
The last part, creating the info cards, is easy with Viro -- you either pre-bake the images with text and use ViroImage, or if the cards need to be more dynamic you can use a ViroFlexView.
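For instance, a dynamic card might look roughly like this (a sketch only: the prop names are from the Viro docs as I recall them, and PoiCard/arPosition are made-up names; check against your Viro version and package, e.g. react-viro or @viro-community/react-viro):
import React from 'react';
import { ViroFlexView, ViroText } from 'react-viro';

// A simple billboarded info card placed at an AR-space position.
const PoiCard = ({ title, arPosition }: { title: string; arPosition: [number, number, number] }) => (
  <ViroFlexView
    position={arPosition}
    width={2}
    height={1}
    style={{ flexDirection: 'column', backgroundColor: '#222222' }}
    transformBehaviors={['billboard']} // keep the card facing the user
  >
    <ViroText text={title} style={{ color: '#ffffff', fontSize: 24 }} />
  </ViroFlexView>
);

export default PoiCard;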

I am also interested in this one and I'm trying out ViroReact!
I find it a bit difficult to understand how to make this work once the latitudes and longitudes have been converted to x-y values. What should the z-value be?
Let's say you have the lat-lon coordinates [59, 10] as the user location and you want to show where [59, 11] is relative to that location. How do you build that in a ViroARScene?
<ViroNode position={userLocationFromLatLonCartesian}>
  <ViroBox position={poiLocationFromLatLonToCartesian}/>
</ViroNode>
So how do you calculate the scale, position and rotation, so that the object will be visible?
It seems like https://github.com/proj4js/proj4js is a library that could provide conversions from lat-lon to x-y values.
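Assuming proj4js, a hedged sketch of how that could look (it assumes the AR session starts with the camera facing north and that meters from EPSG:3857 are used directly as Viro units; the axis mapping is my assumption):
import proj4 from 'proj4';

// proj4 takes [lon, lat]; EPSG:4326 and EPSG:3857 are built-in definitions.
const user = proj4('EPSG:4326', 'EPSG:3857', [10, 59]); // -> [x, y] in meters
const poi  = proj4('EPSG:4326', 'EPSG:3857', [11, 59]);

const east = poi[0] - user[0];
const north = poi[1] - user[1];

// Scale stays 1 (both systems are in meters), y can stay 0 (eye level),
// and z is the negated northward offset because -z is "forward" in Viro.
const arPosition: [number, number, number] = [east, 0, -north];
For far-away POIs you may want to clamp the distance and keep only the direction, so the marker stays within the scene's render distance.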

I found that both the Android and iOS AR SDKs support location-based AR views. References:
https://developers.google.com/ar/develop/ios/geospatial/quickstart and https://developer.apple.com/documentation/arkit/argeoanchor

Related

Hide an object for a specific camera

I use Godot to create my 3D game. I ran into a problem while creating portals using a camera viewport rendered to a texture. The problem is that the camera captures unnecessary objects that are behind the portal. I partially solved this by setting the camera's "near" parameter to the distance from the camera to the portal, but then the part behind the portal began to be cut off.
The question is, is it possible to hide objects for a particular camera so that other cameras can see them? Perhaps there is another way to do this, for example by creating a static clipping plane?
Proximity Fade
Probably not what you are looking for, but I'll mention it for completeness' sake.
The default material has proximity fade and distance fade, which you can use to make the material disappear if it is too close or too distant from the camera, respectively.
It is important to note that this is not a cull plane, and that the fading is gradual.
Thus, using proximity fade you can make objects near the camera appear semitransparent.
Using Visibility layers and cull mask
is it possible to hide objects for a particular camera so that other cameras can see them?
Every VisualInstance (you know, all things that are visible in 3D) has layers. And every Camera has a cull_mask. If the cull_mask of the Camera does not include any of the layers of a VisualInstance, then the Camera does not see that VisualInstance.
A VisualInstance with no layers will not show on any Camera, even if the Camera has all the layers in its cull_mask (which is the default).
You can either edit the cull_mask of the camera to not include the layers of the VisualInstance, or edit the layers of the VisualInstance, or both.
Using a custom shader cull plane
Perhaps there is another way to do this, for example by creating a static clipping plane?
You can use a custom spatial shader to cut things out based on a plane.
You need to define the plane as uniforms. For this answer I'll use a point-normal definition of a plane:
n · (r - r_0) = 0
That is:
dot(plane_normal, world_position - plane_point)
Thus, we define plane_normal and plane_point uniforms:
uniform vec3 plane_normal;
uniform vec3 plane_point;
The plane_normal gives us the orientation of the plane, while the plane_point is a point on the plane which allows us to position it.
And then use this logic:
vec3 world_position = (CAMERA_MATRIX * vec4(VERTEX, 1.0)).xyz;
ALPHA = clamp(sign(dot(plane_normal, world_position - plane_point)), 0.0, 1.0);
Here we are converting the coordinates of the current point to world space, and then using the definition of the plane to find the points on one side (using sign), setting ALPHA based on that, so that everything on one side of the plane becomes invisible.
Note: This is not the only way to define the plane. Another popular definition is a 4D vector, where xyz is the normal and w is the distance from the plane to the origin.
Sadly, I don't think there is a way to make this work with multiple material passes, because ALPHA controls the blending of the passes, and will not result in transparency. And no, using discard; does not solve it either, because the other passes can write the fragment regardless. Thus, you are going to need to modify your materials to include that.
Further sadly, Godot 3.x does not support global uniforms (see "Godot 4.0 gets global and per-instance shader uniforms"), which means you will have to set these parameters everywhere you need them.
Using Constructive Solid Geometry (CSG)
Add a CSGCombiner and build the geometry that needs to disappear out of other CSG nodes as its children.
Then you can, for example, add a CSGSphere with its operation set to "Subtraction", and move it with the Camera (for this purpose, I suggest adding a RemoteTransform node as a child of the Camera and setting its remote path to the CSGSphere).
Of course, it does not have to be a CSGSphere, you can use any CSG nodes for this purpose. For the portal, I imagine you could use a CSGBox and align it to the portal plane.
Note: Currently on Godot 3.3 CSG nodes do not support baking lights. This is a regression. See: Unable to bake lightmap with CSG due to the lack of ability to generate UV2 for CSG nodes.
Portals, actually
Bartleby Lawnjelly has a portal (godot-lportal) module for Godot 3.x.
Being a module, it requires building Godot from source. See Compiling on the official Godot documentation. It is not that bad, I promise. Or use a build from godot-titan.
I have to explain that these portals are not portals in the Valve Portal video game series sense… The module lets you define areas as "rooms", and planes as "portals" that connect those rooms, in such a way that you can look from one into the other. The purpose of this is to cull entire rooms unless you are looking through one of the portals.
Hopefully that makes more sense with a video. This is a somewhat old one, but good for getting the idea across: Portal rendering module in Godot 3.2 - Improved performance. Seeing shadow popping in the video? Bartleby Lawnjelly also has a custom lightmapper.

Unreal Engine 4 minimum distance of camera from object

I placed a standard Camera in front of a moving Actor. When I set the current view to this camera I noticed a strange behaviour: if the actor gets really close to another object in the scene (a default cube), it disappears from the view. It looks like the camera is getting into the cube. I'm pretty sure the camera is not colliding with the cube, because the actor has a couple of bumpers that prevent the side where the camera is placed from colliding with other objects, and the whole camera mesh is placed fully 'inside' the actor. The problem may be related to the size of the actor, which is about 40cm x 30cm x 10cm. The observed cube is 1m x 1m x 1m, and the minimum distance of the camera from the cube is around 3 cm.
Sounds to me like you're experiencing an issue with an object passing your camera's "clipping plane." In the 3D world, these are simply the minimum and maximum draw distance values. For more information on what you are experiencing, check out this brilliant explanation by Autodesk: https://knowledge.autodesk.com/support/maya/learn-explore/caas/CloudHelp/cloudhelp/2018/ENU/Maya-Rendering/files/GUID-D69C23DA-ECFB-4D95-82F5-81118ED41C95-htm.html
Now, let's fix the issue! In Unreal Engine, it's super easy. Go into your Project Settings/General Settings. There is a value called Near Clip Plane, which simply changes the minimum clipping value for Camera components. I would bet making this value smaller will fix your issue! For a visual representation, check out this tutorial by Kyle Dail: https://www.youtube.com/watch?v=oO79qxNnOfU

Convert a Lat/Lon coordinate on a map based on preset coordinates

First off, I am not sure if this is the right place so I apologize if this belongs elsewhere - please let me know if it does. I am currently doing some prototyping with this in VB so that's why I come here first.
My Goal
I am trying to make a program to be able to log different types of information for a video game that I play. I would like to be able to map out the entire game with my program and add locations for mobs, resources, etc.
What I have
The in-game map can be downloaded, so I have literally just stuck it in as a background image on the form (just for now). The downloaded map, though, is not exactly as the map appears in the game, since the game adds extra water around everything when scrolling around. This makes it a bit tricky to match up where the origin for the map is in game compared to where it would be on the downloaded map.
The nice thing though is that while I am in the game I can print my current coordinates to the screen. So I thought that maybe I can somehow use this to get the right calculation for the rest of the points on the map.
Here is an example image I will refer to now:
In the above map you will see a dotted bounding box. This is an invisible box in the game; once you move your mouse outside of it, the longitude and latitude points will no longer show. This is what I refer to above when I mean I can't find the exact point of origin for the in-game map.
You will also see 2 points: A and B. In the game there are teleporters. This is what I would use to get the most accurate position possible. I am thinking I can find the position (in game) of point A and point B and then somehow calculate that into a conversion for my mouse drag event in VB.
In VB the screen starts at top-left and is 0,0. I did already try to get the 2 points like this and just add or subtract the number to the x and y pixel position of the mouse, but it didn't quite line up right.
So with all this information does anyone know if it is possible to write a lon/lat conversion to pixels based on this kind of data?
I appreciate any thoughts and suggestions and if you need any clarification of any information I have posted please let me know and I will be happy to expand on it. I am really hoping I can get this solved!
Thanks!
EDIT:
I also want to mention I am not sure if there is an exact pixel to lat/lon point for the in game map. I.e. the in game map could be 1 pixel = 100 latitude or something. So I might also need to figure out what that conversion number is?
Some clarifications about converting between pixel locations and latitude/longitude.
First, the map in your game is in a geometric coordinate system, which means everything lies in 2D and you can measure the distance between two points by calculating from their pixel positions.
But when we talk about longitude and latitude, we are actually talking about a geographic coordinate system, which is a '3D' model of the surface of the earth. All maps of the earth are abstracted from 3D to 2D through a step called projection, like Google Maps or your GPS. In this projection process the 3D model is converted to a 2D model, but some part of the map is always distorted, so the same distance in pixels on a map can correspond to different real-world lengths.
So if you don't care about accuracy, you can treat the geometric point as a geographic point. Otherwise, you need to use a GIS library to handle geodesic distances and calculate the geographic point based on the projected coordinate system.
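If you do treat the game map as a flat plane, you can derive a per-axis linear mapping from the two teleporter reference points (shown here in TypeScript just to illustrate the math; the field names are mine, and it assumes the map is not rotated relative to the image):
// A point known in both systems: in-game coordinates and image pixels.
interface RefPoint { lat: number; lon: number; px: number; py: number }

// Build pixel <-> in-game converters from two reference points (A and B).
function makeConverter(a: RefPoint, b: RefPoint) {
  const scaleX = (b.px - a.px) / (b.lon - a.lon);
  const scaleY = (b.py - a.py) / (b.lat - a.lat); // usually negative: screen y grows downward
  const offsetX = a.px - scaleX * a.lon;
  const offsetY = a.py - scaleY * a.lat;

  return {
    toPixel: (lat: number, lon: number) => ({
      x: scaleX * lon + offsetX,
      y: scaleY * lat + offsetY,
    }),
    toLatLon: (x: number, y: number) => ({
      lon: (x - offsetX) / scaleX,
      lat: (y - offsetY) / scaleY,
    }),
  };
}
The scale factors also answer the "1 pixel = 100 latitude" question from the edit: they are exactly that conversion number, derived from the two known points.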

how to know gps device point at which direction?

Currently I manage to get the direction degrees using below code:
' lat1/long1 and lat2/long2 must be in radians for this formula
d = Math.Atan2(Math.Sin(long2 - long1) * Math.Cos(lat2), _
    Math.Cos(lat1) * Math.Sin(lat2) - Math.Sin(lat1) * Math.Cos(lat2) * Math.Cos(long2 - long1))
Dim direction As Double = (RadToDeg(d) + 360.0) Mod 360 ' bearing in degrees, 0 = north
which, in my case, let's say gives 250.65°.
I assign each of the direction values from 0 to 360 to its particular image from the imageList, which is loaded into the pictureBox. (Currently I have 36 compass images with different arrow directions, each representing 10 degrees.)
When my device is pointed to the north, the arrow image shows the correct direction, but when I rotate the device (pointing anywhere that is not north), the arrow image doesn't change, meaning it is no longer showing the correct direction.
So my question is, is it possible to know in which direction the gps device is pointed?
Edit: I'm using Honeywell Dolphin 6000 Scanphone device
The Honeywell Dolphin 6000 documentation doesn't mention a magnetometer or compass, so you're probably SOL. But, if it does have one, then you should be able to find methods to access it in the SDK
I recommend downloading and reviewing any APIs and documents that come with the SDK and look for references to the magnetometer or compass. Microsoft does not have standard APIs to access those sensors in Windows Mobile, so you will need the SDK from Honeywell to get that information.
If I am reading your question correctly, it sounds like you are trying to determine a heading while your position is fixed and you are only rotating the device.
Unfortunately, what you are looking for is not possible with GPS.
Both the formula you are using and GetPosition.Heading give a calculated heading based on sampling your current latitude/longitude and your previous latitude/longitude. So if you aren't moving in a direction (or are moving extremely slowly), your current and previous latitude/longitude values will effectively be the same, which reduces the accuracy of the calculated heading.
The only reliable way to get a heading when your position is relatively fixed is to use a magnetic or gyroscopic compass, which some devices do have built in.
"how to know gps device point at which direction?"
By using the GPS Intermediate Driver, GetPosition.Heading will give you the current direction you are heading in.
As stated in the GPS_POSITION documentation:
"flHeading: Heading, in degrees. A heading of zero is true north."
You must distinguish between the direction you are moving in, which is called bearing or course, and the direction you are looking or holding your device, which is the heading. (Think of sitting in a bus that drives north (course = 0°) while you take a photo facing west: heading = 270°.)
A (consumer) GPS receiver always returns only the course (or bearing), although some APIs unfortunately call it heading.
To know the direction in which you are holding your device while standing still, you have to use a magnetometer. Some modern smartphones, like iPhones or Android devices, have one built in.
Additional hint:
If your device has GPS, do NOT calculate the course via your own or other formulas; take the value from the GPS API instead. This is much more accurate. The GPS chip does not only calculate the direction from positional change; it may also use the physical Doppler shift.

Adapt a drawn overlay to match roads

In my application I can draw an overlay (PolyLine) on my map (MKMapView). This overlay, however, is not bound to actual roads. Is there a way (some API or other) to adapt the overlay so that it covers/overlays a real road?
The application runs on mobile devices (iPod Touch & iPhone), so to keep my app from consuming too much battery, I would not set the Core Location accuracy to the highest level. As a result, the location will be slightly off the road you are on. I would still like my program to adjust for this error...
To get an overlay that matches actual roads, I used the Google Maps API Web Services. I already had an array of all the points forming a route, and I used these points (coordinates) to easily create an overlay that maps onto real roads.
I used the first and last points as the origin and destination of a route and set all the other points as 'waypoints' (a request-building sketch is below); see the Directions API.
To find the nearest address of a point, I used the Geocoding API
Google Maps API Webservices: http://code.google.com/intl/nl-NL/apis/maps/documentation/webservices/
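For reference, a hedged sketch of building such a Directions API request from the recorded points (the endpoint and parameters are from the public Directions API documentation; apiKey and the helper name are placeholders):
interface LatLng { lat: number; lng: number }

// Route from the first to the last point, passing through the rest as waypoints,
// so the returned route (and its polyline) follows real roads.
function buildDirectionsUrl(points: LatLng[], apiKey: string): string {
  const fmt = (p: LatLng) => `${p.lat},${p.lng}`;
  const origin = fmt(points[0]);
  const destination = fmt(points[points.length - 1]);
  const waypoints = points.slice(1, -1).map(fmt).join('|');

  return 'https://maps.googleapis.com/maps/api/directions/json' +
    `?origin=${origin}&destination=${destination}` +
    (waypoints ? `&waypoints=${encodeURIComponent(waypoints)}` : '') +
    `&key=${apiKey}`;
}
Note that the Directions API caps the number of waypoints per request, so a long recorded track may need to be thinned out or split across several requests.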