Displaying an interactive map of polygons efficiently in react native - react-native

I am currently working on a project where I want to show the user a map on which they can click certain regions. The user should also be able to pan and zoom the map so that smaller regions can be clicked as well.
I have formatted a shapefile with roughly 500 polygons into JSON data I can use. The exact data format is not important; I can adapt it to whatever library I end up using.
What I have tried:
First, I went with react-native-maps because it looked promising: it allows inserting custom polygons and clicking on them. But it turned out that I need a Google Maps API key just to use the library, even though I am not actually using Google Maps at all. I have no interest in setting up an API key and a Google account for something I don't use, so I started looking for an alternative.
It turns out there aren't many alternatives that support rendering only polygons without an API key (Mapbox has the same issue).
I also tried the react-native-svg-pan-zoom library, but it had performance issues: at a resolution high enough that pixels aren't visible after zooming in a bit (canvasWidth above 2000) the app would simply crash, and even at a canvasWidth of 1500 panning wasn't smooth.
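To make it concrete, this is roughly the kind of component I was trying to build (a minimal, untested sketch: the polygons are rendered with react-native-svg, and the pan/zoom wrapper props beyond canvasWidth/canvasHeight are from memory, so they may not match the library's real API exactly):

import React from 'react';
import { Polygon } from 'react-native-svg';
import SvgPanZoom from 'react-native-svg-pan-zoom';

// One entry per region from the shapefile, already projected to canvas
// coordinates, e.g. { id: 'region-42', points: '10,20 40,25 38,60' }.
type Region = { id: string; points: string };

export function RegionMap({ regions, onRegionPress }: {
  regions: Region[];
  onRegionPress: (id: string) => void;
}) {
  return (
    <SvgPanZoom canvasWidth={1500} canvasHeight={1500} minScale={0.5} maxScale={10}>
      {regions.map((r) => (
        <Polygon
          key={r.id}
          points={r.points}
          fill="#cccccc"
          stroke="#333333"
          strokeWidth={1}
          onPress={() => onRegionPress(r.id)}
        />
      ))}
    </SvgPanZoom>
  );
}

With all ~500 Polygon elements inside the zoomable canvas, this is where the performance fell apart for me.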
I'm a bit stuck now because I can't find any alternatives. It's frustrating, because react-native-maps would probably do the trick, but it requires an API key for no reason.
I created this question because maybe there is still a way to avoid the API key, or there is some other library I just couldn't find; maybe you know where to look.

Related

React Native Mobile Application - Generating a graph and converting to a PDF within react native

We're developing a React Native mobile application where we need to generate several different types of graphs (plots, charts, line graphs, etc.) and then convert those graphs into PDF format, so that we can download or send the graphs as PDF files outside the in-app visualization/rendering of them. Currently we are using react-native-chart-kit and haven't been able to come up with an easy solution yet. We are open to using a different library if one exists that has similar graphing capabilities and can easily generate PDFs from the graphing data. At the moment, all we can think of is sending the graphing data to a web server where we have access to other graphing libraries (JS, React web, Python, etc.), generating the PDF there, and sending it back to the mobile application. This isn't ideal, as we'd prefer to do everything directly in the mobile app.
Does anyone have any experience generating graphs within a React Native application and also exporting those graphs as PDF files? Is there a library we might have overlooked, or an easier solution we haven't thought of? Any tips are greatly appreciated!
Not a lot of hits here, but I also cross-posted this on Reddit and got some traction. I'm posting a link here as the answer in case anyone else comes across this post:
https://www.reddit.com/r/reactnative/comments/v1wm0o/generating_a_graph_and_converting_to_a_pdf_within/
The two best answers we debated between (both valid):
You could potentially use https://github.com/gre/react-native-view-shot. It will “screenshot” anything within a particular View and you can then do what you want with the image data. Perhaps even find a way to write it to a PDF file?
The other solution is to do the PDF rendering and graph generation server-side, though that means this would be an online-only feature.
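If it helps anyone, here is a rough, untested sketch of what the view-shot route could look like. It assumes react-native-html-to-pdf for the PDF step (my own pick, not something from the thread), and the chart props are simplified:

import React, { useRef } from 'react';
import { View, Button } from 'react-native';
import { LineChart } from 'react-native-chart-kit';
import { captureRef } from 'react-native-view-shot';
import RNHTMLtoPDF from 'react-native-html-to-pdf';

export function ChartExporter({ data }: { data: any }) {
  const chartRef = useRef<View>(null);

  const exportPdf = async () => {
    // 1. Screenshot the chart view into a base64 PNG.
    const png = await captureRef(chartRef, { format: 'png', quality: 1, result: 'base64' });

    // 2. Embed the image in minimal HTML and write it out as a PDF file.
    const { filePath } = await RNHTMLtoPDF.convert({
      html: `<img src="data:image/png;base64,${png}" style="width:100%" />`,
      fileName: 'charts',
    });
    console.log('PDF written to', filePath);
    // filePath can then be downloaded or shared, e.g. with a sharing library.
  };

  return (
    <View>
      {/* collapsable={false} keeps the wrapper view around on Android so it can be captured */}
      <View ref={chartRef} collapsable={false}>
        <LineChart data={data} width={350} height={220} chartConfig={{ color: () => '#000' }} />
      </View>
      <Button title="Export as PDF" onPress={exportPdf} />
    </View>
  );
}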

Media Foundation - Custom Media Source & Sensor Profile

I am writing an application for previewing, capturing and snapshotting camera input. To this end I am using Media Foundation for the input. One of the requirements is that this works with a Blackmagic Intensity Pro 4K capture card, which behaves similarly to a normal camera.
Unfortunately, Media Foundation is unable to create an IMFMediaSource object from this device. Some research led me to believe that I could implement my own MediaSource.
Then I started looking at samples, and tried to unravel the documentation.
At that point I encountered some questions:
Does anyone know if what I am trying to do is possible?
A Windows example shows a basic implementation of a source, but uses IMFSensorProfile. What is a Sensor Profile, and what should I use it for? There is almost no documentation about this.
Can somebody explain how implementing a custom media source works, in terms of what actually happens on the inside? Am I simply creating my own format, or does it allow me to pull my own frames from the camera and process them myself? I tried following the MSDN guide, but no luck so far.
Specifics:
Using WPF with C#, but I can write C++ and use it from C#.
Rendering to screen uses Direct3D9.
The capture card specs can be found on their site (Blackmagic Intensity Pro 4K).
The specific problem that occurs is that I can acquire the IMFActivator for the device, but I am not able to activate it. On activation, an MF_E_INVALIDMEDIATYPE error occurs.
The IMFActivator can tell me that the device should output a UYVY format.
My last resort is using the DeckLinkAPI, but since I am working with several different types of cameras, I do not want to be stuck with another dependency.
Any pointers or help would be appreciated. Let me know if anything is unclear or needs more detail.

Kinect Hand Gestures

I have been working with Kinect gestures for a while now, and so far the tools available for creating gestures are limited to tracking whole-body movements, for instance swiping your arm to the left or right. The joint types available in the original Kinect SDK include elbows, wrists, hands, shoulders, etc., but not finer details like the index finger, thumb, and middle finger. I mention all this because I am trying to create gestures involving only hand movements (like a victory sign or thumbs up/down). Can anyone guide me through this? Is there a blog or website with code for hand movements?
I was developing an application with Kinect a year ago, and back then it was very hard or nearly impossible to do that. Now Google shows me projects like this one, so be sure to check it out. If you generally want to focus on hand gestures, I really advise you to use Leap Motion.
My friends at SigmaRD have developed something called the SigmaNIL Framework. You can get it from the OpenNI website.
It offers "HandSegmentation", "HandSkeleton", "HandShape" and "HandGesture" modules which may cover your needs.
Also check out the rest of the OpenNI Middleware and Libraries that you can download from their website. Some of them also work with the Microsoft SDK.

Technique to identify a video in iOS camera roll

I'm trying to solve a specific problem (though this could benefit others) which, from googling around, doesn't seem to have a definitive solution. There are probably several partial solutions out there; I'd like to find the best of those (or a combination) that does the trick most of the time.
My specific example is: users in my app can send videos to each other and I'm going to allow them to save videos they have received to their camera roll. I would like to prevent them from forwarding the video on to others. I don't need to identify a particular video, just that it was originally saved from my app.
I have achieved a pretty good solution for images by saving some EXIF metadata that I can use to identify that the image was saved from my app and reject any attempts to forward it on, but the same solution doesn't work for videos.
I'm open to any ideas. So far I've seen suggested:
Using ALAssetRepresentation in some way to save a filename and then compare it when reading in, but I've read that upgrading iOS wipes these names out
Saving metadata. Not possible.
MD5. I suspect iOS would modify the video in some way on saving, which would invalidate this.
I've had a thought about appending a frame or two to the start of the video, perhaps an image that is a solid block of colour (magenta, for example). Then, when reading it back in, get the first frame and do some kind of processing to identify it. Is this practical, or even possible?
What are your thoughts on these, and/or can you suggest anything better?
Thanks!
Steven
There are two approaches you could try. Both only work under iOS 5.
1) Save the url returned by [ALAssetRepresentation url]. Under iOS 5 this URL contains a CoreData objectID and should be persistent.
2) Use the customMetadata property of ALAsset to append custom info to any asset you saved yourself.
Cheers,
Hendrik

Augmented reality in mono touch

I'm developing a typical "Windows GUI"-style app for iPhone using Mono technologies. I need to add a little AR-based functionality to it: just opening up the camera and showing the user information about nearby businesses.
How can I do this using mono?
Of course it is possible. I have created such a project and it works very nicely. It is quite complicated, and I would need three pages to explain it, and the time to do so, which I do not have.
In general, you need to look into:
CLLocationManager, for location and compass heading.
MapKit, if you want to provide reverse geocoding information.
Implementing an overlay view over the UIImagePickerController, which will act as your canvas.
And of course, drawing.
I hope these guidelines will get you started.