ArcGIS: load KML files up to 80 megabytes total (10 megabytes per file) - ESRI

Hi, I am using ArcGIS for JavaScript version 4.9 and trying to load up to 8 KML files (each about 10 megabytes).
The KML files load successfully, but interaction with the map (pan and zoom) is very slow and not smooth.
I have several questions regarding this issue:
Can Esri load such an amount of KML files? If not, are there any alternatives?
Why can I do this smoothly in OpenLayers, while in pure ArcGIS it is more problematic?
Can I upload raw KML data instead of pointing to a hosting URL?
I would appreciate any kind of help. Thanks in advance!

KML is a vector format. One option would be to generalize the geometry of the features. You could also decrease the precision of the coordinates: for a coordinate in meters, 5-6 digits after the decimal point are not necessary for visualization in a web map; you can round to the meter.
Finally, if by raw data you mean parsing and loading a KML from a string, that is not possible with the ArcGIS API. The KML/KMZ must be a separate file accessible on the internet.
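As a rough illustration of the precision idea (assuming plain lon,lat,alt KML coordinate strings, where about 5 decimal places of a degree is roughly 1 m at the equator), the rounding could be sketched in Python like this:

```python
import re

def round_kml_coords(coord_text, decimals=5):
    """Round every decimal number in a KML <coordinates> string.

    About 5 decimal places of a degree is roughly 1 m at the equator,
    which is usually enough precision for web-map visualization.
    """
    return re.sub(
        r"-?\d+\.\d+",
        lambda m: f"{float(m.group(0)):.{decimals}f}",
        coord_text,
    )

print(round_kml_coords("34.8219731442,31.2571208817,0"))  # 34.82197,31.25712,0
```

Shorter coordinate strings also shrink the file itself, which helps both download time and parsing.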
The KMLLayer uses a utility service from ArcGIS.com, therefore your
kml/kmz files must be publicly accessible on the internet. If the
kml/kmz files are behind a firewall you must set the
esriConfig.kmlServiceUrl to your own utility service (requires ArcGIS
Enterprise).
Source: the KMLLayer documentation

Related

How to convert a climate API into WMS / WFS layers for advanced analytics

I'm trying to access the OpenWeatherMap forecast API. At some point I need to make decisions based on the climatic layers, so I would like to convert the API output into a WFS layer (GeoServer). Thanks. I'm a JS beginner and know Leaflet and GeoServer.
You don't say much about what the API in question returns. If it is a WFS itself then you can cascade it through your GeoServer instance, which will allow you to serve it out as a WFS and a WMS.
If it returns some other format, you can write a custom datastore by following the steps in this tutorial. When you have a working jar package you can drop it into the WEB-INF/lib folder and restart GeoServer and it should appear as a new datastore.

Azure Computer Vision API - OCR to Text on PDF files

I'm attempting to leverage the Computer Vision API to OCR a PDF file that is a scanned document but is treated as an image PDF.
I've tested it and it tells me that the PDF is "InvalidImageFormat", "Input data is not a valid image". When I test it on a PNG, it works perfectly.
Is there any way to use the API against a PDF image, or is there an Azure API that I could use in conjunction to go PDF > PNG > Text?
Edit
Since answering this, additional services have become available; although I have not personally tried some of them, they may suit this purpose.
https://learn.microsoft.com/en-us/azure/search/cognitive-search-concept-intro
And, at some point in the future, when it goes GA:
https://aws.amazon.com/textract/
Original Answer
Unfortunately Azure has no PDF integration for its Computer Vision API. To make use of Azure Computer Vision you would need to change the PDF to an image (JPG, PNG, BMP, GIF) yourself.
Google does now offer PDF integration, and I have been seeing some really good results in my testing so far.
This is done through the asyncBatchAnnotateFiles method of the Vision client (I have been using the Node.js variant of the API).
It can handle files of up to 2,000 pages; results are divided into 20-page segments and output to Google Cloud Storage.
https://cloud.google.com/vision/docs/pdf
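For reference, here is a minimal sketch of the request body this method expects, written as a Python dict mirroring the files:asyncBatchAnnotate REST shape (the bucket URIs are placeholders, and the exact field names should be checked against the Vision documentation):

```python
def build_pdf_ocr_request(source_uri, dest_uri, batch_size=20):
    """Build a files:asyncBatchAnnotate request body for a PDF stored in GCS."""
    return {
        "requests": [
            {
                "inputConfig": {
                    "gcsSource": {"uri": source_uri},
                    "mimeType": "application/pdf",
                },
                "features": [{"type": "DOCUMENT_TEXT_DETECTION"}],
                "outputConfig": {
                    "gcsDestination": {"uri": dest_uri},
                    # results are written back in segments of up to 20 pages
                    "batchSize": batch_size,
                },
            }
        ]
    }

body = build_pdf_ocr_request("gs://my-bucket/scan.pdf", "gs://my-bucket/ocr-out/")
```

The same structure is what the Node.js client builds for you under the hood.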
The latest OCR service offered by Microsoft Azure is called Recognize Text, and it significantly outperforms the previous OCR engine. It can now be used with the Read operation, which reads and digitizes PDF documents of up to 200 pages.
There is a new Cognitive Services API called Azure Form Recognizer (currently in preview, November 2019) that should do the job:
https://azure.microsoft.com/en-gb/services/cognitive-services/form-recognizer/
It can process the file formats you wanted:
Format must be JPG, PNG, or PDF (text or scanned). Text-embedded PDFs
are best because there's no possibility of error in character
extraction and location.
https://learn.microsoft.com/en-us/azure/cognitive-services/form-recognizer/overview
Here is the link to the official Form Recognizer API documentation:
https://westus2.dev.cognitive.microsoft.com/docs/services/form-recognizer-api/operations/AnalyzeWithCustomModel
Note:
Form Recognizer is currently available in English, with additional language availability growing (4.12.2019).
Form Recognizer is available in the following Azure regions (4.12.2019):
Canada Central, North Europe, West Europe, UK South, Central US, East US, East US 2, South Central US, West US
https://azure.microsoft.com/en-in/global-infrastructure/services/?products=cognitive-services
Sorry, you have to break the PDF pages into images (JPGs and PNGs) and then send the images over to Computer Vision. It is also a good idea to split the document so that you don't have to OCR all pages, only the ones that matter.
There is a new Read API to work with PDFs:
https://learn.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-recognizing-text
Computer Vision’s Read API is Microsoft’s latest OCR technology that extracts
printed text (seven languages), handwritten text (English only), digits, and
currency symbols from images and multi-page PDF documents.
Read API reference: https://westcentralus.dev.cognitive.microsoft.com/docs/services/computer-vision-v3-ga/operations/5d986960601faab4bf452005
It works well enough, but does not have a lot of languages yet.
You can convert each page of the PDF to an image using fitz (PyMuPDF).
# import packages
import fitz  # PyMuPDF; older releases used camelCase names (pageCount, getPixmap)
import numpy as np
import cv2
# set path to the PDF
path2doc = <path to pdf>
# open the PDF with fitz
doc = fitz.open(path2doc)
# determine the number of pages
pagecount = doc.page_count
# loop over all pages and convert each one to an image (here PNG)
for i in range(pagecount):
    page = doc[i]
    png_bytes = page.get_pixmap().tobytes("png")
    img_as_np = np.frombuffer(png_bytes, dtype=np.uint8)
    image = cv2.imdecode(img_as_np, flags=cv2.IMREAD_COLOR)
Once this is done, you can send the images to the API.
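As a sketch of that last step (the endpoint host and key are placeholders; the /vision/v3.2/ocr route and Ocp-Apim-Subscription-Key header follow the Computer Vision REST conventions, which you should verify for your resource), the per-image call could look like:

```python
import urllib.request

def build_ocr_request(image_bytes, subscription_key, endpoint):
    """Prepare a POST to the Computer Vision OCR endpoint for one page image."""
    return urllib.request.Request(
        f"{endpoint}/vision/v3.2/ocr",
        data=image_bytes,
        headers={
            "Ocp-Apim-Subscription-Key": subscription_key,
            "Content-Type": "application/octet-stream",
        },
        method="POST",
    )

# Hypothetical usage with a page image produced by the loop above:
# req = build_ocr_request(png_bytes, "<your-key>", "https://<region>.api.cognitive.microsoft.com")
# with urllib.request.urlopen(req) as resp:
#     result = resp.read()
```

Sending raw bytes with application/octet-stream avoids having to host each page image at a public URL first.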

How to store images for mobile development

I decided to use back4app for easily creating my backend and for having a built in hosting solution.
I'm quite a newbie with this tool so my question will seem "simple":
I was wondering how I will store the images of my mobile application. As far as I know they use AWS, so I thought the service would provide an interface to upload images to an S3 bucket...
Should I create a personal bucket, or does the service offer that kind of feature?
The idea is then to store the absolute URL of the image in my model. For example, each Class has a cover field of type string.
You're right, Back4App uses AWS.
Back4App prepares the backend for you: for example, if you save a file directly from your Parse Dashboard, you can access the image and you already have an absolute URL. You can configure the column with the File type, like below:
Add a column with the File type.
After uploading a file, you can access it by clicking on the box :)
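For uploads from the app itself, here is a hedged sketch of what the same operation looks like through the Parse REST API that Back4App exposes (the host, app ID, and REST key are placeholders; check Back4App's docs for the exact endpoint of your app):

```python
import urllib.request

def build_parse_file_upload(file_name, file_bytes, app_id, rest_key,
                            content_type="image/jpeg",
                            host="https://parseapi.back4app.com"):
    """Prepare a POST that uploads a file through the Parse REST API.

    The JSON response contains the file's absolute URL, which you can
    then store in your Class's cover field.
    """
    return urllib.request.Request(
        f"{host}/files/{file_name}",
        data=file_bytes,
        headers={
            "X-Parse-Application-Id": app_id,
            "X-Parse-REST-API-Key": rest_key,
            "Content-Type": content_type,
        },
        method="POST",
    )

# Hypothetical usage:
# req = build_parse_file_upload("cover.jpg", open("cover.jpg", "rb").read(),
#                               "<app-id>", "<rest-key>")
```

So you do not need your own S3 bucket; the hosted Parse backend stores the file and hands back the URL for your model.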

Finding the file size of an image file in the Sony Camera Remote API

I'm writing a fairly involved application for working with Sony cameras.
I can list the contents of the camera and copy image files no problem at all, but I can't seem to figure out the size of the files before I start to download them.
I'm receiving the file list using the standard getContentList API, and finding the files using the originals array in the response. That response seems to have no file size information in it.
Is this possible? Knowing the file size before downloading is important for a good user experience, and all the other camera APIs support it.
I do get the size when I start to download in the HTTP Content-Length header, but performing HEAD requests to hundreds of URLs in a row seems very inefficient!
Unfortunately the API does not support getting the file size.
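If HEAD requests end up being the only option, issuing them concurrently at least hides most of the latency. A minimal sketch (the probe function is injectable, so the same helper works against the camera's real file URLs or a stub):

```python
from concurrent.futures import ThreadPoolExecutor
import urllib.request

def head_content_length(url):
    """HEAD one URL and return its Content-Length, or None if absent."""
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        value = resp.headers.get("Content-Length")
        return int(value) if value is not None else None

def fetch_sizes(urls, probe=head_content_length, workers=8):
    """Probe many URLs concurrently; returns {url: size_in_bytes_or_None}."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(urls, pool.map(probe, urls)))
```

A small worker pool keeps the camera from being flooded while still overlapping the round trips.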

ArcGIS for JavaScript: TPK data retrieval

Is it possible to retrieve data from a TPK? Meaning, is there a way to embed some information in a TPK, like address, region, etc., and retrieve that information by querying?
No, this wouldn't be possible.
Firstly, according to the ESRI documentation, tile packages are solely for storing raster tiles; when displayed, these tiles would show a user a map image but could not be queried interactively to identify or search for addresses or regions.
Additionally tile packages would not be practically accessible to web applications designed with the ArcGIS Javascript API. Tile packages are zipped file systems containing a large number of images, the usual way of making these tiles available to an application would be through a map service.
For this type of functionality, I would recommend viewing the examples on querying map services as a demonstration of what is available with the API.