I have a couple of zipped shapefiles with around 100-150 features. I am trying to add them to ArcGIS Online (which accepts under 1000 features per shapefile) but it is unable to do so, telling me that the zipped shapefile is too big.
I am not sure why, since the feature counts are well under 1000.
You may be encountering a problem with file size and/or other data on your account, rather than the record limit.
How much storage space do I get?
Subscriptions provide flexible storage capacity options for your organization. If you have an organizational account, check with your administrator for information about your storage limit. If you are an administrator, you can view detailed reports about your organization's storage of tiles, features, and files. A public account comes with 2 GB of total storage space.
Also note:
Organizational and public accounts can upload items through My Content that are up to 1 GB in size. This is a browser limit; larger file sizes may be supported when uploading through desktop applications such as ArcGIS for Desktop.
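If you want to script a quick check of what is actually consuming your quota, a rough sketch with the ArcGIS API for Python is below; the username, password, and search query are placeholders, and I'm assuming the size reported per item is in bytes.

    # Rough sketch: list your items and their reported sizes with the
    # ArcGIS API for Python. Credentials and the query are placeholders.
    from arcgis.gis import GIS

    gis = GIS("https://www.arcgis.com", "your_username", "your_password")

    for item in gis.content.search("owner:your_username", max_items=100):
        size_mb = (item.size or 0) / (1024 * 1024)
        print(f"{item.title:40s} {item.type:25s} {size_mb:8.2f} MB")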
With a Developer Account we get up to 5 GB free for Tile and Data usage and up to 100 MB free for Feature Services. We are not sure what the difference between the two is.
If I upload a 100 MB+ GeoJSON file, will it count against the 100 MB or the 5 GB?
Thank you,
Raj
When you upload the data to ArcGIS, it will be published as a layer in a Feature Service. This will then count towards the 100 MB feature limit. However, feature service storage is typically (always?) more efficient than GeoJSON storage. For example, in a quick test, a 521 KB GeoJSON file downloaded from here turned into a 328 KB Feature Service. Geometries in feature services are stored as binary fields, and other efficiencies of the backing hosted feature service (such as how attribute data is stored) also help. There are of course many factors that influence this, but I expect you would always see an improvement over the raw GeoJSON size.
Note that the GeoJSON file you upload will also be stored, as the source for the published feature service, as part of your 5 GB limit (this is so you can upload updated GeoJSON and republish your feature service at the same URL). You can delete it if you won't ever need to update the feature service this way. For reference, here's the GeoJSON file I uploaded (it seems it was also compressed slightly for storage, to 509 KB).
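For completeness, here is roughly what that upload-and-publish flow looks like when scripted with the ArcGIS API for Python; the file name and titles are placeholders, and the size comparison at the end mirrors the numbers above.

    # Sketch of uploading a GeoJSON file and publishing it as a hosted
    # feature layer. File name and title are placeholders.
    from arcgis.gis import GIS

    gis = GIS("https://www.arcgis.com", "your_username", "your_password")

    # The raw GeoJSON item counts against the 5 GB tile/data storage.
    geojson_item = gis.content.add(
        {"title": "My data", "type": "GeoJson"},
        data="my_data.geojson",
    )

    # The published hosted feature layer counts against the 100 MB
    # feature storage on a developer plan.
    feature_item = geojson_item.publish()

    print("Source item size:   ", geojson_item.size, "bytes")
    print("Feature layer size: ", feature_item.size, "bytes")

    # If you will never republish from updated GeoJSON, the source item
    # can be deleted to reclaim that storage:
    # geojson_item.delete()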
I need to move data from a parameterized S3 bucket into Google Cloud Storage. It's a basic data dump. I don't own the S3 bucket. The path has the following syntax:
s3://data-partner-bucket/mykey/folder/date=2020-10-01/hour=0
I was able to transfer data at the hourly granularity using the Amazon S3 Client provided by Data Fusion. I wanted to bring over a day's worth of data, so I reset the path in the client to:
s3://data-partner-bucket/mykey/folder/date=2020-10-01
It seemed like it was working until it stopped. The status is "Stopped." When I review the logs just before it stopped, I see a warning: "Stage 0 contains a task of very large size (2803 KB). The maximum recommended task size is 100 KB."
I examined the data in the S3 bucket. Each folder contains a series of log files. None of them are "big". The largest folder contains a total of 3MB of data.
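In case it helps, here is roughly how the sizes under the day-level prefix can be double-checked (a boto3 sketch; the bucket and prefix are the ones from the path above):

    # Sum object sizes under the date=2020-10-01 prefix to confirm the
    # data really is only a few MB. Uses whatever AWS credentials you
    # already use for the partner bucket.
    import boto3

    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")

    total_bytes = 0
    total_objects = 0
    for page in paginator.paginate(
        Bucket="data-partner-bucket",
        Prefix="mykey/folder/date=2020-10-01/",
    ):
        for obj in page.get("Contents", []):
            total_objects += 1
            total_bytes += obj["Size"]

    print(f"{total_objects} objects, {total_bytes / (1024 * 1024):.1f} MB total")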
I saw a similar question for this error, but the answer involved Spark coding that I don't have access to in Data Fusion.
Screenshot of Advanced Settings in Amazon S3 Client
These are the settings I see in the client. Maybe there is another setting somewhere I need to set? What do I need to do so that Data Fusion can import these files from S3 to GCS?
When you deploy the pipeline you are redirected to a new page with a Ribbon at the top. One of the tools in the Ribbon is Configure.
In the Resources section of the Configure modal you can specify the memory resources. I fiddled around with the numbers; 1000 MB worked for me, while 6 MB was not enough.
I processed 756K records in about 46 min.
What is the best way to store images and Microsoft Office documents:
Google Drive
Google Storage
You may want to consider checking this page to help you choose which storage option suits you best and to learn more.
To differentiate the two:
Google Drive
A collaborative space for storing, sharing, and editing files, including Google Docs. It is good for the following:
End-user interaction with docs and files
Collaborative creation and editing
Syncing files between cloud and local devices
Google Cloud Storage
A scalable, fully managed, highly reliable, and cost-efficient object/blob store. It is good for these:
Images, pictures, and videos
Objects and blobs
Unstructured data
In addition to that, see Google Cloud Platform - FAQ for more insights.
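If you do go the Cloud Storage route, storing images or Office documents is just an object upload. A minimal sketch with the google-cloud-storage Python client is below; the bucket name and file paths are placeholders.

    # Minimal sketch: upload an image and an Office document to a
    # Cloud Storage bucket. Bucket name and local paths are placeholders.
    from google.cloud import storage

    client = storage.Client()  # uses your default application credentials
    bucket = client.bucket("my-example-bucket")

    for local_path, object_name in [
        ("photos/cat.jpg", "images/cat.jpg"),
        ("reports/q3.docx", "documents/q3.docx"),
    ]:
        blob = bucket.blob(object_name)
        blob.upload_from_filename(local_path)
        print(f"Uploaded {local_path} -> gs://{bucket.name}/{blob.name}")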
Different approaches can be considered. Google Docs is widely used for working with office documents online, and it provides much the same layout as Microsoft Office. The advantage is that you can share a document with other people and edit it online at any time.
Google Drive (useful way to store your files)
Every Google Account starts with 15 GB of free storage that's shared across Google Drive, Gmail, and Google Photos. When you upgrade to Google One, your total storage increases to 100 GB or more depending on what plan you choose.
MediaFire (another useful way to store your files)
MediaFire's basic package gives you 10 GB of cloud space for free, and the files you store there can be protected with password encryption. It offers a number of other features as well and is worth exploring.
I have big files ranging in size from 20 GB to 90 GB. I will download the files with Internet Download Manager (IDM) to my Windows server on an Azure Virtual Machine. I will then need to transfer these files to my Azure Storage account to use them later. The total size of the files is about 550 GB.
Will Azure Storage Explorer do the job, or is there a better solution?
My Azure account is a BizSpark one with a $150 limit; shall I remove the limit before transferring the files to the storage account?
Any other advice?
Thanks very much in advance.
You should look at the AzCopy tool (http://aka.ms/AzCopy) - it is designed for large transfers of data to and from Azure Storage.
You will save network egress cost if your storage account is in the same region as the VM where you are uploading from.
As for cost, this depends on what all you are using. You can use the Azure price calculator (http://azure.microsoft.com/en-us/pricing/calculator/) to help with estimating, or just use the pricing info directly from the Azure website and calculate an estimated usage to see whether you will fit within your $150 limit.
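If you would rather script the transfer than run a separate tool, the upload can also be done with the Azure Storage SDK for Python; the sketch below assumes the current azure-storage-blob package, and the connection string, container name, and file paths are placeholders. For ~550 GB, though, AzCopy will generally be the more robust option.

    # Rough sketch: upload the downloaded files to Blob storage with
    # azure-storage-blob. Connection string, container, and paths are
    # placeholders; AzCopy is still the simpler choice at this scale.
    import os
    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient.from_connection_string("YOUR_STORAGE_CONNECTION_STRING")
    container = service.get_container_client("bigfiles")

    for path in [r"C:\downloads\file1.bin", r"C:\downloads\file2.bin"]:
        blob_name = os.path.basename(path)
        with open(path, "rb") as data:
            # max_concurrency parallelizes block uploads for large files.
            container.upload_blob(name=blob_name, data=data, overwrite=True, max_concurrency=8)
        print(f"Uploaded {blob_name}")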
I have a few questions about storing files on the operating system. These may or may not be valid worries, but I don't want to go on without knowing.
What will happen when the file system they are stored on holds a very large amount of data (1 million images of up to 2 MB each)? Will this affect RAM and make the OS slow?
What security risks does it open up as far as viruses are concerned?
Would scalability just be a matter of transferring files from that machine to a new machine?
The only problem will be if you try to store all of those images in a single directory.
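A common workaround is to fan the images out into nested subdirectories derived from a hash of the filename, so no single directory holds more than a few thousand entries. A sketch (the layout and depth are just one example):

    # Shard files across nested subdirectories so no single directory
    # ends up holding a million entries. Two levels of two hex characters
    # gives 65,536 buckets; this layout is only an example.
    import hashlib
    import os
    import shutil

    def sharded_path(root, filename):
        digest = hashlib.sha1(filename.encode("utf-8")).hexdigest()
        return os.path.join(root, digest[:2], digest[2:4], filename)

    def store_image(root, source_path):
        target = sharded_path(root, os.path.basename(source_path))
        os.makedirs(os.path.dirname(target), exist_ok=True)
        shutil.copy2(source_path, target)
        return target

    print(store_image("/var/data/images", "/tmp/upload/photo_000123.jpg"))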
Serving static files, you are liable to hit limits of the network before you hit the machine's limit.
In terms of security, you want to make sure that only images are uploaded, and not arbitrary files - check more than the file extension or mime-type!
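One way to go beyond extension and mime-type checks is to try actually decoding the upload as an image; a sketch using Pillow is below (treat it as a sanity check, not a complete defence).

    # Sketch: confirm an uploaded file really decodes as an allowed image
    # format, using Pillow. This supplements extension/mime-type checks.
    from PIL import Image, UnidentifiedImageError

    ALLOWED_FORMATS = {"JPEG", "PNG", "GIF"}

    def looks_like_image(path):
        try:
            with Image.open(path) as img:
                img.verify()  # checks integrity without a full decode
                return img.format in ALLOWED_FORMATS
        except (UnidentifiedImageError, OSError):
            return False

    print(looks_like_image("/tmp/upload/photo_000123.jpg"))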