React Native & Firebase Cloud Storage - Create buckets dynamically

I am looking at creating an app that uses cloud storage for image storing, and according to this it seems smart to create a bucket per user. However, that seems like a bad idea for scalability, since you would end up with millions of buckets. My question is: for an app that stores images in storage buckets, is it better to create a per-user bucket, or to use a single bucket, name files uniquely per user (e.g. by user email), and limit access to the files inside to each user?
Every doc I visit mentions creating the bucket either in the console or with gsutil, but I am looking to see if there is a way to do it from the React Native client side, so that when a user creates a new account, a new bucket can be allocated to them. I have also looked into the Google Cloud JSON API.
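For what it's worth, the single-bucket approach is the usual pattern with Firebase Storage: every user writes under their own path in one shared bucket, and security rules scope access to that path. A minimal sketch, assuming @react-native-firebase/auth and @react-native-firebase/storage and a signed-in user (the path layout is just an example):

```js
import auth from '@react-native-firebase/auth';
import storage from '@react-native-firebase/storage';

// One shared bucket; each user's files live under users/<uid>/.
// Namespacing by uid rather than email keeps paths stable if the email changes.
async function uploadProfileImage(localPath) {
  const uid = auth().currentUser.uid;
  const ref = storage().ref(`users/${uid}/images/profile.jpg`);
  await ref.putFile(localPath); // upload a file from the device's filesystem
  return ref.getDownloadURL();  // URL the app can render with <Image>
}
```

On the rules side, a Storage security rule along the lines of match /users/{uid}/{allPaths=**} { allow read, write: if request.auth.uid == uid; } confines each user to their own prefix, which removes the need for per-user buckets entirely.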

Related

Architecture for image storage and retrieval system

What is the best architecture for storing and retrieving images for a blog? I have a use case where I have to design an image storage/retrieval system for articles. Where and how should I store these images, and how should I retrieve/access them while displaying the contents of an article with minimum latency?
It would be great if you could provide any references for this. Thanks.
If you want minimum latency for image retrieval, you need to use a CDN (Content Delivery Network).
Check out this article for more details.
For example, AWS offers CloudFront, which is very simple to use: store the images in an S3 bucket, then use dedicated CloudFront URLs in your client-side code to fetch the images.
There are other CDN providers out there; you can find them right away with a Google search.
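To make that flow concrete, here is a minimal sketch using the AWS SDK for JavaScript v3; the bucket name and distribution domain are placeholders for your own:

```js
import { readFile } from 'fs/promises';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: 'us-east-1' });

async function publishImage(localPath, key) {
  // Store the original in S3; the CloudFront distribution uses this bucket as its origin.
  await s3.send(new PutObjectCommand({
    Bucket: 'my-article-images', // hypothetical bucket name
    Key: key,
    Body: await readFile(localPath),
    ContentType: 'image/jpeg',
  }));
  // Clients fetch through the CDN domain rather than the S3 URL, so edge caches apply.
  return `https://d1234example.cloudfront.net/${key}`;
}
```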

Can we use SDKs directly in Suitelet?

I am implementing a requirement to store images in an AWS S3 bucket instead of NetSuite. Since the bucket is private, I have to upload the files and generate the URLs in the backend/Suitelet.
I tried to include the AWS SDK in the Suitelet by adding it to the define() declaration, but that doesn't work.
I want to know whether we can use/include SDKs inside a Suitelet.
How can I implement a solution for this without using any third-party solutions?
How are permissions for the links managed? Can you make them publicly viewable? Remember that unless the links you generate are time-limited, anyone with the link can get to the image.
In terms of uploading the images check out https://github.com/DeepChannel/netsuite-savedsearch-s3
If you need each image to have a magic link, you could use a Heroku app or an AWS Lambda. The app would check a hash based on the link parameters and proxy the image if the hash is valid. If your images are supposed to be private to a customer, this would be the way to go (a rough sketch follows below).
If you are using the images generally on a website, then just make the bucket publicly readable and use the API to upload.
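As an illustration of that magic-link idea (the Heroku/Lambda-style proxy, not a Suitelet), the app recomputes an HMAC over the link parameters and only streams the private S3 object when the signature matches and hasn't expired. This is a sketch, assuming Express and the AWS SDK v3, with all names as placeholders:

```js
import express from 'express';
import crypto from 'crypto';
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';

const app = express();
const s3 = new S3Client({ region: 'us-east-1' });
const SECRET = process.env.LINK_SECRET; // shared secret used when the links were issued

app.get('/image/:key', async (req, res) => {
  const { key } = req.params;
  const { expires, sig } = req.query;
  // Recompute the HMAC over key + expiry and compare it with the link's signature.
  const expected = crypto.createHmac('sha256', SECRET)
    .update(`${key}:${expires}`)
    .digest('hex');
  if (sig !== expected || Date.now() > Number(expires)) {
    return res.status(403).send('Invalid or expired link');
  }
  // Signature checks out: proxy the object out of the private bucket.
  const obj = await s3.send(new GetObjectCommand({ Bucket: 'my-private-images', Key: key }));
  res.setHeader('Content-Type', obj.ContentType || 'application/octet-stream');
  obj.Body.pipe(res); // Body is a readable stream in SDK v3 on Node
});

app.listen(3000);
```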

Can the uploadcare widget be used without the uploadcare service?

Can uploadcare-widget be used without using the Uploadcare service?
The goal:
Use the widget (specifically to allow users to upload files from their Google Drive/Dropbox accounts).
Instead of using Uploadcare's backend, use your own backend, e.g. Node.js/AWS S3.
Yes, it can. It's open source!
Although you will have to either replicate or get rid of functionality that relies on Uploadcare infrastructure:
uploads (this is the easiest part)
fetching files from social networks and cloud storage services
image preview and cropping that relies on Uploadcare CDN
So unless you're moving enormous amounts of files, the most cost-efficient way is to use Uploadcare as it is. By the way, you can use your own S3 storage and even upload directly to your own S3 buckets.
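If you do go the self-hosted route, the uploads piece can be as small as a backend endpoint that issues pre-signed S3 URLs, so the browser uploads straight to your bucket. A sketch under those assumptions (Express plus AWS SDK v3; the bucket name is hypothetical):

```js
import express from 'express';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const app = express();
const s3 = new S3Client({ region: 'us-east-1' });

app.get('/upload-url', async (req, res) => {
  const key = `uploads/${Date.now()}-${req.query.name}`;
  // The URL is valid for five minutes; the client PUTs the file body to it directly.
  const url = await getSignedUrl(
    s3,
    new PutObjectCommand({ Bucket: 'my-own-uploads', Key: key }), // hypothetical bucket
    { expiresIn: 300 }
  );
  res.json({ url, key });
});

app.listen(3000);
```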

Dropbox file unique identifier - RESTful API

Is there any unique identifier associated with a Dropbox file that doesn't change with revisions/changes/renaming, that can be accessed via the RESTful API? I want to store it in the database and keep track of some operations on the file.
Unfortunately, no, the Dropbox API does not currently expose any sort of file ID or hash like this.
Edit: The Dropbox API v2 does now offer file IDs that persist across moves/renames. You can find more information under "Path formats" in the documentation.
The file ID is available as the id field on the FileMetadata object, e.g., as returned by /files/get_metadata.
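For example, a call to that endpoint might look like the following; the access token and path are placeholders:

```js
// Fetch the persistent file ID via Dropbox API v2's /files/get_metadata.
async function getDropboxFileId(accessToken, path) {
  const res = await fetch('https://api.dropboxapi.com/2/files/get_metadata', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${accessToken}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ path }),
  });
  const metadata = await res.json();
  // The id field (e.g. "id:a4ayc_80_OEAAAAAAAAAXw") survives moves and renames,
  // so it is the value to store in your database.
  return metadata.id;
}
```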
The new Dropbox API v2 supports IDs for files and folders.
However, it doesn't automatically differentiate file/folder moves, renames, etc. from deletions and creations in the event stream. You can use a service like Kloudless, which provides a unified cloud storage API that includes file/folder IDs for Dropbox. Kloudless also normalizes the event stream and provides access to several other cloud storage services through a single API. (Disclosure: I work at Kloudless.)

Possible to get image from Amazon S3 but create it if it doesn't exist

I'm not sure how to word the question, but here is what I am looking to do.
I have a site that uses custom map tile overlays on a Google map.
The JavaScript calls a PHP file on my server that checks whether an existing map tile exists for the given x, y, and zoom level.
If it exists, it displays that image using file_get_contents.
If it doesn't exist, it creates the new tile and then displays it.
I would like to use Amazon S3 to store and serve the images, since there could end up being a lot of them and my server is slow. If I have my script check whether the image exists on Amazon and then display it, I am guessing I am not getting the speed benefits of Amazon's CDN. Is there a way to do this?
Or is there a way to try to pull the file from Amazon first, then set up something on Amazon to redirect to my script if the file's not there?
Maybe host the script on another of Amazon's services? The tile generation is also quite slow in some cases.
Thanks
Ideas:
1 - Use CloudFront, but point it to a cluster of tile generation machines. This way, you can generate the tiles on demand, and any future requests are served right from CloudFront (a rough sketch of the on-demand flow follows this list).
2 - Use CloudFront, but back it with an S3 store of generated tiles. Turn on logging for the S3 bucket so you can detect failed requests. Consume those logs on a schedule and generate the missing tiles. This is a cheaper way of generating tiles, but it means that when a tile is missing, the user gets nothing.
3 - Just pre-generate all the tiles. Throw tasks in an SQS queue, then spin up a collection of EC2 instances to generate the tiles. This will cost the most up front, but all users get a fast experience.
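To sketch the on-demand flow from option 1 (and the question's own "try Amazon first" idea): the client checks the CDN for the tile and only calls the slow generator on a miss. The domains and the generator endpoint are placeholders, and note that CloudFront can cache error responses, so a real version would tune the error-caching TTL:

```js
// Try the CDN copy first; fall back to the tile generator only on a miss.
async function getTileUrl(x, y, z) {
  const cdnUrl = `https://d1234example.cloudfront.net/tiles/${z}/${x}/${y}.png`;
  const head = await fetch(cdnUrl, { method: 'HEAD' }); // cheap existence check
  if (head.ok) return cdnUrl;
  // Miss: ask the origin script to generate and store the tile, then reuse the CDN URL.
  await fetch(`https://example.com/generate_tile.php?x=${x}&y=${y}&z=${z}`);
  return cdnUrl;
}
```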
I've written a blog post with a strategy for dealing with this. It's designed to make intelligent and thrifty use of CloudFront, maximize caching and deal with new versions of existing images. You may find the technique described there helpful. The example code shows how to handle different dimensions (i.e. thumbnails) of images. You could modify it to handle different zoom levels.
I need to update that post to support CloudFront custom origins, and I think that for your application you might be better off skipping S3 and using a custom origin. The advantage of a custom origin is simply that it's probably going to be easier to manage all of your images on your local filesystem compared to managing them on S3.