How to upload and download media files using GUNDB?

I'm trying to use GUN to create a file-sharing platform. I read the tutorial and the API docs, but I couldn't find a general way to upload/download a file.
I've heard that there is a 5 MB localStorage limitation in GUN, so if I want to upload a large file, I have to slice it and then store the pieces in GUN. But right now I can't find a way to store a file in GUN at all.
I read the question from Retric and I know how to store an image in GUN, but can I store other types of files, such as .zip or .doc? Is there a general API for file storage?

I wrote a quick little app in 35 lines of HTML that demonstrates file sharing for images, videos, sound, etc.
https://github.com/amark/gun/blob/master/examples/basic/upload.html
I've sent 20MB files through it, though yes, I'm sure there is a better way: splitting the file up into 2MB chunks. That is currently not automatic; you'd have to code it yourself.
We'll have a feature in the future that automatically splits up video files. Do you want to help with this?
On the download side, all you have to do is make sure you have the whole file (stitch it back together if you do write a splitter), then attach it to an <a href="..."> link. I'm not sure of the exact steps, but browsers have supported the download attribute on links for a few years now, so you can create a download link even for an in-memory file - you'll have to search online for how. Then please write a tutorial and share it with the community!!
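To make that concrete, here is a rough sketch of the manual chunking idea in plain browser JavaScript. The 2 MB chunk size, the 'files' key and the meta layout are my own choices, not anything GUN prescribes:

// Split a File into base64 chunks (via a data URL) and store them in GUN.
var gun = Gun();
var CHUNK_SIZE = 2 * 1024 * 1024; // ~2 MB per chunk, as suggested above

function uploadFile(file) {
  var reader = new FileReader();
  reader.onload = function () {
    var dataUrl = reader.result; // "data:<mime>;base64,...."
    var total = Math.ceil(dataUrl.length / CHUNK_SIZE);
    var node = gun.get('files').get(file.name);
    node.get('meta').put({ name: file.name, type: file.type, chunks: total });
    for (var i = 0; i < total; i++) {
      node.get('chunk-' + i).put(dataUrl.slice(i * CHUNK_SIZE, (i + 1) * CHUNK_SIZE));
    }
  };
  reader.readAsDataURL(file);
}

// Stitch the chunks back together and offer a download via the "download" attribute.
function downloadFile(name) {
  var node = gun.get('files').get(name);
  node.get('meta').once(function (meta) {
    var parts = [], received = 0;
    for (var i = 0; i < meta.chunks; i++) {
      (function (i) {
        node.get('chunk-' + i).once(function (chunk) {
          parts[i] = chunk;
          if (++received === meta.chunks) {
            var a = document.createElement('a');
            a.href = parts.join('');   // the reassembled data URL
            a.download = meta.name;    // lets the browser save an in-memory file
            document.body.appendChild(a);
            a.click();
          }
        });
      })(i);
    }
  });
}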

I would recommend using IPFS for file storage and GUN to store the links to those files. GUN isn't meant for file storage, I believe; it's primarily for user/graph data, hence the 5 MB limitation.
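If you go that route, a minimal sketch could look like this (using the js-ipfs browser bundle; the 'files' key and the gateway URL are just examples):

// Put the bytes in IPFS, keep only the small link/metadata in GUN.
const gun = Gun();
const ipfsPromise = Ipfs.create(); // js-ipfs browser bundle exposes a global Ipfs

async function shareFile(file) {
  const ipfs = await ipfsPromise;
  const { cid } = await ipfs.add(file); // store the bytes in IPFS, get a content ID back
  gun.get('files').get(file.name).put({ cid: cid.toString(), type: file.type });
}

function resolveFile(name, cb) {
  gun.get('files').get(name).once(function (data) {
    // hand the CID to a public gateway URL (or read it back with ipfs.cat)
    cb('https://ipfs.io/ipfs/' + data.cid);
  });
}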

Related

Genexus 15, save PDFs, GIFs, JPGs, WORD documents

I need to save PDFs, GIFs, TIFs, JPGs, etc. How can I do this in GeneXus 15, compiling in C#?
After that I have to show the saved documents in a form.
Thank you.
P.S.: I'm new to GeneXus...
This question seems too broad to answer... Will those files be stored in the database or in the file system? Or perhaps in external storage such as Amazon's S3?
Will the application store different file types in the same database column, or will there be a field storing images, another one for PDFs, etc.?
Anyway, here are some documents that may be of some help:
Blob data type for storing any file in the database (or see BlobFile data type if using GeneXus Tero, in pre-beta at this moment...)
File data type for storing files in the file system
Image data type for storing image files in the database (there is also Audio and Video which work exactly the same way)
External Storage for Multimedia explains how to store multimedia files in an external service.
Hope this helps...

Creating thumbnails for images on S3

I have a quite common situation, I suppose. I have a website located on Amazon EC2 and I'd like to move all dynamic files to Amazon S3. Everything seems OK, except for two points:
I'm using the PDFNet library with their WebViewer. To display PDF files in the browser, the WebViewer uses a special ".xod" format, and PDFNet provides functionality to convert PDF files to xod. Consider the case where a PDF file was uploaded to S3 and no xod file was created (I'm going to use Lambda to avoid this in the future, but still). In that case, do I have to download the file to my local machine, convert it to a xod file and upload the xod file to S3? I don't see any other way to do it, but it can take a lot of traffic.
The second problem is almost the same, but it concerns thumbnails. Currently I dynamically resize thumbnails depending on the required resolution and I'd like to keep doing that. Amazon Lambda is not suitable in this case, so what is the best way to do it?
Why do you say that Lambda is not suitable here?
For pt #1: PDFNet provides a Java library, so you can write a Lambda function in Java (it's possible now) and use that to get practically infinite scale.
For pt #2: Amazon's tutorial (http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html) gives a detailed example of how to resize images when they are uploaded to S3. The example is in Node.js; you can write a Java version as well if you like.
Note that if you want custom logic for decision making, you can add attributes while uploading the file to S3 (http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html#User-Defined Metadata) which you can then use in your Lambda function to make decisions while resizing.
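As a rough Node.js sketch of that tutorial's approach (the sharp library, the thumbs/ prefix, the default width and the metadata key are my assumptions, not AWS's):

// S3-triggered Lambda: fetch the uploaded object, resize it, write a thumbnail.
const AWS = require('aws-sdk');
const sharp = require('sharp'); // image library bundled with the deployment package
const s3 = new AWS.S3();

exports.handler = async (event) => {
  const record = event.Records[0].s3;
  const bucket = record.bucket.name;
  const key = decodeURIComponent(record.object.key.replace(/\+/g, ' '));

  const original = await s3.getObject({ Bucket: bucket, Key: key }).promise();

  // User-defined metadata (x-amz-meta-*) set at upload time shows up here,
  // so it can drive decisions such as the requested thumbnail width.
  const width = parseInt(original.Metadata['thumb-width'] || '200', 10);

  const thumbnail = await sharp(original.Body).resize({ width }).toBuffer();

  await s3.putObject({
    Bucket: bucket,
    Key: 'thumbs/' + key,          // write next to the originals under a prefix
    Body: thumbnail,
    ContentType: original.ContentType
  }).promise();
};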

Dropbox API - updating file

I'm new to the Dropbox API and I'm trying to figure out if I can manage large file (1-2 GB) updates without having to copy over the whole file.
Something similar to the initial chunked upload: I'd like a way to say, here's a chunk of this file starting at offset XXXX, and here's the new content - and then send only 100 KB or 1 MB, not the whole file!
I'm surprised nobody else needs something like that, since once you upload a large file (which is pretty easy to do with the chunked upload), you would still have an issue whenever the file needs updating. Especially if the updates are small!
Anyway, all feedback is appreciated!
That's very much incremental sync. There's a free tool that does this, by the name of OpenVCDiff. But when it comes to the Dropbox API, I don't know - I found your post while searching for Dropbox here on SO.
The Dropbox API doesn't support any sort of incremental update.
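For reference, the chunked upload mentioned in the question looks roughly like this with today's v2 upload-session endpoints. The access token, the 8 MB chunk size and the overwrite mode are placeholders, and note that the whole file content still has to be sent - there is no delta variant:

// Full re-upload in chunks via Dropbox API v2 upload sessions (browser fetch).
const ACCESS_TOKEN = '<your OAuth token>'; // placeholder
const CHUNK = 8 * 1024 * 1024;             // arbitrary chunk size

function dbx(endpoint, arg, body) {
  return fetch('https://content.dropboxapi.com/2/files/' + endpoint, {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer ' + ACCESS_TOKEN,
      'Dropbox-API-Arg': JSON.stringify(arg),
      'Content-Type': 'application/octet-stream'
    },
    body: body
  });
}

async function uploadInChunks(file, path) {
  const start = await (await dbx('upload_session/start', { close: false },
                                 file.slice(0, CHUNK))).json();
  let offset = Math.min(CHUNK, file.size);
  while (offset < file.size) {
    await dbx('upload_session/append_v2',
              { cursor: { session_id: start.session_id, offset: offset }, close: false },
              file.slice(offset, offset + CHUNK));
    offset = Math.min(offset + CHUNK, file.size);
  }
  // finish commits the session; an empty body is fine once all bytes are appended
  return (await dbx('upload_session/finish',
                    { cursor: { session_id: start.session_id, offset: offset },
                      commit: { path: path, mode: 'overwrite' } },
                    new Blob())).json();
}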

File permissions on a web server?

I'm new at writing code for websites. The website allows users to upload files, such as profile pictures or other pictures. The files are saved in the Unix file system, and the URLs used to find those images are stored in a MySQL database.
It seems like the only way I can let users upload files is to give write access to anybody using chmod; otherwise it complains that it doesn't have write permission. But users shouldn't be able to write whatever they want or overwrite other users' stuff. Similarly, to allow users to see images they have rightful access to, the files need read permissions on the file system. But doesn't that mean that anybody with the URL to a picture can see the image too? That's not what I want.
Is there a solution to this contradiction? Or am I thinking about the problem incorrectly? Thanks for any help.
You need to manage the permissions in your application and not expose arbitrary parts of your local filesystem directly to the clients. Your application should decide what files someone can see or where to write data. You should not trust data (filenames, etc) from your clients...ideally, store files on disk using systematically generated names and store human-readable names in the database.
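A minimal sketch of that idea (Node.js/Express purely for illustration, since the question doesn't say which stack is in use; db, the table layout and the session handling are assumptions):

// Uploads live outside the web root under server-generated names;
// the application checks ownership in MySQL before streaming anything back.
// 'db' is assumed to be a mysql2/promise pool; req.session assumes express-session.
const express = require('express');
const multer = require('multer');       // common upload middleware, assumed here
const crypto = require('crypto');
const path = require('path');

const app = express();
const STORAGE_DIR = '/srv/app-uploads'; // writable only by the app user, never served directly

const upload = multer({
  storage: multer.diskStorage({
    destination: STORAGE_DIR,
    // systematic server-generated name; never trust the client's filename
    filename: (req, file, cb) => cb(null, crypto.randomUUID())
  })
});

app.post('/images', upload.single('image'), async (req, res) => {
  // keep the human-readable name and the owner in the database
  await db.query(
    'INSERT INTO images (owner_id, disk_name, original_name) VALUES (?, ?, ?)',
    [req.session.userId, req.file.filename, req.file.originalname]);
  res.sendStatus(201);
});

app.get('/images/:id', async (req, res) => {
  const [rows] = await db.query(
    'SELECT disk_name, owner_id FROM images WHERE id = ?', [req.params.id]);
  if (!rows.length || rows[0].owner_id !== req.session.userId) {
    return res.sendStatus(403); // the app, not filesystem permissions, decides access
  }
  res.sendFile(path.join(STORAGE_DIR, rows[0].disk_name));
});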
SunStar9,
Since you are already using a MySQL database to store the URL of the image on the file system, why not just store the image itself as a BLOB (binary large object)?
This is generally a well-accepted design practice for allowing users to upload binary data to a website.
Are you using PHP, Java, Ruby/Rails, or something else to develop your website? Depending on what you are using, there could be file upload/management plugins or modules that will help you build what you are trying to do, if you are certain you want to use the file system for storing the image data.
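And if you did go the BLOB route instead, the core of it is just something like this (mysql2 and the column names are only an example):

// Store the raw bytes in a LONGBLOB column instead of a path on disk.
const fs = require('fs/promises');

async function saveImageAsBlob(db, userId, filePath, originalName) {
  const bytes = await fs.readFile(filePath);   // Buffer with the binary data
  await db.query(
    'INSERT INTO images (owner_id, original_name, data) VALUES (?, ?, ?)',
    [userId, originalName, bytes]);            // mysql2 binds Buffers as BLOBs
}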

iPad - how should I distribute offline web content for use by a UIWebView in application?

I'm building an application that needs to download web content for offline viewing on an iPad. At present I'm loading some web content from the web for test purposes and displaying this with a UIWebView. Implementing that was simple enough. Now I need to make some modifications to support offline content. Eventually that offline content would be downloaded in user selectable bundles.
As I see it I have a number of options but I may have missed some:
Pack content in a ZIP (or other archive) file and unpack the content when it is downloaded to the iPad.
Put the content in a SQLite database. This seems to require some 3rd party libs like FMDB.
Use Core Data. From what I understand this supports a number of storage formats including SQLite.
Use the filesystem and download each required file individually. OK, not really a bundle but maybe this is the best option?
Considerations/Questions:
What are the storage limitations and performance limitations for each of these methods? And is there an overall storage limit per iPad app?
If I'm going to have the user navigate through the downloaded content, what option is easier to code up?
It would seem like spinning up a local web server would be one of the most efficient ways to handle the runtime aspects of displaying the content. Are there any open source examples of this which load from a bundle like options 1-3?
The other side of this is the content creation and it seems like zipping up the content (option 1) is the simplest from this angle. The other options would appear to require creation of tools to support the content creator.
If you have control over the content, I'd recommend a mix of the first and the third options. If the content is created by you (like levels, etc.) then simply store it on the server, download it as a zip and store it locally. Use Core Data to store an index of the things you've downloaded, like the path of the folder each bundle is stored in and its name/origin/etc., but not the raw data. Databases are not meant to hold massive amounts of raw content; they are meant to hold structured data. And even if they can, I wouldn't do it.
For your considerations:
Disk space is the only limit I know of on the iPad. However, databases tend to get slower as they grow large. If you hardly ever need to scan through the data, use the file system directly - it may prove faster and cheaper.
The index in Core Data can store all the relevant data, giving you very easy and very quick access. Opening a piece of content loads it from the file system, which is quick, cheap and doesn't strain the index.
Why would you do that? Redirecting your WebView to a file:// URL will have the same effect, won't it?
Should be answered by now.
If you don't have control over the content, use the same approach as above but download each file separately, as suggested in option four. After unzipping, both cases are basically the same.
Please get back if you have questions.
You could create an XML file for each bundle, containing the path to each file in the bundle, and place it in a folder common to each bundle. When downloading, download and parse the XML first, then download each resource one by one; this spares you the overhead of zipping and unzipping the content. Create a folder for each bundle locally and recreate the bundle's folder structure there. This way the content will work online and offline without changes.
With a little effort, you could even keep track of file versions by including a version number for each resource in the XML file, so if your content has been partially updated, only the files with changed version numbers have to be downloaded again. A sample manifest is sketched below.
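For illustration, such a manifest might look like this (the element and attribute names here are entirely made up; pick whatever scheme suits your content tooling):

<?xml version="1.0" encoding="UTF-8"?>
<!-- hypothetical bundle manifest; one entry per downloadable resource -->
<bundle name="chapter-01" version="3">
  <resource path="index.html" version="3"/>
  <resource path="css/style.css" version="1"/>
  <resource path="images/figure-01.png" version="2"/>
  <resource path="video/intro.mp4" version="1"/>
</bundle>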