How to put internal PDFs in my app so they can be viewed, downloaded, and saved to the device's internal storage - Kotlin

I need an explanation and sample code so I can bundle PDFs inside my app, so that they can be viewed in-app and, when downloaded, saved to the device's internal storage. Thanks.
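A minimal sketch of one common approach, assuming the PDFs ship inside the APK's assets/ folder: copy the asset into internal storage (filesDir) on demand, then hand it to a PDF viewer through a FileProvider. The file name manual.pdf and the provider authority are placeholders, and the provider must be declared in AndroidManifest.xml.

```kotlin
import android.content.Context
import android.content.Intent
import androidx.core.content.FileProvider
import java.io.File

// Sketch, not a definitive implementation: ship the PDF in assets/,
// copy it into internal storage (filesDir), then open it in a viewer.
// "manual.pdf" and the provider authority below are placeholder names.
fun copyAssetPdfToInternalStorage(context: Context, assetName: String = "manual.pdf"): File {
    val outFile = File(context.filesDir, assetName)
    if (!outFile.exists()) {
        context.assets.open(assetName).use { input ->
            outFile.outputStream().use { output -> input.copyTo(output) }
        }
    }
    return outFile
}

fun viewPdf(context: Context, file: File) {
    // Requires a matching <provider> entry in AndroidManifest.xml
    val uri = FileProvider.getUriForFile(context, "${context.packageName}.fileprovider", file)
    val intent = Intent(Intent.ACTION_VIEW).apply {
        setDataAndType(uri, "application/pdf")
        addFlags(Intent.FLAG_GRANT_READ_URI_PERMISSION)
    }
    context.startActivity(intent)
}
```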

Related

Make it impossible to download the audio files

How do I prevent users of my web app (Spring Boot backend) from downloading audio files by accessing the S3 URL directly?
I want to make it impossible to download the audio files from my website. Any suggestions?
I assume you mean that you want to make it impossible to download the audio files, but still allow streaming them for playback.
You can't.
If it can be played, it can be downloaded. Simple as that.
At best, you can sign your S3 URLs so that they expire after a short period of time. This gives you control over who accesses your audio files, and prevents them from showing up in searches or being linked to from other sites. You can also look into Encrypted Media Extensions, but it's not all that useful for audio, since audio is trivially captured digitally at the output.
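As an illustration of those short-lived signed URLs, a minimal Kotlin sketch using the AWS SDK for Java v2 presigner; the bucket and key names are placeholders, and credentials are assumed to come from the default provider chain.

```kotlin
import software.amazon.awssdk.services.s3.model.GetObjectRequest
import software.amazon.awssdk.services.s3.presigner.S3Presigner
import software.amazon.awssdk.services.s3.presigner.model.GetObjectPresignRequest
import java.time.Duration

// Sketch: generate a presigned GET URL that expires after five minutes,
// so the raw S3 object is never exposed through a permanent link.
fun presignAudioUrl(bucket: String, key: String): String {
    S3Presigner.create().use { presigner ->
        val getObject = GetObjectRequest.builder()
            .bucket(bucket)
            .key(key)
            .build()
        val presignRequest = GetObjectPresignRequest.builder()
            .signatureDuration(Duration.ofMinutes(5)) // link dies quickly
            .getObjectRequest(getObject)
            .build()
        return presigner.presignGetObject(presignRequest).url().toString()
    }
}
```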

How to get around storage quota limits in a progressive web app

My question is essentially: how can I get around the storage quota limits enforced on a PWA? A little background...
I am hoping to create an offline-ready line-of-business progressive web app that would ideally push about 2 GB of image and video resources onto my users' phones or tablets - well beyond the current storage quota for caches and IndexedDB. What I'd like to be able to do is have my users (we all work at the same company) do a one-time download of a zip file or directory and store it on their phone/tablet's file system in a well-known directory.
As the online version of the app treats these files as URLs, the Fetch API would seem ideal, since I could serve from online when connected, or from the local service-worker-managed cache when not. But the quota limits have me stumped. None of the files is larger than 15 MB, but there's no way to know which files are needed before a user goes offline.
Can I use something like an HTML input type=file tag to load files into the cache at runtime and then treat them as URLs? Of course I would remove other files to make room. But since these files wouldn't be coming from "the origin" with its secure https address (a PWA requirement, I think), but rather from a local file system, I'm not sure this will work. If it is workable, would my users be forced to browse to the files manually?
If it's an option, you can have a native Android service do the caching part to avoid the space constraint, and then serve the data from native code to the PWA using websockets/secure websockets.
No pure-PWA solution is possible for now. The File API is limited because it's sandboxed.
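A rough sketch of that native-service idea from Kotlin, swapping in a plain local HTTP server (the NanoHTTPD library) where the answer suggests websockets, for brevity; the port, cache directory, and class name are placeholders.

```kotlin
import fi.iki.elonen.NanoHTTPD
import java.io.File

// Hypothetical sketch of "serve cached data from native code": the PWA's
// fetch() calls hit this local server instead of the quota-limited cache.
// Real code must also sanitize the path against ../ traversal.
class LocalAssetServer(private val cacheDir: File) : NanoHTTPD(8765) {
    override fun serve(session: IHTTPSession): Response {
        val requested = File(cacheDir, session.uri.trimStart('/'))
        return if (requested.isFile) {
            // Stream the cached file back to the web app
            newChunkedResponse(Response.Status.OK, "application/octet-stream", requested.inputStream())
        } else {
            newFixedLengthResponse(Response.Status.NOT_FOUND, MIME_PLAINTEXT, "Not found")
        }
    }
}

// Usage (from the native service): LocalAssetServer(cacheDir).start()
```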

Show content of a zip file in a browser, rather than downloading it

I have a log server where users upload archives and view their content online when needed. Currently the server unzips files right after receiving them. Unfortunately, my peers have consumed all the drive space I had. I could free up a lot of space if there were a way of storing ZIP archives but serving them to users as an HTML page (same as Apache's default file browser).
I know there are solutions relying on JS, like:
http://gildas-lormeau.github.io/zip.js/demos/demo2.html
https://stuk.github.io/jszip/
or I can unzip them on demand on the server side and provide a link to a temporary folder. However, some time ago I heard that a browser can view an archive's contents if the proper headers are sent from Apache/nginx. Apache's mod_deflate doesn't help much here, and I can't find other docs - perhaps it's not possible after all?
Cheers.
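A minimal sketch of the on-demand server-side option from Kotlin, using java.util.zip: ZipFile reads only the archive's central directory, so a listing page can be produced without extracting anything to disk. The archive path and function name are placeholders.

```kotlin
import java.util.zip.ZipFile

// Sketch: build an HTML listing of a ZIP's entries without extraction.
// Only the central directory is read, so no extra drive space is used.
fun zipIndexHtml(path: String): String {
    ZipFile(path).use { zip ->
        val rows = buildString {
            for (entry in zip.entries()) {  // Kotlin iterates Enumeration directly
                if (!entry.isDirectory) {
                    appendLine("<li>${entry.name} (${entry.size} bytes)</li>")
                }
            }
        }
        return "<ul>\n$rows</ul>"
    }
}
```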

Google Drive SDK - Upload and Virus Scan

I am building a batch upload process for Google Drive. I have been trying to confirm that all files uploaded via the API are also scanned for viruses and malware, but cannot find any documentation on this. Does anyone know: 1) are all files scanned, and 2) is there an API call to get the scan results, or is a standard error returned if a file is infected? If you can point me to any documentation, that would be fantastic.
-Eg
You can use the EICAR virus test string to understand how Drive behaves with viruses. Here's a globally shared file named eicar.exe which is nothing more than a harmless string but which Google Drive's scan on download will detect as a virus.
You'll notice that:
Attempts to download my file with files.get(alt=media) will fail with "403: Only the owner can download abusive files."
Attempts to files.copy() my file into your own Drive will succeed. (This is a nice workaround when the file is not accessible with files.get() for various reasons).
Attempts to files.get(alt=media, acknowledgeAbuse=true) YOUR copy of the file should succeed.
So to answer your original question, you should be able to follow a files.insert() with a files.get(acknowledgeAbuse=false) to determine if Drive thinks your new file is a Virus (watch for the 403 abuse response).
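For illustration, that probe might look like the following Kotlin sketch against the Drive v3 Java client (the steps above use v2's files.insert/files.get, but the logic is the same); drive is assumed to be an already-authorized service.

```kotlin
import com.google.api.client.googleapis.json.GoogleJsonResponseException
import com.google.api.services.drive.Drive
import java.io.ByteArrayOutputStream

// Sketch: after uploading, attempt a media download with acknowledgeAbuse
// left at its default (false). A 403 suggests Drive flagged the file.
// Note that a 403 can have other causes (e.g. permissions), so a real
// check should also inspect the error's reason before concluding "virus".
fun isFlaggedAsAbusive(drive: Drive, fileId: String): Boolean {
    return try {
        drive.files().get(fileId).executeMediaAndDownloadTo(ByteArrayOutputStream())
        false // download succeeded, so the file is not flagged
    } catch (e: GoogleJsonResponseException) {
        e.statusCode == 403
    }
}
```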
Be aware that, like all antivirus services, Google is constantly updating its virus definitions, so a file that was not detected as a virus (a false negative) may be detected as a virus at a later time, and a file that was wrongly detected as a virus (a false positive) may no longer be detected as one in the future.
Virus scanning: Google Drive scans a file for viruses before the file is downloaded or shared. If a virus is detected, users can't share the file with others, send the infected file via email, or convert it to a Google Doc, Sheet, or Slide, and they'll receive a warning if they attempt these operations. The owner can download the virus-infected file, but only after acknowledging the risk of doing so.
Seen here
I'm pretty sure it's not an exposed part of the Drive API, though. If you want to implement your own, you'll have to find a different API (maybe this?) to scan your files prior to uploading them to Drive.

Uploading image to "buffer"

I'm developing an application that does (lots of) image processing.
The general overview of the system is:
User uploads photos to the server (raw photo, at full resolution)
Server fetches new photos and applies image processing to them
Server resizes the image and serves those photos (delete the full one?)
My current situation is that I have almost no expertise in image hosting or in uploading and managing large files.
What I plan to do is:
User uploads directly from the browser to Amazon S3 (full image)
User notifies my server, which adds the uploaded file to the queue for my workers
When a worker receives a job, it downloads the full image (from Amazon), processes it, updates the database, and then re-uploads the image to Cloudinary (resize on the server?)
Use the hosted image on Cloudinary from now on.
My doubts are regarding the processing time. I don't want to upload directly to my server, because that would require a lot of traffic and create a bottleneck, so using Amazon S3 would reduce that. And hosting the images on Amazon would not be that good, since they don't provide image-specific APIs the way Cloudinary does.
Is it OK to work with separate servers for uploading and only trigger my server when the browser finishes the upload? Does using Cloudinary for hosting images also make sense? Should direct upload to my own server be avoided in favor of sending to Amazon?
(This is more a guidance/design question)
Why wouldn't you prefer uploading directly to Cloudinary?
The image can be uploaded directly from the browser to your Cloudinary account, without any further servers involved. Cloudinary then notifies you about the uploaded image and its details, after which you can perform all the image processing in the cloud via Cloudinary. You can either manipulate the image while keeping the original, or you may choose to replace the original with the manipulated one.
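For the server-side piece of that flow, a hedged sketch using the Cloudinary Java SDK from Kotlin; credentials are read from the CLOUDINARY_URL environment variable, and the function name is a placeholder. The browser-side direct upload itself would use Cloudinary's JavaScript upload widget instead.

```kotlin
import com.cloudinary.Cloudinary
import com.cloudinary.utils.ObjectUtils
import java.io.File

// Sketch: upload the original photo and keep it; scaled renditions can
// then be requested on the fly through URL transformations, e.g.
// https://res.cloudinary.com/<cloud>/image/upload/w_800,c_scale/<publicId>
fun uploadPhoto(photo: File): String {
    val cloudinary = Cloudinary()  // reads CLOUDINARY_URL from the environment
    val result = cloudinary.uploader().upload(photo, ObjectUtils.emptyMap())
    // The SDK returns a map of upload details; public_id identifies the asset
    return result["public_id"] as String
}
```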