This is more of a conceptual question, but I'm wondering if it's possible to stream the output of mPDF directly to the user as a download (i.e. without saving it to a temp folder on the server or loading it inline in the user's browser).
I'm already using this approach successfully to download a zip file of S3 photos, using ZipStream and the AWS PHP S3 Stream Wrapper, and it works very well, so I would like to employ the same technique for my PDF generation.
I use the mPDF library to generate reports that include S3 images on Heroku. The mPDF documentation lists four output destinations, including inline and download; inline loads the PDF right into the user's browser, while download forces the download prompt (the desired behavior).
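For reference, the destination is chosen at Output() time; a minimal sketch, assuming mPDF 7+ (in mPDF 6 and earlier the second argument is the string 'D'):
require 'vendor/autoload.php';

$mpdf = new \Mpdf\Mpdf();
$mpdf->WriteHTML('<h1>Report</h1>');

// DOWNLOAD sends a Content-Disposition: attachment header, forcing the
// save prompt; INLINE would display the PDF in the browser instead.
$mpdf->Output('report.pdf', \Mpdf\Output\Destination::DOWNLOAD);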
I've enabled the S3 Stream Wrapper and embedded images in the PDF per the mPDF Image() documentation, like this:
$mpdf->imageVars['myvariable'] = '';
while (!feof($streamRead)) {
    // Read 1,024 bytes at a time from the S3 stream into the image variable
    $mpdf->imageVars['myvariable'] .= fread($streamRead, 1024);
}
fclose($streamRead);

$imageHTML = '<img src="var:myvariable" class="report-img" />';
$mpdf->WriteHTML($imageHTML);
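For completeness, $streamRead above is assumed to have been opened through the registered S3 stream wrapper, along these lines ($s3Client, the bucket and the key are placeholders):
// Assumes $s3Client is a configured Aws\S3\S3Client instance.
$s3Client->registerStreamWrapper();
$streamRead = fopen('s3://my-bucket/photos/image1.jpg', 'rb');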
I've also added header('X-Accel-Buffering: no');, which was required to get ZipStream working in the Heroku environment, but the script always times out if there are more than a couple of images.
Is it possible to immediately prompt the download and have the data stream directly to the user? I'm hoping this method can be used for more than just zip downloads, but I haven't had luck with this particular application yet.
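For what it's worth, the download prompt itself is triggered purely by response headers sent before any body bytes, which is essentially what ZipStream does before streaming the archive; a rough sketch (the filename is a placeholder):
// Headers that make the browser show the save prompt immediately,
// before any body bytes arrive.
header('Content-Type: application/octet-stream');
header('Content-Disposition: attachment; filename="report.pdf"');
header('X-Accel-Buffering: no'); // disable proxy buffering (Heroku router / nginx)
// ...then write the body incrementally:
// echo $chunk; flush();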
I need to build a recording feature on top of a web conferencing app that makes use of WebRTC. To do this I am using the RecordRTC JS library.
The recording is NOT uploaded at the end of the call; instead, for practical reasons, one portion of the stream is uploaded from client to server every 3 seconds. This avoids waiting for a large upload at the end.
Here's the JavaScript:
RTC_recorder = RecordRTC(stream, {
    type: 'video',
    mimeType: 'video/webm;codecs=vp8',
    timeSlice: 3000,
    ondataavailable: function(blob) {
        upload_to_server(blob);
    }
});
I have been able to save separate blobs on the server:
- blob1.webm (readable video)
- blob2.webm (not readable)
- blob3.webm (not readable)
But unfortunately, I don't understand how to merge the blobs into one video (SERVER SIDE), and I haven't found any working example in the documentation, nor any clear answer to this question.
Can anyone help?
Thanks.
Concatenating the files without any further modification should result in a valid file.
A simple search turned up this question, which covers how concatenating files works in PHP.
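A minimal sketch of that concatenation in PHP, assuming the chunks were saved as blob1.webm, blob2.webm, ... in a recordings directory (all paths are hypothetical):
$dir = __DIR__ . '/recordings';
$chunks = glob($dir . '/blob*.webm');
natsort($chunks); // natural order, so blob2 comes before blob10

$out = fopen($dir . '/full.webm', 'wb');
foreach ($chunks as $chunk) {
    $in = fopen($chunk, 'rb');
    stream_copy_to_stream($in, $out); // append the chunk bytes unmodified
    fclose($in);
}
fclose($out);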
I am working on a webapp where the user provides an image file-text sequence. I am compressing the sequence into a single ZIP file using JSZip.
On the server I simply use PHP's move_uploaded_file to move the upload to the desired location, after having checked the file upload error status.
A test ZIP file created in this way can be found here. I have downloaded the file, expanded it in Windows Explorer and verified that its contents (two images and some HTML markup in this instance) are all present and correct.
So far so good. The trouble begins when I try to fetch that same ZIP file and expand it using JSZip.loadAsync, which consistently reports Corrupted zip: missing 210 bytes. My PHP code for squirting back the ZIP file is actually pretty simple. Shorn of the various security checks I have in place, the essential bits of that code are listed below:
if (file_exists($file))
{
    http_response_code(200); // must be set before any output is sent
    ob_clean();              // discard any stray buffered output
    readfile($file);
    die();
} else http_response_code(399);
where the 399 code is interpreted in my webapp as a need to create a new resource locally instead of trying to read existing resource data. The trouble happens when I use the result text (on an HTTP response of 200) and feed it to JSZip.loadAsync.
What am I doing wrong here? I assume there is something too naive about the way I am using readfile at the PHP end but I am unable to figure out what that might be.
What we set out to do
- Attempt to grab a server-side ZIP file from JavaScript
- If it does not exist, send back a reply (I simply set a custom HTTP response code of 399 and interpret it) telling the client to go prepare its own new local copy of that resource
- If it does exist, send back that ZIP file
Good so far. However, reading the existing ZIP file into PHP and sending it back does not make sense and is fraught with problems. My approach now is to send back an http_response_code of 302, which the client interprets as an instruction to "go get that ZIP for yourself directly".
At that point, to get the ZIP "directly", simply follow the instructions in this tutorial on MDN.
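Server side, the endpoint then reduces to a status check; a sketch of the approach described above, using the same status codes ($file is whatever your security checks resolve):
if (file_exists($file)) {
    // 302: the client should fetch the ZIP itself, directly as a static
    // resource, rather than having PHP read it and echo it back.
    http_response_code(302);
} else {
    // 399 (custom): the client should build a new local copy instead.
    http_response_code(399);
}
die();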
In my Node.js application I'm downloading multiple user files from AWS S3, compressing them into a single zip file (using the Archiver npm library) and sending it back to the client. I'm operating on streams the whole way, and yet I can't send the files to the client (so that the client would start the download after a successful HTTP request).
const filesStreams = await this.awsService.downloadFiles(
  document?.documentFiles,
);
const zipStream = await this.compressService.compressFiles(filesStreams);

// @ts-ignore
response.setHeader('Content-Type', 'application/octet-stream');
response.setHeader(
  'Content-Disposition',
  'attachment; filename="files.zip"',
);
zipStream.pipe(response);
Where response is the Express response object. zipStream is created using Archiver:
public async compressFiles(files: DownloadedFileType[]): Promise<Archiver> {
  const zip = archiver('zip');
  for (const { stream, name } of files) {
    zip.append((stream as unknown) as Readable, {
      name,
    });
  }
  // Note: zip.finalize() must be called (here or by the caller),
  // otherwise the archive stream never ends.
  return zip;
}
And I know it is correct, because when I pipe it into a WriteStream to a file on my file system it works correctly (I'm able to unzip the written file and it has the correct content). I could probably write the file to the file system temporarily, send it back to the client with response.download and remove the saved file afterwards, but that looks like a very inefficient solution. Any help will be greatly appreciated.
So I found the source of the problem - I'll post it here for the record, in case anyone runs into the same issue. The source of the problem was something totally different: I was trying to initiate the download with an AJAX request, which of course won't work. I changed my frontend code, and instead of AJAX I used an HTML anchor element with its href attribute set to exactly the same endpoint, and it worked just fine.
I had a different problem; mine came from the linter. I needed to read all the files from a directory and then send them to the client in one zip. Maybe someone will find this useful.
There were two issues:
- I mismatched cwd with root; see the glob docs
- Because I used PassThrough as a proxy object between Archiver and the output, the linter flagged stream.pipe(response) with a type issue... which is a false positive - it works fine.
I've run into the following problem when porting an app from REST API to GDAA.
The app needs to download some of (thousands of) JPEG images based on user selection. The way this is solved in the app is by downloading a thumbnail version first, using this construct of the REST API:
private static InputStream getCont(String rsid, boolean bBig) {
    InputStream is = null;
    if (rsid != null) try {
        File gFl = bBig ?
            mGOOSvc.files().get(rsid).setFields("downloadUrl").execute() :
            mGOOSvc.files().get(rsid).setFields("thumbnailLink").execute();
        if (gFl != null) {
            GenericUrl url = new GenericUrl(bBig ? gFl.getDownloadUrl() : gFl.getThumbnailLink());
            is = mGOOSvc.getRequestFactory().buildGetRequest(url).execute().getContent();
        }
    } catch (UserRecoverableAuthIOException uraEx) {
        authorize(uraEx.getIntent()); // ask the user to re-authorize
    } catch (GoogleAuthIOException gauEx) {
        // non-recoverable auth failure; fall through and return null
    } catch (Exception e) {
        // any other failure; the caller treats a null result as "no content"
    }
    return is;
}
It allows the app to get either a 'thumbnail' or a 'full-blown' version of an image based on the bBig flag. The user can select a thumbnail from a list and the full-blown image download follows (all of this backed by a disk-based LRU cache, of course).
The problem is that GDAA does not have an option to ask for a reduced-size / thumbnail version of an object (AFAIK), so I have to resort to combining both APIs, which makes the code more convoluted than I'd like (bottom of the page). Needless to say, the 'Resource ID' needed by the REST API may not be immediately available.
So, the question is: Is there a way to ask GDAA for a 'thumbnail' version of a document?
Downloading thumbnails isn't currently available in the Drive Android API, and unfortunately I can't give a timeframe for when it will be available. Until then, the Drive Java Client Library is the best way to get thumbnails on Android.
We'd appreciate it if you went ahead and filed a feature request against our issue tracker: https://code.google.com/a/google.com/p/apps-api-issues/
That gives requests more visibility to our teams internally, and issues will be marked resolved when we release updates.
Update: I had an error in the discussion of the request fields.
As Ofir says, you can't get thumbnails with the Drive Android API, but you can get them with the Drive Java Client Library. This page is a really good primer for getting started:
https://developers.google.com/drive/v3/web/quickstart/android
Oddly, I can't get the fields portion of the request to work as it is shown in that quickstart. In my experience, you have to request the fields a little differently.
Since you're doing a custom field request you have to be sure to add the other fields you want as well. Here is how I've gotten it to work:
Drive.Files.List request = mService.files()
        .list()
        .setFields("files/thumbnailLink, files/name, files/mimeType, files/id")
        .setQ("Your file param and/or mime query");

FileList files = request.execute();
files.getFiles(); // Each File in the collection will have a valid thumbnailLink
A sample query might be:
"mimeType = 'image/jpeg' or mimeType = 'video/mp4'"
Hope this helps!
Can I safely use self.image.staged_path to access a file which has been uploaded to Amazon S3 using Paperclip? I noticed that I can use self.image.url (which returns https...s3....file) to read EXIF data from the file on S3 in the Production and Development environments. I can't use the same approach in test, though.
I found the staged_path method, which lets me read EXIF data from the file in all environments (it returns something like /var/folders/dv/zpc...-6331-fq3gju).
I couldn't find much information about this method, so the question is: does anyone have experience with it who could advise on the reliability of this approach? I'm reading the EXIF data in a before_post_process callback:
before_post_process :load_date_from_exif

def load_date_from_exif
  ...
  EXIFR::JPEG.new(self.image.staged_path).date_time
  ...
end