XMLHttpRequest fails on large files

XMLHttpRequest fails when I send a large file (>700 MB) via .send(). Even worse, BlobBuilder fails for large files with append() as well. Is there a way to send a file in multiple chunks using XMLHttpRequest? How do I tell the server to "append" the following stream of data?

If you have control of the server as well as the client, I'd suggest the following workaround (a rough server-side sketch follows below):
1. Break the file up into chunks (.slice()).
2. Upload the chunks one at a time.
3. Reassemble the chunks on the server.
I don't know that this problem can be solved strictly within the browser.
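If the server side is yours to write, the reassembly step is just appending each chunk in the order it arrives. Here is a minimal sketch, assuming an ASP.NET Core endpoint and hypothetical fileName/chunkIndex form fields; the client would produce the chunks with Blob.slice() and upload them sequentially.

using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;

public class UploadController : ControllerBase
{
    // Hypothetical endpoint: the client POSTs each Blob.slice() chunk in order,
    // and the server appends it to one file, so neither side holds the whole file in memory.
    [HttpPost("upload-chunk")]
    public async Task<IActionResult> UploadChunk(IFormFile chunk, string fileName, int chunkIndex)
    {
        // Path.GetFileName strips directory parts to avoid path traversal.
        var target = Path.Combine("uploads", Path.GetFileName(fileName));

        // FileMode.Append creates the file for the first chunk and appends for the rest.
        using (var output = new FileStream(target, FileMode.Append, FileAccess.Write))
        {
            await chunk.CopyToAsync(output);
        }
        return Ok(new { received = chunkIndex });
    }
}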

Related

Using DSPSTMF to display a stream file (STMF) in the browser, but it's all junk and it downloads the file instead of displaying it. Also, any idea about the CONTTYPES file?

I am using the CGI DSPSTMF command to display a stream file in a web browser. I copy a spool file to a stream file using CPYSPLF with the *STMF option, then pass the IFS location to DSPSTMF, but the browser downloads the file automatically, and when I open the downloaded file I get all junk data. Any idea why?
Also, I noticed it uses the CONTTYPES file in CGILIB, and on my server that file is empty. What should the values in it be, and what should I do to show correct data instead of junk? I tried different methods to copy the file to the IFS, such as CPYTOSTMF instead of CPYSPLF, but while the file on the IFS looks correct, the downloaded version does not.
What CCSID is the resulting stream file tagged with?
Use WRKLNK and option 8 (Display attributes).
If it is 65535, that tells the system the data is binary, and it won't try to translate the EBCDIC to ASCII.
The correct fix is to properly configure your IBM i so that the stream file is tagged with its correct CCSID.
Do a WRKSYSVAL QCCSID ... if your system is still set to 65535, that's the start of your problem. But this isn't programming related; you could try posting to Server Fault, but you might get better responses on the Midrange mailing list.

How chunk file upload works

I am working on file upload and really wondering how chunked file upload actually works.
I understand that the client sends the data to the server in small chunks instead of the complete file at once, but I have a few questions about this:
For the browser to divide the whole file into chunks and send them, will it read the complete file into memory? If so, there is again a chance of excessive memory use and a browser crash for big files (say > 10 GB).
How do cloud applications like Google Drive and Dropbox handle such big file uploads?
If multiple files are selected for upload and all of them are larger than 5-10 GB, does the browser keep all the files in memory and then send them chunk by chunk?
Not sure if you're still looking for an answer. I've been in your position recently, and here's what I came up with; hope it helps: Deal chunk uploaded files in php
During uploading, if you print out the request on the backend, you will see three parameters: _chunkNumber, _totalSize and _chunkSize. With these parameters it's easy to decide whether this chunk is the last piece, and if it is, assembling all of the pieces into a whole shouldn't be hard (see the sketch below).
As for the JavaScript side, ng-file-upload has a setting named "resumeChunkSize" where you can enable chunk mode and set the chunk size.
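The backend in that answer is PHP, but the last-chunk decision is easy to sketch in C# as well (the parameter names are the ones mentioned above; the upload directory and class name are made up):

using System.IO;

public static class ChunkAssembler
{
    // Append one incoming chunk and report whether it was the last piece,
    // based on the _chunkNumber / _totalSize / _chunkSize parameters described above.
    public static bool AppendChunk(string uploadDir, string fileName,
                                   long chunkNumber, long totalSize, long chunkSize,
                                   Stream chunkBody)
    {
        var target = Path.Combine(uploadDir, fileName);
        using (var output = new FileStream(target, FileMode.Append, FileAccess.Write))
        {
            chunkBody.CopyTo(output);
        }

        // Ceiling division gives the total number of chunks.
        long totalChunks = (totalSize + chunkSize - 1) / chunkSize;

        // Assuming the client counts chunks from 0; adjust if yours counts from 1.
        return chunkNumber == totalChunks - 1;
    }
}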

WCF returning "dynamic" gzipstream

I need to create a service that returns a GZipStream consisting of one or more files. The number of files could be hundreds and each file could potentially take up more than 500MB.
Is it somehow possible to add the files dynamically to the GZipStream as the stream is being transferred? (To avoid running into an out-of-memory exception when the files need to be copied into the stream.)
For example:
Copy fileA to the stream being returned.
The client starts reading the stream.
When fileA has been read (client side), copy fileB to the stream (server side).
The client continues to read the stream.
... and so on until there are no more files.
By the way, it's not important that the files are compressed, just that they are combined into a zip file so that the client only has to download one single file.
So my goal is: stream multiple files back to the client as one single file, without processing all the files at once on the server (to avoid loading them all into memory and raising an out-of-memory exception).
Could this be done by creating a custom stream somehow or is there an easier way to go?
Thanks.
You could combine the files to a single zip file on disk and then stream that file back.
For how to combine the files into a zip file, see: c# sharpziplib adding file to existing archive
This solves the out of memory problem, but it does mean that you need a lot of disk space.
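The linked answer uses SharpZipLib; as a rough illustration of the same idea with the built-in System.IO.Compression types, you could build the archive on disk entry by entry and then hand the finished file back as a stream (the class name and paths here are placeholders):

using System.Collections.Generic;
using System.IO;
using System.IO.Compression;

public static class ZipResponseBuilder
{
    // Build the archive on disk one entry at a time; only one source file is
    // being copied at any moment, so memory use stays flat regardless of file sizes.
    public static Stream BuildZip(string zipPath, IEnumerable<string> sourceFiles)
    {
        using (var zipFile = new FileStream(zipPath, FileMode.Create, FileAccess.Write))
        using (var archive = new ZipArchive(zipFile, ZipArchiveMode.Create))
        {
            foreach (var file in sourceFiles)
            {
                // NoCompression, since the asker only needs the files combined.
                var entry = archive.CreateEntry(Path.GetFileName(file), CompressionLevel.NoCompression);
                using (var entryStream = entry.Open())
                using (var source = File.OpenRead(file))
                {
                    source.CopyTo(entryStream);
                }
            }
        }

        // Return the finished archive for the WCF operation to stream back.
        return new FileStream(zipPath, FileMode.Open, FileAccess.Read, FileShare.Read);
    }
}

The trade-off is the one noted above: the temporary zip costs disk space roughly equal to the combined size of the files.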

Generate a Large File in S3 with .NET

I would like to generate a big file (several TB) in a special format using my C# logic and persist it to S3. What is the best way to do this? I could launch a node in EC2, write the big file to EBS, and then upload it from EBS to S3 using the S3 .NET client library.
Can I stream the file content as I am generating it in my code and send it directly to S3 until the generation is done, given such a large file and the risk of out-of-memory issues? I can see that this code helps with a stream, but it sounds like the stream has to be filled up already. I obviously cannot keep that amount of data in memory, and I also do not want to save it to a file on disk first.
// Example I found: "ms" is a MemoryStream that already holds the whole object,
// which is exactly what I want to avoid for a multi-TB file.
PutObjectRequest request = new PutObjectRequest();
request.WithBucketName(BUCKET_NAME);
request.WithKey(S3_KEY);
request.WithInputStream(ms);
s3Client.PutObject(request);
What is my best bet to generate this big file and stream it to S3 as I am generating it?
You can certainly upload any file up to 5 TB; that's the limit. I recommend using the streaming and multipart put operations. Uploading a 1 TB file could easily fail partway through, and you'd have to do it all over again, so break it up into parts as you store it. Also be aware that if you need to modify the file, you would have to download it, modify it, and re-upload it. If you plan on modifying the file at all, I recommend trying to split it up into smaller files.
http://docs.amazonwebservices.com/AmazonS3/latest/dev/UploadingObjects.html
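As a rough sketch of the streaming/multipart idea with the current AWS SDK for .NET (the class names differ from the older With... style shown in the question), you could generate one part at a time into a buffer, upload it, and discard it. generateNextPart below stands in for the asker's own generation logic:

using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

public static class S3StreamingUploader
{
    // Generate the object part by part and push each part to S3 as it is produced.
    // generateNextPart should return at least 5 MB per part (S3's minimum for every
    // part except the last) and null when generation is finished.
    public static async Task UploadGeneratedFileAsync(IAmazonS3 s3, string bucket, string key,
                                                      Func<MemoryStream> generateNextPart)
    {
        var init = await s3.InitiateMultipartUploadAsync(
            new InitiateMultipartUploadRequest { BucketName = bucket, Key = key });

        var partResponses = new List<UploadPartResponse>();
        int partNumber = 1;
        MemoryStream part;

        while ((part = generateNextPart()) != null)
        {
            using (part)   // the buffer is released as soon as the part is uploaded
            {
                part.Position = 0;
                partResponses.Add(await s3.UploadPartAsync(new UploadPartRequest
                {
                    BucketName = bucket,
                    Key = key,
                    UploadId = init.UploadId,
                    PartNumber = partNumber++,
                    InputStream = part,
                    PartSize = part.Length
                }));
            }
        }

        var complete = new CompleteMultipartUploadRequest
        {
            BucketName = bucket,
            Key = key,
            UploadId = init.UploadId
        };
        complete.AddPartETags(partResponses);
        await s3.CompleteMultipartUploadAsync(complete);
    }
}

Only one part's buffer exists at a time, so memory stays bounded no matter how large the final object is.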

Use of FTP "append" command

I want to upload a file to an FTP server programmatically (C++). If the connection is lost while uploading, I don't want to upload the file from scratch, but only the part that I haven't sent yet.
Does the APPE command fulfill my demand? What list of FTP commands should I use exactly? And how?
I was googling for details about the APPE FTP command and what it actually does, but most sites just say "append", so I tried the command myself to make sure it behaves as expected.
I am designing an FTP auto-sender that sends a log file from a machine to a server for reporting. I only want to send the last line of the log file.
When using the APPE command, it actually appends the whole file content to the existing file on the server, which duplicates the line entries.
The answer:
There is no single command to resume a failed transfer; you need a sequence of commands to achieve it.
The key point is to seek your local file to the last uploaded byte if you are using the APPE command, or to use the REST command. REST starts the transfer at that particular byte position. I ended up with this solution, performed after the connection is established:
Use APPE (I got the idea from the FileZilla log):
1. Use SIZE to check whether the file exists, and use the result as the resume marker.
2. Open the local file and seek to the marker.
3. Use APPE to upload; the FTP server will append automatically.
Use STOR with REST (I got the idea from edtFTPnet):
1. Use SIZE to check whether the file exists, and use the result as the resume marker.
2. Send REST with the result from SIZE to tell the FTP server to start writing at that position.
3. Open the local file and seek to the marker.
4. Use STOR as a normal upload.
Note that not all FTP servers support both methods; I have seen FileZilla switch between the two depending on the server. My observation is that using REST is the standard way. Downloads can also use REST to start at a given byte position.
Remember that using resume with the ASCII transfer type will produce unexpected results, since Unix and Windows use different line-break byte counts.
Try experimenting with FileZilla and watching its log to see the behaviour.
You can also check this useful open-source FTP library for .NET to see how they do it:
edtFTPnet
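The question is about C++, but for a concrete illustration of the first sequence (SIZE, seek the local file, then APPE), here is a sketch with .NET's FtpWebRequest; the same command flow applies with any FTP library, and the URI and credentials are placeholders.

using System;
using System.IO;
using System.Net;

public static class FtpResumeUpload
{
    // Resume an upload using the SIZE + seek + APPE sequence described above.
    public static void Resume(string localPath, Uri remoteUri, NetworkCredential credentials)
    {
        // 1. SIZE: how many bytes does the server already have?
        long remoteSize = 0;
        var sizeRequest = (FtpWebRequest)WebRequest.Create(remoteUri);
        sizeRequest.Method = WebRequestMethods.Ftp.GetFileSize;   // sends SIZE
        sizeRequest.Credentials = credentials;
        try
        {
            using (var response = (FtpWebResponse)sizeRequest.GetResponse())
            {
                remoteSize = response.ContentLength;
            }
        }
        catch (WebException)
        {
            // The file does not exist yet, so start from byte 0.
        }

        // 2. Seek the local file to the resume marker, then 3. APPE the remainder.
        var appendRequest = (FtpWebRequest)WebRequest.Create(remoteUri);
        appendRequest.Method = WebRequestMethods.Ftp.AppendFile;  // sends APPE
        appendRequest.Credentials = credentials;

        using (var local = File.OpenRead(localPath))
        using (var upload = appendRequest.GetRequestStream())
        {
            local.Seek(remoteSize, SeekOrigin.Begin);
            local.CopyTo(upload);   // the server appends whatever arrives
        }
        ((FtpWebResponse)appendRequest.GetResponse()).Close();
    }
}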
Check the RFC, specifically the APPE (append) command:
This command causes the server-DTP to accept the data transferred via the data connection and to store the data in a file at the server site. If the file specified in the pathname exists at the server site, then the data shall be appended to that file; otherwise the file specified in the pathname shall be created at the server site.
Note that you cannot simply APPE the whole file again; you should send only the remaining bytes, i.e. continue from the position where the connection was lost.