Autodesk Forge - problems with very large .zip files - asp.net-web-api2

We allow our users to upload files to Forge, but into our bucket (they don't need to create their own), as we're only using the model viewer. This means they need to upload to our server first.
The upload method uses the stream from the HttpContent (we're using WebAPI2) and sends it right on into the Forge API methods.
Well, it would, but I get this exception: Error getting value from 'WriteTimeout' on 'System.Net.Http.StreamContent+ReadOnlyStream'.
This means the Forge API is reading WriteTimeout without first checking CanWrite or CanTimeout. Have I found an API bug?
Copying to another stream is feasible, but I can't test under a debugger with the file our client is reporting further problems with, because it's 1.1 GB and my dev box runs out of memory.
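A minimal sketch of the "copy to another stream" workaround that avoids the memory problem: buffer the request body to a temp file on disk instead of a MemoryStream, then hand a plain FileStream to the Forge upload call. The Forge call itself is only indicated in a comment, since the exact method name and signature depend on the Autodesk.Forge SDK version; whether this also clears the WriteTimeout serialization error depends on how the SDK touches the stream.

```csharp
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

public class ForgeUploadController : ApiController
{
    [HttpPost]
    public async Task<IHttpActionResult> Upload()
    {
        // Buffer the incoming request body to a temp file on disk rather than
        // a MemoryStream, so a 1.1 GB upload never has to fit in RAM.
        var tempPath = Path.GetTempFileName();
        try
        {
            using (var source = await Request.Content.ReadAsStreamAsync())
            using (var temp = File.Create(tempPath))
            {
                await source.CopyToAsync(temp);
            }

            using (var upload = File.OpenRead(tempPath))
            {
                // Hand the plain FileStream to the Forge upload call here, e.g.
                // something like objectsApi.UploadObjectAsync(bucketKey, objectName,
                // (int)upload.Length, upload, ...) -- exact name and parameters
                // depend on the SDK version you use.
            }

            return Ok();
        }
        finally
        {
            File.Delete(tempPath);
        }
    }
}
```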

Related

How can I set up a file upload function?

I am creating an SAPUI5 web app with a file upload function, following the sap.m.sample.UploadCollection example from SAPUI5 Explored.
I am using my trial account in SAP Web IDE to set up the upload function (UploadCollection).
The issue is that it does not allow me to upload a file into the project folder or a local desktop folder.
If I upload a file it appears in the list, but I can't open it and I get an HTTP 405 error.
Any ideas what the problem is?
As you can already see in your post's comments, you need a backend for this task. The UploadCollection control is only usable with a backend behind it that receives the transmitted file from the control.
On the page https://sapui5.hana.ondemand.com/#/api/sap.m.UploadCollection you can read:
This control allows you to upload single or multiple files from your devices (desktop, tablet or phone) and attach them to the application
where you can replace "application" with "receiving backend".
Independent of this, may I ask where you think the file should be uploaded, if not to a backend system? When you choose a file from your local storage, it doesn't make sense to upload it back to your local storage.
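For reference, the receiving side can be any HTTP endpoint that accepts the multipart POST the control sends to its uploadUrl; an HTTP 405 usually means the target URL does not accept POST at all. Since the rest of this thread is ASP.NET Web API 2, here is a minimal sketch of such a receiving endpoint; the controller name and upload folder are made up.

```csharp
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Http;

public class AttachmentUploadController : ApiController
{
    // Hypothetical endpoint an UploadCollection's uploadUrl could point at.
    [HttpPost]
    public async Task<IHttpActionResult> Post()
    {
        if (!Request.Content.IsMimeMultipartContent())
            return StatusCode(HttpStatusCode.UnsupportedMediaType);

        // Stream each uploaded part to a server-side folder instead of memory.
        var provider = new MultipartFormDataStreamProvider(@"C:\uploads");
        await Request.Content.ReadAsMultipartAsync(provider);

        // provider.FileData lists the saved files (local name + original headers).
        return Ok(provider.FileData.Count);
    }
}
```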

Nancy/OWIN Service Fabric Microservice Writes Requests To Temp File

I have a microservice, hosted in Service Fabric, that handles uploading files to blob storage. The microservice is implemented with Nancy and OWIN. When the request is over a certain size, something like a couple hundred KB maybe, the request gets written to disk in a temp directory. Occasionally these .tmp files fail to get cleaned up, and eat up the limited disk space on the SF Cluster VM.
I have not been able to find anything about requests automatically getting written to disk. And nothing in the code creates .tmp files. What could be generating these files: Service Fabric, Nancy, OWIN?
Nancy is doing this. It has a feature called "request stream switching" which, as you say, switches from a memory stream to a file-based stream over a certain size, to avoid someone filling up all the memory by uploading a large (or never-ending) file.
The temp files should get cleaned up after every request; I haven't seen any reports of them not being cleaned up for a long time (we've fixed bugs around this in the past). If you want to disable the switching completely (and accept the potential memory issue above), you can set "StaticConfiguration.DisableRequestStreamSwitching" in your bootstrapper's application startup to turn it off.
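A minimal sketch of that switch in a Nancy bootstrapper, assuming the property name as it appears in the Nancy versions I've seen (verify against yours):

```csharp
using Nancy;
using Nancy.Bootstrapper;
using Nancy.TinyIoc;

public class Bootstrapper : DefaultNancyBootstrapper
{
    protected override void ApplicationStartup(TinyIoCContainer container, IPipelines pipelines)
    {
        // Keep every request body in memory instead of switching large bodies to
        // temp files on disk -- this accepts the memory-exhaustion trade-off above.
        StaticConfiguration.DisableRequestStreamSwitching = true;

        base.ApplicationStartup(container, pipelines);
    }
}
```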

Red5 stream audio from Azure Storage or Amazon S3

I'm wondering if it's possible to stream audio in Red5 from files stored in Azure. I am aware of how to manipulate the playback path via a custom file name generator (IStreamFilenameGenerator); our legacy Red5 web app uses it. It seems to me, though, that this path needs to be on the local Red5 server; is that correct?
I studied the example showing how to use Amazon S3 for file persistence and playback (https://goo.gl/7IIP28), and while the file recording + upload makes perfect sense, I'm just not seeing how the playback file name that is returned streams from S3. Tracing the StringBuilder appends/inserts, it looks like the filename ends up as something like {BucketLocation}/{SessionID}/{FileKey}. This led me to believe that bucket.getLocation() on line 111 was returning an HTTP/S endpoint URL, and Red5 would somehow be able to use it. I wrote a console app to test what bucket.getLocation() returns, and it only returns null for US servers and "EU" for Europe. So I'm not even sure where/how this accesses S3 for direct playback. Am I missing something?
Again, my goal is to access files stored in Azure, but I figured the above Amazon S3 example would have given me a hint.
I totally understand that you cannot record directly to Azure or S3; the store-locally-then-upload approach makes sense. What I am failing to see is how to stream directly from cloud blob storage. If anyone has suggestions, I would greatly appreciate it.
Have you tried using Azure Media Services? I believe looking at their documentation will be a good start for your scenario.

Uploading large file (10+ GB) from Web client via azure web site to azure blob storage

I've got a bit of a problem uploading a really large file into Azure blob storage.
I have no problem uploading that file to the web site as a file upload into an upload directory.
I have no problem either putting this into the blob storage, as chunking will be handled internally.
The problem I'm having is that moving the large file from the upload directory to blob storage takes longer than the browser timeout, so the customer sees an error message.
As far as I know, the solution is to chunk-upload directly from the web browser.
But how do I deal with the block ids? Since the web service is supposed to be stateless, I don't think I can keep around a list of blocks already uploaded.
Also, can the blob storage deal with out-of-order blocks?
And do I have to deal with all the state manually?
Or is there an easier way, maybe just handing the blob service the HttpRequest input stream from the file-upload POST request (multipart form data)?
Lots of Greetings!
You could move the file from the web server to blob storage asynchronously: return success for the original request once the file is on the web server, then have JavaScript poll your web server periodically to check whether the file has made it to durable blob storage, and display success to the user once that confirmation comes back.
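On the block-ID part of the question: block blobs already keep the upload state server-side, so the web tier doesn't need to track it. Each PutBlock call stores an uncommitted block under a base64 block ID, blocks can arrive in any order, and the final PutBlockList defines the order and commits the blob. A sketch using the classic Microsoft.WindowsAzure.Storage SDK follows; the container name, blob name, and chunk size are made up, and the newer Azure.Storage.Blobs SDK has equivalent StageBlock/CommitBlockList calls.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Text;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

public static class ChunkedBlobUpload
{
    public static void Upload(string connectionString, Stream source)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        var blob = account.CreateCloudBlobClient()
                          .GetContainerReference("uploads")       // assumed container
                          .GetBlockBlobReference("bigfile.bin");  // assumed blob name

        const int chunkSize = 4 * 1024 * 1024; // 4 MB per block
        var blockIds = new List<string>();
        var buffer = new byte[chunkSize];
        int index = 0, read;

        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            // Block IDs must be base64-encoded and the same length for every block.
            string blockId = Convert.ToBase64String(Encoding.UTF8.GetBytes(index.ToString("d6")));
            using (var chunk = new MemoryStream(buffer, 0, read))
            {
                blob.PutBlock(blockId, chunk, null); // uncommitted block; any order is fine
            }
            blockIds.Add(blockId);
            index++;
        }

        // The ordered list passed here determines the final committed blob content.
        blob.PutBlockList(blockIds);
    }
}
```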

Pyramid/Pylons: How to check if an uploaded file is complete in a POST request?

I'm building a web tool which allows users to upload PDFs to a server using their web browsers. The server is based on Python (Paste + Pyramid).
The problem I have right now is the following: If a user uploads a rather large file (let's say 100 MB) and they cancel the upload before it is completed, my handler code on the server is still called (instead of the request being aborted).
The problem is that the request.POST['myfile'].file is incomplete when that happens. This effectively means that the PDF file is corrupted if I simply write it to some place on the server.
When I watch the server's log, it shows a "broken pipe" exception within the Paste server; however I have no idea how to catch that exception and have it prevent my view/handler code from executing and storing the incomplete file.
Seems like the paster HTTP server does not correctly validate the uploaded form data and simply passes the request down the WSGI pipeline even if the connection (HTTP POST) was closed by the user.
I worked around this issue by simply setting up NGINX to act as a reverse proxy. This also adds some security benefits as it might be better tested than paster.
Update:
My main problem was that I was using runserver (the built-in web server of manage.py). After some trial and error we ended up using WSGI.
More specifically, uWSGI and Nginx as web server. Static content is served directly by Nginx while dynamic pages are piped through uWSGI and are handled by the Python web app.
Unless you are doing something fancy (like tracking the upload progress, etc.), your Pylons controller should not be invoked until the entire file has been uploaded.