I need to upload gzipped content to S3 via a signed URL.
Here is how I generate the signed URL with a JS backend:
s3.createPresignedPost({
  Bucket: 'name',
  Fields: {
    key: 'key'
  }
})
I have tried passing the Content-Encoding header with the POST request to the signed URL, but that did not work: the headers are not set on the S3 object.
I have also tried setting up a post-upload Lambda to update the metadata. It failed with a "File is identical" error.
Finally, I tried using CloudFront plus a Lambda to force the header. This failed too, with an error stating that Content-Encoding is a protected header.
--Update Start--
For uploading to S3 via Ajax or JS scripts, I would advise using the s3.getSignedUrl method; s3.createPresignedPost is meant only for direct browser form uploads.
Below is an example of an Ajax jQuery upload I created using this guide.
s3.getSignedUrl('putObject', {
  Bucket: 'bucketName',
  Key: 'sample.jpg.gz',
  // This must match your ajax contentType parameter
  ContentType: 'binary/octet-stream'
  /* then add all the rest of your parameters to AWS putObject here */
}, function (err, url) {
  console.log('The URL is', url);
});
Ajax PUT script - take the URL from the above function call and use it below.
$.ajax({
  type: 'PUT',
  url: "https://s3.amazonaws.com/bucketName/sample.jpg.gz?AWSAccessKeyId=AKIAIOSFODNN7EXAMPLE&Content-Type=binary%2Foctet-stream&Expires=1547056786&Signature=KyTXdr8so2C8WUmN0Owk%2FVLw6R0%3D",
  // Even though the Content-Encoding header was not specified in the signature, it uploads fine.
  headers: {
    'Content-Encoding': 'gzip'
  },
  // Content type must match the parameter you signed your URL with
  contentType: 'binary/octet-stream',
  // this flag is important; if not set, jQuery will try to send the data as a form
  processData: false,
  // the actual file is sent raw
  data: theFormFile
}).done(function () {
  alert('File uploaded');
}).fail(function () {
  alert('File NOT uploaded');
  console.log(arguments);
});
On the S3 object, you should see Content-Type and Content-Encoding under Metadata.
Important note
When you upload via JS running in a browser, the browser will typically send an OPTIONS preflight request (the CORS check) before the PUT. You will get a 403 Forbidden error on the OPTIONS request if CORS on the S3 bucket doesn't allow it. One way I resolved this is with the following CORS configuration at the bucket level. Reference
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
--Update End--
Did you try it like this? I just tested the policy using the sample HTML given in the AWS documentation. Reference - https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-HTTPPOSTConstructPolicy.html
s3.createPresignedPost({
  Bucket: 'name',
  Conditions: [
    { "Content-Encoding": "gzip" }
  ],
  Fields: {
    key: 'key'
  }
})
Update -
Here is my observation so far.
We really need to look at the client that is doing the upload. If you want Content-Encoding set in the metadata, then your pre-signed URL must have the Content-Encoding property set. If the signed URL doesn't have it but your request does, you will get Extra input fields: content-encoding.
I signed a URL with Content-Encoding and uploaded a gzipped file with the following sample HTML.
<html>
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
</head>
<body>
  <form action="http://bucket-name.s3.amazonaws.com" method="post" enctype="multipart/form-data">
    Key to upload:
    <input type="text" name="key" value="sample.jpg.gz" /><br />
    Content-Encoding:
    <input type="text" name="Content-Encoding" value="gzip" /><br />
    <input type="text" name="X-Amz-Credential" value="AKIAIOSFODNN7EXAMPLE/20190108/us-east-1/s3/aws4_request" />
    <input type="text" name="X-Amz-Algorithm" value="AWS4-HMAC-SHA256" />
    <input type="text" name="X-Amz-Date" value="20190108T220828Z" />
    Tags for File:
    <input type="hidden" name="Policy" value='bigbase64String' />
    <input type="hidden" name="X-Amz-Signature" value="xxxxxxxx" />
    File:
    <input type="file" name="file" /> <br />
    <!-- The elements after this will be ignored -->
    <input type="submit" name="submit" value="Upload to Amazon S3" />
  </form>
</body>
</html>
If I do not send the Content-Encoding field, it gives the error Policy Condition failed: ["eq", "$Content-Encoding", "gzip"].
Note -
If you are using https while uploading, make sure you have a proper certificate on the S3 endpoint, otherwise you will get certificate errors.
[Screenshot: S3 object metadata showing Content-Type and Content-Encoding]
Cloudflare R2: how do I upload images?
I'm new to the Cloudflare world. I can upload pictures by dragging them into the dashboard, but how do I upload an image from code, from an application? Do I have to use Workers?
I have uploaded files to R2 successfully with rclone.
Configure rclone
First, install rclone on your PC.
Then create an rclone.conf file under the path ~/.config/rclone/.
[r2]
type = s3
provider = Cloudflare
access_key_id = <ACCESS_KEY>
secret_access_key = <SECRET_ACCESS_KEY>
region = auto
endpoint = https://<ACCOUNT_ID>.r2.cloudflarestorage.com
acl = private
[r2]: A custom name (an alias) for the storage service. We use it to operate on files.
type = s3: The type of file-operation API. R2 supports the S3 standard protocol.
provider = Cloudflare: The storage provider ID. You can use man rclone to list the supported providers.
access_key_id: You need to create a token with edit permission in the R2 console.
secret_access_key: Same as above.
endpoint: The URL that rclone uses to operate on files. You can get the account ID at the top right of the R2 homepage.
Usage
Run rclone lsf r2: to see your buckets, and rclone lsf r2:my-bucket to list the files within a bucket.
Note especially the trailing colon (:) after the alias.
To upload a file:
rclone copy /path/to/file r2:my-bucket
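Since R2 speaks the S3 protocol, you can also upload from code with the AWS SDK for JavaScript pointed at the R2 endpoint. A minimal sketch under that assumption; <ACCOUNT_ID>, the credentials, my-bucket, and the file paths are placeholders:
// Upload a file to R2 through its S3-compatible API (aws-sdk v2).
const AWS = require('aws-sdk');
const fs = require('fs');

const s3 = new AWS.S3({
  endpoint: 'https://<ACCOUNT_ID>.r2.cloudflarestorage.com',
  accessKeyId: '<ACCESS_KEY>',
  secretAccessKey: '<SECRET_ACCESS_KEY>',
  region: 'auto',
  signatureVersion: 'v4' // R2 expects SigV4
});

s3.putObject({
  Bucket: 'my-bucket',
  Key: 'images/photo.jpg',
  Body: fs.readFileSync('/path/to/photo.jpg'),
  ContentType: 'image/jpeg'
}, (err) => {
  if (err) console.error('Upload failed:', err);
  else console.log('Upload succeeded');
});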
Hey, I was stuck on this for two days and could not figure it out, so I want to share my solution.
My Goal
I was developing software for collecting data from our end users. They needed to submit an image so we could verify them, which is why I needed a form that lets users upload a file and an API endpoint to store it.
Solution
I set up a Cloudflare Worker as the API, since it connects well with R2: you just have to bind the bucket in your Worker settings.
My Cloudflare Worker script and an example form that lets users upload files:
// CLOUDFLARE WORKER SCRIPT
// ------------------------
export default {
  async fetch(request, env) {
    const corsHeaders = {
      'Access-Control-Allow-Origin': '*',
      'Access-Control-Allow-Methods': 'GET, HEAD, POST, OPTIONS',
      'Access-Control-Max-Age': '86400',
    };
    // Check for preflight request from the browser.
    if (request.method === 'OPTIONS') {
      return new Response(null, {
        headers: {
          ...corsHeaders,
          'Access-Control-Allow-Headers': request.headers.get(
            'Access-Control-Request-Headers'
          ),
        }
      });
    } else {
      // Handle the actual request and store the image in the bucket.
      const { headers } = request;
      // Key is the current timestamp (as a string), since we want keys to be unique.
      const key = Date.now().toString();
      await env.MY_BUCKET.put(key, request.body, {
        httpMetadata: {
          contentType: headers.get('content-type')
        }
      });
      return new Response('success!', {
        headers: {
          ...corsHeaders,
          'Access-Control-Allow-Headers': request.headers.get(
            'Access-Control-Request-Headers'
          ),
        }
      });
    }
  }
}
<!DOCTYPE html>
<html>
<head>
  <title>Upload Images with Cloudflare R2</title>
</head>
<body>
  <form method="POST" enctype="multipart/form-data">
    <label for="image">Select image to upload:</label>
    <input type="file" name="image" id="image" /><br /><br />
    <input type="submit" value="Upload" />
  </form>
  <script>
    async function uploadImage(file) {
      fetch('https://<YOUR-OWN-WORKER>.workers.dev', {
        method: 'POST',
        // Access-Control-Allow-Origin is a response header set by the Worker,
        // so it does not belong on the request; Content-Type is enough here.
        headers: {
          'Content-Type': file.type
        },
        body: file
      })
        .then((response) => response.text())
        .then((data) => console.log(data))
        .catch((error) => console.error(error));
    }
    const image = document.getElementById('image');
    const onSelectFile = () => uploadImage(image.files[0]);
    image.addEventListener('change', onSelectFile, false);
  </script>
</body>
</html>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.js" type="text/javascript"></script>
<script>
  $.get("http://example.com/", function(data) {
    alert(data);
  });
</script>
It does an OPTIONS request to that URL, and then the callback is never called with anything.
When it isn't cross-domain, it works fine.
Shouldn't jQuery just make the call with a <script> node and then do the callback when it's loaded? I understand that I won't be able to get the result (since it is cross-domain), but that's OK; I just want the call to go through. Is this a bug, or am I doing something wrong?
According to MDN,
Preflighted requests
Unlike simple requests (discussed above), "preflighted" requests first
send an HTTP OPTIONS request header to the resource on the other
domain, in order to determine whether the actual request is safe to
send. Cross-site requests are preflighted like this since they may
have implications to user data. In particular, a request is
preflighted if:
It uses methods other than GET or POST. Also, if POST is used to send
request data with a Content-Type other than
application/x-www-form-urlencoded, multipart/form-data, or text/plain,
e.g. if the POST request sends an XML payload to the server using
application/xml or text/xml, then the request is preflighted.
It sets custom headers in the request (e.g. the request uses a header such as
X-PINGOTHER)
The OPTIONS request comes from the CORS spec (http://www.w3.org/TR/cors/). See http://metajack.im/2010/01/19/crossdomain-ajax-for-xmpp-http-binding-made-easy/ for a bit more info.
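As a concrete illustration of the rules quoted above, a request like the following is preflighted because it sets a custom header (the endpoint and header value here are made up):
// The custom X-PINGOTHER header makes this a non-simple request,
// so the browser sends an OPTIONS preflight before the GET.
$.ajax({
  url: 'http://other-domain.example/api', // hypothetical cross-domain endpoint
  type: 'GET',
  headers: { 'X-PINGOTHER': 'pingpong' }
});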
If you're trying to POST
Make sure to JSON.stringify your form data and send as text/plain.
<form id="my-form" onSubmit="return postMyFormData();">
<input type="text" name="name" placeholder="Your Name" required>
<input type="email" name="email" placeholder="Your Email" required>
<input type="submit" value="Submit My Form">
</form>
function postMyFormData() {
  var formData = $('#my-form').serializeArray();
  formData = formData.reduce(function(obj, item) {
    obj[item.name] = item.value;
    return obj;
  }, {});
  formData = JSON.stringify(formData);
  $.ajax({
    type: "POST",
    url: "https://website.com/path",
    data: formData,
    success: function() { ... },
    dataType: "text",
    contentType: "text/plain"
  });
  // Return false so the browser does not also submit the form normally.
  return false;
}
Just change the "application/json" to "text/plain" and do not forget the JSON.stringify(request):
var request = {Company: sapws.dbName, UserName: username, Password: userpass};
console.log(request);
$.ajax({
  type: "POST",
  url: this.wsUrl + "/Login",
  contentType: "text/plain",
  data: JSON.stringify(request),
  crossDomain: true
});
I don't believe jQuery will just naturally do a JSONP request when given a URL like that. It will, however, do a JSONP request when you tell it what argument to use for a callback:
$.get("http://metaward.com/import/http://metaward.com/u/ptarjan?jsoncallback=?", function(data) {
alert(data);
});
It's entirely up to the receiving script to make use of that argument (which doesn't have to be called "jsoncallback"), so in this case the function will never be called. But since you stated you just want the script at metaward.com to execute, that should do it.
In fact, cross-domain AJAX (XMLHttpRequest) calls are not allowed for security reasons (think about fetching a "restricted" webpage from the client side and sending it back to the server; that would be a security issue).
The only workaround is callbacks (JSONP): you create a new script element and point its src at the remote JavaScript, which is a call to a callback with JSON values, e.g. myFunction({data}), where myFunction is a function that does something with the data (for example, storing it in a variable).
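A minimal sketch of that script-injection pattern (the endpoint URL and the jsoncallback parameter name are assumptions; the server must wrap its JSON in the named callback):
// Define the callback the remote script will invoke.
function myFunction(data) {
  console.log('Received:', data); // e.g. store it in a variable
}

// Inject a <script> tag; the response body must be myFunction({...}).
var script = document.createElement('script');
script.src = 'http://other-domain.example/data?jsoncallback=myFunction';
document.head.appendChild(script);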
I had the same problem. My fix was to add headers to my PHP script, which are present only in the dev environment.
This allows cross-domain requests:
header("Access-Control-Allow-Origin: *");
This tells the preflight request that it is OK for the client to send any headers it wants:
header("Access-Control-Allow-Headers: *");
This way there is no need to modify the request.
If you have sensitive data in your dev database that might potentially be leaked, then you might think twice about this.
In my case, the issue was unrelated to CORS since I was issuing a jQuery POST to the same web server. The data was JSON, but I had omitted the dataType: 'json' parameter.
I did not have (nor did I add) a contentType parameter as shown in David Lopes' answer above.
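For illustration, a minimal sketch of the shape that worked (the URL and payload are hypothetical):
$.ajax({
  type: 'POST',
  url: '/api/endpoint',    // hypothetical same-origin URL
  data: { name: 'value' }, // hypothetical payload
  dataType: 'json',        // the parameter that was missing
  success: function (resp) { console.log(resp); }
});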
I was able to fix it with the help of the following headers:
Access-Control-Allow-Origin
Access-Control-Allow-Headers
Access-Control-Allow-Credentials
Access-Control-Allow-Methods
If you are on Node.js (with Express), here is the code you can copy/paste.
app.use((req, res, next) => {
  res.header('Access-Control-Allow-Origin', '*');
  res.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept');
  res.header('Access-Control-Allow-Credentials', true);
  res.header('Access-Control-Allow-Methods', 'GET, POST, PUT, PATCH');
  next();
});
It looks like Firefox and Opera (tested on Mac as well) don't like the cross-domainness of this (but Safari is fine with it).
You might have to have local server-side code fetch (curl) the remote page instead.
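A minimal sketch of such a same-origin proxy in Node.js (the remote URL and port are hypothetical):
// The browser calls /proxy on your own server; the server fetches the
// remote page and relays it, so no cross-domain request leaves the browser.
const http = require('http');
const https = require('https');

http.createServer((req, res) => {
  if (req.url === '/proxy') {
    https.get('https://example.com/', (remote) => { // hypothetical remote page
      res.writeHead(remote.statusCode, { 'Content-Type': 'text/html' });
      remote.pipe(res);
    }).on('error', () => {
      res.writeHead(502);
      res.end();
    });
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);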
I have a Sails application, so on the backend I have a Node.js server and on the frontend an AngularJS application.
From the frontend I want to upload a file to S3, and for that I'm using s3-skipper in the backend.
In the frontend I have:
<form id="formUpload"
method = "post"
action = "/bin/upload"
enctype= "multipart/form-data" >
<input type="file" id="inFile" name="file" />
<input type="button" value="Upload" onclick="uploadFile();" />
</form>
Then, to upload the file to S3 using skipper, I access the file stream in the backend as:
req.file("file");
And this whole process works perfectly.
Now I have another scenario where I record an audio file in the browser and want to upload it to S3 using the same skipper setup.
But in this case I don't have any HTML form element or file input, for obvious reasons. For the audio file I have a base64 audio data URL:
data:audio/ogg;base64,T2dnUwACAAAAAAAAAABx+oohAAAAAH....
And as I can't use a form submit, I want to send the data using $http. So this is what I'm doing:
var blob = $scope.dataURItoBlob(audioDataUrl, 'audio/ogg');
var formData = new FormData();
formData.append('file', blob);
$http({
  method: 'POST',
  url: '/file/upload/multi',
  headers: { 'Content-Type': 'multipart/form-data;' },
  data: formData
});
After sending the request I can see some data being sent in the browser's network tab.
But in the backend, this is what I'm getting from req.file("file"):
{ _fatalErrors: [],
isNoop: true,
_files: [],....
So, can anyone help me figure out how to upload the file properly here?
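For what it's worth, a common pattern when sending FormData with AngularJS $http is to let the browser generate the multipart boundary instead of hard-coding the Content-Type. A sketch under that assumption, not a confirmed fix for this exact setup:
var blob = $scope.dataURItoBlob(audioDataUrl, 'audio/ogg');
var formData = new FormData();
formData.append('file', blob);

$http({
  method: 'POST',
  url: '/file/upload/multi',
  // Clearing Content-Type lets the browser add multipart/form-data with its boundary.
  headers: { 'Content-Type': undefined },
  // angular.identity stops Angular from serializing the FormData object.
  transformRequest: angular.identity,
  data: formData
});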
I am using FU 5.0.8 on Ubuntu 14, Apache 2, PHP 5. I successfully created a page to drag and drop a file, create a signature, and upload to my AWS S3 bucket.
I then integrated that code into an existing form, using the qq-form. The FU element and the form are generated correctly; I am able to enter data, and the data is posted to my DB, including the key file name generated by FU.
Issue 1: FU is not pointing to the bucket I have identified in my script. Instead of the correct bucket as before, it makes my domain the target bucket name.
Issue 2: FU provides a progress bar, shows the upload as complete, and returns 200. I don't know how this is possible; does FU check that the file actually exists upon completion?
Here are the standalone upload script and the script with the form. As you can see, they have an identical endpoint. I have included the POST created from the qq-form, where you can see the endpoint is incorrect.
In the JSON POST, FU is setting the bucket to www.biggytv.com, which does not exist. It should go to btv_upload_org.s3.amazonaws.com.
Your assistance is appreciated.
FORM CODE
<form action="https://www.biggytv.com/btvChannels/FSsubmit.php" id="qq-form" class="span6">
<div id="fineuploader-s3" class="span6"></div>
<div class="form-group span2">
<label>Program Title</label>
<input type = "text" class="span4" name="program_title" maxlength="200" autofocus="" required>
</div>
<div class="form-group span6">
<label>Program Description</label>
<textarea name="description" maxlength="1000" class="span8" id="count_me" placeholder="Please do not include links in the Program Description. Add any links to content on other sites as part of your profile." required></textarea>
</div>
<div class="form-group span5">
<label>Video Language</label>
<select name="vid_language" class="span4" required>
<option value=""></option>
<?php $language->data_seek(0);
while($language_row = $language->fetch_assoc()){
$languageid = $language_row['short'];
$languagename = $language_row['name'];
echo "<option value=\"$languageid\">
$languagename
</option>";
}?>
</select>
</div>
<div class="form-group span5">
<label>Target Audience</label>
<select name="Audience_DART1" class="span4" required>
<option value=""></option>
<?php $audience->data_seek(0);
while($audience_row = $audience->fetch_assoc()){
$audienceid = $audience_row['tar_aud_code'];
$audiencename = $audience_row['tar_aud_name'];
echo "<option value=\"$audienceid\">
$audiencename
</option>";
}?>
</select>
</div>
<div class="form-group span5">
<label>Genre</label>
<select name="genre" class="span4" required>
<option value=""></option>
<?php $genre->data_seek(0);
while($genre_row = $genre->fetch_assoc()){
$genreid = $genre_row['genre_id'];
$genrename = $genre_row['genre_name'];
echo "<option value=\"$genreid\">
$genrename
</option>";
}?>
</select>
</div>
<div class="form-group span5">
<label>Rating</label>
<select name="Rating_DART" class="span4" required>
<option value=""></option>
<?php $rating->data_seek(0);
while($rating_row = $rating->fetch_assoc()){
$dartid = $rating_row['dart_rating'];
$dartname = $rating_row['dart_rating_name'];
echo "<option value=\"$dartid\">
$dartname
</option>";
}?>
</select>
</div>
<input type="hidden" name="content_owner_name" value=<?php echo $content_owner_name;?>>
<input type="hidden" name="channel_id" value="801102">
<div class="span6 offset2">
<input id="submitButton" type="submit" value="Submit" class="span4">
</div>
</form>
QQ-FORM
jQuery(document).ready(function ($) {
  $('#fineuploader-s3').fineUploaderS3({
    request: {
      endpoint: "btv_upload_org.s3.amazonaws.com",
      accessKey: "XXXXXXXX"
    },
    template: "qq-template",
    signature: {
      endpoint: "/finesig/"
    },
    forceMultipart: {
      enabled: true
    },
    debug: true,
    cors: {
      expected: true
    },
    resume: {
      enabled: true
    },
    objectProperties: {
      acl: "public-read"
    },
    validation: {
      itemLimit: 1,
      sizeLimit: 150000000,
      acceptFiles: "video/mp4, video/quicktime, video/x-flv, video/x-ms-wmv",
      allowedExtensions: ["mp4", "mov", "flv", "wmv"]
    }
  });
});
POST JSON
{
  "expiration": "2014-10-14T00:11:08.742Z",
  "conditions": [
    { "acl": "public-read" },
    { "bucket": "www.biggytv.com" },
    { "Content-Type": "video/mp4" },
    { "success_action_status": "200" },
    { "key": "e001946c-a472-4412-9854-b44fede53e72.mp4" },
    { "x-amz-meta-program_title": "Marcus%20Johns" },
    { "x-amz-meta-description": "Marcus%20Johns" },
    { "x-amz-meta-vid_language": "en" },
    { "x-amz-meta-audience_dart1": "U" },
    { "x-amz-meta-genre": "6" },
    { "x-amz-meta-rating_dart": "G" },
    { "x-amz-meta-content_owner_name": "MarcusJohns" },
    { "x-amz-meta-channel_id": "801102" },
    { "x-amz-meta-qqfilename": "7920637590280230638.mp4" },
    ["content-length-range", "0", "150000000"]
  ]
}
Uploader Log
[Fine Uploader 5.0.8] Sending simple upload request for 0
[Fine Uploader 5.0.8] Submitting S3 signature request for 0
[Fine Uploader 5.0.8] Sending POST request for 0
POST https://www.biggytv.com/finesig/ 200 OK (552ms)
[Fine Uploader 5.0.8] Sending upload request for 0
POST https://www.biggytv.com/btvChannels/FSsubmit.php 200 OK (938ms)
[Fine Uploader 5.0.8] Received response status 200 with body: Array
(
[key] => 05a829e8-bea1-44f6-b35d-1f620f1043af.mp4
[AWSAccessKeyId] => XXXXXXXXX
[Content-Type] => video/mp4
[success_action_status] => 200
[acl] => public-read
[x-amz-meta-program_title] => Marcus%20John
[x-amz-meta-description] => Mar
[x-amz-meta-vid_language] => en
[x-amz-meta-audience_dart1] => U
[x-amz-meta-genre] => 6
[x-amz-meta-rating_dart] => G
[x-amz-meta-content_owner_name] => MarcusJohns
[x-amz-meta-channel_id] => 801102
[x-amz-meta-qqfilename] => 7920637590280230638.mp4
[policy] => eyJleHBpcmF0aW9uIjoiMjAxNC0xMC0xNFQwMDozNzoxNS44NjZaIiwiY29uZGl0aW9ucyI6W3siYWNsIjoicHVibGljLXJlYWQifSx7ImJ1Y2tldCI6Ind3dy5iaWdneXR2LmNvbSJ9LHsiQ29udGVudC1UeXBlIjoidmlkZW8vbXA0In0seyJzdWNjZXNzX2FjdGlvbl9zdGF0dXMiOiIyMDAifSx7ImtleSI6IjA1YTgyOWU4LWJlYTEtNDRmNi1iMzVkLTFmNjIwZjEwNDNhZi5tcDQifSx7IngtYW16LW1ldGEtcHJvZ3JhbV90aXRsZSI6Ik1hcmN1cyUyMEpvaG4ifSx7IngtYW16LW1ldGEtZGVzY3JpcHRpb24iOiJNYXIifSx7IngtYW16LW1ldGEtdmlkX2xhbmd1YWdlIjoiZW4ifSx7IngtYW16LW1ldGEtYXVkaWVuY2VfZGFydDEiOiJVIn0seyJ4LWFtei1tZXRhLWdlbnJlIjoiNiJ9LHsieC1hbXotbWV0YS1yYXRpbmdfZGFydCI6IkcifSx7IngtYW16LW1ldGEtY29udGVudF9vd25lcl9uYW1lIjoiTWFyY3VzSm9obnMifSx7IngtYW16LW1ldGEtY2hhbm5lbF9pZCI6IjgwMTEwMiJ9LHsieC1hbXotbWV0YS1xcWZpbGVuYW1lIjoiNzkyMDYzNzU5MDI4MDIzMDYzOC5tcDQifSxbImNvbnRlbnQtbGVuZ3RoLXJhbmdlIiwiMCIsIjE1MDAwMDAwMCJdXX0=
[signature] => d48Ibli+V6/RmN3bUUZKVJ0woew=
)
[Fine Uploader 5.0.8] Simple upload request succeeded for 0
Update
In the FU documentation you referred to, "Integrating with existing HTML forms", the "simple example" action is set to a script, not an S3 bucket; that is why I am confused.
However, I have configured my script in the manner of your answer.
When I set the action to the bucket, the uploadSuccess: {endpoint} POST does not include any of my form data, just key, uuid, name, bucket, and etag.
If I set the action to https://www.biggytv.com/btvChannels/FSsubmit.php, the form data is sent in the POST and FU returns a 200 saying the file was uploaded to www.biggytv.com, a bucket that does not exist.
So my issue is that the uploadSuccess: {endpoint} POST is not including any of my form data.
Thank you!
The action attribute in the form serves the same purpose as the request.endpoint option. You should leave the request.endpoint option off of your JavaScript configuration. The action attribute will define the S3 endpoint where your files will be uploaded. If you want the file to be sent to "btv_upload_org.s3.amazonaws.com", then that should be the value of the form's action attribute. Again, leave request.endpoint off your JavaScript config.
In your JavaScript config, you can (and must) define a signature.endpoint option. Fine Uploader will send a POST to this endpoint to sign each request before a request is sent to the S3 endpoint defined in the action attribute.
You may also define an uploadSuccess endpoint in your JavaScript config. Fine Uploader will POST to this location when the file is successfully in S3.
You can find all information regarding form support on the associated feature page.
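To illustrate, here is a sketch of the configuration this answer describes; the uploadSuccess path is an assumption based on the question's existing form handler:
// Form tag: <form action="https://btv_upload_org.s3.amazonaws.com" id="qq-form"> ...
// The action attribute, not request.endpoint, names the S3 endpoint.
jQuery(document).ready(function ($) {
  $('#fineuploader-s3').fineUploaderS3({
    request: {
      accessKey: "XXXXXXXX" // note: no endpoint property here
    },
    signature: {
      endpoint: "/finesig/" // Fine Uploader POSTs here to sign each request
    },
    uploadSuccess: {
      endpoint: "/btvChannels/FSsubmit.php" // assumption: your existing handler
    }
  });
});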
I need to wrap this up but I'm not sure how :-)
Basically, I'm using the jQuery Form plugin to upload a file to a WCF REST service.
The client looks like:
<script type="text/javascript">
$(function () {
$('form').ajaxForm(
{
url: '/_vti_bin/VideoUploadService/VideoUploadService.svc/Upload/theFileName',
type: 'post'
})
});
</script>
<form id="fileUploadForm" name="fileUploadForm" method="post" action="/_vti_bin/VideoUploadService/VideoUploadService.svc/Upload" enctype="multipart/form-data">
<input type="text" name="filename" />
<input type="file" id="postedFile" name="postedFile" />
<input type="submit" id="fileUploadSubmit" value="Do Upload" />
</form>
The relevant server snippets are:
[WebInvoke(UriTemplate = "Upload/{fileName}", ResponseFormat = WebMessageFormat.Json)]
void Upload(string fileName, Stream fileStream);

public void Upload(string fileName, Stream stream)
{
    // just write stream to disk...
}
The problem is that what I write to disk looks like this, not the content of the file (in my case, "0123456789"):
-----------------------------7dc7e131201ea
Content-Disposition: form-data; name="MSOWebPartPage_PostbackSource"
// bunch of same stuff here
-----------------------------7dc7e131201ea
Content-Disposition: form-data; name="filename"
fdfd
-----------------------------7dc7e131201ea
Content-Disposition: form-data; name="postedFile"; filename="C:\Inter\Garbage\dummy.txt"
Content-Type: text/plain
0123456789
-----------------------------7dc7e131201ea--
Question: should I manually parse what I get above and extract the last part, which corresponds to the uploaded file (not elegant)? Or is there a way to apply an attribute, content type, or other setting in order to get JUST the uploaded file's content in the stream?
I suspect there is, but I'm missing it... any help?
Thanks much!
Take a look at two articles about form file upload using ASP.NET Web API:
http://www.asp.net/web-api/overview/working-with-http/sending-html-form-data,-part-1
http://www.asp.net/web-api/overview/working-with-http/sending-html-form-data,-part-2
ASP.NET Web API already has classes (like MultipartFormDataStreamProvider) to parse multipart requests and also to save the data from them. Why not use these classes to solve your problem?
UPD: In the case of .NET 3.5 code and hosting:
I think the MultipartFormDataStreamProvider class and its assembly aren't available for .NET 3.5. In that case you should use some other library/class to parse the multipart body, or do it manually.
Try the following project (MIT license) and the file HttpMultipartParser.cs from it:
https://bitbucket.org/lorenzopolidori/http-form-parser/src