Fine Uploader S3 - Return Uploaded URL on Complete callback

How can I get the full image URL of an uploaded image in my Amazon S3 bucket with Fine Uploader?
My JavaScript code is:
jQuery(document).ready(function () {
    jQuery("#fine-uploader").fineUploaderS3({
        debug: true,
        request: {
            endpoint: 'bucket.s3.amazonaws.com',
            accessKey: 'xxxxxxxx'
        },
        signature: {
            endpoint: 'end.php'
        },
        uploadSuccess: {
            endpoint: 'success.php'
        },
        iframeSupport: {
            localBlankPagePath: 'success.html'
        },
        retry: {
            enableAuto: true
        },
        validation: {
            allowedExtensions: ['jpeg', 'jpg', 'png'],
            sizeLimit: 1048576
        }
    }).on('complete', function (event, id, name, response) {
        console.log(response.tempLink);
    });
});
UPDATE
Following the S3 demo, I am using response.tempLink and just trying to log it to the console; I will use it later on. The upload always works fine, but my console returns an undefined response every time.
From finding this q&a: having trouble displaying an image uploaded to Amazon s3 by fine-uploader
It seems like my IAM user/policy settings and the $serverPublicKey and $serverPrivateKey values might be the cause. My setup is:
An exact copy of this file for my end.php file:
https://github.com/Widen/fine-uploader-server/blob/master/php/s3/s3demo-cors.php
with the following changes:
// changed to match the secret access key for the FIRST IAM user as discussed in the docs
$clientPrivateKey = 'user_secret_key...';
// bucket name
$expectedBucketName = "my.bucket.name";
// changed to match the access and secret of the SECOND IAM 'server' user
$serverPublicKey = 'server_user_access_key...';
$serverPrivateKey = 'server user secret key...';
// updated to my website
function handlePreflightedRequest() {
header('Access-Control-Allow-Origin: http://www.mywebsite.com');
}
In my Amazon IAM console I have my SECOND IAM 'server' user set up as:
Group:
grp-server
Group Policy: (is GetObject the correct action?)
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my.bucket.name/*"
    }]
}
Or I've tried the following, which gives full admin access, just to check:
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "*",
        "Resource": "*"
    }]
}
User:
user-server
added to grp-server (which inherits group policy)
user-server access key becomes $serverPublicKey in end.php
user-server secret key becomes $serverPrivateKey in end.php
Am I missing anything from this?

Have your uploadSuccess.endpoint return a pre-signed URL that you then handle in the onComplete handler. Note that pre-signed URLs can only be generated server-side. See this PHP server for details.
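For illustration only, here is a rough Node sketch (using the v2 aws-sdk, as elsewhere on this page) of what a success endpoint has to do; the linked PHP demo does the equivalent, and the tempLink key is assumed to match what the client reads in its complete handler:
// Hypothetical success-endpoint helper: generate a short-lived GET URL for the
// object that was just uploaded and hand it back as JSON.
const AWS = require('aws-sdk');
const s3 = new AWS.S3({ region: 'us-east-1' }); // assumed region

// `bucket` and `key` would come from the POST body Fine Uploader sends to uploadSuccess.
function buildSuccessResponse(bucket, key, callback) {
    s3.getSignedUrl('getObject', { Bucket: bucket, Key: key, Expires: 600 }, (err, url) => {
        if (err) return callback(err);
        callback(null, { tempLink: url }); // the client then reads response.tempLink in onComplete
    });
}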

Related

AWS S3 getBucketLogging fails when called from lambda function

I am trying, in an AWS Lambda, to get the bucket logging settings for my buckets. For this I enumerate the buckets with S3.listBuckets(), which works just fine. I then iterate over the bucket names like this (TypeScript):
const bucketNames = await getBucketNames() // <- works without problems
for (const bucketName of bucketNames) {
    try {
        console.log(`get logging for bucket ${bucketName}`) // <-- getting to this log
        const bucketLogging: GetBucketLoggingOutput = await s3.getBucketLogging({
            Bucket: bucketName,
            ExpectedBucketOwner: accountId
        }).promise()
        // check logging setup and adjust if necessary
    } catch (error) {
        console.log(JSON.stringify(error))
    }
}
The call to getBucketLogging() fails
{
    "message": "Access Denied",
    "code": "AccessDenied",
    "region": null,
    "time": "2022-07-19T11:16:26.671Z",
    "requestId": "****",
    "extendedRequestId": "****",
    "statusCode": 403,
    "retryable": false,
    "retryDelay": 70.19937788683632
}
The accountId that is passed in is definitely right (it's optional anyway); the lambda is in the same account as the bucket owner (which is the sole condition described in the docs at https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#getBucketLogging-property).
When making this call from the CLI in a terminal I have no problem getting results; it only fails when running from a Lambda.
What am I missing or overlooking?
You should make sure to attach the respective IAM permissions to your Lambda function's execution role. Just because it can call ListBuckets (the s3:ListAllMyBuckets permission) doesn't mean it is also permitted to do the same for the BucketLogging information. Please refer to the following docs for more details on S3 IAM actions: https://docs.aws.amazon.com/service-authorization/latest/reference/list_amazons3.html
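For example, a policy statement along these lines (how tightly you scope Resource is up to you) grants the missing permission; s3:GetBucketLogging is the IAM action behind getBucketLogging(), and s3:ListAllMyBuckets covers the bucket listing:
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLogging"],
        "Resource": "*"
    }]
}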

"Access key does not exist" when generating pre-signed S3 URL from Lambda function

I'm trying to generate a presigned URL from within a Lambda function, to get an existing S3 object.
(The Lambda function runs an ExpressJS app, and the code to generate the URL is called on one of its routes.)
I'm getting an error "The AWS Access Key Id you provided does not exist in our records." when I visit the generated URL, though, and Google isn't helping me:
<Error>
    <Code>InvalidAccessKeyId</Code>
    <Message>The AWS Access Key Id you provided does not exist in our records.</Message>
    <AWSAccessKeyId>AKIAJ4LNLEBHJ5LTJZ5A</AWSAccessKeyId>
    <RequestId>DKQ55DK3XJBYGKQ6</RequestId>
    <HostId>IempRjLRk8iK66ncWcNdiTV0FW1WpGuNv1Eg4Fcq0mqqWUATujYxmXqEMAFHAPyNyQQ5tRxto2U=</HostId>
</Error>
The Lambda function is defined via AWS SAM and given bucket access via the predefined S3CrudPolicy template:
ExpressLambdaFunction:
  Type: AWS::Serverless::Function
  Properties:
    FunctionName: ExpressJSApp
    Description: Main website request handler
    CodeUri: ../lambda.zip
    Handler: lambda.handler
    [SNIP]
    Policies:
      - S3CrudPolicy:
          BucketName: my-bucket-name
The URL is generated via the AWS SDK:
const router = require('express').Router();
const AWS = require('aws-sdk');

router.get('/', (req, res) => {
    const s3 = new AWS.S3({
        region: 'eu-west-1',
        signatureVersion: 'v4'
    });
    const params = {
        'Bucket': 'my-bucket-name',
        'Key': 'my-file-name'
    };
    s3.getSignedUrl('getObject', params, (error, url) => {
        res.send(`<p>${url}</p>`)
    });
});
What's going wrong? Do I need to pass credentials explicitly when calling getSignedUrl() from within a Lambda function? Doesn't the function's execution role supply those? Am I barking up the wrong tree?
tl;dr: Make sure you have the correct order of signature v4 headers/form data in your request.
I had the exact same issue.
I am not sure if this is the solution for everyone who is encountering the problem, but I learned the following:
The error message, and other misleading error messages, can occur if you don't use the correct order of security headers. In my case I was using the endpoint to create a presigned URL for POSTing a file to upload it. In that case, you need to make sure that the security-relevant fields in your form data are in the correct order. For signatureVersion 's3v4' it is:
key
x-amz-algorithm
x-amz-credential
x-amz-date
policy
x-amz-security-token
x-amz-signature
In the special case of a POST request to a presigned URL to upload a file, it's important that your file comes AFTER the security data.
After that, the request works as expected.
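As a rough sketch (presigned here stands for whatever your endpoint returns, i.e. the URL plus the signed fields; the names are illustrative):
// Append the signed fields first, in the order given above, then the file last,
// before POSTing the form to the presigned URL.
const form = new FormData();
for (const [name, value] of Object.entries(presigned.fields)) {
    form.append(name, value);
}
form.append('file', fileBlob); // the file must come AFTER the security fields

fetch(presigned.url, { method: 'POST', body: form });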
I can't say for certain but I'm guessing this may have something to do with you using the old SDK. Here it is w/ v3 of the SDK. You may need to massage it a little more.
const { getSignedUrl } = require("@aws-sdk/s3-request-presigner");
const { S3Client, GetObjectCommand } = require("@aws-sdk/client-s3");

// ...
const client = new S3Client({ region: 'eu-west-1' });
const params = {
    'Bucket': 'my-bucket-name',
    'Key': 'my-file-name'
};
const command = new GetObjectCommand(params);
getSignedUrl(client, command).then((url) => {
    res.send(`<p>${url}</p>`);
});

500 Internal Server Error when uploading to an AWS S3 bucket from Nuxt

I am trying to upload a file to AWS S3 using aws-sdk v3 from a Nuxt app's Vue Component.
Here's how I upload it.
<script>
export default {
    ...
    methods: {
        onSubmit(event) {
            event.preventDefault()
            this.addPhoto()
        },
        addPhoto() {
            // Load the required clients and packages
            const { CognitoIdentityClient } = require('@aws-sdk/client-cognito-identity')
            const { fromCognitoIdentityPool } = require('@aws-sdk/credential-provider-cognito-identity')
            const {
                S3Client,
                PutObjectCommand,
                ListObjectsCommand,
                DeleteObjectCommand,
            } = require('@aws-sdk/client-s3')
            const REGION = 'us-east-1' // REGION
            const albumBucketName = 'samyojya-1'
            const IdentityPoolId = 'XXXXXXX'
            const s3 = new S3Client({
                region: REGION,
                credentials: {
                    accessKeyId: this.$config.CLIENT_ID,
                    secretAccessKey: this.$config.CLIENT_SECRET,
                    sessionToken: localStorage.getItem('accessToken'),
                },
            })
            var file = this.formFields[0].fieldName
            var fileName = this.formFields[0].fieldName.name
            var photoKey = 'user-dp/' + fileName
            var s3Response = s3.send(
                new PutObjectCommand({
                    Bucket: albumBucketName,
                    Key: photoKey,
                    Body: file,
                }),
            )
            s3Response
                .then((response) => {
                    console.log('Successfully uploaded photo.' + JSON.stringify(response))
                })
                .catch((error) => {
                    console.log(
                        'There was an error uploading your photo: Error stacktrace' + JSON.stringify(error.message),
                    )
                    const { requestId, cfId, extendedRequestId } = error.$metadata
                    console.log({ requestId, cfId, extendedRequestId })
                })
        },
        ...
    },
}
</script>
The issue now is that the browser complains about CORS.
This is my CORS configuration on AWS S3
I'm suspecting something about how the upload request is created using the SDK (I'm open to using an API that is better than what I'm using).
A Nuxt setting that allows CORS.
Something else in the S3 CORS config under Permissions.
The Network tab in Chrome dev tools shows an Internal Server Error (500) for the preflight. (Don't know why we see 2 entries here.)
Appreciate any pointers on how to debug this.
I was having the same issue today. The S3 logs were saying it returned a 200 code response, but Chrome was seeing a 500 response. In Safari, the error showed up as:
received 'us-west-1'; expected 'eu-west-1'
Adding region: 'eu-west-1' (i.e. the region where the bucket was created) to the parameters when creating the S3 service solved the issue for me.
https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/setting-region.html#setting-region-constructor
In the bucket policy use this
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": [
                "s3:GetObjectAcl",
                "s3:GetObject",
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*",
            "Condition": {
                "StringLike": {
                    "aws:Referer": "https://example/*"
                }
            }
        }
    ]
}
and use the region of your bucket
const s3 = new aws.S3({
    apiVersion: 'latest',
    accessKeyId: process.env.AWS_ACCESS_KEY_ID_CUSTOM,
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY_CUSTOM,
    region: 'us-west-1',
})
I am having the same problem, but according to the docs you should be using Cognito Identity to access the bucket. In v3, for clients to be able to access buckets from the browser, you must use Cognito Identity to authenticate users in order to get access to bucket/object commands. I'm currently trying to implement this, so I'm not 100% sure how to do it, just the process. Feel free to take a look; I hope this helps.
Cognito SDK link: https://docs.aws.amazon.com/cognito/latest/developerguide/what-is-amazon-cognito.html
Example: https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/loading-browser-credentials-cognito.html
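For what it's worth, a minimal sketch of that approach, wiring a Cognito identity pool into the S3 client instead of hard-coding keys (the question already imports these packages; the region and pool ID below are placeholders):
const { CognitoIdentityClient } = require('@aws-sdk/client-cognito-identity')
const { fromCognitoIdentityPool } = require('@aws-sdk/credential-provider-cognito-identity')
const { S3Client } = require('@aws-sdk/client-s3')

const REGION = 'us-east-1' // placeholder
const s3 = new S3Client({
    region: REGION,
    // Credentials are resolved per request from the Cognito identity pool,
    // so no access key / secret ends up in the browser bundle.
    credentials: fromCognitoIdentityPool({
        client: new CognitoIdentityClient({ region: REGION }),
        identityPoolId: 'us-east-1:XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX', // placeholder
    }),
})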
The error needs to be fixed on the backend, since it's CORS: it clearly states that the Access-Control-Allow-Origin header is missing.
So, checking the official AWS docs gives you the answer: https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-cors.html
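For reference, a bucket CORS configuration (Permissions > CORS) that allows browser uploads typically looks something like this; the origin below is a placeholder for your site:
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET", "PUT", "POST", "DELETE", "HEAD"],
        "AllowedOrigins": ["https://www.example.com"],
        "ExposeHeaders": ["ETag"]
    }
]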
I was doing multiple things wrong here. Every answer on this post helped me make a little progress while debugging. Can't thank you enough!
My bucket policy was not using role-based ALLOW/DENY corresponding to the authenticated role of my Cognito identity pool.
I needed to correctly configure the authentication provider as a Cognito user pool.
Make sure the region is right; the Cognito region could be different from the S3 region.
Make sure the CORS policy includes relevant information like "Access-Control-Allow-Origin".
Double-check that the token includes the right credentials. This comes in very handy: cognito decode-verify
I was testing stand-alone from the browser, but that is not a good approach. Use an API server to take the file and push it to S3 from there.

slingshot meteor s3 error

I'm afraid I don't understand how this is supposed to work at all. How does Slingshot know the address of my S3 bucket? Is this completely determined by the access keys?
This is the code I have in my server/files.js:
var imageDetails = new Mongo.Collection('images');
Slingshot.fileRestrictions("myImageUploads", {
allowedFileTypes: ["image/png", "image/jpeg", "image/gif"],
maxSize: 2 * 1024 * 1024,
});
Slingshot.createDirective("myImageUploads", Slingshot.S3Storage, {
AWSAccessKeyId: "AWSAccessKeyId",
AWSSecretAccessKey: "AWSSecretAccessKey",
bucket: "mybucketname",
acl: "public-read",
region: "us-west-1",
authorize: function () {
if (!this.userId) {
var message = "Please login before posting images";
throw new Meteor.Error("Login Required", message);
}
return true;
},
key: function (file) {
var currentUserId = Meteor.user().emails[0].address;
return currentUserId + "/" + file.name;
}
});
And this is my settings.json file
{
    "AWSAccessKeyId": "my access key",
    "AWSSecretAccessKey": "my secret access key",
    "AWSBucket": "mybucketname"
}
I get this error in my browser:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://mybucketname.s3-us-west-1.amazonaws.com/. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing).
But I have a CORS configuration in my theportdata bucket.
The first step, I guess: is there any way to check whether my application is making contact with my S3 bucket at all? Like I said, I don't really understand how Slingshot finds the bucket.
SOLVED
Changed "region: us-west-1" to "region: us-west-2" and it works.
There is also no need for the AWSAccessKeyId and AWSSecretAccessKey, since Slingshot finds these automatically from settings.json.
Apparently all that's needed for an address is the bucket name and the region.
https://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
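So, roughly, the directive from the question reduces to something like this (just a sketch of the accepted changes; credentials are picked up from settings.json as noted above):
Slingshot.createDirective("myImageUploads", Slingshot.S3Storage, {
    bucket: "mybucketname",  // Slingshot builds the bucket URL from this...
    region: "us-west-2",     // ...and this; no AWSAccessKeyId/AWSSecretAccessKey needed here
    acl: "public-read",
    authorize: function () {
        if (!this.userId) {
            throw new Meteor.Error("Login Required", "Please login before posting images");
        }
        return true;
    },
    key: function (file) {
        return Meteor.user().emails[0].address + "/" + file.name;
    }
});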

Rails - Uploading large files directly to S3 with Jquery File Upload (hosted on Heroku )

I'm using Heroku, which means I have to upload multiple large files directly to S3. I'm using Rails 3.2.11 and Ruby 1.9.3. I do not wish to use the carrierwave or paperclip gems, or really change much at this point - I just need to get what I have working.
Before trying to move to S3, if I ran my app locally, I could upload multiple large files to the local file system. When I ran it on Heroku, small files uploaded but large ones failed. Hence the switch to S3.
I tried several tweaks, and also the link below, but it's just too much of a change to what I already have working with the local server's file system (and on Heroku as well, but Heroku just can't handle large files).
Tried: https://devcenter.heroku.com/articles/direct-to-s3-image-uploads-in-rails
I've tried some of the other examples here on Stack Overflow but they are too much of a change for what works locally, and well, I don't grasp everything they are doing.
Now, what happens when I do try to upload images?
It's as if the file upload works - the preview images are successfully created - but nothing is ever uploaded to Amazon S3, and I don't receive any kind of error message (like an S3 authentication failure or anything... nothing).
What do I need to change in order to get the files over to my s3 storage, and what can I write out to console to detect problems, if any, connecting to my s3?
My form:
<%= form_for @status do |f| %>
    {A FEW HTML FIELDS USED FOR A DESCRIPTION OF THE FILES - NOT IMPORTANT FOR THE QUESTION}
    File: <input id="fileupload" multiple="multiple" name="image"
        type="file" data-form-data = <%= @s3_direct_post.fields %>
        data-url = <%= @s3_direct_post.url %>
        data-host = <%= URI.parse(@s3_direct_post.url).host %> >
    <%= link_to 'submit', "#", :id => 'submit', :remote => true %>
<% end %>
My jQuery is:
....
$('#fileupload').fileupload({
    formData: {
        batch: createUUID(),
        authenticity_token: $('meta[name="csrf-token"]').attr('content')
    },
    dataType: 'json',
    acceptFileTypes: /(\.|\/)(gif|jpe?g|png)$/i,
    maxFileSize: 5000000, // 5 MB
    previewMaxWidth: 400,
    previewMaxHeight: 400,
    previewCrop: true,
    add: function (e, data) {
        tmpImg.src = URL.createObjectURL(data.files[0]); // create image preview
        $('#' + fn + '_inner').append(tmpImg);
        ...
My controller:
def index
    # it's in the index just to simplify getting it working
    @s3_direct_post = S3_BUCKET.presigned_post(key: "uploads/#{SecureRandom.uuid}/${filename}", success_action_status: '201', acl: 'public-read')
end
The element that is generated for the form is (via Inspect Element):
<input id="fileupload" multiple="multiple" name="image"
data-form-data="{"key"=>"uploads/34a64607-8d1b-4704-806b-159ecc47745e/${filename}"," "success_action_status"="
>"201"," "acl"=">"public-read"," "policy"=">"[encryped stuff - no need to post]","
"x-amz-credential"=">"
[AWS access key]/[some number]/us-east-1/s3/aws4_request"
," "x-amz-algorithm"=">"AWS4-HMAC-SHA256"
," "x-amz-date"=">"20150924T234656Z"
," "x-amz-signature"=">"[some encrypted stuff]"}"
data-url="https://nunyabizness.s3.amazonaws.com" data-host="nunyabizness.s3.amazonaws.com" type="file">
Help!
With S3 there actually is no easy out-of-the-box solution for uploading files, because Amazon is a rather complex instrument.
I had a similar issue back in the day and spent two weeks trying to figure out how S3 works, and I now use a working solution for uploading files to S3. I can tell you the solution that works for me; I never tried the one proposed by Heroku.
My plugin of choice is Plupload, since it is the only component I actually managed to get working, apart from simple direct S3 uploads via XHR, and it offers percentage indicators and in-browser image resizing, which I find absolutely mandatory for production applications where some users have 20 MB images they want to upload as their avatar.
Some basics in S3:
Step 1
The Amazon bucket needs the correct configuration in its CORS file to allow external uploads in the first place. The Heroku tutorial already told you how to put the configuration in the right place.
http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
Step 2
Policy data is needed, otherwise your client will not be able to access the corresponding bucket file. I find it better to generate policies via Ajax calls, so that, for example, an admin gets the ability to upload files into the folders of different users.
In my example, cancan is used to manage security for the given user and figaro is used to manage ENV variables.
def aws_policy_image
    user = User.find_by_id(params[:user_id])
    authorize! :upload_image, current_user

    options = {}
    bucket = Rails.configuration.bucket
    access_key_id = ENV["AWS_ACCESS_KEY_ID"]
    secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
    options[:key] ||= "users/" + params[:user_id] # folder on AWS to store file in
    options[:acl] ||= 'private'
    options[:expiration_date] ||= 10.hours.from_now.utc.iso8601
    options[:max_filesize] ||= 10.megabytes
    options[:content_type] ||= 'image/' # Videos would be binary/octet-stream
    options[:filter_title] ||= 'Images'
    options[:filter_extentions] ||= 'jpg,jpeg,gif,png,bmp'

    policy = Base64.encode64(
        "{'expiration': '#{options[:expiration_date]}',
            'conditions': [
                {'x-amz-server-side-encryption': 'AES256'},
                {'bucket': '#{bucket}'},
                {'acl': '#{options[:acl]}'},
                {'success_action_status': '201'},
                ['content-length-range', 0, #{options[:max_filesize]}],
                ['starts-with', '$key', '#{options[:key]}'],
                ['starts-with', '$Content-Type', ''],
                ['starts-with', '$name', ''],
                ['starts-with', '$Filename', '']
            ]
        }").gsub(/\n|\r/, '')

    signature = Base64.encode64(
        OpenSSL::HMAC.digest(
            OpenSSL::Digest::Digest.new('sha1'),
            secret_access_key, policy)).gsub("\n", "")

    render :json => {:access_key_id => access_key_id, :policy => policy, :signature => signature, :bucket => bucket}
end
I went as far as putting this method into the application controller, although you could find a better place for it.
The path to this function should be put into the routes, of course.
Step 3
Frontend: get Plupload (http://www.plupload.com/) and make a link to act as the upload button:
<a id="upload_button" href="#">Upload</a>
Make a script that configures the plupload initialization.
function Plupload(config_x, access_key_id, policy, signature, bucket) {
    var $this = this;

    $this.config = $.extend({
        key: 'error',
        acl: 'private',
        content_type: '',
        filter_title: 'Images',
        filter_extentions: 'jpg,jpeg,gif,png,bmp',
        select_button: "upload_button",
        multi_selection: true,
        callback: function (params) {
        },
        add_files_callback: function (up, files) {
        },
        complete_callback: function (params) {
        }
    }, config_x);

    $this.params = {
        runtimes: 'html5',
        browse_button: $this.config.select_button,
        max_file_size: $this.config.max_file_size,
        url: 'https://' + bucket + '.s3.amazonaws.com/',
        flash_swf_url: '/assets/plupload/js/Moxie.swf',
        silverlight_xap_url: '/assets/plupload/js/Moxie.xap',
        init: {
            FilesRemoved: function (up, files) {
                /*if (up.files.length < 1) {
                    $('#' + config.select_button).fadeIn('slow');
                }*/
            }
        },
        multi_selection: $this.config.multi_selection,
        multipart: true,
        // resize: {width: 1000, height: 1000}, // currently causes "blob" problem
        multipart_params: {
            'acl': $this.config.acl,
            'Content-Type': $this.config.content_type,
            'success_action_status': '201',
            'AWSAccessKeyId': access_key_id,
            'x-amz-server-side-encryption': "AES256",
            'policy': policy,
            'signature': signature
        },
        // Resize images on clientside if we can
        resize: {
            preserve_headers: false, // (!)
            width: 1200,
            height: 1200,
            quality: 70
        },
        filters: [
            {
                title: $this.config.filter_title,
                extensions: $this.config.filter_extentions
            }
        ],
        file_data_name: 'file'
    };

    $this.uploader = new plupload.Uploader($this.params);
    $this.uploader.init();

    $this.uploader.bind('UploadProgress', function (up, file) {
        $('#' + file.id + ' .percent').text(file.percent + '%');
    });

    // before upload
    $this.uploader.bind('BeforeUpload', function (up, file) {
        // optional: regen the filename, otherwise the user will upload image.jpg that will overwrite each other
        var extension = file.name.split('.').pop();
        var file_name = extension + "_" + (+new Date);
        up.settings.multipart_params.key = $this.config.key + '/' + file_name + '.' + extension;
        up.settings.multipart_params.Filename = $this.config.key + '/' + file_name + '.' + extension;
        file.name = file_name + '.' + extension;
    });

    // shows error object in the browser console (for now)
    $this.uploader.bind('Error', function (up, error) {
        console.log('Expand the error object below to see the error. Use WireShark to debug.');
        alert_x(".validation-error", error.message);
    });

    // files added
    $this.uploader.bind('FilesAdded', function (up, files) {
        $this.config.add_files_callback(up, files, $this.uploader);
        // p(uploader);
        // uploader.start();
    });

    // when file gets uploaded
    $this.uploader.bind('FileUploaded', function (up, file) {
        $this.config.callback(file);
        up.refresh();
    });

    // when all files are uploaded
    $this.uploader.bind('UploadComplete', function (up, file) {
        $this.config.complete_callback(file);
        up.refresh();
    });
}

Plupload.prototype.init = function () {
    //
}
Step 4
The implementation of the general multi-purpose file uploader function:
ImageUploader = {
    init: function (user_id, config, callback) {
        $.ajax({
            type: "get",
            url: "/aws_policy_image",
            data: {user_id: user_id},
            error: function (request, status, error) {
                alert(request.responseText);
            },
            success: function (msg) {
                // set aws credentials
                callback(config, msg);
            }
        });
    },

    // local functions
    photo_uploader: function (user_id) {
        var container = "#photos .unverified_images"; // for example
        var can_render = false;
        this.init(user_id,
            {
                select_button: "upload_photos",
                callback: function (file) {
                    file.aws_id = file.id;
                    file.id = "0";
                    file.album_title = "userpics"; // I use this param to manage photo directory
                    file.user_id = user_id;
                    //console.log(file);
                    [** your ajax code here that saves the image object in the database via file variable you get here **]
                },
                add_files_callback: function (up, files, uploader) {
                    $.each(files, function (index, value) {
                        // do something like adding a progress bar html
                    });
                    uploader.start();
                },
                complete_callback: function (files) {
                    can_render = true;
                }
            }, function (config, msg) {
                config.key = "users/" + user_id;
                // Most important part:
                window.photo_uploader = new Plupload(config, msg.access_key_id, msg.policy, msg.signature, msg.bucket);
            });
    }
}
The can_render variable is useful so that the application only re-renders the page when the uploader is actually done.
And to make the button work from somewhere else, call:
ImageUploader.photo_uploader(user_id);
And the button will act as a Plupload uploader button.
What is important is that the policy is made in a way that nobody can upload a photo into someone else's directory.
It would be great to have a version that does the same thing not via Ajax callbacks but with web hooks; this is something I want to do in the future.
Again, this is not a perfect solution, but something that, in my experience, works well enough for uploading images and videos to Amazon.
A note in case someone asks why I have this complex object-oriented structure of uploader objects: the reason is that my application has many different kinds of uploaders that behave differently, and they need an initializer with common behavior. The way I did it, I can write an initializer for, say, videos with a minimal amount of code that will do similar things to the existing image uploader.