Objective-C MongoDB Driver to Meteor CollectionFS - objective-c

I have written an Objective-C app that can write files to my Meteor MongoDB database. Using RadMongoDB (https://github.com/timburks/RadMongoDB) I write an image over to my Mongo GridFS .files and .chunks collections.
//Defining RadMongoDB
RadMongoDB *rad = [[RadMongoDB alloc] init];
//Connection Dictionary
NSDictionary *connection = @{
    @"host" : @"127.0.0.1",
    @"port" : [NSNumber numberWithInt:3002]};
int num = [rad connectWithOptions:connection];
[rad writeFile:path2 withMIMEType:@"image/png" inCollection:@"contacts" inDatabase:@"meteor"];
The image (path2) is successfully written to GridFS. In my Meteor MongoDB shell I can see the files listed.
(.chunks and .files screenshots omitted)
These GridFS files are linked to a CollectionFS (https://github.com/CollectionFS/Meteor-CollectionFS) collection containing a variety of other pictures that were inserted via a Meteor app. The problem is pulling out the image written by the driver using CollectionFS. It is apparent that files written to GridFS by the driver never get picked up by the file handlers. I therefore tried re-forcing all of the files through the handlers (a CollectionFS filehandler reset), but that still didn't work (JavaScript below; note that ContactsFS is the CollectionFS collection corresponding to the GridFS contacts collection).
//Reset
ContactsFS.find({}).forEach(function(doc) {
ContactsFS.update({ _id: doc._id}, { $set: { handledAt: null, fileHandler: {} } });
});
//Set Completed to True
ContactsFS.update(fileRecord, {$set: {complete: true}});
I have come to the conclusion that the way the driver interacts with GridFS is very different from how Meteor and CollectionFS read and write to it. Is there any way to fix this? I am desperate for help, thanks!
EDIT:
After setting the uploaded file's complete flag to true, the file handler attempts to act on the driver-inserted file. However, I now receive a server-side error (error screenshot omitted).
I believe this is because of how CollectionFS reads the GridFS file. The GridFS image's data is stored by the Obj-C driver as a Uint8Array (as seen in screenshot 1). I have tried setting each parameter on the Obj-C driver's image so that CollectionFS will be happy:
ContactsFS.update(imageid, {$set: {handledAt: null}});
ContactsFS.update(imageid, {$set: {uploadDate: date}});
ContactsFS.update(imageid, {$set: {countChunks: 1}});
ContactsFS.update(imageid, {$set: {numChunks: 1}});
ContactsFS.update(imageid, {$set: {length: len}});
ContactsFS.update(imageid, {$set: {uploading: false}});
ContactsFS.update(imageid, {$set: {encoding: encode}});
//Setting complete to True will send the file through the filehandlers
ContactsFS.update(imageid, {$set: {complete: true}});
Still nothing. How do I get around this problem?

Try this:
var len = "" + fileRecord.plength;
var chunkSize = 256 * 1024; // set this to whatever chunk size RadMongoDB is using
var chunkCount = Math.ceil(fileRecord.plength / chunkSize);
ContactsFS.update(imageid, {$set: {
handledAt: null,
uploadDate: Date.now(),
countChunks: chunkCount,
numChunks: chunkCount,
length: len,
uploading: false,
encoding: 'binary'
}});
ContactsFS.update(imageid, {$set: {complete: true}});
Requires the fix discussed in this issue, too.
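If you are not sure what chunk size and length the driver actually used, you can read them straight off the GridFS records before setting the fields above. A quick check from the meteor mongo shell (a sketch; 'image1.png' is a placeholder filename):
// Inspect what the Obj-C driver wrote, so countChunks and length match the real GridFS data
var f = db.contacts.files.findOne({ filename: 'image1.png' });
var chunks = db.contacts.chunks.count({ files_id: f._id });
print('length: ' + f.length + ', chunkSize: ' + f.chunkSize + ', chunks: ' + chunks);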

Related

upload preset must be specified when using unsigned upload cloudinary

I am trying to upload files directly from my front-end (Angular 8) using the Cloudinary API URL, but I keep getting the same bad request (400) and the same error "Upload preset must be whitelisted for unsigned uploads". I have tried different solutions, like providing the preset name in the FormData and setting the preset to unsigned in my Cloudinary settings, but it still does not work. Is there any solution?
my upload code :
const images = new FormData();
images.append('images', file);
images.append('upload_preset', [presetName]);
this.progressBar = true
const req = new HttpRequest('POST', 'https://api.cloudinary.com/v1_1/[cloudName]/image/upload', images,
{
reportProgress: true,
});
this.http.request(req).subscribe(event => {
if (event.type === HttpEventType.UploadProgress) {
const percentDone = Math.round(100 * event.loaded / event.total);
console.log(`File is ${percentDone}% uploaded.`);
} else if (event instanceof HttpResponse) {
console.log('File is completely uploaded!');
}
});
The "Upload preset must be whitelisted for unsigned uploads" error means that the preset you are using is marked for Signed uploads. Since you are not performing an authenticated API call, i.e. using a signature, the upload preset must be set as Unsigned. If you haven't already, go to the Settings -> Upload tab in your account and verify that the Signing Mode is set to Unsigned for the preset you are trying to use.
In addition, I see that you are passing a parameter called 'images'. This is not a valid parameter for the Upload API. Please update that to "file".
const data = new FormData();
data.append("file", file);
data.append("upload_preset", "default-preset");

Access encodingResult when uploading with Alamofire 5

I'm trying to update my app to Alamofire 5 and having difficulties due to the hack-ish way I'm using it, I guess.
Anyhow, I need background uploads, and Alamofire is not really designed for this. Even so, I was using it to create a properly formatted file containing the multipart form data so I can hand it to the OS to upload in the background later.
I'll post the code doing this in Alamofire 4; my question is how I can get the URL of the file I was previously getting from encodingResult.
// We're not actually going to upload photo via alamofire. It does not offer support for background uploads.
// Still we can use it to create a request and more importantly properly formatted file containing multipart form
Api.alamofire.upload(
multipartFormData: { multipartFormData in
multipartFormData.append(imageData, withName: "photo[image]", fileName: filename, mimeType: "image/jpg")
},
to: "http://", // if we give it a real url sometimes alamofire will attempt the first upload. I don't want to let it get to our servers but it fails if I feed it ""
usingThreshold: UInt64(0), // force alamofire to always write to file no matter how small the payload is
method: .post,
headers: Api.requestHeaders,
encodingCompletion: { encodingResult in
switch encodingResult {
case .success(let alamofireUploadTask, _, let url):
alamofireUploadTask.suspend()
defer { alamofireUploadTask.cancel() }
if let alamofireUploadFileUrl = url {
// we want to own the multipart file to avoid alamofire deleting it when we tell it to cancel its task
let fileUrl = ourFileUrl
do {
try FileManager.default.copyItem(at: alamofireUploadFileUrl, to: fileUrl)
// use the file we just created for a background upload
} catch {
}
}
case .failure:
// alamofire failed to encode the request file for some reason
break
}
}
)
Multipart encoding is fully integrated into the now-asynchronous request pipeline in Alamofire 5, which means there is no separate encoding step to hook into. However, you can use the MultipartFormData type directly, just as you would in the request closure.
import Alamofire

let data = MultipartFormData()
data.append(Data(), withName: "dataName")
// encode() returns the formatted multipart body, which you can write to your own file for the background upload
let encodedBody = try data.encode()

Correct code to upload local file to S3 proxy of API Gateway

I created an API to work with S3 and imported the Swagger template. After deployment, I tested it from a Node.js project using the npm module aws-api-gateway-client.
It works well for listing buckets, getting bucket info, getting one item, putting a bucket, and putting a plain-text object; however, I am blocked on putting a binary file.
Firstly, I made sure the ACL allows all permissions on S3. Secondly, binary support was also added for:
image/gif
application/octet-stream
The code snippet is as below. The behaviors are:
1) After invokeApi, the callback function is never hit; after some time, the Node.js project stops responding, with no error message at all, even though the file (such as an image) is very small.
2) On a couple of occasions the upload seemed to work, but the resulting file is bigger (around 2 MB bigger) than the original, so the file is corrupt.
Could you help me out? Thank you!
var filepathname = './items/';
var filename = 'image1.png';
fs.stat(filepathname+filename, function (err, stats) {
var fileSize = stats.size ;
fs.readFile(filepathname+filename,'binary',function(err,data){
var len = data.length;
console.log('file len' + len);
var pathTemplate = '/my-test-bucket/' +filename ;
var method = 'PUT';
var params = {
folder: '',
item:''
};
var additionalParams = {
headers: {
'Content-Type': 'application/octet-stream',
//'Content-Type': 'image/gif',
'Content-Length': len
}
};
var result1 = apigClient.invokeApi(params,pathTemplate,method,additionalParams,data)
.then(function(result){
//never hit :(
console.log(result);
}).catch( function(result){
//never hit :(
console.log(result);
});
});
});
We encountered the same problem. API Gateway is meant for limited payloads (10 MB as of now); the limits are listed here:
http://docs.aws.amazon.com/apigateway/latest/developerguide/limits.html
Pre-signed URL to S3:
Create an S3 pre-signed URL from the Lambda or the endpoint where you are trying to post.
How do I put object to amazon s3 using presigned url?
Now POST the image directly to S3.
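As a rough sketch of that flow with the aws-sdk v2 package (bucket, key, and file paths below are placeholders):
// Lambda/endpoint side: hand a short-lived pre-signed PUT URL to the client
const AWS = require('aws-sdk');
const fs = require('fs');
const https = require('https');
const s3 = new AWS.S3();
const uploadUrl = s3.getSignedUrl('putObject', {
  Bucket: 'my-test-bucket',
  Key: 'uploads/image1.png',
  ContentType: 'application/octet-stream',
  Expires: 300 // seconds
});
// Client side: PUT the raw bytes straight to S3, bypassing the API Gateway payload limit
const req = https.request(uploadUrl, {
  method: 'PUT',
  headers: { 'Content-Type': 'application/octet-stream' }
}, (res) => console.log('S3 responded with', res.statusCode));
req.end(fs.readFileSync('./items/image1.png'));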
Pre-signed POST:
If you want to post additional properties along with the image, you can send it as multipart form data as well:
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#createPresignedPost-property
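A sketch of the pre-signed POST variant (again aws-sdk v2; bucket and key are placeholders):
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
s3.createPresignedPost({
  Bucket: 'my-test-bucket',
  Fields: { key: 'uploads/image1.png' },
  Conditions: [['content-length-range', 0, 10 * 1024 * 1024]],
  Expires: 300
}, (err, data) => {
  if (err) return console.error(err);
  // data.url and data.fields are what the client submits as multipart/form-data,
  // with the file appended last under the field name "file"
  console.log(data.url, data.fields);
});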
If you want to process the file after it is delivered to S3, you can create an S3 trigger on object creation and process it with your Lambda or any endpoint that needs to process it.
Hope it helps.

Files downloaded from Amazon S3 using Knox and Node.js are corrupt

I'm using knox to access my Amazon S3 bucket for file storage. I'm storing all kinds of files - mostly MS Office documents and PDFs, but they could be binary or any other kind. I'm also using Express 4.13.3 and busboy with connect-busboy for streaming support; uploads are handled with busboy and streamed directly to S3 via knox, avoiding having to write them to local disk first.
The files upload fine (I can browse and download them manually using Transmit) but I'm having problems downloading.
For clarity I don't want to write the file to local disk, instead keeping it in an in-memory buffer. Here's the code I'm using to handle the GET request:
// instantiate a knox object
var s3client = knox.createClient({
key: config.AWS.knox.key,
secret: config.AWS.knox.secret,
bucket: config.AWS.knox.bucket,
region: config.AWS.region
});
var buffer = undefined;
s3client.get(path+'/'+fileName)
.on('response', function(s3res){
s3res.setEncoding('binary');
s3res.on('data', function(chunk){
buffer += chunk;
});
s3res.on('end', function() {
buffer = new Buffer(buffer, 'binary');
var fileLength = buffer.length;
res.attachment(fileName);
res.append('Set-Cookie', 'fileDownload=true; path=/');
res.append('Content-Length', fileLength);
res.status(s3res.statusCode).send(buffer);
});
}).end();
The file downloads to the browser - I'm using John Culviner's jquery.fileDownload.js - but what is downloaded is corrupt and can't be opened. As you can see I'm using express' .attachment to set the headers for mime type and .append for the additional headers (using .set instead makes no difference).
When the file downloads in Chrome I see the message 'Resource interpreted as Document but transferred with MIME type application/vnd.openxmlformats-officedocument.spreadsheetml.sheet:' (for an Excel file), so express is setting the header correctly, and the size of the file downloaded matches that I see when examining the bucket.
Any ideas what's going wrong?
Looks like the contents might not be being sent to the browser as binary. Try something like the following:
if (s3Res.headers['content-type']) {
res.type( s3Res.headers['content-type'] );
}
res.attachment(fileName);
s3Res.setEncoding('binary');
s3Res.on('data', function(data){
res.write(data, 'binary');
});
s3Res.on('end', function() {
res.end(); // finish the response once the S3 stream is done
});
It will also send the data one chunk at a time as it comes in, so it should be a bit more memory efficient.
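If you don't need to hold the file in memory at all, a shorter variant is to pipe the S3 response straight to the client (a sketch assuming s3client, path, and fileName are set up as in the question; the knox response is an ordinary readable stream):
s3client.get(path + '/' + fileName)
  .on('response', function (s3Res) {
    if (s3Res.headers['content-type']) {
      res.type(s3Res.headers['content-type']);
    }
    res.attachment(fileName);
    res.append('Set-Cookie', 'fileDownload=true; path=/');
    res.status(s3Res.statusCode);
    s3Res.pipe(res); // no intermediate buffer, no encoding conversion
  })
  .end();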

Rails - Uploading large files directly to S3 with Jquery File Upload (hosted on Heroku )

I'm using Heroku, which means I have to upload multiple large files to S3 directly. I'm using Rails 3.2.11 and Ruby 1.9.3. I do not wish to use the carrierwave or paperclip gems, or really change much at this point - I just need to get what I have working.
Before trying to move to S3, if I ran my app locally I could upload multiple large files to the local file system. When I ran it on Heroku, small files uploaded but large ones failed. Hence the switch to S3.
I tried several tweaks, and also the link below, but it's just too much of a change from what I already have working with the local server's file system (and on Heroku as well, except Heroku just can't handle large files).
Tried: https://devcenter.heroku.com/articles/direct-to-s3-image-uploads-in-rails
I've tried some of the other examples here on Stack Overflow but they are too much of a change for what works locally, and well, I don't grasp everything they are doing.
Now, what happens when I do try to upload images?
It's as if the file upload works - the preview images are successfully created - but nothing is ever uploaded to Amazon S3, and I don't receive any kind of error message (like an S3 authentication failure or anything; nothing).
What do I need to change in order to get the files over to my s3 storage, and what can I write out to console to detect problems, if any, connecting to my s3?
My form:
<%= form_for @status do |f| %>
{A FEW HTML FIELDS USED FOR A DESCRIPTION OF THE FILES - NOT IMPORTANT FOR THE QUESTION}
File:<input id="fileupload" multiple="multiple" name="image"
type="file" data-form-data = <%= @s3_direct_post.fields%>
data-url= <%= @s3_direct_post.url %>
data-host =<%=URI.parse(@s3_direct_post.url).host%> >
<%= link_to 'submit', "#", :id=>'submit' , :remote=>true%>
<% end %>
My jquery is:
....
$('#fileupload').fileupload({
formData: {
batch: createUUID(),
authenticity_token:$('meta[name="csrf-token"]').attr('content')
},
dataType: 'json',
acceptFileTypes: /(\.|\/)(gif|jpe?g|png)$/i,
maxFileSize: 5000000, // 5 MB
previewMaxWidth: 400,
previewMaxHeight: 400,
previewCrop: true,
add: function (e, data) {
tmpImg.src = URL.createObjectURL(data.files[0]) ; // create image preview
$('#'+ fn + '_inner' ).append(tmpImg);
...
My controller:
def index
#it's in the index just to simplify getting it working
@s3_direct_post = S3_BUCKET.presigned_post(key: "uploads/#{SecureRandom.uuid}/${filename}", success_action_status: '201', acl: 'public-read')
end
The element that is generated for the form is (via Inspect Element):
<input id="fileupload" multiple="multiple" name="image"
data-form-data="{"key"=>"uploads/34a64607-8d1b-4704-806b-159ecc47745e/${filename}"," "success_action_status"="
>"201"," "acl"=">"public-read"," "policy"=">"[encryped stuff - no need to post]","
"x-amz-credential"=">"
[AWS access key]/[some number]/us-east-1/s3/aws4_request"
," "x-amz-algorithm"=">"AWS4-HMAC-SHA256"
," "x-amz-date"=">"20150924T234656Z"
," "x-amz-signature"=">"[some encrypted stuff]"}"
data-url="https://nunyabizness.s3.amazonaws.com" data-host="nunyabizness.s3.amazonaws.com" type="file">
Help!
With S3 there actually are no easy out-of-the-box solutions for uploading files, because Amazon is a rather complex instrument.
I had a similar issue back in the day and spent two weeks trying to figure out how S3 works, and now use a working solution for uploading files to S3. I can tell you the solution that works for me; I never tried the one proposed by Heroku.
My plugin of choice is Plupload, since it is the only component I actually managed to get working, apart from simple direct S3 uploads via XHR. It offers percentage indicators and in-browser image resizing, which I find completely mandatory for production applications, where some users have 20 MB images that they want to upload as their avatar.
Some basics in S3:
Step 1
The Amazon bucket needs the correct configuration in its CORS file to allow external uploads in the first place. The Heroku tutorial already told you how to put the configuration in the right place.
http://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html
Step 2
Policy data is needed, otherwise your client will not be able to access the corresponding bucket file. I find policy generation is better done via Ajax calls so that, for example, an admin gets the ability to upload files into the folders of different users.
In my example, cancan is used to manage security for the given user and figaro is used to manage ENV variables.
def aws_policy_image
user = User.find_by_id(params[:user_id])
authorize! :upload_image, current_user
options = {}
bucket = Rails.configuration.bucket
access_key_id = ENV["AWS_ACCESS_KEY_ID"]
secret_access_key = ENV["AWS_SECRET_ACCESS_KEY"]
options[:key] ||= "users/" + params[:user_id] # folder on AWS to store file in
options[:acl] ||= 'private'
options[:expiration_date] ||= 10.hours.from_now.utc.iso8601
options[:max_filesize] ||= 10.megabytes
options[:content_type] ||= 'image/' # Videos would be binary/octet-stream
options[:filter_title] ||= 'Images'
options[:filter_extentions] ||= 'jpg,jpeg,gif,png,bmp'
policy = Base64.encode64(
"{'expiration': '#{options[:expiration_date]}',
'conditions': [
{'x-amz-server-side-encryption': 'AES256'},
{'bucket': '#{bucket}'},
{'acl': '#{options[:acl]}'},
{'success_action_status': '201'},
['content-length-range', 0, #{options[:max_filesize]}],
['starts-with', '$key', '#{options[:key]}'],
['starts-with', '$Content-Type', ''],
['starts-with', '$name', ''],
['starts-with', '$Filename', '']
]
}").gsub(/\n|\r/, '')
signature = Base64.encode64(
OpenSSL::HMAC.digest(
OpenSSL::Digest::Digest.new('sha1'),
secret_access_key, policy)).gsub("\n", "")
render :json => {:access_key_id => access_key_id, :policy => policy, :signature => signature, :bucket => bucket}
end
I went as far as to put this method into the application controller, although you could find a better place for it.
Path to this function should be put into the route, of course.
Step 3
Frontend: get Plupload (http://www.plupload.com/) and make a link to act as the upload button:
<a id="upload_button" href="#">Upload</a>
Make a script that configures the plupload initialization.
function Plupload(config_x, access_key_id, policy, signature, bucket) {
var $this = this;
$this.config = $.extend({
key: 'error',
acl: 'private',
content_type: '',
filter_title: 'Images',
filter_extentions: 'jpg,jpeg,gif,png,bmp',
select_button: "upload_button",
multi_selection: true,
callback: function (params) {
},
add_files_callback: function (up, files) {
},
complete_callback: function (params) {
}
}, config_x);
$this.params = {
runtimes: 'html5',
browse_button: $this.config.select_button,
max_file_size: $this.config.max_file_size,
url: 'https://' + bucket + '.s3.amazonaws.com/',
flash_swf_url: '/assets/plupload/js/Moxie.swf',
silverlight_xap_url: '/assets/plupload/js/Moxie.xap',
init: {
FilesRemoved: function (up, files) {
/*if (up.files.length < 1) {
$('#' + config.select_button).fadeIn('slow');
}*/
}
},
multi_selection: $this.config.multi_selection,
multipart: true,
// resize: {width: 1000, height: 1000}, // currently causes "blob" problem
multipart_params: {
'acl': $this.config.acl,
'Content-Type': $this.config.content_type,
'success_action_status': '201',
'AWSAccessKeyId': access_key_id,
'x-amz-server-side-encryption': "AES256",
'policy': policy,
'signature': signature
},
// Resize images on clientside if we can
resize: {
preserve_headers: false, // (!)
width: 1200,
height: 1200,
quality: 70
},
filters: [
{
title: $this.config.filter_title,
extensions: $this.config.filter_extentions
}
],
file_data_name: 'file'
};
$this.uploader = new plupload.Uploader($this.params);
$this.uploader.init();
$this.uploader.bind('UploadProgress', function (up, file) {
$('#' + file.id + ' .percent').text(file.percent + '%');
});
// before upload
$this.uploader.bind('BeforeUpload', function (up, file) {
// optional: regen the filename, otherwise the user will upload image.jpg that will overwrite each other
var extension = file.name.split('.').pop();
var file_name = extension + "_" + (+new Date);
up.settings.multipart_params.key = $this.config.key + '/' + file_name + '.' + extension;
up.settings.multipart_params.Filename = $this.config.key + '/' + file_name + '.' + extension;
file.name = file_name + '.' + extension;
});
// shows error object in the browser console (for now)
$this.uploader.bind('Error', function (up, error) {
console.log('Expand the error object below to see the error. Use WireShark to debug.');
alert_x(".validation-error", error.message);
});
// files added
$this.uploader.bind('FilesAdded', function (up, files) {
$this.config.add_files_callback(up, files, $this.uploader);
// p(uploader);
// uploader.start();
});
// when file gets uploaded
$this.uploader.bind('FileUploaded', function (up, file) {
$this.config.callback(file);
up.refresh();
});
// when all files are uploaded
$this.uploader.bind('UploadComplete', function (up, file) {
$this.config.complete_callback(file);
up.refresh();
});
}
Plupload.prototype.init = function () {
//
}
Step 4
The implementation of the general multi-purpose file uploader function:
ImageUploader = {
init: function (user_id, config, callback) {
$.ajax({
type: "get",
url: "/aws_policy_image",
data: {user_id: user_id},
error: function (request, status, error) {
alert(request.responseText);
},
success: function (msg) {
// set aws credentials
callback(config, msg);
}
});
},
// local functions
photo_uploader: function (user_id) {
var container = "#photos .unverified_images" // for example;
var can_render = false;
this.init(user_id,
{
select_button: "upload_photos",
callback: function (file) {
file.aws_id = file.id;
file.id = "0";
file.album_title = "userpics"; // I use this param to manage photo directory
file.user_id = user_id;
//console.log(file);
[** your ajax code here that saves the image object in the database via file variable you get here **]
},
add_files_callback: function (up, files, uploader) {
$.each(files, function (index, value) {
// do something like adding a progress bar html
});
uploader.start();
},
complete_callback: function (files) {
can_render = true;
}
}, function (config, msg) {
config.key = "users/" + user_id;
// Most important part:
window.photo_uploader = new Plupload(config, msg.access_key_id, msg.policy, msg.signature, msg.bucket);
});
}
};
The can_render variable is useful so that the application only re-renders the page when the uploader is actually done.
And to make the button work from somewhere else call:
ImageUploader.photo_uploader(user_id);
And the button will act as a Plupload uploader button.
What is important is that the policy is made in a way so that no one can upload a photo into someone else's directory.
It would be great to have a version that does the same thing not via Ajax callbacks but with web hooks; this is something I want to do in the future.
Again, this is not a perfect solution, but something that works well enough in my experience for the purpose of uploading images and videos to Amazon.
In case someone asks why I have this complex object-oriented structure of uploader objects: the reason is that my application has many different kinds of uploaders that behave differently, and they need an initializer with common behavior. The way I did it, I can write an initializer for, say, videos, with a minimal amount of code that does similar things to the existing image uploader.