Show image from AWS S3 bucket

I can now upload JPGs from my Backpack for Laravel admin to AWS S3. Yahhh!!!
But how or where can I change this code:
$this->crud->addField([ // image
    'label' => "Produkt foto",
    'name' => "productfoto",
    'type' => 'image',
    'tab' => 'Produktfoto',
    'upload' => true,
    'crop' => true,
    'aspect_ratio' => 1,
    'disks' => 's3images' // This is not working ??
]);
so that it shows the URL from AWS S3 with my uploaded JPG (instead of the local public disk)?
I can't find any documentation or code examples for this :-(
Please help...

I don't know how much help this will be, but pictures sent to S3 are usually base64-encoded, so you need to decode them before you can store them in your S3 bucket. Here is how I handled it some time ago:
use Illuminate\Support\Facades\Storage;

$getId = $request->get('myId');
$encoded_data = $request->get('myphotodata'); // base64-encoded image from the request
$binary_data = base64_decode($encoded_data);
$filename_path = md5(time() . uniqid()) . ".jpg"; // unique file name
$directory = 'uploads';
Storage::disk('s3')->put($directory . '/' . $filename_path, $binary_data);
In summary, I'm assuming you have the right permissions on your bucket and are ready to put images onto your storage disk.
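Since the original question was about showing the S3 URL, here is a minimal sketch of that side as well, assuming the disk is named s3images as in the question's field definition and that productfoto stores the file's path on that disk (the accessor below is a made-up name for illustration, not Backpack API):

use Illuminate\Support\Facades\Storage;

// In the Product model: build the public S3 URL from the stored path.
// The accessor name here is hypothetical.
public function getProductfotoUrlAttribute()
{
    // Storage::url() builds the URL from the disk's configured bucket and region
    return Storage::disk('s3images')->url($this->productfoto);
}

In a Blade view you could then render it with <img src="{{ $product->productfoto_url }}">.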

Related

Access Custom S3 Metadata After Completing Multipart Upload

I want to access the custom metadata for an object uploaded via S3 multipart upload after firing off the completeMultipartUpload method.
I initiate a multipart S3 upload with some custom metadata added like so:
$response = $this->client->createMultipartUpload([
    'Bucket' => $this->bucket,
    'Key' => $key,
    'ContentType' => $type,
    'Expires' => 60,
    'Metadata' => [
        'file-guid' => $fileGuid,
    ],
]);
When I complete the multipart upload, I want to access the file-guid metadata and pass it along in my response:
$result = $this->client->completeMultipartUpload([
    'Bucket' => $this->bucket,
    'Key' => $key,
    'UploadId' => $uploadId,
    'MultipartUpload' => [
        'Parts' => $parts,
    ],
]);
$fileGuid = $result['?']; // Couldn't find the metadata in the result.
return response()->json(['file-guid' => $fileGuid]);
I've checked the S3 object after it's been uploaded and it shows the custom metadata, but I don't see how to access it. I assumed it would be part of the completeMultipartUpload response, but I'm not seeing it.
Any help would be appreciated. Thanks!
I found a solution, but it involves an additional request. If anyone knows of a way to access the metadata without making another request, that would be better.
$headObject = $this->client->headObject([
    'Bucket' => $this->bucket,
    'Key' => $key,
]);
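For completeness, the AWS SDK for PHP exposes user-defined metadata in that result under the Metadata key (keys come back lowercased), so the missing piece would look something like this:

// x-amz-meta-* headers are returned in the 'Metadata' array of the HeadObject result.
$fileGuid = $headObject['Metadata']['file-guid'] ?? null;

return response()->json(['file-guid' => $fileGuid]);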

Logstash current date logstash.conf as backup_add_prefix (s3 input plugin)

I want to add the current date to the filename of every file coming into my S3 bucket.
My current config looks like this:
input {
  s3 {
    access_key_id => "some_key"
    secret_access_key => "some_access_key"
    region => "some_region"
    bucket => "mybucket"
    interval => "10"
    sincedb_path => "/tmp/sincedb_something"
    backup_add_prefix => '%{+yyyy.MM.dd.HH}'
    backup_to_bucket => "mybucket"
    additional_settings => {
      force_path_style => true
      follow_redirects => false
    }
  }
}
Is there a way to use the current date in backup_add_prefix => '%{+yyyy.MM.dd.HH}'? The current syntax does not work; it produces "%{+yyyy.MM.dd.HH}test_file.txt" in my bucket.
Though it's not supported in the s3 input plugin directly, it can be achieved. Use the following steps:
1. Go to the logstash home path.
2. Open the file vendor/bundle/jruby/2.3.0/gems/logstash-input-s3-3.4.1/lib/logstash/inputs/s3.rb. The exact path will depend on your logstash version.
3. Look for the method backup_to_bucket. There is a line:
backup_key = "#{@backup_add_prefix}#{object.key}"
4. Add the following lines before the above line:
t = Time.new
date_s3 = t.strftime("%Y.%m.%d")
5. Change the backup_key line to:
backup_key = "#{@backup_add_prefix}#{date_s3}#{object.key}"
Now you are done. Restart your logstash pipeline and it should achieve the desired result.

reading a file from s3 bucket with laravel getting error

I'm trying to get a file from an S3 bucket using getObject:
$s3 = AWS::createClient('s3');
$file = $s3->getObject(array(
    'Bucket' => 'hotel4cast',
    'Key' => $path->path,
    'SaveAs' => public_path()
));
I'm getting the error below:
Error executing "GetObject" on "https://s3.amazonaws.com/mybucket/filename.xlsx";
AWS HTTP error: Unable to open /var/www/html/laravel/public/ using mode r+:
fopen(/var/www/html/laravel/public/): failed to open stream: Is a directory
If I take SaveAs out and dump $file, I get an object with data, body, stream, all that stuff, but I'm not sure what to do with it.
I have figured it out: SaveAs can't point at a bare directory. I was able to get the file to save by opening a writable file handle first and passing that to getObject:
$r = fopen(public_path() . '/myfile.xlsx', 'wb');
$s3 = AWS::createClient('s3');
$file = $s3->getObject(array(
    'Bucket' => 'bucketname',
    'Key' => $path->path,
    'SaveAs' => $r
));
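As a side note (my own sketch, not from the original answer): if you drop SaveAs entirely, the Body entry of the result is a stream, and you can write it out yourself:

$s3 = AWS::createClient('s3');
$file = $s3->getObject(array(
    'Bucket' => 'bucketname',
    'Key' => $path->path,
));

// Body is a stream object; casting it to string reads its full contents.
file_put_contents(public_path() . '/myfile.xlsx', (string) $file['Body']);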
Can you tell me what exactly these equal? Then I can guide you accordingly.
$path->path = ???
public_path() = ???
Edit:
Your method params should look like this. You are passing only the SaveAs directory without attaching a file name; add the file name to the SaveAs path and it will be downloaded.
$s3 = AWS::createClient('s3');
$file = $s3->getObject(array(
    'Bucket' => 'hotel4cast',
    'Key' => $path->path,
    'SaveAs' => public_path() . "/filename.xlsx"
));
Here are examples of the code I am using for uploading a file and for copying a file.
For uploading:
$result = $this->S3->putObject([
    'ACL' => 'public-read-write',
    'Bucket' => 'xyz', // REQUIRED
    'Key' => 'file.xlsx', // REQUIRED
    'SourceFile' => public_path() . "/xlsx/file.xlsx",
]);
For copying from one bucket to another:
$copy = $this->S3->copyObject(array(
    'ACL' => 'public-read-write',
    'Bucket' => 'xyz', // REQUIRED
    'Key' => 'file.xlsx', // REQUIRED
    'CopySource' => 'mybucketname/xlsx/file.xlsx',
));
But the file that exists in your S3 bucket must have read permission; otherwise SaveAs, copy, etc. will give you an error.
There are multiple permissions; you can see them here:
'ACL' => 'private|public-read|public-read-write|authenticated-read|aws-exec-read|bucket-owner-read|bucket-owner-full-control',

change content type of files that have been uploaded to amazon s3

I have uploaded thousands of images to Amazon S3 and I need to change their content type to an image type.
I know that I can set it when I call putObject:
$this->s3->putObject(array(
    'Bucket' => $this->s3_bucket,
    'Key' => $file_name,
    'Body' => file_get_contents($tmp_name),
    'ACL' => 'private',
    'ContentType' => 'image/jpeg'
));
But I need to do this for all the files that were uploaded before.
Thanks!
Here is the S3Object API:
http://docs.aws.amazon.com/AWSRubySDK/latest/AWS/S3/S3Object.html#copy_to-instance_method
Use the copy_to method and put content_type in the options hash; tested OK in Ruby.
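The same trick is available from PHP, which the rest of this thread uses: copy each object onto itself with a REPLACE metadata directive. A sketch, reusing the poster's client and bucket properties and assuming a hypothetical $keys array of the objects to fix:

foreach ($keys as $key) { // $keys: your own list of already-uploaded object keys
    $this->s3->copyObject(array(
        'Bucket' => $this->s3_bucket,
        'Key' => $key,
        'CopySource' => $this->s3_bucket . '/' . $key, // copy the object onto itself
        'MetadataDirective' => 'REPLACE', // rewrite metadata instead of copying it
        'ContentType' => 'image/jpeg',
    ));
}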

Preventing duplicates when seeding with existing image using Paperclip + Amazon S3

Every time I re-seed my database locally, duplicate images are being created in my Amazon S3 bucket. I think this is happening because I am not seeding correctly, but I don't know the proper way to do it. I've been using the method shown here. I'm using Rails 4, Ruby 2, paperclip 3.5.2, and aws-sdk 1.20.0.
You can see below in my seeds.rb file, I'm trying to set the image to the url of an image that has already been uploaded to the correct folder in my bucket. However, I think using open() here is causing a new, identical file to be saved to the same folder, usually something like http://s3.amazonaws.com/BUCKET_NAME/restaurants/images/1/original/open-uri20131111-22904-xvzitl.?1384211739.
EDIT: So my bucket will have both this file stored as well as http://s3.amazonaws.com/BUCKET_NAME/restaurants/images/1/original/NAME.jpg.
Would really appreciate any help!
model
has_attached_file :image,
:styles => { :medium => "300x300>", :thumb => "100x100>" }
seeds.rb
Restaurant.create!(
  name: ...,
  description: ...,
  image: open('https://s3.amazonaws.com/<BUCKET NAME>/restaurants/images/1/original/<NAME>.jpg')
)
config/initializers/paperclip.rb
Paperclip::Attachment.default_options[:storage] = :s3
Paperclip::Attachment.default_options[:s3_credentials] = {
:access_key_id => ENV['AWS_ACCESS_KEY_ID'],
:secret_access_key => ENV['AWS_SECRET_ACCESS_KEY']
}
Paperclip::Attachment.default_options[:bucket] = ENV['AWS_BUCKET']
Paperclip::Attachment.default_options[:url] = ":s3_path_url"
Paperclip::Attachment.default_options[:path] = "/:class/:attachment/:id/:style/:basename.:extension"
Paperclip::Attachment.default_options[:default_url] = "https://s3.amazonaws.com/<BUCKET NAME>/images/missing.png"
I'm pretty late to the party on this one, but I figure others may still be having the same problem. If you set the attachments on your models to nil before deleting them, Paperclip will delete them from S3.