Amazon S3 Prawn PDF - ruby-on-rails-3

Upon adding an image to my Prawn document and trying to pull that image from Amazon S3 storage, I get the following error:
ArgumentError (http://s3.amazonaws.com/briefbucket/photos/2/small/259823_1583726693707_1851950185_973122_7126850_n.jpg?1326839482 not found):
However, when I look in my storage folder, the JPG is there. I noticed that the file name Prawn ends up with is "jpg?1326839482".
Any help would be appreciated.

Alright, I had the same issue today. I'm using Amazon S3 and loading images uploaded by users. The solution is as follows:
require 'open-uri' # lets open() fetch remote URLs

if @user.avatar? # in case the user didn't upload anything
  image open(@user.avatar(:small).to_s.sub(/\?.+\Z/, ''))
end
The following
.to_s.sub(/\?.+\Z/, '')
is used to get rid of everything after the "?".
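For example, with a placeholder file name:
"photo.jpg?1326839482".sub(/\?.+\Z/, '') # => "photo.jpg"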
Before I moved to Amazon I was not using open (from open-uri), which was what caused the issue.
Let me know if this helps.

I'm not sure I fully understand, but I think you need to URL-encode your file name (see encoding).
So your encoded file would be something like:
http://s3.amazonaws.com/briefbucket/photos/2/small/259823_1583726693707_1851950185_973122_7126850_n.jpg%3F1326839482
The ? character designates the beginning of the query string.

Related

How to upload images to AWS S3 with CKEditor 5

I am trying to upload an image to an S3 bucket from CKEditor 5. My front end is built on Vue and my backend is Node.js. The uploading of images is working as expected, as I can see the image being saved to the S3 bucket correctly. However, I am confused about whether the bucket should be public or not.
How does CKEditor handle image uploading?
CKEditor uses the simple upload adapter, a built-in adapter that enables the image-uploading feature. When an image is dropped or copy-pasted into CKEditor, it makes an HTTP POST request to my backend Node.js server, and the server in turn makes a call to S3 to upload that image (up to this point everything works as expected).
Now, in order to embed an image inside CKEditor 5, the server should respond with a url attribute in a JSON response like the following, so that CKEditor can fetch the image and display it inside the editor:
{
  "url": "url-path-of-image" // full path of the image in the S3 bucket
}
This is where I am confused and need some pointers.
Question 1:
Should I make it public? If yes, what do I do about security, since making it public will give access to anybody?
If I instead make it accessible only with a key/secret, how do I do that?
Question 2:
This one is related to question 1.
If I make the bucket public then question 2 is not an issue. However, if I am not allowed to make it public, how would I display the images in a normal div element? Later on I need to display the content of CKEditor inside a div, with HTML parsing, i.e. inside a v-html attribute.
Any suggestions or pointers would be very helpful. Thanks for taking the time to read through the question.
I am not sure this is the full solution, as I am still looking into it myself, but I can tell you that it is related to signed URLs in AWS S3. You keep the bucket private, upload the image to S3, and generate a signed URL whenever you need to display the image to the user on the frontend.
If you learn more about the correct implementation, please let me know.
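For what it's worth, a minimal sketch of generating such a signed URL with the Ruby aws-sdk-s3 gem (the bucket and key names are placeholders; the Node.js SDK offers the equivalent via getSignedUrl):

require 'aws-sdk-s3'

s3 = Aws::S3::Resource.new(region: 'us-east-1')
obj = s3.bucket('my-private-bucket').object('uploads/image.jpg') # placeholder names

# Presigned GET URL, valid for 15 minutes; the bucket itself stays private.
url = obj.presigned_url(:get, expires_in: 900)

Your upload endpoint would return a URL like this in the JSON response shown above and regenerate it whenever the image needs to be rendered again.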

aws-sdk delete_object deletes entire bucket contents?

Learning the ropes of AWS S3 and the SDK, trying to use it in my Rails app via the aws-sdk gem. I was following this post:
Remove entire object directory tree using AWS-SDK
Luckily for me, I was playing around in my staging bucket. I created a folder test and uploaded an example.jpg image. From the Rails console I ran (note the lack of {} brackets):
s3.delete_object(bucket: 'mystagingbucket', key: '/test/example.jpg')
=> #<struct Aws::S3::Types::DeleteObjectOutput delete_marker=true, version_id=nil, request_charged=nil>
Then I went back into the web console to find my entire bucket empty. All my previously uploaded files, static assets, etc. were gone.
So I decided to turn versioning on and try this again to duplicate the issue. After some Googling I saw that the docs show the {} brackets.
Now I get:
s3.delete_object({bucket: 'mystagingbucket', key: '/test/example.jpg'})
=> #<struct Aws::S3::Types::DeleteObjectOutput delete_marker=true, version_id="blohAaXlnG2.RERnj_JT3zvQmAr8io48", request_charged=nil>
except that nothing happens and the file is not deleted. I have done some more Googling, and I now see that it can take up to a few hours for deletes to actually show up, so I will check back later on the recent delete attempts.
As for the bucket contents getting erased, am I missing something here?
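For reference, a minimal sketch with the Ruby aws-sdk of what a delete against a versioned bucket actually does: delete_object only inserts a delete marker, and list_object_versions shows both the markers and the surviving versions (note the key has no leading slash here, which is the usual convention):

require 'aws-sdk-s3'

s3 = Aws::S3::Client.new(region: 'us-east-1')

# On a versioning-enabled bucket this inserts a delete marker
# instead of removing any data.
s3.delete_object(bucket: 'mystagingbucket', key: 'test/example.jpg')

# Inspect what is really in the bucket, including delete markers.
resp = s3.list_object_versions(bucket: 'mystagingbucket', prefix: 'test/')
resp.delete_markers.each { |m| puts "marker:  #{m.key} (#{m.version_id})" }
resp.versions.each       { |v| puts "version: #{v.key} (#{v.version_id})" }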

Cloud Storage Transfer "Failed"

I've tried repeatedly to use the Google Developers Console tools to Create a Transfer that works, but haven't had any luck. My source is in S3.
I tried with the "S3://" URL, but when trying to accept the transfer settings I consistently get "source bucket doesn't exist". I tested my URL by placing it in a browser, and it does resolve, so I don't know what's up.
Even more puzzling is when I try using a text file of URLs. These URLs are all http:// strings, and each of them loads properly in a browser. I figured this would be even more straightforward, as there are really no permissions to deal with, since each file in the S3 bucket already has read permissions.
Instead, all I get in the Transfer history is "Failed", with no other information at all.
At first, I was greedy and included all my files. When I got nowhere with that, I cut it down to a single file. Still no go.
Here is the text file.
Any clues, por favor?
It looks like your text file doesn't follow the specified format. You should add the header and the size/MD5 of each file, as described at https://cloud.google.com/storage/transfer/#urls
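For reference, the URL list is a tab-separated file with a version header, where each line carries the URL, the file size in bytes, and the base64-encoded MD5 of the file (the values below are placeholders, and the columns are separated by tabs):

TsvHttpData-1.0
http://example.com/photos/img1.jpg	544735	BtP/bsbneYDTTp7PUFNu2Q==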

How to implement XML-safe private Amazon S3 URLs?

On my photography website, I am storing photos on Amazon S3. To actually display them on the website, I am using signed URLs. This means that image URLs expire. Only the web application itself is able to generate valid image file URLs.
An example URL would look like this:
http://media.jungledragon.com/images/1849/21346_small.JPG?AWSAccessKeyId=05GMT0V3GWVNE7GGM1R2&Expires=1411603210&Signature=9MMO3zEXECtvB0w%2FuMEN8obt1ow%3D
Note that by the time you read this, that URL may have already expired. That's ok, the question is about the format.
Whilst the above URL format works fine on the website, it breaks XML files. The reason for this is the & character, which should be escaped.
For example, I'm trying to implement Windows 8.1 live tiles for the website, which you can link to an RSS feed. My RSS feed is here:
http://www.jungledragon.com/all/rss/promoted
That feed will work in most RSS readers; however, the Windows 8 tile builder (http://www.buildmypinnedsite.com/en) is particularly strict about the XML being valid. Here you can see the error it throws on said feed:
http://notifications.buildmypinnedsite.com/?feed=http://www.jungledragon.com/all/rss/promoted&id=1
Now, my simple thinking was to encode the & characters that are part of the signed URLs as &amp; or &#38;. Whilst that may make the XML valid, unfortunately S3 does not accept the & being encoded. When used like that, the image will no longer load.
I'm wondering whether I am in a circular problem that cannot be solved?
I have had many similar problems with RSS feeds. XML documents should always use &amp; (or an equivalent like &#38;). If a reader is not capable of extracting the URL properly, then the reader is the culprit, not you. But I can tell you that reader programmers will disagree with you.
If you are a programmer, you could fix the problem by having a redirect, but that's a bit of work. You'd retrieve the URL from S3, save it in your database, and create a URL on your website such as http://www.jungledragon.com/images/123, linking the S3 URL with your images/123 page. Now when someone goes to the images/123 page, you retrieve the URL you saved from your S3 server.
Actually, if the URL http://www.jungledragon.com/images/123 is a reference to your image, you can get the S3 URL at that time and do the redirect on the fly!
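A minimal Rails-flavored sketch of that on-the-fly redirect (the route, controller, model, and signed_s3_url helper are all hypothetical; the same idea works in any framework):

# config/routes.rb
get '/images/:id', to: 'images#show'

# app/controllers/images_controller.rb
class ImagesController < ApplicationController
  def show
    photo = Photo.find(params[:id]) # hypothetical model holding the S3 key
    # Generate a fresh signed S3 URL on each request and redirect to it,
    # so the feed only ever contains the stable /images/:id URL.
    redirect_to photo.signed_s3_url, allow_other_host: true # flag needed on Rails 7+
  end
end

This keeps the expiring signed URL out of the XML entirely.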

Loading dynamically generated KML into google maps api

I have a bit of an issue with loading a dynamically generated KML into the Google Maps API.
The KML file is generated by Oracle and is of the format
http://server/oracleservice.method?parm1=100&parm2=100
If I try to load that URL (encoded or decoded) I always get a KMLLayerStatus of INVALID_DOCUMENT.
If I save the resulting file to a local file with a .kml extension it works fine; otherwise I get errors.
I even tried renaming the file to .xml and .dat (arbitrary names) and they all fail. It seems the Google API needs the file to have a .kml extension. This will not work in the dynamic environment. Can anybody suggest a way forward?
Thanks,
PS: I need to use the Google Maps API; I cannot use OpenLayers or any other solution. The file needs to be loaded into a google.maps.KmlLayer object.
I did this; the extension doesn't matter, but you have to set the MIME type on the HTTP response: https://developers.google.com/kml/documentation/kml_tut#kml_server
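A minimal sketch of serving dynamically generated KML with the correct MIME type, shown here as a Rails action since the serving stack isn't specified (generate_kml is a hypothetical helper that builds the KML string from the Oracle data):

class KmlController < ApplicationController
  def show
    kml = generate_kml(params) # hypothetical: builds the KML document
    # google.maps.KmlLayer cares about this Content-Type,
    # not about the URL ending in .kml.
    send_data kml,
              type: 'application/vnd.google-earth.kml+xml',
              disposition: 'inline'
  end
end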