Paperclip with multiple server instances - ruby-on-rails-3

I am using Paperclip in RoR, and I am having some trouble showing the images. Sometimes the images are shown and other times they are not. Has anybody experienced something like this?

If you are using local paths to save the images, Paperclip will save them on the server that processes the request.
Subsequent requests to show the image will work if they hit the same server where it was saved, and fail if the request is processed by another server.
To avoid that, you should use shared storage, such as S3.
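A minimal Paperclip S3 configuration might look like the sketch below (the :avatar attachment, bucket name, and environment variable names are placeholders, not taken from the question):

# app/models/user.rb -- assumes the aws-sdk gem is in the Gemfile
class User < ActiveRecord::Base
  has_attached_file :avatar,
    storage: :s3,
    s3_credentials: {
      bucket:            ENV['S3_BUCKET'],
      access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
      secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
    },
    path: ':class/:attachment/:id_partition/:style/:filename'
end

With shared storage like this, it no longer matters which app server handled the original upload: every instance reads and writes the same bucket.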

Related

Reprocessing S3 asset with Paperclip

Background:
I have implemented user-defined cropping on image uploads roughly as per Ryan Bates' Railscast #182.
This works when set to the :file storage method, but not when set to :s3. S3 storage was working fine before adding the intermediate cropping step.
From the server log, it appears to be looking for the source file locally:
[paperclip] An error was received while processing: #<Paperclip::Errors::NotIdentifiedByImageMagickError: /profiles/pictures/000/001/543/original/headshot.jpg is not recognized by the 'identify' command.>
This file is present on S3, but not locally by this point, as the upload is processed before being cropped (as well as after).
My question:
How can I bring the file down from S3 to the local server before the second process step?
N.B. I have looked at other answers on SO already.
Paperclip looking for file locally for reprocessing when using S3 – seems relevant, but the only answer refers to downgrading Paperclip. I can’t do that, and besides, that answer is neither upvoted nor accepted.
Error reprocessing in Paperclip 2.3.5 – this is about an older version of Paperclip.
Other thoughts:
It has occurred to me that another approach would be to store the file locally until it has been cropped, and then use DelayedJob or something similar to upload it to S3 later on. This will be more work though, so I’d rather avoid it for now.
In order to better understand what's happening, it would help to see your model setup. Specifically, I'm looking for the "has_attached_file" configuration.
Just to cover the basics of what I'm looking for, here's an example:
has_attached_file :picture,
  # path: optional, the default is fine
  url: ':s3_alias_url',
  s3_protocol: 'https',
  s3_host_alias: 'cdn.<something>.com', # or 's3.amazonaws.com/bucketname/'
  storage: :s3,
  s3_credentials: Proc.new { |a| a.instance.credentials }
When you reprocess an image, it should be brought down into a temp file and processed there, then reuploaded with these settings.
Based on the profiles/pictures/000/001/543/original/headshot.jpg path, it almost looks like it's grabbing your path variable but not going to your S3 bucket to get that image, so I would check the storage value specifically.
With more info, I can update my answer appropriately.
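For the original question of pulling the file back down before the cropping pass, one possible approach (a rough sketch, not something the answer above confirms) is to copy the attachment to a local temp file with Paperclip's copy_to_local_file and reassign it; the Profile model and recrop! method below are illustrative assumptions:

require 'tempfile'

class Profile < ActiveRecord::Base
  has_attached_file :picture,
    storage: :s3,
    s3_credentials: Proc.new { |a| a.instance.credentials }

  # Sketch: download the original from S3, then reassign it so the
  # cropping/processing step runs against a local copy.
  def recrop!
    tmp = Tempfile.new(['picture', File.extname(picture_file_name)])
    tmp.binmode
    picture.copy_to_local_file(:original, tmp.path) # pull the file down from S3
    self.picture = File.open(tmp.path)              # regenerates the styles locally
    save!
  ensure
    tmp.close
    tmp.unlink
  end
end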

Orchard CMS 1.7 Media with S3 Storage

Wanting to use Orchard 1.7 with Media storage on S3 (as I'm deploying to AppHarbor)
So far I'm looking at the S3 Storage provider, but it's a bit out of date.
Has anyone done this? Is there a better way to use S3 with the new media manager?
I've got images uploading to S3, but they don't display when I click the folder.
Here is the Gist of my updated S3Provider.
It's missing methods for create file, rename folder, get file, and get storage path; any help on how to complete these would be appreciated. However, stepping through the debugger in VS, this doesn't seem to be the root cause of the image display issue above.
Edit
Looks like the file is uploading to S3 but not to the database, due to the GetFile method throwing an error...
Edit 2
Added some code to the GetFile method and now that works (gist updated); I can upload images. However, the thumbnails are still not working; they just come back as empty tags. I think this is because the media manager is using the Open get method, which is supposed to open a file so you can write a stream to it. I don't know how to achieve this with S3; any ideas welcome.
As part of the AWSSDK NuGet package (version 1.5.28.3) you can access an S3FileInfo object. I've used this in my S3 Storage File and updated the S3 Storage provider.
This seems to work; I need to do a bit more testing on it.
NOTE: I had to add some code in the GetFile method to ensure the permissions were set correctly, otherwise updating the thumbnails overwrote permissions on the file. I'm sure there is a better way to do this.

Loadrunner file upload

I have Googled this subject a lot over the past few days, but I cannot find a best-practice solution. My question is basically: how do I script a file upload in LR? My app consists of a browse button, a pop-up that lets me locate the file (it closes after I have located the file), and finally an upload button to upload the file.
My script is recorded using URL mode and I guess I need to create some kind of custom request? URL mode creates somewhat complex scripts, and placing custom requests inside these scripts is challenging.
BTW: I have not yet tried to record and play back the file upload process described above, so using URL mode might just solve it without further customization? Or has someone actually made file upload work using LR and URL mode? A small example would be greatly appreciated!
Different applications will go about uploading a file from a client to a server in different ways. Your best bet is to record your application doing the upload and taking a look at what LoadRunner records.
Mark the points before and after the upload as you record by creating a transaction, so you can easily find the spot in your code where the upload actually happens.

background upload of images to S3 with Paperclip and Delayed Jobs

I'm building an API for mobile apps which supports image uploading, using Paperclip.
Paperclip is set up with S3 storage and it's working fine.
I want to do the uploading from the server to S3 in the background using Delayed Job (the app will be hosted on Heroku).
Trying something such as @user.delay.photo = File.open(...) results in errors from Delayed Job:
UPDATE "delayed_jobs" SET "last_error" = '{uninitialized stream
How can I do the background uploading?
The problem is that IO objects cannot be marshalled and retrieved back easily.
When you use the .delay method, it tries to dump the object into a database record and pull it back when processing the job. Done this way, the record becomes big and brittle.
Better to use a custom job instead if you have a lot of things to do in the job.
class UploadJob < Struct.new(:user_id)
  def perform
    user = User.find(user_id)
    user.photo = File.open(.....) # open the image from wherever it lives locally
    user.save!                    # saving triggers the actual upload to S3
  end
end

Delayed::Job.enqueue UploadJob.new(@user.id)
You could do it yourself by writing the image to the tmp directory in the project and referencing it from the job, then cleaning up when the job is finished, as sketched below.
Or you could try the delayed_paperclip gem, which is more convenient.
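A rough sketch of that tmp-file approach (the job class, file naming, and controller snippet are illustrative assumptions, not from the answer):

class TmpPhotoUploadJob < Struct.new(:user_id, :tmp_path)
  def perform
    user = User.find(user_id)
    File.open(tmp_path) do |file|
      user.photo = file  # Paperclip pushes the file to S3 when the record is saved
      user.save!
    end
  ensure
    File.delete(tmp_path) if File.exist?(tmp_path)  # clean up once the job is done
  end
end

# In the controller, before enqueuing:
tmp_path = Rails.root.join('tmp', "photo-#{@user.id}-#{SecureRandom.hex(8)}").to_s
File.open(tmp_path, 'wb') { |f| f.write(params[:photo].read) }
Delayed::Job.enqueue TmpPhotoUploadJob.new(@user.id, tmp_path)

Note that Heroku dynos do not share a filesystem, so this only works if the worker runs on the same machine that received the upload; delayed_paperclip sidesteps that by uploading the original right away and only moving the style processing into the background.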

Is there an automated way to push all my JavaScript/CSS/images to S3 every time I do a website push?

So I am in the process of moving all the thumbnails of my major sites to S3, and now I am thinking about how I can consistently put all the CSS/JS/images that power the actual sites there as well. It's easy enough to upload everything the first time, but I am trying to think of a way to automate the process every time I push out to production.
Does anyone have any clever ways of doing this?
I used to use s3sync to compare and update the assets just before uploading the site files, using a bash script to iterate through my files.
This works well, but when the number of files to compare gets big (let's say thousands), the process starts getting really slow. If you have a small architecture (in terms of assets), this will do the trick.
To make this better I would recommend Capistrano or some other tool that helps you deploy; this way you can run it all at once (see the sketch after this list):
upload assets
deploy your files
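A rough Capistrano 2-style sketch of that flow (the task name, the bucket, and the use of the AWS CLI in place of s3sync are assumptions, not part of the original answer):

# config/deploy.rb
namespace :assets do
  desc "Push JS/CSS/images to S3 before each deploy"
  task :sync_to_s3 do
    # Shelling out to the AWS CLI; 'my-bucket' is a placeholder bucket name.
    system("aws s3 sync ./public s3://my-bucket/public --acl public-read") or abort "asset sync failed"
  end
end

before "deploy", "assets:sync_to_s3"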
On the other hand, you could take a look at CloudFront (Amazon's CDN) and set it up using a custom origin; this way you don't need to worry about uploading the files to S3, since they will be pulled automatically on demand. The downside of this approach is caching when you need to update a file and keep the same name (i.e., expire the object); you can do this in CloudFront, but you will need a script to do the task.
Depending on the traffic (and other factors, of course), one or the other path will fit best.