Access private files with extra parameters on AWS S3

I have quite a few PDF files stored as private objects on AWS S3.
I'm creating a URL to access a PDF through the Python boto library (with signature and signed headers) and can successfully access the files if I just provide the PDF file name. But I need to open these PDF files at a particular page and with some additional viewer parameters, e.g.:
https://mybucket.amazonaws.com/media/private/xyz.pdf#page=6&zoom=100&toolbar=0&navpanes=0&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=***********************&X-Amz-Date=20180925T044257Z&X-Amz-Expires=60&X-Amz-SignedHeaders=host&X-Amz-Signature=a9ba6473464trdfghf76c578475hdfjdbv792cf7f1193fe8a274549
When I try to access the file with the additional params, I get a 'Resource not found' error, but without the params it is accessible.
Can anyone guide me on how to achieve this?

SOLVED:
The issue was that the PDF open parameters need to be appended after the signature parameters, at the end of the URL.
In my case it was:
https://mybucket.amazonaws.com/media/private/xyz.pdf&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=***********************&X-Amz-Date=20180925T044257Z&X-Amz-Expires=60&X-Amz-SignedHeaders=host&X-Amz-Signature=a9ba6473464trdfghf76c578475hdfjdbv792cf7f1193fe8a274549#page=6&zoom=100&toolbar=0&navpanes=0
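
For reference, here is a minimal sketch of the working flow with boto3 (the question used the older boto library; the bucket and key names are placeholders): generate the presigned URL first, then append the PDF open parameters as a fragment at the very end.

import boto3

s3 = boto3.client("s3")

# Time-limited presigned URL for the private PDF (placeholder bucket/key).
presigned_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "mybucket", "Key": "media/private/xyz.pdf"},
    ExpiresIn=60,
)

# The viewer parameters are a URL fragment, so they must come after the signed
# query string; placing them before it changes the request that was signed.
viewer_url = presigned_url + "#page=6&zoom=100&toolbar=0&navpanes=0"
print(viewer_url)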

Related

How to use a signed URL to access buckets in GCP

Goal: to generate a link that allows the recipient to upload data to a specific folder in Google Cloud Storage, regardless of whether they have a Google account (or any account, for that matter). The link should authenticate them and give them access to the folder itself. This is intended to work much like Dropbox's file requests, where you ask users for files and they get a URL to upload to a specified folder.
I've been able to follow the instructions from the GCP page and created a signed URL by running a gsutil command:
gsutil signurl -m PUT -d 1h -c application/octet-stream private_key.json gs://my_bucket/my_folder/
My expectation was that I could copy the generated URL, open it in a browser, and be let into the GCS folder. But I keep getting this error:
<Error>
<Code>MalformedSecurityHeader</Code>
<Message>Your request has a malformed header.</Message>
<ParameterName>content-type</ParameterName>
<Details>Header was included in signedheaders, but not in the request.</Details>
</Error>
My gut tells me that either I'm trying to use the signed URL in a way it's not meant to be used (maybe it should be part of code that calls that URL and lets the user access it via a UI), or I'm doing something wrong; hence my question here. I'm not a programmer, but an IT admin trying to automate file sharing and receiving.
Any input you can give me would be greatly appreciated! Thanks in advance!
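
The error says the Content-Type header was included in the signature but not sent with the request; opening the URL in a browser issues a plain GET without that header, which is likely why it always fails there. A minimal sketch of an upload that matches the signature, assuming Python with the requests library (the signed URL and file name are placeholders):

import requests

# Placeholder: paste the URL printed by gsutil signurl here.
signed_url = "https://storage.googleapis.com/my_bucket/my_folder/upload.bin?X-Goog-Signature=..."

# The URL was signed with -c application/octet-stream, so the PUT must send
# exactly that Content-Type header or the signature check fails.
with open("upload.bin", "rb") as f:
    response = requests.put(
        signed_url,
        data=f,
        headers={"Content-Type": "application/octet-stream"},
    )
response.raise_for_status()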

How to get complete path or URL of a File in Dropbox?

I am uploading files in bulk using the Dropbox .NET API.
After uploading a file, how do I get its complete path or URL, like "https://www.dropbox.com/work/Apps/*****/testinng234/1.mp3", so that I can use the link directly?
Please suggest the best way and share some code.
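
For illustration, a rough sketch of this flow with the Dropbox Python SDK (the question is about the .NET SDK; the access token and paths below are placeholders): upload the file, then create a shared link and use its URL.

import dropbox

dbx = dropbox.Dropbox("ACCESS_TOKEN")  # placeholder access token

local_path = "1.mp3"                 # placeholder local file
dropbox_path = "/testinng234/1.mp3"  # placeholder Dropbox path

# Upload the file, then create a shared link that can be used directly.
with open(local_path, "rb") as f:
    dbx.files_upload(f.read(), dropbox_path)

link = dbx.sharing_create_shared_link_with_settings(dropbox_path)
print(link.url)  # e.g. a https://www.dropbox.com/... URL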

Presigned URL doesn't work for PUT but works for GET (S3)

I'm having trouble with Amazon S3 presigned URLs. My bucket policy gives access only to a specific IAM user; that is, the bucket is not public. So if I navigate in the browser to a file URL in my S3 bucket, I receive an access denied message.
So I used the aws-cli tool to generate a presigned URL for that file. With that URL I'm able to get the file correctly, but the issue is when I try to put a file into the bucket. Using that URL I cannot put a file, because I get this error message:
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
What am I missing?
You'll need a different presigned URL for PUT requests and GET requests. This is because the HTTP verb (PUT, GET, etc.) is part of the canonical request used to construct the signature. See "Authenticating Requests" in the Amazon S3 reference docs for details.
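
As a concrete illustration of signing the two verbs separately, a boto3 sketch (bucket and key names are placeholders; the question used the aws-cli, but the same rule applies):

import boto3

s3 = boto3.client("s3")

# The HTTP method is part of what gets signed, so each verb needs its own URL.
get_url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "path/to/file.txt"},
    ExpiresIn=3600,
)
put_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "my-bucket", "Key": "path/to/file.txt"},
    ExpiresIn=3600,
)

# PUTting to get_url fails with SignatureDoesNotMatch; use put_url for uploads.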

How to Upload PhantomJS Page Content to S3

I am using PhantomJS 1.9.7 to scrape a web page. I need to send the returned page content to S3. I am currently using the filesystem module included with PhantomJS to save to the local file system, and a PHP script that scans the directory and ships the files off to S3. I would like to bypass the local filesystem completely and send the files directly from PhantomJS to S3, but I could not find a direct way to do this within PhantomJS.
I toyed with the idea of using the child_process module and passing the content in as an argument, like so:
var execFile = require("child_process").execFile;
var page = require('webpage').create();
var content = page.content;
// pass the script path and the page content as separate arguments
execFile('php', ['path/to/script.php', content], null, function (err, stdout, stderr) {
    console.log("execFileSTDOUT:", JSON.stringify(stdout));
    console.log("execFileSTDERR:", JSON.stringify(stderr));
});
which would call a PHP script directly to accomplish the upload. This would require spawning an additional process to run a CLI command, and I am not comfortable having another asynchronous process running. What I am looking for is a way to send the content to S3 directly from the PhantomJS script, similar to what the filesystem module does with the local filesystem.
Any ideas as to how to accomplish this would be appreciated. Thanks!
You could just create and open another page and point it at your S3 service. Amazon S3 has a REST API and a SOAP API, and REST seems easier.
For SOAP you would have to build the request manually. The only problem might be a wrong content-type; it looks as if setting it is implemented, but I cannot find a reference in the documentation.
You could also create a form in the page context and send the file that way.
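
If the bucket is not publicly writable, the form approach also needs signed POST policy fields generated server-side; how you generate them depends on your stack. A rough sketch with boto3, purely as an assumption (bucket and key are placeholders), whose output would be injected into the form in the page context:

import boto3

s3 = boto3.client("s3")

# Signed POST policy: the returned 'url' and 'fields' become the form action
# and hidden inputs (key, policy, signature, ...) of the upload form.
post = s3.generate_presigned_post(
    Bucket="my-bucket",
    Key="scraped/page.html",
    ExpiresIn=3600,
)
print(post["url"])
print(post["fields"])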

Does Amazon S3 help at all in this case?

I'm thinking about whether to host uploaded media files (video and audio) on S3 instead of locally. I need to check the user's permissions on each download.
So there would be an action like get_file, which first checks the user's permissions and then fetches the file from S3 and sends it to the user using send_file.
def get_file
  if @user.can_download(params[:file_id])
    # first, download the file from S3 and then send it to the user using send_file
  end
end
But in this case, the server (unnecessarily) downloads the file from S3 first and then sends it to the user. I thought the point of S3 was to bypass the Rails/HTTP server stack and reduce load.
Am I thinking about this the wrong way?
PS. I'm using CarrierWave for file uploads. Not sure if that's relevant.
Amazon S3 provides something called RESTful authenticated reads, which are basically expiring (time-limited) URLs to otherwise protected content.
CarrierWave provides support for this. Simply set the S3 access policy to authenticated read:
config.s3_access_policy = :authenticated_read
and then model.file.url will automatically generate the RESTful URL.
Typically you'd embed the S3 URL in your page, so that the client's browser fetches the file directly from Amazon. Note however that this exposes the raw unprotected URL. You could name the file with a long hash instead of something predictable, so it's at least not guessable -- but once that URL is exposed, it's essentially open to the Internet. So if you absolutely always need access control on the files, then you'll need to proxy it like you're currently doing. In that case, you may decide it's just better to store the file locally.