Enable Transfer Acceleration for AWS SDK for macOS - Objective-C

Disclaimer: I originally posted this on Code Review, but since the code does not currently work it was suggested that I post it on SO instead.
I have converted the official AWS SDK for iOS (v2.5.0) framework (focusing on S3) to macOS, and everything works as expected with uploads and downloads. However, I also wanted to enable Transfer Acceleration for S3 using AWSTransferManager. I know you can enable Transfer Acceleration (TA) with AWSTransferUtility, but that utility uses pre-signed requests that are valid for only 50 minutes (useful on iOS but not on macOS). I would like to be able to transfer large files whose uploads can take hours even with Transfer Acceleration.
I have edited the original AWS code to enable TA for AWSTransferManager; however, I still cannot get this to work properly, as the final signing of the upload/download request fails. The error message is:
Message=The request signature we calculated does not match the signature you provided. Check your key and signing method.}]
For the most part, I have edited the files AWSSignature, AWSS3TransferManager, and AWSService (AWSServiceConfiguration). I think the signing error occurs because I am editing a path or URL without correctly re-signing the request (probably in AWSSignature.m). As I am uncertain where my code breaks, I have created a repository with all of the AWS SDK macOS files required to compile the framework, including a unit test. If I run the test initializing the call to AWSServiceConfiguration with:
AWSServiceConfiguration *configuration =
    [[AWSServiceConfiguration alloc] initWithRegion:[region aws_regionTypeValue]
                                credentialsProvider:credentialsProvider
                              accelerateModeEnabled:@(NO)
                                         bucketName:self.testBucketName];
Then everything works as expected and the test file uploads and downloads correctly. However, if I turn Transfer Acceleration on (I have already made sure that my bucket has acceleration enabled), it fails with the signature error above:
AWSServiceConfiguration *configuration =
    [[AWSServiceConfiguration alloc] initWithRegion:[region aws_regionTypeValue]
                                credentialsProvider:credentialsProvider
                              accelerateModeEnabled:@(YES)
                                         bucketName:self.testBucketName];
In fact, it seems the test script is trying to upload a file much larger than the test file (3 MB) actually is. I assume my error is related to the signing of the request body in some way, since the size of the file appears wrong.
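For reference, both symptoms fit the SigV4 rule that the host, the key path, and the payload hash all feed into the canonical request, so all of them must be final before the signature is computed. Below is only a hedged sketch of that ordering (the helper and variable names are hypothetical, not the SDK's):

#import <Foundation/Foundation.h>
#import <CommonCrypto/CommonDigest.h>

// Hex-encoded SHA-256 of the exact bytes that will be sent as the body.
static NSString *SHA256Hex(NSData *data) {
    unsigned char digest[CC_SHA256_DIGEST_LENGTH];
    CC_SHA256(data.bytes, (CC_LONG)data.length, digest);
    NSMutableString *hex = [NSMutableString stringWithCapacity:2 * CC_SHA256_DIGEST_LENGTH];
    for (int i = 0; i < CC_SHA256_DIGEST_LENGTH; i++) {
        [hex appendFormat:@"%02x", digest[i]];
    }
    return hex;
}

// Hypothetical upload path: finalize host, path, body, and headers first.
static NSMutableURLRequest *BuildUploadRequest(NSString *bucket, NSString *key,
                                               NSString *localPath, BOOL accelerate) {
    NSData *payload = [NSData dataWithContentsOfFile:localPath];
    NSString *host = accelerate
        ? [NSString stringWithFormat:@"%@.s3-accelerate.amazonaws.com", bucket]
        : [NSString stringWithFormat:@"%@.s3.amazonaws.com", bucket];
    NSURL *url = [NSURL URLWithString:
        [NSString stringWithFormat:@"https://%@/%@", host, key]];
    NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
    request.HTTPMethod = @"PUT";
    request.HTTPBody = payload;
    [request setValue:host forHTTPHeaderField:@"Host"];
    [request setValue:SHA256Hex(payload) forHTTPHeaderField:@"x-amz-content-sha256"];
    [request setValue:[@(payload.length) stringValue] forHTTPHeaderField:@"Content-Length"];
    // Only after this point should AWSSignature compute the SigV4 signature;
    // mutating the host, path, or body afterwards guarantees a mismatch.
    return request;
}

If the hash in x-amz-content-sha256 covers different bytes than Content-Length announces, that would also explain why the upload looks larger than 3 MB.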
I know that finding this error requires a bigger effort than is usually expected on SO, and it is not something many will throw themselves at (but hopefully some will), as it involves very complex code and is time-consuming. However, I believe that if we can make this work, many people will find an AWS SDK for macOS with Transfer Acceleration very useful.
I hope you will take a look and try to identify what may be breaking the request signing.
All code for the framework plus the test example is available here: https://github.com/trondkr/aws-sdk-macos-TA. To run the unit test you need to provide the secret key and access ID for your AWS S3 account and the name of a bucket that has Transfer Acceleration enabled.
Thanks. Cheers, Trond

Related

JSZip reports missing bytes when reading back previously uploaded zip file

I am working on a webapp where the user provides a sequence of image and text files. I am compressing the sequence into a single ZIP file using JSZip.
On the server I simply use PHP move_uploaded_file to move the upload to the desired location, after having checked the file upload error status.
A test ZIP file created in this way can be found here. I have downloaded the file, expanded it in Windows Explorer and verified that its contents (two images and some HTML markup in this instance) are all present and correct.
So far so good. The trouble begins when I try to fetch that same ZIP file and expand it using JSZip.loadAsync, which consistently reports Corrupted zip: missing 210 bytes. My PHP code for sending back the ZIP file is actually pretty simple. Shorn of the various security checks I have in place, the essential bits of that code are listed below:
if (file_exists($file))
{
    ob_clean();
    readfile($file);
    http_response_code(200);
    die();
} else http_response_code(399);
where the 399 code is interpreted by my webapp as a need to create a new resource locally instead of trying to read existing resource data. The trouble happens when I take the response text (on an HTTP response of 200) and feed it to JSZip.loadAsync.
What am I doing wrong here? I assume there is something too naive about the way I am using readfile at the PHP end, but I am unable to figure out what that might be.
What we set out to do
Attempt to grab a server-side ZIP file from JavaScript
If it does not exist, send back a reply (I simply set a custom HTTP response code of 399 and interpret it) telling the client to go prepare its own new local copy of that resource
If it does exist, send back that ZIP file
Good so far. However, reading the existing ZIP file into PHP and sending it back does not make sense and is fraught with problems. My approach now is to send back an http_response_code of 302, which the client interprets as an instruction to "go get that ZIP for yourself directly".
At this point, to get the ZIP "directly", simply follow the instructions in this tutorial on MDN.

Express-browserify and Watson Visual Recognition - TypeError: fs.existsSync is not a function

I'm trying to get Watson Visual Recognition to run client-side by using express-browserify, with reference to the node-sdk for watson-developer-cloud. VisualRecognitionV3 makes use of the fs package, hence I get the fs.existsSync error when I call it from the client side, as the browser doesn't have a filesystem to use. My question is how I go about creating a so-called 'abstraction layer', as I am restricted to using the express-browserify package for cross-origin calls.
This thread is pretty helpful in shedding some light, but I'm not sure where to start regarding the 'abstraction layer', or whether there are any other solutions. Also, would something like socket.io work for this? I've linked a clone of the directory here, as that seems less clunky than pasting the multiple portions below.
The repository can be cloned and just requires a personal iam_apikey with relevant launch configuration. Appreciate any pointers. Thanks!
I didn't manage to sort this out with express-browserify, due to the require('fs')-from-the-browser issue, but I was able to get it running using the express-ws package.

Implementing basic S3 compatible API with akka-http

I'm trying to implement a file storage service with a basic S3-compatible API using akka-http.
I use the S3 Java SDK to test my service API and ran into a problem with the putObject(...) method: I can't consume the file properly on my akka-http backend. I wrote a simple route for test purposes:
import java.io.File
import akka.http.scaladsl.server.Directives._
import akka.stream.scaladsl.FileIO

def putFile(bucket: String, file: String) = put {
  extractRequestEntity { ent =>
    val finishedWriting = ent.dataBytes.runWith(FileIO.toPath(new File(s"/tmp/${file}").toPath))
    onComplete(finishedWriting) { ioResult =>
      complete("Finished writing data: " + ioResult)
    }
  }
}
It saves the file, but the file is always corrupted. Looking inside the file, I found lines like this:
"20000;chunk-signature=73c6b865ab5899b5b7596b8c11113a8df439489da42ddb5b8d0c861a0472f8a1".
When I PUT a file with any other REST client, it works as expected.
I know S3 uses the "Expect: 100-continue" header and suspect it may be causing the problem.
I really can't figure out how to deal with that. Any help appreciated.
This isn't exactly corrupted. Your service is not accounting for one of the four¹ ways S3 supports uploads to be sent on the wire, using Content-Encoding: aws-chunked and x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD.
It's a non-standards-based mechanism for streaming an object, and includes chunks that look exactly like this:
string(IntHexBase(chunk-size)) + ";chunk-signature=" + signature + \r\n + chunk-data + \r\n
...where IntHexBase() is pseudocode for a function that formats an integer as a hexadecimal number as a string.
This chunk-based algorithm is similar to, but not compatible with, Transfer-Encoding: chunked, because it embeds checksums in the stream.
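To make the framing concrete, here is a rough decoder sketch; it is written in Objective-C purely for illustration (your server is Scala), and it deliberately ignores the per-chunk signatures, which a real S3-compatible server must verify:

#import <Foundation/Foundation.h>

// Hedged sketch: strip the aws-chunked framing described above from an
// in-memory body, concatenating the raw chunk payloads.
static NSData *StripAwsChunkedFraming(NSData *body) {
    NSMutableData *payload = [NSMutableData data];
    NSData *crlf = [NSData dataWithBytes:"\r\n" length:2];
    NSUInteger pos = 0;
    while (pos < body.length) {
        NSRange eol = [body rangeOfData:crlf options:0
                                  range:NSMakeRange(pos, body.length - pos)];
        if (eol.location == NSNotFound) break;
        // The header looks like "20000;chunk-signature=<64 hex chars>".
        NSString *header = [[NSString alloc] initWithData:
            [body subdataWithRange:NSMakeRange(pos, eol.location - pos)]
                                                 encoding:NSASCIIStringEncoding];
        unsigned long long chunkSize = 0;
        [[NSScanner scannerWithString:header] scanHexLongLong:&chunkSize];
        if (chunkSize == 0) break; // the zero-length chunk terminates the stream
        NSUInteger dataStart = eol.location + crlf.length;
        if (dataStart + (NSUInteger)chunkSize + crlf.length > body.length) break; // truncated
        [payload appendData:[body subdataWithRange:
            NSMakeRange(dataStart, (NSUInteger)chunkSize)]];
        pos = dataStart + (NSUInteger)chunkSize + crlf.length; // skip trailing CRLF
    }
    return payload;
}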
Why did they make up a new HTTP transfer encoding? It's potentially useful on the client side because it eliminates the need to either "read your payload twice or buffer [the entire object payload] in memory [concurrently]" -- one or the other of which is otherwise necessary if you are going to calculate the x-amz-content-sha256 hash before the upload begins, as you otherwise must, since it's required for integrity checking.
I am not overly familiar with the internals of the Java SDK, but this type of upload might be triggered by using .withInputStream(), or it might be standard behavior for files too, or for files over a certain size.
Your minimum workaround would be to throw an HTTP error if you see x-amz-content-sha256: STREAMING-AWS4-HMAC-SHA256-PAYLOAD in the request headers since you appear not to have implemented this in your API, but this would most likely only serve to prevent storing objects uploaded by this method. The fact that this isn't already what happens automatically suggests that you haven't implemented x-amz-content-sha256 handling at all, so you are not doing the server-side payload integrity checks that you need to be doing.
For full compatibility, you'll need to implement the algorithm supported by S3 and assumed to be available by the SDKs, unless the SDKs specifically support a mechanism for disabling this algorithm -- which seems unlikely, since it serves a useful purpose, particularly (it appears) for streams whose length is known but that aren't seekable.
¹ one of four -- the other three are a standard PUT, a web-based html form POST, and the multipart API that is recommended for large files and mandatory for files larger than 5 GB.

Amazon S3 pre signed URLs using Amazon Java SDK and extra / characters

I've been creating Presigned HTTP PUT URLs and everything was working great until I wanted to start using "folders" in S3; I wanted the key to have the character '/'.
Now I get Signature doesn't match when I send the HTTP PUT requests, because the '/' probably changes to %2F... If I escape the character before creating the presigned URL it works great, but then the Amazon management console doesn't understand it and shows the key as one file instead of subfolders.
Any idea?
P.s.
The HTTP PUT requests are sent from C++ using the POCO NET library.
EDIT
I'm using Poco HTTPRequest from C++ to ask my Java web server to generate a signed URL (returned in the response).
C++ then uses this URL to PUT a file to S3, again using Poco.
The problem was that the URLs returned from the web server were parsed through Poco URI objects, which auto-decoded the S3 object key, thus changing it. With that in mind I was able to fix my problem.
Tricky - I'll try to approach this bottom up.
Disclaimer: I got carried away visually inspecting the Poco libraries instead of actually debugging a code sample, which should yield more reliable results much faster, see below ;)
Analysis
If I escape the character before creating the presigned URL it works great, but then the Amazon console management doesn't understand it and shows it as one file instead of subfolders.
The latter stems from S3 not actually having a concept of folders at the storage level; see e.g. section Index Documents and Folders within Index Document Support:
Objects stored in Amazon S3 are stored within a flat container, i.e., an Amazon S3 bucket, and it does not provide any hierarchical organization, similar to a file system's. However, you can create a logical hierarchy using object key names and use these names to infer logical folders that contain these objects.
That's exactly what the AWS Management Console is doing here as well:
The AWS Management Console also supports the concept of folders, by using the same key naming convention used in the preceding sample.
However, your test regarding the assumption of / being encoded as %2F proves that this is indeed how Poco::Net is encoding the URL when performing the HTTP PUT request.
(I'm actually a bit surprised that the AWS Java SDK seems to generate different URLs here for / vs. %2F, insofar a recent analysis regarding Why is my S3 pre-signed request invalid when I set a response header override that contains a “+”? seems to indicate respective canonicalization by the AWS .NET SDK, see below for more on this.)
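For what it's worth, the canonicalization rule at play is that SigV4 percent-encodes each path segment of the object key individually while leaving the / separators intact; escaping the whole key, so that / becomes %2F, therefore yields a different canonical request and a different signature. A hedged sketch of that distinction, in Objective-C merely for illustration:

#import <Foundation/Foundation.h>

// Illustrative only: encode each segment of an S3 key, preserving '/'.
// Unreserved characters per RFC 3986 stay literal; everything else is escaped.
static NSString *CanonicalKeyPath(NSString *key) {
    NSCharacterSet *unreserved = [NSCharacterSet characterSetWithCharactersInString:
        @"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-._~"];
    NSMutableArray<NSString *> *encoded = [NSMutableArray array];
    for (NSString *segment in [key componentsSeparatedByString:@"/"]) {
        [encoded addObject:[segment
            stringByAddingPercentEncodingWithAllowedCharacters:unreserved]];
    }
    return [encoded componentsJoinedByString:@"/"];
}
// CanonicalKeyPath(@"folder/my file.txt") -> @"folder/my%20file.txt"
// Escaping the whole key instead would give @"folder%2Fmy%20file.txt", which
// no longer matches what S3 signs for the key "folder/my file.txt".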
Potential Solution
In order for your scenario to work as desired, you'll need to figure out where the URL is encoded this way - I could think of two components in principle:
Poco::Net
Finding out why Poco::Net is encoding the URL differently than S3 expects (if at all, see below) is best done by debugging your code; here's where I'd start:
Class HTTPRequest uses class URI in turn, which automatically performs a few normalizations on all URIs and URI parts passed to it; in particular, percent-encoded characters are decoded. The other direction is handled by method encode(), which is where things get interesting and calls for a breakpoint, see URI.cpp:
lines 575 ff. - here encode() does its magic, which indeed seems to be in place, insofar as neither the code within the function nor the various chars passed in via the reserved parameter contain the offending / (see lines 47 ff. for the respective constants in use)
consequently you might want to set a breakpoint in this function and backtrace the callstack to find out which code is actually doing the encoding upfront, which might not yield an offender at all, see below.
Java => C++ transition
You haven't specified yet which channel is actually used to communicate the pre-signed URL generated by the AWS Java SDK to C++. Given that the code review (mind you, visual inspection only; I haven't debugged this myself yet) of the Poco::Net functionality concludes that no obvious offender can be identified in the library itself, it seems more likely that the URL already enters your C++ layer encoded (easily verified via debugging, of course) - are you by chance using any kind of web service between these components, for example?
Good luck!

Verifying app's signature by code [duplicate]

This question already has answers here:
How to obtain codesigned application certificate info (2 answers). Closed 8 years ago.
I have a signed app. I created an identity and used codesign to sign my app, as per Apple's Code Signing Guide.
Now, how do I check the signature from within my application?
I need to verify this on Cocoa apps (Objective-C) and apps written in C.
You could use NSTask to run "codesign --verify" and check the exit status. Of course, if the program was altered it could also be altered to remove the check, so I'm not sure what that buys you.
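A minimal sketch of that approach (using the era-appropriate NSTask API; newer systems prefer executableURL):

// Run `codesign --verify` on our own bundle; exit status 0 means the
// signature verified. Note this spawns an external process, which an
// attacker could equally well patch out.
NSTask *task = [[NSTask alloc] init];
task.launchPath = @"/usr/bin/codesign";
task.arguments = @[ @"--verify", [[NSBundle mainBundle] bundlePath] ];
[task launch];
[task waitUntilExit];
BOOL signatureValid = (task.terminationStatus == 0);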
If you are not worried about directed tampering (like the kind that might remove your check of the signature), you can use the codesign "kill" option. If you do, merely executing means the signature is valid (at least for all pages that have been executed so far; if a not-yet-resident page has been tampered with, the process will be killed when that page is read in anyway).
Maybe if you could explain a little more about why you want to verify the signature, a better answer could be formed.
Note: currently Mac OS X does not verify signed code prior to execution. This may be different for sandboxed code, and it would seem sensible that it is, as otherwise anybody could edit the entitlements.
To check an application's signature from within the application itself, you use Code Signing Services. In particular, look at SecCodeCheckValidity. The code to do the checking is not long, but there is quite a bit to understand, so I won't give a code sample - you need to read and understand the documentation.
Checking the signature allows your application to detect changes to its code & resources, report that it is "damaged" (it may well be; not all changes are malicious), and refuse to run. Adding such code does not, of course, guarantee your code is not damaged, but it certainly raises the bar against intentional damage (and if Mac OS X starts doing the check itself, there will be a big win).
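That said, purely for orientation, the pieces fit together roughly like this (a bare-bones sketch; production code would pass a designated requirement instead of NULL, which is exactly the part the documentation covers):

#import <Security/Security.h>

// Validate the signature of the running code itself via Code Signing Services.
SecCodeRef selfCode = NULL;
OSStatus status = SecCodeCopySelf(kSecCSDefaultFlags, &selfCode);
if (status == errSecSuccess) {
    // NULL requirement: checks integrity only, not *who* signed the code.
    status = SecCodeCheckValidity(selfCode, kSecCSDefaultFlags, NULL);
    CFRelease(selfCode);
}
BOOL valid = (status == errSecSuccess);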
The way signature verification is implemented on iOS is that when an application is launched, the launchd daemon decrypts the binary using that device's specific private key (this is why you can't just decompile apps or copy-paste them from one device to another); if the decryption fails, the application fails to launch.
The native tools that do this are not available within applications due to the iOS sandboxing.
If you're simply attempting to track whether someone has modified your binary, you can compute an MD5 or SHA-1 hash of it, store the hash in NSUserDefaults, and compare it at each app start. If the hash changes between executions, you know the binary has been modified (possibly by a legitimate application update, or possibly nefariously).
Here's an example of how to get the hash of an NSData.
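Along those lines, a hedged sketch using CommonCrypto (the defaults key name and the choice of SHA-1 are mine):

#import <Foundation/Foundation.h>
#import <CommonCrypto/CommonDigest.h>

// SHA-1 the app's main binary and compare with the value from the last run.
NSData *binary = [NSData dataWithContentsOfFile:
    [[NSBundle mainBundle] executablePath]];
unsigned char digest[CC_SHA1_DIGEST_LENGTH];
CC_SHA1(binary.bytes, (CC_LONG)binary.length, digest);
NSMutableString *hex = [NSMutableString string];
for (int i = 0; i < CC_SHA1_DIGEST_LENGTH; i++) {
    [hex appendFormat:@"%02x", digest[i]];
}
NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
NSString *previous = [defaults stringForKey:@"binaryHash"]; // key name is illustrative
BOOL modified = (previous != nil && ![previous isEqualToString:hex]);
[defaults setObject:hex forKey:@"binaryHash"];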
The binary file you're looking for is: AppName.app/AppName