How do I download an encrypted S3 object without decryption? - amazon-s3

I'm using Server-Side Encryption with Customer-Provided Encryption Keys (SSE-C) to store some files. I want to download them but not decrypt them just yet. The use case is something like the Game of Thrones finale: I want cable operators to have the data, but to give them the key only at the last second. But the decryption headers are mandatory when the file is encrypted. Maybe I can toggle the flag that marks the file as encrypted?

For this application, you wouldn't use any variant of SSE.
SSE prevents your content from being stored on S3's internal disks in a form where accidental or deliberate compromise of those physical disks or their raw bytes -- however unlikely -- would expose your content to unauthorized personnel. That is fundamentally the purpose of all varieties of SSE; the variants differ only in how the keys are managed.
Server-side encryption is about data encryption at rest—that is, Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it.
https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
SSE-encrypted content is decrypted by S3 at download time and transiently re-encrypted with TLS for transmission across the network. The final result in the client's hands is unencrypted.
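To illustrate the constraint the question runs into: with SSE-C there is no way to request the still-encrypted bytes -- the GET either carries the key headers or fails. A minimal boto3 sketch (the bucket, key name, and zeroed customer key are placeholders):

```python
import boto3

s3 = boto3.client("s3")
customer_key = b"\x00" * 32  # stand-in for the 32-byte key supplied at upload

# Without SSECustomerAlgorithm/SSECustomerKey, S3 rejects the GET with an
# HTTP 400 error -- there is no option to fetch the still-encrypted bytes.
response = s3.get_object(
    Bucket="my-bucket",
    Key="episode.mp4",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=customer_key,
)
plaintext = response["Body"].read()  # S3 has already decrypted server-side
```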
For the application described, you would just upload the encrypted content to S3 without S3 being aware of the (external, already-applied) encryption.
If you also used some kind of SSE, that would be independent of the external encryption applied before upload. Arguably, SSE would be somewhat redundant if the content is already encrypted before upload.
In fact, in the application described, depending on the sensitivity and value of the content, each recipient would potentially receive a different key and/or a slightly different source file (and thus a substantially different encrypted file), so that the source of a leak could be traced by determining which variant was compromised.
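To make that concrete, here is a minimal sketch of the encrypt-before-upload approach, assuming Python with boto3 and the cryptography package; the bucket and object names are invented:

```python
# A sketch only: encrypt client-side, upload ciphertext as an ordinary
# (non-SSE) object. Names like "my-bucket" are invented for illustration.
import os

import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # withheld until the last second
nonce = os.urandom(12)                     # 96-bit nonce, stored with the data

with open("episode.mp4", "rb") as f:
    plaintext = f.read()

ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)

# S3 stores opaque bytes; it is never aware of the encryption.
boto3.client("s3").put_object(
    Bucket="my-bucket",
    Key="episode.mp4.enc",
    Body=nonce + ciphertext,
)

# Recipients can pre-download the object, but can only decrypt once the
# key is released:  AESGCM(key).decrypt(body[:12], body[12:], None)
```

S3 never sees the key, so releasing it at broadcast time is purely an application-level decision.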

Related

Would there be a compelling reason for implementing an integrity check in a file transfer protocol, if the channel uses TLS?

I am developing a client/server pair of applications that transfer files by streaming bytes over TCP/IP, and the channel will always use TLS.
(Note: Due to certain OS related limitations SFTP or other such secure file transfer protocols cannot be used)
The application level protocol involves minimum but sufficient features to get the file to the other side.
I need to decide whether the application-level protocol needs to implement an integrity check (e.g., MD5).
Since TLS guarantees integrity, would this be redundant?
The use of TLS can provide you with some confidence that the data has not been changed (intentionally or otherwise) in transit, but not necessarily that the file you intended to send is identical to the one the receiver ends up with.
There are plenty of other opportunities for the file to be corrupted/truncated/modified (such as when it's being read from the disk/database by the sender, or when it's written to disk by the receiver). Implementing your own integrity checking would help protect against those cases.
In terms of how you do the checking: if you're worried about malicious tampering, then you should be checking a cryptographic signature (using something like GPG) rather than just a hash of the file. If you're going to use a hash, then it's generally recommended to use a more modern algorithm such as SHA-256 rather than the legacy MD5 algorithm - although most of the issues with MD5 won't affect you if you're only concerned about accidental corruption.
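As a minimal sketch of what such an application-level check might look like (file paths and the way the digest is carried are illustrative assumptions), hashing the file as read from disk on the sender and re-hashing after it is written to disk on the receiver covers the cases TLS cannot:

```python
# A minimal sketch; file paths and framing are illustrative assumptions.
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 16) -> str:
    """Hash the file as it sits on disk, in chunks, to bound memory use."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Sender: compute the digest from the on-disk file and send it as metadata.
sent_digest = sha256_of_file("outgoing/file.bin")

# Receiver: after the file is fully written to disk, recompute and compare.
if sha256_of_file("incoming/file.bin") != sent_digest:
    raise IOError("integrity check failed outside the TLS channel")
```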

Amazon S3 Data integrity MD5 vs SSL/TLS

I'm currently working with the Amazon S3 API, and I have a general question about the server-side integrity checks that can be done if you provide the MD5 hash when uploading an object.
I'm not sure whether the integrity check is necessary if you send the data (including, I assume, the object data you're posting) via SSL/TLS, which provides its own support for data integrity in transit.
Should you send the digest regardless of whether you're posting over SSL/TLS? Isn't it superfluous to do so? Or is there something I'm missing?
Thanks.
The integrity checking provided by TLS makes no guarantees about what happens before the data goes into the TLS wrapper on the sender's side, or after it comes out and is written to disk at the receiver.
So, no, it is not entirely superfluous, because TLS is not completely end-to-end -- the unencrypted data is still processed, however little, on both ends of the connection... and any hardware or software that touches the unencrypted bits can malfunction and mangle them.
S3 gives you an integrity checking mechanism -- two, if you use both Content-MD5 and x-amz-content-sha256 -- and it seems unthinkable to try to justify bypassing them.
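For example, with boto3 (bucket and object names invented), supplying Content-MD5 looks like the sketch below; S3 recomputes the MD5 over the bytes it actually received and fails the PUT with a BadDigest error on a mismatch:

```python
# A sketch with invented bucket/object names; boto3's put_object accepts
# an explicit ContentMD5 (base64 of the raw 16-byte digest, per RFC 1864).
import base64
import hashlib

import boto3

with open("report.csv", "rb") as f:
    body = f.read()

content_md5 = base64.b64encode(hashlib.md5(body).digest()).decode()

boto3.client("s3").put_object(
    Bucket="my-bucket",
    Key="report.csv",
    Body=body,
    ContentMD5=content_md5,  # S3 replies with BadDigest if the bytes differ
)
```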

Good practices for AES key derivation and storage on STM32

I'm developing a device on STM32L4x6. It is connected through BLE to a smartphone and it exchanges encrypted data with it.
Encryption is AES-GCM and I'm using the reference implementation provided by STMicro.
I have implemented a shared-secret exchange mechanism using the Diffie-Hellman protocol over Curve25519. Right now I am using this shared secret directly as the AES key.
However, I am confused about two points:
I think I have to derive a session key from the shared secret, but I don't really understand how.
Regarding key storage on the STM32, what is the common/best practice? Is it enough to store the key in Flash and set the Flash to read protection level 1?
Thank you
As for deriving a session key - you may want to look into the topic of Key Derivation Functions (KDFs). Googling the term returns a lot of useful information about establishing session keys. You may also ask your question on https://crypto.stackexchange.com/.
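To make the idea concrete, here is a minimal sketch of the usual HKDF construction, shown in Python with the cryptography package for brevity (on the STM32 you would use an equivalent HKDF implementation in C; the inputs are the same, and the salt handling and info label here are illustrative assumptions):

```python
# A sketch, not STM32 code: the same HKDF construction exists in C crypto
# libraries. The salt exchange and the info label are assumptions to adapt.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

shared_secret = os.urandom(32)  # stands in for the raw X25519 DH output
salt = os.urandom(16)           # fresh per connection; can be sent in the clear

session_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,                   # 256-bit AES-GCM session key
    salt=salt,
    info=b"ble-session-key-v1",  # binds the key to this protocol/purpose
).derive(shared_secret)
```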
As for storing keys on the STM32 - it depends on what your requirements are. Do the keys need to persist between sessions, or can you generate a new one each time a connection is established? Generating a new key each time a connection is made is safer, for two reasons:
It's different for each connection, so even if someone manages to obtain the key for a past session, it can only be used to decrypt that one session.
If you generate a new key for each session, you don't need to store it anywhere such as Flash memory; you can keep it in RAM only. Powering down the device wipes the key. Enabling read protection prevents access to RAM as well as to internal Flash.
Regarding the second point, however - the STM32 is NOT considered a "secure microcontroller". It lacks hardware defenses against physical attacks - voltage-glitch detection, side-channel countermeasures, a secure mesh, etc. With enough resources and determination, an attacker will be able to obtain the cryptographic keys that you use, for example by grinding down the chip package and optically reading your data. That touches on the question of how secure the device really has to be, weighed against development time and hardware cost. With an STM32, all you can do is make attacks harder (keep keys in RAM, only while you need them, then overwrite them with noise) and limit the scope of a compromise (change session keys as often as possible, e.g. every session).

AWS S3 upload integrity checking

If a client is using AWS request signing (Signature Version 4), is there ever a reason to do separate integrity checking for AWS S3 uploads, or is the integrity checking inherent in the protocol adequate?
I'm referring particularly to multi-part uploads, which are described here:
https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadUploadPart.html
https://docs.aws.amazon.com/AmazonS3/latest/API/mpUploadComplete.html
but also to single-part uploads.
To briefly summarize:
- Each request to upload a part of a file is signed with a SHA-256 hash of the entire request, including headers and data.
- In response to each part, AWS returns an ETag, which is a proprietary hash of the data in that part of the file. Usually this is the MD5 of the data for that part, but in the case of AWS-KMS encryption it's an undocumented algorithm.
- After all parts are uploaded, the client sends a request specifying that the individual parts be stitched together into a file/key. The request contains the part numbers and the AWS-generated ETag of each part.
Some clients do extra checking of the key's final AWS-generated ETag against a locally calculated version (which has been discussed at What is the algorithm to compute the Amazon-S3 Etag for a file larger than 5GB?, for instance), but is there any point to this?
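For reference, the locally calculated multipart ETag discussed in that question is usually computed like this (a sketch, assuming you know the part size the uploader used; it does not apply to KMS-encrypted uploads):

```python
import hashlib

def multipart_etag(path: str, part_size: int = 8 * 1024 * 1024) -> str:
    # Hash each part with MD5, split exactly as the uploader split it.
    part_digests = []
    with open(path, "rb") as f:
        for part in iter(lambda: f.read(part_size), b""):
            part_digests.append(hashlib.md5(part).digest())
    if len(part_digests) == 1:
        return part_digests[0].hex()  # single-part upload: ETag is a plain MD5
    # Multipart ETag: MD5 of the concatenated part digests, plus "-<count>".
    combined = hashlib.md5(b"".join(part_digests)).hexdigest()
    return f"{combined}-{len(part_digests)}"
```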
One of the reasons I ask is that apparently no one has yet reverse-engineered the ETag algorithm used when server-side AWS-KMS encryption is in effect. However, it appears to me that integrity checking is sufficiently inherent in the protocol that additional checking is unnecessary.
Thanks.

Modify sslsniff to save .pcap

Would it be possible to modify sslsniff, e.g. by using libpcap, so that it creates a .pcap file containing the decrypted network traffic? Since sslsniff can decrypt packet data, I thought it might be possible to replace the encrypted data with the decrypted data so I can view it in Wireshark. Is this possible?
.pcap files store network-layer packets with a link-layer-specific header. The result of decrypting an SSL connection, however, is a bidirectional stream of bytes at the application layer. There is no straightforward way of splitting that stream of bytes back into network-layer packets with link-layer headers. It would be possible, in theory, to split the stream into arbitrary TCP segments, prepend an IP and a link-layer header, and try very hard to make the packets' addresses, timestamps, etc. match the corresponding ones from the original packets as closely as possible. The packet sizes, checksums, etc. would of course change, and some packets would not be present at all, depending on whether the encapsulation mimics a plain TCP connection or an SSL connection using the NULL cipher. However, all of this is quite hard to do with the API that OpenSSL provides to the application, and it would not be easy to integrate into the existing architecture of sslsniff.
So in theory, yes, it could be done, but in practice it is not so easy because .pcap files are an abstraction at the wrong layer.
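For illustration, here is a rough sketch of that "fabricate plausible packets" idea using Python and scapy (the addresses, ports, and segment size are invented, and only one direction of the stream is shown); it demonstrates the mechanics, not an integration with sslsniff:

```python
# A rough sketch: wrap a decrypted application-layer byte stream in
# synthetic Ethernet/IP/TCP headers so Wireshark can display it.
from scapy.all import Ether, IP, TCP, Raw, wrpcap

def stream_to_pcap(data: bytes, outfile: str, mss: int = 1400) -> None:
    packets, seq = [], 1000
    for off in range(0, len(data), mss):
        segment = data[off:off + mss]
        # Maintaining the sequence number lets Wireshark reassemble the
        # segments back into one contiguous stream.
        pkt = (
            Ether()
            / IP(src="10.0.0.1", dst="10.0.0.2")
            / TCP(sport=12345, dport=443, seq=seq, flags="PA")
            / Raw(load=segment)
        )
        packets.append(pkt)
        seq += len(segment)
    wrpcap(outfile, packets)

stream_to_pcap(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n", "decrypted.pcap")
```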