Modify sslsniff to save .pcap

Would it be possible to modify sslsniff, e.g. by using libpcap, so that it creates a .pcap file containing the decrypted network traffic? Since sslsniff can decrypt packet data, I thought it might be possible to replace the encrypted data with the decrypted data so that I can view it in Wireshark. Is this possible to do?

.pcap files store network layer packets with a link layer specific header. However, the result of decrypting an SSL connection is a bidirectional stream of bytes at the application layer. There is no straightforward way of splitting that stream of bytes back into network layer packets with link layer headers. It would be possible, in theory, to split the stream into arbitrary TCP segments, prepend IP and link layer headers, and try very hard to make each packet's addresses, timestamps etc. match the corresponding ones from the original packets as closely as possible. The packet sizes, checksums etc. would of course change, and some packets would not be present at all, depending on whether the encapsulation mimics a plain TCP connection or an SSL connection using the NULL cipher. However, all of this is quite hard to do with the API that OpenSSL provides to the application, and it would not be easy to integrate into the existing architecture of sslsniff.
So in theory, yes, it could be done, but in practice it is not so easy because .pcap files are an abstraction at the wrong layer.
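For the curious, here is a rough sketch of what that "in theory" route would involve. It is written in Java on the assumption that this would be a standalone post-processing tool rather than a patch to sslsniff's own code: write a pcap global header once, then wrap each chunk of decrypted application data in fabricated IPv4 and TCP headers. Every field of those fake headers - addresses, ports, sequence numbers, the zeroed checksums - is invented, which is exactly the guesswork described above.

    import java.io.DataOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    // Hypothetical sketch: wrap decrypted application data in fabricated
    // IPv4/TCP headers so Wireshark will open it. Checksums stay zero and
    // sequence numbers are invented, so expect Wireshark warnings.
    public class FakePcapWriter {
        private final DataOutputStream out;
        private long seq = 1; // invented TCP sequence number

        public FakePcapWriter(String path) throws IOException {
            out = new DataOutputStream(new FileOutputStream(path));
            // pcap global header, written big-endian throughout
            out.writeInt(0xa1b2c3d4); // magic number
            out.writeShort(2);        // version major
            out.writeShort(4);        // version minor
            out.writeInt(0);          // thiszone
            out.writeInt(0);          // sigfigs
            out.writeInt(65535);      // snaplen
            out.writeInt(101);        // LINKTYPE_RAW: records start at the IPv4 header
        }

        // Wrap one chunk (must fit one IP packet, i.e. < ~64 KB) and record it.
        public void writeChunk(byte[] payload, int srcIp, int dstIp,
                               int srcPort, int dstPort) throws IOException {
            int ipLen = 20 + 20 + payload.length;
            long now = System.currentTimeMillis();
            // pcap record header
            out.writeInt((int) (now / 1000));        // ts_sec
            out.writeInt((int) (now % 1000) * 1000); // ts_usec
            out.writeInt(ipLen);                     // incl_len
            out.writeInt(ipLen);                     // orig_len
            // fabricated IPv4 header
            out.writeShort(0x4500);   // version 4, IHL 5
            out.writeShort(ipLen);    // total length
            out.writeInt(0);          // id, flags, fragment offset
            out.writeShort(0x4006);   // TTL 64, protocol 6 (TCP)
            out.writeShort(0);        // header checksum: not computed
            out.writeInt(srcIp);
            out.writeInt(dstIp);
            // fabricated TCP header
            out.writeShort(srcPort);
            out.writeShort(dstPort);
            out.writeInt((int) seq);  // invented sequence number
            out.writeInt(0);          // ack number: unknown
            out.writeShort(0x5018);   // data offset 5, flags PSH+ACK
            out.writeShort(65535);    // window
            out.writeInt(0);          // checksum and urgent pointer: zero
            out.write(payload);
            seq += payload.length;
        }

        public void close() throws IOException { out.close(); }
    }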

Using OpenSSL TLS with or without BIO?

I've been reading a lot about OpenSSL, specifically the TLS and DTLS APIs. Most of it makes sense; it's a pretty intuitive API once you understand it. One thing has really got me scratching my head, though...
When/why would I use BIOs?
For example, this wiki page demonstrates setting up a barebones TLS server. There isn't even a mention of BIOs anywhere in the example.
Now this page uses BIOs exclusively, never using the read and write functions of the SSL struct. Granted, it's from 2013, but it's not the only one that uses BIOs.
To make it even more confusing, this man page suggests that the SSL struct has an "underlying BIO" without it ever needing to be set explicitly.
So why would I use BIOs if I can get away with using SSL_read() and SSL_write()? What are the advantages? Why do some examples use BIOs and others don't? What Is the Airspeed Velocity of an Unladen Swallow?
BIOs are always there, but they might be hidden by the simpler interface. Directly using the BIO interface is useful if you want more control - with more effort. If you just want to use TLS on a TCP socket, then the simple interface is usually sufficient. If you instead want to use TLS over your own underlying transport layer, or if you want more control over how it interacts with the transport layer, then you need BIOs.
An example of such a use case is this proposal, where TLS is tunneled as JSON inside HTTPS, i.e. the TLS frames are encoded in JSON, which is then transferred using POST requests and responses. This can be achieved by handling the TLS with memory BIOs, which are then encoded to and decoded from JSON.
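OpenSSL's memory BIOs are a C facility, but the same decoupling exists in Java's standard library as SSLEngine, which produces and consumes raw TLS records in ByteBuffers and leaves the transport entirely up to the application. A minimal client-side sketch of that idea (the host name is a placeholder, and the Base64 printing stands in for "encode the frame and POST it somewhere"):

    import java.nio.ByteBuffer;
    import java.util.Base64;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLEngine;
    import javax.net.ssl.SSLEngineResult;

    // Java analogue of TLS over memory BIOs: the engine emits TLS records
    // into a plain ByteBuffer instead of a socket, so the application can
    // carry them over any channel it likes (JSON, POST bodies, pigeons).
    public class MemoryTlsSketch {
        public static void main(String[] args) throws Exception {
            SSLEngine engine = SSLContext.getDefault().createSSLEngine("example.com", 443);
            engine.setUseClientMode(true);
            engine.beginHandshake();

            ByteBuffer appOut = ByteBuffer.allocate(0); // no application data yet
            ByteBuffer netOut = ByteBuffer.allocate(engine.getSession().getPacketBufferSize());

            // wrap() would normally feed a socket; here it just fills netOut.
            // A real client loops on getHandshakeStatus(), wrapping and
            // unwrapping until the handshake completes; one step shown here.
            SSLEngineResult result = engine.wrap(appOut, netOut);
            netOut.flip();
            byte[] record = new byte[netOut.remaining()];
            netOut.get(record);

            System.out.println(result.getHandshakeStatus()); // NEED_UNWRAP after the ClientHello
            System.out.println(Base64.getEncoder().encodeToString(record));
        }
    }
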
First, your Q is not very clear. SSL is (a typedef for) a C struct type, and you can't use the dot operator on a struct type in C, only an instance. Even assuming you meant 'an instance of SSL', as people sometimes do, in older versions (through 1.0.2) it did not have members read and write, and in 1.1.0 up it is opaque -- you don't even know what its members are.
Second, there are two different levels of BIO usage applicable to the SSL library. The SSL/TLS connection (represented by the SSL object, plus some related things linked to it like the session) always uses two BIOs to respectively send and receive protocol data -- including both the protocol data that contains the application data you send with SSL_write and receive with SSL_read, and the SSL/TLS handshake that is handled within the library. Much as Steffen describes, these are normally both set to a socket-BIO that sends to and receives from the appropriate remote host process, but they can instead be set to BIOs that do something else in-between, or even instead. (This normal case is automatically created by SSL_set_{,r,w}fd, which, it should be noted, on Windows actually takes a socket handle -- but not any other file handle; only on Unix are socket descriptors semi-interchangeable with file descriptors.)
Separately, the SSL/TLS connection itself can be 'wrapped' in an ssl-BIO. This allows an application to handle an SSL/TLS connection using mostly the same API calls as a plain TCP connection (using a socket-BIO) or a local file, as well as the provided 'filter' BIOs like a digest (md) BIO or a base64 encoding/decoding BIO, and any additional BIOs you add. This is the case for the IBM webpage you linked (which is for a client not a server BTW). This is similar to the Unix 'everything is (mostly) a file' philosophy, where for example the utility program grep, by simply calling read on fd 0, can search data from a file, the terminal, a pipe from another program, or (if run under inetd or similar) from a remote system using TCP (but not SSL/TLS, because that isn't in the OS). I haven't encountered many cases where it is particularly beneficial to be able to easily interchange SSL/TLS data with some other type of source/sink, but OpenSSL does provide the ability.
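Incidentally, that "wrap TLS so it looks like any other connection" convenience of an ssl-BIO has a rough everyday counterpart in Java, for comparison: SSLSocketFactory hands back a plain java.net.Socket, so code that only reads and writes streams never needs to know TLS is underneath (example.com here is a placeholder):

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.Socket;
    import javax.net.ssl.SSLSocketFactory;

    // TLS hidden behind the ordinary Socket/stream API: only the factory
    // call differs from a plaintext TCP client.
    public class SslAsPlainSocket {
        public static void main(String[] args) throws Exception {
            Socket s = SSLSocketFactory.getDefault().createSocket("example.com", 443);
            OutputStream out = s.getOutputStream();
            out.write("HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
                    .getBytes("US-ASCII"));
            out.flush();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(s.getInputStream(), "US-ASCII"));
            System.out.println(in.readLine()); // e.g. "HTTP/1.1 200 OK"
            s.close();
        }
    }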

How do I download an encrypted s3 object without decryption?

I'm using Server-Side Encryption with Customer-Provided Encryption Keys (SSE-C) to store some files. I want to download them but not decrypt them just yet. The use case is something like the Game of Thrones finale: I want cable operators to have the data, but to give them the key only at the last second. But the decryption headers are mandatory when the file is encrypted. Maybe I can toggle the flag that marks the file as encrypted?
For this application, you wouldn't use any variant of SSE.
SSE prevents your content from being stored on S3's internal disks in a form where accidental or deliberate compromise of those physical disks or their raw bytes -- however unlikely -- would expose your content to unauthorized personnel. That is fundamentally the purpose of all varieties of SSE. The variants center around how the keys are managed.
Server-side encryption is about data encryption at rest—that is, Amazon S3 encrypts your data at the object level as it writes it to disks in its data centers and decrypts it for you when you access it.
https://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
SSE-encrypted content is decrypted by S3 and transiently re-encrypted using TLS for transmission over the network during the download. The final result in the client's hands is unencrypted.
For the application described, you would just upload the encrypted content to S3 without S3 being aware of the (external, already-applied) encryption.
If you also used some kind of SSE, that would be unrelated to the external encryption that you would also apply. Arguably, SSE would be somewhat redundant if the content is already encrypted before upload.
In fact, in the application described, depending on sensitivity and value of the content, each recipient would potentially have different keys and/or a slightly different source file (thus a substantially different encrypted file), so that the source of a leak could be identified by identifying which source variant was compromised.
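To make the encrypt-before-upload approach concrete, here is a minimal Java sketch using only the JDK. The AES/GCM choice, the file names and the IV-plus-ciphertext layout are illustrative decisions, not anything S3 requires; the resulting .enc blob is uploaded with any S3 client as an ordinary object, and the key is handed to the cable operators at the last second:

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.GCMParameterSpec;

    // Client-side encryption: S3 only ever sees opaque ciphertext.
    public class EncryptThenUpload {
        public static void main(String[] args) throws Exception {
            SecretKey key = KeyGenerator.getInstance("AES").generateKey();

            byte[] iv = new byte[12]; // fresh random nonce per object
            new SecureRandom().nextBytes(iv);

            Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
            cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
            byte[] ciphertext = cipher.doFinal(Files.readAllBytes(Paths.get("episode.mp4")));

            // Store the IV in front of the ciphertext; upload the blob as-is.
            byte[] blob = new byte[iv.length + ciphertext.length];
            System.arraycopy(iv, 0, blob, 0, iv.length);
            System.arraycopy(ciphertext, 0, blob, iv.length, ciphertext.length);
            Files.write(Paths.get("episode.mp4.enc"), blob);
            // The SecretKey is what you release at the last second -- without
            // it, neither S3 nor the recipients can read the content.
        }
    }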

What is the relation between Serialization and streaming?

Whenever I find articles or videos talking about streams, they always seem to be talking about serialization as well.
What is the relation between the two? Or, to be specific:
Could we say that a data stream always needs serialization, or could we find some data stream without serialization?
Firstly, it is useful to have a reminder of serial vs parallel communication: if we take the simple example of transmitting a byte, in the parallel case all 8 bits are sent at the same time, while in the serial case the 8 bits are sent one by one and the byte is built up again on the receiving side.
For your video domain example, if you imagine a frame of a video as being a large collection of bytes, let's say 720 by 1280 pixels with each pixel represented by a byte, then we need 921,600 bytes to represent the frame.
If you are streaming the video you need to send each frame (plus overhead which we'll ignore here for simplicity) from the server to the client device, hence you need to send the 921,600 bytes for each frame.
If you had a very (very!) wide parallel connection that could transmit 921,600 bytes in parallel between the server and the client in a single communication, then this would be easy to understand.
However, this is almost always not the case, even for much smaller data structures, so serialisation is the name generally given to the process of taking the 921,600 bytes and breaking them down into the size which you can transmit - and that size is often one bit at a time.
Generally a video will be broken down into packets and the packets transmitted to the client. The packets themselves are also just collections of bytes, and if the connection allows only a single bit of information to be transmitted at a time, then each packet needs to be broken down and sent 'serially', one bit at a time.
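As a concrete illustration of that breaking-down step, here is a Java sketch that sends one 921,600 byte frame as a series of UDP packets. The 1,400 byte chunk size, the 4-byte offset header and the address are arbitrary choices for the example; real streaming protocols (RTP and friends) do considerably more than this:

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    // Serialize one video frame into a sequence of UDP packets.
    public class FrameSender {
        public static void main(String[] args) throws Exception {
            byte[] frame = new byte[921_600];  // one 720x1280 8-bit frame
            int chunk = 1400;                  // fits in a typical MTU
            DatagramSocket socket = new DatagramSocket();
            InetAddress dst = InetAddress.getByName("203.0.113.7");

            for (int off = 0; off < frame.length; off += chunk) {
                int len = Math.min(chunk, frame.length - off);
                byte[] packet = new byte[4 + len];
                // 4-byte offset so the receiver can rebuild the frame in order
                packet[0] = (byte) (off >>> 24);
                packet[1] = (byte) (off >>> 16);
                packet[2] = (byte) (off >>> 8);
                packet[3] = (byte) off;
                System.arraycopy(frame, off, packet, 4, len);
                socket.send(new DatagramPacket(packet, packet.length, dst, 5004));
            }
            socket.close();
        }
    }
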
To complicate things, as is commonly the case in computer science and communications, the terms can mean different things in different contexts.
For example, you may see it mentioned that you can either stream or 'serialise an object' in some client-server communication. What this generally means is that you can either send the raw data 'stream' and let the client be responsible for how to interpret it, or you can use a framework or underlying mechanism which will take an object, convert it into a format that can be transmitted serially, and then reconstruct it on the other end and give it to the client. In fact, the actual communication is serial in both cases (if it is using a serial communication channel), so the terms are being used in a different way here.

UDP Client and Server Buffer Agreement

Hi, I am writing a program that will send a file from client to server over a UDP socket, using different packet sizes, for example 512 B, 1 KB and 2 KB, and I don't want to use a fixed buffer size in the receiver (server). I need some code in Java that will allow both server and client to agree upon a packet size before the transfer starts. Many thanks.
Don't forget that UDP packets may be fragmented, duplicated and lost. There is a whole bunch of things to take care of, starting with retransmission of lost packets.
I hate to give a "don't do this" kind of answer, but for this one, just use TCP. And if you want some user-level "packets", you can have them with TCP too (prefix each one with its length; that's enough), as sketched below.
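A minimal sketch of that length-prefix idea in Java (the method names are mine, and error handling is omitted):

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;

    // User-level "packets" over TCP: each message is a 4-byte length
    // followed by exactly that many bytes, so the receiver needs no
    // fixed buffer size agreed in advance.
    public class FramedTcp {
        public static void sendPacket(DataOutputStream out, byte[] data) throws IOException {
            out.writeInt(data.length); // big-endian length prefix
            out.write(data);
            out.flush();
        }

        public static byte[] readPacket(DataInputStream in) throws IOException {
            int len = in.readInt();       // read the prefix...
            byte[] data = new byte[len];
            in.readFully(data);           // ...then exactly that many bytes
            return data;
        }
    }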

Securing a UDP connection

For a personal MMO game project I am implementing a homebrew reliable UDP-based protocol in Java. Given my current setup, I believe it would be relatively simple for a snooper to hijack a session, so in order to prevent this I am taking the opportunity to learn a little cryptology. It's very interesting.
I can successfully create a shared secret key between the client and server using a Diffie-Hellman key exchange (a very clever concept), but now I need to use this to guarantee the authenticity of the packets. My preliminary testing so far has shown that the couple of different ciphers I've tried bloat the amount of data a bit, but I would like to keep things as small and fast as possible.
Given that I am only trying to authenticate the packet and not necessarily conceal the entire payload, I have the idea that I could put an 8 byte session ID generated from the secret key into the packet header, encrypt the whole packet, and hash it back down to 8 bytes. I then take the unencrypted packet, put the 8 byte hash in the place of the session ID, and send it off.
Would this be secure? It feels a little inelegant to encrypt the whole packet only to send it unencrypted - is there a better/faster way to achieve my goal? Please note that I would like to do this myself, since it's good experience, so I'm not so interested in 3rd party libraries or other protocol options.
If both peers have access to a shared secret (which they should, since you're talking about Diffie-Hellman), you could simply store a hash of the datagram in its header. The receiver checks to see if it matches.
As an added security measure, you could also add a "challenge" field to your datagram and use it somewhere in the hashing process to prevent replays.
So this hash should cover:
The shared secret
A challenge
The contents of the datagram
EDIT
The "challenge" is a strictly incrementing number. You add it to your datagram simply to change the hash every time you send a new message. If someone intercepts a message, it cannot resend it: the receiver makes sure it doesn't accept it.