How much security is required for message storage and transmission? - wcf

I need to implement a very secure Web Service using WCF. I have read a lot of documents about security in WCF concerning authorization, authentication, and message encryption. The web service will use https, Windows Authentication for access to the WS, SQL Server Membership/Role Provider for user authentication and authorization on WS operations, and finally message encryption.
I read in one of the documents that it is good to consider security on each layer independently, i.e. Transport Layer security should be designed without relying on the Message Layer. Therefore, using SSL through https in combination with message encryption (using public/private key encryption and signatures) would be good practice, since https concerns the Transport Layer and message encryption concerns the Message Layer.
But a friend told me that [https + message encryption] is too much; https is sufficient.
What do you think?
Thanks.

If you have SSL, you still need to encrypt your messages if you don't really trust the server that stores them (it could have its files stolen), so this is all good practice.
There comes a point where you have a weakest link problem.
What is your weakest link?
Example: I spend $100,000,000 defending an airport from terrorists, so they go after a train station instead. Money and effort both wasted.
Ask yourself what the threat model is and design your security for that. TLS is a bare minimum for any Internet-based communications, but it doesn't matter if somebody can install a keystroke logger.

As you certainly understand, the role of Transport-Level Security is to secure the transmission of the message, whereas Message-Level Security is about securing the message itself.
It all depends on the attack vectors (or more generally the purpose) you're considering.
In both cases, the security models involved can serve two purposes: protection against eavesdropping (relying on encryption) and integrity protection (ultimately relying on signatures, which in most cases are based on public-key cryptography).
TLS with a server certificate only will provide you with security of the transport, and the client will know that the communication really comes from the server it expects (if configured properly, of course). In addition, if you use a client certificate, this will also guarantee to the server that the communication comes from a client that holds the private key for that client certificate.
However, when the data is no longer in transit, you rely on the security of the machine where it's used and stored. You might no longer be able to assert with certainty where the data came from, for example.
Message-level security doesn't rely on how the communication was made. Message-level signature allows you to know where the messages came from at a later date, independently of how they've been transferred. This can be useful for audit purposes. Message-level encryption would also reduce the risks of someone getting hold of the data if it's stored somewhere where some data could be taken (e.g. some intranet storage systems).
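The audit property described above — being able to verify later where a message came from, independently of the transport — can be sketched with a minimal example. This is a hedged illustration, not how WCF/WS-Security actually does it (WCF uses X.509 certificates and XML Signature); the shared key and the `payload|tag` framing are assumptions chosen for brevity.

```python
# Minimal sketch of message-level integrity: tag each message with an HMAC so
# its origin and integrity can be verified later, regardless of how it was
# transported or stored. SHARED_KEY and the framing are illustrative assumptions.
import hashlib
import hmac

SHARED_KEY = b"replace-with-a-real-secret"  # assumed out-of-band shared secret

def sign_message(payload: bytes) -> bytes:
    """Return the payload with an HMAC-SHA256 tag appended."""
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"|" + tag

def verify_message(signed: bytes) -> bytes:
    """Check the tag and return the original payload, or raise ValueError."""
    payload, _, tag = signed.rpartition(b"|")
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest().encode()
    # compare_digest avoids leaking the match position via timing
    if not hmac.compare_digest(tag, expected):
        raise ValueError("message failed integrity check")
    return payload
```

A stored, signed message can be re-verified months later for audit, which a TLS channel (whose protection ends when the connection closes) cannot provide.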

Basically, if the private key used to decrypt the messages has the same protection as the private key used for SSL authentication, and if the messages are not stored for longer than the connection lasts, then message-level encryption is certainly overkill.
OTOH, if you've got different servers, or if the key is stored e.g. using hardware security of sorts, or is only made available by user input, then it is good advice to secure the messages themselves as well. Application level security also makes sense for auditing purposes and against configuration mistakes, although personally I think signing the data (integrity protection) is more important in this respect.
Of course, the question can also become: if you're already using a web-service that uses SOAP/WSDL, why not use XML encrypt/sign? It's not that hard to configure. Note that it does certainly take more processor time and memory. Oh, one warning: don't even try it if the other side does not know what they are doing - you'll spend ages explaining it and even then you run into trouble if you want to change a single parameter later on.
Final hint: use standards and standardized software or you'll certainly run into crap. Spend some time getting to know how things work, and make sure you don't accept ill-formatted messages when you call verify (e.g. XML signing the wrong node, or accepting MD5 and such things).

Related

Intercepting, manipulating and forwarding API calls to a 3rd party API

this might be somewhat of a weird, long and convoluted question but hear me out.
I am running a licensed 3rd party closed-source proprietary software on my on-premise server that stores and manipulates data, the specifics of what it does are not important. One of the features of this software is that it has an API that accepts requests to insert/manipulate/retrieve data. Because of the poorly designed software, there is no mechanism to write internal scripts (at least not anymore, it has been deprecated in the newest versions) for the software or any events to attach to for writing code that further enhances the functionality of the software (further manipulation of the data according to preset rules, timestamping through a TSA of the incoming packages, etc.).
How can I bypass the need for an internal scripting facility in a way that still lets me, e.g., timestamp an incoming package and return an appropriate response via the API to the sender in case of an error?
I have thought about using the built-in database trigger mechanisms (specifically the MongoDB Change Streams API) to intercept the incoming data and add the required hash and other timestamping-related information directly into the database. This is a neat solution, except that in case of an error (there have been some instances where our timestamping authority API was down or not responding to requests) there is no way to inform the sender that the timestamping process has not gone through as expected and that the new data will not be accepted into the server (all data on the server must be timestamped by law).
Another way this could be done is by intercepting the API request somehow before it reaches its endpoint, doing whatever needs to be done to the data, and then forwarding the request on to the server's API endpoint and letting it do its thing. If I am not mistaken, the concept is somewhat similar to what a reverse proxy does on the network layer - it routes incoming requests according to rules set in the configuration, removes/adds headers, encrypts the connection to the server, etc.
Finally, my short question to this convoluted setup would be: what is the best way of tackling this problem, are there any software solutions or concepts that I should be researching?
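The "intercept, process, forward" idea from the question can be sketched as a small HTTP proxy. This is only an illustration of the shape of the solution: `UPSTREAM`, `timestamp_payload`, and the JSON payload format are hypothetical names invented here (the real product's API and a real TSA call would replace them). Note how a failure in the timestamping step is reported back to the sender, which the database-trigger approach cannot do.

```python
# Sketch of a "timestamping reverse proxy" placed in front of the vendor API.
# UPSTREAM and timestamp_payload() are illustrative stand-ins, not real endpoints.
import hashlib
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib import request

UPSTREAM = "http://localhost:8080/api"  # the real API endpoint (assumed)

def timestamp_payload(body: bytes) -> bytes:
    """Attach a hash and timestamp (stand-in for a real TSA round-trip)."""
    doc = json.loads(body)
    doc["sha256"] = hashlib.sha256(body).hexdigest()
    doc["timestamp"] = int(time.time())
    return json.dumps(doc).encode()

class Proxy(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        try:
            stamped = timestamp_payload(body)
        except Exception:
            # TSA down or bad input: tell the sender instead of silently dropping
            self.send_response(502)
            self.end_headers()
            self.wfile.write(b"timestamping failed; data not accepted")
            return
        upstream = request.Request(UPSTREAM, data=stamped, method="POST")
        with request.urlopen(upstream) as resp:
            self.send_response(resp.status)
            self.end_headers()
            self.wfile.write(resp.read())

# To run: HTTPServer(("localhost", 9090), Proxy).serve_forever()
```

In production you would more likely configure an existing reverse proxy (nginx, Envoy, an API gateway) with a small plugin/middleware than hand-roll the HTTP layer, but the request flow is the same.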

Would there be a compelling reason for implementing integrity check in a file transfer protocol, if the channel uses TLS?

I am developing a client server pair of applications to transfer files by streaming bytes over TCP/IP and the channel would use TLS always.
(Note: Due to certain OS related limitations SFTP or other such secure file transfer protocols cannot be used)
The application level protocol involves minimum but sufficient features to get the file to the other side.
I need to decide if the application level protocol needs to implement an integrity check (Ex: MD5).
Since TLS guarantees integrity, would this be redundant?
The use of TLS can provide you with some confidence that the data has not been changed (intentionally or otherwise) in transit, but not necessarily that the file that you intended to send is identical to the one that you receive.
There are plenty of other opportunities for the file to be corrupted/truncated/modified (such as when it's being read from the disk/database by the sender, or when it's written to disk by the receiver). Implementing your own integrity checking would help protect against those cases.
In terms of how you do the checking, if you're worried about malicious tampering then you should be checking a cryptographic signature (using something like GPG), rather than just a hash of the file. If you're going to use a hash then it's generally recommended to use a more modern algorithm such as SHA-256 rather than the (legacy) MD5 algorithm - although most of the issues with MD5 won't affect you if you're only concerned about accidental corruption.
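An application-level check of this kind can be as simple as both ends hashing the file and comparing digests. A minimal sketch (chunk size and function name are arbitrary choices here):

```python
# Application-level integrity check: hash the file in chunks with SHA-256 on
# both the sending and receiving side, then compare the hex digests.
import hashlib

def file_sha256(path: str, chunk_size: int = 64 * 1024) -> str:
    """Stream the file through SHA-256 so large files never sit fully in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

The sender computes the digest as it reads the file, transmits it alongside the data, and the receiver recomputes it after writing to disk; a mismatch catches exactly the corruption that happens outside the TLS channel.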

Amazon S3 Data integrity MD5 vs SSL/TLS

I'm currently working with the Amazon S3 API, and have a general question about the server-side integrity checks that can be done if you provide the MD5 hash when posting an object.
I'm not sure I understand whether the integrity check is required if you send the data (I'm assuming the object data you're posting as well) via SSL/TLS, which provides its own support for data integrity in transit.
Should you send the digest regardless if you're posting over SSL/TLS? Isn't it superfluous to do so? Or is there something I'm missing?
Thanks.
Integrity checking provided by TLS provides no guarantees about what happens going into the TLS wrapper at the sender side, or coming out of it and being written to disk at the receiver.
So, no, it is not entirely superfluous because TLS is not completely end-to-end -- the unencrypted data is still processed, however little, on both ends of the connection... and any hardware or software that touches the unencrypted bits can malfunction and mangle them.
S3 gives you an integrity checking mechanism -- two, if you use both Content-MD5 and x-amz-content-sha256 -- and it seems unthinkable to try to justify bypassing them.
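For reference, the `Content-MD5` header value is the base64 encoding of the raw binary MD5 digest of the object bytes, not the hex string (a common mistake). A minimal sketch of computing it client-side:

```python
# Derive the value S3 expects in the Content-MD5 header: base64 of the raw
# (binary) MD5 digest of the object bytes -- not the hexadecimal digest.
import base64
import hashlib

def content_md5(body: bytes) -> str:
    return base64.b64encode(hashlib.md5(body).digest()).decode("ascii")
```

If the digest you send doesn't match what S3 computes on receipt, the PUT is rejected, so corruption anywhere between your hashing step and S3's storage layer is caught.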

Two-way encryption/authentication between servers and clients

To be honest I don't know if this is the appropriate title since I am completely new to this area, but I will try my best to explain below.
The scenario can be modeled as a group of functionally identical servers and a group of functionally identical clients. Assume each client knows the endpoints of all the servers (possibly from a broker or some kind of name service), and randomly chooses one to talk to.
Problem 1: The client and the server first need to authenticate themselves to each other (i.e. the client must show the server that it's a valid client, vice versa).
Problem 2: After that, the client and server talk to each other over some kind of encryption.
For Problem 1, I don't know what the best solution is. For Problem 2, I'm thinking about letting each client create a private key and give the corresponding public key to the server it talks to right after authentication, so that no one else can decrypt its messages; and letting all servers share a private key and distribute the corresponding public key to all clients, so that the external world (including the clients) can't decrypt what the clients send to the servers.
These are probably very naive approaches though, so I'd really appreciate any help & thoughts on the problems. Thank you.
I asked a similar question about half a year ago here, I've been redirected to Information Security.
After reading through my answer and rethinking your question, if you still have questions that are this broad, I suggest asking there. StackOverflow, from what I know, is more about programming (and thus security in programming) than security concepts. Either way, you will probably have to migrate there at some point during your project.
To begin with, you need to seriously consider what needs protecting in your system. Like here (check Gilles' comment and others), one of the first and most important things to do is to think over what security measures you have to take. You just mentioned authentication and encryption, but there are many more things that are important, like data integrity. Check wiki page for security measures. After knowing more about these, you can choose what (if any) encryption algorithms, hashing functions and others you need.
For example, forgetting about data integrity means forgetting about hashing, which is the most popular security measure I encounter on SO. By applying encryption, you can 'merely' expect that no one else is able to read the message. But you cannot be sure it reaches the destination unchanged (if at all), whether because of interceptors or signal loss. I assume you need to be sure.
A typical architecture I am aware of uses asymmetric encryption to exchange a symmetric session key and then communicates using that symmetric key. This is because public-key infrastructure (PKI) assumes that the key of one of the sides is publicly known, which makes key distribution easier but encryption much slower (e.g. due to key length: RSA [asymmetric] starts at 512 bits, but a typical key length now is 2048 bits, compared to the weakest still-secure AES [symmetric] key length of 128 bits). The problem is, as you stated, the server and user are not authenticated to each other, so the server does not really know if the party sending the data really is who they claim to be. Also, the data could have been changed in transit.
To prevent that, you need a so-called 'key exchange algorithm', such as one of the Diffie-Hellman schemes (so DH might be the 'raw' answer to both of your problems).
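The Diffie-Hellman idea mentioned above can be shown with a toy example. This is illustrative only: the parameters are far too small for real use, and plain DH is unauthenticated (vulnerable to man-in-the-middle), which is why real deployments use vetted groups or X25519 and sign the exchange.

```python
# Toy Diffie-Hellman key agreement. Both sides end up with the same shared
# secret without ever transmitting it. NOT for production: tiny parameters,
# no authentication of either party.
import secrets

P = 2**127 - 1   # a Mersenne prime; fine for a demo, far too small in practice
G = 3            # generator (illustrative choice)

def keypair():
    priv = secrets.randbelow(P - 2) + 2    # random private exponent
    return priv, pow(G, priv, P)           # (private, public) pair

a_priv, a_pub = keypair()                  # Alice
b_priv, b_pub = keypair()                  # Bob

# Each side combines the other's public value with its own private key:
shared_a = pow(b_pub, a_priv, P)
shared_b = pow(a_pub, b_priv, P)
assert shared_a == shared_b                # both ends now hold the same secret
```

The shared value would then be fed through a key-derivation function to produce the symmetric (e.g. AES) session key; TLS does essentially this for you, which is why the answer below recommends reusing an existing protocol rather than building one.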
Taking all above into consideration, you might want to use one (or more) of the popular protocols and/or services to define your architecture. Popular ones are SSH, SSL/TLS and IPSec. Read about them, define what services you need, check if they are present in one of the services above and you are willing to use the service. If not, you can always design your own using raw crypto algorithms and digests (hashes).

Example of Intermediate System for Soap Processing

When you read about WCF Message Security and compare it to Transport security, one of the drawbacks that they always mention is that transport security is point-to-point and can't secure a message routed through intermediaries.
What is an example of these intermediaries. When would you use one?
All my experience with services is with point-to-point communication so I'm trying to build a context for when you might encounter a SOAP intermediary or router or proxy.
There are other questions on SO that beat around what I'm getting at but don't directly answer my question. For example, in this question:
Does SSL provide point-to-point security?
the answer says:
By 'intermediate system', I think that quote means a system that must access the message in the middle (intentionally or not)... not just a router, but something actually decrypting, viewing and/or modifying the message.
My question is: what would be an example of a system that needs to view/decrypt/modify the message, and why would it need to do that?
I just found a partial answer in this document from MS:
http://msdn.microsoft.com/en-us/library/ff647097.aspx
Part of it says:
Message layer security that uses X.509 certificates is flexible enough to provide point-to-point or end-to-end security. This allows messages to be persisted in a secure state for short periods for queue-based processing or for longer periods in an archived state.
So an example would be if a message were queued or stored off to disk. It would stay encrypted through the time it was processed and deleted from disk, presumably.
Edit: Here is a really good article on building a WCF Service Router that explains some of the reasons you might want to use one: http://msdn.microsoft.com/en-us/magazine/cc500646.aspx
See also: http://msdn.microsoft.com/en-us/library/ms731081.aspx