Test driven development: Testing SSL

We are using the Akka framework and have recently switched our communication within the cluster to being encrypted with SSL.
It seems to be working, but I fear that we may accidentally disable it in future development and not notice.
How would you go about writing a unit test that guarantees that your communication is encrypted, so that if someone switches it back to cleartext we can detect it at build time?
This doesn't have to be Akka-specific; I'd be happy with either kind of answer.

I would suggest a negative test that attempts unencrypted communication. On an encrypted channel, failure is the expected result of such a test. If the encryption is ever omitted, however, the cleartext attempt will succeed and the test will signal that to you.
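A rough, not Akka-specific sketch of that check in JUnit might look like the following, assuming a test fixture has started a node on HOST:PORT (placeholder values) and that the test JVM trusts the node's certificate (e.g. via -Djavax.net.ssl.trustStore). One test proves the port still completes a TLS handshake; the negative test proves a cleartext probe gets nothing useful back.

    import static org.junit.Assert.assertTrue;

    import java.io.IOException;
    import java.net.Socket;
    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.SSLSocketFactory;
    import org.junit.Test;

    public class EncryptionGuardTest {

        // Placeholder values: point these at a node started by your test fixture.
        private static final String HOST = "127.0.0.1";
        private static final int PORT = 2552;

        @Test
        public void portStillSpeaksTls() throws Exception {
            try (SSLSocket socket =
                     (SSLSocket) SSLSocketFactory.getDefault().createSocket(HOST, PORT)) {
                socket.startHandshake(); // throws if the endpoint no longer negotiates TLS
            }
        }

        @Test
        public void cleartextIsNotAnswered() throws Exception {
            try (Socket socket = new Socket(HOST, PORT)) {
                socket.setSoTimeout(2000);
                socket.getOutputStream().write("cleartext probe\r\n".getBytes());
                socket.getOutputStream().flush();
                int first = socket.getInputStream().read();
                // A TLS-only endpoint either closes the connection (-1) or replies with
                // a TLS alert record (content type 21) -- never plain protocol data.
                assertTrue("endpoint answered a cleartext request",
                        first == -1 || first == 21);
            } catch (IOException expected) {
                // a connection reset is also an acceptable (passing) outcome
            }
        }
    }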

Related

What is the most reliable high-level language / library for non-blocking SSL sockets?

Everybody and their dog can write blocking socket code, but non-blocking is considerably harder. Add SSL to it, which even at the protocol level complicates non-blocking processing, and you can be out of luck. Obviously, it can be done in C using direct OpenSSL calls, but that gets complicated and feels like reinventing the wheel. Higher-level languages and libraries, on the other hand, often have buggy and unreliable non-blocking support.
I've already tried the following:
Ruby with kgio. It included something called kgio_monkey_ext, which has apparently disappeared from the internet and was never documented. It didn't work; it just kept blocking.
Perl with AnyEvent and Net::SSLeay. It mostly works, but it occasionally loses a socket and occasionally blocks, or rather hangs. I've also found this article, which describes some problems with Net::SSLeay's support for non-blocking sockets and may or may not be related: http://devpit.org/wiki/OpenSSL_with_nonblocking_sockets_%28in_Perl%29
My motivation is to write a sort of SSL connect proxy, which accepts requests from clients in the form of a server address and port, connects to that server with SSL/TLS, and then proxies the traffic between the plaintext client and the SSL server, while responding to other requests in parallel.
Is there any language and/or library better suited to this task than Perl with AnyEvent and Net::SSLeay, in terms of reliability (not blocking, not leaking memory, not leaking sockets), clearer documentation/examples, and how much code I would need to write?
Alternatively, is this already implemented somewhere, in a form I can modify to support the request protocol I need?
Or should I try harder with Perl?
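For what it's worth, here is a rough blocking sketch in Java (one thread per connection, with a made-up one-line "host port" request format) just to pin down the proxy flow described above; the open question is how to get the same behaviour non-blocking and without the reliability problems listed.

    import java.io.IOException;
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;
    import javax.net.ssl.SSLSocketFactory;

    public class ConnectProxySketch {

        public static void main(String[] args) throws IOException {
            try (ServerSocket listener = new ServerSocket(8443)) {    // placeholder listen port
                while (true) {
                    Socket client = listener.accept();
                    new Thread(() -> handle(client)).start();         // blocking + threads, for illustration only
                }
            }
        }

        private static void handle(Socket client) {
            try (Socket plain = client) {
                // Read the made-up request line: "host port"
                String[] request = readLine(plain.getInputStream()).split("\\s+");
                try (Socket upstream = SSLSocketFactory.getDefault()
                        .createSocket(request[0], Integer.parseInt(request[1]))) {
                    // Pump bytes both ways; closing the sockets on exit unblocks the reverse pump.
                    new Thread(() -> pump(upstream, plain)).start();
                    pump(plain, upstream);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        private static String readLine(InputStream in) throws IOException {
            StringBuilder line = new StringBuilder();
            int c;
            while ((c = in.read()) != -1 && c != '\n') {
                line.append((char) c);
            }
            return line.toString().trim();
        }

        private static void pump(Socket from, Socket to) {
            try {
                InputStream in = from.getInputStream();
                OutputStream out = to.getOutputStream();
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                    out.flush();
                }
            } catch (IOException ignored) {
                // one side closed; let the other direction wind down as well
            }
        }
    }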

What causes a SOAP service to keep disconnecting TLS clients after responding to a single message?

I loaded a client-side .svclog file in Microsoft Service Trace Viewer, and there are a lot of entries in the log saying "setting up secure session" and "close secure session". On the server side, I can see many instances of trust/RST/SCT/Cancel, indicating that the connections are being closed on the server side, but only after giving a response to a SOAP message. It seems like every web service call involves setting up a TLS session for SOAP, and then the connection being closed immediately after sending a response, requiring that TLS be set up again for the very next call.
I read this article: https://blogs.technet.microsoft.com/tspring/2015/02/23/poor-mans-guide-to-troubleshooting-tls-failures/
It said:
Keep in mind that TCP resets should always be expected at some point as the client closes out the session to the server. However, if there are a high volume of TCP resets with little or no “Application Data” (traffic which contains the encapsulated encrypted data between client and server) then you likely have a problem. Particularly if the server side is resetting the connection as opposed to the client.
Unfortunately, the article doesn't expand on this, because it is exactly what I am seeing!
This is a net.tcp web service installed in some customer environment, set up to use Windows authentication.
What's the next step in my diagnosis?
Most likely the behavior you are seeing is normal, and unless you are experiencing some problems I would not be concerned. The MSFT document you quote is referring to TCP resets, but you said your logs show trust/RST/SCT/Cancel entries, and in that context RST means RequestSecurityToken. In other words, your log messages don't in any way imply that there are TCP reset (RST) frames occurring.
The Web Services Secure Conversation Language (WS-SecureConversation) spec (here) says:
It is not uncommon for a requestor to be done with a security context token before it expires. In such cases the requestor can explicitly cancel the security context using this specialized binding based on the WS-Trust Cancel binding. The following Action URIs are used with this binding:
http://schemas.xmlsoap.org/ws/2005/02/trust/RST/SCT/Cancel
http://schemas.xmlsoap.org/ws/2005/02/trust/RSTR/SCT/Cancel
Once a security context has been cancelled it MUST NOT be allowed for authentication or authorization or allow renewal. Proof of possession of the key associated with the security context MUST be proven in order for the context to be cancelled.
If you actually are experiencing transport problems due to unexpected TCP RST frames, or if you are seeing them and are curious to understand their underlying cause, then you'll need to capture network traffic to see how and why TCP resets are occurring, and whether they are normal or abnormal.
I'd do that by firing up WireShark and looking at the frames. If you see FIN, ACK messages from each side, then the connection is being closed gracefully after a waiting period. Otherwise you'll see RST frames for a variety of reasons: application resets (performed to avoid tying up a lot of ports in wait states), a bad sequence number when re-accessing a port that's in a wait state, router or firewall RST messages (typically sent in both directions), retransmit timeouts, port-choice RST messages, and others.
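If it helps, a few WireShark display filters that are commonly used for this kind of triage are listed below; the IP address is just a placeholder for your server:

    tcp.flags.reset == 1                               -- show only TCP reset frames
    tcp.analysis.flags                                 -- retransmissions, duplicate ACKs and other TCP-level trouble
    tls.record.content_type == 21                      -- TLS alert records (use ssl.record.content_type on older WireShark builds)
    ip.addr == 203.0.113.10 && tcp.flags.reset == 1    -- narrow any filter to one host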
There are lots of resources to help with TCP traffic analysis. You might find it helpful to take a look at https://blogs.technet.microsoft.com/networking/2009/08/12/where-do-resets-come-from-no-the-stork-does-not-bring-them/ for a quick overview.
If you're not familiar with WireShark it can seem a little complicated, but the thing you want to do here is very simple and you can get your answer very quickly even with no prior experience. Just search for wireshark tutorials and you'll find one that fits your cognitive style.
You can also use WireShark to troubleshoot higher level protocols, including TLS. You can find information about that in many places. I'll just list a few to get you started:
WireShark documentation on SSL is here.
Wikiversity section on HTTPS is here.
A 5-minute youtube tutorial for looking at SSL traffic is here.
I believe this covers your next diagnostic step reasonably well, but if not, feel free to post more information and I can try to provide a better answer.

How can I use Apache to load balance a MarkLogic cluster

Hi, I am new to MarkLogic and Apache. I have been given the task of using Apache as a load balancer for our MarkLogic cluster of 3 machines. The MarkLogic cluster is currently running on Linux servers.
How can we achieve this? Any information regarding this would be helpful.
You could use mod_proxy_balancer. How you configure it depends on which MarkLogic client you would like to use. If you would like to use the Java Client API, please follow the second example here to allow Apache to generate stickiness cookies. If you would like to use XCC, please configure it to use the ML-Server-generated or backend-generated "SessionID" cookie.
The difference here is that XCC uses sessions whereas the Java Client API builds on the REST API which is stateless, so there are no sessions. However, even in the Java Client API when you use multi-request transactions, that imposes state for the duration of that transaction so the load balancer needs a way to route requests during that transaction to the correct node in the MarkLogic cluster. The stickiness cookie will be resent by the Java Client API with every request that uses a Transaction so the load balancer can maintain that stickiness for requests related to that transaction.
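To make that concrete, the Apache-generated stickiness setup might look roughly like the sketch below (hostnames and the port are placeholders; the pattern mirrors the mod_proxy_balancer documentation example referenced above):

    # mod_headers + mod_proxy_balancer: Apache issues a ROUTEID cookie and
    # routes every request that carries it back to the same MarkLogic node.
    Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e; path=/" env=BALANCER_ROUTE_CHANGED

    <Proxy "balancer://mlcluster">
        BalancerMember "http://ml-node1:8000" route=1
        BalancerMember "http://ml-node2:8000" route=2
        BalancerMember "http://ml-node3:8000" route=3
        ProxySet stickysession=ROUTEID
    </Proxy>

    ProxyPass        "/" "balancer://mlcluster/"
    ProxyPassReverse "/" "balancer://mlcluster/"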
As always, do some testing of your configuration to make sure you got it right. Properly configuring apache plugins is an advanced skill. Since you are new to apache, your best hope of ensuring you got it right is checking with an HTTP monitoring tool like WireShark to look at the HTTP traffic from your application to MarkLogic Server to make sure things are going to the correct node in the cluster as expected.
Note that even with the client APIs (Java, Node.js) it's not always obvious or explicit at the language API layer what might cause a session to be created. Explicitly creating multi-statement transactions definitely will, but other operations may do so as well. If you are using the same connection for UI (browser) and API (REST or XCC), then the browser app is likely to be doing things that create session state.
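For example, a multi-statement transaction in the Java Client API (4.x) looks roughly like the sketch below, with hypothetical host, port, credentials and document URIs; every request that carries the Transaction object has to reach the same node, which is exactly what the stickiness cookie guarantees:

    import com.marklogic.client.DatabaseClient;
    import com.marklogic.client.DatabaseClientFactory;
    import com.marklogic.client.Transaction;
    import com.marklogic.client.document.JSONDocumentManager;
    import com.marklogic.client.io.Format;
    import com.marklogic.client.io.StringHandle;

    public class TransactionSketch {
        public static void main(String[] args) {
            // Hypothetical connection details; in production this would point at the load balancer.
            DatabaseClient client = DatabaseClientFactory.newClient(
                    "balancer-host", 8000,
                    new DatabaseClientFactory.DigestAuthContext("user", "password"));
            JSONDocumentManager docs = client.newJSONDocumentManager();

            Transaction txn = client.openTransaction();   // session state begins here
            try {
                docs.write("/orders/1.json",
                        new StringHandle("{\"status\":\"new\"}").withFormat(Format.JSON), txn);
                docs.write("/orders/2.json",
                        new StringHandle("{\"status\":\"new\"}").withFormat(Format.JSON), txn);
                txn.commit();                              // ...and ends here
            } catch (RuntimeException e) {
                txn.rollback();
                throw e;
            } finally {
                client.release();
            }
        }
    }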
The safest, but least flexible, configuration is "TCP Session Affinity". If it is supported, it will eliminate most concerns related to load balancing. Cookie Session Affinity relies on guaranteeing that the load balancer uses the correct cookie. Not all code is equal: I have had cases where the load balancer didn't always use the cookie provided, and changing the configuration to "Load Balancer provided Cookie Affinity" fixed that.
None of this is needed if all your communications are stateless at the TCP layer, the HTTP layer and the app layer. The latter cannot be inferred by the server.
Another concern is whether your app or middle tier is co-resident with other apps, or with another instance of the same app, connecting to the same load balancer and port. In that case it can be difficult to make sure there are no 'crossed wires'. When MarkLogic gets a request, it associates its identity with the client IP and port. Even without load balancers, most modern HTTP and TCP client libraries implement socket caching. That is a great performance win, but it is a hidden source of subtle, random, severe errors if the library or app is sharing "cookie jars" (not uncommon). A TCP and cookie-jar cache used by different application contexts can end up sending state information from one unrelated app in the same process to another. Mostly this happens in middle-tier app servers that simply pass on requests from the first tier without domain knowledge, presuming that the low-level TCP libraries will "do the right thing". They are doing the right thing -- for the use case the library programmers had in mind -- but don't assume that your case is the one the library authors assumed. The symptoms tend to be very rare but catastrophic: transaction failures, possibly data corruption, and security problems (at the application layer), because the server cannot tell the difference between two connections from the same middle tier.
Sometimes a better strategy is to load balance between the first tier and the middle tier, and directly connect from the middle tier to MarkLogic.
Especially if caching is done at the load balancer. It's more common for caching to be useful between the middle tier and the client than between the middle tier and the server. This is also more analogous to the classic 3-tier architecture used with RDBMSs, where load balancing is between the client and business logic tiers, not between the business logic and database tiers.

Netty SSL Handler Unit Tests

I'm about to embark on configuring the SSL handler for our server. I have looked at the secure chat example a few times. I'm just trying to formulate how I can write a unit test using the embedder testing classes.
Does anyone have a netty unit testing example for the ssl handler setup? I was wondering if anyone would like to share their efforts in this area. I'm still not sure how to begin.
Many thanks.
I've had to do unit tests for Netty handler pipelines that included StartTLS and compression. I found it easier just to use a loopback socket rather than try to wire up the encoder/decoder embedders.
I did use the embedders for the more isolated tests dealing specifically with individual encoding and decoding handlers but when testing the full pipeline I think a loopback socket is the way to go.
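For what it's worth, a minimal loopback sketch along those lines might look like the following, assuming Netty 4.x and JUnit (SelfSignedCertificate may need Bouncy Castle on the test classpath, and the string echo handlers are just a stand-in for your real pipeline):

    import static org.junit.Assert.assertEquals;

    import io.netty.bootstrap.Bootstrap;
    import io.netty.bootstrap.ServerBootstrap;
    import io.netty.channel.Channel;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.EventLoopGroup;
    import io.netty.channel.SimpleChannelInboundHandler;
    import io.netty.channel.nio.NioEventLoopGroup;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.channel.socket.nio.NioServerSocketChannel;
    import io.netty.channel.socket.nio.NioSocketChannel;
    import io.netty.handler.codec.string.StringDecoder;
    import io.netty.handler.codec.string.StringEncoder;
    import io.netty.handler.ssl.SslContext;
    import io.netty.handler.ssl.SslContextBuilder;
    import io.netty.handler.ssl.util.InsecureTrustManagerFactory;
    import io.netty.handler.ssl.util.SelfSignedCertificate;
    import java.net.InetSocketAddress;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;
    import org.junit.Test;

    public class SslLoopbackTest {

        @Test
        public void messageSurvivesTlsRoundTrip() throws Exception {
            SelfSignedCertificate cert = new SelfSignedCertificate();
            SslContext serverCtx =
                    SslContextBuilder.forServer(cert.certificate(), cert.privateKey()).build();
            SslContext clientCtx = SslContextBuilder.forClient()
                    .trustManager(InsecureTrustManagerFactory.INSTANCE).build();

            EventLoopGroup group = new NioEventLoopGroup();
            BlockingQueue<String> received = new LinkedBlockingQueue<>();
            try {
                // TLS echo server on an ephemeral local port.
                Channel server = new ServerBootstrap().group(group)
                        .channel(NioServerSocketChannel.class)
                        .childHandler(new ChannelInitializer<SocketChannel>() {
                            @Override
                            protected void initChannel(SocketChannel ch) {
                                ch.pipeline().addLast(serverCtx.newHandler(ch.alloc()),
                                        new StringDecoder(), new StringEncoder(),
                                        new SimpleChannelInboundHandler<String>() {
                                            @Override
                                            protected void channelRead0(ChannelHandlerContext ctx, String msg) {
                                                ctx.writeAndFlush(msg); // echo back over TLS
                                            }
                                        });
                            }
                        })
                        .bind(0).sync().channel();
                int port = ((InetSocketAddress) server.localAddress()).getPort();

                // TLS client that records whatever comes back.
                Channel client = new Bootstrap().group(group)
                        .channel(NioSocketChannel.class)
                        .handler(new ChannelInitializer<SocketChannel>() {
                            @Override
                            protected void initChannel(SocketChannel ch) {
                                ch.pipeline().addLast(clientCtx.newHandler(ch.alloc(), "127.0.0.1", port),
                                        new StringDecoder(), new StringEncoder(),
                                        new SimpleChannelInboundHandler<String>() {
                                            @Override
                                            protected void channelRead0(ChannelHandlerContext ctx, String msg) {
                                                received.add(msg);
                                            }
                                        });
                            }
                        })
                        .connect("127.0.0.1", port).sync().channel();

                client.writeAndFlush("hello over TLS");
                assertEquals("hello over TLS", received.poll(5, TimeUnit.SECONDS));
            } finally {
                group.shutdownGracefully();
            }
        }
    }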

Subscribe Authentication With ZeroMQ

I am having a hard time understanding the ZeroMQ messaging system, so before I dive in, I wanted to see if anyone knew if what I want to do is even possible.
I want to set up a pubsub server with ZeroMQ that will publish certain streams of data; to subscribe to some of those streams, a user must authenticate so that we can check whether they have access to them. Everything I have seen has the subscribing taking place with the zmq.SUBSCRIBE command.
Can this be modified to authenticate? Does it support it out of the box?
No, there is no such functionality out of the box. ZeroMQ operates at a lower level, and it is likely that auth features will never be in the core.
If your pub/sub traffic runs over IP multicast (the pgm/epgm transports), one approach is to write an auth server that controls a network router and blocks all multicast traffic to a client by IP/port until that client has been authorized. You're free to choose the auth method in that case, of course.
If you can sacrifice ZeroMQ's stability and performance in exchange for lower development cost, just take ActiveMQ; it has authentication features.
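To make the "out of the box" point concrete, here is a minimal subscriber sketch using the JeroMQ Java binding (endpoint and topic prefix are placeholders): the subscription is nothing more than a client-side prefix filter, so there is no step at which credentials could be presented.

    import org.zeromq.ZMQ;

    public class SubscriberSketch {
        public static void main(String[] args) {
            ZMQ.Context context = ZMQ.context(1);
            ZMQ.Socket sub = context.socket(ZMQ.SUB);
            sub.connect("tcp://publisher-host:5556");   // hypothetical publisher endpoint
            sub.subscribe("prices.".getBytes());        // topic filter only -- no credentials anywhere
            while (!Thread.currentThread().isInterrupted()) {
                String message = sub.recvStr();         // blocks until a matching message arrives
                System.out.println(message);
            }
            sub.close();
            context.term();
        }
    }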