I need to transfer small amounts of data intermittently from clients to our server in a secure fashion, and occasionally pull down large binary files from the server. It's important for all of this to be reliable. I'm anticipating 100,000 clients. I control both ends, but I want to deliver a solution that doesn't require changing the firewall for the majority of customers. A lag of one or two minutes before the information migrates to the server or comes down seems acceptable at this time.
We need to make the connection secure, so I was thinking about SSL, but I'm open to suggestions. Basically, what is the best binding to use in this situation so that we have secure transmission and the system handles the stress and load in a way that works for 95% of clients out of the box (i.e. the majority of firewall configurations will not block it)?
Firewall: you can use port sharing on some well-known port, or add your application to the exception list if the client is using Windows Firewall.
Using a self-signed certificate on the net.tcp binding with transport security would be ideal.
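For illustration, a minimal sketch of that setup, assuming a hypothetical IDataSync contract and a self-signed certificate already installed in the LocalMachine/My store under the subject name "MyServer" (port sharing also needs the Net.Tcp Port Sharing Service running):

    // Minimal sketch (hypothetical contract and certificate name): net.tcp,
    // transport security, port sharing on the default net.tcp port 808.
    using System;
    using System.Security.Cryptography.X509Certificates;
    using System.ServiceModel;

    [ServiceContract]
    public interface IDataSync
    {
        [OperationContract]
        void Report(string payload);
    }

    public class DataSyncService : IDataSync
    {
        public void Report(string payload) { /* persist the client data */ }
    }

    class Program
    {
        static void Main()
        {
            var binding = new NetTcpBinding(SecurityMode.Transport);
            binding.PortSharingEnabled = true;
            binding.Security.Transport.ClientCredentialType = TcpClientCredentialType.None;

            var host = new ServiceHost(typeof(DataSyncService), new Uri("net.tcp://localhost:808/sync"));
            host.AddServiceEndpoint(typeof(IDataSync), binding, "");

            // The self-signed certificate the clients are configured to trust.
            host.Credentials.ServiceCertificate.SetCertificate(
                StoreLocation.LocalMachine, StoreName.My,
                X509FindType.FindBySubjectName, "MyServer");

            host.Open();
            Console.ReadLine();   // keep the host running
            host.Close();
        }
    }

Clients would then point a NetTcpBinding with the same security mode at net.tcp://yourserver:808/sync and be configured to trust the self-signed certificate.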
I heard that to relieve the web server of the burden of performing SSL termination, it is moved to load balancers, and an HTTP connection is then made from the LB to the web server. However, to ensure security, an accepted practice is to re-encrypt the data on the LB and then transmit it to the web server. If we are eventually sending encrypted data to the web servers anyway, what is the purpose of having the LB terminate SSL in the first place?
A load balancer spreads the load over multiple backend servers so that each backend server takes only a part of it. This balancing can be done in a variety of ways, depending on the requirements of the web application:
If the application is fully stateless (like only serving static content), each TCP connection can be sent to an arbitrary server. In this case no SSL inspection is needed, since the decision does not depend on the content of the traffic.
If the application is instead stateful, the decision about which backend to use might be based on the session cookie, so that requests end up at the same server as the previous requests for that session. Since the session cookie is part of the encrypted content, SSL inspection is needed. Note that in this case a simpler approach can often be used, like basing the decision on the client's source IP address, which avoids the costly SSL inspection.
Sometimes load balancers also do more than just balance the load. They might incorporate security features, like a web application firewall, sanitize the traffic, or similar. These features work on the content, so SSL inspection is needed.
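To make the distinction concrete, here is a minimal sketch (hypothetical helper names) of the two affinity decisions: the IP-based one needs nothing from inside the TLS stream, while the cookie-based one only works after the LB has terminated SSL and can read the cookie:

    using System;
    using System.Net;
    using System.Security.Cryptography;

    static class BackendPicker
    {
        // No SSL inspection needed: the source IP is visible on the plain TCP/IP headers.
        public static int PickByClientIp(IPAddress clientIp, int backendCount)
        {
            byte[] hash = SHA256.HashData(clientIp.GetAddressBytes());
            return Math.Abs(BitConverter.ToInt32(hash, 0) % backendCount);
        }

        // SSL inspection needed: the session cookie only becomes readable after TLS termination.
        public static int PickBySessionCookie(string sessionCookie, int backendCount)
        {
            byte[] hash = SHA256.HashData(System.Text.Encoding.UTF8.GetBytes(sessionCookie));
            return Math.Abs(BitConverter.ToInt32(hash, 0) % backendCount);
        }
    }

The point is that the cookie-based decision cannot even be made until the LB has decrypted the request, which is exactly why terminating SSL on the LB is useful even if the traffic is then re-encrypted towards the backend.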
I have created a service (WCF) that acts as a backend for a DB. For now it does basic operations such as INSERT, SELECT, etc. I have run it locally and now it is time to expose it to the internet and enter 'production'. Is there a best practice for doing so? Bear in mind this service will be hosted on a PC as a Windows service (not IIS). This is the first time I am putting a Windows service into production, so I am hazy on the details, but I think this is the main idea:
On the service: check for 'rookie' errors such as SQL injection. Set maximum message sizes marginally higher than the largest message that should be transmitted by my service (a quick sketch of what I mean is at the end of this question). Also upgrade the self-signed X.509 certificate to one issued by a CA. (Where does one store this certificate? Locally on the PC?)
On the PC: fully patched software (OS etc.) and Windows Firewall with a specific set of rules that allows traffic only on the ports being used (I suppose the safest way to do this is to use the Windows tool 'Allow a program or feature through Windows Firewall'?). Furthermore, an updated antivirus running.
On the network: on the router, port-forward the respective ports being used (the base address is declared as http://localhost:8080, so I guess port 80 for HTTP and 443 for HTTPS? I am using message-level security.)
General precautions: full message logging on the service to analyze traffic and potential attackers. Also run a network intrusion detection system such as Snort so that I can sleep a bit better at night.
Am I missing anything obvious? Also, should I be hosting in IIS? On Security Stack Exchange someone said that I would be vulnerable to HTTP attacks if I did not put the code behind a web server, but I have not read this anywhere else.
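For reference, here is roughly what I mean by the service-side items above (a minimal sketch; the binding, table and names are made up):

    using System.Data.SqlClient;
    using System.ServiceModel;

    static class ProductionSettings
    {
        // Message size quota: only marginally higher than the largest expected message.
        public static WSHttpBinding CreateBinding()
        {
            var binding = new WSHttpBinding(SecurityMode.Message);   // message-level security
            binding.MaxReceivedMessageSize = 64 * 1024;              // hypothetical 64 KB ceiling
            binding.ReaderQuotas.MaxArrayLength = 64 * 1024;
            return binding;
        }

        // Parameterized query instead of string concatenation, to rule out SQL injection.
        public static void InsertCustomer(SqlConnection connection, string customerName)
        {
            using (var command = new SqlCommand(
                "INSERT INTO Customers (Name) VALUES (@name)", connection))
            {
                command.Parameters.AddWithValue("@name", customerName);
                command.ExecuteNonQuery();
            }
        }
    }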
I'm doing a network security course and trying to wrap my head around all the concepts, one of which is:
What technology other than a firewall can be used to allow only specific customers while blocking others? Why is a firewall not suitable?
During the course, I've been learning about all the security tools, such as firewalls (static, dynamic, DPI), proxies, VPNs, tunnels, all sorts of IDS (signature, anomaly, darknet/greynet and honeypot), and then mod_security to secure Apache, but I'm still puzzled by this question.
Any insights here will be greatly appreciated.
A firewall implies that you block based on the customer's IP address. This may work if the customer has his own range of addresses and all requests from him are legitimate.
It gets complicated when he is with a large cloud provider, which provides a wide range of possible IPs, including IPs used by other people.
For an application, one good solution would be to use client-side certificates. In that case, during the TLS handshake (the process of setting up a TLS (formerly SSL) tunnel), the server will request that the client provide a certificate it (the server) trusts. Failure to provide one will break the connection.
This way, you can distribute the certificate to the clients you want to be able to reach your service, and others will be rejected. This solution is better because it uses technologies that were developed exactly to solve this problem. The drawback is that you have to maintain and distribute the certificates (and usually run a PKI).
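As a rough illustration (hypothetical certificate file and port, not a production-ready listener), this is what requiring a client certificate during the handshake looks like on the server side:

    using System;
    using System.Net;
    using System.Net.Security;
    using System.Net.Sockets;
    using System.Security.Authentication;
    using System.Security.Cryptography.X509Certificates;

    class MutualTlsServer
    {
        static void Main()
        {
            var serverCert = new X509Certificate2("server.pfx", "password");  // assumed server certificate
            var listener = new TcpListener(IPAddress.Any, 8443);
            listener.Start();

            while (true)
            {
                using TcpClient client = listener.AcceptTcpClient();
                // Accept only client certificates that validate cleanly (i.e. ones we trust).
                using var tls = new SslStream(client.GetStream(), false,
                    (sender, certificate, chain, errors) => errors == SslPolicyErrors.None);

                try
                {
                    // clientCertificateRequired: true makes the handshake ask the client for a certificate.
                    tls.AuthenticateAsServer(serverCert,
                        clientCertificateRequired: true,
                        enabledSslProtocols: SslProtocols.Tls12,
                        checkCertificateRevocation: true);
                    // ... the connection is now from an authorized customer ...
                }
                catch (AuthenticationException)
                {
                    // No certificate, or one we do not trust: the connection is dropped here.
                }
            }
        }
    }

Clients you have issued certificates to complete the handshake; everyone else is cut off before your application code even runs.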
I am publicly distributing an application which can be installed on users' PCs. The client will periodically communicate with the server to send information from the client. The server has to acknowledge successful receipt of the information. Occasionally, the server will do a one-way communication with the client. My question is: what is the best/failproof/recommended way to do client-server communication when the client is massively distributed? I am currently focusing on a self-hosted service to do the communication. What precautions should I take if the clients' IP addresses change frequently?
My suggestions are:
Use HTTP or HTTPS on the default ports. By 'massively distributed' I understand you will have no control over the network restrictions, firewalls, NAT traversal, etc. Using HTTP(S) and initiating the connections from the clients with simple web requests will save you a lot of trouble.
Use polling at regular/smart intervals to solve your occasional server-initiated data transfer. Clients running on workstations won't have a public IP address, let alone a fixed one.
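A minimal sketch of what I mean, assuming a hypothetical HTTPS endpoint (example.com) and JSON payload:

    // The client initiates every connection over HTTPS, so NAT and changing
    // client IP addresses don't matter.
    using System;
    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class PollingClient
    {
        static async Task Main()
        {
            using var http = new HttpClient { BaseAddress = new Uri("https://example.com/api/") };

            while (true)
            {
                // Push the latest client data; a successful response is the acknowledgement.
                var payload = new StringContent("{\"status\":\"ok\"}", Encoding.UTF8, "application/json");
                var response = await http.PostAsync("report", payload);
                response.EnsureSuccessStatusCode();

                // Poll for any server-initiated message instead of waiting for an inbound connection.
                string pending = await http.GetStringAsync("pending");
                // ... act on 'pending' if it is non-empty ...

                await Task.Delay(TimeSpan.FromMinutes(1));   // regular/smart interval
            }
        }
    }

Because every connection is outbound from the client, this keeps working when the client's IP address changes or the client sits behind NAT.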
I've experienced a CPU usage surge coming from a WCF service that sends large files to requesting clients over HTTPS. Does TLS need to encrypt the whole file before sending it down, or does it just encrypt the packets? I'm trying to find out what in the service is causing the surge, as the WCF method responsible just serves files from disk. These files used to be smaller, and so was the CPU load. There is only one endpoint, with a binding that uses streaming and MTOM.
TLS encrypts only the packets (more precisely, the data stream, record by record). The file you are sending is not encrypted as a file; the communication of that file is encrypted -- it's a subtle but important difference.
Of course, using HTTPS does decrease scalability (because of the server affinity caused by the HTTPS session) and degrade performance, but you can mitigate that by using dedicated HTTPS (SSL offload) hardware in your server.
SSL and TLS sit directly on top of the transport layer, so anything sent over the session is encrypted at the time of sending and decrypted immediately upon receipt. That means they can still be used to effectively secure streams or other open-ended communications.
Because encryption only happens as fast as data moves over the communication link, the load it adds should be reasonably constant. If you're seeing performance problems, it may simply be because your files are larger, which means proportionally more processing and more time. Of course, if you have many clients requesting data at the same time and it all needs to be encrypted, you'll soon reach the limit of either the processor or the network device. That's why web sites that support SSL often choose to secure only very specific sections, like login and password-changing pages; if they secured every single request, they would be overloaded.
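As a small illustration of that point, here is a sketch (hypothetical file path; the SslStream is assumed to be already established) of streaming a large file: each buffer is encrypted as it is written, so nothing is encrypted up front and the CPU cost grows with the amount of data actually sent:

    using System.IO;
    using System.Net.Security;

    static class SecureFileSender
    {
        // 'tls' is an SslStream that has already completed its handshake.
        public static void SendFile(SslStream tls, string path)
        {
            byte[] buffer = new byte[64 * 1024];
            using (FileStream file = File.OpenRead(path))
            {
                int read;
                while ((read = file.Read(buffer, 0, buffer.Length)) > 0)
                {
                    // Each Write hands one chunk to TLS, which encrypts it into records on the fly.
                    tls.Write(buffer, 0, read);
                }
            }
            tls.Flush();
        }
    }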