I have a client application that connects to a remote server via HTTPS for commercial purposes. The connection uses old (blocking) I/O, and it normally runs smoothly.
Recently I cloned the client, thus creating a new client instance that runs from the same box and uses the same client certificate. Since then I have been noticing many connection timeouts from the server. I wonder whether the cloning may somehow have caused the timeouts and whether there is an SSL issue here.
Both instances receive the following system properties for security:
javax.net.ssl.trustStore=cacerts
javax.net.ssl.keyStore=1234567890123
javax.net.ssl.keyStorePassword=wordpass
Unfortunately, support from the server side is quite limited. I hope someone in this forum can come up with an idea.
I have a Google Compute Engine instance with an ASP.NET Core application deployed to it. Within that application, I run
WebSocketServer server = new WebSocketServer("ws://0.0.0.0:2001");
to start a WebSocket server on port 2001. However, when I try to open a WebSocket connection to that port (m.y.i.p:2001), it times out. I don't understand why, since the VM is tagged with the network tag I created for ingress and egress, which allows access to all ports. If it isn't the firewall, where else could I investigate?
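For what it's worth, here is the kind of standalone probe that can be run from outside the VM to tell a firewall drop from a closed port. This is only a minimal sketch using the Node.js ws package; the address is just the placeholder from above, not a working endpoint.
import WebSocket from "ws";

// Hypothetical connectivity probe against the placeholder address from the question.
const socket = new WebSocket("ws://m.y.i.p:2001");

socket.on("open", () => {
  console.log("Port reachable: WebSocket handshake completed.");
  socket.close();
});

socket.on("error", (err) => {
  // ECONNREFUSED usually means the host answered but nothing is listening on the port;
  // a timeout usually means packets are being dropped, e.g. by a firewall.
  console.error("Connection failed:", err.message);
});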
For anyone else who runs into a similar issue opening a port on a VM running Windows Server (I was using the 2016 edition): I fixed it by remote desktopping into the machine and disabling its firewall. I had to do this even though I had already made Compute Engine firewall exceptions. If anyone wants to clarify: I assume it's better to handle everything firewall-related in GCP rather than also relying on the VM's internal firewall, since the two are likely to conflict?
I'm testing SSL/TLS stream proxying in NGINX, connecting to a web server that uses GnuTLS as the underlying TLS API. Using the GnuTLS command-line test tool (gnutls-serv), the entire process works, but I can't understand the logic:
The NGINX client (proxying HTTP requests from an actual client to the GnuTLS server) seems to want to handshake the connection multiple times. In fact, in most tests it handshakes three times without error before the server responds with a test web page. Using Wireshark, or just debug messages, it looks like the socket on the client side (from the perspective of the GnuTLS server) is being closed and reopened on different ports. Finally, on the successful connection, GnuTLS uses a resumed session, which I imagine comes from one of the previously mentioned successful handshakes.
I am failing to find any documentation about this sort of behaviour, and am wondering if this is just an 'NGINX thing.'
Though the handshake eventually works with the test programs, having multiple expensive handshakes seems wasteful, and implementing handshake logic in a non-test environment will be tricky without actually understanding what the client is trying to do.
I don't think there are any timeouts or problems on the transport; the test environment is a few VMs on the same subnet connected through a single switch.
The NGINX version is the latest mainline, 1.11.7. I was originally using 1.10.something, and the behaviour was similar, though there were more transport errors. Those errors cleaned up nicely after upgrading.
Any info or experience from other people is greatly appreciated!
Use either RSA key exchange between NGINX and the backend server, or use an SSLKEYLOGFILE LD_PRELOAD shim for NGINX, so that Wireshark has the data it needs to decrypt the traffic.
While a single incoming connection should generate just one outgoing connection, there may be some optimisations in NGINX to fetch common files (favicon.ico, robots.txt).
Recently I migrated my company's product client-server communication from socket-based, Kerberos-enabled TCP/IP communication within the corporate intranet to WCF with netTcpBinding, Windows security with encryption and signing turned on, and impersonation turned off. After quite a lot of reading, everything was ready and operational in our test environment, so we exposed the service contract in a production release. Clients within the intranet communicate smoothly with the server, but the ones connected through VPN suffer from an occasional "The server has rejected client credentials" exception. Logging off and back on fixes things on their side, but only for a couple of minutes. The Windows user's password I tested with is a fresh one.
I tried disabling connection pooling, but this didn't fix the problem. Any ideas what might be wrong with our infrastructure or WCF configuration?
I have been trying to connect my public Node.js Bluemix app to a DB2 server that sits behind a firewall, using the Bluemix Secure Gateway service. When I try that over plain TCP, everything works fine. I am now trying to use the TLS: Mutual Auth option, and I can't make it work.
I followed this tutorial (https://developer.ibm.com/bluemix/2015/04/17/securing-destinations-tls-bluemix-secure-gateway/) and the tunnel seems to be created (I can see that in the logs of the gateway client), but no data is coming through.
In the options object passed to tls.connect, if I set rejectUnauthorized: true I get "UNABLE_TO_GET_ISSUER_CERT", even though I am using the certificates generated for the destination. If I set rejectUnauthorized: false, it seems to work and the connection opens, but nothing comes through; it just hangs. In both cases I am using the same code that works when TLS is not set up, based on the ibm_db Node driver for DB2.
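For reference, this is roughly the shape of the call being described. It is only a minimal sketch: the host, port, and file names are assumptions standing in for the Secure Gateway destination details and its generated certificates.
import * as tls from "tls";
import * as fs from "fs";

// Hypothetical destination details and certificate files for the mutual-auth setup.
const options: tls.ConnectionOptions = {
  host: "sg-host.example.com",              // assumed Secure Gateway cloud host
  port: 15000,                              // assumed destination port
  key: fs.readFileSync("client_key.pem"),   // client private key (mutual auth)
  cert: fs.readFileSync("client_cert.pem"), // client certificate (mutual auth)
  ca: [fs.readFileSync("server_ca.pem")],   // certificate(s) used to verify the server
  rejectUnauthorized: true,                 // fail the handshake if the server chain cannot be verified
};

const socket = tls.connect(options, () => {
  console.log("TLS connected, authorized:", socket.authorized);
});

socket.on("error", (err) => console.error(err));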
Does anyone have experience with this? I have been struggling with it for some days now, and any help would be much appreciated.
After some discussion, we determined that part of the problem was explicitly specifying only a piece of the cert chain in the CA option, which caused the UNABLE_TO_GET_ISSUER_CERT error to be emitted. This can be resolved either by adding the full chain to the CA or by not explicitly adding anything to the CA (as the cert is publicly signed).
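A minimal sketch of the first option, with assumed file names: the point is that the ca option carries every certificate in the server's chain, either as separate entries or as one concatenated PEM bundle, rather than a single intermediate on its own.
import * as tls from "tls";
import * as fs from "fs";

// Hypothetical chain files: one entry per certificate in the server's chain,
// so the verifier can build a complete path up to the root.
const options: tls.ConnectionOptions = {
  host: "sg-host.example.com",
  port: 15000,
  ca: [
    fs.readFileSync("intermediate_ca.pem"),
    fs.readFileSync("root_ca.pem"),
  ],
  rejectUnauthorized: true,
};

tls.connect(options, () => console.log("Server verified against the full chain."));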
An underlying issue that was identified is that the ibm_db node driver for DB2 does not appear to work as expected for TLS connections.
I have created a WCF service that acts as a backend for a database. For now it does basic operations such as INSERT, SELECT, etc. I have run it locally, and now it is time to expose it to the internet and enter 'production'. Is there a best practice for doing so? Bear in mind this service will be hosted on a PC as a Windows service (not in IIS). This is the first time I am putting a Windows service into production, so I am hazy on the details, but I think this is the main idea:
On the service: check for 'rookie' errors such as SQL injection. Set maximum message sizes only marginally higher than the largest message my service should transmit. Also upgrade the self-signed X.509 certificate to one issued by a CA. (Where does one store this certificate? Locally on the PC?)
On the PC: fully patched software (OS, etc.) and Windows Firewall with a specific set of rules that allows traffic only on the ports being used (I suppose the safest way to do this is the Windows tool 'Allow a program or feature through Windows Firewall'?). Furthermore, an updated antivirus running.
On the network: on the router, port forward the respective ports being used (the base address is declared as http://localhost:8080, so I guess port 80 for HTTP and 443 for HTTPS? I am using message-level security.)
General precautions: full message logging on the service to analyze traffic and potential attackers. Also run a network intrusion detection system such as Snort so that I can sleep a bit better at night.
Am I missing anything obvious? Also, should I be hosting in IIS? On Security Exchange someone said that I would be vulnerable to HTTP attacks if I did not put the code behind a web server, but I have not read this anywhere else.