Can you run a Selenium server hub behind nginx, proxying port 443/SSL (or 80 without SSL) to localhost:4444 where the Selenium server is bound? My remote nodes won't connect to the Selenium server behind nginx; they only connect if I specifically open port 4444 in the firewall and bypass nginx.
Not sure if nginx handles this. I imagine the problem is more that your network firewall blocks ports outside 443 and certain others, and expects all traffic to go via HTTPS.
Some options:
Get your network administrators to allow a punch-through for port 443.
Host your CI platform behind the firewalled network.
Look for an alternate way to access the application nodes; some firewalled networks allow access to public nodes from a private network via a different IP/hostname from the normal ones.
I don't think you could really run Selenium on, say, port 80, because the Selenium server itself is not really a plain web service.
It may be a bit late to answer the question of #xref, but I have just deployed my Selenium Grid behind Nginx.
To do so, I used Docker and Docker Compose. I describe how to do it here.
I hope it will be helpful for you.
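For reference, here is a minimal sketch of the kind of nginx server block such a setup typically involves; the hostname, certificate paths, and hub address are placeholders, and this is not the exact config from the write-up linked above:

    # Terminate SSL on 443 and forward everything to the Selenium hub on localhost:4444
    server {
        listen 443 ssl;
        server_name grid.example.com;

        ssl_certificate     /etc/nginx/ssl/grid.crt;
        ssl_certificate_key /etc/nginx/ssl/grid.key;

        location / {
            proxy_pass http://127.0.0.1:4444;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }

With something like this in place, remote nodes register against the proxied HTTPS address rather than against port 4444 directly.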
I have a squid proxy container on my local Docker for Mac (the datadog/squid image). Essentially I use this proxy so that app containers on my local Docker and the browser pod (Selenium) on another host use the same network for testing (so that the remote browser can reach the app host). But with my current setup, when I run my tests the browser starts up on the remote host and then fails the test shortly after. The message shown in the browser right before it closes is ERR_PROXY_CONNECTION_FAILED, so I assume there is an issue with my squid proxy config. I use the default config, and the Docker Hub page says:
Please note that the stock configuration available with the container is set for local access, you may need to tweak it if your network scenario is different.
I'm not really sure how my network scenario is different. What should I be looking into for more information? Thanks!
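One thing worth checking, given that note: the stock squid config typically only allows clients from the standard local/private ranges, so requests coming from the remote browser host may simply be denied. A hypothetical squid.conf tweak (203.0.113.0/24 is a placeholder for whatever network the Selenium host is on) would be:

    # Allow the network the remote Selenium host lives on; these lines must
    # appear before the final "http_access deny all" rule in squid.conf
    acl selenium_net src 203.0.113.0/24
    http_access allow selenium_net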
I am confused about ports.
I find it odd that we need to bind different servers to different ports.
Example:
If Apache is bound on 8080, Express.js can't bind on 8080.
How does server port binding differ from application port listening?
Example:
How can different browsers, e.g. Chrome and Firefox, all listen and communicate on port 80?
This issue came up when trying to run "grunt test:unit". There was a Tomcat server already bound to 8080; the server grunt starts (middleware, I believe) is able to start up, but it is not able to capture the browser. Stopping the Tomcat server made things work.
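A quick way to check what is already listening on a given port (8080 here), assuming a Linux box:

    # Show the process holding TCP port 8080, if any (run as root to see the PID/name)
    ss -tlnp | grep :8080
    # or, on systems that still ship netstat:
    netstat -tlnp | grep :8080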
Actually, Firefox, Chrome, etc. use different source ports. They don't listen on ports; they connect to remote servers. The servers are listening on one port (80). The source port from which the browser connects is chosen randomly and is a high number. You can check this using netstat. Their destination port is the same (80).
The reason why you can't have multiple servers binding to the same port* is because the operating system wouldn't know which application to hand off an incoming connection to.
*Actually, you can, but it's complicated; see SO_REUSEPORT.
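You can see this with plain netstat (or ss) while a browser is talking to some website; the flags below are the Linux ones and may differ on other systems:

    # List established TCP connections: the local address column shows the
    # browser's randomly chosen high source ports, while the foreign/peer
    # address column shows the servers' fixed port (80 or 443).
    netstat -tn
    # or, with the newer ss tool:
    ss -tn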
The reason only one application can control/listen on a port at one time is this:
When the OS receives a request for, say, port 80, and there are two apps listening on it, how is it supposed to know which app to pass the request on to?
The reason multiple apps can access the web at once is that they don't do it the same way: each one connects from an unused local port (something like 62332, say) and only the destination is port 80, for example.
That's what ports are for - so that you can run more than one server at once per machine.
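A quick way to see the one-listener-per-port rule in action, using nothing but Python's built-in web server (port 8080 is arbitrary):

    # Terminal 1: start any listener on port 8080
    python3 -m http.server 8080

    # Terminal 2: a second bind on the same port fails immediately
    python3 -m http.server 8080
    # -> OSError: [Errno 98] Address already in use  (the errno differs per OS)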
I created a WebSockets app to provide communication between connected clients, but I'm concerned about corporate firewalls and ISP rules that might block port 8080, which it's using. But the usual HTTP port 80 (which really no one would block) is already used by Apache on that server to provide the functionality for the rest of the app (which is a classic web app running on PHP).
What are my options there? Are my concerns misplaced?
One option is to set up an Apache reverse proxy to make your app available via port 80. See (for example) Running a Reverse Proxy in Apache.
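A minimal sketch of what that could look like with Apache 2.4's mod_proxy_wstunnel, assuming the WebSocket app listens on localhost:8080 and is exposed under a dedicated path; the /ws/ path and addresses are placeholders:

    # Requires mod_proxy and mod_proxy_wstunnel
    # (e.g. "a2enmod proxy proxy_wstunnel" on Debian/Ubuntu), inside the
    # existing port-80 VirtualHost:
    ProxyPass        /ws/ ws://127.0.0.1:8080/
    ProxyPassReverse /ws/ ws://127.0.0.1:8080/

Clients would then open their WebSocket connections to /ws/ on port 80 instead of connecting to port 8080 directly.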
I got a Selenium server all set up, and for security reasons, I want to make it accept requests from just a specific IP. Is there a way to configure that?
You could configure a firewall on the machine on which your Selenium server is running, so that it only accepts incoming connections from that one specific IP.
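For example, with iptables on Linux that could look roughly like this; 203.0.113.10 is a placeholder for the allowed client IP, and 4444 is the default Selenium port:

    # Accept the trusted client on the Selenium port, drop everyone else
    iptables -A INPUT -p tcp -s 203.0.113.10 --dport 4444 -j ACCEPT
    iptables -A INPUT -p tcp --dport 4444 -j DROP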
I need a reverse proxy to front both the Lablz web server and the Adito SSL VPN (an SSL Explorer fork) on one IP/port. I failed to achieve that with Nginx, and I failed to use Adito as a generic reverse HTTP proxy.
Can HAProxy fall back to being a TCP proxy if it does not sense HTTP traffic?
In other words can it fall back to Layer 4 if its Layer 7 inspection determines this is not HTTP traffic?
Here is my setup:
EC2 machine with one public IP (Elastic IP).
Only one port is open - 443.
Stunnel is sitting on 443 and passing traffic to HAProxy (I'd rather not use Stunnel, but HAProxy does not have full SSL support yet, unlike Nginx).
HAProxy must be configured to pass some HTTP traffic to one server (an Apache server which fronts the SVN server) and the rest of the HTTP traffic to our Lablz Web/App server (see the sketch below).
All non-HTTP traffic must be forwarded to Adito VPN.
This traffic is:
VNC, NX, SMB
... and all other protocols that Adito supports
I can not rely on source IP address or port to split traffic into HTTP and non-HTTP.
So, can such a config be accomplished in HAProxy? Can any other reverse proxy be used for this? Let me know if I am not thinking about HAProxy the right way and an alternative approach is possible.
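To be concrete, the HTTP part of the split I have in mind would look something like the following haproxy.cfg fragment; the addresses, ports, and the /svn path rule are placeholders:

    # Stunnel forwards decrypted 443 traffic to this frontend
    frontend http_in
        bind 127.0.0.1:8080
        mode http
        acl is_svn path_beg /svn
        use_backend svn_apache if is_svn
        default_backend lablz_app

    backend svn_apache
        mode http
        server apache1 127.0.0.1:8081

    backend lablz_app
        mode http
        server lablz1 127.0.0.1:8082

What I cannot figure out is the fallback for the non-HTTP traffic.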
BTW, the Adito SSL VPN is amazing, and if this setup works we will be able to provide Lablz developers with fantastic one-click, single-login, secure VNC-over-HTTPS access to their boxes in the cloud.
No solution exists for this except via Adito; please prove me wrong. But please do not say that VNC over SSH is better. Yes, VNC over SSH is faster and more secure, but it is also much harder (for our target user base) to set up, and it presumes that the user is behind a firewall that allows outbound traffic on port 22 (not always the case).
Besides, Adito is much more than a remote access gateway; it is a full-blown in-browser VPN, a software distribution platform, and more. I am not associated with the Adito guys; see my Adito post on our Lablz blog.
OK, first off, I'd use a simple firewall to divide HTTP from non-HTTP traffic. What you need is packet inspection to figure out what kind of traffic is coming in.
Neither HAProxy nor Nginx can do that. They are both made for web traffic, and I don't see how they could inspect the traffic to figure out what they are dealing with.
Update: I looked into this a bit, and with iptables you could probably use string matching to divide the traffic. However, that's all tricky, especially given the encrypted nature of the traffic. A friend of mine discovered l7-filter, and this looks like what you need. Let me know if this helps.
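To illustrate the string-matching idea only (this would have to sit on the decrypted leg, after Stunnel, and the port and mark value are placeholders, so treat it purely as a sketch):

    # Mark TCP packets whose payload contains an HTTP request-line fragment so
    # they can be routed separately from the rest; illustrative only.
    iptables -t mangle -A PREROUTING -p tcp --dport 8080 \
             -m string --algo bm --string "HTTP/1." -j MARK --set-mark 1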