I'd like to run node-solid-server in Docker, configured via docker-compose, behind Træfik. Because Solid uses WebID-TLS, all encrypted requests need to be passed through the frontend proxy without TLS termination.
How do I pass all TLS traffic through to the Solid container untouched?
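A minimal sketch of the passthrough, assuming a Traefik v2 TCP router configured via docker-compose labels (the image name, hostname, entrypoint name, and port are placeholders, not confirmed by the question):

    version: "3"
    services:
      solid:
        image: nodesolidserver/node-solid-server   # assumed image name
        labels:
          # Route on SNI only; passthrough leaves the TLS stream intact,
          # so node-solid-server can do the WebID-TLS client-certificate
          # handshake itself.
          - "traefik.tcp.routers.solid.rule=HostSNI(`solid.example.org`)"
          # "websecure" is assumed to be the TLS entrypoint in Traefik's
          # static configuration.
          - "traefik.tcp.routers.solid.entrypoints=websecure"
          - "traefik.tcp.routers.solid.tls.passthrough=true"
          - "traefik.tcp.services.solid.loadbalancer.server.port=8443"

With passthrough enabled, Traefik never decrypts the connection, so the client certificate reaches the Solid server intact; the trade-off is that Traefik can only route on the SNI hostname, not on paths or HTTP headers.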
I'm trying to use ZAP proxy running in a Docker image. It works fine on my local machine, but when I try to use it behind a corporate network, the ZAP proxy requests time out because it can't connect to the internet. I have already configured the http_proxy and https_proxy environment variables, but it seems that ZAP isn't using them.
You can configure a chained proxy (an outbound proxy) for ZAP to use, either via Tools > Options > Connection in the GUI, or via the API endpoints below (a command-line sketch follows the list).
optionUseProxyChain
optionUseProxyChainAuth
optionProxyChainName
optionProxyChainPassword
optionProxyChainPort
optionProxyChainPrompt
optionProxyChainRealm
optionProxyChainSkipName
optionProxyChainUsername
proxyChainExcludeDomains
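The same options can be set on the command line with -config keys, which is convenient for the Docker case. A hedged sketch, assuming the owasp/zap2docker-stable image and a placeholder corporate proxy host and port:

    # Pass the outbound-proxy settings to ZAP via -z; the
    # connection.proxyChain.* keys mirror the API options above.
    docker run -t owasp/zap2docker-stable zap-baseline.py \
      -t https://target.example.com \
      -z "-config connection.proxyChain.enabled=true \
          -config connection.proxyChain.hostName=proxy.corp.example \
          -config connection.proxyChain.port=8080"

As the question observed, ZAP does not appear to honor the http_proxy/https_proxy environment variables; the chained proxy has to be configured explicitly.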
Can you run a Selenium hub behind nginx, proxying port 443 (SSL), or 80 without, to localhost:4444, where the Selenium server is bound? My remote nodes won't connect to the Selenium server behind nginx; they only connect if I specifically open port 4444 in the firewall and bypass nginx.
Not sure if nginx handles this. I imagine the problem is more that your network firewall blocks ports outside 443 and certain others, and expects all traffic to go via HTTPS.
Get your network administrators to allow a punch-through for port 443.
Host your CI platform behind the firewalled network
Look for an alternate way to access the application nodes -- some firewalled networks allow access to public nodes from a private network via a different IP/hostname from the normal ones
I don't think you could really run Selenium on, say, port 80, because the Selenium server itself is not really an ordinary web service.
It's maybe a bit late to answer the question of #xref. However, I have just deployed my Selenium Grid behind Nginx.
In order to do so, I used Docker and Docker-Compose. I describe how to do it here
I hope it will be helpful for you.
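For reference, a minimal nginx sketch of that kind of setup, assuming the hub listens on 4444 on the same host (the hostname and certificate paths are placeholders); nodes then register against the nginx-facing URL instead of port 4444:

    server {
        listen 443 ssl;
        server_name grid.example.com;

        ssl_certificate     /etc/nginx/ssl/grid.crt;
        ssl_certificate_key /etc/nginx/ssl/grid.key;

        location / {
            proxy_pass http://127.0.0.1:4444;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            # Grid sessions can be long-lived; raise the idle timeout.
            proxy_read_timeout 300s;
        }
    }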
I'm currently using an AJP proxy through Apache to Tomcat 8. I don't want to get into why I'm using AJP, but the basics are that Apache sits outside the firewall while Tomcat is inside it, with multiple apps virtual-hosted through the one Apache instance.
A component that needs WebSockets has been added to the app. I know that our current AJP implementation will not support WebSockets, so I'm looking for an alternative that someone else has confirmed working, i.e. a different Apache module; I'm currently using mod_proxy_ajp.
If there is no known module that makes this work, does anyone know of any work in progress on an enhancement to one of the existing modules, or on a new module?
FWIW, I'm using Spring 4 WebSocket support with a STOMP endpoint and SockJS.
At the time of your question, there is no solution for WebSocket support via AJP.
Apache does have mod_proxy_wstunnel, but it supports proxying WebSocket connections over the HTTP protocol itself to the backend server; AJP works differently.
See this Tomcat mailing-list thread for some useful background:
https://mail-archives.apache.org/mod_mbox/tomcat-users/201408.mbox/%3C53FF3A3A.3040507#christopherschultz.net%3E
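Since mod_proxy_wstunnel speaks plain HTTP to the backend, one workaround is to route only the WebSocket endpoint over HTTP while keeping AJP for the rest of the app. A hedged sketch, assuming Apache 2.4.5 or later, which added mod_proxy_wstunnel (hostnames and the /app/stomp endpoint path are placeholders; the endpoint path depends on your Spring configuration, and the more specific ProxyPass must come first):

    LoadModule proxy_module          modules/mod_proxy.so
    LoadModule proxy_ajp_module      modules/mod_proxy_ajp.so
    LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so

    # WebSocket endpoint goes to Tomcat's HTTP connector...
    ProxyPass        /app/stomp  ws://tomcat.internal:8080/app/stomp
    ProxyPassReverse /app/stomp  ws://tomcat.internal:8080/app/stomp
    # ...everything else stays on AJP.
    ProxyPass        /app        ajp://tomcat.internal:8009/app
    ProxyPassReverse /app        ajp://tomcat.internal:8009/app

One caveat: when the WebSocket transport fails, SockJS falls back to plain HTTP transports (xhr-streaming, polling), and those would flow over the AJP path here, which is fine since they are ordinary HTTP requests.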
I followed the instructions from this link: How do you get Amazon's ELB with HTTPS/SSL to work with Web Sockets?, to set up ELB to work with WebSockets (having ELB forward 443 to 8443 in TCP mode). Now I am seeing this issue with wss: the server sends message 1 and the client does not receive it; after a few seconds, the server sends message 2 and the client receives both messages (each around 30 bytes). I can reproduce the issue fairly easily. If I set up port forwarding with iptables on the server and have the client connect directly to the server (port 443), I don't have the problem. Also, the issue seems to happen only with wss; ws works fine.
The server is running Jetty 8.
I checked EC2 forums and did not really find anything. I am wondering if anyone has seen the same issue.
Thanks
From what you describe, this is pretty likely a buffering issue with ELB. Quick research suggests that this is indeed the issue.
From the ELB docs:
When you use TCP for both front-end and back-end connections, your load balancer will forward the request to the back-end instances without modification to the headers. This configuration will also not insert cookies for session stickiness or the X-Forwarded-* headers.

When you use HTTP (layer 7) for both front-end and back-end connections, your load balancer parses the headers in the request and terminates the connection before re-sending the request to the registered instance(s). This is the default configuration provided by Elastic Load Balancing.
From the AWS forums:
I believe this is HTTP/HTTPS specific but not configurable but can't say I'm sure. You may want to try to use the ELB in just plain TCP mode on port 80 which I believe will just pass the traffic to the client and vice versa without buffering.
Can you try to make more measurements and see how this delay depends on the message size?
Now, I am not entirely sure what you already did and what failed or did not fail. From the docs and the forum post, however, the solution seems to be using the TCP/SSL (layer 4) ELB type for both front-end and back-end connections.
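For a classic ELB, that layer-4 setup would look roughly like the following AWS CLI call (the load balancer name and subnet ID are placeholders; TLS is then terminated by Jetty on 8443 rather than by the ELB):

    # TCP listeners on both sides: the ELB passes bytes through
    # without parsing or buffering HTTP messages.
    aws elb create-load-balancer \
      --load-balancer-name ws-elb \
      --listeners "Protocol=TCP,LoadBalancerPort=443,InstanceProtocol=TCP,InstancePort=8443" \
      --subnets subnet-0123456789abcdef0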
This also resonates with Nagle's algorithm: the TCP stack can be configured to bundle small writes before sending them over the wire to reduce traffic. I'm not sure that is what's happening here, but it would explain the symptoms and is worth a try.
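If you want to rule Nagle out on the sending side, the relevant knob is TCP_NODELAY. A generic Java illustration (plain java.net, not Jetty-specific; host and port are placeholders):

    import java.io.OutputStream;
    import java.net.Socket;

    public class NoDelayExample {
        public static void main(String[] args) throws Exception {
            try (Socket socket = new Socket("backend.example.com", 8443)) {
                // Disable Nagle's algorithm: small writes go out immediately
                // instead of being coalesced into larger segments.
                socket.setTcpNoDelay(true);
                OutputStream out = socket.getOutputStream();
                out.write("message1".getBytes("UTF-8"));
                out.flush();
            }
        }
    }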
We've got an established Apache/PHP site with a Flash front-end. We're going to need to implement some sort of socket communication, or long-polling, to push updates to the Flash app. Since this obviously isn't going to be a good fit for Apache or PHP, I'd like to use Tornado for this aspect of the functionality. But I also don't want to run Tornado on another port: since the Flash app will be running on the client machine, we don't want to have to deal with restrictive firewalls blocking the socket connections.
Ideally I'd like to run a proxy which can forward most requests to Apache and other requests to Tornado. I saw some suggestions for using Apache as the first-contact proxy, forwarding requests to Tornado when necessary, but I've also seen that this negates a lot of Tornado's async capabilities.
I thought: why not use Tornado as the first contact on port 80 and have it proxy back to Apache? I couldn't find anything on this at all and am wondering whether it is even possible.
Another option would be to use something like lighttpd as the proxy and have it decide whether to pass things along to Apache or to Tornado, but does this kind of setup make sense? Or what about Nginx?
Any suggestions, advice or corrections on my understanding of things would be greatly appreciated!
This is called a reverse proxy, and it's very easy to configure nginx to do this. (lighttpd should also be able to do the job well, but I have no experience using it.)
The Tornado documentation has an example nginx configuration; a condensed sketch follows.
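A hedged adaptation of that example (the ports and the /realtime/ path are placeholders): route the long-polling paths to Tornado and everything else to Apache.

    upstream tornado {
        server 127.0.0.1:8888;   # placeholder Tornado port
    }
    upstream apache {
        server 127.0.0.1:8080;   # placeholder Apache port
    }

    server {
        listen 80;

        # Long-polling / socket-style traffic goes to Tornado.
        location /realtime/ {
            proxy_pass http://tornado;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Scheme  $scheme;
        }

        # Everything else goes to the existing Apache/PHP site.
        location / {
            proxy_pass http://apache;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }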
One thing to note when using a reverse proxy is that the connection to your upstream server will now originate from the proxy, not the client. The de facto standard is to put information about the original request in certain HTTP headers. In the example from the Tornado docs, the X-Real-IP header is set to the IP of the original client and X-Scheme is set to the scheme of the original request (http or https, for example).
This may require some modifications to your upstream server. With Tornado, this is done by constructing the HTTPServer with the xheaders argument set to True, which instructs the server to pull the IP address and scheme from the X- headers. Note that if you use this with a server that isn't behind a reverse proxy that sets the appropriate headers, then you are open to IP address spoofing.
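A minimal sketch of the xheaders usage (the handler and port are illustrative):

    import tornado.httpserver
    import tornado.ioloop
    import tornado.web

    class PingHandler(tornado.web.RequestHandler):
        def get(self):
            # With xheaders=True, remote_ip reflects the original client
            # (taken from X-Real-IP), not the proxy's address.
            self.write(self.request.remote_ip)

    app = tornado.web.Application([(r"/ping", PingHandler)])
    server = tornado.httpserver.HTTPServer(app, xheaders=True)
    server.listen(8888)
    tornado.ioloop.IOLoop.current().start()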