Ontotext GraphDB broken transactions behind HTTPS reverse proxy - graphdb

I'm setting up the following cloud-based deployment of GraphDB 9.11.1 Free Edition:
- the server runs on a Google Compute Engine VM instance, listening for HTTP requests on port 80
- external requests are routed through an HTTPS load balancer listening on https://example.org/
- the graphdb.external-url configuration property is set to https://example.org/, following the instructions at https://graphdb.ontotext.com/documentation/enterprise/configuring-graphdb.html#url-properties
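For reference, this is roughly what the relevant settings look like in conf/graphdb.properties (they can also be passed as -D system properties on startup); property names are as documented for GraphDB 9.x, and the hostnames are from the setup above:

```properties
# conf/graphdb.properties
# Public base URL as seen by clients behind the HTTPS load balancer
graphdb.external-url = https://example.org/
# Local HTTP connector the VM instance actually listens on
graphdb.connector.port = 80
```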
The console is working as expected, but when I try to interact with the database from an external agent using an RDF4J 3.7.7 HTTPRepository (extended binary protocol supporting transactions), I get the following exception:
org.eclipse.rdf4j.repository.RepositoryException: unable to rollback transaction. HTTP error code 404
at org.eclipse.rdf4j.http.client.RDF4JProtocolSession.rollbackTransaction(RDF4JProtocolSession.java:785)
at org.eclipse.rdf4j.repository.http.HTTPRepositoryConnection.rollback(HTTPRepositoryConnection.java:354)
at org.eclipse.rdf4j.repository.http.HTTPRepositoryConnection.close(HTTPRepositoryConnection.java:368)
Looking at the network traffic, the library is trying to close the transaction using
http://example.org/repositories/data-work/transactions/b99f4327-91d4-4f64-8aa0-c9eb0fb9db92
which is an HTTP transaction URL where an HTTPS URL would be expected.
Am I missing something or is this an issue with the construction of transaction URLs behind reverse HTTPS proxies?

Related

What should I do to fix HTTP Request Smuggling on Apache?

I scanned my site with Burp Suite Professional.
It reported that a vulnerability called "HTTP Request Smuggling" had been detected.
The vulnerability was detected with Burp Suite Professional ver. 2.1.03 of August 7, 2019.
My server environment is as follows.
CentOS 7
Apache 2.4
PHP 7.3
PortSwigger explains how to resolve this problem:
change the web server's network protocol from HTTP/1.1 to HTTP/2.
https://portswigger.net/web-security/request-smuggling#how-to-prevent-http-request-smuggling-vulnerabilities
So I enabled SSL on my site, and then HTTP/2 support as well.
When I scanned again, the "HTTP Request Smuggling" vulnerability was detected AGAIN.
How can I fix this?
I am not interested in the details of this problem or how it works.
What I want to know is how to stop it from being detected.
If you have encountered a similar issue, please tell me the solution.
If possible, I would like to know exactly what you changed, e.g. in httpd.conf or php.ini.
I found that I may need to upgrade my version of Tomcat, but I haven't tried that yet.
Article about solution
If you are using end-to-end HTTP/2 communication, that should eliminate the vulnerability. By end-to-end, I mean that HTTP/2 is the only HTTP version used in all HTTP traffic.
Many web architectures have a load balancer or proxy in front of the web server that accepts HTTP/2 traffic. However, many front-end servers rewrite the incoming HTTP/2 traffic into HTTP/1.1 when forwarding it to the backend/web server. Once the traffic is rewritten to HTTP/1.1, HTTP request smuggling becomes possible. More info here: https://www.youtube.com/watch?v=rHxVVeM9R-M
I'm posting this quote from James Kettle, a researcher at PortSwigger: "you can resolve all variants of this vulnerability by configuring the front-end server to exclusively use HTTP/2 to communicate to back-end systems, or by disabling back-end connection reuse entirely."
source: https://portswigger.net/research/http-desync-attacks-request-smuggling-reborn
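On Apache 2.4.17+ Kettle's first option (HTTP/2 all the way to the back-end) can be sketched with mod_proxy_http2, which is marked experimental; the backend hostname and paths below are placeholders:

```apache
# Load HTTP/2 support and the HTTP/2 proxy module (Apache 2.4.17+)
LoadModule http2_module       modules/mod_http2.so
LoadModule proxy_module       modules/mod_proxy.so
LoadModule proxy_http2_module modules/mod_proxy_http2.so

# Serve HTTP/2 to clients
Protocols h2 http/1.1

# Forward to the back-end over cleartext HTTP/2 (h2c://); use h2:// for TLS
ProxyPass        "/app/" "h2c://backend.internal:8080/app/"
ProxyPassReverse "/app/" "http://backend.internal:8080/app/"

# Alternative (Kettle's second option): stay on HTTP/1.1 but disable
# back-end connection reuse entirely
# ProxyPass "/app/" "http://backend.internal:8080/app/" disablereuse=On
```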

can not deploy web service via endpoint.publish() in apache reverse proxy env

We have a WebLogic server on the internal network without SSL. To access the application, an Apache server is installed as a reverse proxy with SSL configured. Deploying a web service via endpoint.publish(address), where the address is obtained from HttpServletRequest.getRequestURL(), works if the access URL is internal, but fails with the exception below if the access URL is the proxy URL. Any idea how to publish it via the proxy URL?
weblogic.wsee.server.ServerURLNotFoundException: Cannot resolve URL for protocol http/https
at weblogic.wsee.server.ServerUtil.getHTTPServerURL(ServerUtil.java:211)
at weblogic.wsee.server.ServerUtil.getServerURL(ServerUtil.java:150)
at weblogic.wsee.server.ServerUtil.getServerURL(ServerUtil.java:137)
at weblogic.wsee.jaxws.spi.WLSEndpoint.calculatePublicAddressFromEndpointAddress(WLSEndpoint.java:335)
at weblogic.wsee.jaxws.spi.WLSEndpoint.publish(WLSEndpoint.java:207)
As per Oracle KM "Secure WebService call throwing CANNOT RESOLVE URL FOR PROTOCOL HTTP/HTTPS through web server (Apache) plug-in" (Doc ID 1598617.1), this is Product Bug 8358398. For WLS 10.3.0 to 10.3.2 you need to apply the patch for this bug and set -Dweblogic.wsee.useRequestHost=true in your JAVA_OPTIONS.
For 10.3.3 and above you do not need to apply the patch; setting the above flag to true is enough.
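For example, in the domain's setDomainEnv.sh (the exact file and path vary by installation), the flag could be appended like this:

```sh
# Append the flag so WebLogic derives the published endpoint URL
# from the incoming request's Host header (i.e. the proxy URL)
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.wsee.useRequestHost=true"
export JAVA_OPTIONS
```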

what is proxy server and how it helps in server architecture

I am very confused by the terms proxy, proxy server, and proxy program; I see them used everywhere. Some people use proxy websites to unblock other websites, and there are related things like reverse proxies.
When I read an article about nginx, I ran into a picture that mentioned a proxy cache. What is a proxy cache?
And how can I write a proxy program? What does that mean, and why would we need one?
Can anybody answer my questions as simply as possible? I am not very familiar with this area.
A proxy server is used to facilitate security, administrative control or caching service, among other possibilities. In a personal computing context, proxy servers are used to enable user privacy and anonymous surfing. Proxy servers are used for both legal and illegal purposes.
On corporate networks, a proxy server is associated with -- or is part of -- a gateway server that separates the network from external networks (typically the Internet) and a firewall that protects the network from outside intrusion. A proxy server may exist in the same machine with a firewall server or it may be on a separate server and forward requests through the firewall.
When a proxy server receives a request for an Internet service (such as a Web page request), it looks in its local cache of previously downloaded Web pages. If it finds the page, it returns it to the user without needing to forward the request to the Internet. If the page is not in the cache, the proxy server, acting as a client on behalf of the user, uses one of its own IP addresses to request the page from the server out on the Internet. When the page is returned, the proxy server relates it to the original request and forwards it on to the user.
To the user, the proxy server is invisible; all Internet requests and returned responses appear to be exchanged directly with the addressed Internet server. (The proxy is not quite invisible; its IP address has to be specified as a configuration option to the browser or other protocol program.)
An advantage of a proxy server is that its cache can serve all users. If one or more Internet sites are frequently requested, these are likely to be in the proxy's cache, which will improve user response time. A proxy can also log its interactions, which can be helpful for troubleshooting.
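The caching behavior described above can be sketched in a few lines of Python. This is illustrative only: a real proxy speaks HTTP and honors cache-expiry headers, and origin_fetch here is a stand-in for the request the proxy would make to the remote server on the user's behalf.

```python
class CachingProxy:
    """Toy model of a caching forward proxy's lookup logic."""

    def __init__(self, origin_fetch):
        self.cache = {}                 # URL -> previously downloaded page
        self.origin_fetch = origin_fetch

    def get(self, url):
        if url in self.cache:           # cache hit: answer without going to the Internet
            return self.cache[url], "HIT"
        page = self.origin_fetch(url)   # cache miss: fetch on behalf of the user
        self.cache[url] = page          # remember it for the next request
        return page, "MISS"

proxy = CachingProxy(lambda url: "<html>%s</html>" % url)
print(proxy.get("http://example.org/"))  # first request misses the cache
print(proxy.get("http://example.org/"))  # repeat request is served from the cache
```

The second call returns the page without invoking origin_fetch again, which is exactly the response-time advantage the paragraph above describes.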

Can Apache HTTP Server be configured to forward a request to multiple backend workers simultaneously?

Are you aware of a mod_proxy, mod_proxy_balancer, mod_proxy_http configuration of Apache 2.2 that would allow HTTP requests to be replicated? That is: each matched request is sent to an existing balancer AND replicated to another worker node.
Goal:
Take production HTTP traffic coming into Apache 2.2, retain normal production load-balanced routing AND replicate that same traffic to one more [test] worker fronting a new back-end database required to be performance and load tested under production operations.
Background info:
Multi-tier system.
(a) Custom applications
(b) Redirector/Proxy [Apache 2.2 using mod_proxy, mod_proxy_balancer, mod_proxy_http]
(c) Workers [application server nodes: Tomcat 7.0.56 over Java 1.7.0_67 over 64-bit Linux kernels]
(d) Database [Oracle 11.2]
End-users driving custom applications generate HTTP requests funneled to the redirector. The redirector forwards application requests on a round-robin basis to a pool of worker nodes. Workers directly access backend database. HTTP responses funnel back through the redirector to the end-user workstation.
No, it is not currently possible. However, you can record the traffic and replay it later with a relatively new module called mod_firehose, although it is not an all-in-one tool.
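For reference, mod_firehose records traffic to a file for later replay with the accompanying firehose tool. A minimal sketch might look like this (directive names as in the Apache trunk documentation; the file path is a placeholder, and the module is experimental, so check the docs for your build):

```apache
LoadModule firehose_module modules/mod_firehose.so

# Record the incoming request data of all connections to a file for later replay
FirehoseRequestInput /var/log/apache2/request-input.firehose
```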

Delay issue with Websocket over SSL on Amazon's ELB

I followed the instructions from this link:
How do you get Amazon's ELB with HTTPS/SSL to work with Web Sockets? to set up ELB to work with WebSocket (having ELB forward 443 to 8443 in TCP mode). Now I am seeing this issue with wss: the server sends message1, and the client does not receive it; a few seconds later, the server sends message2, and the client receives both messages (each around 30 bytes). I can reproduce the issue fairly easily. If I set up port forwarding with iptables on the server and have the client connect directly to the server (port 443), I don't have the problem. Also, the issue seems to happen only with wss; ws works fine.
The server is running Jetty 8.
I checked EC2 forums and did not really find anything. I am wondering if anyone has seen the same issue.
Thanks
From what you describe, this is pretty likely a buffering issue with ELB. Quick research suggests that this is indeed the issue.
From the ELB docs:
When you use TCP for both front-end and back-end connections, your load balancer will forward the request to the back-end instances without modification to the headers. This configuration will also not insert cookies for session stickiness or the X-Forwarded-* headers.

When you use HTTP (layer 7) for both front-end and back-end connections, your load balancer parses the headers in the request and terminates the connection before re-sending the request to the registered instance(s). This is the default configuration provided by Elastic Load Balancing.
From the AWS forums:
I believe this is HTTP/HTTPS specific but not configurable but can't say I'm sure. You may want to try to use the ELB in just plain TCP mode on port 80 which I believe will just pass the traffic to the client and vice versa without buffering.
Can you try to make more measurements and see how this delay depends on the message size?
Now, I am not entirely sure what you already did and what failed and what did not. From the docs and the forum post, however, the solution seems to be using the TCP/SSL (Layer 4) ELB type for both front-end and back-end.
This also resonates with Nagle's algorithm: the TCP stack could be bundling small writes before sending them over the wire to reduce traffic. That would explain the symptoms, so it is worth a try.
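If Nagle's algorithm turns out to be the culprit, the standard countermeasure is to disable it on the server's sockets via the TCP_NODELAY option (many servers, including Jetty, expose this as a connector setting; check your version's docs). The raw socket flag looks like this, with Python used purely to illustrate the option:

```python
import socket

# Create a TCP socket and disable Nagle's algorithm, so small writes
# (like 30-byte WebSocket frames) are sent immediately instead of being
# bundled with later data.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# A non-zero value confirms Nagle's algorithm is now off for this socket
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY))
```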