SOAPAction header is not passing through

I have a very simple definition in my traefik.toml file. The backend is a service that echoes back the headers it receives.
[frontends]
  [frontends.test]
    entryPoints = ["http"]
    backend = "test"
    passHostHeader = true
    [frontends.test.routes]
      [frontends.test.routes.route0]
        rule = "Host:localhost;PathPrefixStrip:/test"

[backends]
  [backends.test]
    [backends.test.servers]
      [backends.test.servers.server0]
        url = "http://localhost:8000"
        weight = 1
I can pass any HTTP header from the client to the backend, and it is echoed back as implemented in the backend service. However, I cannot pass the SOAPAction header: Traefik does not return any response until it times out, and there is nothing in the log that indicates an issue.
Any help will be much appreciated.

This may be due to the fact that Traefik rewrites header names into their canonical form, as HTTP headers should be case-insensitive (see https://github.com/containous/traefik/issues/466).
Could you check that on your backend server?
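For context: Traefik is written in Go, and Go's net/http stores header names in canonical MIME form. A minimal sketch of what that normalization does to the name, using only the standard library's net/textproto:

package main

import (
    "fmt"
    "net/textproto"
)

func main() {
    // Canonical form: the first letter and any letter following a
    // hyphen are upper-cased, everything else is lower-cased.
    fmt.Println(textproto.CanonicalMIMEHeaderKey("SOAPAction"))   // "Soapaction"
    fmt.Println(textproto.CanonicalMIMEHeaderKey("content-type")) // "Content-Type"
}

So a backend that looks up the literal key "SOAPAction" case-sensitively would never find the header, which arrives as "Soapaction".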

Related

Proxy an incoming file-upload POST request using the lua-resty-http request module

I am trying to proxy requests using the lua-resty-http module. I currently pass the headers, body, request method, and URL of the HTTP request, which works fine. But when a file-upload POST request comes in, the proxy fails to handle it, as I obviously haven't configured it to do so. I want to proxy that type of request as well. Here is a snippet of my code so far; what changes should I make so that it extracts the file-upload data and sends it using the lua-resty-http module?
local http = require "resty.http"
local cjson = require "cjson"

local httpc = http.new()
local path = ngx.var.request_uri

-- forward only the cookie and content-type headers upstream
local passHeader = { ["cookie"] = ngx.req.get_headers()["cookie"] }
passHeader["content-type"] = ngx.req.get_headers()["content-type"]

ngx.req.read_body()
local body = ngx.req.get_body_data()

local original_req_uri = "https://" .. "fakehost.com" .. path
local req_method = ngx.req.get_method()

local res, err = httpc:request_uri(original_req_uri, {
    method = req_method,
    ssl_verify = false,
    keepalive_timeout = 60000,
    headers = passHeader,
    keepalive_pool = 10,
    body = body
})
Read the docs!
https://github.com/openresty/lua-nginx-module#ngxreqget_body_data
A POST request may contain a big body, and nginx may write it to a disk file.
If the request body has been read into disk files, try calling the ngx.req.get_body_file function instead.
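A minimal sketch of that fallback, assuming the body may have been spooled to a temp file (read back with plain io.open):

ngx.req.read_body()
local body = ngx.req.get_body_data()
if not body then
    -- get_body_data() returns nil when nginx buffered the body to disk,
    -- so read back the temp file that get_body_file() points at
    local body_file = ngx.req.get_body_file()
    if body_file then
        local f = assert(io.open(body_file, "rb"))
        body = f:read("*a")
        f:close()
    end
end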
PS: IMO, proxying HTTP requests in Lua is not optimal, because it is fully buffered. To me it makes sense only if you need to issue subrequests. For the main path, where most requests are processed, I recommend using proxy_pass.

NiFi bypass host name verification in SSL context service

I am trying to connect to a REST endpoint via the GetHTTP processor in NiFi 1.5.0.
The problem I am facing is that the SSL certificate is issued to the domain, but I only have direct access to the IP:port address (company firewall).
With that I run into the problem that the host name and the certificate owner don't match up, and the IP is not added as a subject alternative name.
When I try to connect, I get this error message:
javax.net.ssl.SSLPeerUnverifiedException: Certificate for
<[IP-ADDRESS]> doesn't match any of the subject alternative names: []
Is there a way to bypass the host name verification?
I have found this NiFi Jira ticket but it doesn't seem to be addressed yet. Is there a workaround I could use?
You could try using InvokeHTTP and its "Trusted Hostname" property.
As the "Trusted Hostname" property is deprecated in recent versions of NiFi, you can use the ExecuteScript processor with Ruby instead. An example is below. The body of the POST request must be in the FlowFile contents; after the processor runs, the body of the response will be in the FlowFile contents.
require "uri"
require "net/http"
require "openssl"
java_import org.apache.commons.io.IOUtils
java_import java.nio.charset.StandardCharsets
java_import org.apache.nifi.processor.io.StreamCallback
# Define a subclass of StreamCallback for use in session.read()
class JRubyStreamCallback
include StreamCallback
def process(inputStream, outputStream)
text = IOUtils.toString(inputStream, 'utf-8')
url = URI("https://...")
https = Net::HTTP.new(url.host, url.port)
https.use_ssl = true
https.verify_mode = OpenSSL::SSL::VERIFY_NONE
request = Net::HTTP::Post.new(url)
request["Authorization"] = "Basic ..."
request["Content-Type"] = "application/json"
request.body = text
response = https.request(request)
outputStream.write((response.read_body).to_java.getBytes(StandardCharsets::UTF_8))
end
end
jrubyStreamCallback = JRubyStreamCallback.new
flowFile = session.get()
if flowFile != nil
flowFile = session.write(flowFile, jrubyStreamCallback)
session.transfer(flowFile, REL_SUCCESS)
end
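One caveat: OpenSSL::SSL::VERIFY_NONE disables certificate verification entirely, not just the host name check, so this workaround should only be used against endpoints you already trust.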

HAProxy add header based on URL parameter

Using HAProxy v1.6
I'm making a WebSocket request, which currently (at least in JavaScript) doesn't support custom headers.
I'm trying to add a custom header at the HAProxy layer (before forwarding the request to the load balancer) based on a GET parameter.
Example:
The following code works (in the backend section):
# match a someGetKey parameter anywhere in the request URL
acl is_key_match url_reg \?(?:.*?)someGetKey=([\w|=]+)
# add the header when the ACL matches
http-request set-header My-Custom-Header hardcoded_string if is_key_match
My goal is to replace hardcoded_string with the first match group of the regex \?(?:.*?)someGetKey=([\w|=]+).
Is it possible?
Thanks!
Found the solution:
http-request set-header cookie %[urlp(SSession)] if is_sticky_url
%[...] - a sample fetch expression
urlp(SSession) - fetches the HTTP GET parameter with the key SSession
For that example, the URL:
https://www.example.com/path?SSession=abcd
will be forwarded with the header:
cookie: abcd
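Putting the pieces together, a minimal sketch of the matching ACL plus the header rule (parameter and ACL names taken from the answer above):

# fire only when an SSession parameter is present in the query string
acl is_sticky_url urlp(SSession) -m found
# copy its value into the forwarded header
http-request set-header cookie %[urlp(SSession)] if is_sticky_url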

HttpWebRequest 401 response

I have a program written in VB.NET that interacts with a data service hosted on IIS. Authentication is handled through the user's Active Directory credentials. At one of my customer sites, on exactly one (out of about 100) of the customer's workstations, requests to the data service fail with a status of 401.
Some additional relevant information: the production IIS installation is split into two nodes, and a load balancer directs traffic to them. Also, the exact same request made with Internet Explorer from the workstation in question does not fail.
I suspect that something is stripping the user's credentials out of the request when I make it through the VB code, but I am stumped as to what that could be.
Here is the VB code that I use to make the request:
Dim httpRequest As HttpWebRequest = Nothing
Dim httpResponse As HttpWebResponse = Nothing
httpRequest = CType(WebRequest.Create("http://server/xyzportal/portal.php"), HttpWebRequest)
httpRequest.KeepAlive = False
httpRequest.UseDefaultCredentials = True
httpRequest.Method = "GET"
httpRequest.ContentLength = 0
httpRequest.Accept = "text/xml"
httpRequest.Timeout = 3000000
httpResponse = CType(httpRequest.GetResponse(), HttpWebResponse)
Any thoughts would be appreciated.
Additional information: here are the IIS log entries for a request that fails. Notice the 2nd entry does not include the Windows user name:
2014-11-11 22:20:42 199.99.51.58 GET /xyzportal/portal.php - 80 - 199.99.50.128 - 401 2 5 0
2014-11-11 22:20:42 199.99.51.58 GET /xyzportal/portal.php - 80 - 199.99.50.128 - 401 1 2148074248 0
Contrast that to the IIS entries for a request from a working machine. Notice the 2nd entry does include the Windows user name:
2014-11-11 22:56:40 199.99.51.58 GET /xyzportal/portal.php - 80 - 199.99.50.128 - 401 2 5 0
2014-11-11 22:56:40 199.99.51.58 GET /xyzportal/portal.php - 80 MYDOMAIN\jreichert 199.99.50.128 - 200 0 0 93
The machine with the IP Address 199.99.50.128 is the load balancer.
I am logged in with the exact same domain and user on both machines.
You haven't said whether you are using a proxy, but if you are, you haven't told the HttpWebRequest to use the AD user credentials for the proxy, so you are getting a 401 Unauthorized error, i.e. you are being refused access by the proxy. If so, try this to tell it explicitly:
httpRequest.Proxy.Credentials = System.Net.CredentialCache.DefaultCredentials
I had exactly the same problem, and this solved it.
KeepAlive must be set to True. Setting KeepAlive = True fixes my problem. The following page explains the role of keep-alive in the authentication handshake:
http://www.innovation.ch/personal/ronald/ntlm.html
I am still not sure why the request does not work on <1% of the workstations in my customer base when KeepAlive = False. All I know is that setting KeepAlive = True makes the request work on 100% of the workstations.
More info: KeepAlive must be set to True when the authentication protocol is Kerberos. The request works if the authentication protocol is NTLM. I don't know why Kerberos gets used on only the two workstations where the request does not work.
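Applied to the code in the question, that is a one-line change:

' Keep the connection open so that the multi-leg Kerberos/NTLM
' handshake completes on a single TCP connection.
httpRequest.KeepAlive = True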

How to handle https traffic using libmproxy?

I want to implement a proxy server that intercepts both HTTP and HTTPS requests. I came across libmproxy (http://mitmproxy.org/doc/scripting/libmproxy.html), which is SSL-capable. I started with the simplest possible proxy, which just prints the headers of all requests and responses and forwards them to clients and servers normally.
#!/usr/bin/env python
from libmproxy import controller, proxy
import os

class Master(controller.Master):
    def __init__(self, server):
        controller.Master.__init__(self, server)
        self.stickyhosts = {}

    def run(self):
        try:
            return controller.Master.run(self)
        except KeyboardInterrupt:
            self.shutdown()

    def handle_request(self, msg):
        print "handle request.................................................."
        print msg.headers
        msg.reply()

    def handle_response(self, msg):
        print "handle response................................................."
        print msg.headers
        msg.reply()

config = proxy.ProxyConfig(
    cacert=os.path.expanduser("~/.mitmproxy/mitmproxy-ca.pem")
)
server = proxy.ProxyServer(config, 1234)
m = Master(server)
m.run()
Then I configured the HTTP and SSL proxy in Firefox to 127.0.0.1 port 1234. HTTP seems to work fine, as I can see all the headers being printed out. However, when the browser sends HTTPS requests, the proxy server does not print anything at all, and the browser displays a "the connection was interrupted" error.
Further investigation reveals that the HTTPS requests go through the proxy server but not through controller.Master. I see that proxy.ProxyHandler.establish_ssl() is called when there is an HTTPS request, but the request does not go through controller.Master.handle_request(). Even though establish_ssl() is called, the browser does not seem to get any response back. I tested this with https://www.google.com.
First, how can I make proxy.ProxyHandler work properly with HTTPS requests/responses? Second, how can I modify controller.Master so that it can intercept HTTPS requests? I'm also open to other tools on top of which I can build a custom HTTP/HTTPS proxy server.
You need to install the mitmproxy CA in the browser you are testing with.
Please see details here ("Installing the mitmproxy CA" section):
http://mitmproxy.org/doc/ssl.html
This solved the problem for me.
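Worth noting for Firefox specifically: the browser keeps its own trust store, so the CA certificate (by default mitmproxy-ca-cert.pem in ~/.mitmproxy) has to be imported through Firefox's certificate manager; trusting it at the operating-system level is not enough.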