How does Web Api 2 handle encoding? - asp.net-web-api2

I originally asked the following question to get some answers on how to handle special characters in a URL (GET) request for my Web API:
Web Api 2 routing issue with special characters in URL
Encoding was obviously the way to go. But in order to get everything working, I had to do a pretty nasty workaround. And now I'm at the point where I don't really understand why my workaround had to be done in the first place. So, the following is my setup:
A client can call my Web API 2, hosted on IIS 8.5, with a GET request containing an email in the URL. The most extreme example would be the following email:
#!$%&'*+-/=?^_`{}|~@test.com
And yes, that sucker is a valid email, which the API therefore has to support. The URL pattern is as follows:
.../api/permissions/{email}/{brand}/
So a get request would be something along the lines of this:
.../api/permissions/#!$%&'*+-/=?^_`{}|~@test.com/economy
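Encoding the email is what ultimately makes this callable; here is a quick C# illustration of what the client side would send (hypothetical, not from the original question):

// Percent-encode the email so it travels as a single path segment.
// (Uri.EscapeDataString's exact escape set varies slightly across .NET versions.)
string email = "#!$%&'*+-/=?^_`{}|~@test.com";
string url = "http://host/api/permissions/" + Uri.EscapeDataString(email) + "/economy";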
As the accepted answer to my other question suggests, encoding this URL is obviously a necessity. But this left me with a couple of other issues, such as "double escaping of characters is not allowed" and some specific "404 - not found" responses (the routing could not parse the URL). These I could manage to handle with the following settings for IIS:
<system.web>
  ...
  <httpRuntime requestPathInvalidCharacters="" />
</system.web>
<system.webServer>
  <security>
    <requestFiltering allowDoubleEscaping="true" />
  </security>
  ...
</system.webServer>
Now I was able to call my method with those pesky special characters, and everything was fine. But I hit another bump. The email specified above, #!$%&'*+-/=?^_`{}|~@test.com, resulted in a 404 - not found. An actual 404 - not found. The routing couldn't handle the request URL:
[Route("{email}/{brand}"]
As I understand it, IIS decodes the request URL and passes it on to the IIS request pipeline; it is then picked up by Web API and run through the HTTP message handlers before hitting the controller. In the controller, I could clearly see that the email part of the URL was no longer encoded (provided I used a simple encoded email; the encoded form of "#!$%&'*+-/=?^_`{}|~@test.com" still returned 404). I quickly figured that the routing probably couldn't handle the fragmentation inside the URL path (the # starts the fragment), as IIS passes a decoded URL on to Web API. So I had to get the URL re-encoded in IIS before it was handed on to Web API.
This I was able to work around by using IIS URL Rewrite. It re-encoded the specific part of the URL containing the email, and the routing then handled the special-character email properly. The expected method was hit, and I could just decode the encoded email. To briefly sum up, this was the flow:
[Image: request flow diagram]
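For reference, the receiving end of that flow looked roughly like this. This is a minimal hedged sketch (the controller and action names are placeholders, not the project's actual code), and it assumes attribute routing is enabled via config.MapHttpAttributeRoutes():

using System;
using System.Web.Http;

[RoutePrefix("api/permissions")]
public class PermissionsController : ApiController
{
    // The rewrite keeps the email percent-encoded through routing,
    // so the action decodes it explicitly.
    [Route("{email}/{brand}")]
    public IHttpActionResult GetPermissions(string email, string brand)
    {
        string decodedEmail = Uri.UnescapeDataString(email);
        return Ok(decodedEmail + " / " + brand);
    }
}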
Now, we have set up a LogMessageHandler which logs incoming requests and outgoing responses. When the logger logs request.RequestUri, it is clear that the email is double-encoded. But when the controller method is hit, it is only encoded once! So my question is: why do I have to re-encode the URL in IIS for the routing to handle the request properly, when the URL is already automatically encoded (and decoded again before hitting the controller)? Is this something I can configure? Can I somehow extend the scope in which the URL stays encoded, all the way to the controller?
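For context, a logging handler of the kind described sits in the pipeline roughly like this (a hedged sketch, not the actual LogMessageHandler; it would be registered with config.MessageHandlers.Add(...)):

using System.Diagnostics;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class LogMessageHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Logged here, RequestUri still shows the double-encoded email.
        Trace.WriteLine("Request: " + request.RequestUri);
        HttpResponseMessage response = await base.SendAsync(request, cancellationToken);
        Trace.WriteLine("Response: " + (int)response.StatusCode);
        return response;
    }
}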
Regards
Frederik

Related

How to pass original URI, with arguments, to Traefik ErrorPage handler specified in `query`?

I'm trying to use nginx to serve a custom error page via Traefik's ErrorPage middleware, so that 404 responses from a lambda service (which I don't control) can be handled with a custom error page. I want to be able to get the context of the original request on that error page: either in nginx for further forwarding, or as a header for further handling, e.g. in PHP, so I can provide contextual links on the 404 page.
However, right now after the redirection to Nginx in Traefik's ErrorPage middleware it seems the request has lost all the headers and data from the original service query.
The relevant part of my dockerfile:
traefik.port=8080
traefik.protocol=http
traefik.docker.network=proxy
traefik.frontend.rule=PathPrefix:/myservice;ReplacePathRegex:^/myservice/(.*) /newprefix/$$1
traefik.frontend.errors.myservice.status=404
traefik.frontend.errors.myservice.service=nginx
traefik.frontend.errors.myservice.query=/myservice-{status}
Nginx receives the forwarded 404 request, but the request URI comes through as nothing more than the path /myservice-404 specified in query (or /, if I omit traefik.frontend.errors.myservice.query). After the ReplacePathRegex I have the path of the original request available in the HTTP_X_REPLACED_PATH header, but any query arguments are no longer accessible in any header, and nginx can't see anything else about the original URI. For example, if I requested mysite.com/myservice/some/subpath?with=parameters, the HTTP_X_REPLACED_PATH header will show /myservice/some/subpath but not include the parameters.
Is it possible in Traefik to pass another service the complete context about the original request?
What I'm really looking for is something like try_files, where I could say "if this traefik request fails, try this other path instead", but I'd settle for being able to access the original, full request arguments within the handling backend server. If there was a way to send Nginx a request with the full path and query received by Traefik, that would be ideal.
tl;dr:
I am routing a request to a specific service in Traefik
If that request 404s, I want to be able to pass that request to Nginx for further processing / a contextual error page
I want Nginx and/or the page which receives the ErrorPage redirect to be able to know about the request that 404'd in the service
Unfortunately, this is not possible with Traefik. I tried to achieve something similar, but realized that the only information we are able to pass to the error page is the HTTP status code; that's it.
The only options available are mentioned in their docs: https://doc.traefik.io/traefik/middlewares/errorpages/

Windows Authentication issue with .Net Reverse Proxy using IIS custom HTTP module

We use a custom HTTP module in IIS as a reverse proxy for web applications. Generally this works well and has done for some time, but we've come across an issue with Windows Authentication (WA). We're using IE 11, IIS 10 and Server 2016.
When accessing the target site directly, WA works fine - we get a browser login dialog when the initial HTML page is requested and the subsequent requests (CSS, JS, etc) go through fine.
When accessing via our proxy, the same (correct) behaviour happens for the initial HTML page, and the first CSS/JS request authenticates OK too, but the subsequent ones cause a browser login dialog to pop up.
What seems to happen on the 'bad' requests (i.e. those that cause the login dialog) is:
1) Browser decides it needs to authenticate, so sends an Authorization header (Negotiate, with an NTLM token)
2) Server responds (401) with a WWW-Authenticate: Negotiate response with a full NTLM token
3) Browser re-requests with an Authorization header (Negotiate, with a full NTLM token)
4) Server responds (401) with a WWW-Authenticate: Negotiate (with no token), which causes the browser to show the login dialog
5) With login credentials entered, Browser sends the same request as in (1) - identical NTLM token, server responds as in (2), Browser re-requests as in (3), but this time it works!
We've set up a test web site with one html page, requesting 3 JS and 2 CSS files to replicate this. On our test server we've got two sites, one using our reverse proxy and one using ARR. The ARR site works fine. Also, since step (5) above works, we believe that the proxy pass-through is fundamentally working, i.e. NTLM tokens are not being messed up by dodgy encoding, etc.
One thing that does work: if we use Fiddler and put breakpoints on each request, we're able to hold back the 5 sub-requests (JS & CSS files), letting one go through at a time. If we let each sequence (i.e. the NTLM token exchange for each URL/file, through to the 200 response) complete before releasing the next, then it works. This made us think that there is some interleaving effect (e.g. shared memory corruption) in our proxy; this is still a possibility.
So, we put code at the start of BeginRequest and the end of EndRequest, with a SyncLock and a shared variable storing the path (AppRelativeCurrentExecutionFilePath), so that our code would 'single-thread' each of these request/exchanges. This does what we expected, i.e. it only allows one auth exchange to happen, and result in a 200, before allowing the next. However, we still have the same problem of the server rejecting the first exchange. So, does this indicate something happening in/before BeginRequest, where if we hold the requests back in Fiddler they work, but not if we do it in our HTTP module?
Or is there some sort of timing issue where the manual breakpoints in Fiddler also mean we’re doing it at ‘human’ speed and therefore allowing things to work better?
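In outline, the single-threading experiment described above looks like this (a C# sketch of the idea only; the actual module uses VB's SyncLock and is more involved). A SemaphoreSlim is used here because, unlike a plain lock, it is not thread-affine, and ASP.NET may run BeginRequest and EndRequest on different threads:

using System.Threading;
using System.Web;

public class SerializingModule : IHttpModule
{
    // One permit: only one request may run its auth exchange at a time.
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(1, 1);

    public void Init(HttpApplication app)
    {
        app.BeginRequest += (s, e) => Gate.Wait();
        app.EndRequest += (s, e) => Gate.Release();
    }

    public void Dispose() { }
}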
One difference we can see is the 'Connection: Keep-Alive' header. That header is in the request from the browser to our proxy site, but is not passed from our proxy to the base site, yet the ARR site does pass it through... It's all HTTP/1.1, and we can't find a way to set Keep-Alive on our outgoing request - could this be it?
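For what it's worth, if the outgoing call is made with HttpWebRequest (an assumption; the module's client code isn't shown), keep-alive and NTLM-friendly connection reuse are controlled like this. NTLM authenticates the TCP connection rather than each individual request, so connection sharing matters here:

using System.Net;

// Hypothetical backend URL; the real target isn't shown in the question.
var outgoing = (HttpWebRequest)WebRequest.Create("http://backend-site/app/styles.css");
outgoing.KeepAlive = true; // send Connection: Keep-Alive (persistent HTTP/1.1 connection)
// Allow the NTLM-authenticated connection to be reused by follow-up requests:
outgoing.UnsafeAuthenticatedConnectionSharing = true;
outgoing.ConnectionGroupName = "client-session-key"; // hypothetical per-user key to avoid cross-user reuse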
Regarding 'things to try', we think we've eliminated factors like IE's Intranet Zone for the site, because the ARR site works OK with the same IE settings. Clearly something is not right, so we could still have missed something here!
In short, we've been working on this for days, and have tried most of what we can find on SO and elsewhere, but can't figure out what the heck is going on.
Any suggestions - let me know if you want any further info. All help will be very gratefully received!

Can I use vb.net to interrogate a website to know if it uses SSL

I have a program that asks the user to type in a URL, and click download. Then the program downloads the webpage.
However, some websites use SSL, and in that case the user has to prefix his URL with https:// for this to work.
The problem is that the user may not know whether the website uses SSL, and may type http://... instead of https://....
Is there some way to send a preliminary message to the website (from vb.net) asking whether the URL should start with https or just http? If there is, I can correct the user URL before attempting to retrieve the web page.
(I should say that it is not enough to use something like this:
request.RequestUri.Scheme - this looks at the URL the user submitted, not the URL coming back from the server, as far as I know)
Websites that use SSL will usually force the request to use HTTPS. That is, when you send a request over plain HTTP, for example to http://www.example.com, the website will send a redirect response with HTTP status code 302, along with the URL that the client which initiated the request should redirect the user to.
So you can try HTTP first and check the response to see whether there is a redirect; you will need to handle that in your code.
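A sketch of that check, in C# for brevity (hypothetical URL; translating it to VB.NET is mechanical). Note that some servers don't implement HEAD, in which case a GET works the same way:

using System;
using System.Net;

// Probe the plain-HTTP URL without following redirects.
var request = (HttpWebRequest)WebRequest.Create("http://www.example.com/");
request.Method = "HEAD";
request.AllowAutoRedirect = false;

using (var response = (HttpWebResponse)request.GetResponse())
{
    int status = (int)response.StatusCode;
    string location = response.Headers["Location"];
    bool upgradesToHttps = status >= 300 && status < 400
        && location != null
        && location.StartsWith("https://", StringComparison.OrdinalIgnoreCase);
    // If upgradesToHttps is true, rewrite the user's URL to https:// and retry.
}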

Can't get anything other than HTTP 202 response from eBay API

(Also posted this on the eBay Dev Forums, but it has low volume and is very slow)
I'm trying to build my first call to eBay's API. I am using Postman Chrome App, but have also tried the Python Requests library, with the same problem in each.
No matter what I send, I just get back an HTTP 202 code with an empty body, instead of the XML response I am expecting. This happens with either the sandbox or the production endpoint. Doesn't matter if I use correct or incorrect credentials, or a valid or invalid API call name.
[Screenshot: building the call in Postman]
Finally heard back from eBay Support.
The problem is the trailing slash on the endpoint. Any kind of trailing slash will cause a generic 202 response. Removing it fixed the problem.
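If the endpoint string is assembled at runtime, a defensive normalization avoids this class of failure (a trivial, hypothetical C# guard; the production Trading API endpoint is shown purely for illustration):

// Any trailing slash on the endpoint triggers the bare 202.
string endpoint = "https://api.ebay.com/ws/api.dll/".TrimEnd('/');
// -> "https://api.ebay.com/ws/api.dll"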

Can CSRF Be Circumvented By JSONP

I'm very new to CSRF protection, so please excuse me if I make poor assumptions or if I'm missing something, but I'd like to make sure I'm doing all I can to prevent CSRF. From my research so far, I've found the following:
CSRF can be thwarted (and is probably best thwarted) by placing a nonce parameter within the request body of all HTML POSTs, only using POSTs to modify data, and verifying on the server side that the token is valid before processing the request (a sketch of this follows the list below).
A malicious website can send requests to my site (thwarted by the nonce), but they can't read responses because of same-origin policies in place on browsers. Assuming that someone is using a secure browser, a malicious site cannot GET a page from my site using AJAX, read the nonce, and use it for themselves.
Script tags are not bound by same-origin policies in most (possibly all) browsers and can therefore allow content to be read from other sites.
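To make point 1 concrete, here is a minimal sketch using ASP.NET MVC's built-in anti-forgery support (illustrative only; any server-side nonce scheme follows the same shape, and the controller here is hypothetical):

using System.Web.Mvc;

public class ProfileController : Controller
{
    // The Razor view renders a hidden nonce with @Html.AntiForgeryToken().
    [HttpGet]
    public ActionResult Edit()
    {
        return View();
    }

    // Rejects any POST whose hidden-field token doesn't match the cookie token.
    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult Edit(string displayName)
    {
        // Only reached when the nonce validated; safe to modify data here.
        return RedirectToAction("Edit");
    }
}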
When I got to point 3 I decided to try to get at HTML content in Chrome using JSONP; I opened up my console (with a page not from localhost) and ran the following code:
$.ajax({
    url: "http://localhost:8080/my-app/",
    dataType: "jsonp",
    jsonp: "alert"
}).done(function(data) {
    alert(data);
});
What I received in the console window was this:
Resource interpreted as Script but transferred with MIME type text/html
As far as I can tell, the browser is essentially telling me that it received the content and proceeded to parse the response, but stopped because the type was not application/json. So finally, my question is: can this be compromised? Even though the browser failed to parse the response as JSON, it does have the response. Is there a way this response could be parsed as HTML, grab my CSRF nonce, and compromise the protections I'm trying to enforce? It would seem to me (and I hope it's true) that browsers will not allow this, just as they don't allow cross-domain requests in the first place for basically all other communication, and that's what we as developers are relying upon (in addition to same-origin policies) to protect our sites. Is my thinking correct?