I have a test plan that runs fine over HTTP, and the Cookie Manager correctly keeps my sessions in place. It can also talk to the same server when switched to SSL, and even thinks everything is working correctly, because it gets a 200 response with our custom message about not being logged in.
All I need to do to reproduce the behavior is switch from http to https. The test can still talk to the server, but in the "View Results in Table" log I can see that the Cookies column has a JSESSIONID under HTTP and is empty under HTTPS, and every request over SSL is answered with a fresh Set-Cookie for JSESSIONID.
Interesting scenario. Does the JMeter log file offer any clues?
Could it be that JMeter needs a copy of the certificate to properly store the SSL cookie? In that case the console would show a handshake problem, which can be resolved by importing the certificate into the Java keystore:
http://www.java-samples.com/showtutorial.php?tutorialid=210
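A typical import looks like this (a sketch: the alias, the certificate file name, and the keystore path are assumptions for your environment; the Windows-style path assumes the JRE JMeter runs under, and "changeit" is the default cacerts password):

keytool -import -alias myserver -file myserver.cer -keystore "%JAVA_HOME%\jre\lib\security\cacerts" -storepass changeit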
You might be able to debug further by writing the cookie value out to a variable and logging its value:
Received Cookies can be stored as JMeter thread variables (versions of JMeter after 2.3.2 no longer do this by default). To save cookies as variables, define the property "CookieManager.save.cookies=true". Also, cookie names are prefixed with "COOKIE_" before they are stored (this avoids accidental corruption of local variables). To revert to the original behaviour, define the property "CookieManager.name.prefix= " (one or more spaces). If enabled, the value of a cookie with the name TEST can be referred to as ${COOKIE_TEST}.
Source: http://jmeter.apache.org/usermanual/component_reference.html#HTTP_Cookie_Manager
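For example, in user.properties (or jmeter.properties) under JMeter's bin directory, the relevant entries would look like this:

# Expose received cookies as ${COOKIE_<name>} thread variables
CookieManager.save.cookies=true
# Optional: revert to unprefixed variable names (the value is one or more spaces)
CookieManager.name.prefix= 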
Edit: Somebody asked how my specific problem was solved. It turned out to have nothing to do with SSL specifically; other, unrelated headers changed very slightly in format, so the regex we were using to match them started failing. So I'd start there: look at your headers and compare the differences between posting over HTTP vs. HTTPS.
We use a custom HTTP module in IIS as a reverse proxy for web applications. Generally this works well and has done for some time, but we've come across an issue with Windows Authentication (WA). We're using IE 11, IIS 10 and Server 2016.
When accessing the target site directly, WA works fine - we get a browser login dialog when the initial HTML page is requested and the subsequent requests (CSS, JS, etc) go through fine.
When accessing via our proxy, the same (correct) behaviour happens for the initial HTML page, and the first CSS/JS request authenticates OK too, but the subsequent ones cause a browser login dialog to pop up.
What seems to happen on the 'bad' requests (i.e. those that cause the login dialog) is:
1) Browser decides it needs to authenticate, so sends an Authorization header (Negotiate, with an NTLM token)
2) Server responds (401) with a WWW-Authenticate: Negotiate response with a full NTLM token
3) Browser re-requests with an Authorization header (Negotiate, with a full NTLM token)
4) Server responds (401) with a WWW-Authenticate: Negotiate (with no token), which causes the browser to show the login dialog
5) With login credentials entered, Browser sends the same request as in (1) - identical NTLM token, server responds as in (2), Browser re-requests as in (3), but this time it works!
We've set up a test web site with one HTML page, requesting three JS and two CSS files, to replicate this. On our test server we've got two sites, one using our reverse proxy and one using ARR. The ARR site works fine. Also, since step (5) above works, we believe the proxy pass-through is fundamentally working, i.e. NTLM tokens are not being mangled by dodgy encoding, etc.
One thing that does work: if we use Fiddler and put breakpoints on each request, we can hold back the five sub-requests (JS & CSS files) and let one go through at a time. If we let each sequence (i.e. the NTLM token exchange for each URL/file, through to the 200 response) complete before releasing the next, it works. This made us think there is some interleaving effect (e.g. shared memory corruption) in our proxy; that is still a possibility.
So, we put code at the start of BeginRequest and the end of EndRequest, with a SyncLock and a shared variable storing the path (AppRelativeCurrentExecutionFilePath), to 'single-thread' each of these request/response exchanges. This did what we expected, i.e. it only allowed one auth exchange to happen, and to complete with a 200, before allowing the next. However, we still have the same problem of the server rejecting the first exchange. So, does this indicate something happening in/before BeginRequest, where holding the requests back in Fiddler works, but doing it in our HTTP module does not?
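For reference, here is a rough C# sketch of what our single-threading code does (the actual module is VB.NET with SyncLock; all names here are illustrative, and a semaphore stands in for the lock because BeginRequest and EndRequest can fire on different threads):

using System.Threading;
using System.Web;

// Illustrative sketch only: serialize whole request/response exchanges,
// letting one auth handshake finish before the next request proceeds.
public class SingleThreadingModule : IHttpModule
{
    private static readonly SemaphoreSlim Gate = new SemaphoreSlim(1, 1);

    public void Init(HttpApplication app)
    {
        app.BeginRequest += (s, e) => Gate.Wait();    // block until the previous exchange completes
        app.EndRequest   += (s, e) => Gate.Release(); // release once the response has gone out
    }

    public void Dispose() { }
}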
Or is there some sort of timing issue where the manual breakpoints in Fiddler also mean we’re doing it at ‘human’ speed and therefore allowing things to work better?
One difference we can see is 'Connection: Keep-Alive'. That header is in the request from the browser to our proxy site, but is not passed from our proxy to the base site, yet the ARR site does pass it through. It's all HTTP/1.1, and we can't find a way to set Keep-Alive on our outgoing request - could this be it?
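(For context: NTLM authenticates the underlying connection rather than individual requests, so the handshake legs must stay on one persistent connection. If the outgoing call were made with HttpWebRequest - an assumption on our part; the URL below is illustrative - these would be the relevant knobs:)

// KeepAlive (default true) keeps the outgoing connection persistent; under
// HTTP/1.1 the Connection: Keep-Alive header itself is usually omitted
// because persistence is the default.
var outgoing = (HttpWebRequest)WebRequest.Create("http://backend.example.com/app/site.css");
outgoing.KeepAlive = true;
// Often required for NTLM through an intermediary: allow the authenticated
// connection to be reused across requests.
outgoing.UnsafeAuthenticatedConnectionSharing = true;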
Regarding 'things to try': we think we've eliminated things like needing the site in IE's Intranet Zone, because the ARR site works OK with the same IE settings. Clearly something is not right, so we could have missed something here!
In short, we've been working on this for days, and have tried most of what we can find on SO and elsewhere, but can't figure out what the heck is going on.
Any suggestions - let me know if you want any further info. All help will be very gratefully received!
I am new to JMeter and load testing overall, but I have read about the Cookie Manager over and over and still can't find the answer to my problem.
The site I am trying to test uses several cookies to authenticate, but not all of them show up in the JMeter response headers. I can see them if I look in the browser, but JMeter doesn't seem to pick them up at all.
If I manually set the cookies in the Cookie Manager after a recent session, the test passes, but my concern is that when I use multiple threads, they won't each get individual values; they'll all just use the ones I have specified.
I expect all the cookies that are set to appear in the response headers so that I can extract them into variables, but out of three, only one appears in JMeter.
Please check the information below:
JMeter checks that received cookies are valid for the URL. This means that cross-domain cookies are not stored. If you have bugged behaviour or want cross-domain cookies to be used, define the JMeter property "CookieManager.check.cookies=false".
Received Cookies can be stored as JMeter thread variables. To save cookies as variables, define the property "CookieManager.save.cookies=true". Also, cookie names are prefixed with "COOKIE_" before they are stored (this avoids accidental corruption of local variables). To revert to the original behaviour, define the property "CookieManager.name.prefix= " (one or more spaces). If enabled, the value of a cookie with the name TEST can be referred to as ${COOKIE_TEST}.
You can find these settings in the JMeterFolder/bin/jmeter.properties file.
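For example, the relevant entries (added to user.properties, or uncommented in jmeter.properties) would look like this:

# Keep cookies that fail JMeter's validity checks (e.g. cross-domain cookies)
CookieManager.check.cookies=false
# Expose received cookies as ${COOKIE_<name>} thread variables
CookieManager.save.cookies=true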
For more information:
Cookie Manager
Kindly check if this helps.
This problem is driving me nuts. Our web app uses HTTP POST to log users in, and now IE 10 is aborting the connection and saying:
SCRIPT7002: XMLHttpRequest: Network Error 0x2f7d, Could not complete the operation due to error 00002f7d.
Here are all the details I have:
IE version 10.0.9200.16618, update version 10.0.6. I've also reproduced this on IE version 10.0.9200.16635, update version 10.0.7.
The domain is using HTTPS. The problem doesn't occur over HTTP connections.
I've read that for some reason IE needs to get a certificate before it can do an HTTP POST, so I have HTTP GETs running before my POST request, but now the GET is erroring out. See the network flow screenshot. The GET is super simple, just a PING page that returns "I'm up."
Async is turned off: $.ajax({type: 'POST', url: url, async: false, ...}); I've read in other posts that this matters.
The certificate is good, see screen shot.
The problem goes away if the site is added as a "trusted site" but that's not really the user experience we're shooting for.
This just started about a month ago. Did Microsoft push some new updates recently?
I've already read: http://social.msdn.microsoft.com/Forums/windowsapps/en-US/dd5d2762-7643-420e-880a-9bf75554e383/intermittent-xmlhttprequest-network-error-0x2f7d-could-not-complete-the-operation-due-to-error. It doesn't help.
Screenshots (network flow; certificate details) accompanied the original post.
Any help is greatly appreciated. I've spent a lot of hours on this with no luck. As you would expect this works fine in Chrome and Firefox. If you need any more detail about what's happening please let me know.
Thanks,
Certificate revocation checks may block the initial JSON POST, but allow subsequent requests after the GET callback
We recently determined that URLMon's code (Win8, Win7, and probably earlier) to ignore certificate revocation check failures is not applied for content uploads (e.g. HTTP POST). Hence, if a Certificate Revocation check fails, that is fatal to the upload (e.g. IE will show a Page Cannot Be Displayed error message; other clients would show a different error). However, this rarely matters in the real world because in most cases, the user first performs a download (HTTP GET) from the target HTTPS site, and as a result the server's certificate is cached with the "ignore revocation check failures" exemption for the lifetime of the process and thus a subsequent POST inherits that flag and succeeds. The upload fails if the very first request to the HTTPS site in the current process was for an upload (e.g. as in a cross-origin POST request).
Here is how it works:
A little background: When a web browser initiates a HTTPS handshake with a web server, the server immediately sends down a digital certificate. The hostname of the server is listed inside the digital certificate, and the browser compares it to the hostname it was attempting to reach. If these hostnames do not match, the browser raises an error.
The matching-hostnames requirement causes a problem if a single-IP is configured to host multiple sites (sometimes known as “virtual-hosting”). Ordinarily, a virtual-hosting server examines the HTTP Host request header to determine what HTTP content to return. However, in the HTTPS case, the server must provide a digital certificate before it receives the HTTP headers from the browser. SNI resolves this problem by listing the target server’s hostname in the SNI extension field of the initial client handshake with the secure server. A virtual-hosting server may examine the SNI extension to determine which digital certificate to send back to the client.
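You can observe this from the command line; assuming OpenSSL is installed, the -servername option fills in the SNI extension, so you can compare which certificate a virtual-hosting server returns for different hostnames:

# -servername sets the SNI field of the ClientHello; try the same
# command with and without it against a virtual-hosted server.
openssl s_client -connect example.com:443 -servername example.com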
The GET may be a victim of the 'operation aborted' scenario:
The HTML file is being parsed, and encounters a script block. The script block contains inline script which creates a new element and attempts to add it to the BODY element before the closing BODY tag has been encountered by the parser.
<body>
  <div>
    <script>
      var newElem = document.createElement("span");
      document.body.appendChild(newElem); // BODY is not yet closed when this runs
    </script>
  </div>
</body>
Note that if I removed the <div> element, then this problem would not occur because the script block's immediate parent would be BODY, and the script block's immediate parent is immune to this problem.
References
Understanding Certificate Revocation Checks
Client Certificates vs Server Certificates
Understanding and Managing the Certificate Stores
Preventing Operation Aborted Scenarios
HTTPS Improvements in IE
Online Certificate Status Protocol - OCSP
[SOLVED]
I only observed this error today. For me the error code was different, though:
SCRIPT7002: XMLHttpRequest: Network Error 0x2efd, Could not complete the operation due to error 00002efd.
It was occurring randomly, not every time, but what I noticed is that when it appears, it appears on subsequent AJAX calls. So I put a delay of 5 seconds between the AJAX calls, and that resolved it.
Also, CORS must be configured on your web server.
I had the exact same issue and just finally resolved it. For some reason I got the same error you were receiving in IE when connecting to an API using OWIN middleware to receive login credentials. It seemed to work fine when connecting to any other sort of API, though. For some reason IE didn't like the cross-domain request even though I had CORS enabled server-side on the API.
Anyway, I was able to resolve the issue using the xdomain library. Make sure you load this script before loading any other JavaScript.
First create a proxy.html page on the root of your API server and add this code. Replace the placeholder URL.
<!DOCTYPE HTML>
<script src="//cdn.rawgit.com/jpillora/xdomain/0.7.3/dist/xdomain.min.js" master="http://insert_client_url_here.com"></script>
Now simply add this to your client, replacing the placeholder URL with one pointing to the proxy.html page on your API server.
<script src="//cdn.rawgit.com/jpillora/xdomain/0.7.3/dist/xdomain.min.js" slave="http://Insert_Api_Url_Here.com/proxy.html"></script>
Adding a delay is not a proper solution.
This can happen because IE treats the request as a network error when it is made with an empty body.
Try adding an empty class as the parameter on the server, and IE should start working.
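A minimal sketch of that idea, assuming ASP.NET Web API (the controller, model, and message names are illustrative, not from the original post):

using System.Web.Http;

// Placeholder model: its only job is to give the POST action a body
// parameter so the framework reads the (possibly empty) request body.
public class EmptyPayload { }

public class PingController : ApiController
{
    [HttpPost]
    public IHttpActionResult Post([FromBody] EmptyPayload payload)
    {
        return Ok("I'm up");
    }
}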
I have an ASP.NET MVC action that sends a GET request to another server via HttpWebRequest. I'd like to include all the cookies from the original action's request in the new request. Some of the System.Web.HttpCookies in the original request have empty domain values (i.e. ""), which apparently doesn't cause any issues. When I create a System.Net.Cookie using the name, value, path, and domain of each of these cookies and add it to the request's CookieContainer, I get this error:
"System.ArgumentException: The parameter '{0}' cannot be an empty string. Parameter name: cookie.Domain"
Here's some code that will throw the same error (when the cookie is added):
var request = (HttpWebRequest)WebRequest.Create("http://www.whatever.com");
request.Method = "GET";
request.CookieContainer = new CookieContainer();
// Throws ArgumentException: cookie.Domain cannot be an empty string.
request.CookieContainer.Add(new Cookie("MyCookieName", "MyCookieValue", "/", ""));
EDIT
I sort of fixed this by using "localhost" for the domain, instead of the null or empty string value from the original HttpCookie. So, why does an empty domain not work for the CookieContainer? And does HttpCookie use an empty value to signify localhost, or do I need to find another fix for this problem?
Disclaimer:
As stated earlier by @feroze, setting your cookies' domain to localhost is not going to work out well for you. I'm assuming you're writing a helper that lets you tunnel HTTP requests out to foreign domains. Note that this is not best practice and in many cases is not needed (e.g. jQuery has a lot of cool cross-domain support built in; also see the new CORS specification). But sometimes you may be stuck doing this (e.g. the external resource is XML-only, and is on a server that doesn't support CORS).
Background Information on Cookie Domains and How They Work:
If you haven't already, take a look at HTTP Cookie: Domain and Path on Wikipedia - pretty much everything you need to know is in there.
When evaluating a cookie, the Domain and Path are taken into account by both the client (the "local" requester) and the web server (the "foreign" responder). When a client requests a resource, the client should only send cookies where those cookies match the Domain (or a more generic parent domain) and Path (or a more generic parent path) of the URI being requested.
Web browsers handle this correctly. If a web browser has a cookie for the domain "localhost" and you're requesting "google.com", for example, those cookies for the "localhost" domain won't be sent in the request to "google.com". -- In fact, most modern browsers won't just not send them, they'll completely ignore them in Set-Cookie response headers that they receive (these are called third-party cookies -- enabling the acceptance of third party cookies in your web browser is a huge privacy/security concern -- don't do it!).
It works in the other direction as well - even though it's unlikely for a client to include a third-party cookie in a request, if it does, the foreign web server is supposed to ignore it (and even some cookies for correct domains/paths, so as to prevent the infamous super-cookie issue: e.g. the web server hosting "example.com" should ignore cookies belonging to its parent domain ".com", because ".com" is a "public suffix").
What You Should Do [if you have to]:
The course of action I recommend is this: when you read in your client's cookies (I'm not an MVC guy, but in regular ASP.NET this would be Request.Cookies), loop through them, making sure to filter out your own site's legitimate cookies, especially SessionId, etc. (or use Path properly so they never get sent to this page in the first place). Then add them one at a time to the outgoing request's cookie collection, rewriting the domain to "www.whatever.com" (per your example; if you're doing this dynamically, load the URL into a new Uri() object and use its .Host property) and setting the Path to "/". This builds the "Cookie" header for the outgoing request to the foreign web server.
When that request returns to your server, you then need to check its response for new cookies. Those cookies can be repackaged and sent back down to your client in much the same kind of loop, except you'll want to rewrite the domain to Request.Url.Host, and set the Path back to "/" unless the path to your passthru page is static (I'm guessing it isn't, since you're using MVC), in which case you'd set it to Request.Url.AbsolutePath, for instance.
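A minimal sketch of the outgoing half, assuming a System.Web context (the helper name, the session-cookie filter, and targetUri are illustrative):

using System;
using System.Net;
using System.Web;

// Hypothetical passthru helper: copy the client's cookies onto the outgoing
// request, rewriting Domain and Path so the CookieContainer will send them.
static HttpWebRequest BuildOutgoingRequest(HttpRequest incoming, Uri targetUri)
{
    var request = (HttpWebRequest)WebRequest.Create(targetUri);
    request.CookieContainer = new CookieContainer();

    foreach (string name in incoming.Cookies)
    {
        if (name == "ASP.NET_SessionId") continue; // keep our own session cookie at home
        HttpCookie c = incoming.Cookies[name];
        // Rewrite the domain to the foreign host and generalize the path.
        request.CookieContainer.Add(new Cookie(c.Name, c.Value, "/", targetUri.Host));
    }
    return request;
}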
Happy Coding!
EDIT:
Also, you'll want to set the X-Forwarded-For header on the outgoing request, so that the website you're calling doesn't think your web server is one single client that's been spamming the crap out of them.
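For example (one line, reusing the hypothetical helper's variables from the sketch above):

// Pass the real client's address along so the foreign server sees
// many clients rather than one very busy one.
request.Headers.Add("X-Forwarded-For", incoming.UserHostAddress);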
Not sure whether it solves your problem, but to add cookies without the "Domain" property you must add them to the headers yourself, using HttpRequestHeader.Cookie, as follows.
request.Headers.Add(HttpRequestHeader.Cookie, "Your cookies...");
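The header value uses the raw wire format, so (a sketch; the cookie names echo the question's example) it would look like:

// A raw Cookie header bypasses CookieContainer's domain validation entirely.
request.Headers.Add(HttpRequestHeader.Cookie, "MyCookieName=MyCookieValue; OtherCookie=OtherValue");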
Hope it helps!
Some background
This occurs because CookieContainer is a client-side container designed to be reused across multiple HttpWebRequests. Reusing it provides the expected cookie behavior: cookies set by the remote host are sent back with every subsequent HttpWebRequest targeted at the same host.
As a result of the reuse, a CookieContainer might actually contain cookies from multiple requests and/or hosts.
So, in order to determine which of the cookies in the container need to be sent with a particular HttpWebRequest to some host (domain), CookieContainer examines the Domain and Path properties.
That's why a Cookie in a CookieContainer needs to have a valid Domain.
Conversely, on the server side, cookies are delivered via a different type, CookieCollection, which is a simple list of cookies with no extra logic.
Specifically, in your case, while copying cookies from the CookieCollection to the CookieContainer, you need to set the Domain property of every cookie to the domain you are going to forward the request to, so that HttpWebRequest knows to include the cookies when sending the request.
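A sketch of that copy, where sourceCookies is a System.Net.CookieCollection and targetHost is the forwarding destination (both names are illustrative):

// Copy server-side cookies into a client-side CookieContainer,
// stamping each one with the destination domain first.
var container = new CookieContainer();
foreach (Cookie cookie in sourceCookies)
{
    cookie.Domain = targetHost; // e.g. "www.whatever.com"
    container.Add(cookie);
}
outgoingRequest.CookieContainer = container;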
You are trying to get cookies sent to localhost, right?
Why don't you do something like this where you give your own machine a real name:
Edit your hosts file and add a line "127.0.0.1 myname.com"
Test using myname.com - which is actually your localhost.
Your browser or app will not know the difference and will send cookies to myname.com if that is where the cookie belongs.
Detailed info:
The hosts file on Windows is located at C:\Windows\System32\drivers\etc\hosts
We are reviewing the design of a system and need to verify what we think may be a security issue.
In this system some sensitive information is sent in the query string. The questions are:
Can the query string parameters be read as the request goes over the internet, even if the request is sent over https?
Can the query string parameters be read from the browsing history on the client machines?
When you use HTTPS, the SSL/TLS connection is established before any HTTP traffic is sent, thus the whole request (including the URL and its parameters) will be encrypted and won't be readable. The only thing that's possibly visible by a third party is the server certificate (so they could see the host name, but that's it).
The browser's history isn't protected by HTTPS as such, although some browsers have privacy options that may avoid recording some HTTPS URLs. This ultimately depends on the browser and its configuration.
This is certainly a security issue if sensitive details are being passed in a GET request.
Sensitive data will not only be cached in the user's browser but also in any proxy along the way, plus in web server logs.
Yes to the first. Not sure about the second - it depends on the browser, I guess - but I suspect yes here as well.