We have one page that causes Session_Start to run and I cannot figure out why. Every time Home.aspx is called, Session_Start in Global.asax fires and a new session ID is created.
The session ID appears to be created right when Session_Start is called, which I believe is normal. I just don't know why it keeps being called from this one page.
Home.aspx uses a different master page than the other pages, but I have not found anything in it that could be causing this. I also checked for Response.Redirect calls, because others have indicated those can be the issue, and found none being made.
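In case it helps with diagnosis, this is the kind of logging I could add inside the existing Session_Start in Global.asax to see exactly which request is creating the new session (the Trace call and the fields logged are just my own choices, not anything from the existing code):

void Session_Start(object sender, EventArgs e)
{
    // Log the new session id, the raw URL of the request that triggered
    // Session_Start (with cookieless sessions the (S(...)) token, or its
    // absence, shows up here), and the referrer if there is one.
    System.Diagnostics.Trace.WriteLine(string.Format(
        "Session_Start: id={0} url={1} referrer={2}",
        Session.SessionID,
        Request.RawUrl,
        Request.UrlReferrer == null ? "(none)" : Request.UrlReferrer.ToString()));
}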
Using Firebug, I found this:
Server Microsoft-IIS/5.1
.
.
Location /ent4_sql/(S(unfzfplfp5ltgxcrtpt2bk3f))/Home.aspx?_TSM_HiddenField_=ctl00_ctl04_HiddenField&_TSM_CombinedScripts_=%3b%3bAjaxControlToolkit%2c+Version%3d4.1.40412.0%2c+Culture%3dneutral%2c+PublicKeyToken%3d28f01b0e84b6d53e%3aen-US%3aacfc7575-cdee-46af-964f-5d85d9cdcf92%3ade1feab2%3af9cec9bc%3aa67c2700%3af2c8e708%3a8613aea7%3a3202a5a2%3aab09e3fe%3a87104b7c%3abe6fb298
The session ID in that URL is different from the one displayed in the browser address bar; the browser always seems to keep the original.
From searching the web, this seems to be an issue without a direct answer. I am out of things to look for, so any suggestions would be helpful.
Update:
Using Fiddler, I found that the system is actually first requesting ent4_sql/Home.aspx?.....
Notice the session ID is not in that URL.
That request returns a page containing
<html><head><title>Object moved</title></head><body>
<h2>Object moved to here.</h2>
</body></html>
and that response seems to redirect whatever called it to a URL with the session ID in it. I am trying to find more information on whether others have seen the AjaxControlToolkit doing this.
The problem turned out to be the Ajax Control Toolkit combined with cookieless sessions, as suspected. Here is an issue reported in 2009 dealing with the same problem; a fix is now included on the issue:
http://ajaxcontroltoolkit.codeplex.com/workitem/23443
I need some help with an issue in Vue Storefront that I have been unable to solve:
After navigating to a PDP (Product Detail Page) and refreshing the page, the page gets redirected to 'page-not-found' for many products. The products that get redirected are always the same, and not all products are affected. Navigating to the PDP with router-link works, even when the path is hardcoded, but navigating to it directly or refreshing the page while on a PDP does not. I also tried isolating the problem and found that including just the Product.js mixin file from the core, with all other code removed from the PDP, still causes the redirect.
I have been unable to solve this bug despite trying for days. Even if you can't see what is causing the issue, it would help if someone could at least show me how to debug it; I don't know how to find out which code is causing the redirect to 404.
Thanks
Is the behavior present only for a given type of product (like bundles), or do the affected products share some common characteristic?
The problem may be a lack of some data during the SSR refresh, so on the client side the product can't be fetched. This can happen if we are missing checks for some specific product types. Can you ping me on the VS Slack and give me access to the code? It looks like the fault is on our side :)
Which version of VS are you using? If 1.9, please make sure you've got the products properly imported with product.url_path set. Then please do check if you don't have
I'm trying to add CSRF protection to an existing MVC4 web application which uses DevExpress grids. I've added Html.AntiForgeryToken() into the forms on the aspx pages (which contain ascx partials holding the grids) and can clearly see the __RequestVerificationToken and its value in developer tools when a save is called.
I've tried commenting out all my ValidateAntiForgeryToken attributes except one (I went with the delete POST method for simplicity, and also to eliminate the DevExpress grids messing with it) and I still keep coming up against this error:
There was an HttpAntiForgeryException
Url: http://localhost:54653/Users/Delete/f86ad393-0039-44e8-beed-a66dbab9266e?ReturnURL=http%3A%2F%2Flocalhost%3A54653%2FUsers
The exception message is
The required anti-forgery form field "__RequestVerificationToken" is not present.
Does anyone have any idea why this might be happening? Could it be that the error is non-descriptive and it's actually that the token doesn't match rather than that it doesn't exist? In previous answers to this question people just say "oh, you have to add the token," which is obviously not helpful here.
Are you submitting the form manually through Ajax? If that's the case, you need to pass the anti-forgery token as another parameter with the name "__RequestVerificationToken".
Point 1: Make sure your application is served over the secure HTTPS protocol; please load it over HTTPS.
Point 2: In the case of DevExpress, you have to emit the token using the pattern below.
ViewContext.Writer.Write(Html.AntiForgeryToken().ToHtmlString());
After struggling with this for days I had a thought: maybe the browser is stopping the cookie from being written. I searched for dev servers and cookies not being written, and found that Chrome and IE10 and up have problems writing the cookies.
I downloaded Firefox and tried it with that, and it worked instantly. I then reapplied all the validate attributes to all the controller methods and they all worked, every single one of them! Even the DevExpress postbacks seem to be working correctly.
I'll carry out more exhaustive testing, but for now, I think we're there.
Not exactly. If the MVC AntiForgeryToken is already defined on the page where you are using MVCxGridView and you want to protect grid actions, you should send this token back to the server during the grid's client-side BeginCallback event.
settings.ClientSideEvents.BeginCallback = "function(s,e) { e.customArgs[\"__RequestVerificationToken\"] = $('input[name=\"__RequestVerificationToken\"]', $(s.GetMainElement())).val(); }";
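For completeness, the server side can then stay an ordinary MVC action. A minimal sketch (the controller, action, and view names here are made up, not taken from your project):

using System.Web.Mvc;

public class UsersController : Controller
{
    [HttpPost]
    [ValidateAntiForgeryToken]
    public ActionResult GridViewPartial()
    {
        // The idea, per the BeginCallback snippet above, is that the hidden
        // __RequestVerificationToken value travels with the grid callback, so the
        // standard ValidateAntiForgeryToken attribute can find it just as it would
        // on a normal form post.
        return PartialView("_GridViewPartial");
    }
}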
Background: I'm trying to log into an HTTPS site with my valid credentials, navigate to a page that has a frequently updated list, and then scrape the list.
I was using code someone else wrote, which worked until a few weeks ago. I am new to this, but even I can see that the code was not very good, so I am trying to rewrite it.
First I log into the site and create a tunnel. Then I move to the page where my list is and grab the list, etc.
Here's what's weird. The login fails every time, until I turn on Fiddler. With Fiddler running it succeeds every time.
Any idea why this would happen and how to fix it?
Many thanks.
I got it working!
For anyone who finds themselves in the same situation (I've seen a number of postings of similar questions, but the answers hadn't worked for me, so I expect I am not alone), I eventually saw that I needed to set the security protocol to TLS. The specific syntax I used was:
ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls;
The setting needs to be specified before the HttpWebRequest GET or POST occurs.
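To show where it sits, here is roughly the shape of the code now (the URL and the rest of the request are placeholders, not my actual scraping code):

using System.IO;
using System.Net;

class LoginExample
{
    static void Main()
    {
        // Must run before the HttpWebRequest for the site is created and sent.
        ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls;

        var request = (HttpWebRequest)WebRequest.Create("https://example.com/login");
        request.Method = "GET";

        using (var response = (HttpWebResponse)request.GetResponse())
        using (var reader = new StreamReader(response.GetResponseStream()))
        {
            // With the protocol pinned to TLS, the handshake succeeds without Fiddler.
            string body = reader.ReadToEnd();
        }
    }
}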
If you have a similar problem, I hope this helps.
I had an invalid "User-Agent" header. It contained invalid characters (ä, ö, ü).
This is an odd question in that I'm not trying to display a page in Enterprise Mode; I'm trying to prevent it from displaying in Enterprise Mode. I'm assisting the web server team, so my access is limited to changes in the page itself.
The twist is that the rest of the domain has to be displayed in Enterprise Mode, save for this one page.
I've tried utilizing an XML document and changing HKLM\software\microsoft\internet explorer\main\EnterpriseMode, setting SiteList to my file location on the local machine and Enabled to blank. The page ignores this and loads itself into Enterprise Mode anyway.
Here is an example of my Site.XML (note: I've changed the server name to protect the innocent):
<rules version="1">
<emie>
<domain exclude="false">internalportal.ExampleServer.com<path exclude="true">/OperationsRecap/</path></domain>
</emie>
</rules>
I've tried the same thing in the HKCU key, and even checked gpedit for anything that might be pushing it back to a default. No such luck. This should be a fairly simple procedure, but it's stumping me. I'm starting to wonder if the web server team has a customHeader stuck in web.config, but I don't have access and I've been waiting for an answer from them for a few days now. And by 'waiting' I mean 'continually hounding'.
Compatibility mode doesn't seem to make a difference, whether it's on or off. I have several sites with different settings that get the same problem, and several sites with different settings that do not. There does not appear to be any rhyme or reason in terms of configuration on the local machines. So while it's tempting to call it an issue with the IIS7 web.config and wash my hands of the whole thing, I have to be absolutely certain.
I've dug into the source code, and literally the only difference is in the META tag. The pages that load correctly report X-UA-Compatible as IE=Edge, as they're supposed to. The ones that do not load correctly report IE=8, despite all my attempts to force them to stop doing that. In fact, when a page fails to load I can go to Tools in IE11, de-select Enterprise Mode, and it reloads just fine; the META tag in the source changes as well. Again, this happens whether compatibility mode is on or off and whether there's a site list in play or not, utterly ignoring any changes I make to the EnterpriseMode key.
Thoughts?
Found the answer. I was looking in HKLM\software\microsoft\internet explorer\main\EnterpriseMode.
I should have been looking in HKLM\software\policies\microsoft\internet explorer\main\EnterpriseMode.
Lesson learned, stupid mistake.
I have a page tab app that I am hosting, with both HTTP and HTTPS supported. While I receive a signed_request package as expected, after I decode it, it does not contain page information. That data is simply missing.
I verified that matching schemes (HTTPS) are being used by Facebook, my hosted site, and even the 'go-between', Facebook's static page handler.
I also created a new application with page tab support but got the same result: simply no page information in the signed_request.
Any other causes people can think of?
I add the app to the page tab using this link:
https://www.facebook.com/dialog/pagetab?app_id=176236832519816&next=https://www.intelligantt.com/Facebook/application.html
Here is the page tab I am using (Note: requires permissions):
https://www.facebook.com/pages/School-Auction-Test-2/154869721351873?id=154869721351873&sk=app_176236832519816
Here is the decoded signed_request I am receiving:
{"algorithm":"HMAC-SHA256","code":!REMOVED!,"issued_at":1369384264,"user_id":"1218470256"}
5/25 Update: I thought maybe the canvas app URLs didn't match the page tab URLs, so I spent several hours going through scenarios where they both had a trailing slash or not, where they both had a trailing ? or not, and with query parameters or not.
I also tried changing the 'next' value when creating the page tab to the canvas app URL and to the page tab URL.
No success on either count.
I did read that seeing the 'code' value in the signed_request means Facebook either couldn't match my URLs or that I'm capturing the second request. However, given all the URL permutations I went through, I believe the URLs match. I also subscribed to the 'auth.authResponseChange' event, which should give me the very first authResponse containing the signed_request with page.id in it (but it doesn't).
If I had any reputation, I'd add a bounty to this.
Thanks.
I've just spent ~5 hours on this exact same problem and posted a prior answer that was incorrect. Here's the deal:
As you pointed out, the signed_request appears to be missing the page data if your tab is implemented in pure JavaScript as a static HTML page (with a *.htm extension).
I repeated the exact same test, on the exact same page, but wrapped my HTML page (including the JS) within a Perl script (with a *.cgi extension)... and voilà, the signed_request has the page info.
Although confusing (and it should be better documented as a design choice by Facebook), this may make some sense: it would be impossible to validate the signed_request wholly within JavaScript without placing your secret key within scope (and therefore revealing it to a potential attacker).
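To make the "why" concrete, this is roughly what any server-side handler has to do with the raw signed_request, sketched in C# purely for illustration (my own tab uses Perl, the helper names below are mine, and the app secret obviously has to stay on the server, which is exactly why a static JS-only page can't do this safely):

using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

static class SignedRequestDecoder
{
    public static string DecodeAndValidate(string signedRequest, string appSecret)
    {
        // signed_request is "<base64url signature>.<base64url payload>"
        string[] parts = signedRequest.Split('.');
        byte[] expectedSig = Base64UrlDecode(parts[0]);

        // The signature is HMAC-SHA256 of the still-encoded payload, keyed with the app secret.
        using (var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(appSecret)))
        {
            byte[] actualSig = hmac.ComputeHash(Encoding.UTF8.GetBytes(parts[1]));
            if (!actualSig.SequenceEqual(expectedSig))
                throw new InvalidOperationException("signed_request signature mismatch");
        }

        // The payload is base64url-encoded JSON (user_id, issued_at, and, when the
        // app is loaded inside a page tab, the page object).
        return Encoding.UTF8.GetString(Base64UrlDecode(parts[1]));
    }

    static byte[] Base64UrlDecode(string input)
    {
        string s = input.Replace('-', '+').Replace('_', '/');
        switch (s.Length % 4) { case 2: s += "=="; break; case 3: s += "="; break; }
        return Convert.FromBase64String(s);
    }
}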
It would be much easier with the PHP SDK, but if you just want to use JavaScript, maybe this will help:
Facebook Registration - Reading the data/signed request with Javascript
Also, you may want to check out this: https://github.com/diulama/js-facebook-signed-request
Simply put, you can't get the full params with the JavaScript signed_request. Use the PHP SDK to get the full signed_request and record the values you need into JavaScript variables.
With the PHP SDK, after instantiation, use the Facebook object as follows:
$signed_request = $facebook->getSignedRequest();
var_dump($signed_request) ;
This is just for debugging, but you'll see that the printed array contains many values that you won't get with the JS SDK, for security reasons.
Hope this helps anyone who needs it, because it seems this issue takes at least 3 hours for everyone who runs into it.