Internet Explorer 11 downloads a URL unexpectedly - internet-explorer-11

I am trying to diagnose an issue where one single user, out of many, finds that IE11 downloads an EXE.
That EXE is named in the URL, let's say: http://my.site/path/this.EXE
For all other users an HTML page is returned, as you would expect. Otherwise I would suspect that MIME-type handling was broken on the server, but since this only happens to one user, presumably the fault is with the configuration of this one instance of IE?
Searching for an explanation is tricky, as 99% of people on the internet appear to have the exact opposite problem!
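One quick way to rule the server out is to look at the headers it actually returns for that URL. A minimal sketch, assuming Python with the requests package is available (the URL is just the example from the question):

import requests

# Fetch the page and show what the server says it is sending
resp = requests.get("http://my.site/path/this.EXE")
print("Status:", resp.status_code)
print("Content-Type:", resp.headers.get("Content-Type"))
print("Content-Disposition:", resp.headers.get("Content-Disposition"))
# If Content-Type is text/html and there is no "attachment" Content-Disposition,
# the server looks fine and the download behaviour points at that one IE11 install.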

Related

Retrieve current chrome open page in html without saving it

I'm implementing a Python script mainly based on pyautogui. One of the things the script does is open a Chrome webpage. After that I would need to access the DOM of this currently open webpage.
Since I've not opened the browser with Selenium, I can't use it to analyze the DOM.
However, my question is: is this currently open Chrome page available/saved somewhere on the hard drive so that I can access it with Selenium? Like an .html file?
I checked many other questions here and users talk about the Chrome cache, but there are no HTML files there.
I just need to be able to access the currently open page and not all the historical data in the cache.
Opening the web browser directly with Selenium is not an option either, since most of the websites analyzed have captchas and Distil technology.
Thanks.
If you start the original Chrome with the --remote-debugging-port=PORT_NR argument, and visit localhost:PORT_NR from another browser, you will have access to the full content of the browser, including the dev console.
Once you have this, you have multiple ways to go:
You can visit http://localhost:PORT_NR with any other browser (or even with the same browser), and you should have full access to the content of the original Chrome. With Selenium you should have a relatively easy time getting by.
You can also use the DevTools API (the documentation is... well... there is room for improvement; search for "Chrome DevTools Protocol" to be amazed by the lack of docs). As an example you can go to http://localhost:PORT_NR/json to get the available debugging URIs. Grab the relevant websocket endpoint (webSocketDebuggerUrl), open a websocket connection, and issue a command like {"method": "DOM.getDocument", "id": 12}. You can find the available DOM-related commands here: https://chromedevtools.github.io/devtools-protocol/1-3/DOM
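For illustration, a minimal sketch of that flow in Python, assuming Chrome was started with --remote-debugging-port=9222 and that the requests and websocket-client packages are installed (the port number and package choice are assumptions, not part of the original answer):

import json
import requests
from websocket import create_connection  # from the websocket-client package

# Ask the running Chrome for its debuggable targets
targets = requests.get("http://localhost:9222/json").json()

# Pick the first normal tab (type "page") and open its websocket endpoint
page = next(t for t in targets if t.get("type") == "page")
ws = create_connection(page["webSocketDebuggerUrl"])

# Issue a DevTools Protocol command, e.g. DOM.getDocument as mentioned above
ws.send(json.dumps({"id": 12, "method": "DOM.getDocument"}))
print(json.loads(ws.recv()))  # root node of the tab's current DOM
ws.close()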
Since I had to reinvent the wheel, I may give some extra info that I couldn't find anywhere:
Start the Browser with remote debugging enabled (see previous posts)
Connect to the given port on localhost and use these HTTP GET requests to get a (very limited) degree of control over your browser (a short sketch follows the endpoint list below):
https://chromedevtools.github.io/devtools-protocol/#endpoints
Most important:
GET /json/new?{url}
GET /json/activate/{targetId}
GET /json/close/{targetId}
GET /json or /json/list
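Here is a small sketch of those endpoints, assuming Python with the requests package and a debugging port of 9222; note that recent Chrome versions expect PUT instead of GET for /json/new:

import requests

BASE = "http://localhost:9222"

# List all open targets (tabs, extensions, ...)
targets = requests.get(BASE + "/json/list").json()
print([t["title"] for t in targets if t["type"] == "page"])

# Open a new tab at the given URL (newer Chrome builds require PUT here)
new_tab = requests.get(BASE + "/json/new?http://example.com").json()

# Bring it to the foreground, then close it again
requests.get(BASE + "/json/activate/" + new_tab["id"])
requests.get(BASE + "/json/close/" + new_tab["id"])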
To gain full control over the browser, you need to use a websocket connection. Each object in GET /json or /json/list has its own ID. Use this ID to interact with the tab. By the way: entries of type "page" are normal tabs; the other entries are extensions and so on. Once you know which tab you want to influence, get its webSocketDebuggerUrl.
Use this URL and connect with something that can speak the WebSocket protocol.
Once connected, you must craft valid JSON with the following structure:
{
  "id": 0,
  "method": "Page.navigate",
  "params": {"url": "http://google.com"}
}
Notes:
id is a simple counter (int) that you increase with every command you send - it is not the ID of the tab(!)
method is the method described in the docs; params are described there as well.
The return values are always JSONs.
From now on you can use the official docs:
https://chromedevtools.github.io/devtools-protocol/tot/Page/#method-navigate
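Putting the notes together, a minimal sketch of sending Page.navigate over the websocket with an incrementing id counter, assuming the websocket-client package; the endpoint string is only a placeholder for the webSocketDebuggerUrl you picked from /json:

import json
from websocket import create_connection  # websocket-client package

# Placeholder - use the webSocketDebuggerUrl of the tab you want to drive
WS_URL = "ws://localhost:9222/devtools/page/TARGET_ID"
ws = create_connection(WS_URL)

counter = 0
def send(method, **params):
    # Send one DevTools Protocol command and return the parsed JSON reply
    global counter
    counter += 1
    ws.send(json.dumps({"id": counter, "method": method, "params": params}))
    return json.loads(ws.recv())

print(send("Page.navigate", url="http://google.com"))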
Dunno how other people found out about it, but it took a few hours to get it working - probably because everyone is just using Python's Selenium to do it.

IE11 - Getting a webpage to STOP showing in Enterprise Mode

This is an odd question, in that I'm not trying to display a page in EnterpriseMode - I'm trying to prevent it from displaying in EnterpriseMode. I'm assisting the Webserver team, so my access is limited to only changes in the page itself.
The twist is that the rest of the domain has to be displayed in EnterpriseMode, save for this one page.
I've tried utilizing an XML document and changing HKLM\software\microsoft\internet explorer\main\enterprisemode -- setting SiteList to my file location on the local machine, and Enabled to blank. The page ignores this and loads itself into EnterpriseMode anyways.
Example of my Site.XML. Note: I've changed the server name to protect the innocent. Also I'm having to use the escape characters so the note quits trying to interact with my example. I could've sworn code block should've stopped that.
<rules version="1">
  <emie>
    <domain exclude="false">internalportal.ExampleServer.com<path exclude="true">/OperationsRecap/</path></domain>
  </emie>
</rules>
I've tried the same thing in the HKCU key, and even checked gpedit for anything that might be pushing it to default. No such luck. This should be a fairly simple procedure, but it's stumping me. I'm starting to wonder if the Webserver team has a customHeader stuck in web.config, but I don't have access and I've been waiting for an answer from them for a few days now. And by 'waiting' I mean 'continually hounding'.
Compatibility mode doesn't seem to make a difference, whether it's on or off. I have several sites with different settings that get the same problem - and several sites with different settings that do not. There does not appear to be rhyme or reason in terms of configuration on the local machines. So while it's tempting to call it an issue with the IIS7 web.config and wash my hands of the whole thing, I have to be absolutely certain.
I've dug at the source code, and literally the only difference is in the META tag. Those that load correctly report X-UA-Compatible as IE=Edge, like they're supposed to. Those that do not report IE=8, despite all my attempts to force them to stop that. In fact, when a page fails to load I can go to Tools in IE11, de-select Enterprise Mode, and it reloads just fine - the META tag changes as well in the source. Again, this happens whether compatibility mode is on or off and whether there's a list in play, utterly ignoring any changes I make to the EnterpriseMode key.
Thoughts?
Found the answer. I was looking in HKLM\software\microsoft\internet explorer\main\EnterpriseMode
I should have been looking in hklm\software\policies\microsoft\internet explorer\main\EnterpriseMode
Lesson learned, stupid mistake.
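For anyone who wants to double-check both locations, here is a small sketch that reads the SiteList and Enabled values from each key; it assumes it is run with Python on the affected Windows machine (winreg is in the standard library there):

import winreg

PATHS = [
    r"Software\Microsoft\Internet Explorer\Main\EnterpriseMode",           # where I was looking
    r"Software\Policies\Microsoft\Internet Explorer\Main\EnterpriseMode",  # where the policy actually lives
]

for path in PATHS:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            for name in ("SiteList", "Enabled"):
                try:
                    value, _ = winreg.QueryValueEx(key, name)
                    print("HKLM\\" + path + "\\" + name, "=", repr(value))
                except FileNotFoundError:
                    print("HKLM\\" + path + "\\" + name, "is not set")
    except FileNotFoundError:
        print("HKLM\\" + path, "does not exist")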

Is it possible for others to find images on my server that aren't referenced on my website?

If I upload a file to my webserver, is it possible for anyone, or any crawler of some sort, to find that file even though I haven't linked to it or referenced it from anywhere?
Say, for example, you have a site that hides content from non-logged-in users; if I know the path to an image file, I am able to reach that file even though I am not logged in. This is the case on several sites I regularly visit. But is this really a problem? Is it possible for people with bad intentions to find these images even though they can't log in?
My next question would of course be (maybe that's another thread though): how can I, as a web developer using a LAMP stack, protect file paths from being requested by non-logged-in users?

Externally triggering Thunderbird into displaying a wanted message

I would like to have a way to trigger Thunderbird, from an external script, into displaying a particular message in a particular folder.
If it were Firefox, say, I would use firefox -new-tab http://some-URL, and an already running Firefox (or a new one if none) would nicely fetch and display the URL. But I found no way to do something equivalent with Thunderbird, neither on the Thunderbird site nor through existing extensions, even after some furious Googling around, which I attempted more than once!
One problem, compared to a plain URL, is the need for some notation for selecting a message. Short of a better solution, I wrote a script which understands folder:SOME-FOLDER:ORDINAL and behaves like an extension of xdg-open. My tool inserts a proper prefix and a few .sbd components as needed within the SOME-FOLDER part to turn it into an absolute Thunderbird file reference, and ORDINAL picks a message in that folder. My tool then grabs the message, heuristically converts it into an HTML file, and then directs a Web browser to the resulting file (and if :ORDINAL is not given, it processes the whole folder instead, yielding an HTML index and many linked messages).
My current tool helps a bit at saving message references in other documents and efficiently retrieving them later, but I handle a copy of the Thunderbird message, not the original. So if I want to delete it, refile it in another Thunderbird folder, or do other similar operations, I still have to go to Thunderbird and interactively find my way back to the wanted message before I can handle it, and this is not efficient. What I'm dreaming of is a way to get rid of all my HTML conversion and browser trickery, but still keep the pseudo-URL paradigm and pseudo xdg-open interface, to directly force Thunderbird into the correct folder, with the wanted message correctly displayed.
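To make the folder:SOME-FOLDER:ORDINAL idea concrete, here is a rough sketch of the mapping part, assuming the folders are stored as mbox files under the profile's Mail/Local Folders directory; the profile path, the use of "/" as a sub-folder separator, and the example pseudo-URL are my own conventions, not anything Thunderbird defines:

import mailbox
from pathlib import Path

# Assumed location of the local folders - adjust to the real profile directory
MAIL_ROOT = Path.home() / ".thunderbird" / "PROFILE" / "Mail" / "Local Folders"

def resolve(pseudo_url):
    # Turn "folder:Archive/2023:5" into (mbox path, ordinal or None)
    _, folder, *rest = pseudo_url.split(":")
    ordinal = int(rest[0]) if rest else None
    parts = folder.split("/")
    # Each intermediate folder becomes a directory with a .sbd suffix
    path = MAIL_ROOT.joinpath(*[p + ".sbd" for p in parts[:-1]], parts[-1])
    return path, ordinal

path, ordinal = resolve("folder:Archive/2023:5")
box = mailbox.mbox(str(path))
if ordinal is not None:
    print(list(box)[ordinal]["Subject"])  # the selected message's subject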
In previous email readers I used (Emacs RMAIL and then Gnus, and Mutt as well later), such things could be managed, and I heavily used such capabilities in scripts. I am astonished, surprised, even a bit dismayed, by the apparent weakness of Thunderbird as a scriptable mail reader. Am I missing something evident? Any avenue or suggestion?
François
P.S. Of course, I agree that using ORDINAL is not very clever. It might refer to a different message if the folder gets messages added or deleted. This is the lesser evil. A better but potentially heavier notation might use Message-ID values, but then an index would also be needed to find the Thunderbird folder containing each message.
There seems to be some way to do it, since Google Desktop supported it according to this thread: http://forums.mozillazine.org/viewtopic.php?f=39&t=584542. Perhaps try installing Google Desktop and see what kind of hyperlink it's using?
I'll add that Outlook supports external hyperlinks using the outlook: naming scheme, for example outlook:Inbox or outlook:0000000007A2379547B0624691F4FB2E5468A0D7642E2000. See http://www.davidtan.org/create-hyperlinks-to-outlook-messages-folders-contacts-events/ for more info.

Web site migration and differences in firebug time profiles

I have a PHP web site under Apache (at enginehosting.com). I rewrote it in ASP.NET MVC and installed it at discountasp.net. I am comparing response times with Firebug.
Here is the old time profile:
Here is the new one:
Basically, I get longer response times with the new site (not obvious in the pictures I posted here, but on average yes, sometimes with a big difference, like 2s for the old site versus 9s for the new one) and images appear more progressively (as opposed to almost instantly with the old site). Moreover, the time profile is completely different. As you can see in the second picture, a long time is spent in DNS lookup, and this happens for images only (the raw HTML is even faster on the new site). I thought that once a URL had been resolved, the result would be reused for all subsequent requests...
Also note that since I still want to keep my domain pointed on the old location while I'm testing, my new site is under a weird URL like myname.web436.discountasp.net. Could it be the reason? Otherwise, what else?
If this is more a serverfault question, feel free to move it.
Thanks
Unfortunately you're comparing apples and oranges here. The test results shown are of little use because you're trying to compare the performance of an application written using a different technology AND on a different hosting company's shared platform.
We could speculate any number of reasons why there may be a difference:
ASP.NET MVC first hit and lag due to warmup and compilation
The server that you're hosting on at DiscountASP may be under heavy load
The server at EngineHosting may be under utilised
The bandwidth available at DiscountASP may be under contention
You perhaps need to profile and optimise your code
...and so on.
But until you benchmark both applications on the same machine you're not making a proper scientific comparison and are just grasping at straws.
Finally, ignore the myname.web436.discountasp.net URL; that's just a host name/header that DiscountASP and many other hosts add so you can test your site while you're waiting for a domain to be transferred/registered, or for DNS propagation of the real domain name to complete. You usually can't use the IP address of your site because most shared hosts share a single IP address across multiple sites on the same server and rely on HTTP Host Headers.
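If you do want a rough first number before setting up a proper benchmark, timing the same path on both hosts at least shows the order of magnitude. A quick sketch, assuming Python with the requests package (the old-site URL is a placeholder; the new-site host name is the one from the question); repeat it a few times so the ASP.NET first-hit compilation mentioned above doesn't skew the result:

import time
import requests

URLS = {
    "old (PHP/Apache)": "http://www.example-old-site.com/",        # placeholder
    "new (ASP.NET MVC)": "http://myname.web436.discountasp.net/",  # host name from the question
}

for label, url in URLS.items():
    start = time.perf_counter()
    resp = requests.get(url)
    elapsed = time.perf_counter() - start
    print(label, resp.status_code, round(elapsed, 2), "s,", len(resp.content), "bytes")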