MS Edge: Opening the developer tools panel causes all http requests to occur twice - apache

Using MS Edge and Apache with PHP, I just discovered via access.log that when I have the JavaScript debug panel (i.e. the developer panel) open, Edge makes every HTTP call twice. Closing the panel fixed the issue of all insert statements getting called twice.
Question: Does this doubling of HTTP calls happen in every (or most) browsers, so that I need to watch out for it, or is it something unique to MS Edge?

I can't speak for all browsers and all developer tools. But for IE and Edge, the first time you open the tools and then open a JS file in the sources view, the browser will request the file again. Sometimes that request is served from the local browser cache and sometimes not, depending on the cache settings for the file being requested.
The reason browser tools need to make this request is that browsers often throw out the original source file, since it isn't needed to execute the page: the source has already been parsed into something else the engine can work with.
However, after you've opened the developer tools, the browser will keep sources around on future navigations, either in the tools front end or elsewhere. Discarding sources is an optimization for the first-time use case, to save browsers from keeping source around given the very low odds of the tools being used on any given navigation.
Of course, some files are never cached by the browser and will need to be downloaded again when the tools request them, for example source-mapped files.
In general, any resource on your site that can be accessed by HTTP GET should be safe and idempotent. That is, a GET shouldn't change the resource being requested (or, more generally, the state of your site), so additional requests shouldn't be an issue.
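The asker's setup is PHP on Apache, but the idea is stack-agnostic; as a concrete illustration, here is a minimal Express/TypeScript sketch (the routes and in-memory store are hypothetical). Keep anything that writes behind POST, so a stray duplicate GET from developer tools, a prefetcher, or a crawler can never repeat an insert:

```typescript
import express from "express";

const app = express();
app.use(express.json());

const items: string[] = []; // stand-in for a real data store

// Safe and idempotent: GET only reads, so a duplicate request is harmless.
app.get("/items", (_req, res) => {
  res.json(items);
});

// State-changing: only POST may insert; a stray GET to this path is a 404.
app.post("/items", (req, res) => {
  items.push(String(req.body.name));
  res.status(201).json({ count: items.length });
});

app.listen(8080);
```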

Related

Teams desktop client sometimes caches my tab application and I can't clear it

I've built a Microsoft Teams channel tab with SSO, and I'm hosting the tab application, which is built with React via create-react-app.
The auth works well, and the app loads and runs.
But when I update my app on the web site, the Teams desktop client (Mac and PC) will sometimes cache the old app and will not pick up the changes. But then sometimes it will.
If I run the web client, it usually picks up the changes.
I've verified that I'm serving up new bundles with different names each time I update. But running the Teams desktop devtools I can see that Teams is asking for the old bundle, every time, so it's definitely caching the response from my app's URL.
I've read about the problems people have with the Teams desktop client caching SharePoint content and not picking up content changes. I've tried the cache-clearing techniques, but they don't seem to work for this issue. And I can't reasonably have users do crazy cache clearing every time I make an update to the tab app.
What should I do? Some have suggested I need to update my version in the app manifest and redeploy to Teams -- that seems really brutal. Do I need to set some cache headers in a certain way to force the Teams client to pick up the new code?
Solution
Set a Cache-Control response header to no-cache (or must-revalidate) for your build/index.html.
Explanation
We had the exact same issue. Turns out it was because we cached our build/index.html.
According to the create-react-app doc, only the content of build/static/ can safely be cached, meaning build/index.html shouldn't be cached.
Why? Because files in build/static/ have a uniquely hashed name and are therefore cache busted on deployment. index.html is not.
What's happening is that, since Teams uses your old index.html, it tries to load the old /static/js/main.[hash].js referenced in it instead of your new JS bundle.
It works properly in the Teams web client because most browsers send a Cache-Control: max-age=0 request header when requesting your index.html, ignoring any cache set for the file. Teams desktop doesn't as of today.
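For example, if you serve the build with Express, the split looks roughly like this (a sketch; the paths and port are illustrative, and the same headers can be set in any web server):

```typescript
import express from "express";
import path from "path";

const app = express();
const build = path.resolve("build");

// Hashed bundles under build/static/ are cache-busted by name,
// so they can be cached aggressively.
app.use("/static", express.static(path.join(build, "static"), {
  immutable: true,
  maxAge: "1y",
}));

// index.html is NOT cache-busted: force clients to revalidate it,
// or they will keep loading the old bundle names it references.
app.get("*", (_req, res) => {
  res.set("Cache-Control", "no-cache");
  res.sendFile(path.join(build, "index.html"));
});

app.listen(8080);
```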
This seems like an issue with the way your app is managing the default browser caching logic. Are service workers enabled for your app? What cache control headers is your web server returning?
There are some great articles that describe all the cache controls available to you; for example:
https://medium.com/@codebyamir/a-web-developers-guide-to-browser-caching-cc41f3b73e7c
Have you tried doing something like this to prevent caching of your page (do note that, long term, you might want to use something like ETags, which is a more performant option):
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control#preventing_caching
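For reference, the ETag flow mentioned above looks roughly like this (a hand-rolled Express/TypeScript sketch for illustration; in practice Express can generate ETags for you):

```typescript
import { createHash } from "crypto";
import { readFileSync } from "fs";
import express from "express";

const app = express();

app.get("/", (req, res) => {
  const body = readFileSync("build/index.html");
  // Strong ETag derived from the current file contents.
  const etag = `"${createHash("sha1").update(body).digest("hex")}"`;

  if (req.headers["if-none-match"] === etag) {
    res.status(304).end(); // client's copy is still current: no body sent
    return;
  }
  res.set("ETag", etag);
  res.set("Cache-Control", "no-cache"); // always revalidate, never reuse blindly
  res.type("html").send(body);
});

app.listen(3000);
```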
P.S. You can also follow the instructions here to open the dev tools in the Desktop Client to debug all this:
(How) can I open the dev tools in the Microsoft Teams desktop client?
And even force-clear any cached data/resources for your app.

Issues with Adobe Acrobat

We have some users who are using Adobe Acrobat to edit .pdf files over WebDAV. There are a couple of issues that we are experiencing.
1. The Acrobat client seems to be very chatty. We get multiple PROPFIND calls before the first GET. Editing even the simplest PDF takes ~11 seconds because of all these calls, which include PROPFINDs and OPTIONS, a LOCK, GET, PUT, UNLOCK, and frequently a pair of MOVE/DELETE commands.
2. When the user eventually saves and closes the document, then immediately reopens it, their changes do not appear to have been saved. If they wait about 30 seconds (possibly less) before reopening the document, the changes do show up, so there appears to be some type of caching going on, even though our website (ASP.NET) has output caching turned off.
3. Sometimes users get error 109 saying the document could not be saved. This appears to come from Adobe Acrobat, because we don't see errors in our log; however, it could be related to #1 above, where the MOVE/DELETE has been issued and not enough time has passed.
My questions are therefore:
Have you tested/used Acrobat for editing pdfs?
If so did you have these issues?
Is there a setting in the WebDAV engine that allows you to turn caching off or does it use the underlying IIS settings?
I guess you are using the Microsoft Mini-redirector driver (the WebDAV client provided with the Windows Shell) to open and edit documents.
The chattiness is specific to the Mini-redirector; it sometimes traverses folders and submits other unnecessary requests.
Regarding performance, the delays may be caused by proxy settings. Please see the "Long Delays When Connecting and Browsing WebDAV Server" section here.
The stale reopen is typically caused by the Mini-redirector cache. As far as I know, there is no documentation about how to disable the cache in the Mini-redirector, and there is no real solution for this; you just need to wait until the client cache invalidates.
The server Engine itself does not have any caching options; it just processes the WebDAV request and generates a response. It is also independent of the hosting environment and its settings, such as IIS, HttpListener, etc.
Please examine the WebDAV log file WebDAVLog.txt. By default it is located in \App_Data\WebDAV\Logs. Are there any exceptions in it?
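To rule the server in or out, it can also help to replay the client's requests by hand and compare timings and response headers. A diagnostic sketch (Node 18+ run as an ES module; the URL and credentials are placeholders):

```typescript
// Issue the same WebDAV verbs Acrobat sends and log status, latency,
// and caching headers, to confirm the server side is behaving.
const url = "https://example.com/docs/file.pdf"; // placeholder
const auth = "Basic " + Buffer.from("user:pass").toString("base64"); // placeholder

async function probe(method: string, extra: Record<string, string> = {}) {
  const started = Date.now();
  const res = await fetch(url, {
    method,
    headers: { Authorization: auth, ...extra },
  });
  console.log(method, res.status, `${Date.now() - started}ms`,
    "Cache-Control:", res.headers.get("cache-control"));
}

await probe("OPTIONS");
await probe("PROPFIND", { Depth: "0" });
await probe("GET");
```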

Tools for finding non-SSL resources in a web page (Firebug-like tool)

I'm trying to find a non-SSL resource that is being loaded on my site.
This happens occasionally when one of us forgets to use the https version of a resource (like some JS on a CDN).
My question: are there any Firebug-like tools to find these "turds in the punch bowl"? I want my green padlock back :)
Besides Firebug, which you've mentioned, you can use the developer tools in Chrome:
Tools menu -> Developer Tools
Go through the list of loaded resources in the Network tab
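If combing the Network tab by eye gets tedious, you can also dump candidate URLs from the console. A quick sketch (it only catches attribute-based references, not CSS url() values or XHR-loaded content):

```typescript
// Run in the DevTools console on the https page: lists elements whose
// src/href/data attribute points at a plain-http resource.
document.querySelectorAll("*").forEach((el) => {
  if (el.tagName === "A") return; // plain links are not mixed content
  for (const attr of ["src", "href", "data"]) {
    const value = el.getAttribute(attr);
    if (value && value.startsWith("http://")) {
      console.log(el.tagName, attr, value);
    }
  }
});
```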
Alternatively, the HttpFox extension for Firefox can also be useful. It will keep logging the traffic even when you change pages, which may be useful in some cases.
(This is very similar to Firebug.)
mitm-proxy is great for stuff like this - http://crypto.stanford.edu/ssl-mitm/
You run it on your local machine in a console window, set your browser to use it as a proxy, and you can watch/log everything that your browser requests. It's a little noisy, since it shows SSL handshaking and file contents, but you can filter that down. And when you need to debug SSL communications, it's invaluable to see those details.
mitm-proxy is based on http://grinder.sourceforge.net/g3/tcpproxy.html which has more in the way of scripting capabilities.

Why would .htaccess fire twice in IE when downloading a protected XLS file?

Certain directories are protected by Basic Auth using a .htaccess file on an older Apache 1.x server. Today a user pointed out that the username/password was requested twice for the file he had just posted - once when entering the directory to see the index, and then AGAIN when downloading the file. Finding this odd, I researched the usual problems with double-firing .htaccess authentication:
server name (http://server vs. http://www.server)
trailing slash (http://server/somedir vs. http://server/somedir/)
http vs https
No luck. Adding to the confusion, Firefox/Chrome/Safari don't ask twice - only IE (6 and 7) does. Further investigation showed that this doesn't happen with PDF files - only Excel files, even blank ones.
Is Excel calling back to the server somehow that requires a second authentication? Why does it only happen in IE?
Not critical - but I'm very curious what could be causing this.
EDIT - I think bmdhacks nailed it. Watching the network traffic, Excel+IE fires back a second request with a different User-Agent, "Microsoft Protocol Discovery".
I'm not sure about Excel, but Windows Media Player has a special interaction with IE: when IE requests a file with a MIME type that Windows Media Player owns, instead of handing the downloaded file over to WMP, IE instructs Windows Media Player to download the file itself. This can result in the file being requested twice - once when IE asks for it, and a second time when WMP downloads it.
It's possible that Microsoft uses this mechanism in other products like Excel too. You might be able to discover this by looking at the User-Agent header submitted in the second request. In the Windows Media case, it actually changes from IE to WMP's User-Agent on the second request.
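One way to confirm this on your own server is to scan the access log for repeated requests to the same path arriving with different User-Agents. A sketch (Node/TypeScript; it assumes Apache's combined log format and a local access.log):

```typescript
import { readFileSync } from "fs";

// Combined log format: ... "GET /path HTTP/1.1" 200 1234 "referer" "user-agent"
const lineRe = /"(?:GET|POST) (\S+) HTTP[^"]*" \d+ \S+ "[^"]*" "([^"]*)"/;
const lastAgent = new Map<string, string>(); // path -> previous User-Agent

for (const line of readFileSync("access.log", "utf8").split("\n")) {
  const m = lineRe.exec(line);
  if (!m) continue;
  const [, reqPath, agent] = m;
  const prev = lastAgent.get(reqPath);
  if (prev && prev !== agent) {
    console.log(`double fetch of ${reqPath}: "${prev}" then "${agent}"`);
  }
  lastAgent.set(reqPath, agent);
}
```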

Is there an HTTP proxy tool that can substitute browsed content?

What I'm looking for is some sort of proxy tool that will allow me to specify a local file to load instead of the one specified in the web page being browsed. I have tried Burp Suite, which almost works: it allows us to intercept a file and replace it by pasting the contents of the swapped-in file into an input field. The file content is compiled code (Flash content), so we are pasting in bytecode, but something isn't working.
The reason is that we are a third-party software developer without access to our client's development or testing environments. Our content must interact correctly with the rest of the content on their web page (there are elements on their page that communicate with our content), and testing any change we make takes several hours of turnaround to get our files uploaded to their servers. So what we need is some sort of hacking tool that lets us test our work against their web pages, hence the requirement to swap a file specified in a web page with a local version.
The autoresponder feature in Fiddler Web Debugging Proxy might do what you need, if it's only static content.
I've been using HTTP::Proxy for a long time, and it has always helped me fiddle with things on the fly.
You might be able to do this with Greasemonkey but I'm not sure if the tests will be totally reliable.
http://diveintogreasemonkey.org/patterns/replace-element.html
And if Greasemonkey seems plain wrong for you, I would take it as the perfect excuse to try out mouseHole. Now, I have to admit that I've never tried it, but since _why also made Hpricot, I expect it to be fun, productive, and different.
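If none of these tools fit, rolling a tiny substituting proxy yourself is only a few lines. A sketch in Node/TypeScript (plain HTTP only, with no CONNECT/HTTPS handling; the intercepted URL and local file are placeholders):

```typescript
import http from "http";
import { readFileSync } from "fs";

const TARGET = "/flash/movie.swf";      // substring of URLs to intercept (placeholder)
const LOCAL_FILE = "./build/movie.swf"; // local replacement file (placeholder)

http.createServer((req, res) => {
  // As a forward proxy, req.url is the absolute URL the browser requested.
  if (req.url && req.url.includes(TARGET)) {
    const body = readFileSync(LOCAL_FILE);
    res.writeHead(200, {
      "Content-Type": "application/x-shockwave-flash",
      "Content-Length": body.length,
      "Cache-Control": "no-store", // never let the substituted file be cached
    });
    res.end(body);
    return;
  }
  // Everything else is forwarded to the real server unchanged.
  const upstream = http.request(
    req.url ?? "",
    { method: req.method, headers: req.headers },
    (up) => {
      res.writeHead(up.statusCode ?? 502, up.headers);
      up.pipe(res);
    }
  );
  upstream.on("error", () => { res.writeHead(502); res.end(); });
  req.pipe(upstream);
}).listen(8888); // set the browser's HTTP proxy to localhost:8888
```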