Is it possible to create a button to delete the last hour of web history?

I would like to offer my users the possibility to delete the last hour of their web history (in order to protect people who use the website but don't have technical skills), or at minimum the URL of the website and the referrer (Google search, social media link, etc.).
Is it possible with an HTML/JS button that would interact with the history, like some extensions do ( https://developer.mozilla.org/fr/docs/Mozilla/Add-ons/WebExtensions/API/history/deleteUrl )?
Thanks for your help

Clear-Site-Data is an HTTP response header, implemented to different degrees by different browsers, that asks the browser to clear cookies, caches, and other kinds of storage for your site. This can be a useful security practice for your site: to clear any sign that the user has logged in, or to make sure that no one who uses the device later will be able to access that user's account.
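For example, a logout response might carry the header like this (the directive values shown are illustrative; check current browser support before relying on any of them):

```http
HTTP/1.1 200 OK
Clear-Site-Data: "cookies", "cache", "storage"
```

Each quoted directive names a category of data the browser should wipe for the response's origin.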
While I can see important and legitimate use cases for it, it isn't possible to force or prompt the browser to delete the browsing history, even just for your site. You can see how there might be security issues with that, like an attacker clearing evidence of their malicious site from the user's device, or annoying websites hiding themselves from history so that the user can't see where they have been. See these other questions that have also longed for this capability:
How to clear browsing history using JavaScript?
How to clear browsers (IE, Firefox, Opera, Chrome) history using JavaScript or Java except from browser itself?
You might consider exploring the History.replaceState method and other parts of the History API. That won't let you delete URLs and referrers in general, but it can be used to modify the URL of the current page in the history. So if a user arrives on your site visiting a page about something particularly sensitive or revealing, you might be able to modify the current history so that their browser only records that the user visited your domain, and not that particular page.
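As a sketch of that idea (the path prefix and the helper function are hypothetical, not part of the History API itself):

```javascript
// Hypothetical helper: collapse any sensitive path down to the site root,
// so only the domain, not the specific page, is recorded in history.
function sanitizePath(pathname, sensitivePrefixes) {
  return sensitivePrefixes.some((p) => pathname.startsWith(p)) ? '/' : pathname;
}

// In the browser, rewrite the current history entry in place:
// history.replaceState(null, '', sanitizePath(location.pathname, ['/help/abuse']));
```

Note this only changes what is recorded for the current entry on your own origin; it cannot touch earlier entries or the referrer already sent to other sites.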

Related

Workarounds for Safari ITP 2.3

I am very confused as to how Safari's ITP 2.3 works in certain respects, and why sites can't easily circumvent it. I don't understand under what circumstances its limits are applied, what the exact limits are, what they are applied to, and for how long.
To clarify my question I broke it down into several cases. I will be referring to Apple’s official blog post about ITP 2.3 [1] which you can quote from, but feel free to link to any other authoritative or factually correct sources in your answer.
For third-party sites loaded in iframes:
Why can't they just use localStorage to store the values of cookies, and send this data along not as actual browser cookies, but as data in the body of the request? Similarly, they can parse the response to update localStorage. What limits does ITP actually place on localStorage in third-party iframes?
If the localStorage is frequently purged (see question 1), why can’t they simply use postMessage to tell a script on the enclosing website to store some information (perhaps encrypted) and then spit it back whenever it loads an iframe?
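The workaround described in questions 1 and 2 would look roughly like this (all names and origins are hypothetical; this sketch illustrates the question, not a recommendation):

```javascript
// Pure helpers: wrap and unwrap a pseudo-cookie payload for postMessage transport.
function packState(state) {
  return JSON.stringify({ type: 'store-state', payload: state });
}
function unpackState(message) {
  try {
    const parsed = JSON.parse(message);
    return parsed.type === 'store-state' ? parsed.payload : null;
  } catch {
    return null;
  }
}

// In the third-party iframe (browser only):
// window.parent.postMessage(packState({ uid: 'abc123' }), 'https://publisher.example');
// In the enclosing first-party page:
// window.addEventListener('message', (e) => {
//   const state = unpackState(e.data);
//   if (state) localStorage.setItem('widget-state', JSON.stringify(state));
// });
```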
For sites that use link decoration:
I still don't understand what the limits on localStorage are for third-party sites in iframes which did NOT get classified as link-decorating sites. But let's say they are link-decorating sites. According to [1], Apple only starts limiting things further if there is a query string or fragment. But can't a website rather trivially store this information in the URL path before the query string, i.e. /in/here instead of ?in=here ... surely large companies like Google can trivially choose to do that?
If a site has been labeled a tracking site, does that mean all its non-cookie data is limited to 7 days? What about cookies set by the server, aren't they exempted? If so, simply make a request to your server to set the cookie instead of using JavaScript. After all, the operator of the site is very likely to also have access to its HTTP server and app code.
For all sites:
Why can't a service like Google Analytics or Facebook's widgets simply convince a site to additionally add a CNAME to their DNS, putting Google's and Facebook's servers under a subdomain like gmail.mysite.com or analytics.mysite.com? And then boom, they can read and set cookies again, in some cases even on the top-level domain for website owners who don't know better. Doesn't this completely defeat the goals of Apple's ITP, since Google and Facebook have now become a "second party" in some sense?
Here on StackOverflow, when we log out on iOS Safari the StackOverflow network is able to log out of multiple sites at once … how is that even accomplished if no one can track users across websites? I have heard it said that “second party cookies” still can be stored but what exactly makes a second party cookie different from a third party?
My question is broken down into 6 cases but the overall theme is, in each case: how does Apple’s latest ITP work in that case, and how does it actually block all cases of potentially malicious tracking (to the point where a well-funded company can’t just do the workarounds above) while at the same time allowing legitimate use cases?
[1] https://webkit.org/blog/9521/intelligent-tracking-prevention-2-3/

Is there a way to check if the browser is logged in for each site?

From the browser's point of view, I'd like to check if a specific site is logged in.
No other developer seems to have had this need, so I couldn't even find a similar question.
There are many ways to log in, so I think it is difficult to check the exact login status of all sites. Is there a way to check using the browser's own data (its database, config files, etc.) rather than using JavaScript or Selenium?

Anchor.click using executeJs function not working in real iPhone safari browser

I have a question regarding executeJs function.
page.executeJs("$0.click();", downloadAnchor.getElement());
This line of code is not working in real iPhone Safari browser, though it works in mobile responsive mode from desktop safari. Appreciate your help on this
Browsers will be "suspicious" of anything starting a download that isn't a direct reaction to interaction by the user. This is done as a security precaution since starting to download files without the user's explicit consent can be dangerous in specific cases. Different browsers and configurations have different policies for exactly where to draw the line.
In your case, the download isn't started as a direct consequence of user interaction but instead as a direct consequence of receiving a message from the server. This kind of pattern will simply not work reliably, no matter what you do.
Instead, you need to design the interaction so that the download is directly triggered by the user. The easiest way of doing that is by having the user directly click on the actual download link. If you want to have some indirection, then you still need to make the action work directly in the browser without going through the server.
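A sketch of that pattern in plain browser JavaScript (the element id and file URL are hypothetical): creating and clicking an anchor synchronously inside the click handler keeps the download within the user gesture, which is what browsers require.

```javascript
// Create and click a temporary anchor inside the user's click handler,
// so the browser treats the download as user-initiated.
function triggerDownload(doc, url, filename) {
  const a = doc.createElement('a');
  a.href = url;
  a.download = filename;  // hint to download rather than navigate
  doc.body.appendChild(a);
  a.click();              // still inside the user gesture
  a.remove();
  return a;
}

// In the page:
// document.getElementById('download-btn').addEventListener('click', () =>
//   triggerDownload(document, '/files/report.pdf', 'report.pdf'));
```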

Is it possible to retrieve a users browser history via the API?

We are moving to Chromebooks in our school and would like to know if it's possible to retrieve the users browser history. Every month we perform a device check in where the teacher physically inspects each students device for anything "suspicious". This generally entails looking at their browser history for things like pornographic or other unsavory URLs.
It would be very handy to build a nightly process that slurps up each student's browser history, checks it for anomalies, and reports them to the administration. Is this possible? Is there another way to do this?
I don't think this is fully supported yet. chrome.history from the Chrome extension API can only interact with the browser's record of visited pages. You can add, remove, and query for URLs in the browser's history; that's as far as the API goes.
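As a sketch of what the extension API does allow (extension-only, requires the "history" permission in manifest.json; the blocklist terms are hypothetical):

```javascript
// Pure helper: pick out history items whose URL contains any blocklisted term.
function findFlagged(items, terms) {
  return items.filter((it) =>
    terms.some((t) => (it.url || '').toLowerCase().includes(t)));
}

// In a Chrome extension background script:
// chrome.history.search({ text: '', maxResults: 1000 }, (results) => {
//   const flagged = findFlagged(results, ['example-blocked-term']);
//   flagged.forEach((item) => console.log('flagged:', item.url));
// });
```

Note this runs per-device inside an installed extension; nothing in the API uploads history centrally, so it doesn't by itself cover the nightly-report use case.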

Google Plus Interactive Posts not displayed on any stream (Client side API)

As the title denotes, I'm looking for insight on why an Interactive Post doesn't show up on any stream (neither for the user sharing the post, nor for whomever the user is sharing it with).
Brief
Implemented the client-side API of G+ Interactive Posts
This seems successful:
application auth is requested and, if granted, is displayed in the user's "applications list"
intended content, prefill text, etc. are all displayed when the trigger to initiate the share is invoked
no error indicators (that I know of) are displayed when the "Share" button is clicked by the user (the act of actually posting the share)
it is visible in some way only to Google - explained below
Findings
It seems Google is blocking the post: even though the share isn't posted on any stream (neither origin nor target), I received a warning about violating Google policies, indicating that the (HTTP) post was indeed sent/submitted...
further inspection of network activity also displays what looks like (a guess on my part) a spam score (of 8), somehow already pre-determined (another guess on my part):
https://apis.google.com/u/0/_/sharebox/post/?spam=8&hl=en&ozv=...
Questions
Primary question is why would interactive posts not appear on stream? Any debugging tool out there?
IF my guess on spam blocking is accurate, then why would such be the case? For interactive posts (which somehow inherently is a case of some user "promoting" something in the first place) - eg: with a "BUY" calltoactionlabel?
IF my other guess on the content being "pre-tagged" as spam, how/why would that occur. I didn't include it above, but it is a "product page" - the idea of it, which isn't new nor revolutionary, is to give the opportunity for a user to "share" an item he/she just purchased, say in a normal checkout flow?
It's my assumption that implementation was done correctly, no errors reported, etc. - or perhaps it wasn't? Though it seems unlikely, it's "grasping at straws" time..
Further testing/debugging seems unwise given the warning of policy violation - and yes, I've stopped further dev on this to prevent harming my accounts (one personal, one work, both used above for testing this API).
Thanks for any assistance/input.
Note: I've posted this on G+ community (no luck so far) so once this is resolved I'll share the answer there too (or vice versa).
It looks like you are posting the Interactive Post from http://localhost or from another private domain URL. The Google crawler can only access public URLs, so Interactive Posts only work from a public domain.
From their website:
Important: Interactive posts will not work when PhotoHunt is hosted at http://localhost:8888 because the Google crawler can only access public URLs to get microdata about the content of the post. In the case of PhotoHunt for Java, you can deploy your app to appspot.com as a public Google App Engine app.