I want history.back() to be cached the way Safari does it natively, but this does not happen in other browsers. How can I implement a Safari-like cache for history.back() in other browsers?
You can cache the page resources in localStorage, but most modern browsers already do something similar (and better). Even with this native browser cache, the page built from those resources still takes a while to be calculated and applied.
You can give the browser a little help by structuring your website pages this way:
<script>
if (!localStorage[location.pathname]) {
  // First visit: let this page load from the server, then cache a snapshot
  localStorage[location.pathname] = getGeneratedPage();
} else {
  // Repeat visit: rebuild the page from the cached snapshot
  document.body.innerHTML = parseGeneratedPage(localStorage[location.pathname]);
}
</script>
This is just a VERY generic example. getGeneratedPage could be a function which stores ONLY the following (a sketch follows the list):
The DOM tree after page load
CSS rules matched for this page
JS functions which have at least one listener
Base64 images (only recommended for small images or previews of big images)
etc
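A minimal sketch of what such a pair of functions might look like, assuming a simple JSON snapshot format (getGeneratedPage and parseGeneratedPage come from the example above; everything else here is illustrative, not a standard API):
function getGeneratedPage() {
  var cssText = '';
  // Collect the CSS rules currently applied to the page. Same-origin
  // stylesheets only: cross-origin sheets throw a security error when
  // cssRules is accessed, so those are skipped.
  for (var i = 0; i < document.styleSheets.length; i++) {
    try {
      var rules = document.styleSheets[i].cssRules;
      for (var j = 0; j < rules.length; j++) {
        cssText += rules[j].cssText + '\n';
      }
    } catch (e) { /* cross-origin stylesheet, skip it */ }
  }
  return JSON.stringify({
    html: document.body.innerHTML, // the DOM tree after page load
    css: cssText
  });
}

function parseGeneratedPage(snapshot) {
  var page = JSON.parse(snapshot);
  // Re-apply the cached CSS rules, then hand back the cached markup
  var style = document.createElement('style');
  style.textContent = page.css;
  document.head.appendChild(style);
  return page.html; // assigned to document.body.innerHTML by the caller
}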
Also, you can make a server-side version of this or something like Opera Turbo.
Well, there are countless ways to make your page load in the blink of an eye. Hope it helps.
Is there a way to just load the server-generated HTML (without any JS or images)? The docs seem a little sparse.
The strength of PhantomJS is exactly in its ability to emulate a real browser, which opens a page and makes all the subsequent requests. If you want just the HTML, maybe it's better to use curl or wget?
Nevertheless, there is a way to avoid running JS or loading images: set the corresponding page settings: http://phantomjs.org/api/webpage/property/settings.html
page.settings.javascriptEnabled = false;
page.settings.loadImages = false;
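For context, a minimal PhantomJS script sketch using those settings (the URL and what you do with the content afterwards are placeholders):
// fetch.js -- run with: phantomjs fetch.js
var page = require('webpage').create();
page.settings.javascriptEnabled = false; // don't execute the page's JS
page.settings.loadImages = false;        // don't request images

page.open('http://example.com/', function (status) {
  if (status === 'success') {
    // page.content holds the HTML with no JS applied to it
    console.log(page.content);
  }
  phantom.exit();
});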
For my application I need to programmatically save a copy of a webpage HTML along with the images and resources needed to render it. Browsers have this functionality in their Save page as... Webpage, complete options.
It is of course easy to save the rendered HTML of a page using phantomjs or casperjs. However, I have not seen any examples of combining this with downloading the associated images, and doing the needed DOM changes to use the downloaded images.
Given that this functionality exists in webkit-based browsers (Chrome, Safari) I'm surprised it isn't in phantomjs -- or perhaps I just haven't found it!
As an alternative to PhantomJS, you can use CasperJS to achieve the required result. CasperJS is a framework based on PhantomJS, with a variety of modules and classes that support and complement PhantomJS.
An example of a script that you can use is:
casper.test.begin('test script', 0, function (test) {
  casper.start(url);

  casper.then(function myFunction() {
    //...
  });

  casper.run(function () {
    //...
    test.done();
  });
});
With this script, within a "step" you can perform your downloads, be it a single image, a document, the whole page, a screenshot, or whatever.
Take a look at the download and getPageContent methods and capture/captureSelector in the CasperJS documentation.
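A sketch of what such a step might look like, using CasperJS's download, capture, and getPageContent together with PhantomJS's fs module (the file names and the img.logo selector are placeholders):
casper.then(function () {
  var fs = require('fs');
  // Save the rendered HTML after JS has run
  fs.write('page.html', this.getPageContent(), 'w');
  // Take a full-page screenshot
  this.capture('page.png');
  // Download a single image by resolving its src attribute
  var imgSrc = this.getElementAttribute('img.logo', 'src');
  this.download(imgSrc, 'logo.png');
});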
I hope these pointers can help you to go further!
I'm creating a SPA app using Durandal, and I would like to include a credit card payment facility. The provider I'm looking at requires you to give return URLs for success, cancel, and a few other pages; is that possible?
To me it would be breaking the 'single page' part of SPA, but is it possible? Could I do it all in a window?
Disclaimer: I don't know Durandal, but you would solve this in an SPA using either "hashbang URIs" or by actually re-serving the SPA from your webserver for the requested return URI and adjusting the content using the same technique as hashbang URIs, but with history.pushState and the popstate event instead; see here: https://developer.mozilla.org/en-US/docs/Web/Guide/API/DOM/Manipulating_the_browser_history
A more general article from Google is available here that covers the same principle: https://developers.google.com/webmasters/ajax-crawling/
This "works" because SPAs are SPAs only in that the browser requests a new HTML document from the server once (or in your case, twice), the SPA should still be updating the history and address-bar state of the UA as the user navigates the application, just as though it were a regular multi-page application.
A great example of this is GitHub's source navigator: Try here ( https://github.com/angular/angular.js ) and navigate the repository, observe that the contents of the file-listing change as does the address bar, but your browser doesn't reload the whole page... yet if you copy+paste the (modified) address bar address into a new browser window, you get the same page back.
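A minimal sketch of the pushState/popstate approach, assuming the server re-serves the SPA for any of the return URLs (the /payment/... routes are placeholders for whatever your provider redirects to):
function showView(path) {
  if (path === '/payment/success') { /* render the success view */ }
  else if (path === '/payment/cancel') { /* render the cancel view */ }
  else { /* render the default view */ }
}

// On first load (including a redirect back from the payment provider),
// pick the view from the path the server re-served the SPA on.
showView(location.pathname);

// Keep the back/forward buttons working
window.addEventListener('popstate', function () {
  showView(location.pathname);
});

// For in-app navigation: update the address bar without a reload
function navigate(path) {
  history.pushState(null, '', path);
  showView(path);
}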
I looked into doing credit card processing from a SPA, and the best option I found was Stripe. They supply a JavaScript file that looks like it would work; I never implemented it on my project due to time constraints, so I can't confirm that it works, but it looked very promising.
IFRAMEs are quite good for this sort of thing. You can use jQuery to hook an event handler to the page load event, which will tell you when the other end has responded. Load the 3rd-party page into the IFRAME and serve response pages on the URLs you provide to the service provider. As mentioned by others, you can use routes to identify the response pages. The IFRAME will stop the round-tripping from mucking up your application state, and in fact it is possible to put script in your response pages that dot-notates its merry way up the DOM and into your app.
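A rough sketch of that pattern, assuming jQuery (the provider URL, container ID, and /payment/success route are placeholders):
$('<iframe src="https://payments.example.com/checkout"></iframe>')
  .appendTo('#payment-container')
  .on('load', function () {
    // Fires on every navigation inside the iframe, including the
    // provider's redirect back to one of your return URLs.
    try {
      // Readable only once the iframe is back on your own origin
      var path = this.contentWindow.location.pathname;
      if (path === '/payment/success') {
        // update application state, tear down the iframe, etc.
      }
    } catch (e) {
      // Still on the provider's cross-origin page; access is blocked
    }
  });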
On my website, I have a booking widget at the top of each page to allow visitors to enter our booking engine. The code behind it uses quite a bit of HTML, pushing down the content on each page in the source. In an attempt to better my SEO, I decided to have the code placed in a DIV tag at the bottom of the page, and, when the DOM is ready, I use jQuery to physically move the DIV from the bottom of the DOM to the top, where it needs to be to render correctly.
My question is whether this is really helping SEO. Does Google look at the DOM/source after all JavaScript has run, or before? Does moving these few hundred lines of HTML to the bottom of the HTML source gain me any advantage?
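For reference, the move described above amounts to something like this (using placeholder IDs #booking-widget and #header):
$(function () {
  // On DOM ready, move the widget markup from the bottom of the
  // source into its rendered position at the top of the page
  $('#booking-widget').prependTo('#header');
});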
Spiders do not process javascript. So any content that appears/moves or is created by javascript will appear as if it hasn't been moved or created at all.
I'd be really surprised if web crawlers execute the scripts on the page. They probably scan the raw response.
That does not have any effect on the SEO.
But placing the JavaScript at the bottom will definitely help your webpages load faster.
There is no harm for SEO either, so you can definitely proceed with your approach.
There is a distinction between JavaScript executed on load and JavaScript executed during the user session. The on-load JavaScript is more often than not indexed by Google. Dynamic content or alterations made on the client side are not well indexed.
So, it can't be ignored.
I don't want to use a frame or iframe because most websites are frame-busting nowadays.
What are the possible methods to do this?
The intended behaviour is as if two browser controls were loaded in one website.
What is wrong with an iframe? What do you mean by busting (lots of websites are full of iframes)? You do not want to use iframes within an email, but that is a different point!
The advantage is that when people click on links within an iframe, the content of the iframe gets refreshed, while if you pull the content of a webpage and then display it within a div, the user will be moved on, losing the rest of the page.
You can pull it via a script (PHP/cURL) and display the code from there.
Or you can use good old frames, although I wouldn't recommend it.
If you do not want frames, then another option is to do it server-side: make your server access the pages and combine them into one response stream.
You can also do this client-side using JavaScript and DHTML.
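A rough client-side sketch, assuming a same-origin server-side fetcher at /proxy.php (a placeholder; the browser's same-origin policy blocks fetching the remote page directly):
var xhr = new XMLHttpRequest();
xhr.onload = function () {
  // Inject the fetched markup into one of the two page panels
  document.getElementById('panel-one').innerHTML = xhr.responseText;
};
xhr.open('GET', '/proxy.php?url=' + encodeURIComponent('http://www.site.com/'));
xhr.send();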
fopen() can open a URL and return a file pointer, which you can read like any other stream (this requires allow_url_fopen to be enabled in PHP):
$file = fopen("http://www.site.com/", "r");
echo stream_get_contents($file); // read and output the whole page