'Dynamic' page loading?

I have no idea exactly what I need, but I'll use Facebook as an example. When you load another page (for example, clicking your profile from the home page), the page itself doesn't look like it reloads. The data loads, of course, and the URL changes, but the top bar and chat bar stay put.
I'm looking to do something similar. Am I assuming too much and it's really just a simple caching feature, or is there a JavaScript way to do it, or some other approach?
Thanks.

It's done using AJAX, a combination of technologies (primarily JavaScript) used to achieve this kind of dynamic client-server interaction within the overall context of a page.
Essentially what you'd have for your setup is something like this:
A page which the user requests and which is returned to the browser. This page will contain (or reference in external files, etc.) some JavaScript code to drive the interactive functionality. For example, a "link" may trigger a JavaScript function instead of navigating to another page. That function will make an AJAX call to the server.
An AJAX handler on the server. Think of it like another page, or like a web service of some kind. It expects requests to come from JavaScript code, not from humans. (Though humans can call it manually if they want, so don't return sensitive data or anything like that.) This can return data in any number of formats (JSON, XML, HTML, etc.) and the client-side JavaScript code will use that response in code.
Depending on your preferred web development technologies, there are various frameworks and tools to help with AJAX functionality. My personal favorite is just to use the AJAX methods in jQuery and to manually control the responses from the server. If you specify your development platform, I can help find some useful examples to get you started.
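For example, a minimal jQuery sketch (assuming a hypothetical /profile-data endpoint that returns JSON, and the element ids used below) might look like this:

    // Minimal sketch; /profile-data, #profile-link and #content are assumptions.
    $('#profile-link').on('click', function (e) {
      e.preventDefault();                // stop the normal page navigation
      $.ajax({
        url: '/profile-data',            // the server-side AJAX handler
        dataType: 'json'
      }).done(function (data) {
        // Update only the content area; the top bar and chat stay untouched.
        $('#content').text(data.name);
      });
    });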

This is all done using AJAX. Depending on what you want, you can load entire HTML chunks, or just load the data and have the page's JavaScript render what's needed.
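As a rough sketch of the two approaches (the URLs and the #content element are placeholders):

    // Option 1: fetch a ready-made HTML fragment and drop it into the page.
    fetch('/fragments/profile.html')       // placeholder URL
      .then(res => res.text())
      .then(html => { document.querySelector('#content').innerHTML = html; });

    // Option 2: fetch raw data (JSON) and let client-side code build the markup.
    fetch('/api/profile')                  // placeholder URL
      .then(res => res.json())
      .then(data => { document.querySelector('#content').textContent = data.name; });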

Related

See which APIs a webpage calls

Say I have a webpage that renders all its content using JavaScript.
It has to fetch the data from somewhere (likely some API). How do I see which pages/APIs the JavaScript calls?
I have used the Chrome console and looked under Fetch/XHR, but the only thing that shows up is an .svg file.
Do I then need to inspect each JavaScript function under the JS tab and look at what it calls, or...?
Finding out which servers are contacted via DevTools is not always easy. You can try extensions like webQsee (https://chrome.google.com/webstore/detail/webqsee-web-sniffer-recor/gamdpfnfkjknkimfbboonmgdfnondfme). They explicitly list the domains the browser is communicating with.
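Another option is to instrument fetch and XMLHttpRequest yourself. A rough sketch you could paste into the console (run it before triggering the page's actions, so subsequent calls go through the wrappers):

    // Logs every fetch() and XMLHttpRequest URL so you can see which APIs are hit.
    const originalFetch = window.fetch;
    window.fetch = function (...args) {
      console.log('fetch ->', args[0]);
      return originalFetch.apply(this, args);
    };

    const originalOpen = XMLHttpRequest.prototype.open;
    XMLHttpRequest.prototype.open = function (method, url, ...rest) {
      console.log('XHR ->', method, url);
      return originalOpen.call(this, method, url, ...rest);
    };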

Handling SEO on Isomorphic React

I'm using React and Node.js to build universal apps. I'm also using react-helmet as a library to handle the page title, meta tags, description, etc.
But I have some trouble when loading content dynamically using AJAX: the Google crawler cannot fetch my site correctly because the content is loaded dynamically. Any suggestions for tackling this problem?
Thank you!
I had a similar situation, but with Django as the backend; I think which backend you use doesn't matter.
First off, let me get the basics out of the way: the Google bots don't actually wait for your AJAX calls to complete. If you want to test this, register your page with Google Webmaster Tools and try "Fetch as Google"; you will see how your page looks to the bots (mine was just an empty page with a loading icon). Since the calls don't complete, there is no data and the page is empty, which is bad for SEO, as the bots read text.
So what you need to do is try server-side rendering. You can do this in two ways: either use prerender.io, or create templates on the backend that are served when the page is requested for the first time, after which your single-page app kicks in.
Prerender is paid, but internally it uses PhantomJS, which you can use directly. It did not work out well for me, so I went with creating templates on the backend. So when a bot or a user hits the page for the first time (the first entry), the page is served from the backend; after that, the frontend takes over.
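For the React/Node setup in the question, a minimal server-rendering sketch (assuming Express and a hypothetical ./App component; build tooling and client-side hydration details omitted) could look like this:

    // Minimal sketch; ./App and the HTML shell are assumptions for illustration.
    const express = require('express');
    const React = require('react');
    const { renderToString } = require('react-dom/server');
    const App = require('./App');

    const app = express();

    app.get('*', (req, res) => {
      // Render the React tree to plain HTML so crawlers see real content.
      const html = renderToString(React.createElement(App));
      res.send(`<!doctype html>
    <html>
      <head><title>My App</title></head>
      <body>
        <div id="root">${html}</div>
        <script src="/bundle.js"></script>
      </body>
    </html>`);
    });

    app.listen(3000);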
Feel free to ask if you have any questions :)

Is it possible to serve HTML dynamically in Shopify?

I'm thinking of something like pulling in a JSON file and generating a page from it. I just want to know whether this is possible in Shopify.
We want to move our current site completely over to Shopify, and we're hoping we can keep much of the functionality we got from Angular and Node.
Yes, you can do this through application proxies.
An application proxy is a feature that fetches and displays data on a Shopify shop page from an outside location that you, the developer, specify. For app developers, this means that the content on these proxy pages can be dynamic; capable of being updated as much as you feel necessary. App developers can use application proxies to display image galleries, statistics and custom forms, just to name a few examples.
Basically, when Shopify receives an HTTP request on a proxied path, it forwards that request to the URL you specified, so you can do as you wish. Even further, if you set Content-Type: application/liquid on your response headers, Shopify will render the template for you; so you could use your very own Liquid templates, making the page look and behave exactly like the rest of your shop.
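A rough sketch of what the proxy target could look like on the app's side (assuming Express and a hypothetical /proxy path; the actual proxied path is configured in your app settings):

    // Hypothetical proxy endpoint; Shopify forwards requests on the proxied path here.
    const express = require('express');
    const app = express();

    app.get('/proxy', (req, res) => {
      // Returning Liquid with this Content-Type lets Shopify render the response
      // inside the shop's own theme layout.
      res.set('Content-Type', 'application/liquid');
      res.send(`
        <h1>{{ shop.name }}</h1>
        <p>This content was generated dynamically by the app.</p>
      `);
    });

    app.listen(3000);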
There's information on how to get started on the official Shopify Documentation.

How do Inspectlet and other services store user video sessions?

I was wondering how services like http://www.inspectlet.com/ store video sessions. By the looks of it, I don't think it's a WebRTC implementation. What I was able to figure out is that there is an active Express socket handling the communication, but in that case they would have to store the page and track all the events from the DOM. I just wanted to confirm that this is the approach they are following.
Looking at the event listeners on the page, it looks like there are a lot of bindings. For example, the <body> has scroll, keyup, and change events bound to a function. I'm sure it also has mousemove, click, etc. All of this is likely stored in a JavaScript variable (an object, probably) and then AJAXed every so often to http://hn.inspectlet.com/adddata with the data parameters. Here is a sample of what is being sent:
http://hn.inspectlet.com/adddata?d=mr,212941,46,337,46,1277)mr,213248,163,498,163,1438)mr,213560,144,567,144,1507)mr,213873,138,240,138,1180)mr,214188,169,184,169,1124)mr,214504,158,520,158,1460)mr,214816,231,487,231,1427)mr,215130,329,197,329,1137)mr,215444,894,289,894,1229)mr,215755,903,295,903,735)s,215755,440,0)&w=259769975&r=494850609&sd=1707&sid=1660474937&pad=3&dn=dn&fadd=false&oid=99731212&lpt=212629
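As a rough illustration of that pattern (the endpoint and payload format below are made up, not Inspectlet's actual protocol):

    // Illustration only; the endpoint and payload format are assumptions.
    const events = [];

    // Record mouse movement and scrolling with timestamps.
    document.addEventListener('mousemove', e => {
      events.push({ t: Date.now(), type: 'mr', x: e.pageX, y: e.pageY });
    });
    document.addEventListener('scroll', () => {
      events.push({ t: Date.now(), type: 's', y: window.scrollY });
    });

    // Flush the buffer to the collection server every few seconds.
    setInterval(() => {
      if (events.length === 0) return;
      navigator.sendBeacon('https://example.com/adddata', JSON.stringify(events));
      events.length = 0;               // clear the buffer after sending
    }, 5000);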
As suggested in Adam's answer, they track many events on the page and send them to their servers either via a WebSocket or via POST/GET requests (XHR).
I am not sure what Inspectlet does specifically, but in general such a solution needs to follow the general steps below:
When the page is fully loaded (probably by hooking DOMContentLoaded), they send the page data to the server. They also hook a MutationObserver in order to track all changes to the DOM, so when something changes dynamically the tracking script can 'record' it and send it to the server.
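A minimal sketch of the mutation-tracking part (sendToServer is a placeholder for whatever transport is used):

    // Minimal sketch; sendToServer is a placeholder, not a real API.
    const observer = new MutationObserver(mutations => {
      const changes = mutations.map(m => ({
        type: m.type,                      // 'childList', 'attributes', or 'characterData'
        target: m.target.nodeName,
        added: m.addedNodes.length,
        removed: m.removedNodes.length
      }));
      sendToServer(changes);               // placeholder for the actual transport
    });

    observer.observe(document.documentElement, {
      childList: true,
      attributes: true,
      characterData: true,
      subtree: true
    });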
The SaaS application, in turn, has a player that parses all this raw data and plays it back. This usually requires some virtual file system for assets (images, CSS, scripts) and making sure scripts do not post back again during playback (replaying XHRs would have bad results for the tracked websites), while the mutations are played back exactly as they were captured (recorded).
Regarding the data: HTML pages compress really well, especially when you can make assumptions about the data (since you know it's HTML), so yes, they actually cache a lot (similar to what Google does in that regard, except Google does it for the entire web, not just for customers).
The DOM mutations need to be stored as well. This is up to the algorithm: they can be stored either as plain text or in a smarter data model; storing them as plain text is obviously not the cost-effective solution.
The above is an abstraction; there are many edge cases to handle in order to implement such a solution, as well as a lot of mathematical and algorithmic work involved in making heatmaps accurate.
After a long search, I was able to find a new, promising solution that handles all the complex parts of building such a service. It is still under development, but it solves the problem. Below are the links:
https://www.rrweb.io/
https://github.com/rrweb-io/rrweb

Is there a way to delay/organize page load without JavaScript?

In the quest for a fast (initial) page load, I'm wondering if there is a way to organize the page load without using any JavaScript.
I don't know if you really need the delaying capability of JavaScript, or whether you could just order the way the content of your page loads (header --> content --> sidebars --> elements further down...).
Or am I wrong about getting good page ratings without delaying?
Of course you can write a page without using JS at all (for example, pure HTML like in the early days of the web), and if you use CSS3 your page can still look modern and dynamic. But it is hard to imagine any rich, modern functionality without scripting on the client side (autoloading data on scroll via AJAX, collecting statistics, using local storage, etc.).
If you are thinking about the fastest way to load the initial page, think about caching static objects such as images and JS files, and load the static content first to give the perception of a fast load, fetching dynamic data in the background with JS. This is quite a broad topic and there are different techniques for achieving the effect; I would recommend reading about the architectures of modern high-load web apps like Facebook, YouTube, Twitter, and so on.
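A minimal sketch of that background-loading idea (the /api/secondary endpoint and #sidebar element are placeholders):

    // The static page renders first; secondary content is fetched after the
    // load event so it never blocks the initial render.
    window.addEventListener('load', () => {
      fetch('/api/secondary')              // placeholder endpoint
        .then(res => res.json())
        .then(items => {
          const sidebar = document.querySelector('#sidebar');
          items.forEach(item => {
            const li = document.createElement('li');
            li.textContent = item.title;
            sidebar.appendChild(li);
          });
        });
    });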