Updates using server-side rendering without page refresh

I was reading an interesting blog post where the author said:
Updates using server-side rendering is where a lot of developers start going off the deep end. They actually think page refresh. Instead, what I thought we've all been doing for the last half decade, is some form of:
$('#loadTweets').on('click', function(e) {
  $.get('/tweets/person', {last_id: 239393939}, function(r) {
    $('#tweets').prepend(r);
  });
  e.preventDefault();
});
In other words, we are still only doing a partial update, but letting the server do the rendering and inserting that finalized output into our DOM.
I did not understand what he meant by "is some form of:...we are still only doing a partial update".
I mean, if I understood correctly: the server sending the HTML and CSS on every request is Server-Side Rendering (SSR), and the server sending JSON on every request except the first is Client-Side Rendering (CSR).
As far as I understand, in the code below, if r is JSON then it is CSR, and if r is HTML then it is SSR:
$.get('/tweets/person', {last_id: 239393939}, function(r) {
  $('#tweets').prepend(r);
});
What am I getting wrong here?

Based on your definitions of SSR vs CSR:
Server sending [HTML] on every request is Server-Side Rendering (SSR)
Server sending [JSON] on every request except first is Client-Side Rendering (CSR).
let's try to apply that to the example logically:
$.get('/tweets/person', {last_id: 239393939}, function(r) {
  // do stuff with `r`
});
For that, I turned your statements into this decision table. (I'll get to the undefined cases right away, keep reading.)

First Response? | JSON      | HTML
----------------|-----------|----------
Yes             | undefined | SSR
No              | CSR       | undefined
First, we check whether it's the first request. We can safely say that it is not: had the client not received the JavaScript in an earlier response, it could not be running it now.
Now let's introduce what type of data is sent:
if [response] is JSON then it is CSR
if [response] is HTML then it is SSR
The first statement is valid, that would definitely be CSR. But the second one leads to an undefined case. We're deeply confused now!
To address that, let's read how the author defines CSR and SSR:
With client-side rendering, your initial request loads the page layout, CSS and JavaScript. It's all common except that some or all of the content isn't included. Instead, the JavaScript makes another request, gets a response (likely in JSON), and generates the appropriate HTML (likely using a templating library).
With server-side rendering, your initial request loads the page, layout, CSS, JavaScript and content.
For now, this leads to a table similar to yours, but note how the format/type headers are slightly different!
First Response? | Data as JSON | Data as HTML
----------------|--------------|-------------
Yes             | undefined    | SSR
No              | CSR          | undefined
The author continues:
For subsequent updates to the page, the client-side rendering approach repeats the steps it used to get the initial content. Namely, JavaScript is used to get some JSON data and templating is used to create the HTML.
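To make that concrete, a client-side rendered version of the snippet from the question might look something like this (a sketch; the JSON shape is an assumption, and the endpoint would have to return data instead of markup):

$('#loadTweets').on('click', function(e) {
  e.preventDefault();
  // CSR: the endpoint returns JSON, and the client builds the HTML itself.
  $.getJSON('/tweets/person', {last_id: 239393939}, function(tweets) {
    var html = tweets.map(function(tweet) {
      // Escape the text before templating it into markup.
      return '<li>' + $('<div>').text(tweet.text).html() + '</li>';
    }).join('');
    $('#tweets').prepend(html);
  });
});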
So the author is now at the second row of the table, i.e. not the first response.
Then your quote starts (emphasis mine):
Updates using server-side rendering is where a lot of developers start going off the deep end. They actually think page refresh. Instead, what I thought we've all been doing for the last half decade, is some form of [...] doing a partial update, [...] letting the server do the rendering [of the HTML] and inserting that finalized output into our DOM.
With this we can fix the undefined case in the not-first-request row that we have been confused about!
First Response? | JSON      | HTML
----------------|-----------|---------------
Yes             | undefined | SSR
No              | CSR       | Partial Update
There is still the first-response-JSON case, but since the browser cannot generate further requests from that on its own, we can ignore it here.
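And for completeness, the server side of such a partial update could look like this hypothetical Express endpoint (the framework, fetchTweetsAfter, and the markup are all assumptions; the point is just that the server returns finished HTML):

const express = require('express');
const app = express();

// Minimal HTML escaping for the templated values.
const escapeHtml = s => String(s).replace(/[&<>"']/g, c =>
  ({'&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;'}[c]));

app.get('/tweets/person', (req, res) => {
  // fetchTweetsAfter is a placeholder for your actual data access.
  const tweets = fetchTweetsAfter(Number(req.query.last_id));
  // The server renders the fragment; the client just prepends it.
  const html = tweets.map(t => `<li>${escapeHtml(t.text)}</li>`).join('');
  res.type('html').send(html);
});

app.listen(3000);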
Hope this helps!

Related

How to return Vuex-generated page to client on initial Vue load?

I have a Vue / Nuxtjs app which displays lots of user-provided content (think of it as a crowdsourced blog). The content on the client is retrieved and stored in Vuex. When a page is loaded, it displays the current content and then uses fetch to get the updated data. Here is a typical component:
fetch() {
  this.$store.dispatch('feeds/refreshLatest')
},
computed: {
  feed() {
    return this.$store.state.feeds.latest
  }
}
where feeds/refreshLatest uses axios to retrieve the posts.
This works quite well. The problem is the initial load is very slow, especially on the front page which has to process and display dozens of articles.
I have SSR enabled, and would like the server to store the content, and then on initial load provide a rendered page to the client. However, the Vuex object on the server seems to be new for each request, and so the client has to wait for the entire set of articles to be fetched before anything is displayed, which is unacceptable. Doing all the fetches only on the client solves this problem, but it is still too slow.
I thought I could somehow use the same server-side Vuex store on each call and send it to the client with nuxtServerInit, but I don't see a way to share the Vuex store. Thank you for any pointers or other packages which could help.
To clarify the question: the fetch/API call finishes during server rendering, then the DOM is sent to the client, and this whole process runs on every request and is slow?
I solved a similar issue using cookies, because cookies are also available during server-side rendering. I used the method below (see the sketch after the link):
1. Store the data in a cookie after the initial API call, and serve the data from the cookie first. (If the cookie is present, do not call the API from the server.)
2. Call the API from the client to update the data.
I use this library.
https://github.com/microcipcip/cookie-universal/tree/master/packages/cookie-universal-nuxt#readme
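A rough sketch of what that flow could look like in the component from the question (untested; the cookie name and the feeds/setLatest mutation are made up, and keep in mind cookies are limited to roughly 4 KB):

async fetch() {
  const cached = this.$cookies.get('latest-feed'); // from cookie-universal-nuxt
  if (cached) {
    // Cookie present: render from the cached data. This works during SSR
    // too, because the cookie travels along with the page request.
    this.$store.commit('feeds/setLatest', cached); // hypothetical mutation
  } else {
    // No cookie yet: fetch as before, then cache for the next request.
    await this.$store.dispatch('feeds/refreshLatest');
    this.$cookies.set('latest-feed', this.$store.state.feeds.latest, {
      path: '/',
      maxAge: 60 * 5 // keep it short so the feed doesn't go stale
    });
  }
},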

Changing page content with custom scheme in WebKitGTK1

I have an app using the WebKitGTK1 API with WebKit-GTK 2.4.9 on Linux. (This is the current version in Debian Jessie and versions 2.5+ don't support the v1 API.)
I've implemented a custom URI scheme for loading entire basic page content by using a resource-request-starting handler, which parses the incoming URI via webkit_web_resource_get_uri and if it matches the custom scheme, generates some HTML content and calls webkit_network_request_set_uri to replace the original URI with a base64'd data: URI containing the content to render. (This is similar to the accepted answer of this question.)
This mostly works well, and my handler is called on each request (including repeated requests with the same original URI) and generates the correct content -- but somewhere upstream the browser appears to render only the first returned data for any given original URI, even if the data URI I generate is different.
Possibly of note is that webkit_web_resource_get_uri returns the original non-data: URI even after calling webkit_network_request_set_uri, so I assume this URI is being cached, and in turn is then being used as a key in some higher-level component to cache the data instead of using the real URI from the request.
Unfortunately, the resource's uri property appears to be G_PARAM_CONSTRUCT_ONLY, and there doesn't appear to be any public API to set and/or clear it so that the rewritten URI of the request is used instead. Is there some way to force GTK to set the property after construction anyway? As far as I can tell it does have a setter method internally, and the getter would do the Right Thing™ if the internal property were reset to NULL.
Or is there some better method to force WebKit to render the new data: URI despite anything it thinks to the contrary?
For the moment I've worked around it by including the values that make it generate different data in the original custom URI (passed to webkit_web_view_load_uri or in links in the generated page). This does work, but it's a bit ugly, and could be problematic if I forget to add something in the future, or if something affects generation but is not known in advance. It seems a bit silly that it goes to all the trouble of raising the event that generates the correct data, only to throw it away later, (presumably) due to a comparison on the wrong URI.
I suppose using a known-unique value (e.g. a sequential incrementing id) would also work, and would resolve some of the unknown-in-advance issues, but that's no less ugly.

How is XHR a viable alternative to asynchronous module definition?

I'm learning about the case for asynchronous module definition (AMD) from here but am not quite clear about the passage below:
It is tempting to use XMLHttpRequest (XHR) to load the scripts. If XHR is used, then we can massage the text above -- we can do a regexp to find require() calls, make sure we load those scripts, then use eval() or script elements that have their body text set to the text of the script loaded via XHR.
XHR is using ajax or something to make a call to grab a resource from the database, correct? What do the eval() or script elements have to do with this? An example would be very helpful.
That part of RequireJS' documentation is explaining why using XHR rather than doing what RequireJS does is problematic.
XHR is using ajax or something to make a call to grab a resource from the database, correct?
XHR is what allows you to make an Ajax call. jQuery's $.ajax for instance creates an XHR instance for you and uses it to perform the query. How the server responds depends on how the server is designed. Most of the servers I've developed won't use a database to answer a request made to a URL that corresponds to a JavaScript file. The file is just read from the file system and sent back to the client.
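For illustration, the bare XHR underneath an Ajax call looks roughly like this (the URL is made up):

var xhr = new XMLHttpRequest();
xhr.open('GET', '/scripts/some-module.js'); // fetch the script as plain text
xhr.onload = function() {
  if (xhr.status === 200) {
    var source = xhr.responseText; // the module's code, as a string
    // ...it's now up to us to execute that string somehow (see below).
  }
};
xhr.send();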
What does the eval() or script elements have to do with this?
Once the request is over, what you have is a string that contains JavaScript. You've fetched the code of your module, but presumably you also want to execute it. eval is one way to do it, but it has the disadvantages mentioned in the documentation. Another way would be to create a script element whose body is the code you've fetched and then insert this script into the DOM, but this also has issues, as explained in the documentation you refer to.
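Continuing the sketch above, the two options would look roughly like this:

// Option 1: eval runs the fetched code immediately, with the drawbacks
// (debugging, scoping) the RequireJS documentation describes.
eval(source);

// Option 2: a script element whose body is the fetched code; the browser
// executes it when the element is inserted into the DOM.
var script = document.createElement('script');
script.text = source;
document.head.appendChild(script);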

dojo load js script and then execute it

I am trying to load a template with XHR and then append it to the page in some div. The problem is that the page loads the script but doesn't execute it.
The only solution I came up with is to add a flag in the page (say: "Splitter"): before the splitter I put the JS code, and after the splitter the HTML code. When getting the template by Ajax, I split it. Here is an example:
The data I request by Ajax is:
//js code:
work_types = <?php echo $work_types; ?>; //json data
<!-- Splitter -->
html code:
<div id="work_types_container"></div>
So the callback returns data, which I simply split and execute like this:
data = data.split("<!-- Splitter -->");
dojo.query("#some_div").append(data[1]); // html part
eval(data[0]); // js part
Although this works for me, it doesn't seem very professional! Is there another way in Dojo to make it work?
If you're using Dojo, it might be worth looking at the dojox/layout/ContentPane module (reference guide). It's quite similar to the dijit/layout/ContentPane variant but with one special extension: it allows executing the JavaScript on that page (using eval()).
So if you don't want to do all that work by yourself, you could do something like:
<div data-dojo-type="dojox/layout/ContentPane" data-dojo-props="href: myXhrUrl, executeScripts: true"></div>
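Or, if you prefer to do it programmatically, something along these lines (a sketch; myXhrUrl is a placeholder, and the target node id is taken from your example):

require(['dojox/layout/ContentPane', 'dojo/domReady!'], function(ContentPane) {
  var pane = new ContentPane({
    href: myXhrUrl,       // URL of the template to fetch
    executeScripts: true  // run any <script> blocks in the response
  }, 'work_types_container');
  pane.startup();
});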
If you're concerned about it being a DojoX module (DojoX will disappear in Dojo 2.0), the module is labeled as maintained, so it has a higher chance of being integrated in dijit in later versions.
As an answer to your eval() safety question (in the comments): it's allowed of course, else there wouldn't be such a function as eval(). But indeed, it's less secure. The reason is that the client in fact trusts the server and executes everything the server sends to the client.
Normally, there are no problems unless the server sends malicious content (this could be due to an issue on your server or a man-in-the-middle attack) which will be executed, causing an XSS vulnerability.
In the ideal world the server only sends data and the client interprets this data and renders it itself. In this design, the client only trusts data from the server, so no malicious logic can be executed (and there will be no XSS vulnerability).
It's unlikely that it will happen and the ideal world solution is not even possible in many cases since the initial page request (loading your webpage) is in fact a similar scenario where the client executes whatever the server sends.
Web application security is not about being 100% safe (that's impossible); it's about leaving as few open doors as possible for attackers. It's up to you what you consider safe and to verify whether the "ideal world" solution is possible in this specific scenario (it might not be, or it might take too much time compared to the other solution).

How can I pass data to the success callback of an ExtJS-based AJAX file upload?

So, I've read a lot about using ExtJS's fileuploadfield to submit a form via an IFRAME. I understand that I'm supposed to reply with a JSON object indicating success or failure; fine. What I want to know is, how can I get more information back to the calling code? I don't want to simply send a file and say "yup, that worked fine" -- I want to submit a document, act on it, and return a result.
Say I have the user upload an XML document -- I might want to do a lookup or conversion based on it and update the contents of a form on my page accordingly. Is this even possible? I'd strongly prefer to avoid involving Flash or embedded applets if at all possible. If need be, I could even restrict this behavior to HTML5-compliant browsers...
I honestly thought I wasn't seeing the response I sent, but it was a server-side error. My success callback is now firing, with the full text of my server's response available as f.responseText (where f is the first argument to the success callback). My mistake!
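For anyone hitting the same thing, the working shape of the call looks roughly like this (a sketch; the URL, field names, and component id are made up; remember the JSON must be served as text/html because the submit goes through a hidden iframe):

myFormPanel.getForm().submit({
  url: '/upload-xml',
  waitMsg: 'Processing document...',
  success: function(form, action) {
    // action.result is the decoded JSON response; any extra fields the
    // server includes beyond success:true ride along here.
    var converted = action.result.converted; // hypothetical extra payload
    Ext.getCmp('target-field').setValue(converted);
  },
  failure: function(form, action) {
    Ext.Msg.alert('Upload failed',
      action.result ? action.result.message : 'Server error');
  }
});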