Why is there a time span between different network requests?

I'm optimizing the loading times in a web app and I can't figure out what the problem is. Firebug's Net panel is showing gaps between requests.
Can someone explain this chart to me?

The gaps between the requests can have two causes:
Time needed to parse the requested page
When you request a URL, the browser needs to parse the returned contents to check whether they contain URLs to other resources like JavaScript files, CSS files, images, etc. Subsequently requested resources need to be parsed, too. CSS files, for example, can contain references to images, so the contents of the CSS file first need to be parsed to get those URLs.
Dynamically requested resources
Using JavaScript, resources can be requested asynchronously. These requests can be triggered e.g. through AJAX or by dynamically inserting DOM nodes like <img src="xyz.png" alt=""> into the page.
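For illustration, here is a minimal sketch of both kinds of dynamically requested resources (the URLs are placeholders); each request only hits the network once the script actually runs, which shows up as a gap in the Net panel:

// 1. AJAX: the request fires only when the script executes
var xhr = new XMLHttpRequest();
xhr.open('GET', '/data.json'); // placeholder URL
xhr.onload = function () { console.log(xhr.responseText); };
xhr.send();

// 2. Dynamically inserted DOM node: the image download starts
// only once the node is created and appended
var img = document.createElement('img');
img.src = 'xyz.png';
img.alt = '';
document.body.appendChild(img);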

Related

jQuery: add loading=lazy to all images

I have a fairly large webpage, and it would take days to add the loading=lazy attribute to all img tags on my site. Is it useful to apply something like $('img').attr('loading', 'lazy') (does this work?) to the site, or will it just make the site even slower?
It doesn't necessarily have the expected effect: if you're adding the attributes via JavaScript, the page itself has already been parsed by the browser (and by its preload scanner as well), and all of those images will already have been put into the download queue, as if the attribute had never existed on them.
So I would strongly recommend adding those attributes directly in the source markup.
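For comparison, a minimal sketch of the attribute set directly in the markup (the filename is a placeholder), so the browser's preload scanner sees it on the very first parse:

<img src="photo.jpg" loading="lazy" alt="A photo">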

Force asset re-caching

Cornerstone's carousel didn't work for my client's design, and so I built a custom hero component that I've included on several custom page templates. To allow the client to manually update images, I've set the hero images to use the {{cdn}} handlebars helper to pull images down from WebDAV.
E.g. background-image: url('{{cdn "webdav:img/home-hero.jpg"}}');
The issue we're running into now is that, because the CDN caches the site's asset files on the server, when my client updates home-hero.jpg through WebDAV the server has no way of knowing, and so it continues to serve the old version of home-hero.jpg.
Is there a way for my client to force re-caching of assets, or to bypass it altogether? I've attempted to use the imbypass parameter (webdav:img/home-hero.jpg?imbypass=on), but this apparently just serves the unoptimized, but already cached, image.
One solution would be to append a random query string to the image src URL to prevent caching. If you're developing on a Stencil theme, the easiest way to accomplish this would be to use the {{moment}} helper to generate a date string so you can be sure you're getting a unique value each time.
<img src="/content/home-hero.jpg?{{moment}}"/>
will render as:
<img src="/content/home-hero.jpg?2018-08-23T00:00:00-05:00">
More info on using query strings to prevent caching: https://css-tricks.com/strategies-for-cache-busting-css/
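Since your hero image is set via CSS rather than an <img> tag, the same trick should carry over to wherever your template uses the {{cdn}} helper; a sketch, untested against your theme:

background-image: url('{{cdn "webdav:img/home-hero.jpg"}}?{{moment}}');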

SEO: Can dynamically generated links be crawled?

I have a page containing <div> tags with onclick="" code that calls an AJAX request to get JSON data, and then iterates through the results to form links (<a>) that are appended to the page. These links do not exist anywhere else on my website. How can I make these dynamically generated links crawlable?
My initial thought was to turn the <div> tags into <a> tags with href="#", but with my limited knowledge of how typical crawlers work, I don't think this would solve my problem, since the "#" would be what's recognized by the crawler, and not necessarily the dynamically generated output. That's aside from the fact that I don't want the scroll position to be altered at all, which would also rule out giving the <a> tag an id and having it reference itself.
Do I have any options aside from making a new page containing all of the links I need to be crawled? Thanks.
As a general rule, content that is created or made available through JavaScript cannot be found or indexed by search engines. Google does support crawlable Ajax, but using it as the only means of accessing your content is bad for accessibility, and other search engines can't get to that content either, which is also not a good thing. Basically, crawlable Ajax is a bad idea.
You should always make your content available without requiring JavaScript to get it. Then you can improve your site by adding JavaScript to make getting the content faster or easier. This is called Progressive Enhancement and is how good websites are built.
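As an illustrative sketch (the endpoint and markup are hypothetical): the links exist as plain HTML that any crawler can follow, and JavaScript only enhances the page afterwards.

<div id="results">
  <a href="/items/1">Item 1</a>
  <a href="/items/2">Item 2</a>
</div>
<script>
  // Enhancement only: rebuild the same list from JSON for JS-enabled users
  fetch('/items.json') // hypothetical endpoint
    .then(function (res) { return res.json(); })
    .then(function (items) {
      var container = document.getElementById('results');
      container.innerHTML = '';
      items.forEach(function (item) {
        var a = document.createElement('a');
        a.href = item.url;
        a.textContent = item.title;
        container.appendChild(a);
      });
    });
</script>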

Preventing Images being cached in the browser

I have a "Browse Pictures" feature with thumbnails that expand when a user clicks them.
Both images are stored in separate virtual directories at different sizes, the larger being 200×200 px.
Still, when I click a thumbnail to enlarge it, it only shows the smaller image instead of the 200×200 one.
You can add a random URL parameter to the image's src, so that the rendered HTML looks like
<img src="http://static.example.com/some/large/image.jpg?234234652346"/>
instead of
<img src="http://static.example.com/some/large/image.jpg"/>
It sounds like you don't want to prevent them from being cached as such; you want to give them different URLs.
If they do have different URLs, then this is not a caching problem.
To prevent caching, you use a Cache-Control: no-cache HTTP response header when serving the images. (Are you using Apache?)
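If you are on Apache, a minimal sketch using mod_headers (adjust the file pattern to your image types):

<FilesMatch "\.(jpe?g|png|gif)$">
    Header set Cache-Control "no-cache"
</FilesMatch>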
But if you really prevent caching, your data transfer will be higher than it needs to be: every time users visit your gallery, they will fetch all of your images again.

Can I force all links in a WebView (or WebFrame) to be treated as absolute paths?

So I'm working with WebKit's WebView and WebFrame. I use a custom NSURLProtocol to retrieve the HTML for each request from a database. The problem arises from the fact that the links in the HTML are all relative, when they really ought to be absolute. For example, the page
foo/bar.html
May have a link in it that points to
foo/baz.html
The problem is that since the link is relative, the request ends up being for
/foo/foo/baz.html
So far, I've tried to work around this by comparing the two URLs and stripping off the common prefix - in this case 'foo/' - leaving me with foo/baz.html. This doesn't work for all possibilities, however, especially when there are multiple directories in the path. I do this in the "didStartProvisionalLoadForFrame:" method of my WebView's frameLoadDelegate.
Unfortunately, I do not have control over the HTML that I'm displaying, so modifying the links themselves is not an option.
Try being the main frame's resource load delegate and implementing webView:resource:willSendRequest:redirectResponse:fromDataSource: to modify the URL being requested. Send relativeString to the request's URL to get the original relative URL, then use -[NSURL initWithString:relativeToURL:] to create a new URL with the same relative string against the correct base URL.
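A rough sketch of that delegate method, assuming a hypothetical myscheme:/// root for your database-backed URLs:

- (NSURLRequest *)webView:(WebView *)sender
                 resource:(id)identifier
          willSendRequest:(NSURLRequest *)request
         redirectResponse:(NSURLResponse *)redirectResponse
           fromDataSource:(WebDataSource *)dataSource
{
    // relativeString returns the original relative portion when the URL
    // was created against a base URL; otherwise it is the full string
    NSString *relative = [[request URL] relativeString];
    NSURL *base = [NSURL URLWithString:@"myscheme:///"]; // hypothetical base
    NSURL *rebased = [[NSURL alloc] initWithString:relative relativeToURL:base];
    if (rebased == nil)
        return request; // fall back to the original request
    NSMutableURLRequest *fixed = [request mutableCopy];
    [fixed setURL:rebased];
    return fixed;
}

Remember to register your object first, e.g. [webView setResourceLoadDelegate:self];, before the load begins.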