I am trying to increase page loading speed in Shopify. I tested with GTmetrix and added many apps, but page loading speed didn't improve. I tried defer and async on my script tags, but nothing happened. Is there any other way to increase the PageSpeed Score and YSlow Score?
This is a right old can o' worms and is not unique to Shopify, but I would suggest looking at three broad areas:
Most ecommerce stores are extremely image heavy. (You can't sell product without pictures of it, right?!) Optimize those images as much as possible, use modern front-end code (responsive images, lazy loading), and consider using a 3rd-party CDN to serve correctly sized renders based on browser features (e.g. WebP support) and bandwidth limitations.
Store owners tend to be heavily invested in software that monitors impressions and conversions via 3rd-party JavaScript: analytics generally, advertising ROI, affiliate commissions. Audit your tracking pixels and consider using a tag manager or acceleration platform (one way to defer these scripts is sketched below).
Investigate possible automated improvements to your development workflow. Can you improve Time To Interactive and Speed Index with optimizations to your critical path: inlined critical CSS, icon generation, bundled JavaScript, and so on?
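To make the tracking-pixel point concrete, here is a minimal TypeScript sketch (the script URL and timings are placeholders, not anything Shopify-specific) that defers a non-critical third-party script until the browser is idle instead of letting it compete with the critical path:

```typescript
// Hypothetical example: defer a non-critical third-party script (tracking pixel,
// chat widget) until the browser is idle. URL and timings are made up.
function loadWhenIdle(src: string): void {
  const load = () => {
    const script = document.createElement("script");
    script.src = src;
    script.async = true;
    document.head.appendChild(script);
  };
  // requestIdleCallback is not supported everywhere, so fall back to a timer.
  if ("requestIdleCallback" in window) {
    window.requestIdleCallback(load, { timeout: 5000 });
  } else {
    window.setTimeout(load, 3000);
  }
}

loadWhenIdle("https://example.com/tracking-pixel.js");
```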
Related
How can we increase speed and performance in Nuxt with SSR for the following points?
Reduce unused JavaScript
Avoid serving legacy JavaScript to modern browsers
Minimize main-thread work
Reduce JavaScript execution time
Avoid enormous network payloads
Pretty generic questions, so let's go point by point:
Reduce unused JavaScript: you can tree-shake your code (and 3rd-party code) and lazy load your routes and components (Nuxt does that nicely)
Avoid serving legacy JavaScript to modern browsers: the modern property is nice for that (see the config sketch after this list)
Minimize main-thread work: beware of heavy 3rd-party scripts like Google Analytics/GTM, heavy chats, heavy operations, etc. Using a service worker can help; otherwise you could also try Partytown
Reduce JavaScript execution time: same thing, it depends on your code here. More analysis of it will be required
Avoid enormous network payloads: check whether you're making huge numbers of HTTP calls or loading big (e.g. 5 MB) i18n JSON files
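To make the modern/lazy-loading points concrete, here is a minimal sketch of the relevant config (assuming Nuxt 2, which is where the modern property lives; treat this as illustrative rather than a drop-in config):

```typescript
// nuxt.config — illustrative sketch of the options mentioned above (Nuxt 2 style).
export default {
  // Serve an ES-module bundle to modern browsers, a legacy bundle to the rest.
  modern: "client",

  build: {
    // Split layout/page/common chunks so routes are code-split and lazy loaded.
    splitChunks: {
      layouts: true,
      pages: true,
      commons: true,
    },
  },

  // Auto-import components; prefixing a tag with "Lazy" (e.g. <LazyHeavyChart />)
  // turns it into a dynamic import that only loads when rendered.
  components: true,
};
```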
As always, you cannot have a quick and simple answer on that kind of subject. You either need a performance expert or need to debug/learn it yourself.
This is a nice start; you can find quite a lot of explanation regarding Core Web Vitals there.
This frontend checklist is always a nice article to read too.
PS: also, if the matter is mostly SSR, it may come down to having better infrastructure on the backend: a beefier VPS, an improved DB, maybe some Elasticsearch, some caching, etc. (all the usual things you can improve on the backend)
For speed optimization, we need to follow these steps:
Optimize the images
Use shouldPreload in the render option of nuxt.config.js (see the sketch after this list)
Use compressor: shrinkRay() for compression
Use dns-prefetch for Google Fonts
Minify JS and CSS
Optimize API queries
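A minimal sketch of the render-related items above (assuming Nuxt 2 and the shrink-ray-current package; the font host and preload rule are just examples):

```typescript
// nuxt.config — illustrative only; adjust the compressor and hosts to your stack.
import shrinkRay from "shrink-ray-current";

export default {
  render: {
    bundleRenderer: {
      // Only preload what is actually critical instead of every emitted asset.
      shouldPreload: (file: string, type: string) =>
        ["script", "style"].includes(type),
    },
    // Brotli/gzip compression for server-rendered responses.
    compressor: shrinkRay(),
  },
  head: {
    link: [
      // Resolve the Google Fonts host's DNS early.
      { rel: "dns-prefetch", href: "https://fonts.googleapis.com" },
    ],
  },
};
```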
I can't find enough data about PDF generation performance. I'm planning to create a system, and one of its features is to generate PDFs: mostly simple ones of about 3-5 pages with only text and tables, occasionally a logo.
What's bothering me is the requirement to support high user traffic (about 2500 requests per second).
Do you know any tools (preferably in Java) that are fast and reliable enough to serve that many users as quickly as possible? How long would it take to serve this amount of traffic on a single, average machine? I would appreciate any info about experience on this topic.
You almost certainly have to execute some tests with your typical workload on your typical machine. This is probably the only way you can evaluate whether any tools will be able to do what you need.
2500 requests per second is a non-trivial requirement, so you are right to be concerned. If that 2500/sec is a sustained load and each request has to produce a 3-5 page PDF, you simply might not be able to keep up on a "single average machine". It's not only processing power you'll have to consider, but also memory and IO performance.
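As a very rough back-of-envelope check (the 50 ms per document below is a pure assumption, not a benchmark), Little's Law gives a feel for how much concurrency 2500/sec implies:

```typescript
// Little's Law: average concurrency = arrival rate x average service time.
// All numbers here are illustrative assumptions, not measurements.
const requestsPerSecond = 2500;
const secondsPerPdf = 0.05; // assume 50 ms to render one 3-5 page PDF

const concurrentRenders = requestsPerSecond * secondsPerPdf;
console.log(`~${concurrentRenders} PDFs in flight at any moment`); // ~125

// 125 simultaneous renders (plus their buffers) is a lot for one average box,
// which is why memory and load distribution come up in the next paragraph.
```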
From experience iText is fast and Docmosis has some built-in facilities to distribute load to other hosts. I've seen both working stably under load. Be careful with memory management when you have that many documents on the fly - if you fall behind you might "blow up" no matter what document engine you use.
For an ecommerce website, if the number of visitors keeps increasing, user access speed on the website gets slower.
Is there any solution to stop access speed from degrading as the number of visitors increases?
Many thanks!
I think that the answer depends on many variables. Probably too many.
First of all it depends on these factors:
The software used for the site (is it something written from scratch, something you bought, an open-source ecommerce project?)
It depends on the bandwidth available (you can increase it if needed)
It depends on the quality of the code (I have seen software that, when loading some pages, pulls in several whole tables, making those pages load very slowly)
It depends on the hardware, and how many sessions it can handle concurrently.
etc.
Obviously, if the number of users has only grown by a few, then there are probably some problems with the software (configuration? bad software? and so on).
Probably if you provide more details, the answer could be more accurate.
How can I find out my website's maximum capacity for simultaneous visitors?
- a kind of stress test for unexpected situations -
You are correct to think in terms of a stress test. You need to be able to reproduce the number of users you are expecting in order to know precisely how many concurrent users your application will be able to handle.
You start with a low number of users and then increase it until you reach a point where your app stops answering in an acceptable amount of time.
I'm afraid there is no simple answer to this, but the simplest way I can think of is to write a simple script that makes GET/POST requests (maybe even using wget) and run it on a farm on Amazon EC2 or something like that, so you can truly reach the max capacity of your infrastructure.
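A toy version of such a script (URL, concurrency and round count are placeholders; dedicated tools like JMeter, k6 or ab will do this far better) might look like this, assuming Node 18+ for the built-in fetch:

```typescript
// Toy load generator: fire batches of concurrent GET requests and time them.
// Increase `concurrency` between runs until response times degrade.
const target = "https://example.com/";
const concurrency = 50;
const rounds = 20;

async function timedGet(url: string): Promise<number> {
  const start = Date.now();
  await fetch(url);
  return Date.now() - start;
}

async function run(): Promise<void> {
  for (let round = 1; round <= rounds; round++) {
    const timings = await Promise.all(
      Array.from({ length: concurrency }, () => timedGet(target))
    );
    const avg = timings.reduce((a, b) => a + b, 0) / timings.length;
    console.log(`round ${round}: avg ${avg.toFixed(0)} ms over ${concurrency} requests`);
  }
}

run();
```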
If your site is primarily static content, then you will most likely be limited by bandwidth. In this case, an estimate of the capacity can be easily calculated for a given set of expected user activities.
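For example (with made-up numbers): a 100 Mbit/s uplink is roughly 12.5 MB/s, so if an average page view transfers 1 MB, bandwidth alone caps you at around 12 page views per second, regardless of how fast the server itself is.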
If you have site that is built on common software, you might be able to find benchmarks of that software that will give you a rough estimate of the capacity you can expect.
If this is a critical site or it is a hand-built or highly customized application, then there is no substitute for testing. You need "web performance load testing" software - google for it. This type of software will simulate many browsers visiting your site at the same time. There is a wide variety of choices, from free to $$$$$$$$s.
I am charged with designing a web application that displays very large geographical datasets. One of the requirements is that it should be optimized so that PCs still on dial-up, which is common in the suburbs of my country, can use it as well.
Now I am permitted to use Flash and/or Silverlight if that will help with the limited development time and user experience.
The heavy part of the geographical data is chunked into tiles and loaded like map tiles in Google Maps, but that means I need a lot of HTTP requests.
Should I go with just JavaScript + HTML? Would I end up with a faster application using Flash/Silverlight, since I can implement some complex algorithms with those two technologies (like Deep Zoom)? Deploying a desktop app, though, is out of the question since we don't have that much maintenance funding.
It just needs to be fast... really fast..
p.s. faster is in the sense of "download faster"
I would suggest you look into Silverlight and DeepZoom
Is something like Gears acceptable? This will let you store data locally to limit re-requests.
I would also stay away from Flash and Silverlight and go straight to JavaScript/AJAX. jQuery is a ton-O-fun.
I don't think you'll find Flash or Silverlight is going to help too much for this application. Either way you're going to be utilizing tiled images and the images are going to be the same size in both scenarios. Using Flash or Silverlight may allow you to add some neat animations to the application but anything you gain here will be additional overhead for your clients on dialup connections. I'd stick with plain Javascript/HTML.
You may also want to look at asynchronously downloading your tiles via one of the Ajax libraries available. Let's say your user can view 9 tiles at a time and scroll/zoom. Download those 9 tiles they can see plus whatever is needed to handle the zoom for those tiles on the first load; then you'll need to play around with caching strategies for prefetching other information asynchronously.
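A rough sketch of that idea (the tile URL scheme, viewport size and prefetch ring are all assumptions):

```typescript
// Toy tile loader: fetch the visible tiles first, then prefetch neighbours
// in the background. The URL pattern is hypothetical.
const tileCache = new Map<string, Promise<Blob>>();

function tileUrl(z: number, x: number, y: number): string {
  return `https://example.com/tiles/${z}/${x}/${y}.jpg`;
}

function loadTile(z: number, x: number, y: number): Promise<Blob> {
  const key = `${z}/${x}/${y}`;
  if (!tileCache.has(key)) {
    tileCache.set(key, fetch(tileUrl(z, x, y)).then((res) => res.blob()));
  }
  return tileCache.get(key)!;
}

async function showViewport(z: number, cx: number, cy: number): Promise<void> {
  // 1. Load the 3x3 block of visible tiles and wait for them.
  const visible: Promise<Blob>[] = [];
  for (let dx = -1; dx <= 1; dx++) {
    for (let dy = -1; dy <= 1; dy++) {
      visible.push(loadTile(z, cx + dx, cy + dy));
    }
  }
  await Promise.all(visible);

  // 2. Prefetch the surrounding ring asynchronously; deliberately not awaited.
  for (let dx = -2; dx <= 2; dx++) {
    for (let dy = -2; dy <= 2; dy++) {
      if (Math.abs(dx) === 2 || Math.abs(dy) === 2) {
        void loadTile(z, cx + dx, cy + dy);
      }
    }
  }
}
```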
At one place I worked, a rules engine was taking a bit too long to return a result, so they opted to present the user with a "confirm this" screen. The few seconds it took users to review and click next was more than enough time to return the results. It made the app look lightning fast to the user when in reality it took a bit longer. You have to remember, user perception of performance is just as important in some cases as the actual performance.
I believe Microsoft's Seadragon is your answer. However, I am not sure if that is available to developers.
It looks like some of it has found its way into Silverlight