Calculating front-end performance metrics via Web APIs (Navigation Timing API and Performance Timeline API)

To calculate First Contentful Paint, I ran the following command in my browser console:
window.performance.getEntriesByType('paint') -> From that, I fetched the start time of the first-contentful-paint entry, which is startTime: 710.1449999972829 ms.
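Concretely, what I run in the console is roughly the following (pulling the first-contentful-paint entry out of the list of paint entries):

```js
// List the paint entries and pull out the first-contentful-paint start time.
const paints = window.performance.getEntriesByType('paint');
const fcp = paints.find(entry => entry.name === 'first-contentful-paint');
console.log(fcp && fcp.startTime); // e.g. 710.1449999972829 (ms)
```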
But if I audit the same page via Lighthouse (from Chrome DevTools), the First Contentful Paint calculated by Lighthouse is 1.5 s.
I am trying to understand why there is such a wide difference between the two values. I tried running the audit a couple of times via Lighthouse, but the result hardly ever matches the Web API data.
Can anyone explain why there is such a huge difference? Should I go ahead with the data from the Web APIs, or should I treat the Lighthouse data as the valid one?

Thank you for the great question, I learned something today because of it.
It appears that even in desktop view there is some throttling applied to the CPU; as far as I am aware, this didn't use to be the case.
I found this article that explains the current throttling policy.
The key part here is as follows:
Starting with Chrome 80, the Audits panel is simplifying the throttling configuration:
1. Simulated throttling remains the default setting. This matches the setup of PageSpeed Insights and the Lighthouse CLI default, so this provides cross-tool consistency.
2. The No throttling option is removed, as it leads to inaccurate scoring and misleading metric results.
3. Within the Audits panel settings, you can uncheck the Simulated throttling checkbox to use Applied throttling. For the moment, we are keeping this Applied throttling option available for users of the View Trace button. Under Applied throttling, the trace matches the metric values, whereas under Simulated things do not currently match up.
Point 3 is the main part: the throttling is still applied to the CPU even on desktop. Also note they say "for the moment", so this is clearly an option they are considering removing in the future.
This is actually a really good idea: most developers are running powerful hardware, while most consumers are running cheap off-the-shelf laptops with i3 processors (or equivalent... or worse!).
As Google spends a lot of time refining Lighthouse, I would leave Simulated throttling ON and use its results, as they will be more indicative of what an end user might see.
Switching off simulated throttling
If you want your trace results (or console Performance API results) to match, uncheck "Simulated throttling" at the top of the page.
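If you would rather compare like for like from a script instead of the DevTools UI, you can also run Lighthouse from Node with throttling disabled. A minimal sketch, assuming the lighthouse and chrome-launcher npm packages (option names are from memory, so double-check them against the Lighthouse docs):

```js
// Run Lighthouse with "provided" (i.e. no simulated) throttling so its FCP
// is closer to what performance.getEntriesByType('paint') reports.
const lighthouse = require('lighthouse');
const chromeLauncher = require('chrome-launcher');

(async () => {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  const result = await lighthouse('https://example.com', {
    port: chrome.port,
    onlyCategories: ['performance'],
    throttlingMethod: 'provided', // no simulated CPU/network throttling
  });
  console.log(result.lhr.audits['first-contentful-paint'].displayValue);
  await chrome.kill();
})();
```

With simulated throttling left on (the default), expect the reported FCP to be noticeably higher than the raw Performance API number, for the reasons above.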

Related

Google Optimize Delivering No Sessions When Using UTM Targeting

I have a Google Optimize experiment running in which I'm targeting the utm_medium parameter. I have tried almost every variation of the experiment I can imagine, and every time I target this parameter my experiment receives almost no sessions. A couple of helpful notes:
I have run experiments using simple geo-targeting which run fine on the same account, so the Optimize installation is hopefully correct
I am only testing one single image change, and it's working with simpler targeting on my landing page
I've tried measuring against my main (lead gen) objective but also session duration and bounce rate, all with the same result
When I check my tagged URLs in the 'checker' tool, every variant (https and http) gives me a green 'passed' verification to say that my tagged URLs SHOULD be triggering my experiment
I have attached my targeting criteria so hopefully this is helpful!
Any help would be incredibly appreciated!
[Screenshot: targeting rule in place]

Google+: determining changes in the network

I am trying to determine changes in the Google+ network in an efficient manner (profile changes). My first idea was to use the eTags of the People.List and People.Get. My assumption was that the eTag in the List (person) would be the same as the one in the Get. This is not the case!
I would rather not fetch the details of all the people in the network and check the eTag for each of them; I would run out of daily API calls very quickly in that scenario.
Are there any other ways of determining the changes in the network?
Thanks!
I'm not aware of a way to have your service notified when changes occur on a user's profile. I don't think that etags will work for what you are trying to do, and the client libraries should already be using etags to manage any query caching. You can perform a few tricks to make queries lighter on your backend, though:
Batch API calls
Use a fields filter to get only the data that matters to your application (see the sketch after this list)
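For example, a partial-response request against people.list might look something like this; the endpoint path and field names are from memory and purely illustrative, so check them against the Google+ API reference:

```js
// Ask the Google+ people.list endpoint for just id, displayName and etag,
// so the payload stays small. URL and field names are illustrative assumptions.
const url = 'https://www.googleapis.com/plus/v1/people/me/people/visible'
  + '?fields=items(id,displayName,etag)'
  + '&key=YOUR_API_KEY';

fetch(url)
  .then(response => response.json())
  .then(data => {
    // Only the requested fields come back for each person.
    (data.items || []).forEach(person => console.log(person.id, person.etag));
  });
```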
If you are running out of quota, you can also request to have your limits raised from the Google APIs console by clicking the Quotas link on the left. The developer relations team for Google+ checks these requests regularly and will raise your quota limits if your usage justifies it.

Is it a bad idea to have a web browser query another API instead of my site providing it?

Here's my issue. I have a site that provides some investing services. I pay for end-of-day data, which is all I really need for my service, but I feel it's a bit odd when people check in during the day and the site only displays yesterday's closing price. End-of-day data is fine for my analytics, but I want to display delayed quotes on my site.
According to Yahoo's YQL FAQ, if you use IP-based authentication then you are limited to 1,000 calls/day/IP. If my site grows I may exceed that, so I was thinking of pushing this request to the people browsing my site themselves, since it's extremely unlikely that the same IP will visit my site 1,000 times a day (my site itself has no use for this info). I would call a URL from their browser, then parse the results so I can display them in the format of the site's template.
I'm new to web development, so I'm wondering: is it common practice, or a bad idea, to have the user's browser make the API call itself?
It is not a bad idea at all:
You stretch the rate limits this way;
Your server will respond faster (since it does not have to contact the API);
Your page will load faster because the initial response is smaller;
You can load the remaining data from the API asynchronously while your UI is already responsive (see the sketch after this list).
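A rough sketch of what that client-side call could look like; the YQL endpoint, table name and response shape are assumptions on my part, so adapt them to whatever Yahoo's FAQ actually documents:

```js
// Fetch a delayed quote from YQL in the visitor's browser and drop it into
// the page. Endpoint, table and response shape are assumptions; the element
// id is whatever your template uses.
const yql = 'select symbol, LastTradePriceOnly from yahoo.finance.quotes where symbol = "AAPL"';
const url = 'https://query.yahooapis.com/v1/public/yql'
  + '?q=' + encodeURIComponent(yql)
  + '&format=json'
  + '&env=' + encodeURIComponent('store://datatables.org/alltableswithkeys');

fetch(url)
  .then(response => response.json())
  .then(data => {
    const quote = data.query.results.quote;
    document.getElementById('delayed-price').textContent = quote.LastTradePriceOnly;
  })
  .catch(() => {
    // The live quote is a nice-to-have: fall back to yesterday's close on failure.
  });
```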
Generally speaking, it is a great idea to talk to APIs from the client: it's more dynamic, you spread traffic, you get more responsiveness, and so on.
The biggest downside I can think of is that you become dependent on the availability of the other service. On the other hand, your server(s) will be under less stress because the traffic is spread out.
Hope this helped a bit! Cheers!

Ajax TruClient: scope of use and limitations?

I just got a LoadRunner 11 VuGen licence and tried TruClient. It looks great, and the Firefox-based script recording works really nicely. However, I have not found answers to the following:
1) Is TruClient limited in the same way as QuickTest Pro virtual user scripts (one user per OS)?
2) It is called Ajax TruClient; does that mean it supports only JavaScript-based web pages, or all pages (standard PHP/HTML), including JavaScript?
Here are a few answers for ya:
1) TruClient is not limited like a GUI Vuser (WinRunner or now QTP) to a single GUI session on a Load Generator. You can run multiple AJAX TruClient Virtual Users on a single Load Generator and they will run "invisibly" like a virtual user. You might find that the driver is much heavier (takes more memory and CPU), so you can't run as many vusers as the Web HTTP/HTML vuser.
2) TruClient is not just for AJAX-based web pages - it can work on any web page that will render in a browser.
In addition to what Mark said, it's purely event-driven: if a user clicks on a link, that is what gets rendered, consumed as a resource and subsequently displayed, as opposed to traditional headless implementations, which in return use fewer system resources.
This is one of the main caveats with TruClient (from experience): depending on the complexity of your script or workflow, a single simulated user can take a lot of resources, mainly memory in my case.
This is because for every virtual user that gets emulated, an instance of the Gecko web engine is spawned to replay the script, and this has its cost.
However, the level of realism comes very close to a typical user session and experience: you can, for example, set the typing speed, decide whether or not to simulate caching mechanisms, make additional corrections to pattern and image recognition, etc.
Overall, a mostly positive experience, which does however come at a certain price. Talk to your HP sales rep (disclaimer: a company I don't work for, I just have experience with the product).
A little more ...
TruClient is a big win in some respects, as you can avoid a ton of nasty correlation. But it also has some downsides: the memory/CPU footprint can be huge, and the sync issues can be tricky.

API: building an API but giving priority access to certain requests

I'm not sure how others have addressed this, but generally speaking, what is the best practice for giving your own apps priority treatment when it comes to using one of your own public APIs?
Use Cache Priority
Caching responses or interim calculations in RAM is typically the first optimization point, because caching is easier than micro-optimizing all your code. Controlling what goes into the cache and how long it stays there presents a top-level place to apply "priority treatment".
I like the cache management approach better than thread priority because if you are under load delaying the execution of a request often creates complex thread pool problems and decreases overall server throughput.
Caching Based on Load (rather than on app ownership) will Expand the Resource Pie
We take the RAM cache priority approach with the MapLarge Tile Server and Geocoding API. However, we don't actually give our own apps priority; instead, we base priority on request frequency and the time required to render a response. Unless you have large numbers of low-value API users, I would recommend doing something similar, because this approach should reduce overall load and enable the server to handle more API requests (a rough sketch of the scoring idea follows).
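Something like the following captures the scoring idea; this is not MapLarge's actual code, and the names and numbers are purely illustrative:

```js
// Hedged sketch: score cache entries by how often they're requested and how
// expensive they are to rebuild, then evict the lowest-scoring entry first
// when the cache is full.
class PriorityCache {
  constructor(maxEntries = 10000) {
    this.maxEntries = maxEntries;
    this.entries = new Map(); // key -> { value, hits, renderMs }
  }

  get(key) {
    const entry = this.entries.get(key);
    if (entry) entry.hits += 1;
    return entry ? entry.value : undefined;
  }

  set(key, value, renderMs) {
    this.entries.set(key, { value, hits: 1, renderMs });
    if (this.entries.size > this.maxEntries) this.evict();
  }

  // Priority = request frequency * cost to recompute; cheap, rarely-used
  // responses are the first to go.
  evict() {
    let worstKey = null;
    let worstScore = Infinity;
    for (const [key, entry] of this.entries) {
      const score = entry.hits * entry.renderMs;
      if (score < worstScore) {
        worstScore = score;
        worstKey = key;
      }
    }
    if (worstKey !== null) this.entries.delete(worstKey);
  }
}
```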
I recently wrote a white paper that highlights the different load profiles of cached and non-cached responses in a multi-tenant API environment. You can see it here:
http://maplarge.com/Tile-Server-Performance
API Policies can drive revenue
If you have free or low-paying users who are generating massive load, you might want to review your business plan and consider instituting account-based rate limits that match user revenue to server costs in a scalable way (a minimal sketch of one way to do this follows). If you do limit API users, I would recommend having explicit and predictable policies so they can project usage and know when to purchase an API account upgrade.
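For instance, a per-account token bucket whose size and refill rate come from the account's plan gives each tier explicit, predictable limits; the tier names and numbers here are made up for illustration:

```js
// Hedged sketch: per-account token buckets sized by plan, so paying tiers
// get predictably higher limits. Plans and rates are illustrative.
const PLANS = {
  free:    { capacity: 100,  refillPerSec: 1 },   // ~1 request/s sustained
  pro:     { capacity: 1000, refillPerSec: 10 },
  partner: { capacity: 5000, refillPerSec: 50 },
};

const buckets = new Map(); // accountId -> { tokens, lastRefill }

function allowRequest(accountId, plan) {
  const { capacity, refillPerSec } = PLANS[plan] || PLANS.free;
  const now = Date.now();
  let bucket = buckets.get(accountId);
  if (!bucket) {
    bucket = { tokens: capacity, lastRefill: now };
    buckets.set(accountId, bucket);
  }
  // Refill proportionally to elapsed time, capped at the bucket capacity.
  const elapsedSec = (now - bucket.lastRefill) / 1000;
  bucket.tokens = Math.min(capacity, bucket.tokens + elapsedSec * refillPerSec);
  bucket.lastRefill = now;

  if (bucket.tokens >= 1) {
    bucket.tokens -= 1;
    return true;  // serve the request
  }
  return false;   // reject (e.g. respond with 429) or queue
}
```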