Google Optimize Delivering No Sessions When Using UTM Targeting

I have a Google Optimize experiment running in which I'm targeting the utm_medium parameter. I have tried almost every variation of experiment I can imagine, and every time I target this parameter my experiment receives nearly no sessions. A couple of helpful details:
I have run experiments using simple geo-targeting that run fine on the same account, so the Optimize installation is hopefully correct
I am only testing a single image change, and it works with simpler targeting on my landing page
I've tried measuring against my main (lead gen) objective as well as session duration and bounce rate, all with the same result
When I check my tagged URLs in the 'checker' tool, every variant (https and http) gives me a green 'passed' verification to say that my tagged URLs SHOULD be triggering my experiment
I have attached my targeting criteria so hopefully this is helpful!
Any help would be incredibly appreciated!
Targeting Rule in Place
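Before digging into Optimize itself, it can help to confirm what value the targeting rule would actually see in the tagged URL. A quick sanity check in Python (illustrative only; Optimize does its own parsing client-side, this just shows what value the utm_medium parameter carries):

```python
from urllib.parse import urlparse, parse_qs

def utm_medium(url):
    """Return the utm_medium value from a tagged URL, or None if absent."""
    params = parse_qs(urlparse(url).query)
    values = params.get("utm_medium")
    return values[0] if values else None
```

If the value this returns doesn't exactly match the string in your targeting rule (including case), the rule won't fire.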


Calculating Front end performance metrics via Web API's ( navigation API and performance timeline API)

In order to calculate the First Contentful Paint, I ran the command below in my browser console:
window.performance.getEntriesByType('paint')
From that, I fetched the start time of First Contentful Paint, which is startTime: 710.1449999972829 ms.
But if I audit the same page via Lighthouse (from Chrome DevTools), the First Contentful Paint calculated by Lighthouse is 1.5 s.
I am trying to understand why there is such a wide difference between the two figures. I tried running the Lighthouse audit a couple of times, but the data still hardly matches the Web API data.
Can anyone explain why there is such a huge difference? Should I go with the data from the Web APIs, or should I consider the Lighthouse data the valid one?
Thank you for the great question, I learned something today because of it.
It appears that even in desktop view there is some throttling applied to the CPU; as far as I am aware, this wasn't always the case.
I found this article that explains the current throttling policy.
The key part here is as follows:
Starting with Chrome 80, the Audits panel is simplifying the throttling configuration:
Simulated throttling remains the default setting. This matches the setup of PageSpeed Insights and the Lighthouse CLI default, so this provides cross-tool consistency.
No throttling is removed as it leads to inaccurate scoring and misleading metric results.
Within the Audits panel settings, you can uncheck the Simulated throttling checkbox to use Applied throttling. For the moment, we are keeping this Applied throttling option available for users of the View Trace button. Under applied throttling, the trace matches the metrics values, whereas under Simulated things do not currently match up.
Point 3 is the main part. Basically, throttling is still applied to the CPU on desktop. Also note they say "for the moment", so this is obviously something they are considering removing in the future.
This is actually a really good idea: most developers are running powerful hardware, while most consumers are running cheap off-the-shelf laptops with i3 processors (or equivalent... or worse!).
As Google spends a lot of time refining Lighthouse, I would leave simulated throttling ON and use its results, as they will be more indicative of what an end user might see.
Switching off simulated throttling
If you want your trace results (or console Performance API results) to match, uncheck "Simulated throttling" at the top of the page.

How to properly benchmark / stress test a single-page web application

I am somewhat familiar with benchmarking/stress testing a traditional web application, and I find it relatively easy to start estimating its maximum load. With the tools I am familiar with (Apache ab, Apache JMeter), I can get a rough estimate of the number of requests/second a server with a standard app can handle. I can come up with a user story, create a list of pages I would like to check, and benchmark them separately. A lot of information on how to go from a novice like me to a master can be found on the internet.
But in my opinion, a lot of things are different when benchmarking a single-page application. The main entry point is the most expensive request, because the user loads the majority of what is needed for the full app experience (at least that is how my app behaves). After that, navigating elsewhere is just an AJAX request, waiting for JSON, and templating, so time-to-window-load is no longer a meaningful metric.
To add to the problem, I was not able to find any resources on how people do this properly.
In my particular case I have an SPA written with Knockout, sitting on an Apache server (though this is probably irrelevant). I would like a rough estimate of how many users my app can handle on a particular server. I am not looking for a tool recommendation (although one would be nice); I am looking for an experienced person to share their insight into the benchmarking process.
I suggest you test this application just like you would test any other web application: as you said, identify the common use cases, prepare scripts for them, run them in the appropriate mix, and analyze the results.
Web applications can break in many different ways and for different reasons. You are speculating that the first page load is heavy and the rest is just small AJAX calls. From experience I can tell you that this is sometimes misleading: for example, you may find that the heavy page is served from cache and the server is not working hard for it, while a small AJAX response requires a lot of computing power, runs a long database query, or hits locking in the code that causes it to break or slow down under load. That's why we do load testing.
You can do this with any load testing tool, ideally one that can handle these types of scripts with many dynamic values. My personal preference is WebLOAD by RadView.
I am dealing with a similar scenario: an SPA where the first page loads, and thereafter everything is done by requesting other HTML pages and/or making web service calls to get the data.
My goal is to stress test the web server and DB server.
My solution is to just create requests for those HTML pages (a very minor performance concern, IMO, since they are static and can be cached in the browser for a very long time) and for the web service calls. The biggest load will come from the requests for data/processing via the web service calls.
Capture all the requests for HTML and web service calls using a tool like Fiddler, and use any load testing tool (like JMeter) to replay these requests with as many virtual users as you want to test your application with.
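The capture-and-replay idea above can be sketched in a few lines of Python. This is a minimal illustration, not a JMeter replacement: the URLs in CAPTURED_REQUESTS are placeholders for whatever you captured in Fiddler, and "virtual users" here are just threads hammering the endpoints concurrently.

```python
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

# Placeholder endpoints; replace with the requests captured from Fiddler.
CAPTURED_REQUESTS = [
    "http://localhost:8000/",          # initial page load (the heavy one)
    "http://localhost:8000/api/data",  # typical AJAX / web service call
]

def fetch(url):
    """Fetch one URL and return (status code, elapsed seconds)."""
    start = time.perf_counter()
    with urlopen(url) as resp:
        resp.read()
        return resp.status, time.perf_counter() - start

def run_load_test(urls, virtual_users=10, iterations=5):
    """Replay the captured requests with a pool of 'virtual users' (threads)."""
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        futures = [pool.submit(fetch, url)
                   for _ in range(iterations) for url in urls]
        timings = [f.result() for f in futures]
    ok = sum(1 for status, _ in timings if status == 200)
    avg = sum(elapsed for _, elapsed in timings) / len(timings)
    return {"requests": len(timings), "ok": ok, "avg_seconds": avg}
```

A real tool adds what this sketch lacks: ramp-up schedules, dynamic values per virtual user, and percentile reporting rather than a single average.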

Running MTurk HITs on external website

I am implementing a website on which recruited MTurk workers will perform tasks. I plan to recruit workers using MTurk tasks and redirect them to an external website for the actual work. I have the following questions about this plan.
Are there any foreseeable problems with this approach to running HITs? If so, how can we mitigate them?
How should I implement the authentication procedure on my external site? For example, how can I make sure the people who come to the website to perform a specific task are indeed the same group of people recruited earlier for this particular task on MTurk?
When the workers finish the task, how should I integrate the payment procedure with MTurk based on their performance? For example, say a worker is owed $3 after finishing the task on my external site; is it possible for me to tell MTurk to pay him/her this amount programmatically?
The external site will be built using Python, if such detail matters.
Any suggestions and comments based on your experiences and insights in using MTurk would be much appreciated!
I am thinking through this for a similar project of mine, and I've experimented as a worker myself. Here is my plan; I hope it is of use to you. (I have not implemented it yet; it is based on an academic HIT I participated in as a worker.) Here goes:
A. Create a template that has language something like:
1. Please open this web site in a new browser window:
http://your-url.xyz.blah/tasks/${token}
2. Read and follow the instructions there.
3. After completing the task, you will receive a confirmation code. Paste
it here: [________]
B. Create some random tokens for your Mechanical Turk data file:
1A1B43B327015141
09F49F2D47823E0C
B5C49A18B3DB56F4
4E93BB63B0938728
CCE7FA60BFEB3198
...
(Generate these tokens from your app; it needs to cross-reference them.)
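Since the external site is Python, generating tokens like the ones above is a one-liner with the standard library's secrets module (cryptographically strong, so tokens are hard to guess; the function name here is illustrative):

```python
import secrets

def make_tokens(n, nbytes=8):
    """Generate n random one-time-use tokens, 16 uppercase hex characters
    each (8 random bytes), like the Mechanical Turk data file entries above."""
    return [secrets.token_hex(nbytes).upper() for _ in range(n)]
```

Store each token alongside its task when you generate it, so the app can cross-reference incoming URLs.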
C. Your app extracts the token from the URL, looks up the task, and does whatever it needs to do. I personally don't worry about people stumbling onto a URL, since each token is one-time use.
D. After a user completes the task on the external web site, the external app gives a confirmation code. The confirmation code should be random and opaque. Only your application will know if any particular code corresponds to a correct or incorrect answer. In fact, if you want, the correctness may not even be determined in real time -- it could be the result of an aggregation and/or comparison across multiple submissions.
E. Write some code to interact programmatically. Take the token and confirmation code supplied from the MTurk result and make sure they match with your external app. If they don't match, reject the HIT. If they match, check the correctness in your external app and approve or reject. You might consider a bonus pay structure.
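The matching step in (E) is just a lookup; a minimal sketch, where records and verify_submission are illustrative names (records maps each issued token to the confirmation code your external app handed out):

```python
def verify_submission(records, token, confirmation_code):
    """Decide whether a HIT submission should be approved.

    records: dict mapping token -> confirmation code issued by the external app.
    Returns "approve" when the submitted pair matches what we issued,
    "reject" otherwise. Correctness checks and bonuses would be layered
    on top of this basic authenticity check.
    """
    expected = records.get(token)
    if expected is None or expected != confirmation_code:
        return "reject"
    return "approve"
```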
So, to answer your particular questions:
I don't anticipate problems with the approach I described. That said, Mechanical Turk is both an art and a science. Perhaps more art. Writing good questions and paying Turkers appropriately is something you have to figure out with a combination of common sense, market research, and experimentation.
See (C) above. A token is designed to only be used once. Use long enough tokens and the probability of collision becomes very low.
See (E) above. The Mechanical Turk Developer Guide is a good place to start.
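For the payment question specifically: with today's AWS SDK for Python (boto3), approval and bonus payment are single API calls. approve_assignment and send_bonus are real boto3 MTurk operations; the helper below, which just assembles the request parameters (MTurk requires BonusAmount as a string), is an illustrative sketch, not part of any SDK.

```python
def build_bonus_request(worker_id, assignment_id, amount_usd, reason):
    """Assemble keyword arguments for boto3's mturk send_bonus call.
    The MTurk API requires BonusAmount as a string, e.g. "3.00"."""
    return {
        "WorkerId": worker_id,
        "AssignmentId": assignment_id,
        "BonusAmount": f"{amount_usd:.2f}",
        "Reason": reason,
    }

# Sketch of the actual calls (needs AWS credentials, so not run here):
# import boto3
# mturk = boto3.client("mturk")
# mturk.approve_assignment(AssignmentId=assignment_id)
# mturk.send_bonus(**build_bonus_request(worker_id, assignment_id, 3.00,
#                                        "Task completed on external site"))
```

A common pattern is to pay a small base reward on the HIT itself and deliver the performance-dependent remainder (like the $3 in the question) as a bonus.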
Please share your results back. Or have the Turkers send StackOverflow hundreds of postcards. :)
Notes:
I'm currently exploring qualification tests. I suspect they can be very useful.
I want to get a Turker's Worker ID in my external application, but I haven't figured that part out yet. I'm reading up on it; for example: Getting workerId by assignmentId
I am thinking about using the ExternalQuestion feature from the API: "... you can host the questions on your own web site using an "external" question. ... A HIT with an external question displays a web page from your web site in a frame in the Worker's web browser. Your web page displays a form for the Worker to fill out and submit. The Worker submits results using your form, and your form submits the results back to Mechanical Turk. Using your web site to display the form gives your web site control over how the question appears and how answers are collected."
You might also find PsiTurk to be useful: "PsiTurk is an open platform for conducting custom behavioral experiments on Amazon's Mechanical Turk. ... It is intended to provide most of the backend machinery necessary to run your experiment. It uses AMT's External Question HIT type, meaning that you can collect data using any website. As long as you can turn your experiment into a website, you can run it with PsiTurk!"

Ajax TruClient, scope of use and limitations?

Just got an LR 11 VuGen licence and tried TruClient; it looks great, and the Firefox-based script recording works really nicely. However, I have not found answers to the following:
1) Is TruClient limited in the same way as QuickTest Pro virtual user scripts (1 user per OS)?
2) It is called Ajax TruClient; does that mean it supports only JavaScript-based web pages, or all pages (standard PHP/HTML), including those with JavaScript?
Here are a few answers for ya:
1) TruClient is not limited like a GUI Vuser (WinRunner, or now QTP) to a single GUI session on a Load Generator. You can run multiple AJAX TruClient Virtual Users on a single Load Generator and they will run "invisibly" like a virtual user. You might find that the driver is much heavier (takes more memory and CPU), so you can't run as many vusers as with the Web HTTP/HTML vuser.
2) TruClient is not just for AJAX-based web pages - it can work on any web page that will render in a browser.
In addition to what Mark said, it's purely event-driven: when a user clicks a link, whatever that triggers is rendered, consumed as a resource, and subsequently displayed, as opposed to traditional headless implementations, which in return use fewer system resources.
This is one of the main caveats with TruClient (from experience): depending on the complexity of your script or workflow, a single simulated user can take a lot of resources, mainly memory in my case.
This is because for every virtual user that gets emulated, an instance of the Gecko web engine is spawned to replay the script, and this has its cost.
However, the level of realism comes very close to a typical user session and experience: you can, for example, set the typing speed, decide whether or not to simulate caching mechanisms, fine-tune pattern and image recognition, etc.
Overall, a mostly positive experience, which does, however, come at a certain price. Talk to your HP sales team (disclaimer: a company I don't work for; I just have experience with the product).
A little more ...
TC is a big win in some respects, as you can avoid a ton of nasty correlation. But it also has some downsides: the memory/CPU footprint can be huge, and the sync issues can be tricky.

Is there a way to automate Google Web Optimizer approvals?

Google Web Optimizer uses URLs for its A/B testing experiments. In addition to the production URL, we also have several pre-production environments through which software releases are pushed out.
We're only running our first experiment now, but we've set up five experiments in GWO -- one for each environment (and thus URL).
It's a bit of a pain to set up all these experiments manually -- especially when verifying the pages.
Is there some kind of API or other automated way to set these up?
I think this is what you're after: Website Optimizer API
No; as of May 2011, the GWO API above has been discontinued. A future version of the API is supposedly going to show up at some point, though, so it may become possible again.