Google Optimize experiment not showing data from GA4 server-side - optimization

I've been using Optimize on my site successfully for over a year. I'm transitioning to our new GA4 container. I understand the GA4 connection is in beta and will be deprecated; my understanding is that it should still work until September 2023. We've already run two experiences successfully in the GA4 container.
I have two tests that have been running for more than two days. The test variants are being applied to my website. However, no impressions are being recorded or shown in the "Reporting" tab for the experience. I don't see why that would be the case.
Screenshot of experiment
Optimize shows that it is properly installed when I click "Check Installation." We've installed Optimize through GTM.
We have a standard server-side tagging setup for GA4. The Optimize tag is set to fire before the GA4 configuration tag.
I expected experiment impressions to be recorded as usual on the experiment's "Reporting" page in Optimize.
I tried the Optimize installation troubleshooter. The experiments are running properly in my Universal Analytics Optimize container. I checked GTM and the Optimize tag is firing properly.
Can you help me troubleshoot this?

Related

Workflow from development to testing and merge

I am trying to formalize the development workflow and here's the first draft. I welcome suggestions on the process and any tweaks for optimization. I am pretty new when it comes to setting up processes, and it would be great to have feedback. P.S.: We are working on an AWS Serverless application.
Create an issue link in JIRA - is tested by. The link 'is tested by' has no relevance apart from correctly displaying the relation while viewing the story.
Create a new issue type in JIRA - Testcase. This issue type should have some custom fields to fully describe the test case.
For every user story, there will be a set of test cases that are linked to the user story using the Jira linking function. The test cases will be defined by the QA.
The integration/e2e test cases will be written in the same branch the developer is working in. E2E test cases will be written in a separate branch, as they live in a separate repository (open for discussion).
The Test case issue type should also be associated with a workflow that moves from states New => Under Testing => Success/Failure
Additionally, we could consider adding a capability in the CI system to automatically move the Test case to Success when the test case passes in CI (this should be possible using the JIRA API; a rough sketch follows this list). This is completely optional and we will most probably be doing it manually.
When all the Test cases related to a user story have moved to Success, the user story can then be moved to Done.
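In case we do automate it later, the CI step could be as small as the following (a rough sketch against the Jira REST API; the base URL, credentials, issue key, and the transition name "Success" are placeholders that would need to match our instance and workflow):

```python
# Sketch: transition a linked "Testcase" issue to Success from CI once its test passes.
# URL, credentials, issue key, and transition name are placeholders for our Jira setup.
import requests

JIRA_URL = "https://yourcompany.atlassian.net"
AUTH = ("ci-bot@yourcompany.com", "api-token")  # basic auth with an API token


def move_testcase_to_success(issue_key: str) -> None:
    # Look up the id of the "Success" transition available for this issue
    transitions = requests.get(
        f"{JIRA_URL}/rest/api/2/issue/{issue_key}/transitions", auth=AUTH
    ).json()["transitions"]
    success = next(t for t in transitions if t["name"] == "Success")

    # Perform the transition
    requests.post(
        f"{JIRA_URL}/rest/api/2/issue/{issue_key}/transitions",
        json={"transition": {"id": success["id"]}},
        auth=AUTH,
    ).raise_for_status()


# Example: called from a CI step after the corresponding test passes
# move_testcase_to_success("PROJ-123")
```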
A few points to note:
We will also be using https://marketplace.atlassian.com/apps/1222843/aio-tests-test-management-for-jira for test management and linking.
The QA should be working on the feature branch from day 1 to add the test cases. Working in the same branch will keep the QA and the developer always in sync. This should ensure that the developer is not blocked waiting for the test cases to be completed before the branch can be merged into develop.
The feature branch will be reviewed when the pull request is created by the developer. This is to ensure that the review is not pending until the test cases have been developed/passed. This should help with quick feedback.
The focus here is on the "feature-oriented QA" process to ensure the develop branch is always release-ready and that only well-tested code is merged into the develop branch.
A couple of suggestions:
For your final status, consider using Closed rather than Success/Failure. Success and Failure are outcomes rather than states, and you may have other outcomes like Cancelled or Duplicate. You can use the Resolution field for the outcomes, or you could create a custom field for Success/Failure and decouple it from both the outcome and the status. You ideally do not want your issue jumping back and forth in your workflow; if Failure is a status, you set yourself up for a lot of back and forth.
You may also want to consider a status after New, such as Test Creation, for the writing of the test case, and a status after that such as Ready for Testing. This would allow you to see more specifically where the work is in the flow, and also capture the amount of time spent writing tests, how long test cases wait, and how much time is spent actually executing tests and remediating defects.
Consider adding a verification rule to your Story workflow that prevents a story from being closed until all the linked test cases are closed
AIO Tests for Jira, unlike other test management systems, does not clutter Jira by creating tests as issues, so you need not create an issue type at all.
With its zero setup time, you can simply start creating tests against stories. It has a workflow from Draft to Published (essentially equaling Ready for Testing).
The AIO Tests Jira panel shows the cases associated with a story and their last execution status, giving everyone from Product to the Developer a quick view of the story's testing progress.
You can also create testing tasks and see the entire execution cycle in the AIO Tests panel.
It also has a Jenkins plugin + REST APIs to make it part of your CI/CD process.

Workaround to seeing data factory v2 debug runs

I realise a debug run is normally not visible in the Data Factory v2 UI after closing the browser window; however, I unfortunately had to restart my machine unexpectedly, and it's a long-running pipeline.
I thought the runs might be available via PowerShell, but I haven't had any luck.
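For reference, the kind of query I was hoping would surface it (sketched here with the azure-mgmt-datafactory Python SDK rather than PowerShell; the subscription, resource group, and factory names are placeholders) only seems to return triggered runs, not debug runs:

```python
# Sketch: list triggered pipeline runs and their per-activity durations.
# Subscription id, resource group, and factory name are placeholders.
# Debug/sandbox runs do not appear to show up in these results.
from datetime import datetime, timedelta

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")
window = RunFilterParameters(
    last_updated_after=datetime.utcnow() - timedelta(days=2),
    last_updated_before=datetime.utcnow(),
)

runs = client.pipeline_runs.query_by_factory("<resource-group>", "<factory-name>", window)
for run in runs.value:
    print(run.pipeline_name, run.run_id, run.status, run.duration_in_ms)
    # Per-activity timings for a given run (what I'd want for the load-test breakdown)
    activities = client.activity_runs.query_by_pipeline_run(
        "<resource-group>", "<factory-name>", run.run_id, window
    )
    for act in activities.value:
        print("  ", act.activity_name, act.status, act.duration_in_ms)
```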
The pipeline is likely still running.
We do have external logging, however ideally I'd like to see how long each activity is taking as I'm load testing.
And more importantly I do not want to do another run until I'm sure it's finished.... notably I'll run it from a trigger next time (just in case!).
EDIT:
It looks like a sandbox id is used, which is stored in the browser's local storage, and there appear to be undocumented API endpoints for gathering info using the sandbox id. But there doesn't appear to be a way of getting old sandbox ids, so I'm probably out of luck.
There is a button to view all debug runs.
Taken from Microsoft documentation:
To view a historical view of debug runs or see a list of all active debug runs, you can go into the Monitor experience.

Does vwo(visual website optimizer) slow down website load time in any manner?

Visual Website Optimizer is an A/B testing tool that lets a site owner compare his site against a modified version of it. It puts a simple snippet of code in your website and creates a new version of your web page. Then it shows one version of your webpage to 50% of your visitors and the other version to the remaining 50%. This way the owner can analyze which version of the site is generating more revenue and drop the other one.
So my question is: can VWO slow down the site loading time in any manner? And what are the drawbacks of using VWO on a website?
Yes, there's a little bit of additional lag in load time, as the script that makes the decision has to call home to the VWO servers, see what variations should be served, then serve that particular page.
The trick to minimising that loading lag is to put the script absolutely first on the target page, so that nothing else is happening before the script fires (but you'll always have lag).
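The decision itself is cheap once the script has loaded; conceptually, the 50/50 split is just deterministic bucketing of each visitor, something like this illustrative sketch (not VWO's actual implementation):

```python
# Illustrative sketch of a 50/50 split, not VWO's actual implementation.
# Hashing the visitor id means the same visitor always sees the same variation.
import hashlib


def assign_variation(visitor_id: str, experiment_id: str) -> str:
    digest = hashlib.md5(f"{experiment_id}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable number in 0..99
    return "variation" if bucket < 50 else "control"


print(assign_variation("visitor-123", "homepage-headline-test"))  # e.g. "control"
```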
This blog post by VWO sums everything up: https://vwo.com/blog/how-vwo-affects-site-speed/
They write in that post:
Having said all this, we are confident that VWO’s best-in-class technology coupled with optimal campaign settings will ensure that your website never slows down
However, I would suggest testing it out on your page and seeing whether it works for you or not.

Ubuntu + PBS + Apache? How can I show a list of running jobs as a website?

Is there a plugin/package to display status information for a PBS queue? I am currently running an apache webserver on the login-node of my PBS cluster. I would like to display status info and have the ability to perform minimal queries without writing it from scratch (or modifying an age old python script, ala jobmonarch). Note, the accepted/bountied solution must work with Ubuntu.
Update: In addition to Ganglia, as noted below, I also looked at the Rocks Cluster Toolkit, but I firmly want to stay with Ubuntu. So I've updated the question to reflect that.
Update 2: I've also looked at PBSWeb as well as MyPBS; neither one appears to suit my needs. The first is too out-of-date with the current system and the second is more focused on cost estimation and project budgeting. They're both nice, but I'm more interested in resource availability, job completion, and general status updates. So I'm probably just going to write my own from scratch -- starting Aug 15th.
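If I do write my own, the starting point will probably be something this small (a sketch assuming qstat is on PATH on the login node and Apache is configured to execute CGI scripts from the relevant directory):

```python
#!/usr/bin/env python3
# Sketch of a minimal CGI script for Apache: shell out to qstat and render the
# output as a web page. Assumes qstat is on PATH; anything beyond "show all jobs"
# would be layered on top of this.
import html
import subprocess


def main():
    try:
        result = subprocess.run(
            ["qstat", "-a"], capture_output=True, text=True, timeout=10
        )
        output = result.stdout or result.stderr
    except Exception as exc:  # qstat missing, timeout, etc.
        output = f"Could not run qstat: {exc}"

    print("Content-Type: text/html\n")
    print("<html><body><h1>PBS queue status</h1>")
    print(f"<pre>{html.escape(output)}</pre>")
    print("</body></html>")


if __name__ == "__main__":
    main()
```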
Have you tried Ganglia?
I have no personal experience with it, but a few sysadmins I know are using it.
The following pages may help:
http://taos.groups.wuyasea.com/articles/how-to-setup-ganglia-to-monitor-server-stats/3
http://coe04.ucalgary.ca/rocks-documentation/2.3.2/monitoring-pbs.html
my two cents
Have you tried using nagios: http://www.nagios.org/ ?

Website data retrieval

A recent article has prompted me to pick up a project I have been working on for a while. I want to create a web service front end for a number of sites to allow automated completion of forms and retrieval of data from the results, as well as from other areas of the sites. I have achieved a degree of success using Selenium and custom code; however, I am looking to extend this to the point where adding additional sites is a trivial task (maybe one which doesn't even require a developer).
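To give an idea of the direction, the goal is for each new site to be just a small declarative description driving a generic Selenium routine, roughly along these lines (a simplified sketch; the site name, field names, and selectors are made-up examples, and real sites would also need explicit waits and error handling):

```python
# Sketch: each site is a small declarative config (URL, form fields, result selector)
# and one generic routine drives Selenium from it. All names/selectors are examples.
from selenium import webdriver
from selenium.webdriver.common.by import By

SITES = {
    "example-insurer": {
        "url": "https://example.com/quote",
        "fields": {"postcode": "AB1 2CD", "age": "35"},  # form field name -> value
        "submit": (By.CSS_SELECTOR, "button[type=submit]"),
        "result": (By.CSS_SELECTOR, ".quote-price"),
    },
}


def fetch_result(site_name, overrides=None):
    cfg = SITES[site_name]
    driver = webdriver.Firefox()
    try:
        driver.get(cfg["url"])
        for field, value in {**cfg["fields"], **(overrides or {})}.items():
            driver.find_element(By.NAME, field).send_keys(value)
        driver.find_element(*cfg["submit"]).click()
        return driver.find_element(*cfg["result"]).text
    finally:
        driver.quit()
```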
The Kapow web data server looks like it achieves a lot of this; however, I am told it is quite expensive (I'm currently awaiting a quote). Has anyone had experience with it, or can anyone suggest alternatives (open source, ideally)?
Disclaimer: I realise the potential legal issues around automating data retrieval from third-party websites - this tool is designed to be used in a price comparison system, and all of the websites integrated with it will be included with the express permission of the owners. Where a site provides an API, that will clearly be the favoured approach.
Thanks
I realise it's been a while since I posted this; however, should anyone come across it: I have had lots of success using the WSO2 framework (particularly the Mashup Server) for this. For data mining tasks I have also used a Java library that it wraps - webharvest - which has achieved everything I needed.