BigQuery Google Analytics sessionsWithEvent metric - sql

I'm having trouble writing a BigQuery query that fetches the Google Analytics ga:sessionsWithEvent metric.
This is what I tried:
SELECT
  EXACT_COUNT_DISTINCT(CONCAT(fullVisitorId, STRING(visitId))) AS distinctVisitIds
FROM
  (TABLE_DATE_RANGE([xxxxxxxx.ga_sessions_], TIMESTAMP('2016-11-30'), TIMESTAMP('2016-12-26')))
WHERE
  hits.type = 'EVENT'
The logic in the query above seems sound: get all the rows with a hits.type of 'EVENT' and take the exact count of distinct fullVisitorId/visitId combinations, i.e. the number of unique sessions with an event.
But the numbers I get are close to, yet consistently higher than, what I get using the Query Explorer.
Thank you.
EDIT: Addressing the comment below about using a wider table date range with an exact date filter.
Widening the table date range by five days on each side makes the query:
SELECT
  EXACT_COUNT_DISTINCT(CONCAT(fullVisitorId, STRING(visitId))) AS distinctVisitIds
FROM
  (TABLE_DATE_RANGE([xxxxxxxx.ga_sessions_], TIMESTAMP('2016-11-25'), TIMESTAMP('2016-12-31')))
WHERE
  hits.type = 'EVENT'
  AND '20161130' <= date AND date <= '20161226'
Unfortunately I still get the same number.

Don't rely on the table dates; tables for later days can contain sessions from previous days. Instead, use a wider date range in the FROM clause and filter the exact date range on the columns.
AFAIK the Query Explorer also does approximations.
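For what it's worth, here is a minimal sketch of the same count in standard SQL, assuming migrating off legacy SQL is an option (the project/dataset name is a placeholder). Standard SQL's COUNT(DISTINCT ...) is exact, the wide _TABLE_SUFFIX range covers late-arriving sessions, and the date column pins the exact range:
SELECT
  COUNT(DISTINCT CONCAT(fullVisitorId, CAST(visitId AS STRING))) AS distinctVisitIds
FROM
  `project.dataset.ga_sessions_*`  -- placeholder project/dataset
WHERE
  _TABLE_SUFFIX BETWEEN '20161125' AND '20161231'  -- wide table range
  AND date BETWEEN '20161130' AND '20161226'       -- exact session dates
  AND EXISTS (SELECT 1 FROM UNNEST(hits) AS h
              WHERE h.type = 'EVENT')              -- session has at least one event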

How to bypass default parameter to include a range or better SQL?

EDITED (AGAIN): added tables and two screenshots (one of a Google Sheets chart and another showing multiple issues in Data Studio) to help demonstrate what I am seeing.
Short Version: I have created a parameter that helps me score trending topics based on the date-range filter. However, I want to be able to show a range of dates' worth of data, not just a specific date's worth. In theory, I could make the parameter a checklist with a huge range, but that doesn't seem efficient or sustainable down the road.
Disclaimer: I am about a week into SQL and Data Studio.
Long Version: We are tracking trends over time from a specific customer data set. I'd like to make it so that when a user adjusts the time range, each topic's "score" depends on the end date. For instance, every time the topic "Recession" is brought up, it is given a score, weighted by when it was said. I am using 365 as the highest possible score, so anything over a year old is null. So if "Recession" is referenced twice, once a week ago and once today, the average score for recession is 361.5; but if the topic "Talent Management" is referenced twice today, its score is 365; and so forth, across a growing list of 50+ topics tracked in 50+ specific communities.
Here is an example:
topics      | groups | entry_date
------------|--------|-----------
recession   | A      | 2022-11-24
talent mgt  | A      | 2022-11-24
recession   | B      | 2022-11-22
economy     | A      | 2022-11-22
recession   | C      | 2022-11-15
talent mgt  | B      | 2022-11-08
This score would then drive the bubble size on a chart where the y-axis is the count of unique groups referencing the topic and the x-axis is the range of average scores.
The goal is to be able to see which topics are the most common across groups, which ones are emerging trends, and which ones are dated trends, by having a range slider. That way users (colleagues in other departments) can play with the date range and "see" the bubbles move in location and size.
Example of a static chart in Google Sheets
I could then also use the same data and fields to measure the percentage of topics being discussed across groups based on the weighted averages against a time range.
In Google Sheets I can do this with an XLOOKUP against a tab called 'scales' that has a column of 0-365 next to a column of 365-0, plus a cell on a sheet where you can put any date as the point in time, which affects all the scores, tables, charts, etc. I used: =XLOOKUP(point_in_time - entry_date, 'scales'!A:A, 'scales'!B:B, "0")
In Data Studio's custom SQL I used:
SELECT
  *
FROM
  `qRaw_data`
WHERE
  DATE(_entry_dates_) BETWEEN
    PARSE_DATE('%Y%m%d', @DS_START_DATE) AND
    PARSE_DATE('%Y%m%d', @DS_END_DATE)
  AND @pit_date_diff = DATE_DIFF(
    PARSE_DATE('%Y%m%d', @DS_END_DATE),
    _entry_dates_,
    DAY
  )
Then I created a field, time_score, defined as:
avg((Pit_Date_Diff - 365) * (-1))
I have been Googling and YouTubing like crazy and think I either have to come up with a way to override the @pit_date_diff default value, OR I need to use a CASE WHEN in the custom query where a date_diff of 1 becomes 365, and so on; but when I try that I get all sorts of errors.
I would like the chart below to include all topics averaged over all entry dates, not just those that match the inputted parameter value.
Currently, I can only show specific entry dates due to the parameter.
I appreciate any and all help. I am a week into using Data Studio and am going cross-eyed Googling and YouTubing things. There is likely a better logical path to accomplish all this. Hoping for a holiday miracle.
Thanks in advance.
It turns out this was much easier than I realized... I added an AS alias to create a column and then created a field with the same metric that I had in Google Sheets:
SELECT
  *,
  DATE_DIFF(PARSE_DATE('%Y%m%d', @DS_END_DATE), _entry_dates_, DAY) AS q_time_diff
FROM
  `qRaw_data`
Then the score field is: (avg(q_time_diff)-365)*(-1)
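If it helps anyone, here is a hedged sketch of the same scoring done entirely in the custom query, assuming BigQuery behind Data Studio; the over-a-year-is-null rule from the question is expressed as a CASE, and the table/column names follow the post:
SELECT
  topics,
  groups,
  AVG(CASE
        WHEN DATE_DIFF(PARSE_DATE('%Y%m%d', @DS_END_DATE), _entry_dates_, DAY) <= 365
          THEN 365 - DATE_DIFF(PARSE_DATE('%Y%m%d', @DS_END_DATE), _entry_dates_, DAY)
        ELSE NULL  -- entries more than a year old score null, per the question
      END) AS time_score
FROM `qRaw_data`
GROUP BY topics, groups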
In case that helps any others in the future... ¯\_(ツ)_/¯
Happy Holidays!

SQL difference between two rows with timestamp

I'm working with SQL on PostgreSQL.
I have a table that describes images of cars, with columns camera, whn and reg.
camera is an int, whn is a timestamp (e.g. 2007-02-25 07:51:10) and reg is a string.
I am doing an assignment and I'm stuck on:
"Print the register plate (reg) of cars which have been photographed twice, by the same camera or a different one, where the difference between the photos is a minute or less."
Does anybody know how I can express this difference between two rows of the column whn, which needs to be equal to or less than 60 seconds?
Sort your table by timestamp.
Use lag() from the window functions to get the previous row's value.
Use extract() to see the difference between the two timestamps, as in the sketch below.
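A minimal PostgreSQL sketch of those three steps; the table name cars_images is an assumption, the columns follow the question:
SELECT DISTINCT reg
FROM (
  SELECT reg,
         whn,
         LAG(whn) OVER (PARTITION BY reg ORDER BY whn) AS prev_whn  -- previous photo of the same car
  FROM cars_images
) pairs
WHERE prev_whn IS NOT NULL
  AND EXTRACT(EPOCH FROM (whn - prev_whn)) <= 60;  -- a minute or less apart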

SQL Method for Cascading Workload Based on Rank and Available Hours

Recently I created an automated production-scheduling tool in Excel that assigns a rank to items being produced in the same process, and then uses that rank in combination with the workload to create a schedule.
It functions exactly the way it is intended to, but due to the large amount of data, and it being Excel, performance is very slow, which is why I am looking to move the calculations over to SQL.
The general logic is like this:
-Always produce everything from the first day before the second day
-Always produce items from an earlier rank before items from a later rank
You can see how this plays out in the image below, where the line has 21.5 hours available in a day, so items are produced on day 1 until the total equals 21.5, at which point the remainder is carried over to day 2, and so on.
I was able to do this in Excel using lengthy position-based formulas, but I am trying to think of a way to get the same result in SQL without having to rely on looking at the row above.
I am not sure how to convey something like 'subtract the production time of higher-priority items produced on the same day from the available time'.
I apologize if the question is unclear, but any advice would be appreciated.
Image of Production Hours Cascading by Priority and Day
Example of Position-Based Formula
Thanks to shawnt00, that put me in the right direction. Ultimately I had to modify the case statements a bit to go off of the cumulative total instead, but I was able to get the desired results using a SUM() OVER (PARTITION BY ... ORDER BY ...) statement.
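A hedged sketch of that cumulative-total idea, not the exact query from the answer; the table and column names (work_items, line_id, rank_no, hours) and the 21.5-hour daily capacity are assumptions taken from the example:
SELECT
  line_id,
  rank_no,
  hours,
  SUM(hours) OVER (PARTITION BY line_id
                   ORDER BY rank_no
                   ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS cum_hours,
  -- the day on which an item finishes: cumulative hours over daily capacity, rounded up
  CEILING(SUM(hours) OVER (PARTITION BY line_id
                           ORDER BY rank_no
                           ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) / 21.5) AS day_no
FROM work_items;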

Double or triple timestamp issue

I am using SQL Assistant, and my data brings in snapshots from a huge database in the form of timestamps. Occasionally the snapshots come in multiples per hour. The data is correct; multiple snapshots do happen within an hour from time to time, not always, but it does happen.
I am bringing this into Spotfire and viewing by the hour, and when more than one snapshot happens in the hour, the data shows as doubled.
I only want to display one per hour, preferably the last (max) timestamp for the hour. Example: for the 7 AM hour, the data has a snapshot for 7:10 AM and one for 7:55 AM.
These are correct, but I only want to display the last (max) timestamp, 7:55 AM in this case. I can't figure out how to do this in Spotfire, so I am leaning towards a fix in SQL. How can I display only one row for each hour?
You'd do this similarly to how you'd probably do it in SQL -- using a ranking/rownumber function.
The basic way Rank in Spotfire works is Rank(Order columns, order direction, partitioned columns, tie method)
You need to partition by the combination of Date and Hour, and then sort descending by your timestamp column.
So the code to identify the rows that you want to isolate should be something along the lines of:
Rank([TimestampColumn], "desc", Date([TimestampColumn]), Hour([TimestampColumn]), "ties.method=first")
What you do with it from here will depend on how you plan to use the data. For example, you can Limit Data Using Expression and set the expression above = 1, which will limit your table accordingly (helpful if you don't want your users to accidentally forget to filter), or you can create a calculated column which turns it into a flag of some form, like here:
If(Rank([TimestampColumn], "desc", Date([TimestampColumn]), Hour([TimestampColumn]), "ties.method=first") = 1, "Latest", "Duplicate")
Which allows your users to filter by this property. This way, they have the option to look at the extra rows.
Ultimately, though, if you only ever want to see these rows and have no use for the earlier records, I'd probably do it in SQL, if you have that ability. This reduces the number of rows you have to load into your analysis.
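A hedged sketch of that SQL route, assuming a dialect with window functions; the table and column names (snapshots, snapshot_ts) are placeholders:
SELECT *
FROM (
  SELECT s.*,
         ROW_NUMBER() OVER (
           PARTITION BY CAST(snapshot_ts AS DATE),
                        EXTRACT(HOUR FROM snapshot_ts)  -- one bucket per calendar hour
           ORDER BY snapshot_ts DESC                    -- latest snapshot first
         ) AS rn
  FROM snapshots s
) ranked
WHERE rn = 1;  -- keep only the last snapshot in each hour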

DAX sum different DateTime

I have a problem here. I would like to sum the daily work time of my employees based on the data (time2 - time1). Here is my measure:
Effective Minute Work Time = 24 * 60 * (LASTNONBLANK(time2, 0) - FIRSTNONBLANK(time1, 0))
It works daily, but if I drill up to weekly/monthly data it shows the wrong sum, as shown below:
What I want is a sum over the daily time differences (time2 - time1).
Thanks for your help :)
You have several approaches you can take: the hard way or the easier way :). The harder (at least for me :)) is to use DAX to do this. You would:
1) create a date table,
2) use the DAX CALCULATE function to evaluate your last-non-blank and first-non-blank values (you might need CALCULATETABLE, but I'm not sure; DAX experts jump in). Then subtract one from the other.
This will give you correct values for a given day for a given person. You can enforce the latter condition by putting a 'has one value' guard on the person name, so that your measure informs the report author if they're not using it right.
Doing the same for dates is a little trickier. In the example you show, you are including the date in the row grouping. But if you change your mind and instead want 'total hours worked by person' or 'total hours worked by everyone', you're not done with modelling yet.
Your next step is to use CALCULATETABLE in combination with CALCULATE to create a measure that returns the total. You'll use CALCULATETABLE to evaluate each date and the hours worked on that date by person, then use CALCULATE to summarize that all down to a single number. If you're not careful with your DAX (or report authoring) you might mix up which person you're summarizing for, so that your first/last non-blank are not at the person level. It gets intense quickly.
Your easier solution, though it might be more limited in its application (it really depends on your scenario), is to use the query to transform the data into a summary by day and person using the GROUP BY command. This will give you a row per person per day with their start and end times, from which you can quickly calculate the hours worked that day, and then quite easily build visuals on top of the summary data. Of course you give up some of the flexibility of a proper data model. However, if you have a date table, a person table, and your summary table, and then set up your relationships correctly, you can achieve answers to the most common questions.
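If the data comes from a SQL source, the summary step described above might look something like this sketch (the table and column names, and the T-SQL DATEDIFF, are assumptions):
SELECT
  person,
  CAST(time1 AS DATE) AS work_date,
  MIN(time1) AS first_start,
  MAX(time2) AS last_end,
  DATEDIFF(MINUTE, MIN(time1), MAX(time2)) AS minutes_worked  -- one row per person per day
FROM work_log
GROUP BY person, CAST(time1 AS DATE);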