How to count a feature x if y condition is true (bigquery) - sql

I am new to machine learning and am working on one of my first machine learning problems. I have done a lot of research on my own but could not find a solution to this question.
I am creating new variables from my dataset in BigQuery. One of the variables consists of counting a variable (x) if a condition on another variable (y) is met. I cannot use WHERE or HAVING since the condition only applies to the calculation of this specific variable.
The code is more or less like this:
COUNT(DISTINCT sessionid IF(date > "2018-06-01" && date < "2018-06-30"))
AS sessions_lastmonth
I know that this code is not correct; it is just a way to express, more or less, what I need.
The goal is to know the number of sessions in a certain period. I cannot simply count rows by date, because in one day you can have more than one session, and not every row is a different session (a session can span several rows, because the user can go through many pages in the same session).
Thank you!

Below is for BigQuery Standard SQL:
COUNT(DISTINCT IF(date >= '2018-06-01' AND date <= '2018-06-30', sessionid, NULL)) AS sessions_lastmonth
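For context, here is a minimal sketch of how that expression might sit in a complete query. The table name and the userid grouping column are assumptions for illustration, not part of the question:
#standardSQL
SELECT
  userid,
  -- sessionid is only counted when the row's date falls in June 2018;
  -- all other rows contribute NULL, which COUNT(DISTINCT ...) ignores
  COUNT(DISTINCT IF(date BETWEEN '2018-06-01' AND '2018-06-30', sessionid, NULL)) AS sessions_lastmonth
FROM `project.dataset.sessions`  -- hypothetical table
GROUP BY userid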

You said you couldn't use WHERE so I am going to assume that the answer that proposes you use WHERE won't fly?
I think you might need to use CASE instead, so your query might look something like:
COUNT(DISTINCT CASE WHEN date > "2018-06-01" AND date < "2018-06-30" THEN sessionid ELSE NULL END) AS sessions_lastmonth
That syntax might not be right, but it might help you onto the right track?
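To see that both forms agree, here is a small self-contained check that runs in BigQuery Standard SQL; the inline data is made up for illustration:
#standardSQL
WITH t AS (
  SELECT DATE '2018-06-02' AS day, 'a' AS sessionid UNION ALL
  SELECT DATE '2018-06-02', 'a' UNION ALL
  SELECT DATE '2018-06-15', 'b' UNION ALL
  SELECT DATE '2018-07-01', 'c'
)
SELECT
  COUNT(DISTINCT IF(day BETWEEN '2018-06-01' AND '2018-06-30', sessionid, NULL)) AS via_if,
  COUNT(DISTINCT CASE WHEN day BETWEEN '2018-06-01' AND '2018-06-30' THEN sessionid END) AS via_case
FROM t
-- both columns come out as 2 ('a' and 'b'); the July session is ignored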

Related

How to set a max range condition with timescale time_bucket_gapfill() in order to not fill real missing values?

I'd like some advice on whether what I need to do is achievable with Timescale functions.
I've just found out I can use time_bucket_gapfill() to complete missing data, which is amazing! I need data every 5 minutes, but I can receive 10-minute, 30-minute, or 1-hour data, so the function helps me complete the missing points in order to have only 5-minute points. I also use locf() to set each gap-filled value to the last value found.
My question is: can I set a maximum range when I set the last value found with locf(), so that it never reaches back more than 1 hour?
Example: if the last value found is more than 1 hour old, I don't want to fill the gap; I need to leave it empty to show that we have real missing values there.
I think I'm close to something with this, but apparently I'm not allowed to use locf() twice in the same CASE:
ERROR: multiple interpolate/locf function calls per resultset column not supported
Does somebody have an idea how I can resolve this?
How to reproduce:
Create table powers
CREATE table powers (
delivery_point_id BIGINT NOT NULL,
at timestamp NOT NULL,
value BIGINT NOT NULL
);
Create hypertable
SELECT create_hypertable('powers', 'at');
Create indexes
CREATE UNIQUE INDEX idx_dpid_at ON powers(delivery_point_id, at);
CREATE INDEX index_at ON powers(at);
Insert data for one day, one delivery point, one point every 10 minutes
INSERT INTO powers SELECT 1, at, round(random()*10000) FROM generate_series(TIMESTAMP '2021-01-01 00:00:00', TIMESTAMP '2021-01-02 00:00:00', INTERVAL '10 minutes') AS at;
Remove three hours of data from 4am to 7am
DELETE FROM powers WHERE delivery_point_id = 1 AND at < '2021-01-01 07:00:00' AND at > '2021-01-01 04:00:00';
The query that needs to be fixed
SELECT
time_bucket_gapfill('5 minutes', at) AS point_five,
avg(value) AS avg,
CASE
WHEN (locf(at) - at) > interval '1 hour' THEN null
ELSE locf(avg(value))
END AS gapfilled
FROM powers
GROUP BY point_five, at
ORDER BY point_five;
Actual: ERROR: multiple interpolate/locf function calls per resultset column not supported
Expected: Gapfilled values each 5 minutes except between 4am and 7 am (real missing values).
This is a great question! I'm going to provide a workaround for how to do this with the current functionality, but I think it'd be great if you'd open a GitHub issue as well, because there might be a way to add an option for this that doesn't require a workaround like this.
I also think your attempt was a good approach and just requires a few tweaks to get it right!
The error you're seeing means that we can't have multiple interpolate/locf calls in a single column. That limitation is pretty easy to work around, as we can just shift both of them into a subquery, but that's not enough on its own. The other thing we need to change is that locf only works on aggregates right now; you're trying to use it on a column (at) that isn't aggregated, which isn't going to work, because it wouldn't know which of the values of at in a time_bucket to "pull forward" for the gapfill.
Now, you said you want to fill data as long as the previous point wasn't more than one hour ago. We can take the last value of at in the bucket by using last(at, at); this is also the max(at), so either of those aggregates would work. We put that into a CTE (common table expression, or WITH query) and then do the CASE statement outside, like so:
WITH filled as (SELECT
time_bucket_gapfill('5 minutes', at) AS point_five,
avg(value) AS avg,
locf(last(at, at)) as filled_from,
locf(avg(value)) as filled_avg
FROM powers
WHERE at BETWEEN '2021-01-01 01:30:00' AND '2021-01-01 08:30:00'
AND delivery_point_id = 1
GROUP BY point_five
ORDER BY point_five)
SELECT point_five,
avg,
filled_from,
CASE WHEN point_five - filled_from > '1 hour'::interval THEN NULL
ELSE filled_avg
END as gapfilled
FROM filled;
Note that I’ve tried to name my CTE expressively so that it’s a little easier to read!
Also, I wanted to point out a couple other hyperfunctions that you might think about using:
heartbeat_agg is a new/experimental one that will help you determine periods when your system is up or down, so if you're expecting points at least every hour, you can use it to find the periods where the delivery point was down or the like.
When you have more irregular sampling or want to deal with different data frequencies from different delivery points, I'd take a look at the time_weight family of functions. They can be more efficient than using something like gapfill to upsample, by letting you treat all the different sample rates similarly, without having to create more points and more work to do so. Even if you want to, for instance, compare sums of values, you'd use something like integral to get the time-weighted sum over a period based on the locf interpolation.
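As a rough illustration of that suggestion, a time-weighted hourly average over the powers table might look like the sketch below. It assumes the timescaledb_toolkit extension is installed, and the exact call shapes should be checked against the toolkit docs rather than taken as a tested query:
SELECT
    time_bucket('1 hour', at) AS bucket,
    -- time_weight('LOCF', ...) builds a time-weighted summary using
    -- last-observation-carried-forward weighting; average() reads it out
    average(time_weight('LOCF', at, value)) AS time_weighted_avg
FROM powers
WHERE delivery_point_id = 1
GROUP BY bucket
ORDER BY bucket;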
Anyway, hope all that is helpful!

BigQuery: SELECT in WHERE-clause with filter based on a value in the current row

I know the title is probably pretty stupid but I have a hard time phrasing it differently.
I have to use BigQuery at work at the moment for a report. BigQuery is connected to a Google Analytics view of ours. This gives us a dataset with one table for each day. The rows of the tables are user sessions on our site, while the columns hold some information about the sessions.
The problem I have is the following:
I want to select sessions with transactions, but only if the user was referred to our site by a certain referrer in the last x days before the transaction happened. I'm only familiar with basic SQL and not with any advanced concepts. It's really frustrating to me because this would be a no-brainer with any proper programming language given a .csv of the data, but I'm lacking knowledge of the relevant concepts in SQL.
#standardSQL
SELECT
COUNT(*)
FROM
`dataset.ga_sessions_2017*`
WHERE
totals.transactions > 0 AND
fullVisitorId IN (SELECT
fullVisitorId
FROM
`dataset.ga_sessions_2017*`
WHERE
trafficSource.source = "xyz.com"
) AND
< date difference thing>
I could filter for the date difference like I did with the trafficSource (referrer). The problem for me is that while "xyz.com" is a static thing, I'd need to reference the date value of the current row I'm in. So the date by which I'd filter the 2nd SELECT would be dynamically changing from row to row. Can anyone guide me on how this is usually done? This seems like a thing that would come up often.
I'm not familiar with the GA tables specifically, but having written some wildcard queries in BigQuery before, I think what you're looking for can be done using the _TABLE_SUFFIX pseudo column:
CAST(_TABLE_SUFFIX AS INT64) >= 1217
Where 1217 is today's date in MMDD format minus 3 days, assuming the table names are _20171217, _20171218, etc. Otherwise you can just use REPLACE to remove underscores before casting to an int. There are also functions that will generate today's date for you if you needed this query to run automatically.
Also, I think the fullVisitorId business could be replaced with a simple WHERE trafficSource.source = "xyz.com" but it's hard to say for sure without being able to run the query myself.
So the full query would look something like this:
#standardSQL
SELECT
COUNT(*)
FROM
`dataset.ga_sessions_2017*`
WHERE
totals.transactions > 0 AND
trafficSource.source = "xyz.com" AND
CAST(_TABLE_SUFFIX AS INT64) >= 1217
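Building on the note above about functions that generate today's date automatically, a sketch of the dynamic version might look like this (FORMAT_DATE and DATE_SUB are standard BigQuery functions; the 3-day window is just an example):
#standardSQL
SELECT
  COUNT(*)
FROM
  `dataset.ga_sessions_2017*`
WHERE
  totals.transactions > 0 AND
  trafficSource.source = "xyz.com" AND
  -- comparing MMDD suffixes as strings works because the wildcard
  -- already pins the year
  _TABLE_SUFFIX >= FORMAT_DATE('%m%d', DATE_SUB(CURRENT_DATE(), INTERVAL 3 DAY))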

sqlalchemy select by date column only x newest days

suppose I have a table MyTable with a column some_date (date type, of course) and I want to select the newest 3 months of data (or x days).
What is the best way to achieve this?
Please notice that the date should not be measured from today but rather from the date range in the table (which might be older than today)
I need to find the maximum date and compare it to each row - if the difference is less than x days, return it.
All of this should be done with sqlalchemy and without loading the entire table.
What is the best way of doing it? Must I have a subquery to find the maximum date? How do I select the last X days?
Any help is appreciated.
EDIT:
The following query works in Oracle but seems inefficient (is max calculated for each row?), and I don't think it'll work for all dialects:
select * from my_table where (select max(some_date) from my_table) - some_date < 10
You can do this in a single query and without resorting to creating datediff.
Here is an example I used for getting everything in the past day:
from datetime import datetime, timedelta

one_day = timedelta(hours=24)
one_day_ago = datetime.now() - one_day
Message.query.filter(Message.created > one_day_ago).all()
You can adapt the timedelta to whatever time range you are interested in.
UPDATE
Upon re-reading your question, it looks like I failed to take into account that you want to compare two dates that are in the database rather than today's date. I'm pretty sure this sort of behavior is going to be database-specific. In Postgres, you can use straightforward arithmetic.
Operations with DATEs:
1. The difference between two DATEs is always an INTEGER, representing the number of DAYS difference:
DATE '1999-12-30' - DATE '1999-12-11' = INTEGER 19
2. You may add or subtract an INTEGER to a DATE to produce another DATE:
DATE '1999-12-11' + INTEGER 19 = DATE '1999-12-30'
You're probably using timestamps if you are storing dates in Postgres. Doing math with timestamps produces an interval object. SQLAlchemy works with timedeltas as a representation of intervals. So you could do something like:
from datetime import timedelta

one_day = timedelta(hours=24)
Model.query.join(ModelB, Model.created - ModelB.created < one_day)
I haven't tested this exactly, but I've done things like this and they have worked.
I ended up doing two selects - one to get the max date and another to get the data.
Using the datediff recipe from this thread, I added a datediff function and used the query q = session.query(MyTable).filter(datediff(max_date, some_date) < 10).
I still don't think this is the best way, but until someone proves me wrong, it will have to do...
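For reference, the two-select approach corresponds to plain SQL along these lines (Postgres/Oracle-style date arithmetic; :max_date is a bound parameter carrying the value fetched by the first statement):
-- step 1: fetch the anchor date
SELECT MAX(some_date) FROM my_table;

-- step 2: filter using the fetched value as a parameter
SELECT *
FROM my_table
WHERE some_date > :max_date - 10;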

How to get all rows from a table inserted on a particular date

I am trying to write a query that gets all the rows of a table for a particular date.
SELECT * FROM MY_TABLE WHERE COLUMN_CONTAINING_DATE='2013-05-07'
However, that does not work, because in the table COLUMN_CONTAINING_DATE contains data like '2013-05-07 00:00:01' etc. So this would work:
SELECT * FROM MY_TABLE WHERE COLUMN_CONTAINING_DATE>='2013-05-07' AND COLUMN_CONTAINING_DATE<'2013-05-08'
However, I don't want to go for option 2 because that feels like a hacky way. I would rather write a query that says "get me all the rows for a given date" and somehow not bother about the minutes and hours in COLUMN_CONTAINING_DATE.
I am trying to have this query run on both H2 and DB2.
Any suggestions?
You can do:
select *
from MY_Table
where trunc(COLUMN_CONTAINING_DATE) = '2013-05-07';
However, the version that you describe as a "hack" is actually better. By wrapping a function around the column, many SQL optimizers will not use indexes. With just direct comparisons, an index would definitely be used.
Use something like this
SELECT * FROM MY_TABLE WHERE DATE(COLUMN_CONTAINING_DATE)='2013-05-07'
You can ease this if you use the Temporal data management capability from DB2 10.1.
For more information:
http://www.ibm.com/developerworks/data/library/techarticle/dm-1204db2temporaldata/
If your concerns are related to the different data types (timestamp in the column, and a string containing a date), you can do this:
SELECT * FROM MY_TABLE
WHERE
COLUMN_CONTAINING_DATE >= '2013-05-07 00:00:00'
and COLUMN_CONTAINING_DATE < '2013-05-08 00:00:00'
and I'd pay attention to the formatting of the WHERE clause, because this will improve readability a lot if you have to look at your queries two months later. Just pick a style you prefer for ranges like "a <= x < b". Unfortunately, SQL's BETWEEN does not support this.
One could argue that the milliseconds are still missing, so perfectionists may append another ".0" in the timestamp ...

SQL : BETWEEN vs <= and >=

In SQL Server 2000 and 2005:
what is the difference between these two WHERE clauses?
which one I should use on which scenarios?
Query 1:
SELECT EventId, EventName
FROM EventMaster
WHERE EventDate BETWEEN '10/15/2009' AND '10/18/2009'
Query 2:
SELECT EventId, EventName
FROM EventMaster
WHERE EventDate >='10/15/2009'
AND EventDate <='10/18/2009'
(Edit: the second Eventdate was originally missing, so the query was syntactically wrong)
They are identical: BETWEEN is shorthand for the longer syntax in the question, which includes both endpoint values (EventDate >= '10/15/2009' AND EventDate <= '10/18/2009').
Use the alternative longer syntax where BETWEEN doesn't work because one or both of the values should not be included, e.g.
SELECT EventId, EventName
FROM EventMaster
WHERE EventDate >= '10/15/2009' AND EventDate < '10/19/2009'
(Note < rather than <= in second condition.)
They are the same.
One thing to be careful of: if you are using this against a DATETIME, the match for the end date will be the beginning of that day:
<= 20/10/2009
is not the same as:
<= 20/10/2009 23:59:59
(it would match against <= 20/10/2009 00:00:00.000)
Although BETWEEN is easy to read and maintain, I rarely recommend its use because it is a closed interval and as mentioned previously this can be a problem with dates - even without time components.
For example, when dealing with monthly data it is often common to compare dates BETWEEN first AND last, but in practice this is usually easier to write dt >= first AND dt < next-first (which also solves the time part issue) - since determining last usually is one step longer than determining next-first (by subtracting a day).
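For instance, selecting all of October 2009 with the half-open pattern just described (table and column names reused from the question for illustration):
SELECT EventId, EventName
FROM EventMaster
WHERE EventDate >= '2009-10-01'  -- first day of the month
  AND EventDate <  '2009-11-01'  -- first day of the next month; no last-day math, no time-part issue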
In addition, another gotcha is that the lower and upper bounds do need to be specified in the correct order (i.e. BETWEEN low AND high).
Typically, there is no difference - the BETWEEN keyword is not supported on all RDBMS platforms, but if it is, the two queries should be identical.
Since they're identical, there's really no distinction in terms of speed or anything else - use the one that seems more natural to you.
As mentioned by @marc_s, @Cloud, et al., they're basically the same for a closed range.
But any fractional time values may cause issues with a closed range (greater-or-equal and less-or-equal) as opposed to a half-open range (greater-or-equal and less-than) with an end value after the last possible instant.
So to avoid that the query should be rewritten as:
SELECT EventId, EventName
FROM EventMaster
WHERE (EventDate >= '2009-10-15' AND
EventDate < '2009-10-19') /* <<<== 19th, not 18th */
Since BETWEEN doesn't work for half-open intervals, I always take a hard look at any date/time query that uses it, since it's probably an error.
I have a slight preference for BETWEEN because it makes it instantly clear to the reader that you are checking one field for a range. This is especially true if you have similar field names in your table.
Say our table has both a transactiondate and a transitiondate. If I read
transactiondate between ...
I know immediately that both ends of the test are against this one field.
If I read
transactiondate>='2009-04-17' and transactiondate<='2009-04-22'
I have to take an extra moment to make sure the two fields are the same.
Also, as a query gets edited over time, a sloppy programmer might separate the two fields. I've seen plenty of queries that say something like
where transactiondate>='2009-04-17'
and salestype='A'
and customernumber=customer.idnumber
and transactiondate<='2009-04-22'
If they try this with a BETWEEN, of course, it will be a syntax error and promptly fixed.
I think the only difference is the amount of syntactical sugar on each query. BETWEEN is just a slick way of saying exactly the same as the second query.
There might be some RDBMS specific difference that I'm not aware of, but I don't really think so.
Logically, there is no difference at all.
Performance-wise there is, typically on most DBMSes, no difference at all.
There are infinitely many logically equivalent statements, but I'll consider three(ish).
Case 1: Two Comparisons in a standard order (Evaluation order fixed)
A >= MinBound AND A <= MaxBound
Case 2: Syntactic sugar (Evaluation order is not chosen by author)
A BETWEEN MinBound AND MaxBound
Case 3: Two Comparisons in an educated order (Evaluation order chosen at write time)
A >= MinBound AND A <= MaxBound
Or
A <= MaxBound AND A >= MinBound
In my experience, Case 1 and Case 2 do not have any consistent or notable differences in performance, as they are dataset-ignorant.
However, Case 3 can greatly improve execution times. Specifically, if you're working with a large data set and happen to have some heuristic knowledge about whether A is more likely to be greater than MaxBound or less than MinBound, you can improve execution times noticeably by using Case 3 and ordering the comparisons accordingly.
One use case I have is querying a large historical dataset with non-indexed dates for records within a specific interval. When writing the query, I will have a good idea of whether or not more data exists BEFORE the specified interval or AFTER the specified interval and can order my comparisons accordingly. I've had execution times cut by as much as half depending on the size of the dataset, the complexity of the query, and the amount of records filtered by the first comparison.
In this scenario col BETWEEN ... AND ... and col <= ... and col >= ... are equivalent.
The SQL Standard also defines the T461 Symmetric BETWEEN predicate:
<between predicate part 2> ::=
[ NOT ] BETWEEN [ ASYMMETRIC | SYMMETRIC ]
<row value predicand> AND <row value predicand>
Transact-SQL does not support this feature.
BETWEEN requires that the comparison values be given in ascending order. For instance:
SELECT 1 WHERE 3 BETWEEN 10 AND 1
-- no rows
<=>
SELECT 1 WHERE 3 >= 10 AND 3 <= 1
-- no rows
On the other hand:
SELECT 1 WHERE 3 BETWEEN SYMMETRIC 1 AND 10;
-- 1
SELECT 1 WHERE 3 BETWEEN SYMMETRIC 10 AND 1
-- 1
It works exactly like the normal BETWEEN, but it sorts the comparison values first.
db<>fiddle demo