Ruby ActiveRecord: Select rows where Date's Month and Day is any of a list of Month/Days - sql

The following query selects rows where the date_field is a particular month and day:
Table.where("DATE_FORMAT(date_field, '%m/%d') = ?", "03/05")
I was wondering if there is a way to do something similar where, instead, I supply a list of month/day strings, e.g. ["03/05", "02/12", "12/25"], and get all rows where date_field matches any of those month/day combinations.

You can use a SQL IN statement:
Table.where("DATE_FORMAT(date_field, '%m/%d') IN (?)", ["03/05", "02/12", "12/25"])
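To see the same idea outside Rails, here is a minimal runnable sketch in Python using SQLite. Note that SQLite has no DATE_FORMAT, so strftime('%m/%d', ...) stands in for it; the table, column, and sample data are invented for illustration.

```python
import sqlite3

# SQLite analogue of the MySQL query: strftime('%m/%d', ...) plays the
# role of DATE_FORMAT(date_field, '%m/%d'). Table/column names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (name TEXT, date_field TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("a", "2020-03-05"), ("b", "2021-12-25"), ("c", "2021-07-01")],
)

days = ["03/05", "02/12", "12/25"]
placeholders = ",".join("?" * len(days))  # one ? per list element
rows = conn.execute(
    f"SELECT name FROM events WHERE strftime('%m/%d', date_field) IN ({placeholders})",
    days,
).fetchall()
print(rows)  # rows 'a' and 'b' match, regardless of year
```

This is the same parenthesized-list expansion that Rails performs for you when you pass an array to IN (?).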

Related

BigQuery SQL: month-to-date query

I have daily data, and the code I'm using creates a report between two dates. I want to replace the fixed date interval with month to date, so that every time it runs it generates a month-to-date report based on the current month. Is there a simple way to do it? Thanks :)
An example using BigQuery Standard SQL:
SELECT
  *
FROM
  your_table
WHERE
  your_date_field BETWEEN
    DATE_TRUNC(CURRENT_DATE(), MONTH) -- start of the current month
    AND CURRENT_DATE()
DATE_TRUNC(CURRENT_DATE(), MONTH) truncates today's date to the first of the month, so the BETWEEN filter covers the month to date.
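The window that filter computes can be sketched in plain Python; this is just an illustration of the date arithmetic, with an invented helper name and an arbitrary example date:

```python
from datetime import date

# What the BigQuery filter computes: DATE_TRUNC(CURRENT_DATE(), MONTH)
# is the first day of the current month; CURRENT_DATE() is the upper bound.
def month_to_date_window(today: date) -> tuple[date, date]:
    start = today.replace(day=1)  # truncate to the first of the month
    return start, today

start, end = month_to_date_window(date(2019, 3, 15))
print(start, end)  # 2019-03-01 2019-03-15
```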

Get rows with timestamp from current week in Postgres?

I'm trying to build a weekly leaderboard of sorts and was wondering how I could get the rows with a timestamp that is within the current week (Monday to Sunday). I've tried:
SELECT id, COUNT(*) FROM Data WHERE created::date BETWEEN date $1 and date $2 GROUP BY id ORDER BY COUNT(*) DESC LIMIT 10;
But got stuck on how I could get the rows within the current week without hard coding them. created is a column of type TIMESTAMP.
I saw that there was something called YEARWEEK() in MySQL. Is there an equivalent in Postgres? If not, what can I do to get the desired result?
You can use date_trunc() with "week":
where created >= date_trunc('week', now())
This assumes that no created timestamps are in the future. Postgres follows the ISO standard of having weeks start on Mondays, which is what you want.
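The "most recent Monday" that date_trunc('week', now()) returns can be mirrored in Python; a small sketch with an invented helper name and illustrative dates:

```python
from datetime import date, timedelta

# Mirror of Postgres date_trunc('week', ...): step back to the most recent
# Monday, since ISO weeks start on Monday.
def start_of_iso_week(d: date) -> date:
    return d - timedelta(days=d.weekday())  # weekday(): Monday == 0

print(start_of_iso_week(date(2023, 6, 15)))  # a Thursday -> 2023-06-12, a Monday
```

Any row whose created timestamp is on or after that Monday belongs to the current week's leaderboard.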

Why do I get different results from my weekly code vs. per-week code?

Why am I getting different results when I compare my all-weeks query against running a query individually per week? Does it have something to do with the timestamp?
This is the code for all the weeks:
select date_trunc('week',date_joined) as week, COUNT(*) as count from auth_user
where date_joined>='01-01-2019' and date_joined<='31-03-2019'
group by week
order by week
This is the resulting table: [screenshot]
This is the code for getting an individual week:
select COUNT(*) from auth_user where date_joined>='31-12-2018' and date_joined<='06-01-2019'
This is the result for the first week: [screenshot]
I'd say that date_joined is a timestamp, and your second query misses the entries from January 6th.
Try with
AND date_joined < '2019-01-07'
Also, you should use ISO notation: YYYY-MM-DD
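A quick Python sketch of why the upper bound matters: comparing a timestamp against a bare date string is comparing against midnight, so almost the whole last day falls outside the range. The values below are illustrative.

```python
from datetime import datetime

# With a timestamp column, '2019-01-06' means midnight on Jan 6, so a user
# who joined later that day is excluded by <= '2019-01-06' ...
ts = datetime(2019, 1, 6, 14, 30)
print(ts <= datetime(2019, 1, 6))  # False: dropped by the inclusive date bound

# ... but kept by the half-open bound < '2019-01-07' the answer recommends.
print(ts < datetime(2019, 1, 7))   # True
```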

SQLITE strftime() function issue

SELECT strftime('%W', 'Week'), sum(income) FROM tableOne GROUP BY Week;
Format for date is a simple date: YYYY-MM-DD
PROBLEM: When run no value for the Week column is provided. Any suggestions?
There is data in the table and when the query is run the income is summarized by the date in the week column. Thing is, this column contains a date that may be any day of the week and often multiple different days of the same week. I need to summarize the income by week.
In SQL, 'Week' is a string containing four characters.
To reference the value in a column named "Week", remove the quotes: strftime('%W', Week).
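A runnable sketch of the corrected query via Python's sqlite3, with invented sample data. One extra point worth noting: grouping on strftime('%W', Week) alone would merge the same week number across different years, so '%Y-%W' is the safer grouping key.

```python
import sqlite3

# Corrected version of the question's query: the column Week is referenced
# unquoted, and '%Y-%W' keeps weeks from different years separate.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tableOne (Week TEXT, income REAL)")
conn.executemany(
    "INSERT INTO tableOne VALUES (?, ?)",
    [("2021-03-01", 10), ("2021-03-03", 5), ("2021-03-10", 7)],
)
rows = conn.execute(
    "SELECT strftime('%Y-%W', Week) AS wk, sum(income) "
    "FROM tableOne GROUP BY wk ORDER BY wk"
).fetchall()
print(rows)  # the first two dates fall in the same week and sum together
```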

Return last week data in hive

I am new to Hive and SQL.
Is there any way, when I run a query today with count fields, to fetch the last 7 days of data? (For example, if I run the query on a Monday, I should get the total count from last week's Monday through Sunday.) The date in my table is in the format 20150910 (yyyyMMdd).
Kindly help me with this.
You can use date_sub() in this case. Something like this should work...
select * from table
where date_field >= date_sub(current_date, 7)
assuming that the current day's data is not loaded yet. If you want to exclude the current day's data as well, add that to the filter condition too:
and date_field <= date_sub(current_date, 1)
current_date works if your Hive version is > 0.12;
otherwise, you can pull the date explicitly via to_date(from_unixtime(unix_timestamp())).
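Since the question says dates are stored as yyyyMMdd strings, here is a hedged Python sketch of the window boundaries the answer's filter computes, formatted the same way so a lexical string comparison in the WHERE clause lines up with the dates. The helper name and example date are invented.

```python
from datetime import date, timedelta

# Rolling 7-day window from the answer: date_sub(current_date, 7) as the
# lower bound, date_sub(current_date, 1) as the upper bound, rendered as
# yyyyMMdd strings to match the table's date format.
def last_seven_days(today: date) -> tuple[str, str]:
    start = today - timedelta(days=7)   # date_sub(current_date, 7)
    end = today - timedelta(days=1)     # exclude the current (partial) day
    return start.strftime("%Y%m%d"), end.strftime("%Y%m%d")

print(last_seven_days(date(2015, 9, 10)))  # ('20150903', '20150909')
```

Because yyyyMMdd strings sort the same way the dates do, comparing date_field against these bounds as strings gives the intended window.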