How do I create a table report with WTD (week-to-date) totals integrated inside the report?
One option I could think of is creating a stored procedure that returns a temp table; inside the sp, a loop inserts a WTD total row for each week. Another is doing it in Reporting Services. So far, no luck with either.
You can use grouping sets and order by. You don't show what your data looks like, but the idea is:
select date, sum(sales), sum(orders)
from t
group by grouping sets ( (date), (year(date), datepart(week, date)) )
order by max(date), grouping(date);
Here is a db<>fiddle.
Note: This leaves out the "WTD" label, because that is a string and you seem to want to put it in a date column.
You can convert the date to a string and use coalesce() (or CASE logic using grouping()):
select coalesce(convert(varchar(255), date), 'WTD'),
Here is a db<>fiddle.
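Since the fiddle isn't reproduced here, a self-contained sketch of the same idea follows, with made-up sample data. SQLite (used so the example runs anywhere) has no GROUPING SETS or GROUPING(), so the equivalent UNION ALL form is shown, with a helper column standing in for grouping(date) in the ORDER BY:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (date TEXT, sales INTEGER, orders INTEGER);
INSERT INTO t VALUES
  ('2020-01-06', 100, 2),
  ('2020-01-07', 150, 3),
  ('2020-01-13', 200, 4);
""")

# GROUPING SETS ((date), (year, week)) is equivalent to a UNION ALL of
# the two grouped queries. is_wtd plays the role of GROUPING(date), so
# each week's WTD row sorts directly after that week's daily rows.
rows = conn.execute("""
SELECT label, sales, orders FROM (
    SELECT date AS label, date AS sort_date, 0 AS is_wtd,
           SUM(sales) AS sales, SUM(orders) AS orders
    FROM t GROUP BY date
    UNION ALL
    SELECT 'WTD', MAX(date), 1,
           SUM(sales), SUM(orders)
    FROM t GROUP BY strftime('%Y', date), strftime('%W', date)
)
ORDER BY sort_date, is_wtd
""").fetchall()
for r in rows:
    print(r)
```

Each daily row is followed by a 'WTD' row once its week ends; strftime('%W', ...) is a Monday-based week number, so adjust it if your weeks start on a different day.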
Related
I am trying to use date_trunc() on a date in a window function on BigQuery. I used to do this previously in Snowflake and everything went smoothly. Unfortunately, BigQuery tells me that the full date needs to be in the GROUP BY, which defeats the purpose of using date_trunc(). I wish to group by "year-month" and customer_id and give every customer a "rank" based on their orders per "year-month". Here's an example of my script:
select
id as customer_id,
date_trunc(date, month) as date,
count(1) as orders,
row_number() over (partition by date_trunc(date, month) order by count(1) desc) as customer_order
from table
group by 1,2
And I get this error code:
PARTITION BY expression references column date which is neither grouped nor aggregated
Does anyone know how to prevent this problem in an elegant manner? I know I could use a subquery / CTE to fix this, but I'm curious to understand why BigQuery prevents this operation.
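No answer is shown above, but the workaround the asker mentions is the usual one: aggregate in a CTE first, then rank the already-grouped rows. Window functions are evaluated after GROUP BY, so a PARTITION BY expression over the raw date column fails once date itself is no longer grouped. A minimal sketch, with SQLite standing in for BigQuery, strftime('%Y-%m', ...) standing in for date_trunc, and assumed table/column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER, date TEXT);
INSERT INTO orders VALUES
  (1, '2020-01-05'), (1, '2020-01-20'), (2, '2020-01-10'),
  (1, '2020-02-03'), (2, '2020-02-04'), (2, '2020-02-25');
""")

# Aggregate first, then apply the window function to the grouped rows,
# so PARTITION BY only ever references columns of the CTE.
rows = conn.execute("""
WITH monthly AS (
    SELECT id AS customer_id,
           strftime('%Y-%m', date) AS month,
           COUNT(1) AS orders
    FROM orders
    GROUP BY 1, 2
)
SELECT customer_id, month, orders,
       ROW_NUMBER() OVER (PARTITION BY month ORDER BY orders DESC) AS customer_order
FROM monthly
ORDER BY month, customer_order
""").fetchall()
for r in rows:
    print(r)
```

The same two-step shape (CTE, then window over the CTE) is what resolves the "neither grouped nor aggregated" error in BigQuery.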
I tried adding two numbers that are present in two different columns, but the total is not adding up when there are no numbers present in the second column (B). Please find the screenshot of the table and the query I was using to achieve this.
The value present in column A is not included in total sales.
The query I ran, which wasn't successful:
SELECT Date,
SUM(sales_a) as "total_a",
SUM(sales_b) as "total_b",
("total_a"+"total_b") as "total_sales"
FROM data_table
GROUP BY Date;
I would suggest:
SELECT Date,
SUM(sales_a) as "total_a",
SUM(sales_b) as "total_b",
COALESCE(SUM(sales_a), 0) + COALESCE(SUM(sales_b), 0) as "total_sales"
FROM data_table
GROUP BY Date;
I do know that Amazon Redshift allows the re-use of column aliases -- contravening the SQL standard. However, I find it awkward to depend on that functionality, and it can lead to hard-to-find errors if your column aliases match existing column names.
You can't reuse column aliases in the same scope, so your query should error. You need to repeat the SUM() expressions.
Then: if one of the sums returns NULL, it propagates to the result of the addition. You can use coalesce() to avoid that:
SELECT Date,
SUM(sales_a) as total_a,
SUM(sales_b) as total_b,
COALESCE(SUM(sales_a), 0) + COALESCE(SUM(sales_b), 0) as total_sales
FROM data_table
GROUP BY Date;
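To see the NULL propagation (and the coalesce() fix) side by side, here is a runnable sketch with invented sample data, using SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE data_table (Date TEXT, sales_a INTEGER, sales_b INTEGER);
INSERT INTO data_table VALUES
  ('2020-01-01', 10, 5),
  ('2020-01-02', 20, NULL);  -- no value in column B
""")

# naive_total goes NULL whenever either sum is NULL;
# total_sales substitutes 0 first, so the addition survives.
rows = conn.execute("""
SELECT Date,
       SUM(sales_a) AS total_a,
       SUM(sales_b) AS total_b,
       SUM(sales_a) + SUM(sales_b)                           AS naive_total,
       COALESCE(SUM(sales_a), 0) + COALESCE(SUM(sales_b), 0) AS total_sales
FROM data_table
GROUP BY Date
ORDER BY Date
""").fetchall()
for r in rows:
    print(r)
```

On the date with no column-B value, naive_total comes back NULL while total_sales still carries column A's amount.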
I want to be able to count the number of rows inserted in a table per second using SQL. The count has to be for all the rows in the table. Sometimes there could be 100 rows and other times 10, etc., so this is just for stats. I managed to count rows per day but need more detail. Any advice or scripts would be appreciated.
Thanks
If you truncate the datetime column to the second, you can then aggregate on it to get totals per second.
For example:
SELECT
CAST(dt AS DATE) as [Date],
MIN(Total) as MinRecordsPerSec,
MAX(Total) as MaxRecordsPerSec,
AVG(Total) as AverageRecordsPerSec
FROM
(
SELECT
CONVERT(datetime, CONVERT(char(19), YourDatetimeColumn, 120), 120) as dt,
COUNT(*) AS Total
FROM YourTable
GROUP BY CONVERT(char(19), YourDatetimeColumn, 120)
) q
GROUP BY CAST(dt AS DATE)
ORDER BY 1;
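The same two-level pattern (count per truncated second, then summarise per day) can be exercised portably; here SQLite's strftime() plays the role of the CONVERT(char(19), ..., 120) truncation, with invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE YourTable (YourDatetimeColumn TEXT);
INSERT INTO YourTable VALUES
  ('2020-01-01 10:00:00.100'),
  ('2020-01-01 10:00:00.700'),
  ('2020-01-01 10:00:01.200'),
  ('2020-01-02 09:30:00.000');
""")

# Inner query: truncate to the second and count rows per second.
# Outer query: roll those per-second counts up into per-day stats.
rows = conn.execute("""
SELECT date(dt) AS d,
       MIN(total) AS min_per_sec,
       MAX(total) AS max_per_sec,
       AVG(total) AS avg_per_sec
FROM (
    SELECT strftime('%Y-%m-%d %H:%M:%S', YourDatetimeColumn) AS dt,
           COUNT(*) AS total
    FROM YourTable
    GROUP BY 1
) q
GROUP BY d
ORDER BY d
""").fetchall()
for r in rows:
    print(r)
```

Two of the sample rows share the second 10:00:00, so the first day reports a max of 2 inserts per second.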
Well, it depends on the language you are using. One way would be to fetch from your DB and convert the date column to a timestamp, then group by each stamp, since each timestamp identifies one second.
OR
Alternatively, you can store timestamps in the DB instead of the actual date; then it will be easy to query.
OR
Use the UNIX_TIMESTAMP() function in MySQL to get the timestamp of a column; then you can do whatever comparison you want on it:
https://dev.mysql.com/doc/refman/5.5/en/date-and-time-functions.html#function_unix-timestamp
Hope this gives you an idea.
I am using date_trunc() in order to count events per day. I have a subquery that I use date_trunc() on. The problem is that the query returns multiple rows per date. Any ideas?
select
date_trunc('day',date_) date_,
count(download),
count(subscribe)
from
(select
min(users.redshifted_at) date_,
users.id_for_vendor download,
subs.id_for_vendor subscribe
from Facetune2_device_info_log users
left join Facetune2_usage_store_user_subscribed subs
on users.id_for_vendor=subs.id_for_vendor
group by users.id_for_vendor,subs.id_for_vendor) b
group by date_
order by date_
date_ is confusing, because it is both a column and an alias. Columns get resolved first. So this should fix your problem:
group by date_trunc('day', date_)
You can also fix it by using a different alias name, one not already used for a column.
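A small runnable illustration of the fix (repeating the full expression in GROUP BY rather than the bare name), with SQLite's date() standing in for date_trunc('day', ...) and an invented events table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE events (date_ TEXT, download TEXT);
INSERT INTO events VALUES
  ('2020-01-01 08:00:00', 'a'),
  ('2020-01-01 17:30:00', 'b'),
  ('2020-01-02 09:15:00', 'c');
""")

# Grouping by the full truncation expression guarantees one row per day;
# grouping by the bare name date_ would resolve to the source column
# (with its time-of-day part) and split each day into several rows.
rows = conn.execute("""
SELECT date(date_) AS date_, COUNT(download) AS downloads
FROM events
GROUP BY date(date_)
ORDER BY 1
""").fetchall()
for r in rows:
    print(r)
```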
Is it possible in PostgreSQL to select one column from a table within a particular date span (there is a date column) and -- here is the catch! -- add the values together? Like for making a sales report?
Based on your comment, I think you are referring to SUM(), which is an aggregate function:
SELECT SUM(amount)
FROM sales_orders
WHERE date BETWEEN '2011-03-06' and '2011-04-06' -- not sure what your date is.
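A quick end-to-end check of that query shape, with invented sales rows on either side of the date span, run through SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales_orders (date TEXT, amount INTEGER);
INSERT INTO sales_orders VALUES
  ('2011-03-01', 100),   -- before the span
  ('2011-03-10', 250),
  ('2011-04-01', 125),
  ('2011-04-10', 500);   -- after the span
""")

# Only the rows inside the BETWEEN span contribute to the sum.
total = conn.execute("""
SELECT SUM(amount)
FROM sales_orders
WHERE date BETWEEN '2011-03-06' AND '2011-04-06'
""").fetchone()[0]
print(total)
```

Note that BETWEEN is inclusive on both ends, so orders dated exactly on the boundary dates are counted.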
If I understand you correctly, you are looking for this:
SELECT sum(amount)
FROM sales_orders
WHERE date ...