I'm sure this is pretty straightforward, but I can't get my head round it at all.
In one of my DB tables, I have a column 'partitionDate'. It is populated every time a transaction is logged to the table, with a date such as 11-JAN-2023. So we could have 100 transactions, all with a partitionDate of 11-JAN-2023.
I've run a query to give me the total count for each distinct partitionDate:
SELECT partitionDate, COUNT(*)
FROM tablename
GROUP BY partitionDate
I'm trying to get a grand total at the bottom that shows me all of the totals added up which I guess will be a SUM but I can't work it out!
Thanks
In T-SQL:
SELECT partitionDate,
       COUNT(partitionDate)
FROM tablename
GROUP BY GROUPING SETS
(
    (partitionDate),
    ()
)
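If you also need to tell the grand-total row apart from a real NULL partitionDate, GROUPING() can flag it -- a rough sketch, assuming the same table and column names as above:
SELECT partitionDate,
       COUNT(*) AS total,
       GROUPING(partitionDate) AS is_grand_total   -- 1 only on the grand-total row
FROM tablename
GROUP BY GROUPING SETS ((partitionDate), ())
ORDER BY GROUPING(partitionDate), partitionDate;   -- grand total sorts last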
In MySQL it should be something like this:
SELECT partitionDate, COUNT(partitionDate)
FROM tablename
GROUP BY partitionDate WITH ROLLUP;
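With WITH ROLLUP the grand total comes back as an extra row where partitionDate is NULL. A small sketch that labels that row, using the same assumed names (and assuming partitionDate itself is never NULL):
SELECT COALESCE(partitionDate, 'Grand total') AS partitionDate,  -- NULL only on the rollup row
       COUNT(partitionDate) AS total
FROM tablename
GROUP BY partitionDate WITH ROLLUP;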
I have a sales data table with cust_ids and their transaction dates.
I want to create a table that stores, for every customer, their cust_id, their last purchased date (on the basis of transaction dates) and the count of times they have purchased.
I wrote this code:
SELECT
cust_xref_id, txn_ts,
DENSE_RANK() OVER (PARTITION BY cust_xref_id ORDER BY CAST(txn_ts as timestamp) DESC) AS rank,
COUNT(txn_ts)
FROM
sales_data_table
But I understand that the above code would give an output like this (attached example picture)
How do I modify the code to get an output like :
I am a beginner in SQL queries and would really appreciate any help! :)
This would be an aggregation query, which changes the table key from (customer_id, date) to (customer_id):
SELECT
    cust_xref_id,
    MAX(txn_ts) AS last_purchase_date,
    COUNT(txn_ts) AS count_purchase_dates
FROM
    sales_data_table
GROUP BY
    cust_xref_id
You are looking for the last purchase date and the count of distinct transaction dates (e.g. if a person buys twice on the same day, it should be counted as a single time).
Although you mentioned you want a count of dates, the sample data shows you want a count of distinct dates: customer 284214 transacted 9 times, but distinct will give you 7.
So, here is the SQL you can use to get your result.
SELECT
    cust_xref_id,
    MAX(txn_ts) AS last_purchase_date,
    COUNT(DISTINCT txn_ts) AS count_purchase_dates  -- note: DISTINCT counts each date only once
FROM sales_data_table
GROUP BY 1
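If it helps to see both counts side by side, here is a quick sketch using the same table and column names:
SELECT cust_xref_id,
       COUNT(txn_ts)          AS total_transactions,      -- counts every row
       COUNT(DISTINCT txn_ts) AS distinct_purchase_dates  -- counts each date once
FROM sales_data_table
GROUP BY cust_xref_id;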
Total novice here with an SQL SUM function question. The SUM function itself works as I expected it to:
select ID, sum(amount)
from table1
group by ID
There are several records for each ID and my goal is to summarize each ID on one row where the next column would give me the summarized amount of column AMOUNT.
This works fine; however, I also need to filter based on certain criteria on the summarized amount field, i.e. only look for results where the summarized amount is bigger than, smaller than, or between certain numbers.
This is the part I'm struggling with, as I can't seem to use the column AMOUNT, as this messes up the summarized results.
The column name for the summarized results is shown as "00002"; however, using this in the BETWEEN or > / < clause does not work either. I tried this:
select ID, sum(amount)
from table1
where 00002 > 1000
group by ID
No error message, just a blank result, even though there are plenty of summarized results with values over 1000.
Unfortunately I'm not sure which engine the database runs on, but it should be some IBM-based product.
The WHERE clause filters out individual rows that don't match the condition before they are aggregated.
If you want to do post-aggregation filtering, you need to use the HAVING clause.
HAVING applies the filter to the results after they have been grouped.
select ID, sum(amount)
from table1
group by ID
having sum(amount) > 1000
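If you ever need both kinds of filtering in the same query, they combine like this -- a small sketch against the same table, where the row-level condition on amount is just a made-up example:
select ID, sum(amount) as total_amount
from table1
where amount > 0                           -- hypothetical row-level filter (runs before grouping)
group by ID
having sum(amount) between 1000 and 5000   -- group-level filter (runs after aggregation)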
I have an SQL statement which I can't get the structure right on; when I run what I currently have, it says the syntax is wrong. I am looking for the result to look like this:
(screenshot of the desired result: churchcom.co.uk)
This is my query so far, but I don't think I am on the right track at all.
SELECT Name, ValueofTaught, Amount
FROM Activities
WHERE (Department = #Department)
ORDER BY Name
GROUP BY BurnhamGrade
(SUM ValueofTaught AND Amount
WHERE Departmetn = #Department)
The structure of the Activities table is like this:
(screenshot of the Activities table structure: churchcom.co.uk)
It sort of sounds like you're looking to group by two columns. I'm assuming the Value of Taught column is what gets rolled up for hours:
SELECT Department, Name, SUM([Value of Taught]) AS Hours, SUM(Amount) AS Pay
FROM Activities
GROUP BY Department, Name WITH ROLLUP
ORDER BY Department, Name
I have a table (named trxItemdata) which contains over 40 columns and 60 million rows. One of these columns, named ActivityDateDate, shows the date/year associated with each CustomerID (another column in the table).
What I would like to do is find the number of rows allocated to each year (2010, 2011, etc.), so that I get a table that looks like this in the results output:
Year Number of Rows
2011 100
2012 10000
2013 10000000
I was looking into the following code but am not too familiar with group by clauses:
select count(*) from trxItemdata
group by year(ActivityDateDate)
However, when I run this I get the following table, but I am not sure what it means:
No Column Name
33060000
27546960
2941697
Any help you could provide would be appreciated! Thanks!
try
select year(activityDateDate) as [Year], count(1) as [Number Of Rows]
from trxItemdata
group by year(ActivityDateDate)
order by year(ActivityDateDate)
Does your date column really have "DateDate"? :P
Your query is not naming the column. Try this:
SELECT YEAR(ActivityDateDate) AS [Year],
COUNT(*) AS NumberOfRows
FROM trxItemdata
GROUP BY YEAR(ActivityDateDate)
I am running the following queries against a database:
CREATE TEMPORARY TABLE med_error_third_party_tmp
SELECT `med_error_category`.description AS category,
       `med_error_third_party_category`.error_count AS error_count
FROM `med_error_category`
INNER JOIN `med_error_third_party_category`
        ON med_error_category.`id` = `med_error_third_party_category`.`category`
WHERE year = 2003
GROUP BY `med_error_category`.id;
The only problem is that when I create the temporary table and do a SELECT * on it, it returns multiple rows, but the query below only returns one row. It seems to always return a single row unless I specify a GROUP BY, but then it returns a percentage of 1.0 for every row, like it should with a GROUP BY.
SELECT category,
error_count/SUM(error_count) AS percentage
FROM med_error_third_party_tmp;
Here are the server specs:
Server version: 5.0.77
Protocol version: 10
Server: Localhost via UNIX socket
Does anybody see a problem with this that is causing the problem?
Standard SQL requires you to specify a GROUP BY clause if any column is not wrapped in an aggregate function (i.e. MIN, MAX, COUNT, SUM, AVG, etc.), but MySQL supports "hidden columns in the GROUP BY" -- which is why:
SELECT category,
error_count/SUM(error_count) AS percentage
FROM med_error_third_party_tmp;
...runs without error. The problem with the functionality is that because there's no GROUP BY, the SUM is the SUM of the error_count column for the entire table. But the other column values are completely arbitrary - they can't be relied upon.
This:
SELECT category,
error_count/(SELECT SUM(error_count)
FROM med_error_third_party_tmp) AS percentage
FROM med_error_third_party_tmp;
...will give you a percentage on a per row basis -- category values will be duplicated because there's no grouping.
This:
SELECT category,
SUM(error_count)/x.total AS percentage
FROM med_error_third_party_tmp
JOIN (SELECT SUM(error_count) AS total
FROM med_error_third_party_tmp) x
GROUP BY category
...will give you a percentage per category: the sum of that category's error_count values vs the sum of the error_count values for the entire table.
Another way to do it, without the temp table as a separate item...
select category, error_count/sum(error_count) "Percentage"
from (SELECT mec.description category
           , metpc.error_count
      FROM med_error_category mec
         , med_error_third_party_category metpc
      WHERE mec.id = metpc.category
        AND year = 2003
      GROUP BY mec.id
     ) t;  -- the derived table needs its own alias
I think you will notice that the percentage is unchanging over the categories. This is probably not what you want; you probably want to group the errors by category as well.
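A rough sketch of that variant, assuming the year column lives on med_error_third_party_category (adjust if it is actually elsewhere):
SELECT mec.description AS category,
       SUM(metpc.error_count) /
         (SELECT SUM(error_count)
          FROM med_error_third_party_category
          WHERE year = 2003) AS percentage   -- each category's share of all 2003 errors
FROM med_error_category mec
JOIN med_error_third_party_category metpc ON mec.id = metpc.category
WHERE metpc.year = 2003
GROUP BY mec.description;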