Query columns based on values of table - sql

I'm new to databases and SQL, so I don't know what I need to try for this. I want to solve this problem my senior has given me:
Passbook (table name)
Date    | Amount | Type
--------|--------|-------
14/3/19 | 48000  | Debit
13/2/19 | 75000  | Credit
9/7/19  | 65000  | Credit
12/6/19 | 15000  | Debit
Now I have to generate a query in this manner:
Month   | Debit | Credit
--------|-------|-------
13/2/19 | 0     | 75000
14/3/19 | 48000 | 0
12/6/19 | 15000 | 0
9/7/19  | 0     | 65000
Here the values of the Type column in my Passbook table have become columns in the query output, and I don't know how to generate it in this manner. Can anyone help me do this?
For the monthly sorting, I suppose I'm meant to use an ORDER BY clause.

A basic pivot query should work here:
SELECT
    Format(Month([Date])) AS Month,
    SUM(IIF(Type = 'Debit', Amount, 0)) AS Debit,
    SUM(IIF(Type = 'Credit', Amount, 0)) AS Credit
FROM yourTable
GROUP BY
    Format(Month([Date]));
If you instead want date-level output, then aggregate by the Date column directly.
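For the date-level output shown in the sample above, a minimal sketch (reusing the Passbook table and column names from the question) could look like this:
SELECT [Date],
       SUM(IIF(Type = 'Debit', Amount, 0)) AS Debit,
       SUM(IIF(Type = 'Credit', Amount, 0)) AS Credit
FROM Passbook
GROUP BY [Date]
ORDER BY [Date];
The ORDER BY gives the chronological sorting mentioned in the question, provided the Date column is stored as an actual date/time type rather than text.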


Remove row values and convert into single column value

I have a stored procedure returning a table that looks like this:
ID | Type | Price | Description
---|------|-------|------------
7  | J    | 50.00 | Job
7  | F    | 20.00 | Freight
7  | P    | 30.00 | Postage
7  | H    | 5.00  | Handling
I would like it to return the table like this:
ID | Type | Price | Description | FreightPrice
---|------|-------|-------------|-------------
7  | J    | 50.00 | Job         | 20.00
7  | P    | 30.00 | Postage     | 20.00
7  | H    | 5.00  | Handling    | 20.00
Is there a way that I can use a query such as:
SELECT * FROM Temp WHERE Type = 'F'
but return the 'Freight' row as a column instead with just the 'Price' value?
From what I have seen it appears that I may need to use the PIVOT operator to achieve this but that seems overly complex. Is there a way that I could achieve this result using a CASE or IF expression?
Based on the data you provided, there is exactly one row with the Description value 'Freight'. Assuming that is always the case, try:
select ID, Type, Price, Description,
       FreightPrice = (select Price
                       from mytable
                       where Description = 'Freight')
from mytable
where Description <> 'Freight'
If it's always the Freight row that gets moved out to its own column, you can hard-code this logic (assuming it's always a single row), as in:
select
    id,
    type,
    price,
    description,
    (select price from t where description = 'Freight') as freightprice
from t
where description <> 'Freight'
Note: this query will fail (the scalar subquery would return more than one value) if your table has more than one Freight row.
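To address the CASE part of the question: a conditional aggregate over a window can produce the same result without repeating the subquery. This is only a sketch, assuming SQL Server, at most one Freight row per ID, and the Temp table name from the question:
select ID, Type, Price, Description, FreightPrice
from (select ID, Type, Price, Description,
             -- pick up the Freight price for this ID before that row is filtered out
             max(case when Description = 'Freight' then Price end)
                 over (partition by ID) as FreightPrice
      from Temp
     ) t
where Description <> 'Freight';
The derived table is needed because the WHERE clause is evaluated before window functions, so filtering out the Freight row at the same query level would also remove it from the window.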

Access SQL - consolidated SUM

I have a table of several hundred thousand similar records and I am trying to consolidate multiple similar records into a more concise table. The SQL query I have used below doesn't give me accurate results when compared with the original table, but I am not sure why.
The new table is intended to pull all fields from the original table but consolidate duplicate records into a single unique record with a sum of the count, so the sum of Count should correspond exactly with the sum from the original table.
SELECT Date_mday, date_month, date_year, [Message#EventID], aaRequestType,
[Message#SecurityParameters#AccountNumber] ,
[Message#SecurityParameters#LogonUserID] ,
InstitutionPOBoxCountry,
[Message#SecurityParameters#RoleData] ,
Sum(Count) AS SumOfCount
FROM TempImport
GROUP BY Date_mday, date_month, date_year, [Message#EventID], aaRequestType, [Message#SecurityParameters#AccountNumber], [Message#SecurityParameters#LogonUserID], InstitutionPOBoxCountry, [Message#SecurityParameters#RoleData], Count;
I'm certain that this is straightforward to solve but I have tried a few different approaches and am pretty stumped.
My original table looks like this:
date_mday | date_month | date_year | Message#EventID | aaRequestType | Message#SecurityParameters#AccountNumber | Message#SecurityParameters#LogonUserID | InstitutionPOBoxCountry | Message#SecurityParameters#RoleData | count
-----------|------------|-----------|-----------------|---------------|------------------------------------------|----------------------------------------|-------------------------|-------------------------------------|-------
1 | Jan | 2017 | XML-INPUT | GetData | A1234 | AAA1234 | GB | VALIDATE | 1
1 | Jan | 2017 | XML-INPUT | GetData | A1234 | AAA1234 | GB | VALIDATE | 1
1 | Jan | 2017 | XML-INPUT | GetData | A1234 | AAA1234 | GB | VALIDATE | 1
1 | Jan | 2017 | XML-INPUT | GetData | A1234 | AAA1234 | GB | VALIDATE | 1
And the consolidated table would have a single line, but with the final column (SumOfCount) as 4.
count is the field you are aggregating. If you include it in the GROUP BY, you will get a separate row for each count value.
Because you want to sum the values:
SELECT Date_mday, date_month, date_year, [Message#EventID], aaRequestType,
       [Message#SecurityParameters#AccountNumber], [Message#SecurityParameters#LogonUserID],
       InstitutionPOBoxCountry, [Message#SecurityParameters#RoleData],
       Sum(Count) AS SumOfCount
FROM TempImport
GROUP BY Date_mday, date_month, date_year, [Message#EventID], aaRequestType,
         [Message#SecurityParameters#AccountNumber], [Message#SecurityParameters#LogonUserID],
         InstitutionPOBoxCountry, [Message#SecurityParameters#RoleData];
Your GROUP BY should only contain columns that are not part of aggregation functions.
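As a minimal illustration of that rule, take a hypothetical two-column table T(Category, Qty) containing the rows ('A', 1), ('A', 1), ('A', 2):
SELECT Category, SUM(Qty) AS SumOfQty
FROM T
GROUP BY Category;            -- one consolidated row: A, 4

SELECT Category, Qty, SUM(Qty) AS SumOfQty
FROM T
GROUP BY Category, Qty;       -- split by Qty: (A, 1, 2) and (A, 2, 2)
Including Count in your GROUP BY splits your rows in exactly the same way, which is why the totals did not match the original table.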

Access 2016 & SQL: Totaling two columns, then subtracting them

Say I have a MoneyIN and a MoneyOUT column. I wish to total these entire columns up so I have a sum of each, then I wish to subtract the total of the MoneyOUT column from the total of the MoneyIN column. I also want to display a DateOF column and possibly a description (I think I can do that by myself).
This would be the original database where I get my information from:
+-------------+------------------+---------+----------+-----------+
| Location ID | Location Address | Date Of | Money In | Money Out |
+-------------+------------------+---------+----------+-----------+
| 1 | blah | date | 10.00 | 0.00 |
| 2 | blah | date | 2,027.10 | 27.10 |
| 2 | blah | date | 0.00 | 2000.00 |
| 1 | blah | date | 0.00 | 10.00 |
| 3 | blah | date | 5000.00 | 0.00 |
+-------------+------------------+---------+----------+-----------+
I would like to be able to type in a location ID and then have results show up (in this example I type 2 for the location)
+---------+----------+-----------+------+
| Date Of | Money In | Money Out | |
+---------+----------+-----------+------+
| date | 2027.10 | 27.10 | |
| date | 0 | 2000 | |
| Total: | 2027.10 | 2027.10 | 0 |
+---------+----------+-----------+------+
I have tried other solutions (one of which is pointed out below); however, they don't show the sum of each entire column, they simply subtract MoneyOut from MoneyIn for each row. As of now, I am trying to do this in a query, but if there is a better way, please elaborate.
I am extremely new to SQL and Access, so please make the explanation understandable for a beginner like me. Thanks so much!
This is the table referred to below:
+-------------+-------+----------+-----------+-----------+
| Location ID | Date | Money IN | Money Out | Total Sum |
+-------------+-------+----------+-----------+-----------+
| 1 | date | 300 | 200 | |
| 1 | date | 300 | 200 | |
| 1 | date | 300 | 200 | |
| 1 | total | 900 | 600 | 300 |
+-------------+-------+----------+-----------+-----------+
The following should give you what you want:
SELECT DateOf, MoneyIn, MoneyOut, '' AS TotalSum
FROM YourTable
UNION
SELECT 'Total', SUM(MoneyIn) AS SumIn, SUM(MoneyOut) AS SumOut,
       SUM(MoneyIn - MoneyOut) AS TotalSum
FROM YourTable
Edit:
You do not need to alter very much to achieve what you want. To get Access to prompt for a parameter when running a query, you give a name for the parameter in square brackets; Access will then pop up a window prompting the user for this value. This parameter can also be used more than once in the query without Access prompting for it multiple times. So the following should work for you:
SELECT DateOf, MoneyIn, MoneyOut, '' AS TotalSum
FROM YourTable
WHERE LocationID = [Location ID]
UNION
SELECT 'Total', SUM(MoneyIn) AS SumIn, SUM(MoneyOut) AS SumOut,
       SUM(MoneyIn - MoneyOut) AS TotalSum
FROM YourTable
WHERE LocationID = [Location ID];
However, looking at your table design, I strongly encourage you to change it. You are including the address on every record. If you have three locations, but 100 records, then on average you are unnecessarily repeating each address more than 30 times. The "normal" way to avoid this would be to have a second table, Locations, which would have an ID and an Address field. You then remove address from YourTable, and in its place create a one-to-many relationship between the ID in Locations and the LocationID in YourTable.
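A minimal sketch of that two-table design in Access DDL (the table and field names here are illustrative, not taken from your database):
-- each address is stored exactly once
CREATE TABLE Locations (
    LocationID AUTOINCREMENT PRIMARY KEY,
    Address    TEXT(255)
);

-- every money record carries only the LocationID
CREATE TABLE MoneyRecords (
    RecordID   AUTOINCREMENT PRIMARY KEY,
    LocationID LONG,
    DateOf     DATETIME,
    MoneyIn    CURRENCY,
    MoneyOut   CURRENCY,
    CONSTRAINT FK_MoneyRecords_Locations
        FOREIGN KEY (LocationID) REFERENCES Locations (LocationID)
);
With that in place, each address lives once per location and the one-to-many relationship is enforced by the foreign key.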
It's a little unclear exactly what you expect without sample data, but I think this is what you want:
SELECT DateOf, SUM(MoneyIN) - SUM(MoneyOut)
FROM YourTable
GROUP BY DateOf
This will subtract the summed total of MoneyOut from the summed total of MoneyIn for each distinct DateOf.
Updated Answer
A UNION will let you append a 'Totals' record to the bottom of your result set:
SELECT *
FROM (
    SELECT CAST(DateOf AS varchar(20)) AS DateOf, MoneyIn, MoneyOut, '' AS NetMoneyIn
    FROM YourTable
    UNION
    SELECT 'Total:', SUM(MoneyIn), SUM(MoneyOut), SUM(MoneyIn) - SUM(MoneyOut)
    FROM YourTable
) A
ORDER BY CASE WHEN DateOf <> 'Total:' THEN 0 ELSE 1 END, DateOf
A few notes: I used a derived table to ensure that the 'Total' record sorts last, and I cast DateOf to a string (assuming it is a date), since otherwise you will have issues writing the string 'Total:' to that column.

SQL sum 12 weeks of data based on first sold date across different items

The database has thousands of individual items, each with multiple first sold dates and sales results by week. I need a total sum for each product's first 12 weeks of sales.
For previous individual queries, where we knew the start date, we used a SUM(CASE ...) approach. That is too manual with thousands of products to review, so we are looking for a smarter way to speed this up.
Can I build on this so the query finds the minimum first shop date and then sums the next 12 weeks of results? If so, how do I structure it, or is there a better way?
Columns in the database I will need to reference, with sample data:
PROD_ID | WEEK_ID | STORE_ID | FIRST_SHOP_DATE | ITM_VALUE
12345543 | 201607 | 10000001 | 201542 | 24,356
12345543 | 201607 | 10000002 | 201544 | 27,356
12345543 | 201608 | 10000001 | 201542 | 24,356
12345543 | 201608 | 10000002 | 201544 | 27,356
32655644 | 201607 | 10000001 | 201412 | 103,245
32655644 | 201607 | 10000002 | 201420 | 123,458
32655644 | 201608 | 10000001 | 201412 | 154,867
32655644 | 201608 | 10000002 | 201420 | 127,865
You can do something like this:
select itemid, sum(sales) as total_sales
from (select t.*,
             min(shopdate) over (partition by itemid) as first_shopdate
      from t
     ) t
where shopdate < first_shopdate + interval '84' day
group by itemid;
You don't specify the database, so this uses ANSI standard syntax. The date operations (in particular) vary by database.
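Since the sample data stores WEEK_ID and FIRST_SHOP_DATE as YYYYWW integers rather than real dates, the same idea can be written with week arithmetic instead of a day interval. This is only a sketch, assuming T-SQL-style integer division, a 52-week calendar, and a hypothetical table name SALES:
select PROD_ID,
       sum(ITM_VALUE) as first_12_weeks_sales
from (select t.*,
             -- each product's earliest first-shop week across all stores
             min(FIRST_SHOP_DATE) over (partition by PROD_ID) as first_week
      from SALES t
     ) t
-- (YYYY * 52 + WW) turns a YYYYWW value into a running week count,
-- so this keeps only rows within 12 weeks of the earliest week
where (WEEK_ID / 100) * 52 + WEEK_ID % 100
      < (first_week / 100) * 52 + first_week % 100 + 12
group by PROD_ID;
If each store's own first 12 weeks are wanted instead, partition (and group) by PROD_ID and STORE_ID.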
Hi Kirsty, try like this:
select a.Item, sum(sales) as total
from tableName a
JOIN (select Item, min(FirstSoldDate) as FirstSoldDate
      from tableName
      group by Item) b
  ON a.Item = b.Item
where a.FirstSoldDate between b.FirstSoldDate and dateadd(day, 84, b.FirstSoldDate)
group by a.Item
Thanks :)

T-SQL aggregate over contiguous dates more efficiently

I have a need to aggregate a sum over contiguous dates. I've seen solutions to similar problems that return the start and end dates, but those problems don't need to aggregate the data between the ranges. It's further complicated by the extremely large amount of data involved, to the point that a simple self join takes an impractical amount of time (especially since the start and end date fields are unindexed).
I have a solution involving cursors, but I've generally been led to believe that cursors can always be replaced with joins that execute more efficiently. So far, though, every set-based query I've tried that comes anywhere close to giving me the data I need takes at least an hour, while my cursor solution takes about 10 seconds. So I'm asking whether there is a more efficient answer.
The data includes both buy and sell transactions, and each returned row of aggregated contiguous dates also needs to list the transaction ID of the last sell that occurred before the first buy of that contiguous set of buy transactions.
An example of the data:
+------------------+------------+------------+------------------+--------------------+
| TRANSACTION_TYPE | TRANS_ID | StartDate | EndDate | Amount |
+------------------+------------+------------+------------------+--------------------+
| sell | 100 | 2/16/16 | 2/18/18 | $100.00 |
| sell | 101 | 3/1/16 | 6/6/16 | $121.00 |
| buy | 102 | 6/10/16 | 6/12/16 | $22.00 |
| buy | 103 | 6/12/16 | 6/14/16 | $0.35 |
| buy | 104 | 6/29/16 | 7/2/16 | $5.00 |
| sell | 105 | 7/3/16 | 7/6/16 | $115.00 |
| buy | 106 | 7/8/16 | 7/9/16 | $200.00 |
| sell | 107 | 7/10/16 | 7/13/16 | $4.35 |
| sell | 108 | 7/17/16 | 7/20/16 | $0.50 |
| buy | 109 | 7/25/16 | 7/29/16 | $33.00 |
| buy | 110 | 7/29/16 | 8/1/16 | $75.00 |
| buy | 111 | 8/1/16 | 8/3/16 | $0.33 |
| sell | 112 | 9/1/16 | 9/2/16 | $99.00 |
+------------------+------------+------------+------------------+--------------------+
It should produce results like the following:
+-----------+------------+------------+----------+
| Last_Sell | StartDate  | EndDate    | Amount   |
+-----------+------------+------------+----------+
| 101       | 6/10/16    | 6/14/16    | $22.35   |
| 101       | 6/29/16    | 7/2/16     | $5.00    |
| 105       | 7/8/16     | 7/9/16     | $200.00  |
| 108       | 7/25/16    | 8/3/16     | $108.33  |
+-----------+------------+------------+----------+
Right now I use queries to split the data into buys and sells, and just walk through the buy data, aggregating as I go, inserting into the return table every time I find a break in the dates, and I step through the sell table until I reach the last sell before the start date of the set of buys.
Walking linearly through the cursors gives me O(n) computational time. Even though cursors are orders of magnitude less efficient per row, the work is still linear, while I suspect the joins I would need would be at least O(n log n). With the ridiculous amount of data I'm working with, the per-row inefficiency of cursors is swamped by anything beyond linear time.
If I assume that the transaction id increases along with the dates, then you can get the last sell's transaction id using a cumulative max. Then the adjacency can be found using similar logic, but with a lag first:
with cte as (
    select t.*,
           -- running max of sell ids = the last sell at or before this row
           max(case when transaction_type = 'sell' then trans_id end)
               over (order by trans_id) as last_sell,
           lag(enddate) over (partition by transaction_type order by trans_id) as prev_enddate
    from t
)
select last_sell,
       min(startdate) as startdate,
       max(enddate) as enddate,
       sum(amount) as amount
from (select cte.*,
             -- start a new group when a buy neither overlaps nor directly follows
             -- the previous buy's end date
             sum(case when startdate <= dateadd(day, 1, prev_enddate) then 0 else 1 end)
                 over (partition by last_sell order by trans_id) as grp
      from cte
      where transaction_type = 'buy'
     ) x
group by last_sell, grp;