How do I get the following result in SQL 2000? - sql

Table 1 (2 columns):

pack no   Serial no
-------   ---------
20        12
20        13
20        14
20        15
20        16
20        17
30        18
30        19
30        20
30        21
30        22

The result I need is:

pack no   Serial no
-------   ---------
20        12-17
30        18-22

If these fields are all numbers (note this assumes the serial numbers within each pack are contiguous; MIN/MAX would hide any gaps):
SELECT packNo,
       CAST(min_serial AS VARCHAR(12)) + '-' + CAST(max_serial AS VARCHAR(12)) AS serial_no
FROM
(
    SELECT packNo,
           MIN(serialNo) AS min_serial,
           MAX(serialNo) AS max_serial
    FROM TableName
    GROUP BY packNo
) subtable

Please try:
select
[pack no],
CAST(MIN([Serial no]) AS NVARCHAR(10))+'-'+CAST(MAX([Serial no]) AS NVARCHAR(10)) as [Serial no]
from
YourTable
group by [pack no]

You could use the min() and max() functions to do that. It would be better to use underscores in column names instead of spaces.
select [pack no], convert(varchar(10),min([Serial no])) + '-' +
convert(varchar(10),max([Serial no])) as [Serial no]
from yourTable
group by [pack no]
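For reference, a minimal repro you can run on SQL Server 2000 (the table name Packs is assumed; SQL 2000 has no multi-row VALUES clause, hence one INSERT per row):

CREATE TABLE Packs ([pack no] INT, [Serial no] INT)

INSERT INTO Packs VALUES (20, 12)
INSERT INTO Packs VALUES (20, 13)
INSERT INTO Packs VALUES (20, 14)
INSERT INTO Packs VALUES (20, 15)
INSERT INTO Packs VALUES (20, 16)
INSERT INTO Packs VALUES (20, 17)
INSERT INTO Packs VALUES (30, 18)
INSERT INTO Packs VALUES (30, 19)
INSERT INTO Packs VALUES (30, 20)
INSERT INTO Packs VALUES (30, 21)
INSERT INTO Packs VALUES (30, 22)

SELECT [pack no],
       CAST(MIN([Serial no]) AS VARCHAR(10)) + '-' +
       CAST(MAX([Serial no]) AS VARCHAR(10)) AS [Serial no]
FROM Packs
GROUP BY [pack no]

-- pack no   Serial no
-- 20        12-17
-- 30        18-22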

Related

My Sql PIVOT Query Is Not Working As Intended

I'm using the following SQL query to return a table with 4 columns: Year, Month, Quantity Sold and Stock_Code.
SELECT yr, mon, sum(Quantity) as Quantity, STOCK_CODE
FROM [All Stock Purchased]
group by yr, mon, stock_code
order by yr, mon, stock_code
This is an example of some of the data BUT I have about 3000 Stock_Codes and approx 40 x yr/mon combinations.
yr mon Quantity STOCK_CODE
2015 4 42 100105
2015 4 220 100135
2015 4 1 100237
2015 4 2 100252
2015 4 1 100277
I want to pivot this into a table which has a row for each SKU and columns for every Year/Month combination.
I have never used Pivot before so have done some research and have created a SQL query that I believe should work.
select * from
(SELECT yr,
mon, Quantity,
STOCK_CODE
FROM [All Stock Purchased]) AS BaseData
pivot (
sum(Quantity)
For Stock_Code
in ([4 2015],[5 2015] ...........
) as PivotTable
This query returns a table with Yr as col1, Mon as col2 and then 4 2015 etc as subsequent columns. Whereas I want col1 to be Stock_Code and col2 to show the quantity of that stock code sold in 4 2015.
Would really like to understand what is wrong with my code above please.
The problem with your query is the pivot column: you wrote FOR Stock_Code, so PIVOT turned the stock codes into columns and left yr/mon as rows, which is the transposed result you saw. You want to build a single month-year tag, pivot on that, and keep Stock_Code as the row key. The following query using dynamic PIVOT should do what you want:
CREATE TABLE #temp (Yr INT,Mnt INT,Quantity INT, Stock_Code INT)
INSERT INTO #temp VALUES
(2015,4,42,100105),
(2015,4,100,100105),
(2015,5,220,100135),
(2015,4,1,100237),
(2015,4,2,100252),
(2015,7,1,100277)
DECLARE @pvt NVARCHAR(MAX);
SET @pvt = STUFF(
    (SELECT DISTINCT N', ' + QUOTENAME(CONVERT(VARCHAR(10), Mnt) + ' ' + CONVERT(VARCHAR(10), Yr))
     FROM #temp FOR XML PATH('')), 1, 2, N'');
EXEC (N'
SELECT pvt.* FROM (
    SELECT Stock_Code
        ,CONVERT(VARCHAR(10), Mnt) + '' '' + CONVERT(VARCHAR(10), Yr) AS [Tag]
        ,Quantity
    FROM #temp) a
PIVOT (SUM(Quantity) FOR [Tag] IN (' + @pvt + ')) pvt');
The result is as below:
Stock_Code 4 2015 5 2015 7 2015
100105 142 NULL NULL
100135 NULL 220 NULL
100237 1 NULL NULL
100252 2 NULL NULL
100277 NULL NULL 1
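For comparison, here is the corrected query in static form, hand-listing just the three tags present in the sample #temp data (with the real data you would have to list all ~40 month/year columns, which is why the dynamic version above is preferable):
SELECT pvt.*
FROM (
    SELECT Stock_Code,
           CONVERT(VARCHAR(10), Mnt) + ' ' + CONVERT(VARCHAR(10), Yr) AS Tag,
           Quantity
    FROM #temp) a
PIVOT (SUM(Quantity) FOR Tag IN ([4 2015], [5 2015], [7 2015])) pvt;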
You can achieve this without using PIVOT, via conditional aggregation (add one SUM(CASE ...) branch per month/year column):
SELECT P.STOCK_CODE,
       SUM(CASE WHEN P.yr = 2015 AND P.mon = 1 THEN P.Quantity ELSE 0 END) AS [1 2015],
       SUM(CASE WHEN P.yr = 2015 AND P.mon = 2 THEN P.Quantity ELSE 0 END) AS [2 2015],
       SUM(CASE WHEN P.yr = 2015 AND P.mon = 3 THEN P.Quantity ELSE 0 END) AS [3 2015]
FROM [All Stock Purchased] P
GROUP BY P.STOCK_CODE;

MS Access - Sub Query with Running Total using DSUM with filter

To generate a running total of Sales Qty in MS Access, I used the query below; it works as expected:
SELECT ID, [Product Line], DSUM("[Qty]","[SalesData]","[Product Line] like '*Electronics*' AND [ID] <=" & [ID]) AS RunningTotal FROM SalesData WHERE ([Product Line]) Like '*Electronics*';
Now I need to filter the records with RunningTotal < 100, so I ran the subquery below:
SELECT * FROM(
SELECT ID, [Product Line], DSUM("[Qty]","[SalesData]","[Product Line] like '*Electronics*' AND [ID] <=" & [ID]) AS RunningTotal, FROM SalesData WHERE ([Product Line]) Like '*Electronics*')
DSUM("[Qty]","[","[Product Line] like '*Electronics*' AND [ID] <=" & [ID]) < 100;
It is not working, and Access freezes repeatedly while running this query.
Data Table
ID Product Line Qty RunningTotal
1 Electronics 15 15
2 R.K. Electricals 20 20
3 Samsung Electronics 10 25
4 Electricals 30 50
5 Electricals 45 95
6 Electronics Components 18 43
7 Electricals 25 120
8 Electronics 50 93
9 Electricals Machines 65 185
10 Electronics 15 108
11 ABC Electronics Ltd 52 160
12 Electricals 15 200
Here RunningTotal is a calculated field (not a table field).
The Electricals running total and the Electronics running total are tracked separately.
Expected output for Product Line like Electronics with RunningTotal < 100
ID Product Line Qty RunningTotal
1 Electronics 15 15
3 Samsung Electronics 10 25
6 Electronics Components 18 43
8 Electronics 50 93
Could you please help me to rectify the above query?
Thanks in advance.
Rather than using domain aggregate functions (such as DSum), which are notoriously slow, I would suggest using a correlated subquery, such as the following:
select q.* from
(
select t.id, t.[product line], t.qty,
(
select sum(u.qty)
from salesdata u
where u.[product line] = t.[product line] and u.id <= t.id
) as runningtotal
from salesdata t
where t.[product line] like "*Electronics*"
) q
where q.runningtotal < 100
EDIT: To match the expected output, the running total must accumulate across every row whose Product Line matches *Electronics* (not just rows with an identical Product Line value), so correlate on the wildcard match instead:
select t.*, q.runningtotal from salesdata t inner join
(
select t.id,
(
select sum(u.qty)
from salesdata u
where u.[product line] like "*Electronics*" and u.id <= t.id
) as runningtotal
from salesdata t
) q on t.id = q.id
where q.runningtotal < 100 and t.[product line] like "*Electronics*"

SQL calculating sum based on another column

I have a table with the following data in it:
Account number Amount
13 40
34 30
14 30
13 60
14 10
I would like to know how I can write a query to return the following results
Account number Total amount
13 100
14 40
34 30
The query should calculate the sum of all of the amounts in the amount column that share the same account number.
Any help would be much appreciated!
Use Group By + SUM
SELECT [Account number],
SUM(Amount) As [Total Amount]
FROM dbo.Table1
GROUP BY [Account Number]
ORDER BY SUM(Amount) DESC
Please try:
select
[Account Number],
sum(Amount) Amount
from
YourTable
Group by [Account Number]
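A minimal sketch to verify against the sample data (the temp table name is assumed):

CREATE TABLE #Accounts ([Account number] INT, Amount INT)

INSERT INTO #Accounts VALUES (13, 40)
INSERT INTO #Accounts VALUES (34, 30)
INSERT INTO #Accounts VALUES (14, 30)
INSERT INTO #Accounts VALUES (13, 60)
INSERT INTO #Accounts VALUES (14, 10)

SELECT [Account number],
       SUM(Amount) AS [Total amount]
FROM #Accounts
GROUP BY [Account number]
ORDER BY SUM(Amount) DESC

-- Account number   Total amount
-- 13               100
-- 14               40
-- 34               30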

SQL - Comparing and Grouping Data on multiple rows

I'm trying to query my database to find which products sold less in October than in either November or December.
I thought something like the query below would do it, but I have a feeling the subquery returns the minimum quantity for the whole table rather than for the specific product.
There must be some way of doing this using GROUP BY, but I can't figure it out.
SELECT Category, Product
FROM Sales
WHERE SaleQuantity < (SELECT MIN(SaleQuantity)
FROM Sales
WHERE MonthNumber > 10)
AND MonthNumber = 10
Data looks like:
Category Product MonthNumber SaleQuantity
---------- ----------- ------------- -----------
11 14 10 210
11 14 11 200
11 14 12 390
15 12 10 55
15 12 11 24
17 12 12 129
19 10 10 12
Thanks.
Try something like this:
SELECT s.Category,
       s.Product,
       SUM( s.SaleQuantity ) AS saleOctober,
       SUM( ISNULL( son.SaleQuantity, 0 ) ) AS saleNovember,
       SUM( ISNULL( sod.SaleQuantity, 0 ) ) AS saleDecember
FROM Sales s
LEFT OUTER JOIN Sales son ON son.Category = s.Category
    AND son.Product = s.Product
    AND son.MonthNumber = 11
LEFT OUTER JOIN Sales sod ON sod.Category = s.Category
    AND sod.Product = s.Product
    AND sod.MonthNumber = 12
WHERE s.MonthNumber = 10
GROUP BY s.Category, s.Product
HAVING SUM( s.SaleQuantity ) < SUM( ISNULL( son.SaleQuantity, 0 ) )
    OR SUM( s.SaleQuantity ) < SUM( ISNULL( sod.SaleQuantity, 0 ) )
I have not tested this select, but I think it will do the job. If anything is not clear,
please ask.
Best Regards,
Iordan
PS. I presume you are using some version of MSSQL; if not, try to rewrite it yourself in the SQL dialect you are using.
Your table already appears to be summarised by Category, Product and MonthNumber, for SaleQuantity. If so, try this:
select distinct Category, Product
from Sales s11_12
where MonthNumber in (11,12) and
      not exists (select null
                  from Sales s10
                  where s10.Category = s11_12.Category and
                        s10.Product = s11_12.Product and
                        s10.MonthNumber = 10 and
                        s10.SaleQuantity >= s11_12.SaleQuantity)
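A sketch of a single-pass alternative using conditional aggregation, assuming one row per Category/Product/MonthNumber as in the sample (note that a product with no October row at all would also qualify here, because its October sum comes out as 0):
SELECT Category, Product
FROM Sales
WHERE MonthNumber IN (10, 11, 12)
GROUP BY Category, Product
HAVING SUM(CASE WHEN MonthNumber = 10 THEN SaleQuantity ELSE 0 END)
         < SUM(CASE WHEN MonthNumber = 11 THEN SaleQuantity ELSE 0 END)
    OR SUM(CASE WHEN MonthNumber = 10 THEN SaleQuantity ELSE 0 END)
         < SUM(CASE WHEN MonthNumber = 12 THEN SaleQuantity ELSE 0 END)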

In SQL, how can you "group by" in ranges?

Suppose I have a table with a numeric column (let's call it "score").
I'd like to generate a table of counts, that shows how many times scores appeared in each range.
For example:
score range | number of occurrences
-------------------------------------
0-9 | 11
10-19 | 14
20-29 | 3
... | ...
In this example there were 11 rows with scores in the range of 0 to 9, 14 rows with scores in the range of 10 to 19, and 3 rows with scores in the range 20-29.
Is there an easy way to set this up? What do you recommend?
Neither of the highest-voted answers is correct on SQL Server 2000. Perhaps they were using a different version.
Here are the correct versions of both of them on SQL Server 2000.
select t.range as [score range], count(*) as [number of occurrences]
from (
select case
when score between 0 and 9 then ' 0- 9'
when score between 10 and 19 then '10-19'
else '20-99' end as range
from scores) t
group by t.range
or
select t.range as [score range], count(*) as [number of occurrences]
from (
select user_id,
case when score >= 0 and score< 10 then '0-9'
when score >= 10 and score< 20 then '10-19'
else '20-99' end as range
from scores) t
group by t.range
An alternative approach would involve storing the ranges in a table, instead of embedding them in the query. You would end up with a table, call it Ranges, that looks like this:
LowerLimit UpperLimit Range
0 9 '0-9'
10 19 '10-19'
20 29 '20-29'
30 39 '30-39'
And a query that looks like this:
Select
Range as [Score Range],
Count(*) as [Number of Occurrences]
from
Ranges r inner join Scores s on s.Score between r.LowerLimit and r.UpperLimit
group by Range
This does mean setting up a table, but it would be easy to maintain when the desired ranges change. No code changes necessary!
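A minimal setup sketch for that lookup table, using the names from the example (extend the INSERTs to cover your whole score domain):

CREATE TABLE Ranges (
    LowerLimit INT NOT NULL,
    UpperLimit INT NOT NULL,
    Range VARCHAR(10) NOT NULL
)

INSERT INTO Ranges VALUES (0, 9, '0-9')
INSERT INTO Ranges VALUES (10, 19, '10-19')
INSERT INTO Ranges VALUES (20, 29, '20-29')
INSERT INTO Ranges VALUES (30, 39, '30-39')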
I see answers here that won't work in SQL Server's syntax. I would use:
select t.range as [score range], count(*) as [number of occurrences]
from (
select case
when score between 0 and 9 then ' 0-9 '
when score between 10 and 19 then '10-19'
when score between 20 and 29 then '20-29'
...
else '90-99' end as range
from scores) t
group by t.range
EDIT: see comments
In postgres (where || is the string concatenation operator):
select (score/10)*10 || '-' || (score/10)*10+9 as scorerange, count(*)
from scores
group by score/10
order by 1
gives:
scorerange | count
------------+-------
0-9 | 11
10-19 | 14
20-29 | 3
30-39 | 2
And here's how to do it in T-SQL (FORMAT and CONCAT require SQL Server 2012 or later):
DECLARE @traunch INT = 1000;

SELECT
    CONCAT
    (
        FORMAT((score / @traunch) * @traunch, '###,000,000')
        , ' - ' ,
        FORMAT((score / @traunch) * @traunch + @traunch - 1, '###,000,000')
    ) as [Range]
    , FORMAT(MIN(score), 'N0') as [Min]
    , FORMAT(AVG(score), 'N0') as [Avg]
    , FORMAT(MAX(score), 'N0') as [Max]
    , FORMAT(COUNT(score), 'N0') as [Count]
    , FORMAT(SUM(score), 'N0') as [Sum]
FROM scores
GROUP BY score / @traunch
ORDER BY score / @traunch
James Curran's answer was the most concise in my opinion, but the output wasn't correct. For SQL Server the simplest statement is as follows:
SELECT
[score range] = CAST((Score/10)*10 AS VARCHAR) + ' - ' + CAST((Score/10)*10+9 AS VARCHAR),
[number of occurrences] = COUNT(*)
FROM #Scores
GROUP BY Score/10
ORDER BY Score/10
This assumes a #Scores temporary table I used to test it; I populated it with 100 rows of random numbers between 0 and 99.
select cast(score/10 as varchar) + '-' + cast(score/10+9 as varchar),
count(*)
from scores
group by score/10
create table scores (
user_id int,
score int
)
select t.range as [score range], count(*) as [number of occurrences]
from (
select user_id,
case when score >= 0 and score < 10 then '0-9'
when score >= 10 and score < 20 then '10-19'
...
else '90-99' end as range
from scores) t
group by t.range
This will allow you to not have to specify ranges, and should be SQL-server-agnostic. Math FTW!
SELECT CONCAT(range, '-', range + 9), COUNT(range)
FROM (
    SELECT score - (score % 10) as range
    FROM scores
) t
GROUP BY range
I would do this a little differently so that it scales without having to define every case:
select t.range as [score range], count(*) as [number of occurrences]
from (
select FLOOR(score/10) as range
from scores) t
group by t.range
Not tested, but you get the idea...
declare @RangeWidth int
set @RangeWidth = 10
select
    Floor(Score/@RangeWidth)*@RangeWidth as LowerBound,
    Floor(Score/@RangeWidth)*@RangeWidth + @RangeWidth - 1 as UpperBound,
    Count(*)
From
    ScoreTable
group by
    Floor(Score/@RangeWidth)
select t.blah as [score range], count(*) as [number of occurrences]
from (
select case
when score between 0 and 9 then ' 0-9 '
when score between 10 and 19 then '10-19'
when score between 20 and 29 then '20-29'
...
else '90-99' end as blah
from scores) t
group by t.blah
Make sure you use a word other than 'range' if you are in MySQL, or you will get an error running the above example.
Because the column being sorted on (Range) is a string, string/word sorting is used instead of numeric sorting.
As long as the strings have zeros to pad out the number lengths the sorting should still be semantically correct:
SELECT t.range AS ScoreRange,
COUNT(*) AS NumberOfOccurrences
FROM (SELECT CASE
WHEN score BETWEEN 0 AND 9 THEN '00-09'
WHEN score BETWEEN 10 AND 19 THEN '10-19'
ELSE '20-99'
END AS Range
FROM Scores) t
GROUP BY t.Range
If the range is mixed, simply pad an extra zero:
SELECT t.range AS ScoreRange,
COUNT(*) AS NumberOfOccurrences
FROM (SELECT CASE
WHEN score BETWEEN 0 AND 9 THEN '000-009'
WHEN score BETWEEN 10 AND 19 THEN '010-019'
WHEN score BETWEEN 20 AND 99 THEN '020-099'
ELSE '100-999'
END AS Range
FROM Scores) t
GROUP BY t.Range
Try
SELECT (str(range) + "-" + str(range + 9)) AS [Score range], COUNT(score) AS [number of occurrences]
FROM (SELECT score, int(score / 10) * 10 AS range FROM scoredata) AS t
GROUP BY range;
select t.range as score, count(*) as [Count]
from (
    select UserId,
        case when isnull(score, 0) >= 0  and isnull(score, 0) < 5  then '0-5'
             when isnull(score, 0) >= 5  and isnull(score, 0) < 10 then '5-10'
             when isnull(score, 0) >= 10 and isnull(score, 0) < 15 then '10-15'
             when isnull(score, 0) >= 15 and isnull(score, 0) < 20 then '15-20'
             else ' 20+' end as range,
        case when isnull(score, 0) >= 0  and isnull(score, 0) < 5  then 1
             when isnull(score, 0) >= 5  and isnull(score, 0) < 10 then 2
             when isnull(score, 0) >= 10 and isnull(score, 0) < 15 then 3
             when isnull(score, 0) >= 15 and isnull(score, 0) < 20 then 4
             else 5 end as pd
    from [score table]
) t
group by t.range, pd
order by pd
I'm here because I had a similar question, but I find the short answers wrong, and the one with the long chain of CASE WHENs is too much work; seeing anything repetitive in my code hurts my eyes. So here is the solution:
SELECT --MIN(score), MAX(score),
    [score range] = CAST(ROUND(score-5, -1) AS VARCHAR) + ' - ' + CAST((ROUND(score-5, -1) + 10) AS VARCHAR),
    [number of occurrences] = COUNT(*)
FROM [order]
GROUP BY CAST(ROUND(score-5, -1) AS VARCHAR) + ' - ' + CAST((ROUND(score-5, -1) + 10) AS VARCHAR)
ORDER BY MIN(score)
For PrestoSQL/Trino, applying the answer from Ken (https://stackoverflow.com/a/232463/429476):
select t.range, count(*) as "Number of Occurrences", ROUND(AVG(fare_amount),2) as "Avg",
ROUND(MAX(fare_amount),2) as "Max", ROUND(MIN(fare_amount),2) as "Min"
from (
select
case
when trip_distance between 0 and 9 then ' 0-9 '
when trip_distance between 10 and 19 then '10-19'
when trip_distance between 20 and 29 then '20-29'
when trip_distance between 30 and 39 then '30-39'
else '> 39'
end as range ,fare_amount
from nyc_in_parquet.tlc_yellow_trip_2022) t
where fare_amount > 1 and fare_amount < 401092
group by t.range;
range | Number of Occurrences | Avg | Max | Min
-------+---------------------+--------+-------+------
0-9 | 2260865 | 10.28 | 720.0 | 1.11
30-39 | 1107 | 104.28 | 280.0 | 5.0
10-19 | 126136 | 43.8 | 413.5 | 2.0
> 39 | 42556 | 39.11 | 668.0 | 1.99
20-29 | 19133 | 58.62 | 250.0 | 2.5
Perhaps you're asking about keeping such tallies continuously up to date.
These queries invoke a full table scan, and if the table containing the scores is large you might want a better-performing solution: create a secondary summary table and keep it updated, for example with rules or triggers fired on insert.
Not all RDBMS engines have rules, though!
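As a sketch of that idea in PostgreSQL (the table and function names here are hypothetical; ON CONFLICT needs PostgreSQL 9.5+):

-- Tally table: one row per 10-point bucket.
CREATE TABLE score_buckets (
    bucket      int PRIMARY KEY,          -- lower bound of the range: 0, 10, 20, ...
    occurrences bigint NOT NULL DEFAULT 0
);

-- Keep the tally current on every insert into scores.
CREATE FUNCTION bump_bucket() RETURNS trigger AS $$
BEGIN
    INSERT INTO score_buckets (bucket, occurrences)
    VALUES ((NEW.score / 10) * 10, 1)
    ON CONFLICT (bucket) DO UPDATE
        SET occurrences = score_buckets.occurrences + 1;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER scores_tally
AFTER INSERT ON scores
FOR EACH ROW EXECUTE PROCEDURE bump_bucket();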