Access Union/Pivot to Swap Columns and Rows

Access 2010 here. I have a query (Thank you Andomar!):
SELECT Inspection.Date, count(*) AS [# Insp], sum(iif(Disposition = 'PASS',1,0)) AS [# Passed], sum(iif(Disposition = 'FAIL',1,0)) AS [# Failed], sum(iif(Disposition = 'PASS',1,0)) / count(*) AS [% Acceptance]
FROM Inspection
WHERE Disposition in ('PASS', 'FAIL') AND ((Inspection.Date) Between Date()-30 And Date())
GROUP BY Date;
That gives a table like this:
Date | # Insp | # Passed | # Failed | % Acceptance
11/26/2012 | 7 | 5 | 2 | 71
11/27/2012 | 8 | 4 | 4 | 50
...
I am looking to use this query to make a "table" for a sub-form that will sit below a graph, for reference only. The formatting of the "table" is important, as it needs both column (Date) and row headings. I put "table" in quotes to emphasize that it is generated in real time; in other words, it is not stored as an Access object.
The end result will be something like this:
Date | 11/26/2012 | 11/27/2012 ...
# Insp | 7 | 8
# Passed | 5 | 4
# Failed | 2 | 4
% Acceptance | 71 | 50
It seems like an optimal case, as the axes are just flipped, but for the life of me, I cannot find a solution that does not destroy the data. A Crosstab Query only gave me filtering on one or more categories against a single value. Is this something a union would be used for, or a pivot? Would a transform be needed? It seems like it should be such a simple problem. Is this something that can be done in SQL, or would VBA be needed to arrange the "table"? Thanks for the help!
These links do seem applicable:
Columns to Rows in MS Access
how to pivot rows to columns

This will have to be a two-step transformation. First you will have to rotate the data in your current query into rows instead of columns, then you will have to pivot the dates into columns instead of rows.
The query will be something like this:
TRANSFORM max(val) as MaxValue
SELECT col
FROM
(
SELECT [Date], '# Insp' as Col, [# Insp] as val
FROM yourQuery
UNION ALL
SELECT [Date], '# Passed' as Col, [# Passed] as val
FROM yourQuery
UNION ALL
SELECT [Date], '# Failed' as Col, [# Failed] as val
FROM yourQuery
UNION ALL
SELECT [Date], '% Acceptance' as Col, [% Acceptance] as val
FROM yourQuery
)
GROUP BY col
PIVOT [Date]
I am guessing that your current query is saved in your database; replace yourQuery in my example with the name of your saved query.
I just tested this in MS Access 2003 with the values in your sample above and it produced the result you want.
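For instance, if the aggregate query above were saved under a name such as qryInspSummary (a hypothetical name), each arm of the union would simply read:
SELECT [Date], '# Insp' as Col, [# Insp] as val
FROM qryInspSummary
and likewise for the other three measures.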

Related

How to create a table to count with a conditional

I have a database with a lot of columns containing pass, fail, or blank indicators.
I want to create a function to count each type of value and create a table from the counts. The structure I have in mind is something like:
| Value | x                | y                | z                |
|-------|------------------|------------------|------------------|
| pass  | count if x=pass  | count if y=pass  | count if z=pass  |
| fail  | count if x=fail  | count if y=fail  | count if z=fail  |
| blank | count if x=blank | count if y=blank | count if z=blank |
| total | count(x)         | count(y)         | count(z)         |
where x,y,z are columns from another table.
I don't know what the best approach for this would be.
Thank you all in advance.
I tried this structure, but it shows a syntax error:
CREATE FUNCTION Countif (columnx nvarchar(20),value_compare nvarchar(10))
RETURNS Count_column_x AS
BEGIN
IF columnx=value_compare
count(columnx)
END
RETURN
END
Also, I don't know how to add each count to the actual table I am trying to create
Conditional counting (or any conditional aggregation) can often be done inline by placing a CASE expression inside the aggregate function that conditionally returns the value to be aggregated or a NULL to skip.
An example would be COUNT(CASE WHEN SelectMe = 1 THEN 1 END). Here the aggregated value is 1 (which could be any non-null value for COUNT(); for other aggregate functions, a more meaningful value would be provided). The implicit ELSE returns a NULL, which is not counted.
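As a minimal sketch of the idea (SomeTable, SelectMe, and Amount are hypothetical names):
SELECT
    COUNT(CASE WHEN SelectMe = 1 THEN 1 END) AS SelectedCount,          -- counts only rows where SelectMe = 1
    SUM(CASE WHEN SelectMe = 1 THEN Amount ELSE 0 END) AS SelectedTotal -- sums Amount for those same rows
FROM SomeTable;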
For your problem, I believe the first thing to do is to UNPIVOT your data, placing the column name and values side by side. You can then group by value and use conditional aggregation as described above to calculate your results. A few more details finish the job: (1) a totals row using WITH ROLLUP, (2) a CASE expression to adjust the labels for the blank and total rows, and (3) some ORDER BY tricks to get the rows in the right order.
The resulting query might look like this:
SELECT
CASE
WHEN GROUPING(U.Value) = 1 THEN 'Total'
WHEN U.Value = '' THEN 'Blank'
ELSE U.Value
END AS Value,
COUNT(CASE WHEN U.Col = 'x' THEN 1 END) AS x,
COUNT(CASE WHEN U.Col = 'y' THEN 1 END) AS y
FROM #Data D
UNPIVOT (
Value
FOR Col IN (x, y)
) AS U
GROUP BY U.Value WITH ROLLUP
ORDER BY
GROUPING(U.Value),
CASE U.Value WHEN 'Pass' THEN 1 WHEN 'Fail' THEN 2 WHEN '' THEN 3 ELSE 4 END,
U.VALUE
Sample data:
| x    | y    |
|------|------|
| Pass | Pass |
| Pass | Fail |
| Pass |      |
| Fail |      |
Sample results:
| Value | x | y |
|-------|---|---|
| Pass  | 3 | 1 |
| Fail  | 1 | 1 |
| Blank | 0 | 2 |
| Total | 4 | 4 |
See this db<>fiddle for a working example.
I think you don't need a generic solution like a function that takes the value as a parameter.
Perhaps you could create a view grouping your data, and then call that view filtering by your value.
Your view body would be something like this:
select value, count(*) as Total
from table_name
group by value
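For example, assuming the view were saved as value_counts (a hypothetical name), creating and calling it would look like:
create view value_counts as
select value, count(*) as Total
from table_name
group by value;

select Total
from value_counts
where value = 'pass';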
Feel free to explain your situation better so I could help you.
You can do this by grouping by the status column.
select status, count(*) as total
from some_table
group by status
Rather than making a whole new table, consider using a view. This is a query that looks like a table.
create view status_counts as
select status, count(*) as total
from some_table
group by status
You can then select total from status_counts where status = 'pass' or the like and it will run the query.
You can also create a "materialized view". This is like a view, but the results are written to a real table that the engine keeps up to date for you (in the SQL Server family, this syntax is available in Azure Synapse Analytics; boxed SQL Server offers the same idea through indexed views).
create materialized view status_counts
with (distribution = hash(status))
as
select status, count(*) as total
from some_table
group by status
You'd do this for performance reasons on a large table which does not update very often.

SQL - how to get as a query result both a column and the sum of that column's values

I have a complicated stored procedure that calculates a column with numeric values and returns it as a part of data-set containing other columns as well. I am trying to find a way to return in the same query the SUM of that special column as well. I use SQL Management Studio and was thinking to use an OUT parameter or even a RETURN value. But if there is a more SQL-ish way to do it will definitely prefer it.
SELECT
OrID, QN, PRID, PCKID, Person, Price, CSID,
CASE
WHEN (COUNT(*) OVER (PARTITION BY OrID)) > 1
THEN Price * 0.2
ELSE Price * 0.1
END AS Commission
FROM
( < my subquery > )
I would also like to add SUM(Commission) to the results of the above statement.
If my data is (partial)
OrID|Price
----+-----
1 | 100
2 | 100
2 | 50
3 | 80
I will get the following result
OrID|Price|Commission
----+-----+----------
1 | 100 | 10
2 | 100 | 20
2 | 50 | 10
3 | 80 | 8
And somewhere I would also like to see the SUM of the last column - 48
Something like Excel's SUM function at the end of the Commission column
You can use a subquery:
SELECT s.*, SUM(Commission) OVER (PARTITION BY OrId) as sum_commission
FROM (SELECT OrID, QN, PRID, PCKID, Person, Price, CSID,
(CASE WHEN (count(*) OVER (PARTITION BY OrID)) > 1
THEN Price*0.2
ELSE Price*0.1
END) AS Commission
FROM (< my subquery >
) s
) s;
I assume you want it per OrID. If not, remove the PARTITION BY to get a single grand total.
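As a sketch of that variant, which yields the single grand total from the question (48 for the sample data):
SELECT s.*, SUM(Commission) OVER () as total_commission
FROM ( ...same derived table as above... ) s;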
Try using the WITH ROLLUP option; it does what you want:
https://technet.microsoft.com/en-us/library/ms189305%28v=sql.90%29.aspx?f=255&MSPPError=-2147217396
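As a rough sketch of that idea (wrapping the query above in a derived table; totals is a hypothetical alias), this gives a commission subtotal per order, and ROLLUP appends an extra row with a NULL OrID holding the grand total (48 for the sample data):
SELECT OrID, SUM(Commission) AS TotalCommission
FROM ( < query above > ) AS totals
GROUP BY OrID WITH ROLLUP;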

SQL Server, complex query

I have an Azure SQL Database table which is filled by importing XML-files.
The order of the files is random so I could get something like this:
ID | Name | DateFile | IsCorrection | Period | Other data
1 | Mr. A | March, 1 | false | 3 | Foo
20 | Mr. A | March, 1 | true | 2 | Foo
13 | Mr. A | Apr, 3 | true | 2 | Foo
4 | Mr. B | Feb, 1 | false | 2 | Foo
This table is joined with another table, which is also joined with a 3rd table.
I need to get the join of these 3 tables for the person with the newest data, based on Period, DateFile and IsCorrection.
In my above example, Id=1 is the original data for Period 3, I need this record.
But in the same file was also a correction for Period 2 (Id=20) and in the file of April, the data was corrected again (Id=13).
So for Period 3, I need Id=1, for Period 2 I need Id=13 because it has the last corrected data and I need Id=4 because it is another person.
I would like to do this in a view, but using a stored procedure would not be a problem.
I have no idea how to solve this. Any pointers will be much appreciated.
EDIT:
My datamodel is of course much more complex than this sample. DateFile and Period are DateTime types in the table. Actually Period is two DateTime columns: StartPeriod and EndPeriod.
Well, looking at your data I believe we can disregard the IsCorrection column and just pick the latest row for each user/period.
Let's start by ordering the rows, placing the latest on top:
SELECT ROW_NUMBER() OVER (PARTITION BY Period, Name ORDER BY DateFile DESC), *
FROM yourTable -- your table with the imported XML rows
And from this result you select all with row number 1:
;with numberedRows as (
    SELECT ROW_NUMBER() OVER (PARTITION BY Period, Name ORDER BY DateFile DESC) as rowIndex, *
    FROM yourTable -- your table with the imported XML rows
)
select * from numberedRows where rowIndex=1
The PARTITION BY tells ROW_NUMBER() to reset the counter whenever it encounters a change in the columns Period and Name. The ORDER BY tells ROW_NUMBER() that we want the newest row to be number 1, with older rows numbered after it. We only need the latest row.
The WITH declares a "common table expression" which is a kind of subquery or temporary table.
Not knowing your exact data, I might recommend you something wrong, but you should be able to join this last query with your other tables to get your desired result.
Something like:
;with numberedRows as (
    SELECT ROW_NUMBER() OVER (PARTITION BY Period, Name ORDER BY DateFile DESC) as rowIndex, *
    FROM yourTable -- your table with the imported XML rows
)
select * from numberedRows a
JOIN periods b on b.empId = a.Id
JOIN msg c on b.msgId = c.Id
where a.rowIndex=1

Separate a record into multiple records for each column

I have a query in MS Access that pulls from 4 different tables. It shows an ID (that is common across the 4 tables) and 4 fields of totals: one is an Actual total, another a Forecast total, etc.
Thus, each record shows something like the following:
ID | TotalActual | TotalForecast | TotalRR | TotalBudget
234518 | 90.10 | 150.98 | 152.31 | 149.0
Is there a way to divide this record so it shows up like the following:
Type | ID | Total |
Actual | 234518 | 90.10 |
Forecast | 234518 | 150.98 |
RR | 234518 | 152.31 |
Budget | 234518 | 149.0 |
I don't want to make a union because the data needs to be able to refresh/update daily and I know that unions do not update when there are changes in the table/query
As revealed in the comments to the question, your aversion to using a UNION query was based on a misunderstanding of how those queries work and hence was unfounded. You do, in fact, want to use a UNION query instead of your current query (which presumably does an INNER JOIN on each of the four tables to produce five columns). The UNION query would be something like
SELECT
'Actual' AS [Type],
[ID],
[TotalActual] AS [Total]
FROM [ActualTable]
UNION ALL
SELECT
'Forecast' AS [Type],
[ID],
[TotalForecast] AS [Total]
FROM [ForecastTable]
UNION ALL
SELECT
'RR' AS [Type],
[ID],
[TotalRR] AS [Total]
FROM [RRTable]
UNION ALL
SELECT
'Budget' AS [Type],
[ID],
[TotalBudget] AS [Total]
FROM [BudgetTable]

SQL group by and count fixed column values

I'm facing a problem in a data importation script in SQL(MySQL) where I need to GROUP rows by type to COUNT how much rows there are from each type. So far, it isn't really a problem, because I know that I can do:
SELECT
data.type,
COUNT(data.type)
FROM data
GROUP BY data.type;
So, by doing it, I have the result:
+------+------------------+
| type | COUNT(data.type) |
+------+------------------+
| 0    | 1                |
| 1    | 46               |
| 2    | 35               |
| 3    | 423              |
| 4    | 64               |
| 5    | 36               |
| 9    | 1                |
+------+------------------+
I know that in the type column the values will always be in the range from 0 to 9, like the above result. So, I would like to list not only the existing values in the table content but the missing type values too, with their COUNT value set to 0.
Based on the above query result, the expected result would be:
+------+------------------+
| type | COUNT(data.type) |
+------+------------------+
| 0    | 1                |
| 1    | 46               |
| 2    | 35               |
| 3    | 423              |
| 4    | 64               |
| 5    | 36               |
| 6    | 0                |
| 7    | 0                |
| 8    | 0                |
| 9    | 1                |
+------+------------------+
I could, as a trick, INSERT one row of each type before doing the GROUP/COUNT-1 of the table content, flagging some other column on INSERT so I can DELETE these rows afterwards. The steps of my importation script would then change to:
1. TRUNCATE table; (I can't securely import new content if there were old data in the table)
2. INSERT "control" rows;
3. LOAD DATA INFILE INTO TABLE;
4. GROUP/COUNT-1 the table content;
5. DELETE "control" rows; (so I can still work with the table content)
6. Do any other jobs;
But, I was looking for a cleaner way to reach the expected result. If possible, a single query, without a bunch of JOINs.
I would appreciate any suggestion or advice. Thank you very much!
EDIT
I would like to thank you for the answers about CREATEing a table to store all the types and JOINing to it. It really solves the problem. My approach solves it too, but it does so by storing the types, as yours do.
So, I have "another" question, just a clarification, based on the received answers and my desired scope... is it possible to reach the expected result with some MySQL command that will not CREATE a new table and/or INSERT these types?
I don't see any problem, actually, in solving my question by storing the types... I would just like to find a simpler command... something like a 'best practice'... some kind of filter... as if I could run:
GROUP BY data.type(0,1,2,3,4,5,6,7,8,9)
and it could return these filtered values.
I am really interested to learn such a command, if it really exists/is possible.
And again, thank you very much!
Let's assume that you have a types table with all the valid types:
SELECT t.type,
COUNT(data.type)
FROM types t left join data on data.type = t.type
GROUP BY t.type
order by t.type
You should include the explicit order by and not depend on the group by to produce results in a particular order.
The easiest way is to create a table of all type values and then join on that table when getting the count:
select t.type,
count(d.type)
from types t
left join data d
on t.type = d.type
group by t.type
See SQL Fiddle with demo
Or you can use the following:
select t.type,
count(d.type)
from
(
select 0 type
union all
select 1
union all
select 2
union all
select 3
union all
select 4
union all
select 5
union all
select 6
union all
select 7
union all
select 8
union all
select 9
) t
left join data d
on t.type = d.type
group by t.type
See SQL Fiddle with Demo
One option would be having a static numbers table with the values 0-9. Not sure if this is the most elegant approach, and if you were using SQL Server, I could think of another approach.
Try something like this:
SELECT
numbers.number,
COUNT(data.type)
FROM numbers
left join data
on numbers.number = data.type
GROUP BY numbers.number;
And the SQL Fiddle.
Okay... I think I found it! Thank you all!!! I'm accepting my own answer.
I agree with @GordonLinoff's comment that the best practice is to store the type values and describe them, so you can keep a concise, understandable database and queries.
But, as far as I've learned, if some data might be irrelevant information, it is preferable to handle it some other way than storing it.
So, I developed this query:
SELECT
SUM(IF(data.type = 0, 1, 0)) AS `0`,
SUM(IF(data.type = 1, 1, 0)) AS `1`,
SUM(IF(data.type = 2, 1, 0)) AS `2`,
SUM(IF(data.type = 3, 1, 0)) AS `3`,
SUM(IF(data.type = 4, 1, 0)) AS `4`,
SUM(IF(data.type = 5, 1, 0)) AS `5`,
SUM(IF(data.type = 6, 1, 0)) AS `6`,
SUM(IF(data.type = 7, 1, 0)) AS `7`,
SUM(IF(data.type = 8, 1, 0)) AS `8`,
SUM(IF(data.type = 9, 1, 0)) AS `9`
FROM data;
Not the fastest, most optimized, or prettiest query, but for the data volume I'll be handling (fewer than 100,000 rows per import) it "manually" does the GROUP/COUNT job, running in 0.13 sec on an ordinary developer machine.
It differs from my expected result only in how the rows and columns are arranged: instead of 10 rows with 2 columns I get 1 row with 10 columns, each labeled with the matching type. Also, since we have a standard for the type values (which will certainly not change) that gives each one a name and description, I can now use the type name as the column label instead of joining to a table with the type info to select a third column in the result (which really is not that important, as this is an importation script based on those standards).
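For instance, assuming the standard named type 0 something like 'Draft' (a made-up label), a column could simply be aliased with that name:
SUM(IF(data.type = 0, 1, 0)) AS `Draft`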
Thank you all so much for the help!