I have written an SQL query to try to get some information on a group of trees by forest type. However, when I run this query, the Count(T_Plots.ID_plot) AS [Plots/Forest Type] column returns the same numbers as Count(T_Trees.ID_Tree) AS [Trees/Forest Type]. These numbers are far too high: the entire T_Plots table has only 175 records, yet the results table returns values as high as 4290.
What did I overlook that is causing these wrong numbers, and how do I get the correct number of plots per forest type?
SELECT
T_Plots.Forest_type,
Count(T_Plots.ID_plot) AS [Plots/Forest Type],
Count(T_Trees.ID_Tree) AS [Trees/Forest Type],
[Trees/Forest Type]/[Plots/Forest Type]*10 AS [Trees/Ha],
Avg([T_Trees.DBH (cm)])/100*Avg([T_Trees.Height (m)])*0.7*[Trees/Ha] AS [Volume (m3)/Ha],
3.142*(((Avg([T_Trees.DBH (cm)])/2)^2)/100)*[Trees/Ha] AS [BA (m2)/Ha]
FROM T_Plots INNER JOIN T_Trees
ON T_Plots.ID_plot = T_Trees.ID_plot
GROUP BY T_Plots.Forest_type
I think a DISTINCT might solve your problem. Try:
SELECT
T_Plots.Forest_type,
Count(Distinct T_Plots.ID_plot) AS [Plots/Forest Type],
Count(Distinct T_Trees.ID_Tree) AS [Trees/Forest Type],
[Trees/Forest Type]/[Plots/Forest Type]*10 AS [Trees/Ha],
Avg([T_Trees.DBH (cm)])/100*Avg([T_Trees.Height (m)])*0.7*[Trees/Ha] AS [Volume (m3)/Ha],
3.142*(((Avg([T_Trees.DBH (cm)])/2)^2)/100)*[Trees/Ha] AS [BA (m2)/Ha]
FROM T_Plots INNER JOIN T_Trees
ON T_Plots.ID_plot = T_Trees.ID_plot
GROUP BY T_Plots.Forest_type
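To see the mechanism: the inner join repeats each plot row once per matching tree, so Count(T_Plots.ID_plot) counts joined rows, not plots. A minimal sqlite3 sketch of the fan-out and the DISTINCT fix (table names follow the question; the data is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE T_Plots (ID_plot INTEGER, Forest_type TEXT)")
cur.execute("CREATE TABLE T_Trees (ID_Tree INTEGER, ID_plot INTEGER)")
# Two plots of the same forest type, with 3 and 2 trees respectively.
cur.executemany("INSERT INTO T_Plots VALUES (?, ?)", [(1, "Oak"), (2, "Oak")])
cur.executemany("INSERT INTO T_Trees VALUES (?, ?)",
                [(1, 1), (2, 1), (3, 1), (4, 2), (5, 2)])

# Plain COUNT counts rows after the join: both columns come out as 5.
plain = cur.execute("""
    SELECT COUNT(T_Plots.ID_plot), COUNT(T_Trees.ID_Tree)
    FROM T_Plots INNER JOIN T_Trees ON T_Plots.ID_plot = T_Trees.ID_plot
    GROUP BY T_Plots.Forest_type
""").fetchone()

# COUNT(DISTINCT ...) collapses the repeats: 2 plots, 5 trees.
fixed = cur.execute("""
    SELECT COUNT(DISTINCT T_Plots.ID_plot), COUNT(DISTINCT T_Trees.ID_Tree)
    FROM T_Plots INNER JOIN T_Trees ON T_Plots.ID_plot = T_Trees.ID_plot
    GROUP BY T_Plots.Forest_type
""").fetchone()
```

One caveat: classic Access SQL does not support Count(DISTINCT ...), so in Access you would typically compute the plot count in a separate grouped subquery and join it back in.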
Related
I'm trying to tie a description to group codes to use in another table. Unfortunately the data is from a very old database that didn't enforce very good requirements on fields when it was initially created, so now I have a number of part number group code fields that were left blank. I'm trying to convert these null values to say "Blank". I've tried this many different ways and cannot get the Nz function to modify the data in any way.
In the provided code snippet I have tried using the nz function only after select, only after from, and in both places as shown.
SELECT [Part Numbers].Part, nz([Part Numbers].Group,"Blank"), [Group Codes].Description
FROM [Part Numbers]
INNER JOIN [Group Codes] ON nz([Part Numbers].[Group],"Blank") = [Group Codes].[Group Code];
That query doesn't return records where the Group field is Null. It can't even be displayed in query design view with that join.
Consider RIGHT JOIN:
SELECT [Part Numbers].Part, Nz([Part Numbers].Group,"Blank") AS Expr1, [Group Codes].Description
FROM [Group Codes] RIGHT JOIN [Part Numbers] ON [Group Codes].[Group Code] = [Part Numbers].Group;
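The underlying behaviour can be reproduced in any engine: a NULL join key matches nothing, and if [Group Codes] has no "Blank" row, the Nz-wrapped inner join still drops those parts, while an outer join from [Part Numbers] keeps them. A sqlite3 sketch (IFNULL stands in for Access's Nz, a LEFT JOIN from [Part Numbers] is equivalent to the RIGHT JOIN above, and the data is invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute('CREATE TABLE "Part Numbers" (Part TEXT, "Group" TEXT)')
cur.execute('CREATE TABLE "Group Codes" ("Group Code" TEXT, Description TEXT)')
cur.executemany('INSERT INTO "Part Numbers" VALUES (?, ?)',
                [("P1", "A"), ("P2", None)])
cur.execute('INSERT INTO "Group Codes" VALUES (?, ?)', ("A", "Widgets"))

# INNER JOIN: the NULL-group part vanishes even with IFNULL (Access Nz),
# because Group Codes has no "Blank" row to match against.
inner = cur.execute('''
    SELECT p.Part FROM "Part Numbers" p
    INNER JOIN "Group Codes" g
      ON IFNULL(p."Group", 'Blank') = g."Group Code"
    ORDER BY p.Part
''').fetchall()

# LEFT JOIN from Part Numbers (same effect as the answer's RIGHT JOIN):
# every part survives; Description is NULL where nothing matched.
outer = cur.execute('''
    SELECT p.Part, IFNULL(p."Group", 'Blank'), g.Description
    FROM "Part Numbers" p
    LEFT JOIN "Group Codes" g ON g."Group Code" = p."Group"
    ORDER BY p.Part
''').fetchall()
```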
I have a listbox on a report that effectively summarizes another table, grouped by a short list of reasons. I have a column for the reasons and a column for the number of times each reason occurred. I want a third column showing what percentage each reason constitutes, but this column does not display; moreover, it gives the previous column (count by reason) wrong numbers.
Before editing, my code (which worked perfectly) was:
SELECT Reasons.Reason, Count([Master Data].ID) AS [Count]
FROM Reasons INNER JOIN [Master Data] ON Reasons.ID = [Master Data].[Lateness Reason]
GROUP BY Reasons.Reason;
I edited it to be:
SELECT Reasons.Reason, Count([Master Data].ID) AS [Count], Format(Count([Master Data].ID)/Count(AllData.ID),'\P') AS Percentage
FROM [Master Data] AS AllData, Reasons INNER JOIN [Master Data] ON Reasons.ID = [Master Data].[Lateness Reason]
GROUP BY Reasons.Reason;
But as mentioned, the third column doesn't show, and the number in the Count column is now wrong too.
Can someone explain why this is and what I should do to fix it, please?
EDIT: I have worked out that the incorrect number showing in the 'Count' column is actually the correct number multiplied by Count(AllData.ID), although I can't understand why this would happen.
Correct/desired value for "Count" | Actual output value
----------------------------------|--------------------
10                                | 75500
4                                 | 30200
1                                 | 7550
20                                | 151000
3                                 | 22650
7                                 | 52850
Try using the correct syntax for Format:
Format(Count([Master Data].ID)/Count(AllData.ID),'Percent') AS Percentage
And you will probably have to use a subquery to obtain the distinct count of the table with the smallest count of joined records ([Master Data]?).
Edit:
If your first solution seems slow, try this:
SELECT
Reasons.ID,
Reasons.Reason,
T.ReasonCount / (SELECT Count(*) FROM [Master Data]) AS Percentage
FROM
Reasons
INNER JOIN
(SELECT [Lateness Reason], Count(*) AS [ReasonCount]
FROM [Master Data]
GROUP BY [Lateness Reason]) AS T
ON
T.[Lateness Reason] = Reasons.ID
GROUP BY
Reasons.ID,
Reasons.Reason,
T.ReasonCount
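The comma in the FROM clause is the culprit: it cross joins every [Master Data] row (as AllData) onto every grouped row, scaling each count by the total row count. A sqlite3 sketch of both the broken shape and the scalar-subquery fix suggested above (invented data, 5 rows total):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE Reasons (ID INTEGER, Reason TEXT)")
cur.execute('CREATE TABLE "Master Data" (ID INTEGER, "Lateness Reason" INTEGER)')
cur.executemany("INSERT INTO Reasons VALUES (?, ?)",
                [(1, "Traffic"), (2, "Weather")])
cur.executemany('INSERT INTO "Master Data" VALUES (?, ?)',
                [(1, 1), (2, 1), (3, 1), (4, 2), (5, 2)])

# Comma (cross) join: each group's count is multiplied by the total (5).
bad = cur.execute('''
    SELECT r.Reason, COUNT(m.ID)
    FROM "Master Data" AS a, Reasons r
    INNER JOIN "Master Data" m ON r.ID = m."Lateness Reason"
    GROUP BY r.Reason ORDER BY r.Reason
''').fetchall()

# Scalar subquery: the total is computed once, not joined into the rows.
good = cur.execute('''
    SELECT r.Reason, COUNT(m.ID),
           ROUND(100.0 * COUNT(m.ID) / (SELECT COUNT(*) FROM "Master Data"), 1)
    FROM Reasons r
    INNER JOIN "Master Data" m ON r.ID = m."Lateness Reason"
    GROUP BY r.Reason ORDER BY r.Reason
''').fetchall()
```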
I am a bit rusty on SQL so any assistance is appreciated. I am also referencing my SQL textbook but I thought I would try this out.
I am developing a lead scoring model starting with engagement scoring. I created a data extension to house the results and used the following query to populate:
SELECT a.[opportunityid],
a.[first name],
a.[last name],
a.[anticipatedentryterm],
a.[funnelstage],
a.[programofinterest],
a.[opportunitystage],
a.[opportunitystatus],
a.[createdon],
a.[ownerfirstname],
a.[ownerlastname],
a.[f or j visa student],
a.[donotbulkemail],
a.[statecode],
Count(DISTINCT c.[subscriberkey]) AS 'Clicks',
Count(DISTINCT b.[subscriberkey]) AS 'Opens',
Count(DISTINCT b.[subscriberkey]) * 1.5 +
Count(DISTINCT c.[subscriberkey]) * 3 AS 'Probability'
FROM [ug_all_time_joined] a
INNER JOIN [open] b
ON a.[opportunityid] = b.[subscriberkey]
INNER JOIN [click] c
ON a.[opportunityid] = c.[subscriberkey]
GROUP BY a.[opportunityid],
a.[first name],
a.[last name],
a.[anticipatedentryterm],
a.[funnelstage],
a.[programofinterest],
a.[opportunitystage],
a.[opportunitystatus],
a.[createdon],
a.[ownerfirstname],
a.[ownerlastname],
a.[f or j visa student],
a.[donotbulkemail],
a.[statecode]
Something is wrong with my COUNT functions: the query populates the same value in both Clicks and Opens, and I don't think it's accurate. The result I am aiming for is how many times a subscriber id appears (each row is a single action, so the count corresponds to the individual clicks/opens).
Thank you!
Why is that surprising?
You have two joins that, taken to their logical conclusion, imply that
b.[SubscriberKey] = c.[SubscriberKey]
Hence, counting distinct values will be the same.
You have not provided sample data or desired results. I can speculate, though, that you intend LEFT JOINs so you pick up values in one table that are not matched in the other.
When you do an inner join between a and b, your data is filtered before you join a and c, which will give you incorrect results. Having no view of your data and no background on your tables, this is the best guess I have.
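Since the asker's stated goal is the number of rows per subscriber in each activity table, one approach consistent with the advice above (a sketch only; all table names and data are invented) is to pre-aggregate each activity table before joining, which avoids both the filtering and the row multiplication:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE ug (opportunityid TEXT)")
cur.execute('CREATE TABLE "open" (subscriberkey TEXT)')
cur.execute("CREATE TABLE click (subscriberkey TEXT)")
cur.execute("INSERT INTO ug VALUES ('X')")
cur.executemany('INSERT INTO "open" VALUES (?)', [("X",), ("X",), ("X",)])
cur.execute("INSERT INTO click VALUES ('X')")

# Joining both activity tables multiplies rows (3 opens x 1 click = 3 rows),
# and COUNT(DISTINCT subscriberkey) is 1 for both -- hence identical columns.
joined = cur.execute('''
    SELECT COUNT(DISTINCT b.subscriberkey), COUNT(DISTINCT c.subscriberkey),
           COUNT(*)
    FROM ug a
    JOIN "open" b ON a.opportunityid = b.subscriberkey
    JOIN click c ON a.opportunityid = c.subscriberkey
''').fetchone()

# Pre-aggregating each activity table gives the true per-table row counts,
# and LEFT JOIN keeps opportunities with no opens or no clicks.
fixed = cur.execute('''
    SELECT a.opportunityid, IFNULL(o.n, 0) AS Opens, IFNULL(k.n, 0) AS Clicks
    FROM ug a
    LEFT JOIN (SELECT subscriberkey, COUNT(*) AS n FROM "open"
               GROUP BY subscriberkey) o ON o.subscriberkey = a.opportunityid
    LEFT JOIN (SELECT subscriberkey, COUNT(*) AS n FROM click
               GROUP BY subscriberkey) k ON k.subscriberkey = a.opportunityid
''').fetchone()
```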
I have two queries I want to join so I can get the percentage of completed work orders for each department, but I'm not sure how to go about it. I know I want a join and a crosstab query so I can display the results in a report.
The first query calculates the numerator:
SELECT
Count(MaximoReport.WorkOrder) AS CountOfWorkOrder,
MaximoReport.[Assigned Owner Group]
FROM MaximoReport
WHERE (
((MaximoReport.WorkType) In ("PMINS","PMOR","PMPDM","PMREG","PMRT"))
AND ((MaximoReport.Status) Like "*COMP")
AND ((MaximoReport.[Target Start])>=DateAdd("h",-1,[Enter the start date])
AND (MaximoReport.[Target Start])<DateAdd("h",23,[Enter the end date]))
AND ((MaximoReport.ActualLaborHours)<>"00:00")
AND ((MaximoReport.ActualStartDate)>=DateAdd("h",-11.8,[Enter the start date])
AND (MaximoReport.ActualStartDate)<DateAdd("h",23,[Enter the end date]))
)
GROUP BY MaximoReport.[Assigned Owner Group];
While the second query calculates the denominator:
SELECT
Count(MaximoReport.WorkOrder) AS CountOfWorkOrder,
MaximoReport.[Assigned Owner Group]
FROM MaximoReport
WHERE (
((MaximoReport.WorkType) In ("PMINS","PMOR","PMPDM","PMREG","PMRT"))
AND ((MaximoReport.Status)<>"CAN")
AND ((MaximoReport.[Target Start])>=DateAdd("h",-11.8,[Enter the start date])
AND (MaximoReport.[Target Start])<DateAdd("h",23,[Enter the end date])))
GROUP BY MaximoReport.[Assigned Owner Group];
Please advise how I can join the two queries to get the percentages of the departments and then do a crosstab query.
If there is a better way of doing this please also let me know.
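One possible shape for the combined query (a hedged sketch, not a tested Access answer): treat each of the two queries above as a derived table, join them on [Assigned Owner Group], and divide. The sqlite3 sketch below compresses the question's filter lists down to a single Status test and uses an invented column Grp in place of [Assigned Owner Group]:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE MaximoReport (WorkOrder INTEGER, Grp TEXT, Status TEXT)")
cur.executemany("INSERT INTO MaximoReport VALUES (?, ?, ?)",
                [(1, "Mech", "COMP"), (2, "Mech", "WAPPR"),
                 (3, "Mech", "COMP"), (4, "Elec", "COMP")])

# Numerator and denominator as derived tables joined on the group column;
# the WHERE clauses stand in for the question's full filter lists.
pct = cur.execute('''
    SELECT d.Grp, ROUND(100.0 * IFNULL(n.cnt, 0) / d.cnt, 1) AS PctComplete
    FROM (SELECT Grp, COUNT(WorkOrder) AS cnt FROM MaximoReport
          GROUP BY Grp) d
    LEFT JOIN (SELECT Grp, COUNT(WorkOrder) AS cnt FROM MaximoReport
               WHERE Status LIKE '%COMP' GROUP BY Grp) n
      ON n.Grp = d.Grp
    ORDER BY d.Grp
''').fetchall()
```

The LEFT JOIN from the denominator side keeps departments that completed nothing. In Access you would save the two queries and join them the same way (remembering that Access uses * rather than % as the LIKE wildcard); the result can then feed a crosstab (TRANSFORM ... PIVOT) query.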
I have the following query in SQL Server 2008 R2:
SELECT
DateName(month, DateAdd(month, [sfq].[fore_quart_month], -1)) AS [Month],
[sfq].[fore_quart_so_rev] AS [Sales Orders Revenue],
[sfq].[fore_quart_so_mar] AS [Sales Orders Margin],
[sfq].[fore_quart_mac_rev] AS [MAC Revenue],
[sfq].[fore_quart_mac_mar] AS [MAC Margin],
[sfq].[fore_quart_total_rev] AS [TOTAL Revenue],
[sfq].[fore_quart_total_mar] AS [TOTAL Margin],
(SELECT SUM([FORE].[Revenue])
FROM [SO_Opportunity][SO]
LEFT JOIN [SO_Type] ON [SO].[SO_Type_RecID] = [SO_Type].[SO_Type_RecID]
LEFT JOIN [SO_Opportunity_Audit][soa] ON [so].[Opportunity_RecID] = [soa].[Opportunity_RecId]
LEFT JOIN [SO_Opportunity_Audit_Value][soav] ON [soa].[SO_Opportunity_Audit_RecId] = [soav].[SO_Opportunity_audit_recid]
LEFT JOIN [SO_Forecast_dtl] [FORE] ON [SO].[Opportunity_RecID] = [FORE].[Opportunity_RecID]
WHERE ([SO_Type].[Description] NOT LIKE '%MAC%' AND [SO_Type].[Description] NOT LIKE '%Maint%')
AND YEAR([soa].[last_Updated_utc]) = #p_year AND MONTH([soa].[last_updated_utc]) = [sfq].[fore_quart_month]
AND [soav].[audit_value] LIKE '%Closed - Won%' AND [soav].[audit_token] = 'new_value'
AND [so].[SO_Opp_Status_RecID] = 7) AS [Rev]
FROM
[authmanager2].[dbo].[sales_forecast_quarterly][sfq]
WHERE
[sfq].[fore_quart_year] = #p_year AND [sfq].[fore_quart_loc] = 'w'
ORDER BY
[sfq].[fore_quart_month]
The issue is that when I include the NOT LIKE filters and the [sfq].[fore_quart_month] reference in the subquery, it runs incredibly slowly (minutes). But if I remove the NOT LIKE filters, or if I hard-code the value instead of using [sfq].[fore_quart_month] (which obviously means every calculation uses the wrong month except the one I hard-coded), the query runs in less than a second.
Any suggestions?
LIKE predicates with wildcards on both ends (example: %MAC%) are very slow, because the leading wildcard prevents an index seek.
If you really need to search on that, consider creating a persisted computed boolean field and searching on that. Something like:
ALTER TABLE SO_Type
ADD IsMac AS CASE WHEN [Description] LIKE '%MAC%' THEN 1 ELSE 0 END PERSISTED
GO
Or, as an alternative, set IsMac whenever data is inserted.
Small tip: you can group by month and join the subquery to the main data source in the FROM clause. This lets (though does not force) the server perform the subquery only once.
...
FROM [authmanager2].[dbo].[sales_forecast_quarterly][sfq]
INNER JOIN
(
SELECT SUM([FORE].[Revenue]) as [Revenue], MONTH([soa].[last_updated_utc]) as [Month]
FROM [SO_Opportunity][SO]
INNER JOIN [SO_Type] ON [SO].[SO_Type_RecID] = [SO_Type].[SO_Type_RecID]
...
GROUP BY MONTH([soa].[last_updated_utc])
) rev on rev.[Month] = [sfq].[fore_quart_month]