I need to create a general report about the trucks in a company.
I have these tables in my schema:
Schema image:
Basically, I need to create a table containing the following:
|Location|Trucks|TotalOfCampaings|CampaingsWithCompleteStatus|CampaingsWithInProcessStatus|
Location: the location of the trucks, stored in the Truck table.
Trucks: the number of trucks per location.
TotalOfCampaings: the total number of campaigns for that location's trucks.
CampaingsWithCompleteStatus: the total number of completed campaigns; the statuses are in the CampaingControl table.
CampaingsWithInProcessStatus: the total number of campaigns not yet finished.
A campaign is an order to fix one or more trucks.
I tried with inner join queries, but I can't get what I expect for the general report.
I would appreciate any help with this!
SELECT *
FROM
-- Prepare the base data for the report
(SELECT location, COUNT(*) AS Trucks FROM Truck GROUP BY location) loc
-- The statistics needed; the APPLY keeps this one-to-one with each location
-- The status values are guesses, since you did not list them in the question
OUTER APPLY
(
SELECT
COUNT(*) AS TotalOfCampaings,
SUM(CASE WHEN cc.campaing_status = 'Complete' THEN 1 ELSE 0 END) AS CampaingsWithCompleteStatus,
SUM(CASE WHEN cc.campaing_status = 'InProcess' THEN 1 ELSE 0 END) AS CampaingsWithInProcessStatus
FROM CampaingControl cc INNER JOIN Truck t ON cc.vin = t.vin
WHERE t.location = loc.location
) stat
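If OUTER APPLY is not available to you, a plain LEFT JOIN with GROUP BY should produce the same report. This is only a sketch, assuming vin uniquely identifies a truck and the column names above match your schema:
SELECT t.location,
       COUNT(DISTINCT t.vin) AS Trucks,
       COUNT(cc.vin) AS TotalOfCampaings,   -- campaign rows; NULLs from the left join are not counted
       SUM(CASE WHEN cc.campaing_status = 'Complete' THEN 1 ELSE 0 END) AS CampaingsWithCompleteStatus,
       SUM(CASE WHEN cc.campaing_status = 'InProcess' THEN 1 ELSE 0 END) AS CampaingsWithInProcessStatus
FROM Truck t
LEFT JOIN CampaingControl cc ON cc.vin = t.vin
GROUP BY t.location;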
I have two tables.
FootballPlayers with columns Id_footballplayer, Last_Name, First_Name, Age
Transfers with columns Id_transfer, Name_club, price, date, acceptance (yes or no), code_footballplayer
How can I write a SQL query to select the last names of the players and the sum of the successful transfers carried out by them, where the number of those transfers exceeds 3?
I already wrote a query that displays the total amount of all successful transfers for each player
SELECT FootballPlayers.Last_Name,
SUM(CASE acceptance WHEN 'yes' THEN price ELSE 0 END) AS amount_price
FROM FootballPlayers
INNER JOIN Transfers ON FootballPlayers.ID_footballplayer = Transfers.code_footballplayer
GROUP BY FootballPlayers.Last_Name;
But I don't know how to add the condition that the number of successful transfers is more than 3.
Since this is a grouping scenario, after the GROUP BY you probably want:
HAVING COUNT(1) > 3
The HAVING clause works much like WHERE, but it is applied to the grouped results after aggregation rather than to the individual rows.
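Put together with your original query, that looks roughly like this (the comment notes a variant if only accepted transfers should count toward the threshold):
SELECT FootballPlayers.Last_Name,
       SUM(CASE acceptance WHEN 'yes' THEN price ELSE 0 END) AS amount_price
FROM FootballPlayers
INNER JOIN Transfers ON FootballPlayers.ID_footballplayer = Transfers.code_footballplayer
GROUP BY FootballPlayers.Last_Name
HAVING COUNT(1) > 3;
-- if only accepted transfers should count towards the threshold, use
-- HAVING SUM(CASE acceptance WHEN 'yes' THEN 1 ELSE 0 END) > 3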
An alternative would be the sub-query:
SELECT * FROM
(
SELECT FootballPlayers.Last_Name,
SUM(CASE acceptance WHEN 'yes' THEN price ELSE 0 END) AS amount_price,
COUNT(1) AS [Transfers]
FROM FootballPlayers
INNER JOIN Transfers ON FootballPlayers.ID_footballplayer = Transfers.code_footballplayer
GROUP BY FootballPlayers.Last_Name
) x
WHERE x.Transfers > 3
This outputs two columns: the first is the number of incidents, the second is a 1 or 0 for breached or not breached. I'm trying to see if there's a way to show this data arranged by month with a percentage of breached vs not breached.
SELECT TOP 5000
[incidents] = Count(task_sla.dv_sla),
task_sla.has_breached,
resolved_at
FROM
incident
left join task_sla
on incident.number = task_sla.dv_task
WHERE
dv_task like 'INC%'
and dv_sla like 'Resolution%'
GROUP BY
task_sla.has_breached,
resolved_at
I think you want something like this:
SELECT YEAR(i.date), MONTH(i.date),
COUNT(*) as num_incidents,
AVG(CASE WHEN t.has_breached = 1 THEN 1.0 ELSE 0 END) as ratio_breached
FROM incident i left join
task_sla t
on i.number = t.dv_task and
t.dv_task like 'INC%'
WHERE ?.dv_sla like 'Resolution%'
GROUP BY YEAR(i.date), MONTH(i.date)
ORDER BY YEAR(i.date), MONTH(i.date);
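If you want that last column as a percentage rather than a 0-1 ratio, you can scale it by 100.0. Here is a sketch of the same query with that change, and with the dv_sla filter moved into the join condition so it does not silently turn the left join into an inner join (I'm assuming dv_sla is a task_sla column):
SELECT YEAR(i.date), MONTH(i.date),
       COUNT(*) as num_incidents,
       -- multiply the 0-1 ratio by 100.0 to express it as a percentage
       100.0 * AVG(CASE WHEN t.has_breached = 1 THEN 1.0 ELSE 0 END) as pct_breached
FROM incident i left join
     task_sla t
     on i.number = t.dv_task and
        t.dv_task like 'INC%' and
        t.dv_sla like 'Resolution%'   -- assumed to live on task_sla
GROUP BY YEAR(i.date), MONTH(i.date)
ORDER BY YEAR(i.date), MONTH(i.date);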
I have an SQLite3 database with a table that I need to filter by several factors. One such factor is filtering out rows based on the content of other rows within the same table.
From what I've researched, a self JOIN is going to be required, but I am not sure how I would do that to filter the table by several factors.
Here is a sample table of the data:
Name Part # Status Amount
---------------------------------
Item 1 12345 New $100.00
Item 2 12345 New $15.00
Item 3 35864 Old $132.56
Item 4 12345 Old $15.00
What I need to do is find any items that have the same Part #, where one of them has an "Old" Status and the Amount is the same.
So, first we would get all rows with Part # "12345", and then check whether any of those rows have an "Old" status with a matching Amount. In this example, Item 2 and Item 4 would be the result.
What then needs to be done is to return the REST of the rows within the table that have a "New" Status, essentially discarding those two items.
Desired Output:
Name Part # Status Amount
---------------------------------
Item 1 12345 New $100.00
All "Old" status rows are removed, along with any "New" row that had a matching "Part #" and "Amount" with an "Old" status. (I'm sorry, I know that's very confusing, hence my need for help.)
I have looked into the following resources to try and figure this out on my own, but there are so many levels that I am getting confused.
Self-join of a subquery
ZenTut
Compare rows and columns of same table
The first two links dealt with comparing columns within the same table. The third one does seem to be a pretty similar question, but does not have a readable answer (for me, anyway).
I do Java development as well and it would be fairly simple to do this there, but I am hoping for a single SQL query (nested), if possible.
The "not exists" statement should do the trick:
select * from table t1
where t1.Status = 'New'
and not exists (select * from table t2
where t2.Status = 'Old'
and t2.Part = t1.Part
and t2.Amount = t1.Amount);
This is a T-SQL answer; hope it is translatable. If you have a big data set for matches, you might change the NOT IN to NOT EXISTS.
select *
from table
where Name not in(
select t1.Name
from table t1
join table t2
on t1.PartNumber = t2.PartNumber
AND t1.Status='New'
AND t2.Status='Old'
and t1.Amount=t2.Amount)
and Status = 'New'
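For reference, the NOT EXISTS form of that same filter would look roughly like this (still using the placeholder table name):
select *
from table t0
where t0.Status = 'New'
and not exists(
    select 1
    from table t1
    join table t2
    on t1.PartNumber = t2.PartNumber
    AND t1.Status = 'New'
    AND t2.Status = 'Old'
    and t1.Amount = t2.Amount
    where t1.Name = t0.Name)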
You could use an inner join onto a grouped select that finds the part numbers having an 'Old' status and not only that status:
select * from
my_table
INNER JOIN (
    select
    Part_#
    , Amount
    , count(distinct Status) as status_count
    , sum(case when Status = 'Old' then 1 else 0 end) as old_count
    from my_table
    group by Part_#, Amount
    having count(distinct Status) > 1
    and sum(case when Status = 'Old' then 1 else 0 end) > 0
) t on t.Part_# = my_table.Part_#
and my_table.Status = 'New'
and my_table.Amount <> t.Amount
I tried to understand what you want as best I could...
SELECT DISTINCT yt.PartNum, yt.Status, yt.Amount
FROM YourTable yt
JOIN YourTable yt2
ON yt2.PartNum = yt.PartNum
AND yt2.Status = 'Old'
AND yt2.Amount != yt.Amount
WHERE yt.Status = 'New'
This gives everything with a new status that has an old status with a different price.
Firstly, sorry for the vague title; I wasn't sure how to explain it. Looking at the query below, I want to pull out incidents where 'truck1' attended, not just on its own but also when other vehicles attended with it. I'm sure it's something straightforward, but I can't work it out.
select
i.incident_number,
vehicle,
countofvehicles
FROM
(
SELECT
i.Incident_Number,
array_agg(ir.RESOURCE) as vehicle,
count(ir.RESOURCE) as countofvehicles
FROM INCIDENT as i
JOIN RESOURCE as ir on i.Incident_Number = ir.Incident_Number
--WHERE ir. like '%Truck1%'
GROUP BY i.Incident_Number) i
Result
incident_Number vehicle countofvehicle
1 car1,car2,bike1 3
2 car1,car2,truck1 3
3 truck1 1
4 car1 1
If I wanted to see only the incidents truck1 attended, using WHERE ir.RESOURCE like '%truck1%' would only bring back incident number 3 and not incident 2, where it attended with other vehicles. How can I get around this, please?
Thanks
You don't need a subquery for this, just a having clause:
SELECT i.Incident_Number,
array_agg(ir.RESOURCE) as vehicle,
count(ir.RESOURCE) as countofvehicles
FROM INCIDENT i JOIN
RESOURCE ir
ON i.Incident_Number = ir.Incident_Number
GROUP BY i.Incident_Number
HAVING SUM(CASE WHEN ir.RESOURCE like '%truck1%' THEN 1 ELSE 0 END) > 0;
In fact, you probably don't need the join either, because the only field you are taking from INCIDENT is also in RESOURCE:
SELECT ir.Incident_Number,
array_agg(ir.RESOURCE) as vehicle,
count(ir.RESOURCE) as countofvehicles
FROM RESOURCE ir
GROUP BY ir.Incident_Number
HAVING SUM(CASE WHEN ir.RESOURCE like '%truck1%' THEN 1 ELSE 0 END) > 0;
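Since array_agg suggests PostgreSQL, bool_or is an even more direct way to write that condition; this is a sketch under the same assumptions about the pattern:
SELECT ir.Incident_Number,
       array_agg(ir.RESOURCE) as vehicle,
       count(ir.RESOURCE) as countofvehicles
FROM RESOURCE ir
GROUP BY ir.Incident_Number
HAVING bool_or(ir.RESOURCE like '%truck1%');   -- true if any row in the group matches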
I have 2 validation queries that I use in 2 separate functions in VB.NET. These 2 functions are called for every order_num that is processed through my application. I would like to combine the two CASE queries into one, with a single result value of either 1 or 0. Thanks in advance.
First Query:
select case when EXISTS (
select 1
from [Sonora].[dbo].[tbl_Informatics_Orders_Data] with (nolock)
where segment IN ('PID','ZPI', 'ZRQ', 'ZSI') AND order_num = '780630021555'
) then 1 else 0 end as [SegmentsExist]
Second Query:
SELECT CASE
WHEN
(SELECT COUNT(*) As result_count FROM [Sonora].[dbo].[tbl_Informatics_Orders_Data]
with (nolock) WHERE segment = 'OBR' AND order_num = '780630021555')
=
(SELECT COUNT(*) As result_count FROM [Sonora].[dbo].[tbl_Informatics_Orders_Data]
with (nolock) WHERE segment = 'OBX' AND order_num = '780630021555')
THEN 1
ELSE 0
End AS returnValue
Probably the bigger issue you'll have is your program structure. You mention that these queries (or one combined query) will be run against 30k-40k orders. That's a ridiculous number of times to run a single query, especially when you could be running this in a set-based fashion.
I would advise refactoring your queries to run for the entire dataset, store the results as a dataset in VB, then do what you will from there. You're never going to get good performance if you're running a seek/scan based operation for every single row in a database as opposed to returning a dataset.
EDIT: A little more about datasets.
When you're running queries for a single result (WHERE order_num = '780630021555'), the database has to look at every single row in the table to make sure it finds all records where order_num = '780630021555'. Now, you say there are 40k records, which means for every single one of those 40,000 lookups, the database must look at every single record of the table. Scanning a 40,000 row table 40,000 times adds up to about 1.6 billion rows being read. SQL will try to optimize and may scan the table less, but this is essentially what you're doing.
The ideal way to do it is to return data for the entire table. Write your query to return a 1 or 0 for ALL the order numbers, then do your processing from there. This way, the 40k rows of the table are only read once. Something like this:
SELECT IOD.order_num,
    CASE WHEN (IOD.segment IN ('PID','ZPI', 'ZRQ', 'ZSI') OR OBR_Count = OBX_Count) THEN 1 ELSE 0 END AS Valid_Record
FROM [Sonora].[dbo].[tbl_Informatics_Orders_Data] IOD with (nolock)
JOIN
(
SELECT order_num, COUNT(segment) OBR_Count
FROM [Sonora].[dbo].[tbl_Informatics_Orders_Data] with (nolock)
WHERE segment = 'OBR'
GROUP BY order_num
) OBR
ON IOD.order_num = OBR.order_num
JOIN
(
SELECT order_num, COUNT(segment) OBX_Count
FROM [Sonora].[dbo].[tbl_Informatics_Orders_Data] with (nolock)
WHERE segment = 'OBX'
GROUP BY order_num
) OBX
ON IOD.order_num = OBX.order_num
This should return a row for every single order number in your table. It will return it with a Valid_Record column, indicating whether the criteria you specified is true (a 1) or it is not (a 0). I haven't run this query but it looks right. It will take a while to run, maybe a minute, but I can guarantee the operation you're doing selecting every single result individually is taking many times longer.
I work with several tables in SQL Server that have over a billion rows on a daily basis. In those cases, very small changes in queries make a big difference in query times. In tables that have 40k rows max, you will get much more performance out of refactoring the query to give you a dataset instead of giving you single results 40k times.
If the logic is "or", you can do:
select (case when EXISTS (select 1
from [Sonora].[dbo].[tbl_Informatics_Orders_Data] with (nolock)
where segment IN ('PID','ZPI', 'ZRQ', 'ZSI') AND
order_num = '780630021555'
)
then 1
when (SELECT COUNT(*) As result_count
FROM [Sonora].[dbo].[tbl_Informatics_Orders_Data] with (nolock)
WHERE segment = 'OBR' AND order_num = '780630021555') =
(SELECT COUNT(*) As result_count
FROM [Sonora].[dbo].[tbl_Informatics_Orders_Data] with (nolock)
WHERE segment = 'OBX' AND order_num = '780630021555'
)
then 1
else 0
end) as [SegmentsExist];
If the logic is "and":
select (case when EXISTS (select 1
from [Sonora].[dbo].[tbl_Informatics_Orders_Data] with (nolock)
where segment IN ('PID','ZPI', 'ZRQ', 'ZSI') AND
order_num = '780630021555'
) and
(SELECT COUNT(*) As result_count
FROM [Sonora].[dbo].[tbl_Informatics_Orders_Data] with (nolock)
WHERE segment = 'OBR' AND order_num = '780630021555') =
(SELECT COUNT(*) As result_count
FROM [Sonora].[dbo].[tbl_Informatics_Orders_Data] with (nolock)
WHERE segment = 'OBX' AND order_num = '780630021555'
)
then 1
else 0
end) as [SegmentsExist];
EDIT:
You can simplify the second condition to directly return 1 or 0:
(SELECT (case when sum(case when segment = 'OBR' then 1 else 0 end) =
sum(case when segment = 'OBX' then 1 else 0 end)
then 1 else 0
end)
FROM [Sonora].[dbo].[tbl_Informatics_Orders_Data] with (nolock)
WHERE segment IN ('OBR', 'OBX') AND order_num = '780630021555'
)
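Plugged back into the "and" version, the whole statement would then look roughly like this:
select (case when EXISTS (select 1
                          from [Sonora].[dbo].[tbl_Informatics_Orders_Data] with (nolock)
                          where segment IN ('PID','ZPI', 'ZRQ', 'ZSI') AND
                                order_num = '780630021555'
                         ) and
                  (SELECT (case when sum(case when segment = 'OBR' then 1 else 0 end) =
                                     sum(case when segment = 'OBX' then 1 else 0 end)
                                then 1 else 0
                           end)
                   FROM [Sonora].[dbo].[tbl_Informatics_Orders_Data] with (nolock)
                   WHERE segment IN ('OBR', 'OBX') AND order_num = '780630021555'
                  ) = 1
             then 1
             else 0
        end) as [SegmentsExist];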
Where do you get the order_num from? If it's based on a query against the very same database, I'd rather go for a totally different approach and let SQL Server perform the evaluation in a set-based fashion, rather than trying to speed up the function call and still calling it a few thousand times...