I have SQL like this (where $ytoday is 5 days ago):
$sql = 'SELECT Count(*), created_at FROM People WHERE created_at >= "'. $ytoday .'" GROUP BY DATE(created_at)';
I want this to return a value for every day, so it would return 5 results in this case (5 days ago until today).
But say Count(*) is 0 for yesterday, instead of returning a zero it doesn't return any data at all for that date.
How can I change that SQLite query so it also returns data that has a count of 0?
Without convoluted (in my opinion) queries, your output data-set won't include dates that don't exist in your input data-set. This means that you need a data-set with the 5 days to join on to.
The simple version would be to create a table with the 5 dates, and join on that. I typically create and keep (effectively caching) a calendar table with every date I could ever need. (Such as from 1900-01-01 to 2099-12-31.)
SELECT
calendar.calendar_date,
Count(People.created_at)
FROM
Calendar
LEFT JOIN
People
ON Calendar.calendar_date = DATE(People.created_at)
WHERE
Calendar.calendar_date >= '2012-05-01'
GROUP BY
Calendar.calendar_date
You'll need to left join against a list of dates. You can either create a table with the dates you need in it, or you can take the dynamic approach I outlined here:
generate days from date range
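If you'd rather not maintain a calendar table, the linked answer shows a MySQL-flavoured way to generate the dates; for SQLite itself (3.8.3 or later, which added recursive CTEs), a recursive CTE does the same job. A minimal sketch, assuming created_at may carry a time component, hence the DATE() on the join:

WITH RECURSIVE days(day) AS (
    SELECT DATE('now', '-5 days')
    UNION ALL
    SELECT DATE(day, '+1 day') FROM days WHERE day < DATE('now')
)
SELECT days.day, COUNT(People.created_at) AS cnt
FROM days
LEFT JOIN People ON DATE(People.created_at) = days.day
GROUP BY days.day;

COUNT(People.created_at) counts only matched rows, so days with no People rows come back as 0 rather than disappearing.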
Objective: Pull the last 30 days of data from a table that has a variable end date
Background: I have a table that contains purchase information but this table is only updated every two weeks therefore there's a lag in the data. Some day it can be 14 days behind and others 13 or 15 days behind.
My table contains a DATE_KEY column which joins to the DATE_DIM table on this key which is where I pull my date field from. I would use GETDATE or CURRENT_DATE but this is not appropriate in my case due to the lag.
I am using Sybase IQ and I believe I can't use a select statement in the where clause to compare dates; I got the following error:
Feature, scalar value subquery (at line 63) outside of a top level SELECT list, is not supported.
This is what I was trying to do
WHERE
TIME.[DAY] >= DATEADD(dd,-30,( SELECT
MAX([TIME1].[DAY])
FROM DB.DATE_DIM TIME1
JOIN DB.PURCHASES PURC
ON TIME1.KEY = PURC.KEY))
How can I pull the most recent 30 days of data given the constraints above?
According to the Sybase IQ documentation, you can compare a column to a subquery, so you could add a join to DATE_DIM in the main FROM clause and then compare its date column to a subquery similar to yours, just with the DATEADD moved inside it. In the following code, I assume the alias for DATE_DIM in the main FROM clause is TIME0.
WHERE
TIME0.[DAY] >= (SELECT DATEADD(dd,-30, MAX([TIME1].[DAY]))
FROM DB.DATE_DIM TIME1
JOIN DB.PURCHASES PURC
ON TIME1.KEY = PURC.KEY
)
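For context, a minimal sketch of how the full statement might fit together; the outer FROM clause, the PURC0 alias, and the selected columns are assumptions based on the fragments in the question, not the asker's actual query:

SELECT PURC0.*, TIME0.[DAY]
FROM DB.PURCHASES PURC0
JOIN DB.DATE_DIM TIME0
    ON TIME0.KEY = PURC0.KEY            -- assumed join, mirroring the subquery
WHERE TIME0.[DAY] >= (SELECT DATEADD(dd, -30, MAX(TIME1.[DAY]))
                      FROM DB.DATE_DIM TIME1
                      JOIN DB.PURCHASES PURC
                          ON TIME1.KEY = PURC.KEY)

Because the subquery now returns a single scalar value at the top level of the comparison, Sybase IQ no longer rejects it.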
How can I write the query in such a way that it shows each day between the dates?
SELECT job.wo_id
FROM [MES].[MESDB].[dbo].[job] AS job
WHERE job.init_sched_ent_id = 227
AND job.sched_start_time_local >= @paramStartDate
AND job.sched_finish_time_local <= @paramEndDate
I can select each day separately using the function from this post, but I don't know how to combine these two tables.
https://stackoverflow.com/a/44671808/8853661
You can generate dates in MySQL and cross join them with your select.
A good description is here: generate days from date range
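The bracketed four-part table name in the question suggests SQL Server rather than MySQL; there, a recursive CTE can generate the days directly. A minimal sketch, under the assumption that a job should appear once for every calendar day it spans and that the parameters are date/datetime values:

WITH days AS (
    SELECT CAST(@paramStartDate AS date) AS d
    UNION ALL
    SELECT DATEADD(day, 1, d) FROM days WHERE d < CAST(@paramEndDate AS date)
)
SELECT days.d, job.wo_id
FROM days
LEFT JOIN [MES].[MESDB].[dbo].[job] AS job
    ON job.init_sched_ent_id = 227
   AND days.d BETWEEN CAST(job.sched_start_time_local AS date)
                  AND CAST(job.sched_finish_time_local AS date)
ORDER BY days.d
OPTION (MAXRECURSION 366);   -- raise this if the range can exceed a year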
I have a table storing a datetime column, which is indexed. I'm trying to find a way to compare ONLY the month and day (ignores the year totally).
Just for the record, I would like to say that I'm already using MONTH() and DAY(). But I'm encountering the issue that my current implementation uses Index Scan instead of Index Seek, due to the column being used directly in both functions to get the month and day.
There could be 2 types of references for comparison: a fixed given date and today (GETDATE()). The date will be converted based on time zone, and then have its month and day extracted, e.g.
DECLARE @monthValue INT = MONTH(@ConvertDateTimeFromServer_TimeZone);
DECLARE @dayValue INT = DAY(@ConvertDateTimeFromServer_TimeZone);
Another point is that the column stores datetime with different years, e.g.
1989-06-21 00:00:00.000
1965-10-04 00:00:00.000
1958-09-15 00:00:00.000
1965-10-08 00:00:00.000
1942-01-30 00:00:00.000
Now here comes the problem. How do I create a SARGable query to get the rows in the table that match the given month and day regardless of the year, without involving the column in any functions? Existing examples on the web use years and/or date ranges, which doesn't help in my case at all.
A sample query:
Select t0.pk_id
From dob t0 WITH(NOLOCK)
WHERE ((MONTH(t0.date_of_birth) = @monthValue AND DAY(t0.date_of_birth) = @dayValue))
I've also tried DATEDIFF() and DATEADD(), but they all end up with an Index Scan.
Adding to the comment I made about using a Calendar Table.
This will, probably, be the easiest way to get a SARGable query. As you've discovered, MONTH([YourColumn]) and DATEPART(MONTH,[YourColumn]) both cause your query to become non-SARGable.
Considering that all your columns, at least in your sample data, have a time of 00:00:00 this "works" to our advantage, as they are effectively just dates. This means we can easily JOIN onto a Calendar Table using something like:
SELECT dob.[YourColumn]
FROM dob
JOIN CalendarTable CT ON dob.DateOfBirth = CT.CalendarDate;
Now, if we're using the table from the above article, you will have created some extra columns (MonthNo and CDay; you can call them whatever you want, really). You can then add those columns to your query:
SELECT dob.[YourColumn]
FROM dob
JOIN CalendarTable CT ON dob.DateOfBirth = CT.CalendarDate
WHERE CT.MonthNo = @MonthValue
AND CT.CDay = @DayValue;
This, as you can see, is a more SARGable query.
If you want to deal with Leap Years, you could add a little more logic using a CASE expression:
SELECT dob.[YourColumn]
FROM dob
JOIN CalendarTable CT ON dob.DateOfBirth = CT.CalendarDate
WHERE CT.MonthNo = @MonthValue
AND CASE WHEN DATEPART(YEAR, GETDATE()) % 4 != 0 AND CT.CDay = 29 AND CT.MonthNo = 2 THEN 28 ELSE CT.CDay END = @DayValue;
This treats someone's birthday on 29 February as 28 February on years that aren't leap years (when DATEPART(YEAR, GETDATE()) % 4 != 0).
It's also probably worth noting that it would likely be worthwhile changing your DateOfBirth column to a date. Dates of birth aren't at a given time, only on a given date, and it would also mean there's no implicit conversion from datetime to date when joining to your Calendar Table.
Edit: Also, just noticed, why are you using NOLOCK? You do know what that does, right..? Unless you're happy with dirty reads and ghost data?
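For reference, a minimal sketch of how such a calendar table could be created and populated; the 100-year span and the row-generation trick are arbitrary choices, and only the CalendarDate, MonthNo and CDay names used in the queries above are assumed:

-- One row per day for roughly 100 years; adjust the range to suit
CREATE TABLE CalendarTable (
    CalendarDate date    NOT NULL PRIMARY KEY,
    MonthNo      tinyint NOT NULL,
    CDay         tinyint NOT NULL
);

WITH n AS (
    SELECT TOP (36525) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS i
    FROM sys.all_objects a CROSS JOIN sys.all_objects b
)
INSERT INTO CalendarTable (CalendarDate, MonthNo, CDay)
SELECT x.d, MONTH(x.d), DAY(x.d)
FROM n
CROSS APPLY (SELECT DATEADD(day, n.i, CAST('19500101' AS date)) AS d) AS x;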
I'm trying to compare data from an Access 2010 database based on a date interval. For example, I have items from various purchase orders and I want to maintain the history of these items' delivery to a warehouse. So a purchase order has a request for a quantity of 10 of a material, for example; it can be partially delivered across many deliveries, and I want to know how the delivery varied over a date interval. To fill the date field, the criteria used are the following: if the item had an update in the QtyPending field, I copy the current row, deactivate it with a boolean field, and create a new entry with the current update date and the updated QtyPending field, so the active record is the actual state of the item. So I have a table that holds information about these items like this:
PO          POItem  QtyPending  Date        Active
4500000123  10      10          01/09/2014  FALSE
4500000123  10      8           05/09/2014  TRUE
4500000122  30      5           03/09/2014  FALSE
4500000122  30      1           04/09/2014  TRUE
With this example, for the first item, it means that from 01/09 to 04/09 the QtyPending field didn't change, meaning that the supplier didn't make any delivery to me, but from 01/09 to 05/09 he delivered a qty of 2 of a material. For the second one, from 03/09 to 04/09 the supplier delivered a qty of 4 of a material. So, if I were to make a report query from 02/09/2014 to 04/09/2014, the expected output would be like this:
PO          POItem  QtyDelivered
4500000123  10      0
4500000122  30      4
And a report from 31/08/2014 to 10/09/2014 would have this output:
PO          POItem  QtyDelivered
4500000123  10      2
4500000122  30      4
I'm not coming up with a query to make this report. Can anyone help me?
There are many ways of solving this. The easiest would be to simply query all the necessary records between the two dates, loop over them, and insert the results into a temporary table. That temporary table can then be the source of your report. A lot of people will scream at you for not using one big query instead, but getting the result you want in the fastest and simplest way should be your priority.
The problem with your schema is that you don't store QtyDelivered on each record. If you had it, it would be easy to sum it to get the needed result. By not storing this value, you have turned a simple, fast query into a much harder and slower one, because you need to recalculate this value one way or another, and you must do so without forgetting that there may be more than two records per item.
To calculate this value, you can either use a subquery to retrieve the value from the previous row or a Left Join to do the same. Once you have that value, you can subtract the two to get the needed difference, allowing for a Null value when there is no previous row. Once you have these differences, you can sum over them with a Group By to get the final result. Notice that in order to perform these calculations, you need one or two more levels of subquery. The first query should be something like:
Select PO, POItem, QtyPending,
    (Select Top 1 QtyPending
     from MyTable T2
     where T1.PO = T2.PO and T1.POItem = T2.POItem
       and T2.Date < T1.Date
       and (T2.Date between @Date1 and @Date2)
     Order by T2.Date Desc) as QtyPending2
from MyTable T1
Where T1.Date between @Date1 and @Date2 ...
With this as either another subquery or as a View, you can then compute the desired difference by comparing the values of QtyPending and QtyPending2, without forgetting that QtyPending2 may be Null. The remaining steps are easy to do.
Notice that the above example is for SQL Server; you might have to change it a little for Access. In any case, you can find here many examples of how to compare two rows in Access. As noted earlier, you can also use a Left Join instead of a subquery to compare your rows.
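To make "the remaining steps" concrete, here is a minimal sketch of the outer aggregation, assuming the statement above has been saved as a query or view named ItemWithPrev (a made-up name); a Null QtyPending2, i.e. no previous row inside the interval, is treated as "no delivery yet":

SELECT PO, POItem,
       SUM(COALESCE(QtyPending2, QtyPending) - QtyPending) AS QtyDelivered  -- each row contributes the drop since the previous row
FROM ItemWithPrev
GROUP BY PO, POItem;

The per-item sum telescopes to the total quantity delivered inside the interval; in Access you would replace COALESCE with Nz.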
I came up with this query that solved the problem; it wasn't that simple:
SELECT
ItmDtIni.PO
,ItmDtIni.POItem AS [PO Item]
,ROUND(ItmDtIni.QtyPending - ItmDtEnd.QtyPending, 3) AS [Qty Delivered]
,ROUND((ItmDtIni.QtyPending - ItmDtEnd.QtyPending) * ItmDtEnd.Price, 2) AS [Value delivered(US$)]
//Filtering subqueries to bring only the items in the date interval to make a self join
FROM (((SELECT
PO
,POItem
,QtyPending
,MIN(Date) AS MinDate
FROM Item
WHERE Date BETWEEN FORMAT(begin_date, 'dd/mm/yyyy') AND FORMAT(end_date, 'dd/mm/yyyy')
GROUP BY
PO
,POItem
,QtyPending) AS ItmDtIni
//Self join filtering to bring only items in the date interval with the previously filtered table
INNER JOIN (SELECT
PO
,POItem
,QtyPending
,Price
,MAX(Date) AS MaxDate
FROM Item
WHERE Date BETWEEN FORMAT(begin_date, 'dd/mm/yyyy') AND FORMAT(end_date, 'dd/mm/yyyy')
GROUP BY
PO
,POItem
,QtyPending
,Price) AS ItmDtEnd
ON ItmDtIni.PO = ItmDtEnd.PO
AND ItmDtIni.POItem = ItmDtEnd.POItem)
INNER JOIN PO
ON ItmDtEnd.PO = PO.Numero)
WHERE
//Showing only items that had a variation in the date interval
ROUND(ItmDtIni.QtyPending - ItmDtEnd.QtyPending, 3) <> 0
//Anchoring min date in the interval for each item found by the first subquery
AND ItmDtIni.MinDate = (SELECT MIN(Item.Date)
FROM Item
WHERE
ItmDtIni.PO = Item.PO
AND ItmDtIni.POItem = Item.POItem
AND Date BETWEEN FORMAT(begin_date, 'dd/mm/yyyy') AND FORMAT(end_date, 'dd/mm/yyyy'))
//Anchoring max date in the interval for each item found by the second subquery
AND ItmDtEnd.MaxDate = (SELECT MAX(Item.Date)
FROM Item
WHERE
ItmDtEnd.PO = Item.PO
AND ItmDtEnd.POItem = Item.POItem
AND Date BETWEEN FORMAT(begin_date, 'dd/mm/yyyy') AND FORMAT(end_date, 'dd/mm/yyyy'))
I have 2 tables with no relations; both tables have different numbers of columns, but a few of the columns are the same, though they hold different data. I was able to create a function or view of only the data I wanted, but when I try to count the data filtered by date, I always get the wrong count back. Let me explain by showing the 2 functions and what I am trying to do:
Function 1
ID - number from 1 to 8
data sent - YES or NO
Date - date value
Function 2
ID - number from 1 to 8
data sent - yes or no
date - date value
Upon running both separately, I get all the rows from the tables and everything looks good.
Then I try to add the following to each function:
select
count([data sent]), ID
from function1
Where (date between @date1 and @date2)
group by ID
The above statement works great and gives me the right result for each function.
Now I thought: what if I combine those 2 functions into one and get the counts from both functions on one page?
So I created the following function:
Function 3
select
count(Function1.[data sent]) as Expr1,
Function1.id,
count(Function2.[data sent]) as Expr2,
Function1.date
from
Function1
LEFT OUTER JOIN
Function2 on Function1.id = Function2.id
Where
(Function1.date between @date1 and @date2)
group by
Function1.id
Upon running the above, I get the following table:
ID Expr1 Expr2
In both Expr1 and Expr2 I get results, and I am not sure where they come from. I guess something is being multiplied by 100000, since one table holds almost 15000 rows and the other around 5000 rows.
What I would like to know first is whether it is possible at all to filter by date and count records from both tables at the same time. If anyone needs more information, please let me know and I will be glad to share and explain more.
Thank you
The LEFT OUTER JOIN is taking each row of the left table, finding ALL of the rows in the right table with the same id field, and creating that many rows in the result table. Since id isn't what we usually think of as an identity field (it looks more like a "deviceId" or something), you'll get lots of matches for each one. Repeat 15000 times and you get your combinatorial explosion.
Tip: To debug things like this, you can create sample tables with a tiny subset of the real data, say 10 rows from each, and run your query on them. You'll see the issue immediately.
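For example, a quick sketch of that debugging approach, assuming SQL Server temp tables (the sample-table names are made up):

-- Copy a tiny slice of each source into temp tables
SELECT TOP (10) * INTO #f1_sample FROM Function1;
SELECT TOP (10) * INTO #f2_sample FROM Function2;

-- Re-run just the join on the samples; the row multiplication is easy to see at this size
SELECT f1.id, COUNT(*) AS joined_rows
FROM #f1_sample f1
LEFT JOIN #f2_sample f2 ON f1.id = f2.id
GROUP BY f1.id;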
It's possible to filter by date. It's hard to recommend an actual solution without better understanding your phrase "I want to add those 2 functions into one and get the count from both functions on 1 page".
Why can't you create a temporary table for each function then join them together?
Maybe subqueries can help you to achieve what you want:
SELECT
ID = COALESCE(f1.ID, f2.ID),
Date = COALESCE(f1.Date, f2.Date),
f1.Expr1,
f2.Expr2
FROM (
SELECT
ID,
Date,
Expr1 = COUNT([data sent])
FROM Function1
WHERE Date BETWEEN @date1 AND @date2
GROUP BY
ID,
Date
) f1
FULL JOIN (
SELECT
ID,
Date,
Expr2 = COUNT([data sent])
FROM Function2
WHERE Date BETWEEN @date1 AND @date2
GROUP BY
ID,
Date
) f2
ON f1.ID = f2.ID AND f1.Date = f2.Date
This query also uses full (outer) join instead of left join, in case the right side of the join contains rows that have no match in the left side (and you want those rows).