I have a table of data with a date field, which I have set as the Start Date column. I want to create an additional End Date column, where each row's End Date is the Start Date of the next row. Can you give me a query that populates the End Date from the Start Date of the next row?
First of all, you have to come up with a definition of "order", since rows in a table are stored without any order.
When you know what your order is, you can create a stored procedure that goes:
insert into the_table (new_id, start_date) values (@id, @start_date);
update the_table
set end_date = @start_date
where id = <the id determined by your sorting rule>;
I'm assuming that you currently have rows with values such as
StartDate
---------
1 Jan 1990
2 June 1998
4 September 2006
And you want to change to
StartDate        EndDate
---------------- ----------------
1 Jan 1990       2 June 1998
2 June 1998      4 September 2006
4 September 2006 NULL
Quite apart from the redundancy and maintenance issues, this reminds me of this question, where such a setup with correlated columns actually caused the original poster problems when querying the data. (I prefer Unreason's answer to my own on that question!)
Why do you need to add the EndDate column? It will probably be possible to come up with a query that works without it.
Edit: After much faffing about with row_number(), I actually couldn't find a query with a better plan than this (assumes an index on StartDate):
SELECT
    id,
    StartDate,
    (SELECT MIN(StartDate)
     FROM testTable t2
     WHERE t2.StartDate > t1.StartDate) AS EndDate
FROM testTable t1
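For reference, a minimal sketch of the index this assumes (the index name is made up; the table and column follow the example above):

-- Supporting index so the correlated MIN(StartDate) lookup can seek instead of scan
CREATE INDEX IX_testTable_StartDate ON testTable (StartDate);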
Assuming you already have your columns and that you have an Auto-Incrementing Primary Key:
Update T1
Set T1.EndDate = T2.StartDate
From [Table] T1
Inner Join [Table] T2 on T1.Id = T2.Id - 1
Depends on what you mean by "next" row.
Can you provide sample dataset, and specify how you determine what order the rows go in?
EDIT
Your record order really does matter -- you're going to have to determine what that is. For now, I'm working off of the assumption that ordering it by start_date is acceptable.
--Get the first relevant start date
declare @start datetime
set @start = (select MIN(start_date) from [table])

declare @end datetime
set @end = @start

WHILE @end is not null
BEGIN
    --Get the next relevant end date
    SET @end = (select MIN(start_date) from [table] where start_date > @start)

    --Update the table with the end date
    UPDATE [table]
    SET end_date = @end
    WHERE start_date = @start

    --Get the next relevant start date
    SET @start = @end
END
What about the last row? Will the EndDate be blank for that one?
I'm answering this question because it is being referenced somewhere else.
Depending on the id having no gaps is dangerous: identity columns can have gaps, which the currently accepted answer does not take into account.
In SQL Server 2012+, the answer is simply lead() (a sketch follows the cross apply query below). In earlier versions, you can use cross apply:
Update T1
Set T1.EndDate = T2.StartDate
From [Table] T1 cross apply
    (select top 1 t2.*
     from [Table] T2
     where t2.StartDate > t1.StartDate
     order by t2.StartDate asc
    ) t2;
With an index on table(StartDate), this might even have reasonable performance.
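For SQL Server 2012+, here is a minimal sketch of the lead()-based version mentioned above (using the same assumed [Table], StartDate and EndDate names):

-- lead(StartDate) gives the next row's StartDate in StartDate order; the last row gets NULL
WITH T AS
(
    select EndDate,
           lead(StartDate) over (order by StartDate) as NextStartDate
    from [Table]
)
update T
set EndDate = NextStartDate;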
In this table there are three columns, and I need the values for the data that come before code = 28. This is my query:
SELECT value,code,date
FROM table
order by date,vchcode
but when I add a where clause like
SELECT value,code,date
FROM table
where code < 28
order by date,vchcode
it only shows 2 rows, with code 26 and 27... I need 26, 27 and 32.. and the table columns are variable, they are not fixed..
I think you want to take the date into account -- what you really want are all rows before the date of the row with code 28.
One method uses a subquery:
SELECT t.value, t.code, t.date
FROM table t
WHERE date < (SELECT date FROM table t2 WHERE t2.code = 28)
ORDER BY t.date, t.vchcode
I'm building a report that needs to show how many users were upgraded from account status 1 to account status 2 each hour for the last week (and delete hours where the upgrades = 0). My table has an updated date; however, it isn't certain that the account status is the item being updated (it could be contact information, etc.).
The basic table config that I'm working with is below. There are other columns but they aren't needed for my query.
account_id, account_status, updated_date.
My initial idea was to first filter and look at the data for the current week, then find if they were at account_status = 1 and later account_status = 2.
What's the best way to tackle this?
This is the kind of thing that you would use a SELF JOIN for. It's tough to say exactly how to do this without getting any kind of example data, but hopefully you can build off of this at least. There are a lot of tutorials on how to write a successful self join, so I'd refer to those if you're having difficulties.
select a.account_id
from tableName a
inner join tableName b
    on a.account_id = b.account_id
where
    (a.DateModified > 'YYYY-MM-DD' and a.account_status = 1)
and
    (b.DateModified < 'YYYY-MM-DD' and b.account_status = 2)
Maybe you could try to rank all the updates older than a given status-2 update for an account, by the timestamp descending. If an entry with status 1 and rank 1 exists, you know that the respective younger update changed the status from 1 to 2.
SELECT *
FROM elbat t1
WHERE t1.account_status = 2
AND EXISTS (SELECT *
FROM (SELECT rank() OVER (ORDER BY t2.updated_date DESC) r,
t2.account_status
FROM elbat t2
WHERE t2.account_id = t1.account_id
AND t2.updated_date < t1.updated_date) x
WHERE x.account_status = 1
AND x.r = 1);
Then, to get the hours, you could create a table variable and fill it with a week's worth of hours (unless you already have a suitable calendar/time table). Then INNER JOIN that table (variable) to the result from above. Since it's an INNER JOIN, hours where no status update exists won't be in the result.
DECLARE @current_time datetime = getdate();
DECLARE @current_hour datetime = dateadd(hour,
                                         datepart(hour,
                                                  @current_time),
                                         convert(datetime,
                                                 convert(date,
                                                         @current_time)));
DECLARE @hours
        TABLE (hour datetime);
DECLARE @interval_size integer = 7 * 24;
WHILE @interval_size > 0
BEGIN
  INSERT INTO @hours
              (hour)
              VALUES (dateadd(hour,
                              -1 * @interval_size,
                              @current_hour));
  SET @interval_size = @interval_size - 1;
END;
SELECT *
FROM #hours h
INNER JOIN (SELECT *
FROM elbat t1
WHERE t1.account_status = 2
AND EXISTS (SELECT *
FROM (SELECT rank() OVER (ORDER BY t2.updated_date DESC) r,
t2.account_status
FROM elbat t2
WHERE t2.account_id = t1.account_id
AND t2.updated_date < t1.updated_date) x
WHERE x.account_status = 1
AND x.r = 1)) y
ON convert(date, y.updated_date) = convert(date, h.hour)
AND datepart(hour, y.updated_date) = datepart(hour, h.hour);
If you use this often and/or performance is important, you might consider introducing persisted, computed and indexed columns for the convert(...) and datepart(...) expressions and using them in the query instead. Indexing the calendar/time table and the columns used in the subqueries is also worth considering.
(Disclaimer: Since you didn't provide DDL for the table or any sample data, this is totally untested.)
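As a rough sketch of what those persisted computed columns and indexes might look like (the column and index names here are made up for illustration, on the assumed elbat table from above):

-- Persist the date and hour parts of updated_date as computed columns
ALTER TABLE elbat ADD updated_date_day AS convert(date, updated_date) PERSISTED;
ALTER TABLE elbat ADD updated_date_hour AS datepart(hour, updated_date) PERSISTED;

-- Index them, plus the columns the subqueries filter and order on
CREATE INDEX IX_elbat_day_hour ON elbat (updated_date_day, updated_date_hour);
CREATE INDEX IX_elbat_account_date ON elbat (account_id, updated_date, account_status);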
Tried to come up with an SQL query in MS Access, but null values and an aggregate function have me stumped. Any help appreciated.
Query to show records from TABLE1 where the EditDate (which may be null) is greater than the maximum LastImportDate from TABLE2.
TABLE1
Field Name - Data Type
ReportID - Number
EditDate - Date/Time
TABLE2
Field Name - Data Type
LastImportDate - Date/Time
Thank you.
SELECT *
FROM table1
WHERE editDate > (
SELECT max(lastImportDate)
FROM table2
)
Not sure how exactly that translates to an Access query, but that is the idea.
Additionally, if you could break out the max date as a separate variable, that will get you a bit better performance - something like:
DECLARE @maxDate DATETIME
SET @maxDate = (
    SELECT max(lastImportDate)
    FROM table2
)

SELECT *
FROM table1
WHERE editDate > @maxDate
Lastly, if you need to handle rows that have a null editDate, you can have the query interpret nulls as some arbitrary date, like isNull(editDate, '1900-01-01') - this will make null editDates get interpreted as 1 Jan 1900.
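For what it's worth, a sketch of how the same idea might translate to a native Access query, using the DMax domain aggregate in place of the subquery or variable (untested against the actual tables):

SELECT *
FROM TABLE1
WHERE EditDate > DMax("LastImportDate", "TABLE2");

If the null handling above is needed, Access's Nz(EditDate, ...) plays the role of isNull(editDate, ...).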
I have what seems to be a common business request, but I can't find a clear solution. I have a daily report (amongst many) that gets generated based on failed criteria and gets saved to a table. Each report has a type id tied to it to signify which report it is, and there is an import event id that signifies the day the imports came in (a date column is added for extra clarification). I've added a sqlfiddle to show the basic schema of the table (renamed for privacy issues).
http://www.sqlfiddle.com/#!3/81945/8
All reports currently generated are working fine, so nothing needs to be modified on the table. However, for one report (type 11), not only do I need to pull the invoices that showed up today, I also need to add one column that totals the number of consecutive days, counting back from the date of the run, that the invoice has been on the report (including the current day). The result should look like the following, based on the schema provided:
INVOICE  MESSAGE  EVENT_DATE     CONSECUTIVE_DAYS_ON_REPORT
12345    Yes      July, 30 2013  6
54355    Yes      July, 30 2013  2
644644   Yes      July, 30 2013  4
I only need the latest run of consecutive days, not any other set that may show up. I've tried self joins to no avail; my last attempt is also listed as part of the sqlfiddle file. Any suggestions or ideas? I'm quite stuck at the moment.
FYI: I am working in SQL Server 2000! I have seen a lot of neat tricks that have come out in 2005 and 2008, but I can't access them.
Your help is greatly appreciated!
Something like this? http://www.sqlfiddle.com/#!3/81945/14
SELECT
[final].*,
[last].total_rows
FROM
tblEventInfo AS [final]
INNER JOIN
(
SELECT
[first_of_last].type_id,
[first_of_last].invoice,
MAX([all_of_last].event_date) AS event_date,
COUNT(*) AS total_rows
FROM
(
SELECT
[current].type_id,
[current].invoice,
MAX([current].event_date) AS event_date
FROM
tblEventInfo AS [current]
LEFT JOIN
tblEventInfo AS [previous]
ON [previous].type_id = [current].type_id
AND [previous].invoice = [current].invoice
AND [previous].event_date = [current].event_date-1
WHERE
[current].type_id = 11
AND [previous].type_id IS NULL
GROUP BY
[current].type_id,
[current].invoice
)
AS [first_of_last]
INNER JOIN
tblEventInfo AS [all_of_last]
ON [all_of_last].type_id = [first_of_last].type_id
AND [all_of_last].invoice = [first_of_last].invoice
AND [all_of_last].event_date >= [first_of_last].event_date
GROUP BY
[first_of_last].type_id,
[first_of_last].invoice
)
AS [last]
ON [last].type_id = [final].type_id
AND [last].invoice = [final].invoice
AND [last].event_date = [final].event_date
The innermost query looks up the starting record of the last block of consecutive records.
Then that joins on to all the records in that block of consecutive records, giving the final date and the count of rows (consecutive days).
Then that joins on to the row for the last day to get the message, etc.
Make sure that in reality you have an index on (type_id, invoice, event_date).
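For example, something along these lines (the index name is made up):

-- Supporting index for the repeated (type_id, invoice, event_date) joins above
CREATE INDEX IX_tblEventInfo_type_invoice_date
    ON tblEventInfo (type_id, invoice, event_date);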
You have multiple problems. Tackle them separately and build up.
Problems:
1) Identifying consecutive ranges: subtract the row_number from the range column and group by the result (see the row_number() sketch at the end of this answer for how that looks when the function is available)
2) No ROW_NUMBER() functions in SQL 2000: Fake it with a correlated subquery.
3) You actually want DENSE_RANK() instead of ROW_NUMBER: Make a list of unique dates first.
Solutions:
3)
SELECT MAX(id) AS id,invoice,event_date FROM tblEventInfo GROUP BY invoice,event_date
2)
SELECT t2.invoice,t2.event_date,t2.id,
DATEDIFF(day,(SELECT COUNT(DISTINCT event_date) FROM (SELECT MAX(id) AS id,invoice,event_date FROM tblEventInfo GROUP BY invoice,event_date) t1 WHERE t2.invoice = t1.invoice AND t2.event_date > t1.event_date),t2.event_date) grp
FROM (SELECT MAX(id) AS id,invoice,event_date FROM tblEventInfo GROUP BY invoice,event_date) t2
ORDER BY invoice,grp,event_date
1)
SELECT
t3.invoice AS INVOICE,
MAX(t3.event_date) AS EVENT_DATE,
COUNT(t3.event_date) AS CONSECUTIVE_DAYS_ON_REPORT
FROM (
SELECT t2.invoice,t2.event_date,t2.id,
DATEDIFF(day,(SELECT COUNT(DISTINCT event_date) FROM (SELECT MAX(id) AS id,invoice,event_date FROM tblEventInfo GROUP BY invoice,event_date) t1 WHERE t2.invoice = t1.invoice AND t2.id > t1.id),t2.event_date) grp
FROM (SELECT MAX(id) AS id,invoice,event_date FROM tblEventInfo GROUP BY invoice,event_date) t2
) t3
GROUP BY t3.invoice,t3.grp
The rest of your question is a little ambiguous. If two ranges are of equal length, do you want both or just the most recent? Should the output MESSAGE be 'Yes' if any message = 'Yes' or only if the most recent message = 'Yes'?
This should give you enough of a breadcrumb though
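For comparison, a rough sketch of how step 1 looks on SQL Server 2005+ where row_number() exists, which is the trick the SQL 2000 workaround above is faking (untested against the fiddle schema):

-- Consecutive event_dates per invoice share the same grp value, because the date
-- and the running row number increase in step within a consecutive block.
SELECT t3.invoice AS INVOICE,
       MAX(t3.event_date) AS EVENT_DATE,
       COUNT(*) AS CONSECUTIVE_DAYS_ON_REPORT
FROM (SELECT d.invoice,
             d.event_date,
             DATEADD(day,
                     -ROW_NUMBER() OVER (PARTITION BY d.invoice ORDER BY d.event_date),
                     d.event_date) AS grp
      FROM (SELECT DISTINCT invoice, event_date
            FROM tblEventInfo
            WHERE type_id = 11) d
     ) t3
GROUP BY t3.invoice, t3.grp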
I had a similar requirement not long ago, getting a "Top 5" ranking with a consecutive number of periods in the Top 5. The only solution I found was to do it with a cursor. The cursor walks a date = @daybefore, and inside the cursor, if your data does not match, quit the loop; otherwise set @daybefore = dateadd(dd, -1, @daybefore).
Let me know if you want an example. There just seem to be a large number of enthusiasts who hit downvote when they see the word "cursor", even if they don't have a better solution...
Here, try a scalar function like this:
CREATE FUNCTION ConsequtiveDays
(
    @invoice bigint, @date datetime
)
RETURNS int
AS
BEGIN
    DECLARE @ct int, @Count_Date datetime, @Last_Date datetime
    SET @ct = 0
    SET @Last_Date = @date

    DECLARE counter CURSOR LOCAL FAST_FORWARD
    FOR
    SELECT event_date FROM tblEventInfo
    WHERE invoice = @invoice
    ORDER BY event_date DESC

    OPEN counter

    FETCH NEXT FROM counter
    INTO @Count_Date

    -- keep counting while the next (older) date is at most one day before the previous one
    WHILE @@FETCH_STATUS = 0 AND DATEDIFF(dd, @Count_Date, @Last_Date) < 2
    BEGIN
        SET @ct = @ct + 1
        SET @Last_Date = @Count_Date

        FETCH NEXT FROM counter
        INTO @Count_Date
    END

    CLOSE counter
    DEALLOCATE counter

    RETURN @ct
END
GO
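A usage sketch against the report query (assuming the function above exists in dbo and that the column names follow the sample output earlier; untested):

-- For report type 11, show each invoice's latest row plus its consecutive-day count
SELECT e1.invoice,
       e1.message,
       e1.event_date,
       dbo.ConsequtiveDays(e1.invoice, e1.event_date) AS consecutive_days_on_report
FROM tblEventInfo e1
WHERE e1.type_id = 11
  AND e1.event_date = (SELECT MAX(e2.event_date)
                       FROM tblEventInfo e2
                       WHERE e2.invoice = e1.invoice
                         AND e2.type_id = 11);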
I'm using oracle(10).
I've got two tables as follows:
Table1 (uniq rows):
ID AMOUNT DATE
Table2:
ID AMOUNT1 AMOUNT2 ...AMOUNTN DATE
Table2 is connected many-to-one to Table1 via ID.
What I need is to update Table1.DATE with the last (earliest) date from Table2 at which Table1.AMOUNT - SUM(Table2.AMOUNT1) <= 0, reading Table2 backwards by the Table2.DATE field.
Is there a simple way to do it?
Thanks in advance!
UPDATE: As I see from your answers, I had misspecified the question a bit. So here is a detailed example:
Table1 has:
ID: 1 AMOUNT:100 DATE:NULL
Table2 has (for ID: 1 so ID is not listed in here):
AMOUNT1 DATE
50 20080131
30 20080121
25 20080111
20 20080101
So in this case I need 20080111 as the DATE in Table1, since 50+30+25 >= 100.
Based on your revised question, this is a case for using analytic functions.
Assuming you meant >= 100 rather than <= 100 (as your example implies), and renaming the column DATE to THEDATE, since DATE is a reserved word in Oracle:
update table1 set thedate=
( select max(thedate) from
( select id, thedate,
sum(amount1) over (partition by id order by thedate desc) cumsum
from table2
) v
where v.cumsum >= 100
and v.id = table1.id
)
If the 100 means the current value of table1 then change that line to:
where v.cumsum >= table1.amount
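To see how this plays out with the sample data above: reading Table2 by thedate descending, the running sum is 50 at 20080131, 50+30 = 80 at 20080121, 50+30+25 = 105 at 20080111 and 125 at 20080101. The rows with cumsum >= 100 are 20080111 and 20080101, and max(thedate) over those picks 20080111, matching the expected result.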
First off - your database layout feels severely wrong, but I guess you can't or don't want to change it. Table1 should probably be a view, and Table2 does not look properly normalized. Something like (ID, AMOUNT_TYPE, AMOUNT_VALUE, DATE) would make much more sense to me.
But to solve your problem (this is T-SQL "UPDATE FROM" syntax; Oracle doesn't support it directly, so you may need to rewrite it as a MERGE or a correlated subquery):
UPDATE
Table1
SET
Date = Table2Aggregate.MinDate
FROM
Table1
INNER JOIN (
SELECT Id, SUM(Amount1) SumAmount1, MIN(Date) MinDate
FROM Table2
GROUP BY Id
) AS Table2Aggregate ON Table1.Id = Table2Aggregate.ID
WHERE
Table1.Amount - Table2Aggregate.SumAmount1 <= 0
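Since the question is about Oracle 10g, a rough sketch of the same aggregate-and-join idea written as a MERGE (with DATE renamed to THEDATE as in the other answer; untested):

-- Aggregate Table2 per ID, then update Table1 rows whose amount is covered by the sum
MERGE INTO Table1 t1
USING (SELECT Id, SUM(Amount1) AS SumAmount1, MIN(thedate) AS MinDate
       FROM Table2
       GROUP BY Id) agg
ON (t1.Id = agg.Id)
WHEN MATCHED THEN UPDATE
    SET t1.thedate = agg.MinDate
    WHERE t1.Amount - agg.SumAmount1 <= 0;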