I am trying to update some budgeting data in an SQL database. I have one table which holds this year's data, and another table which contains last year's (and into which I need to insert this year's data).
During the insert I need to create a unique row for each site number held in a temporary table; against each site number needs to go this year's information (week number, start date, end date etc.).
I have tried using a subquery, but that fails because the subquery getting the site number returns multiple records. So I am trying a cursor; although it doesn't error, it doesn't insert any data. If anyone has any ideas, that'd be great.
This is my cursor code:
create table #tempSiteNoTable (SiteNo int)
insert into #tempSiteNoTable
Select distinct(SiteNumber)
from Lynx_Period_Lookup
begin tran xxx
Declare @SiteNNo int
Declare SiteNumberCursor Cursor FOR
Select SiteNo from #tempSiteNoTable where SiteNo = @SiteNNo
Open SiteNumberCursor
Fetch next from SiteNumberCursor
Into @SiteNNo
while @@FETCH_STATUS = 0
begin
insert into Lynx_Period_Lookup
(SiteNumber,SubPeriod,StartDate,EndDate,[Year],Period,[Week],BusinessCalendarNumber,BusinessCalendarName)
Select
@SiteNNo,
SubPeriod,
StartDate,
EndDate,
2014 as year,
Period,
WeekNo,
BusinessCalendarNumber,
BusinessCalendarName
from accountingperiods
Fetch next from SiteNumberCursor
into @SiteNNo
End
Close SiteNumberCursor
Deallocate SiteNumberCursor
You should be able to do this without a CURSOR - and quite easily, too!
Try this:
CREATE TABLE #tempSiteNoTable (SiteNo int)
INSERT INTO #tempSiteNoTable(SiteNo)
SELECT DISTINCT (SiteNumber)
FROM dbo.Lynx_Period_Lookup
INSERT INTO dbo.Lynx_Period_Lookup(SiteNumber, SubPeriod, StartDate, EndDate, [Year], Period, [Week], BusinessCalendarNumber, BusinessCalendarName)
SELECT
t.SiteNo,
ap.SubPeriod,
ap.StartDate,
ap.EndDate,
2014 as year,
ap.Period,
ap.WeekNo,
ap.BusinessCalendarNumber,
ap.BusinessCalendarName
FROM
#tempSiteNoTable t
INNER JOIN
dbo.AccountingPeriods ap ON ... (how are those two sets of data related?)...
The only point I don't know - how are AccountingPeriods and #tempSiteNoTable related - what common column do they share?
That's all there is to it - no cursor, no messing around with row-by-agonizing-row (RBAR) processing - just one, nice, clean set-based statement and you're done
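If there is no common column at all - i.e. every site should simply get a copy of every row in AccountingPeriods, which is what the question seems to describe - the INNER JOIN can become a CROSS JOIN. A sketch under that assumption:
INSERT INTO dbo.Lynx_Period_Lookup(SiteNumber, SubPeriod, StartDate, EndDate, [Year], Period, [Week], BusinessCalendarNumber, BusinessCalendarName)
SELECT
    t.SiteNo,
    ap.SubPeriod,
    ap.StartDate,
    ap.EndDate,
    2014 AS [Year],
    ap.Period,
    ap.WeekNo,
    ap.BusinessCalendarNumber,
    ap.BusinessCalendarName
FROM
    #tempSiteNoTable t           -- one row per distinct site
CROSS JOIN
    dbo.AccountingPeriods ap     -- every accounting period row is paired with every site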
I think you're looking for a CROSS JOIN; try the following snippet of code.
DECLARE @Table1 TABLE (Week int, StartDate datetime)
DECLARE @Table2 TABLE (sitenumber int)
INSERT INTO @Table1 (Week, StartDate)
VALUES (1, '2014-01-01'), (2, '2014-01-08')
INSERT INTO @Table2 (sitenumber)
VALUES (1), (2)
SELECT *
FROM @Table1 CROSS JOIN @Table2
I currently have such a query inside my stored procedure:
INSERT INTO YTDTRNI (TRCDE, PROD, WH, DESCR, UCOST, TCOST, DRAC, CRAC, REM, QTY, UM, ORDNO, TRDATE, SYSDATE, PERIOD, USERID)
SELECT
'AJ', PROD, WH, DESCR, 0, -TCOST, STKGL, COSGL,
'MASS ADJUSTMENT', 0, UM,
CAST(ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS nvarchar(255)),
GETDATE(), GETDATE(), @inputPeriod, @inputUserId
FROM
INV
WHERE
H = 0
I am making use of ROW_NUMBER() to get a number that increments while the query executes.
For example, the query above inserts 2018 records into the YTDTRNI table, so the last number generated by the ROW_NUMBER() function is 2018. My question is whether it is possible to get hold of this last generated number.
In another table I have a value stored as, for example, I1000. After performing the above operation I need to update that table with the new value I3018 (1000 + 2018).
I am stuck on how to move on. Open to any advice if whatever I am doing is incorrect or not following conventions/standards.
Just do a @@ROWCOUNT after your query:
DECLARE @rc INT
INSERT INTO YTDTRNI ( ... )
SELECT @rc = @@ROWCOUNT
After that you can use this @rc to update the other table.
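To then get from I1000 to I3018, add the captured count to the numeric part of the stored value. A minimal sketch, using a table variable as a stand-in because the question doesn't name the other table or its column:
-- @Counter / LastRef are placeholders for the (unnamed) table and column holding 'I1000'.
DECLARE @Counter TABLE (LastRef varchar(20));
INSERT INTO @Counter (LastRef) VALUES ('I1000');

DECLARE @rc int = 2018;   -- in the procedure this would be SET @rc = @@ROWCOUNT right after the insert

UPDATE @Counter
SET LastRef = 'I' + CAST(CAST(SUBSTRING(LastRef, 2, LEN(LastRef)) AS int) + @rc AS varchar(19));

SELECT LastRef FROM @Counter;   -- I3018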
@@ROWCOUNT is not reliable if there are triggers in the database. I would strongly discourage you from using it.
Instead, use OUTPUT:
declare @t table (ordno int);
insert into . . .
output inserted.ordno into @t
select . . .;
Then you can simply do:
select max(ordno) from @t;
This captures exactly what is input into the table.
I have a set of records (table [#tmp_origin]) containing duplicate entries in a string field ([Names]). I would like to insert the whole content of [#tmp_origin] into the destination table [#tmp_destination], that does NOT allow duplicates and may already contain items.
If the string in the origin table does not exist in the destination table, then it is simply inserted into the destination table as-is.
If an entry with the same value already exists in the destination table, a stringified incremental number must be appended to the string before it is inserted into the destination table.
The process of moving data in this way has been implemented with a cursor, in this sample script:
-- create initial situation (origin and destination table, both containing items)
-- Begin
CREATE TABLE [#tmp_origin] ([Names] VARCHAR(10))
CREATE TABLE [#tmp_destination] ([Names] VARCHAR(10))
CREATE UNIQUE INDEX [IX_UniqueName] ON [#tmp_destination]([Names] ASC)
INSERT INTO [#tmp_origin]([Names]) VALUES ('a')
INSERT INTO [#tmp_origin]([Names]) VALUES ('a')
INSERT INTO [#tmp_origin]([Names]) VALUES ('b')
INSERT INTO [#tmp_origin]([Names]) VALUES ('c')
INSERT INTO [#tmp_destination]([Names]) VALUES ('a')
INSERT INTO [#tmp_destination]([Names]) VALUES ('a_1')
INSERT INTO [#tmp_destination]([Names]) VALUES ('b')
-- create initial situation - End
DECLARE @Name VARCHAR(10)
DECLARE NamesCursor CURSOR LOCAL FORWARD_ONLY FAST_FORWARD READ_ONLY FOR
SELECT [Names]
FROM [#tmp_origin];
OPEN NamesCursor;
FETCH NEXT FROM NamesCursor INTO @Name;
WHILE @@FETCH_STATUS = 0
BEGIN
    DECLARE @finalName VARCHAR(10)
    SET @finalName = @Name
    DECLARE @counter INT
    SET @counter = 1
    WHILE (1 = 1)
    BEGIN
        IF NOT EXISTS (SELECT * FROM [#tmp_destination] WHERE [Names] = @finalName)
            BREAK;
        SET @finalName = @Name + '_' + CAST(@counter AS VARCHAR)
        SET @counter = @counter + 1
    END
    INSERT INTO [#tmp_destination] ([Names])
    SELECT @finalName
    FETCH NEXT FROM NamesCursor INTO @Name;
END
CLOSE NamesCursor;
DEALLOCATE NamesCursor;
SELECT *
FROM [#tmp_destination]
/*
Expected result:
a
a_1
a_2
a_3
b
b_1
c
*/
DROP TABLE [#tmp_origin]
DROP TABLE [#tmp_destination]
This works correctly, but its performance slows down drastically as the number of items to insert increases.
Any ideas on how to speed it up?
Thanks
Using a windowing function allows the duplicates to be numbered. You can also get the count from the destination table (you'll need a WHERE condition to strip off the suffix you've added):
select orig.names,
       row_number() over (partition by orig.names order by orig.names) as rowNo,
       dest.cnt
from #tmp_origin orig
cross apply (select count(1) as cnt from #tmp_destination where names = orig.names) as dest
An insert can be built from the above (the new suffix is rowNo + dest.cnt - 1, when that is greater than zero); see the sketch below.
Suggest you refactor the destination temporary table to include the name and suffix as separate columns – this might mean having a new intermediate stage – because this will make the matching logic much simpler.
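A sketch of that insert (without the suggested refactor), under the assumption that a trailing '_<number>' always marks a generated suffix, so the CROSS APPLY can count both the bare name and its already-suffixed variants in the destination:
INSERT INTO [#tmp_destination] ([Names])
SELECT CASE
           WHEN rowNo + cnt - 1 = 0 THEN names
           ELSE names + '_' + CAST(rowNo + cnt - 1 AS varchar(10))
       END
FROM (
    SELECT orig.[Names] AS names,
           row_number() over (partition by orig.[Names] order by orig.[Names]) AS rowNo,
           dest.cnt
    FROM [#tmp_origin] orig
    CROSS APPLY (SELECT COUNT(1) AS cnt
                 FROM [#tmp_destination] d
                 WHERE d.[Names] = orig.[Names]
                    OR d.[Names] LIKE orig.[Names] + '\_%' ESCAPE '\') AS dest   -- counts 'a' plus 'a_1', 'a_2', ...
) AS x
With the sample data above this inserts a_2, a_3, b_1 and c, giving the expected final contents of the destination table.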
Something like this:
insert [#tmp_destination]
select CASE WHEN row_number() over (partition by Names order by Names) > 1
            THEN Names + '_' + CONVERT(VARCHAR(10), row_number() over (partition by Names order by Names))
            ELSE Names
       END
from [#tmp_origin]
I wouldn't use a cursor in that case. Instead, I would build the query using ROW_NUMBER(). This way you add a counter to your original table, and then use this counter to build the suffix appended to your [Names]:
SELECT [Names], ROW_NUMBER() OVER (PARTITION BY [Names] ORDER BY [Names]) - 1 AS [counter]
INTO #tmp_origin_with_counter
FROM #tmp_origin
SELECT CONCAT([Names], IIF([counter] = 0, '', '_'+ CAST([counter] AS NVARCHAR)))
INTO #tmp_destination
FROM #tmp_origin_with_counter
I have a table with a string value in a column that tells me whether I should delete the row... however, this string needs some parsing to work out whether to delete it or not.
What the string is: it describes the recurrence of meetings, e.g. every day starting 21st March for 10 meetings.
My table is a single column called recurrence:
Recurrence
-------------------------------
daily;1;21/03/2015;times;10
daily;1;01/02/2016;times;8
monthly;1;01/01/2016;times;2
weekly;1;21/01/2016;times;4
What to do: if the meetings are finished then remove the row.
The string is of the following format
<frequency tag>;<frequency number>;<start date>;times;<no of times>
For example
daily;1;21/03/2016;times;10
every day starting 21st March, 10 times
Does anybody know how I would work out whether the string indicates that all the meetings are in the past? I want a SELECT statement that tells me whether the recurrence values are in the past - true or false.
I added one string ('weekly;1;21/05/2016;times;4') that definitely must not be deleted, so there is some output to show. First try loading all the data from your table into the table variable @table1, then check that the right rows were deleted.
DECLARE @table1 TABLE (
Recurrence nvarchar(max)
)
DECLARE @xml xml
INSERT INTO @table1 VALUES
('daily;1;21/03/2016;times;10'),
('daily;1;21/03/2015;times;10'),
('daily;1;01/02/2016;times;8'),
('monthly;1;01/01/2016;times;2'),
('weekly;1;21/01/2016;times;4'),
('weekly;1;21/05/2016;times;4')
SELECT @xml = (
SELECT CAST('<s><r>' + REPLACE(Recurrence,';','</r><r>') + '</r><r>'+ Recurrence+'</r></s>' as xml)
FROM @table1
FOR XML PATH ('')
)
;WITH cte as (
SELECT t.v.value('r[1]','nvarchar(10)') as how,
t.v.value('r[2]','nvarchar(10)') as every,
CONVERT(date,t.v.value('r[3]','nvarchar(10)'),103) as since,
t.v.value('r[4]','nvarchar(10)') as what,
t.v.value('r[5]','int') as howmany,
t.v.value('r[6]','nvarchar(max)') as Recurrence
FROM @xml.nodes('/s') as t(v)
)
DELETE t
FROM @table1 t
LEFT JOIN cte c ON c.Recurrence=t.Recurrence
WHERE
CASE WHEN how = 'daily' THEN DATEADD(day,howmany,since)
WHEN how = 'weekly' THEN DATEADD(week,howmany,since)
WHEN how = 'monthly' THEN DATEADD(month,howmany,since)
ELSE NULL END < GETDATE()
SELECT * FROM @table1
Output:
Recurrence
-----------------------------
weekly;1;21/05/2016;times;4
(1 row(s) affected)
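If you want the true/false check the question asks for rather than the delete, the same cte can feed a SELECT instead. A sketch: keep the cte definition above and replace the DELETE statement with this (the cte has to be attached to whichever statement uses it):
SELECT t.Recurrence,
       CASE WHEN CASE WHEN c.how = 'daily'   THEN DATEADD(day,   c.howmany, c.since)
                      WHEN c.how = 'weekly'  THEN DATEADD(week,  c.howmany, c.since)
                      WHEN c.how = 'monthly' THEN DATEADD(month, c.howmany, c.since)
                 END < GETDATE()
            THEN 1 ELSE 0 END AS IsInPast   -- 1 = all meetings are in the past
FROM @table1 t
LEFT JOIN cte c ON c.Recurrence = t.Recurrence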
Okay so I have this temp table. It has all the orders which a company needs to ship out. I need to somehow loop through the table and insert the information into 3+ tables.
CREATE TABLE #TempTable
(
    OrderID int
)

DECLARE @value int = (SELECT COUNT(OrderID) FROM #TempTable)
DECLARE @i int = 1

WHILE @i < @value
BEGIN
    DECLARE @orderid int = (SELECT TOP (1) OrderID FROM #TempTable)

    INSERT INTO shipment (orderid, Price, Date, DateDue)
    VALUES (@orderid, @Price, @Date, @DateDue);

    SET @i += 1
    DELETE TOP (1) FROM #TempTable
END
Is there a better way of doing this?
Adding a little more to my issue
I'm taking in three values from VB.Net, which for this example are @Price, @Date, and @DateDue. Because of this I wasn't able to just do a SELECT statement, since those passed-in values are mixed in with the table data.
Do it in a single query
INSERT INTO shipment (orderid, -some other value-)
SELECT orderid, -some other value-
FROM #temptable
Looping is not efficient. Always try to avoid it. You will need two queries: one for selecting and inserting, and one for deleting.
INSERT INTO shipment (orderid, Price, Date, DateDue)
SELECT orderid, @Price, @Date, @DateDue FROM #temptable;
DELETE FROM #temptable;
Note also that the #temptable has a limited lifetime. Therefore - depending on the situation - deleting it might not be necessary at all.
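Since the question mentions 3+ target tables, the same set-based pattern simply repeats, one INSERT per table, before the delete. A sketch: shipment comes from the question, but OrderAudit and OrderStatus are made-up placeholders for the other targets.
BEGIN TRAN;

INSERT INTO shipment (orderid, Price, Date, DateDue)
SELECT orderid, @Price, @Date, @DateDue FROM #temptable;

-- OrderAudit and OrderStatus are hypothetical names; substitute your real tables and columns.
INSERT INTO OrderAudit (orderid, AuditDate)
SELECT orderid, @Date FROM #temptable;

INSERT INTO OrderStatus (orderid, StatusText)
SELECT orderid, 'Shipped' FROM #temptable;

DELETE FROM #temptable;

COMMIT TRAN;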
I'm using SQL Server 2012 and I'm trying to do something like this:
SELECT SUM(MILES) from tblName WHERE
mDate >= '03/01/2012' and
mDate <= '03/31/2012'
-- and...
/*
now I want to add here: do until the SUM of MILES
is equal to or greater than '3250', and get the
result rows randomly
*/
So in other words, I want to select random rows from a table that have a specified from and to date and stop when the sum of miles is at or over the number: 3250
Since you're using SQL Server 2012, here is a much easier approach that doesn't require looping.
DECLARE @tbl TABLE(mDate DATE, Miles INT)
INSERT @tbl VALUES
('20120201', 520), ('20120312', 620),
('20120313', 720), ('20120314', 560),
('20120315', 380), ('20120316', 990),
('20120317', 1020), ('20120412', 520);
;WITH x AS
(
SELECT
mDate,
Miles,
s = SUM(Miles) OVER
(
ORDER BY NEWID() ROWS UNBOUNDED PRECEDING
)
FROM @tbl
WHERE mDate >= '20120301'
AND mDate < '20120401'
)
SELECT
mDate,
Miles,
s
FROM x
WHERE s <= 3250
ORDER BY s;
SQLfiddle demo - hit "Run SQL" multiple times to see random results.
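Note that the filter s <= 3250 stops just short of the target; if the row that pushes the running total to or over 3250 should be included (which is how the question is worded), one option is to filter on the running total before the current row instead. A small variation - replace the final SELECT of the query above with:
SELECT
    mDate,
    Miles,
    s
FROM x
WHERE s - Miles < 3250   -- the running total before this row is still under the target
ORDER BY s;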
You can do SELECT TOP x ... ORDER BY newid() to get a sum of random rows. The problem lies in determining 'x'. You can't even be sure that the largest value of 'x' (number of rows that match the query) will have a total large enough to meet your requirements without testing that first:
DECLARE @stopAt int
DECLARE @x int
DECLARE @result int

SET @stopAt = 3250
SET @x = 1

SELECT @result = SUM(MILES) from tblName
WHERE
mDate >= '03/01/2012' and
mDate <= '03/31/2012'

IF (@result < @stopAt)
SELECT NULL -- this can't be done
ELSE
BEGIN
WHILE (1=1)
BEGIN
    -- sum a random sample of @x rows (the TOP has to sit in a subquery,
    -- otherwise SUM aggregates the whole filtered set and TOP does nothing)
    SELECT @result = SUM(MILES)
    FROM (
        SELECT TOP (@x) MILES FROM tblName
        WHERE
        mDate >= '03/01/2012' and
        mDate <= '03/31/2012'
        ORDER BY newid()
    ) AS randomSample

    IF @result >= @stopAt
    BREAK

    SET @x = @x + 1
END
SELECT @result
END
Just a note about this - the algorithm starts at 1 and increments up until a suitable match is found. A more efficient approach (for larger sets of data) might include a binary-type search that caches the lowest suitable result and returns when the deepest node (or an exact match) is found.
I can't think of a way without a TSQL While... loop. This in combination with the TSQL paging with ROW_NUMBER() should get you there.
http://www.mssqltips.com/sqlservertip/1175/page-through-sql-server-results-with-the-rownumber-function/
In the ROW_NUMBER query, sum the Miles into another MileSum column; then, in the while loop, select the rows that correspond to each page of the ROW_NUMBER query while accumulating these MileSum values into a variable. Terminate when the variable exceeds 3250 (a sketch follows below).
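A minimal sketch of that idea, reusing the tblName / MILES / mDate names from the question; here each "page" is just a single row taken in ROW_NUMBER() order over a random shuffle, accumulated into a variable:
IF OBJECT_ID('tempdb..#ranked') IS NOT NULL DROP TABLE #ranked;

-- number the matching rows in a random order
SELECT MILES,
       ROW_NUMBER() OVER (ORDER BY NEWID()) AS rn
INTO #ranked
FROM tblName
WHERE mDate >= '03/01/2012' AND mDate <= '03/31/2012';

DECLARE @running int = 0, @rn int = 1, @miles int;

WHILE @running < 3250
BEGIN
    SET @miles = NULL;
    SELECT @miles = MILES FROM #ranked WHERE rn = @rn;

    IF @miles IS NULL BREAK;              -- ran out of rows before reaching 3250

    SET @running = @running + @miles;
    SET @rn = @rn + 1;
END

SELECT MILES FROM #ranked WHERE rn < @rn; -- the randomly picked rows
SELECT @running AS TotalMiles;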
Try
SELECT MILES
, RANK() OVER (ORDER BY NEWID()) yourRank
FROM #tblName
WHERE miles>3250
AND mDate >= '03/01/2012'
AND mDate <= '03/31/2012'
ORDER BY yourRank
and then you can add a TOP 5 or whatever you want.
You get those in random order for sure.
Just a sample code for you to understand the concept.
create table temp(intt int)
insert into temp values(1)
insert into temp values(2)
insert into temp values(3)
insert into temp values(4)
insert into temp values(5)
insert into temp values(6)
insert into temp values(7)
insert into temp values(8)
insert into temp values(9)
insert into temp values(10)
insert into temp values(11)
insert into temp values(12)
insert into temp values(13)
insert into temp values(14)
insert into temp values(15)
insert into temp values(16)
insert into temp values(17)
insert into temp values(18)
insert into temp values(19)
insert into temp values(20)
insert into temp values(21)
declare @sum int = 0;
declare @prevInt int = 0;
while(@sum<50)
begin
set @prevInt = (select top(1) intt from temp order by newid());
set @sum = @sum + @prevInt;
end
set @sum = @sum - @prevInt;
select @sum
drop table temp
The reason for this approach is that paging would not return a widely spread result unless you have thousands of records, because the data is grouped into pages and, with fewer records, the same page is hit multiple times, giving the same result.
Also, there might be cases when a blank page is hit, giving 0 as the result (I don't know why a blank page is sometimes hit).