I have one table, AttendanceLog.
Columns are:
EmpCode
Date
Time
Type
This holds attendance punch details:
empcode date time type
01 19.08.2016 080530 64
01 19.08.2016 092030 64
01 19.08.2016 084030 65
The types are 64 for InTime and 65 for OutTime.
I have another table.
Columns are
Empcode
Date
Intime
outtime
Now I want to insert into this table from the AttendanceLog table.
Depending on the type, I have to insert the time into either the InTime or the OutTime column.
Please help me fill the first InTime and the last OutTime for each employee.
My procedure:
CREATE PROCEDURE [dbo].[Attendance]
AS
BEGIN
SET NOCOUNT ON
Declare @Empcode varchar(50),
@Date varchar(50),
@time varchar(10),
@type varchar(10)
Declare attcursor Cursor
for
select AC.AttEmpCode,AttDate,AttTime,AttType
from BGUsersAttendanceCode AC inner join BGAttendanceTempTable AT
on AC.AttEmpCode=AT.AttEmpCode
order by AttType,AttDate
open attcursor
fetch next from attcursor into
@Empcode, @Date, @time, @type
WHILE @@FETCH_STATUS = 0
begin
insert into BGUsersAttendanceLog values(@Empcode,@Date,@time,@time)
fetch next from attcursor into
@Empcode, @Date, @time, @type
end
CLOSE attcursor
DEALLOCATE attcursor
END
GO
My interpretation of your requirements
You want to consolidate the rows in your attendance table, such that you end up with a single row for every (empcode, date) combination.
For every such row, you want the new InTime column to be populated with the earliest InTime value if more than one exists for that (empcode, date) combination.
Likewise, you want the new OutTime column to be populated with the latest OutTime value if more than one exists for that (empcode, date) combination.
A few additional notes
I notice that you use the varchar data type for all your columns. That's unfortunate. If possible, you really should use the appropriate types (datetime, int, etc.). But I'll ignore that here, and will assume that you are preserving the varchar values.
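For what it's worth, here is a sketch of how those varchar values could be converted to proper types, assuming the formats shown in the sample ('dd.mm.yyyy' dates, 'hhmmss' times) and SQL Server 2012+ for TRY_CONVERT:

```sql
-- Sketch only: style 104 parses 'dd.mm.yyyy'; the nested STUFF calls
-- turn 'hhmmss' into 'hh:mm:ss' so it converts to time(0).
SELECT
    TRY_CONVERT(date, AttDate, 104) AS AttDateTyped,
    TRY_CONVERT(time(0), STUFF(STUFF(AttTime, 5, 0, ':'), 3, 0, ':')) AS AttTimeTyped
FROM OldAttendanceLog
```

TRY_CONVERT returns NULL instead of raising an error on bad rows, which is handy when cleaning up legacy varchar data.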
In your description, you say you have a single attendance source table. But your SP code suggests that your source data comes from a join between 2 tables. Since you haven't described this in detail, I'll just stick with your description of a single source table. You can adjust as necessary.
Setup
create table OldAttendanceLog (
AttEmpCode varchar(50),
AttDate varchar(50),
AttTime varchar(10),
AttType varchar(10)
)
insert into OldAttendanceLog
(AttEmpCode, AttDate, AttTime, AttType)
values
('01', '19.08.2016', '080530', '64'),
('01', '19.08.2016', '092030', '64'),
('01', '19.08.2016', '084030', '65')
create table NewAttendanceLog (
AttEmpCode varchar(50),
AttDate varchar(50),
InTime varchar(10),
OutTime varchar(10)
)
INSERT statement
You don't need all that complicated cursor code. The insert can be accomplished in a single statement using conditional aggregation:
insert into NewAttendanceLog (AttEmpCode, AttDate, InTime, OutTime)
select AttEmpCode,
AttDate,
min(case when AttType = '64' then AttTime end),
max(case when AttType = '65' then AttTime end)
from OldAttendanceLog
group by AttEmpCode, AttDate
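Given the three sample rows above, this produces a single consolidated row, with InTime taken as the earliest type-64 time and OutTime as the latest type-65 time:

```sql
-- Contents of NewAttendanceLog after the insert, for the sample data:
-- AttEmpCode | AttDate      | InTime   | OutTime
-- '01'       | '19.08.2016' | '080530' | '084030'
SELECT AttEmpCode, AttDate, InTime, OutTime
FROM NewAttendanceLog
```

Note that MIN/MAX on the fixed-width 'hhmmss' strings sorts correctly even though the column is varchar.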
Related
I'm trying to get the inserted Id with the help of the OUTPUT clause, but I get this error:
Column name or number of supplied values does not match table definition
CREATE TABLE #TEMP_Master_DimensionValues
(
Id int,
[Name] varchar(max),
[FullName] varchar(max),
ID_DimensionHierarchyType varchar(max),
StartDate varchar(max),
EndDate varchar(max)
)
DECLARE @OutputTbl TABLE ([ID] INT);
INSERT INTO #TEMP_Master_DimensionValues
OUTPUT INSERTED.[ID] INTO @OutputTbl([ID])
SELECT
'April01-17' [Name],
'''Week of ''' + CONVERT(VARCHAR, (SELECT Min('2021-04-01') FROM Master_DimensionValues), 107) [FullName],
'3' [ID_DimensionHierarchyType],
'2021-04-01' [StartDate],
NULL [EndDate];
The SELECT statement above is correct and returns a result, but I couldn't figure out what goes wrong when I try to insert into #TEMP_Master_DimensionValues. Any help would be appreciated.
I always recommend being explicit about the columns you are inserting into, so change your INSERT statement to this:
DECLARE @OutputTbl TABLE ([ID] INT);
INSERT INTO #TEMP_Master_DimensionValues ([Name], [FullName], ID_DimensionHierarchyType, StartDate, EndDate)
OUTPUT INSERTED.[ID] INTO @OutputTbl([ID])
SELECT
'April01-17' [Name],
'''Week of ''' + CONVERT(VARCHAR, (SELECT Min('2021-04-01') FROM Master_DimensionValues), 107) [FullName],
'3' [ID_DimensionHierarchyType],
'2021-04-01' [StartDate],
NULL [EndDate];
Also, I'd highly recommend not just making everything a varchar(max). Use the most appropriate datatype, always: a StartDate or EndDate should really be DATE (or alternatively DATETIME2(n)), but most certainly NOT varchar(max)!
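As an illustration of that advice, the temp table could be declared with typed, appropriately sized columns (a sketch; the lengths here are assumptions, adjust to your data):

```sql
-- Typed version of the temp table: dates as date, the hierarchy-type
-- key as int, and bounded varchar lengths instead of varchar(max).
CREATE TABLE #TEMP_Master_DimensionValues
(
    Id int,
    [Name] varchar(100),
    [FullName] varchar(200),
    ID_DimensionHierarchyType int,
    StartDate date,
    EndDate date
)
```

With typed date columns the CONVERT gymnastics in the SELECT also become unnecessary.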
I have a merge statement that builds my SCD type 2 table each night. This table must house all historical changes made in the source system and create a new row with the date from/date to columns populated along with the "islatest" flag. I have come across an issue today that I am not really sure how to handle.
There look to have been multiple changes to the source table within a 24-hour period:
ID Code PAN EnterDate Cost Created
16155 1012401593331 ENRD 2015-11-05 7706.3 2021-08-17 14:34
16155 1012401593331 ENRD 2015-11-05 8584.4 2021-08-17 16:33
I use a basic merge statement to identify my changes, but what would be the best approach to ensure all changes get picked up correctly? The above gives me an error because it tries to insert/update multiple rows with the same value:
DECLARE @DateNow DATETIME = Getdate()
IF Object_id('tempdb..#meteridinsert') IS NOT NULL
DROP TABLE #meteridinsert;
CREATE TABLE #meteridinsert
(
meterid INT,
change VARCHAR(10)
);
MERGE
INTO [DIM].[Meters] AS target
using stg_meters AS source
ON target.[ID] = source.[ID]
AND target.latest=1
WHEN matched THEN
UPDATE
SET target.islatest = 0,
target.todate = @DateNow
WHEN NOT matched BY target THEN
INSERT
(
id,
code,
pan,
enterdate,
cost,
created,
[FromDate] ,
[ToDate] ,
[IsLatest]
)
VALUES
(
source.id,
source.code ,
source.pan ,
source.enterdate ,
source.cost ,
source.created ,
@DateNow ,
NULL ,
1
)
output source.id,
$action
INTO #meteridinsert;

INSERT INTO [DIM].[Meters]
(
[id] ,
[code] ,
[pan] ,
[enterdate] ,
[cost] ,
[created] ,
[FromDate] ,
[ToDate] ,
[IsLatest]
)
SELECT [id], [code], [pan], [enterdate], [cost], [created], @DateNow, NULL, 1 FROM stg_meters a
INNER JOIN #meteridinsert cid
ON a.id = cid.meterid
AND cid.change = 'UPDATE'
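One common workaround for multiple source changes per key in a single load (a sketch, not the poster's code; the `#stg_meters_latest` table name is made up) is to reduce the staging table to one row per ID before the MERGE, so the MERGE never sees duplicate keys:

```sql
-- Hypothetical pre-step: keep only the latest staging row per ID,
-- ranked by the Created timestamp, then MERGE from this table instead.
;WITH ranked AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY ID ORDER BY Created DESC) AS rn
    FROM stg_meters
)
SELECT ID, Code, PAN, EnterDate, Cost, Created
INTO #stg_meters_latest
FROM ranked
WHERE rn = 1
```

If the intermediate versions must also be kept as history, they would need a second pass (for example, the LEAD-based approach in the answer below closes each version with the next one's timestamp).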
Maybe you can do it with a MERGE statement, but I would prefer the typical update-and-insert approach to make it easier to understand (I am also not sure that MERGE lets you use the same source record for both an update and an insert...).
First of all I create the table dimscd2 to represent your dimension table
create table dimscd2
(naturalkey int, descr varchar(100), startdate datetime, enddate datetime)
And then I insert some records...
insert into dimscd2 values
(1,'A','2019-01-12 00:00:00.000', '2020-01-01 00:00:00.000'),
(1,'B','2020-01-01 00:00:00.000', NULL)
As you can see, the "current" record is the one with descr='B' because its enddate is NULL. (I do recommend using a surrogate key for each record: an incremental key for each row of your dimension, which the fact table links to in order to reflect the status of the fact at the moment it happened.)
Then, I have created some dummy data to represent the source data with the changes for the same natural key
-- new data (src_data)
select 1 as naturalkey,'C' as descr, cast('2020-01-02 00:00:00.000' as datetime) as dt into src_data
union all
select 1 as naturalkey,'D' as descr, cast('2020-01-03 00:00:00.000' as datetime) as dt
After that, I have created a temp table (##tmp) with this query to set the enddate for each record:
-- tmp table
select naturalkey, descr, dt,
lead(dt,1,0) over (partition by naturalkey order by dt) enddate,
row_number() over (partition by naturalkey order by dt) rn
into ##tmp
from src_data
The LEAD function takes the next start date for the same natural key, ordered by date (dt).
The ROW_NUMBER marks with 1 the oldest record in the source data for the natural key in the dimension.
Then, I proceed to close the "current" record using update
update d
set enddate = t.dt
from dimscd2 d
join ##tmp t
on d.naturalkey = t.naturalkey
and d.enddate is null
and t.rn = 1
And finally I add the new source data to the dimension with insert
insert into dimscd2
select naturalkey, descr, dt,
case enddate when '1900-01-01' then null else enddate end
from ##tmp
Final result is obtained with the query:
select * from dimscd2
You can test on this db<>fiddle
I have a SQL Server 2012 stored procedure. I'm filling a temp table below, and that's fairly straightforward. However, after that I'm doing some UPDATE on it.
Here's my T-SQL for declaring the temp table, #SourceTable, filling it, then doing some updates on it. After all of this, I simply take this temp table and insert it into a new table we are filling with a MERGE statement which joins on DOI. DOI is a main column here, and you'll see below that my UPDATE statements get MAX/MIN on several columns based on this column as the table can have multiple rows with the same DOI.
My question is...how can I speed up filling #SourceTable or doing my updates on it? Are there any indexes I can create? I'm decent at SQL, but not the best at performance issues. I'm dealing with maybe 60,000,000 records here in the temp table. It's been running for almost 4 hours now. This is a one-time deal here for a script I'm running once.
CREATE TABLE #SourceTable
(
DOI VARCHAR(72),
FullName NVARCHAR(128), LastName NVARCHAR(64),
FirstName NVARCHAR(64), FirstInitial NVARCHAR(10),
JournalId INT, JournalVolume VARCHAR(16),
JournalIssue VARCHAR(16), JournalFirstPage VARCHAR(16),
JournalLastPage VARCHAR(16), ArticleTitle NVARCHAR(1024),
PubYear SMALLINT, CreatedDate SMALLDATETIME,
UpdatedDate SMALLDATETIME,
ISSN_e VARCHAR(16), ISSN_p VARCHAR(16),
Citations INT, LastCitationRefresh SMALLDATETIME,
LastCitationRefreshValue SMALLINT, IsInSearch BIT,
BatchUpdatedDate SMALLDATETIME, LastIndexUpdate SMALLDATETIME,
ArticleClassificationId INT, ArticleClassificationUpdatedBy INT,
ArticleClassificationUpdatedDate SMALLDATETIME,
Affiliations VARCHAR(8000),
--Calculated columns for use in importing...
RowNum SMALLINT, MinCreatedDatePerDOI SMALLDATETIME,
MaxUpdatedDatePerDOI SMALLDATETIME,
MaxBatchUpdatedDatePerDOI SMALLDATETIME,
MaxArticleClassificationUpdatedByPerDOI INT,
MaxArticleClassificationUpdatedDatePerDOI SMALLDATETIME,
AffiliationsSameForAllDOI BIT, NewArticleId INT
)
--***************************************
--CROSSREF_ARTICLES
--***************************************
--GET RAW DATA INTO SOURCE TABLE TEMP TABLE..
INSERT INTO #SourceTable
SELECT
DOI, FullName, LastName, FirstName, FirstInitial,
JournalId, LEFT(JournalVolume,16) AS JournalVolume,
LEFT(JournalIssue,16) AS JournalIssue,
LEFT(JournalFirstPage,16) AS JournalFirstPage,
LEFT(JournalLastPage,16) AS JournalLastPage,
ArticleTitle, PubYear, CreatedDate, UpdatedDate,
ISSN_e, ISSN_p,
ISNULL(Citations,0) AS Citations, LastCitationRefresh,
LastCitationRefreshValue, IsInSearch, BatchUpdatedDate,
LastIndexUpdate, ArticleClassificationId,
ArticleClassificationUpdatedBy,
ArticleClassificationUpdatedDate, Affiliations,
ROW_NUMBER() OVER(PARTITION BY DOI ORDER BY UpdatedDate DESC, CreatedDate ASC) AS RowNum,
NULL AS MinCreatedDatePerDOI, NULL AS MaxUpdatedDatePerDOI,
NULL AS MaxBatchUpdatedDatePerDOI,
NULL AS MaxArticleClassificationUpdatedByPerDOI,
NULL AS MaxArticleClassificationUpdatedDatePerDOI,
0 AS AffiliationsSameForAllDOI, NULL AS NewArticleId
FROM
CrossRef_Articles WITH (NOLOCK)
--UPDATE SOURCETABLE WITH MAX/MIN/CALCULATED VALUES PER DOI...
UPDATE S
SET MaxUpdatedDatePerDOI = T.MaxUpdatedDatePerDOI, MaxBatchUpdatedDatePerDOI = T.MaxBatchUpdatedDatePerDOI, MinCreatedDatePerDOI = T.MinCreatedDatePerDOI, MaxArticleClassificationUpdatedByPerDOI = T.MaxArticleClassificationUpdatedByPerDOI, MaxArticleClassificationUpdatedDatePerDOI = T.MaxArticleClassificationUpdatedDatePerDOI
FROM #SourceTable S
INNER JOIN (SELECT MAX(UpdatedDate) AS MaxUpdatedDatePerDOI, MIN(CreatedDate) AS MinCreatedDatePerDOI, MAX(BatchUpdatedDate) AS MaxBatchUpdatedDatePerDOI, MAX(ArticleClassificationUpdatedBy) AS MaxArticleClassificationUpdatedByPerDOI, MAX(ArticleClassificationUpdatedDate) AS MaxArticleClassificationUpdatedDatePerDOI, DOI from #SourceTable GROUP BY DOI) AS T ON S.DOI = T.DOI
UPDATE S
SET AffiliationsSameForAllDOI = 1
FROM #SourceTable S
WHERE NOT EXISTS (SELECT 1 FROM #SourceTable S2 WHERE S2.DOI = S.DOI AND S2.Affiliations <> S.Affiliations)
This will probably be a faster way to do the update-- hard to say without seeing the execution plan, but it might be running the GROUP BY for every row.
with doigrouped AS
(
SELECT
MAX(UpdatedDate) AS MaxUpdatedDatePerDOI,
MIN(CreatedDate) AS MinCreatedDatePerDOI,
MAX(BatchUpdatedDate) AS MaxBatchUpdatedDatePerDOI,
MAX(ArticleClassificationUpdatedBy) AS MaxArticleClassificationUpdatedByPerDOI,
MAX(ArticleClassificationUpdatedDate) AS MaxArticleClassificationUpdatedDatePerDOI,
DOI
FROM #SourceTable
GROUP BY DOI
)
UPDATE S
SET MaxUpdatedDatePerDOI = T.MaxUpdatedDatePerDOI,
MaxBatchUpdatedDatePerDOI = T.MaxBatchUpdatedDatePerDOI,
MinCreatedDatePerDOI = T.MinCreatedDatePerDOI,
MaxArticleClassificationUpdatedByPerDOI = T.MaxArticleClassificationUpdatedByPerDOI,
MaxArticleClassificationUpdatedDatePerDOI = T.MaxArticleClassificationUpdatedDatePerDOI
FROM #SourceTable S
INNER JOIN doigrouped T ON S.DOI = T.DOI
If it is faster, it will be a couple of orders of magnitude faster -- but that does not mean your machine will be able to process 60 million records in any reasonable period of time. If you didn't test on 100k rows first, there is no way to know how long it will take to finish.
I suppose you can try:
Replace INSERT with SELECT INTO
You don't have any indexes on #SourceTable anyway, and SELECT INTO is minimally logged, so you should get some speedup here.
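A sketch of what that looks like for the original load (most columns elided for brevity; SELECT INTO creates the table, so the CREATE TABLE #SourceTable statement is dropped):

```sql
-- Minimally logged alternative to CREATE TABLE + INSERT INTO #SourceTable.
SELECT
    DOI, FullName, LastName,
    -- ...remaining columns exactly as in the original SELECT...
    ROW_NUMBER() OVER (PARTITION BY DOI ORDER BY UpdatedDate DESC, CreatedDate ASC) AS RowNum
INTO #SourceTable
FROM CrossRef_Articles WITH (NOLOCK)
```

The column types of #SourceTable are then inferred from the source expressions, so wrap anything that needs a specific type (e.g. the LEFT(...,16) truncations) exactly as in the original query.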
Replace UPDATE with SELECT INTO another table
Instead of updating #SourceTable, you can create #SourceTable_Updates with SELECT INTO (a modified version of Hogan's query):
with doigrouped AS
(
SELECT
MAX(UpdatedDate) AS MaxUpdatedDatePerDOI,
MIN(CreatedDate) AS MinCreatedDatePerDOI,
MAX(BatchUpdatedDate) AS MaxBatchUpdatedDatePerDOI,
MAX(ArticleClassificationUpdatedBy) AS MaxArticleClassificationUpdatedByPerDOI,
MAX(ArticleClassificationUpdatedDate) AS MaxArticleClassificationUpdatedDatePerDOI,
DOI
FROM #SourceTable
GROUP BY DOI
)
SELECT
S.DOI,
MaxUpdatedDatePerDOI = T.MaxUpdatedDatePerDOI,
MaxBatchUpdatedDatePerDOI = T.MaxBatchUpdatedDatePerDOI,
MinCreatedDatePerDOI = T.MinCreatedDatePerDOI,
MaxArticleClassificationUpdatedByPerDOI = T.MaxArticleClassificationUpdatedByPerDOI,
MaxArticleClassificationUpdatedDatePerDOI = T.MaxArticleClassificationUpdatedDatePerDOI
INTO #SourceTable_Updates
FROM #SourceTable S
INNER JOIN doigrouped T ON S.DOI = T.DOI
Then use #SourceTable joined to #SourceTable_Updates wherever the updated values are needed.
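That final read might look like this (a sketch; list whichever detail columns the downstream MERGE actually needs):

```sql
-- Read the per-DOI aggregates alongside the detail rows without
-- ever updating #SourceTable in place.
SELECT S.DOI,
       S.RowNum,
       -- ...other detail columns from S as needed...
       U.MinCreatedDatePerDOI,
       U.MaxUpdatedDatePerDOI,
       U.MaxBatchUpdatedDatePerDOI,
       U.MaxArticleClassificationUpdatedByPerDOI,
       U.MaxArticleClassificationUpdatedDatePerDOI
FROM #SourceTable S
INNER JOIN #SourceTable_Updates U ON S.DOI = U.DOI
```

This trades a second table's worth of tempdb space for avoiding a 60-million-row in-place UPDATE.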
Hope this helps
Here are a couple of things that may help the performance of your INSERT statement:
Does the CrossRef_Articles table have a primary key? If it does, insert the primary key (make sure it is indexed) into your temp table along with only the fields you need for your calculations. Once the calculations are done, do a SELECT that joins your temp table back to the original table on the Id field. It takes time to write all that data to disk.
Look at your tempdb. If you have run this query multiple times, the database or log file size may be out of control.
Check whether the joined fields between the two original tables are indexed.
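If the grouped UPDATEs are the bottleneck, an index on the temp table's join key may also help (a sketch; create it after the bulk insert so the load itself stays fast):

```sql
-- DOI is the join/group key for every subsequent UPDATE on #SourceTable,
-- so clustering on it lets the self-join and GROUP BY scan in key order.
CREATE CLUSTERED INDEX IX_SourceTable_DOI ON #SourceTable (DOI)
```

Temp tables support indexes just like permanent tables; whether this pays off depends on the plan, so check the execution plan before and after.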
So I am using a cursor to loop through a bunch of records that my query returns. I have just updated some details in a table, and now I want to pull the details from that table, so I have used a temporary table.
Now I want to insert some values into a new table that are unrelated to the last one; the rest of the values would be a direct copy from the table variable. How can I do this?
I'll post the section in question below to help people see what I am trying to do.
The part in question is between the UPDATE STATUS comment and the ABOVE NOT FINISHED comment.
OPEN cur
FETCH NEXT FROM cur INTO @MembershipTermID, @EndDate, @MembershipID <VARIABLES>
WHILE @@FETCH_STATUS = 0
BEGIN
--PERFORM ACTION
DECLARE @TodaysDate DATETIME
SET @TodaysDate = getDate()
--CANCEL DETAIL
DECLARE @CancellationDetailID INT
INSERT INTO CancellationDetail(CancellationDetailID,RefundAmount,OldEndDate,EffectiveDate,CancelDate,ReasonCodeProgKey)
VALUES (0, 0.0, @EndDate, @TodaysDate, @TodaysDate, 'CANC_DORMANT')
SELECT @CancellationDetailID = SCOPE_IDENTITY()
INSERT INTO CancellationDetailAudit(StampUser,StampDateTime,StampAction,CancellationDetailID,RefundAmount,OldEndDate,EffectiveDate,CancelDate,ReasonCodeProgKey)
VALUES('SYSTEM', GetDate(), 'I', @CancellationDetailID, 0.0, @EndDate, @TodaysDate, @TodaysDate, 'CANC_DORMANT')
--LINK TO TERM
INSERT INTO MembershipTermCancellationDetail(CancellationDetailID,MembershipTermID)
VALUES(@CancellationDetailID, @MembershipTermID)
INSERT INTO MembershipTermCancellationDetailAudit(StampUser,StampDateTime,StampAction,MembershipTermCancellationDetailID,CancellationDetailID,MembershipTermID)
VALUES('SYSTEM', GetDate(), 'I', 0, @CancellationDetailID, @MembershipTermID)
--UPDATE STATUS
UPDATE MembershipTerm
SET MemberStatusProgKey = 'CANCELLED',
EndDate = @TodaysDate,
UpdateDateTime = @TodaysDate,
AgentID = 224,
NextTermPrePaid = 'False'
WHERE MembershipTermID = @MembershipTermID
DECLARE @MembershipTermTable TABLE
(
MembershipTermID int,
MemberStatusProgKey nvarchar (50),
StartDate datetime,
EndDate datetime,
AdditionalDiscount float,
EntryDateTime datetime,
UpdateDateTime datetime,
MembershipID int,
AgentID smallint,
PlanVersionID int,
ForceThroughReference nvarchar (255),
IsForceThrough bit,
NextTermPrePaid bit,
IsBillingMonthly bit,
LastPaymentDate datetime,
PaidToDate datetime,
IsIndeterminate bit
)
INSERT INTO @MembershipTermTable
SELECT MembershipTermID,
MemberStatusProgKey,
StartDate,
EndDate,
AdditionalDiscount,
EntryDateTime,
UpdateDateTime,
MembershipID,
AgentID,
PlanVersionID,
ForceThroughReference,
IsForceThrough,
NextTermPrePaid,
IsBillingMonthly,
LastPaymentDate,
PaidToDate,
IsIndeterminate
FROM MembershipTerm
WHERE MembershipTermID = @MembershipTermID
INSERT INTO MembershipTermAudit(StampUser,StampDateTime,StampAction,MembershipTermID,MemberStatusProgKey,StartDate,EndDate,AdditionalDiscount,EntryDateTime,UpdateDateTime,MembershipID,AgentID,PlanVersionID,ForceThroughReference,IsForceThrough,NextTermPrePaid,IsBillingMonthly,LastPaymentDate,PaidToDate,IsIndeterminate)
VALUES ('SYSTEM',@TodaysDate,'I',MembershipTermID,MemberStatusProgKey,StartDate,EndDate,AdditionalDiscount,EntryDateTime,UpdateDateTime,MembershipID,AgentID,PlanVersionID,ForceThroughReference,IsForceThrough,NextTermPrePaid,IsBillingMonthly,LastPaymentDate,PaidToDate,IsIndeterminate)
--ABOVE NOT FINISHED, NEED TO ADD AUDIT RECORD CORRECTLY
--Members
DECLARE @MembersTable TABLE
(
MembershipTermID int,
MemberStatusProgKey nvarchar (50),
StartDate datetime,
EndDate datetime,
AdditionalDiscount float,
EntryDateTime datetime,
UpdateDateTime datetime,
MembershipID int,
AgentID smallint,
PlanVersionID int,
ForceThroughReference nvarchar (255),
IsForceThrough bit,
NextTermPrePaid bit,
IsBillingMonthly bit,
LastPaymentDate datetime,
PaidToDate datetime,
IsIndeterminate bit
)
INSERT INTO @MembersTable
SELECT * FROM [MembershipTermPerson] WHERE MembershipTermID = @MembershipTermID
--Vehicles
FETCH NEXT FROM cur INTO @MembershipTermID, @EndDate, @MembershipID <VARIABLES>
END
CLOSE cur
DEALLOCATE cur
I think this would be a good case for an INSERT INTO ... SELECT statement.
Something like:
INSERT INTO MyTable (ColA, ColB, ColC)
SELECT
GETDATE(), A.MyCol, 'MyValue'
FROM MyOtherTable A
WHERE a.MyValue = 'What I Want'
Basically you skip the temp table, and just grab the value and inject everything at once.
Following is some sample data. I need to make three copies of it in T-SQL without using a loop, and return the result as one resultset. This is sample data, not real data.
42 South Yorkshire
43 Lancashire
44 Norfolk
Edit: I need multiple copies, and I don't know in advance how many; I have to decide based on dates. The range might be 1st Jan to 3rd Jan, or 1st Jan to 8th Jan.
Thanks.
Don't know about better, but this is definitely more creative! You can use a CROSS JOIN.
EDIT: I put in some code to generate a date range. You can change the range; the rows in #dates are your multiplier.
declare @startdate datetime
, @enddate datetime
create table #data1 ([id] int , [name] nvarchar(100))
create table #dates ([date] datetime)
INSERT #data1 SELECT 42, 'South Yorkshire'
INSERT #data1 SELECT 43, 'Lancashire'
INSERT #data1 SELECT 44, 'Norfolk'
set @startdate = '1Jan2010'
set @enddate = '3Jan2010'
WHILE (@startdate <= @enddate)
BEGIN
INSERT #dates SELECT @startdate
set @startdate = @startdate + 1
END
SELECT [id] , [name] from #data1 cross join #dates
drop table #data1
drop table #dates
You could always use a CTE to do the dirty work
Replace the WHERE Counter < 4 with the amount of duplicates you need.
CREATE TABLE City (ID INTEGER PRIMARY KEY, Name VARCHAR(32))
INSERT INTO City VALUES (42, 'South Yorkshire')
INSERT INTO City VALUES (43, 'Lancashire')
INSERT INTO City VALUES (44, 'Norfolk')
/*
The CTE duplicates every row from CTE for the amount
specified by Counter
*/
;WITH CityCTE (ID, Name, Counter) AS
(
SELECT c.ID, c.Name, 0 AS Counter
FROM City c
UNION ALL
SELECT c.ID, c.Name, Counter + 1
FROM City c
INNER JOIN CityCTE cte ON cte.ID = c.ID
WHERE Counter < 4
)
SELECT ID, Name
FROM CityCTE
ORDER BY 1, 2
DROP TABLE City
This may not be the most efficient way of doing it, but it should work.
(select ....)
union all
(select ....)
union all
(select ....)
Assume the table is named CountyPopulation:
SELECT * FROM CountyPopulation
UNION ALL
SELECT * FROM CountyPopulation
UNION ALL
SELECT * FROM CountyPopulation
Share and enjoy.
There is no need to use a cursor. The set-based approach is to use a Calendar table. So first we create our calendar table, which only needs to be done once and can be somewhat permanent:
Create Table dbo.Calendar ( Date datetime not null Primary Key Clustered )
GO
; With Numbers As
(
Select ROW_NUMBER() OVER( ORDER BY S1.object_id ) As [Counter]
From sys.columns As s1
Cross Join sys.columns As s2
)
Insert dbo.Calendar([Date])
Select DateAdd(d, [Counter], '19000101')
From Numbers
Where [Counter] <= 100000
GO
I populated it with 100K dates, which runs out to the year 2300. Obviously you can always expand it. Next we generate our test data:
Create Table dbo.Data(Id int not null, [Name] nvarchar(20) not null)
GO
Insert dbo.Data(Id, [Name]) Values(42,'South Yorkshire')
Insert dbo.Data(Id, [Name]) Values(43, 'Lancashire')
Insert dbo.Data(Id, [Name]) Values(44, 'Norfolk')
GO
Now the problem becomes trivial:
Declare @Start datetime
Declare @End datetime
Set @Start = '2010-01-01'
Set @End = '2010-01-03'
Select Dates.[Date], Id, [Name]
From dbo.Data
Cross Join (
Select [Date]
From dbo.Calendar
Where [Date] >= @Start
And [Date] <= @End
) As Dates
By far the best solution is CROSS JOIN. Most natural.
See my answer here: How to retrieve rows multiple times in SQL Server?
If you have a Numbers table lying around, it's even easier. You can DATEDIFF the dates to give you the filter on the Numbers table
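A sketch of that Numbers-table variant, assuming a hypothetical dbo.Numbers table with an int column N starting at 0, and reusing the dbo.Data table from above:

```sql
Declare @Start datetime
Declare @End datetime
Set @Start = '2010-01-01'
Set @End = '2010-01-03'

-- DATEDIFF gives the number of day-steps in the range; each value of N
-- produces one copy of every row, and DATEADD reconstructs its date.
Select DateAdd(d, N.N, @Start) As [Date], D.Id, D.[Name]
From dbo.Data As D
Cross Join dbo.Numbers As N
Where N.N <= DateDiff(d, @Start, @End)
```

This avoids maintaining a Calendar table at the cost of computing the dates on the fly; for a three-day range each of the three sample rows appears three times.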