Our development team works on SQL Server and writes stored procedures for our product.
We need something like a version control system for those procedures or any other objects.
Sometimes I change a stored procedure, and someone else on my team changes it and I don't know anything about it.
Is there any solution for that?
If you want to do it via code, you could run this on a daily or hourly basis to get a list of all procs that were changed in the last day:
select *
from sys.objects
where (datediff(dd, create_date, getdate()) < 1
    or datediff(dd, modify_date, getdate()) < 1)
  and type = 'P';
Or you could create a DDL trigger:
create trigger prochanged on database
for create_procedure, alter_procedure, drop_procedure
as
begin
    set nocount on
    declare @data xml
    set @data = eventdata()
    -- save @data to a table...
end
This will allow you to save all kinds of information every time a proc is created, changed or deleted.
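To fill in the save step that the trigger body leaves as a comment, you can shred the XML returned by EventData() into an audit table. A minimal sketch follows; the ProcChangeLog table and its columns are illustrative, not from the original post:

```sql
-- Hypothetical audit table for DDL events on procedures.
create table dbo.ProcChangeLog (
    EventTime   datetime      not null default getdate(),
    LoginName   sysname       not null,
    EventType   nvarchar(100) not null,
    ObjectName  nvarchar(256) not null,
    CommandText nvarchar(max) null
);
go
create trigger prochanged on database
for create_procedure, alter_procedure, drop_procedure
as
begin
    set nocount on
    declare @data xml = eventdata()
    -- Pull the interesting fields out of the EVENT_INSTANCE document.
    insert into dbo.ProcChangeLog (LoginName, EventType, ObjectName, CommandText)
    values (
        @data.value('(/EVENT_INSTANCE/LoginName)[1]',  'sysname'),
        @data.value('(/EVENT_INSTANCE/EventType)[1]',  'nvarchar(100)'),
        @data.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(256)'),
        @data.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')
    );
end
```

With this in place, querying dbo.ProcChangeLog shows who changed which procedure, when, and with what statement.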
I need to create a table, with many indexes that is scoped only to the running sproc.
I tried a table variable, but this doesn't seem to support indexes. A local temp table seems to create a 'real' table that needs to be explicitly dropped at the end of the proc, from which I'm inferring that it's also shared across concurrent runs and so would break.
What can I use to store data with indexes that is scoped only to the individual instance of the running sproc?
You don't need to worry about dropping the table. SQL Server does that automatically. As explained in the documentation:
A local temporary table created in a stored procedure is dropped automatically when the stored procedure is finished. The table can be
referenced by any nested stored procedures executed by the stored
procedure that created the table. The table cannot be referenced by
the process that called the stored procedure that created the table.
This is a result of the scoping rules for access to the temporary table.
I will admit that, in practice, I tend to explicitly drop temporary tables in stored procedures. The differences among:
create table temp
create table #temp
create table ##temp
are all too similar for me to rely on the fact that the second is dropped automatically but the first and third are not. However, this is my "problem" and not a best practice.
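For reference, the three forms behave as follows (the `(id int)` column list is just illustrative):

```sql
create table temp  (id int);  -- permanent table in the current database; never dropped automatically
create table #temp (id int);  -- local temp table in tempdb; private to the creating session/proc,
                              -- dropped automatically when the proc finishes
create table ##temp (id int); -- global temp table in tempdb; visible to all sessions,
                              -- dropped when the creating session ends and no one else references it
```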
Updated
The answer is: don't worry at all, since the temp table behaves as if it were a local variable inside the stored procedure.
I wanted to make sure whether my doubt was correct, so I made this test:
create procedure TestTempData
as
begin
    declare @date datetime = getdate()
    if object_id('tempdb..#testing') is not null
        drop table #testing
    create table #testing(
        Id int identity(1,1),
        [Date] datetime
    )
    print 'run at ' + format(@date, 'HH:mm:ss')
    insert into #testing([Date]) values
        (dateadd(second, 10, getdate())),
        (dateadd(second, 20, getdate())),
        (dateadd(second, 30, getdate()))
    waitfor delay '00:00:15'
    select * from #testing
end
then I ran this query
exec TestTempData
waitfor delay '00:00:02'
exec TestTempData
the result came as
run at 14:57:39
Id Date
1 2016-09-21 14:57:49.117
2 2016-09-21 14:57:59.117
3 2016-09-21 14:58:09.117
the second result
run at 14:57:56
Id Date
1 2016-09-21 14:58:06.113
2 2016-09-21 14:58:16.113
3 2016-09-21 14:58:26.113
If concurrent runs affected the #temp table, both results should have been the same, which was not the case. It seems that a temp table inside a stored procedure acts like a local variable inside a method.
Before chatting with Gordon Linoff
Since you mentioned that the temp table is shared across concurrent runs, your temp table should be unique to the current run.
Your stored procedure should look like this:
create procedure YourProc(@userId int)
as
begin
    if object_id('tempdb..#temp' + cast(@userId as varchar(10))) is not null
        execute('drop table #temp' + cast(@userId as varchar(10)))
    ...
    execute('insert into #temp' + cast(@userId as varchar(10)) + ' values (...')
end
The above solution ensures that no conflict will occur and no data will be lost, since each execution is unique per userId.
You don't need to drop the table when you finish, because it will be dropped automatically by itself.
Hope this helps.
We have a database called AVL in SQL Server 2008 R2 SE. This database has many tables, but there is one in particular called ASSETLOCATION that has 46 million rows right now and accounts for 99.9% of the total database size.
This table has information from 2008 to date, and the actual rate of growing is about 120k records daily.
Now, there are 2 situations we'd like to address:
- Performance is starting to degrade slowly, and everything is already optimized, so there's not much to do.
- Backup times are increasing and becoming a problem (we do 1 full backup every day). The BAK file is 11 GB; after WinRAR compresses it the final size is 2 GB, and then a script sends the file offsite. We have a T1, and pulling 2 GB through the wire takes about 5 hours.
All this is normal, but here's the catch I want to capitalize on: over 90% of SQL statements use information 3 months old or less; in other words, data from 2008, 2009 and 2010 doesn't get accessed often.
I was thinking of creating one new database for each year. Let's say:
- AVL2008 database, only table there will be ASSETLOCATION with records from 2008
- AVL2009 database, only table there will be ASSETLOCATION with records from 2009
- AVL2010 database, only table there will be ASSETLOCATION with records from 2010
As you have already guessed, data from the past doesn't get changed, so this will be great from the backup perspective, since the AVL database will only hold the records from the current year. This approach will also help performance a lot.
Now for the problem. Assume the ASSETLOCATION table has the following columns:
- IDASSETLOCATION (int, PK identity)
- IDASSET (int, FK to ASSET table)
- WHEN (datetime)
- LATLONG (varchar(22), spatial info)
I need to create a view in the AVL database called "vASSETLOCATION", which is quite simple, but I don't want the view accessing all the databases and joining the ASSETLOCATION tables via UNION; rather, it should access only the ones needed based on the WHEN field. For example:
select * from vASSETLOCATION where [WHEN] between '2008-01-01' and '2008-01-02'
In this case the view should ONLY ACCESS the AVL2008.ASSETLOCATION table
select * from vASSETLOCATION where [WHEN] between '2008-12-29' and '2009-01-05'
In this case the view should access AVL2008.ASSETLOCATION and AVL2009.ASSETLOCATION
select * from vASSETLOCATION where
([WHEN] between '2008-01-01' and '2008-01-01')
or
([WHEN] = getdate())
In this case the view should access AVL2008.ASSETLOCATION and AVL.ASSETLOCATION
I know a table-valued UDF in place of the view would solve the problem, but there are more than only 4 fields, and [WHEN] is not the only field we may want to include in the WHERE part.
Before anyone suggests it: the table partitioning feature will perhaps help with performance, but NOT with the backup problem.
Is there a way to do this in a view?
Thanks.
This sounds like a classic case for table partitioning or distributed partitioned views.
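The partitioned-view mechanism does exactly the pruning the question asks for: each member table carries a CHECK constraint on the partitioning column, and the optimizer skips member tables whose constraint excludes the queried range. A minimal sketch, assuming the per-year databases from the question exist (constraint names are illustrative):

```sql
-- Trusted CHECK constraints on the partitioning column let the optimizer
-- eliminate member tables that cannot contain qualifying rows.
alter table AVL2008.dbo.ASSETLOCATION with check
    add constraint CK_AL_2008 check ([WHEN] >= '20080101' and [WHEN] < '20090101');
alter table AVL2009.dbo.ASSETLOCATION with check
    add constraint CK_AL_2009 check ([WHEN] >= '20090101' and [WHEN] < '20100101');
go
create view dbo.vASSETLOCATION
as
select * from AVL2008.dbo.ASSETLOCATION
union all
select * from AVL2009.dbo.ASSETLOCATION
union all
select * from AVL.dbo.ASSETLOCATION;  -- current year
go
-- A query such as:
--   select * from dbo.vASSETLOCATION
--   where [WHEN] >= '20080101' and [WHEN] < '20080103'
-- can then read only AVL2008.dbo.ASSETLOCATION.
```

Note that partitioned views come with a list of requirements (non-overlapping CHECK ranges, the partitioning column in the primary key, and so on), which is part of the prep work mentioned below.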
However you can work around this without ponying up the price for Enterprise edition (or doing all the prep work required to support those features) using some smart code that looks at the problem a little differently. You don't want a single view that accesses all of the tables across the different databases, but what if you had multiple views and a stored procedure to control how they're accessed?
Create views for the most common access patterns. Perhaps you have a view that covers date ranges for 2008-2010, 2008-2009, 2009-2010, etc. They might look like this:
CREATE VIEW dbo.vAL_2008_2009
AS
SELECT * FROM AVL2008.dbo.ASSETLOCATION
UNION ALL
SELECT * FROM AVL2009.dbo.ASSETLOCATION;
GO
CREATE VIEW dbo.vAL_2008_2010
AS
SELECT * FROM AVL2008.dbo.ASSETLOCATION
UNION ALL
SELECT * FROM AVL2009.dbo.ASSETLOCATION
UNION ALL
SELECT * FROM AVL2010.dbo.ASSETLOCATION;
GO
-- etc. etc.
Now your code that handles the queries can take the input date parameters and calculate which view it needs to query. For example:
CREATE PROCEDURE dbo.DetermineViews
    @StartDate DATETIME,
    @EndDate DATETIME,
    @OptionalToday BIT = 0
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @sql NVARCHAR(MAX) = N'';
    SET @sql = @sql + N'SELECT * FROM ' + CASE
        WHEN @StartDate >= '20080101' AND @EndDate < '20090101' THEN 'AVL2008.dbo.ASSETLOCATION'
        WHEN @StartDate >= '20080101' AND @EndDate < '20100101' THEN 'dbo.vAL_2008_2009'
        WHEN @StartDate >= '20080101' AND @EndDate < '20110101' THEN 'dbo.vAL_2008_2010'
        -- etc. etc.
        WHEN YEAR(@StartDate) = YEAR(CURRENT_TIMESTAMP) THEN 'AVL.dbo.ASSETLOCATION'
        ELSE '' END;
    IF @OptionalToday = 1 AND YEAR(@StartDate) <> YEAR(CURRENT_TIMESTAMP)
    BEGIN
        SET @sql = @sql + N' UNION ALL SELECT * FROM AVL.dbo.ASSETLOCATION';
    END
    SET @sql = @sql + ' WHERE [WHEN] BETWEEN '''
        + CONVERT(CHAR(8), @StartDate, 112) + ''' AND '''
        + CONVERT(CHAR(8), @EndDate, 112) + '''';
    IF @OptionalToday = 1
    BEGIN
        SET @sql = @sql + ' OR ([WHEN] >= DATEDIFF(DAY, 0, CURRENT_TIMESTAMP)
            AND [WHEN] < DATEADD(DAY, 1, DATEDIFF(DAY, 0, CURRENT_TIMESTAMP)))';
    END
    PRINT @sql;
    -- EXEC sp_executesql @sql;
END
GO
I'm probably missing some of your business logic, and you'll certainly want to add some error handling in there and test the junk out of it, but this is a relatively easy-to-maintain solution. It only needs updating when you create a new database to archive last year's data, which it sounds like happens only once a year.
I have a function and select statement
CREATE FUNCTION dbo.fun_currentday (@dt DATETIME)
RETURNS DATETIME
AS
BEGIN
    DECLARE @currentday DATETIME
    SET @currentday = DATEADD(dd, DATEDIFF(dd, 0, @dt), 0)
    RETURN (@currentday)
END
GO

DECLARE @pvm AS DATETIME
SET @pvm = GETDATE()
SELECT 'Last 7 days' AS Range, dbo.fun_currentday(@pvm) - 6 AS Stday, dbo.fun_currentday(@pvm) AS Endday
All works fine, but when I hover over dbo.fun_currentday in the select statement, I get an error saying:
Cannot find either column "dbo" or the user-defined function or aggregate "dbo.fun_currentday", or the name is ambiguous.
Where's the problem?
Intellisense / Error highlighting always does this for newly created objects. Use Ctrl+Shift+R to refresh the local cache.
(Screenshots: the error highlighting before and after the cache refresh.)
Everything runs fine here with SQL Server 2008 Express.
Have you tried running your query on a different database?
You can try to create a new schema and create your UDF in it.
Make sure you have the necessary permissions and that the dbo schema configuration is correct.
Your stored procedure has to be stored somewhere, so when you don't specify the location it goes to the default database (master). So you should call it like:
SELECT 'Last 7 days' AS Range, master.dbo.fun_currentday(GETDATE()) - 6 AS Stday, master.dbo.fun_currentday(GETDATE()) AS Endday
EDITED
I've checked that and I wasn't right; it doesn't always go to the master database. It goes to the database in whose context you were working, so if your CREATE was run in a query window on the root folder it goes to master, but if you created it in a query window of the test database you should use test.dbo.fun_currentday(GETDATE()). To avoid this, always specify the database, like: USE database_name GO CREATE FUNCTION dbo.fun_currentday
I am working with an insert trigger within a Sybase database. I know I can access @@nestlevel to determine whether I am being called directly or as a result of another trigger or procedure.
Is there any way to determine, when the nesting level is deeper than 1, who performed the action causing the trigger to fire?
For example, was the table inserted to directly, was it inserted into by another trigger and if so, which one.
As far as I know, this is not possible. Your best bet is to include it as a parameter to your stored procedure(s). As explained here, this will also make your code more portable since any method used would likely rely on some database-specific call. The link there was specific for SQL Server 2005, not Sybase, but I think you're pretty much in the same boat.
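A rough sketch of the parameter approach; the proc names, the @caller parameter, and the audit table here are all illustrative, not from the original post:

```sql
-- Illustrative only: each proc that touches the table passes its own name along,
-- so downstream code can tell a direct call from a nested one.
create procedure dbo.insert_asset
    @asset_id int,
    @caller   varchar(255) = null  -- null means a direct call
as
begin
    insert into dbo.asset_audit (asset_id, initiated_by)
    values (@asset_id, coalesce(@caller, 'direct'))
end
```

A calling proc would then invoke it as, say, `exec dbo.insert_asset @asset_id = 42, @caller = 'nightly_load'`. The obvious drawback is that every caller has to cooperate, but it is portable across database engines.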
I've not tested this myself, but assuming you are using Sybase ASE 15.0.3 or later, have the monitoring tables monProcessStatement and monSysStatement enabled, and have appropriate permissions set so they can be accessed from your trigger, you could try...
declare @parent_proc_id int

if @@nestlevel > 1
begin
    create table #temp_parent_proc (
        procId    int,
        nestLevel int,
        contextId int
    )

    insert into #temp_parent_proc
    select mss.ProcedureID,
           mss.ProcNestLevel,
           mss.ContextID
    from monSysStatement mss
    join monProcessStatement mps
      on mss.KPID = mps.KPID
     and mss.BatchID = mps.BatchID
     and mss.SPID = mps.SPID
    where mps.ProcedureID = @@procid
      and mps.SPID = @@spid

    -- self-join the snapshot to find the proc one nesting level up
    select @parent_proc_id = (select tpp.procId
                              from #temp_parent_proc tpp,
                                   #temp_parent_proc tpp2
                              where tpp.nestLevel = tpp2.nestLevel - 1
                                and tpp.contextId < tpp2.contextId
                                and tpp2.procId = @@procid
                                and tpp2.nestLevel = @@nestlevel
                              group by tpp.procId, tpp.contextId
                              having tpp.contextId = max(tpp.contextId))

    drop table #temp_parent_proc
end
The temp table is required because of the nature of monProcessStatement and monSysStatement.
monProcessStatement is transient and so if you reference it more than once, it may no longer hold the same rows.
monSysStatement is a historic table and is guaranteed to return an individual row only once to any process accessing it.
If you do not have, or don't want to set, permissions to access the monitoring tables, you could put this into a stored procedure that you pass @@procid, @@spid, and @@nestlevel to as parameters.
If this also isn't an option, since you cannot pass parameters into triggers, another possible work around would be to use a temporary table.
In each proc that might fire this trigger...
create table #trigger_parent (proc_id int)
insert into #trigger_parent values (@@procid)
then in your trigger the temp table will be available...
if object_id('#trigger_parent') is not null
    select @parent_proc = proc_id from #trigger_parent
and you will know it was triggered from within another proc.
The trouble with this is that it doesn't 'just work'. You have to enforce the temp table setup.
You could do further checking to find cases where there is no #trigger_parent but the nesting level is > 1, and combine that with a query against the monitoring tables as above to find potential candidates that would need to be updated.
We are running MS SQL 2005 and we have been experiencing a very peculiar problem the past few days.
I have two procs: one that creates an hourly report of data, and another that calls it, puts its results in a temp table, does some aggregations, and returns a summary.
They work fine...until the next morning.
The next morning, suddenly the calling proc complains about an invalid column name.
The fix is simply a recompile of the calling proc, and all works well again.
How can this happen? It's happened three nights in a row since moving these procs into production.
EDIT: It appears that it's not a recompile of the caller (summary) proc that is needed. I was just able to fix the problem by executing the callee (hourly) proc and then executing the summary proc. This makes less sense than before.
EDIT2:
The hourly proc is rather large, and I'm not posting it here in its entirety. But, at the end, it does a SELECT INTO, then conditionally returns the appropriate result(s) from the created temp table.
Select [large column list]
into #tmpResults
From #DailySales8
Where datepart(hour, RowStartTime) >= @StartHour
  and datepart(hour, RowStartTime) < @EndHour
  and datepart(hour, RowStartTime) <= @LastHour
IF @UntilHour IS NOT NULL
   AND EXISTS (SELECT * FROM #tmpResults WHERE datepart(hour, RowEndTime) = @UntilHour) BEGIN
    SELECT *
    FROM #tmpResults
    WHERE datepart(hour, RowEndTime) = @UntilHour
END ELSE IF @JustLastFullHour = 1 BEGIN
    DECLARE @MaxHour INT
    SELECT @MaxHour = max(datepart(hour, RowEndTime)) FROM #tmpResults
    IF @LastHour > 24 SELECT @LastHour = @MaxHour
    SELECT *
    FROM #tmpResults
    WHERE datepart(hour, RowEndTime) = @LastHour
    IF @@ROWCOUNT = 0 BEGIN
        SELECT *
        FROM #tmpResults
        WHERE datepart(hour, RowEndTime) = @MaxHour
    END
END ELSE BEGIN
    SELECT * FROM #tmpResults
END
Then it drops all temp tables and ends.
The caller (Summary)
First it creates a temp table #tmpTodaysSales to store the results; the column list DOES MATCH the definition of #tmpResults in the other proc. Then it ends up calling the hourly proc a couple of times:
INSERT #tmpTodaysSales
EXEC HourlyProc @LocationCode, @ReportDate, null, 1

INSERT #tmpTodaysSales
EXEC HourlyProc @LocationCode, @LastWeekReportDate, @LastHour, 0
I believe it is these calls that fail. But recompiling the proc, or executing the hourly procedure outside of this, and then calling the summary proc fixes the problem.
Two questions:
Does the schema of #DailySales8 vary at all? Does it have any direct/indirect dependence on the date of execution, or on any of the parameters supplied to HourlyProc?
Which execution of INSERT #tmpTodaysSales EXEC HourlyProc ... in the summary fails - first or second?
What do the overnight maintenance plans look like, and are there any other scheduled overnight jobs that run between 2230 and 1000 the next day? It's possible that a step in the maintenance plan or another agent job is causing some kind of corruption that's breaking your SP.