USE [MASTER]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
USE [MASTER]
GO
CREATE PROCEDURE [dbo].[TOTALLY_NEW] @FISCAL_YEAR NVARCHAR(4) AS
BEGIN
PRINT 'HERE'
END
GO
select * from master..sysobjects
where name like 'tot%' -- <-- returns one row!
I've refreshed this list a dozen times.
I've tried disconnecting and reconnecting.
I've created all of the other SPs listed in the image before.
Here is a picture with more.
Ensure that the user you are using has permission to view stored procedures. I am not 100% sure which permission this is in SQL Server (likely VIEW DEFINITION), but I have seen this problem on a few other databases where one user creates an SP but another user does not have permission to view or list the SPs.
Per request, converting comment to answer:
Yes, you shouldn't be creating user objects in master. The only time I ever do it is when I explicitly want to create a utility procedure that I can call from any database using that database's context - which you have to do on purpose and which doesn't happen by accident - so I suspect you inadvertently marked your object as a system procedure. You do this using EXEC sp_MS_marksystemobject (or in older versions by running EXEC sp_MS_upd_sysobj_category 1 - the latter might still work in 2005 with 80 compatibility, not sure).
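If you suspect that's what happened, a quick way to confirm (a minimal sketch, run against the procedure from the question) is to check the is_ms_shipped flag, which is what hides an object from the user-objects list:

USE master;
GO
-- is_ms_shipped = 1 means the object has been marked as a system object
SELECT name, is_ms_shipped
FROM sys.objects
WHERE name = 'TOTALLY_NEW';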
Background Information:
In Python, I might write something like this if I want to apply the same logic to different values in a list.
database_list = ["db_1", "db_2", "db_3"]
for database_name in database_list:
    print("the database name is " + database_name)
What I am trying to do:
What I am trying to do in SSMS, is pull a list of DB objects for each database. I created a stored procedure to pull exactly what I want, but I have to run it against each database, so 10 databases mean running it 10 times.
My goal is to do this with a T-SQL query instead of Python.
I tried doing something like this:
exec sp_MSforeachdb 'USE ?; EXEC [dbo].[my_stored_procedure]';
The problem with this is, [dbo].[my_stored_procedure] has to exist in every database I want to do this in.
How can I create the stored procedure in 1 database, but execute it for all databases or a list of databases that I choose?
I know what you are trying to do and if it's what I think (you seem reluctant to actually say!) you can do the following:
In the master database, create your procedure. Normally you wouldn't create user objects there, but in this case you must, and you must prefix the name with sp_:
use master
go
create procedure sp_testproc as
select top 10 * from sys.tables
go
Now if you run this, it will return tables from the master database.
If you switch context to another database and exec master.dbo.sp_testproc, it will still return tables from the master database.
In master, run
EXEC sys.sp_MS_marksystemobject sp_testproc
Now switch context to a different database and exec master.dbo.sp_testproc
It will return tables from the database you are using.
Try creating your sproc in master and naming it with an sp_ prefix:
USE master
GO
CREATE PROCEDURE sp_sproc_name
AS
BEGIN
...
END
GO
-- You *may* need to mark it as a system object
EXEC sys.sp_MS_marksystemobject sp_sproc_name
See: https://nickstips.wordpress.com/2010/10/18/sql-making-a-stored-procedure-available-to-all-databases/
It should then be available in all dbs
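For example, from any other database (database name hypothetical):

USE SomeOtherDb;
GO
-- resolves to the copy in master because of the sp_ prefix; once marked
-- as a system object it runs in SomeOtherDb's context
EXEC sp_sproc_name;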
Create the stored procedure in the Master database with the sp_ prefix, and use dynamic SQL in the stored procedure so it resolves object names relative to the current database, rather than the database which contains the stored procedure.
EG
use master
go
CREATE OR ALTER PROCEDURE [dbo].[sp_getobjects]
AS
exec ('
select *
from [sys].[objects]
where is_ms_shipped = 0
order by type, name
')
go
use AdventureWorks2017
exec sp_getobjects
@LunchBox - it's your single stored procedure (the one you create in one database) that is actually going to need to contain the "EXEC sp_MSforeachdb ..." command, and instead of the command to be executed being "EXEC [dbo].[my_stored_procedure]", it will need to be the actual SQL that you were going to put into the stored proc.
Eg. (inside your single stored procedure)
EXEC sp_MSforeachdb 'USE ?; SELECT * FROM <table>; UPDATE <another table> SET ...';
Think of the stored procedure (that you put into one database) as being no different than your Python code file - if you had actually wanted to achieve the same thing in Python, you would have either needed to create the stored proc in each database, or build the SQL statement string in Python and execute it against each database.
I understand what you thought you might be able to achieve with SQL, but stored procedures really don't work the way you were expecting. Even when you're in the context of a different database and you run EXEC <different_db>.dbo.stored_proc, that stored proc still runs in the context of the database in which it exists (not your current database).
The one issue you may come up against is that the standard sp_MSforeachdb stored proc has a limit of 2000 characters for the command that can be executed (it does have multiple "command" parameters, but that may not be practical if you were planning on running a very large code block, perhaps with variables that carry all the way through). If this might impact what you're intending to do, search online for "sp_MSforeachdb alternatives" - there are a handful that people have created where the command parameter can hold a larger string.
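As a rough sketch of that shape (procedure name hypothetical, and the inline query standing in for whatever your per-database logic was):

CREATE PROCEDURE dbo.my_foreachdb_report
AS
BEGIN
    -- sp_MSforeachdb substitutes each database name for the ? placeholder
    EXEC sp_MSforeachdb N'
        USE [?];
        SELECT DB_NAME() AS database_name, name, type_desc
        FROM sys.objects
        WHERE is_ms_shipped = 0;';
END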
This seems like it would be trivial, but I have not been able to come up with a solution to this small problem.
I am attempting to create a stored procedure in my application's database. This stored procedure just executes a job that has been set up in SQL Server Agent on the same server (that seemed to be the only way to execute these jobs programmatically).
The simple code is shown below:
USE ApplicationsDatabase
GO
CREATE PROCEDURE [dbo].[procedure]
AS
BEGIN
EXEC dbo.sp_start_job N'Nightly Download'
END
When run as is, the procedure technically gets created, but it cannot be executed because it cannot find sp_start_job, since it is looking in ApplicationsDatabase. If I recreate the procedure (after deleting the previously created one) with the USE changed to MSDB, it tries to add it to that system database, for which I do not have permissions. Finally, I tried keeping the original CREATE statement but adding USE MSDB within the procedure (just to reach sp_start_job), but that errors because USE statements cannot be placed within procedures.
After pondering on the issue for a little (I'm obviously no SQL database expert), I could not come up with a solution and decided to solicit the advice of my peers. Any help would be greatly appreciated, thanks!
You will have to fully qualify the path to the procedure. Of course, you can only execute it if the application has permissions.
Try this:
USE ApplicationsDatabase
GO
CREATE PROCEDURE [dbo].[procedure]
AS
BEGIN
EXEC msdb.dbo.sp_start_job N'Nightly Download'
END
I need to export the results of a query to a csv file and put the file on a network shared folder.
Is it possible to achieve this within a stored procedure?
If yes, comes yet another constraint: can I achieve this without sysadmin privileges, aka without using xp_cmdshell + BCP utility?
If no to 2., does the caller have to have sysadmin privileges or would it suffice if the SP owner has sysadmin privileges?
Here are some more details to the problem: The SP must export and transfer the file on the fly and raise error if something went wrong. The caller must get a response immediately, i.e. in case of no error, he can assume that the results are successfully transferred to the folder. Therefore, a DTS/SSIS job that runs every N minutes is not an option. I know the problem smells like I will have to do this at application level, but I would be more than happy if all those stuff could be done from T-SQL.
It seems to me that you are not really looking for SQL code as the answer to your question. The main aspect of your question is security: what should you do to implement your requirement without sysadmin privileges and without opening a new security hole? That, I think, is your real question.
I see at least 3 ways to solve your problem. But first, a short explanation of why the sysadmin restriction exists in all solutions based on Extended Stored Procedures. Extended Stored Procedures like xp_cmdshell are very old; they existed at least as far back as SQL Server 4.2, the first Microsoft SQL Server running under the first Windows NT (NT 3.1). In the old versions of SQL Server there was no security restriction on executing such procedures, but restrictions were added later. It is important to understand that any general-purpose procedure which can start a process under the SQL Server account, like xp_cmdshell and sp_OACreate, must be restricted to sysadmin. Only task-oriented procedures with a clear area of usage and role-based permissions can solve the problem without a security hole. So here are the 3 ways I promised:
Create a new SQL account on your SQL Server with sysadmin privileges. Then create a stored procedure which uses one of the Extended Stored Procedures, like xp_cmdshell or sp_OACreate, and technically implements your requirement (export some information into a CSV file). Using the EXECUTE AS clause (see http://msdn.microsoft.com/en-us/library/ms188354.aspx), configure the stored procedure so that it runs under the account with sysadmin privileges. Delegate execution of this procedure to users via a SQL role, to stay flexible in how permission is delegated.
Use CLR stored procedures instead of xp_cmdshell and sp_OACreate. You should also use role-based permissions on the procedure created.
The end user doesn't directly call any SQL stored procedure that you create. Instead, some piece of software (like a WCF service or a web site) calls your stored procedure, and you implement the export to a CSV file inside that software rather than inside any SQL stored procedure.
In every implementation you should define exactly where you will hold the password of the account used to access the file system. There are different options, each with advantages and disadvantages. It's also possible to use impersonation to access the file system under the end user's account. The best way depends on your environment.
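As a compressed sketch of the first way (all names hypothetical; note that for EXECUTE AS to carry server-level permissions such as xp_cmdshell, the database must be marked TRUSTWORTHY or the procedure signed with a certificate):

CREATE PROCEDURE dbo.ExportQueryToCsv
    @Query    nvarchar(2000),
    @FilePath nvarchar(260)
WITH EXECUTE AS OWNER   -- owner maps to the account with sysadmin privileges
AS
BEGIN
    -- build a bcp command line: character mode, comma-delimited, trusted connection
    DECLARE @Cmd nvarchar(4000) =
        N'bcp "' + @Query + N'" queryout "' + @FilePath +
        N'" -c -t, -T -S ' + @@SERVERNAME;
    EXEC master..xp_cmdshell @Cmd;
END
GO
-- delegate via a role, not to individual users
GRANT EXECUTE ON dbo.ExportQueryToCsv TO CsvExportRole;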
You can build a SQL Agent job and kick it off via system SPs from a trigger or SP. The job may call SSIS or bulk-dump scripts... returning an instant error message may be an issue though.
In general, it's quite an unusual requirement - what are you trying to accomplish?
UPDATE:
After some more thinking - this is a design issue and I have not been able to find a solution simply by using SQL Server SPs.
In the past, this is what I did:
on the app level - implement an async process where the user pushes a button requesting a file download; the app accepts the request and lets the user go
the user can check the status via a status page, or will get an email when it's done or an error occurred
in the meantime the application layer kicks off either an SSIS package or a SQL Agent job
if parameters are needed - design and implement a special table, JOB_PARAMETERS, where you would put the parameters (a sketch follows after this list)
you would also need to create more tables to manage the jobs, store job status and communicate with the application layer
you may want to use SQL Server Service Broker on the DB level
you may want to use MSMQ on the app level
This is not easy, but it is the most efficient way to export data: it goes from the DB to a file without traveling to the app server and the user's PC via the browser.
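A minimal sketch of the parameter-handoff table mentioned above (all names and columns hypothetical - your job-management design will dictate the real shape):

CREATE TABLE dbo.JOB_PARAMETERS (
    RequestId   int IDENTITY(1,1) PRIMARY KEY,
    ParamName   sysname        NOT NULL,
    ParamValue  nvarchar(4000) NULL,
    Status      varchar(20)    NOT NULL DEFAULT 'QUEUED', -- QUEUED / RUNNING / DONE / ERROR
    RequestedAt datetime       NOT NULL DEFAULT GETDATE()
);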
Can you use OLE Automation? It's ugly, and you could probably use some set-based string-building techniques instead of the cursor, but here goes...
Declare @DataDir varchar(4000)
Set @DataDir = 'c:\some\path\accessible\to\SQLServer'
If @DataDir IS NULL
Begin
    print 'dir is null.'
    Return 1
End
declare
    @FilePath as varchar(255),
    @DataToWrite as varchar(8000)
If right(@DataDir,1) <> '\'
    Set @DataDir = @DataDir + '\'
Set @FilePath = @DataDir + 'filename.csv'
DECLARE @RetCode int , @FileSystem int , @FileHandle int
EXECUTE @RetCode = sp_OACreate 'Scripting.FileSystemObject' , @FileSystem OUTPUT
IF (@@ERROR|@RetCode > 0 Or @FileSystem < 0)
begin
    RAISERROR ('could not create FileSystemObject',16,1)
End
declare @FileExists int
Execute @RetCode = sp_OAMethod @FileSystem, 'FileExists', @FileExists OUTPUT, @FilePath
--print '@FileExists = ' + cast(@FileExists as varchar)
If @FileExists = 1
Begin
    RAISERROR ('file already exists',16,1)
    /*return 1*/
End
--1 = for reading, 2 = for writing (will overwrite contents), 8 = for appending
EXECUTE @RetCode = sp_OAMethod @FileSystem , 'OpenTextFile' , @FileHandle OUTPUT , @FilePath, 8, 1
IF (@@ERROR|@RetCode > 0 Or @FileHandle < 0)
begin
    RAISERROR ('could not open text file',16,1)
End
DECLARE CSV CURSOR
READ_ONLY
FOR
Select fld1, fld2 From MyDataTable
order by whatever
DECLARE @fld1 nvarchar(50)
       ,@fld2 nvarchar(50)
OPEN CSV
FETCH NEXT FROM CSV INTO @fld1, @fld2
WHILE (@@fetch_status <> -1)
BEGIN
    IF (@@fetch_status <> -2)
    BEGIN
        Set @DataToWrite = @fld1 + ',' + @fld2 + char(13) + char(10)
        EXECUTE @RetCode = sp_OAMethod @FileHandle , 'Write' , NULL , @DataToWrite
        IF (@@ERROR|@RetCode > 0)
        begin
            RAISERROR ('could not write to file',16,1)
        End
    END
    FETCH NEXT FROM CSV INTO @fld1, @fld2
END
CLOSE CSV
DEALLOCATE CSV
EXECUTE @RetCode = sp_OAMethod @FileHandle , 'Close' , NULL
IF (@@ERROR|@RetCode > 0)
    RAISERROR ('Could not close file',16,1)
EXEC sp_OADestroy @FileSystem
return 0
Generally, no, this kind of work can't be done without a lot of fuss and effort and sysadmin rights.
SQL Server is a database engine focused on database problems, and so, quite rightly, it has very poor file-manipulation tools. Work-arounds include:
xp_cmdshell is the tool of choice for file manipulations.
I like the sp_OA* solution myself, 'cause it gives me flashbacks to SQL 7.0. But using those functions always made me nervous.
You might be able to do something with OPENROWSET, where the target of an insert is a file defined with this function. Sounds unlikely, might be worth a look.
Similarly, a linked server definition might be used as a target for inserts or select...into... statements.
Security seems to be your showstopper. By and large, when SQL Server shells out to the OS, it has all the rights of the NT account under which the SQL service runs; if you want to limit network access, configure that account carefully (and never make it a domain admin!)
It is possible to call xp_cmdshell as a user without sysadmin rights, and to configure these calls to not have the same access rights as the SQL Service NT account. As per BOL (SQL 2005 and up):
xp_cmdshell Proxy Account
When it is called by a user that is not a member of the sysadmin fixed server role, xp_cmdshell connects to Windows by using the account name and password stored in the credential named ##xp_cmdshell_proxy_account##. If this proxy credential does not exist, xp_cmdshell will fail.
The proxy account credential can be created by executing sp_xp_cmdshell_proxy_account. As arguments, this stored procedure takes a Windows user name and password. For example, the following command creates a proxy credential for Windows domain user SHIPPING\KobeR that has the Windows password sdfh%dkc93vcMt0.
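The example command itself, from the same BOL page, is:

EXEC sp_xp_cmdshell_proxy_account 'SHIPPING\KobeR', 'sdfh%dkc93vcMt0';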
So your user logs in with whatever user rights (not sysadmin!) and executes the stored procedure, which calls xp_cmdshell, which will "pick up" whatever proxy rights have been configured. Again, awkward, but it sounds like it'd do what you'd want it to do. (A possible limiting factor is that you only get the one proxy account, so it has to fit all possible needs.)
Honestly, it sounds to me like the best solution would be to:
Identify the source of the call to the stored procedure,
Have the procedure return the data to be written to the file (you can do all your formatting and layout in the procedure if need be), and
Have the calling routine manage all the file preparation steps (which could be as simple as redirecting data returned from SQL into an opened file)
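For instance, if the caller is a script or scheduled task, step 3 can be as simple as a sqlcmd invocation (server, database, procedure, and share names hypothetical):

sqlcmd -S MyServer -d MyDb -E -s"," -W -Q "SET NOCOUNT ON; EXEC dbo.MyExportProc" -o \\share\folder\out.csv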
So, what does launch the call to the stored procedure?
I have a stored procedure which takes an XML parameter and inserts the data into multiple tables. If I run the stored procedure into a database using a SSMS query window, everything works fine. However, we have a custom installation program that is used to deploy stored procedures to databases, and when this is used, execution of the sp fails with this error:
INSERT failed because the following SET options have incorrect settings:
'ANSI_NULLS, QUOTED_IDENTIFIER'. Verify that SET options are correct for use with
indexed views and/or indexes on computed columns and/or query notifications
and/or xml data type methods.
The custom installation program does not use the correct settings when scripting in the stored procedures.
Setting these (SET ARITHABORT ON; SET QUOTED_IDENTIFIER ON; SET ANSI_NULLS ON;) within the sp has no effect.
I have also tried setting these options for the open connection just before calling the sp in the code. This again does not have the desired effect.
It appears that the settings on the connection while the sp is being scripted into the database are what matter, not the settings when the sp is used.
I have experimented by playing with these settings in SSMS options, and this does appear to be the case. I would just like someone to confirm that this is definitely so (if there is a way around it I would love to hear it, but I'm not hopeful).
Unfortunately, altering the installer program is not an option for me at the present time, so I'm looking at having to roll back a couple of weeks' work; if I do have to do this I want to be really sure (and have some evidence to back me up) that this is the only option.
The settings that apply are those in effect at CREATE or ALTER time; the client's settings are ignored at runtime.
SSMS has the correct settings by default (as do sqlcmd, osql, etc.).
From BOL, CREATE PROC, "Using SET Options"
The Database Engine saves the settings of both SET QUOTED_IDENTIFIER and SET ANSI_NULLS when a Transact-SQL stored procedure is created or modified. These original settings are used when the stored procedure is executed. Therefore, any client session settings for SET QUOTED_IDENTIFIER and SET ANSI_NULLS are ignored when the stored procedure is running. Other SET options, such as SET ARITHABORT, SET ANSI_WARNINGS, or SET ANSI_PADDING, are not saved when a stored procedure is created or modified.
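You can verify which settings were captured when the procedure was created by querying sys.sql_modules (procedure name hypothetical):

SELECT uses_ansi_nulls, uses_quoted_identifier
FROM sys.sql_modules
WHERE object_id = OBJECT_ID('dbo.YourProc');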
When it comes to creating stored procedures, views, functions, etc., is it better to do a DROP...CREATE or an ALTER on the object?
I've seen numerous "standards" documents stating to do a DROP...CREATE, but I've seen numerous comments and arguments advocating for the ALTER method.
The ALTER method preserves security, while I've heard that the DROP...CREATE method forces a recompile of the entire SP the first time it's executed, instead of just a statement-level recompile.
Can someone please tell me if there are other advantages / disadvantages to using one over the other?
ALTER will also force a recompile of the entire procedure. Statement-level recompile applies to statements inside procedures, e.g. a single SELECT, that are recompiled because the underlying tables change, without any change to the procedure. It wouldn't even be possible to selectively recompile just certain statements on ALTER PROCEDURE: in order to understand what changed in the SQL text after an ALTER, the server would have to ... compile it.
For all objects ALTER is always better because it preserves all security, all extended properties, all dependencies and all constraints.
This is how we do it:
if object_id('YourSP') is null
exec ('create procedure dbo.YourSP as select 1')
go
alter procedure dbo.YourSP
as
...
The code creates a "stub" stored procedure if it doesn't exist yet, otherwise it does an alter. In this way any existing permissions on the procedure are preserved, even if you execute the script repeatedly.
Starting with SQL Server 2016 SP1, you now have the option to use CREATE OR ALTER syntax for stored procedures, functions, triggers, and views. See CREATE OR ALTER – another great language enhancement in SQL Server 2016 SP1 on the SQL Server Database Engine Blog. For example:
CREATE OR ALTER PROCEDURE dbo.MyProc
AS
BEGIN
SELECT * FROM dbo.MyTable
END;
Altering is generally better. If you drop and create, you can lose the permissions associated with that object.
If you have a function/stored proc that is called very frequently from a website for example, it can cause problems.
The stored proc will be dropped for a few milliseconds/seconds, and during that time, all queries will fail.
If you do an alter, you don't have this problem.
The templates for newly created stored proc are usually this form:
IF EXISTS (SELECT * FROM sysobjects WHERE type = 'P' AND name = '<name>')
BEGIN
DROP PROCEDURE <name>
END
GO
CREATE PROCEDURE <name>
......
However, the opposite is better, imo:
If the stored proc/function/etc. doesn't exist, create it with a dummy SELECT statement; then the ALTER will always work, and the object is never dropped.
We have a stored proc for that, so our stored procs/functions usually look like this:
EXEC Utils.pAssureExistance 'Schema.pStoredProc'
GO
ALTER PROCEDURE Schema.pStoredProc
...
and we use the same stored proc for functions:
EXEC Utils.pAssureExistance 'Schema.fFunction'
GO
ALTER FUNCTION Schema.fFunction
...
In Utils.pAssureExistance we do an IF and look at the first character after the ".": if it's an "f", we create a dummy function; if it's a "p", we create a dummy stored proc.
Be careful though, if you create a dummy scalar function, and your ALTER is on a table-valued function, the ALTER FUNCTION will fail, saying it's not compatible.
Again, Utils.pAssureExistance can be handy, with an additional optional parameter
EXEC Utils.pAssureExistance 'Schema.fFunction', 'TableValuedFunction'
will create a dummy table-valued function.
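For illustration, a minimal sketch of what such a helper might look like (the real Utils.pAssureExistance is the poster's own; this version only implements the "p"/"f" convention and the optional type parameter described above, and assumes the Utils schema exists):

CREATE PROCEDURE Utils.pAssureExistance
    @FullName   sysname,            -- e.g. 'Schema.pStoredProc'
    @ObjectType varchar(32) = NULL  -- e.g. 'TableValuedFunction'
AS
BEGIN
    IF OBJECT_ID(@FullName) IS NOT NULL
        RETURN; -- already exists: the subsequent ALTER can proceed

    -- first character after the '.' decides which dummy object to create
    DECLARE @c char(1) = SUBSTRING(@FullName, CHARINDEX('.', @FullName) + 1, 1);

    IF @c = 'p'
        EXEC ('CREATE PROCEDURE ' + @FullName + ' AS SELECT 1');
    ELSE IF @ObjectType = 'TableValuedFunction'
        EXEC ('CREATE FUNCTION ' + @FullName + '() RETURNS TABLE AS RETURN (SELECT 1 AS dummy)');
    ELSE
        EXEC ('CREATE FUNCTION ' + @FullName + '() RETURNS int AS BEGIN RETURN 1 END');
END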
Additionally, I might be wrong, but I think that if you do a DROP PROCEDURE while a query is currently using the stored proc, that query will fail.
An ALTER PROCEDURE, however, will wait for queries to stop using the stored proc before altering it. If queries "lock" the stored proc for too long (say a couple of seconds), the ALTER stops waiting for the lock and alters the stored proc anyway; the queries using it will probably fail at that point.
DROP generally loses permissions AND any extended properties.
On some UDFs, ALTER will also lose extended properties (definitely on SQL Server 2005 multi-statement table-valued functions).
I typically do not DROP and CREATE unless I'm also recreating those things (or know I want to lose them).
I don't know if it's possible to make such a blanket statement and say "ALTER is better". I think it all depends on the situation. If you require this sort of granular permissioning down to the procedure level, you probably should handle it in a separate procedure. There are benefits to dropping and recreating: it cleans out existing security and resets it to something predictable.
I've always preferred using drop/recreate. I've also found it easier to store them in source control that way, instead of doing "if exists, do alter; if not exists, do create".
With that said... if you know what you're doing... I don't think it matters too much.
If you perform a DROP and then use a CREATE, you have almost the same effect as using an ALTER VIEW statement. The problem is that you need to entirely re-establish your permissions on who can and can't use the view. ALTER retains any dependency information and any set permissions.
You've asked a question specifically relating to DB objects that do not contain any data and theoretically should not be changed that often.
It's likely you may need to edit these objects, but not every 5 minutes. Because of this I think you've already hit the nail on the head - permissions.
Short answer: it is not really an issue, so long as permissions are not an issue.
We used to use ALTER while we were working in development, either creating new functionality or modifying existing functionality. When we were done with development and testing we would do a DROP and CREATE. This updates the date/time stamp on the procs so you can sort them by date/time.
It also allowed us to see what was bundled by date for each deliverable we sent out.
A CREATE preceded by a DROP (if it exists) is better, because when you move the script to QA, test, or prod you don't know whether the object already exists in that environment. By adding a drop (if it already exists) and then the create, you are covered either way. You then have to reapply permissions, but that's better than hearing that your install script errored out.
From a usability point of view, a drop-and-create is better than an ALTER. ALTER will fail in a database that doesn't contain the object, whereas an IF EXISTS DROP followed by a CREATE will work whether the object already exists or not. In Oracle and PostgreSQL you normally create functions and procedures with CREATE OR REPLACE, which does the same as a SQL Server IF EXISTS DROP followed by a CREATE. It would be nice if SQL Server picked up this small but very handy syntax.
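For what it's worth, SQL Server has since picked it up, in two pieces: DROP ... IF EXISTS arrived in SQL Server 2016, and CREATE OR ALTER (mentioned in an earlier answer) in 2016 SP1:

DROP PROCEDURE IF EXISTS dbo.MyProc;  -- SQL Server 2016+
GO
CREATE PROCEDURE dbo.MyProc
AS
BEGIN
    SELECT 1;
END
GO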
This is how I would do it. Put all this in one script for a given object.
IF EXISTS ( SELECT 1
FROM information_schema.routines
WHERE routine_schema = 'dbo'
AND routine_name = '<PROCNAME>'
AND routine_type = 'PROCEDURE' )
BEGIN
DROP PROCEDURE <PROCNAME>
END
GO
CREATE PROCEDURE <PROCNAME>
AS
BEGIN
END
GO
GRANT EXECUTE ON <PROCNAME> TO <ROLE>
GO