CREATE FUNCTION in SQL Server 2005: disable errors

I'm trying to write a *.bat file that runs all SQL scripts in a given folder (every file in this folder contains a CREATE FUNCTION script):
for /r "%~dp0\Production\Functions" %%X in (*.sql) do (
sqlcmd -S%1 -d%2 -b -i "%%X"
)
But some functions in the folder depend on others, so I get an "Invalid object name" error. Is there a way to disable this error?

Rename your files so that they're listed in the correct order of precedence. So, for example, if FuncA.sql uses FuncB.sql, then rename the files as 001-FuncB.sql, 002-FuncA.sql.

It is not possible to disable errors generated by SQL when you run (what I think of as) code-based objects: stored procedures, functions, views, triggers, and anything else that has to be the sole object of a batch submitted to SQL.
It is also awkward at best to work around this problem. Some options:
One way, as Joe Stefanelli recommends, is to name your files such that they get executed in proper order (by name, or perhaps by date created or something more esoteric).
Another way is to group related functions in single scripts, such that referenced objects must be created before referencing objects.
Or combine the above two, putting all the objects that others depend on in one script you can guarantee will always run first. Not so useful if you have nested references.
A last (and more kludgy) way is to iterate over your scripts several times (assuming your "create" script will properly deal with an object that already exists), until a given pass raises no errors.
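As an illustration only, a rough sketch of that multi-pass idea applied to the asker's batch file (the pass limit and error tracking are my additions, not something the answer specifies):
@echo off
setlocal
rem Re-run every script in the folder until a pass completes without
rem errors, giving up after a fixed number of passes. Assumes each
rem script can safely be re-run (e.g. it drops the object first).
set PASSES=10
:retry
set /a PASSES-=1
set FAILED=0
for /r "%~dp0\Production\Functions" %%X in (*.sql) do (
    sqlcmd -S%1 -d%2 -b -i "%%X" >nul || set FAILED=1
)
if %FAILED%==1 if %PASSES% GTR 0 goto retry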
For development purposes, we store code-based objects in individual files, but when it comes time to wrap the code up for push to Production systems, I glom the files together, test it, and shuffle the contents around and retest until no more errors are generated.

How to Watch Changes to an SQLite Database and Trigger a Shell Script

Note: I believe I may be missing a simple solution to this problem. I'm relatively new to programming. Any advice is appreciated.
The problem: a small team of people (~3-5) wants to be able to automate, as far as possible, the filing of downloaded files in appropriate folders. Files will be downloaded into a shared downloads folder, and the files in this downloads folder will be sorted into a large shared folder structure according to their file type, the URL they were downloaded from, and so on. These files are stored on a shared server, and the actual sorting will be done by some kind of shell script running on the server itself.
Whilst there are some utilities which do this (such as Maid), they don't do everything I want them to do. Maid, for example, doesn't have a way to get the download URL of a file in Linux. Additionally, it is written in Ruby, which I'd like to avoid.
The biggest stumbling block, then, is finding a way to get the URL of the downloaded file so that it can be passed into the shell script. Originally I thought this could be done via getfattr, which gets a file's extended attributes. Frustratingly, however, whilst Chromium saves a file's download URL as an extended attribute, Firefox doesn't seem to do the same thing. So relying on extended attributes seems to be out of the question.
What Firefox does do, however, is store download 'metadata' in the places.sqlite file, in two separate tables: moz_annos and moz_places. Inspired by this, I decided to build a Firefox extension that writes all information about the downloaded file to an SQLite database downloads.sqlite on our server upon the completion of said download. This includes the URL, MIME type, etc. of the downloaded file.
The idea is that with this data, the server could run a shell script that does some fine-grained sorting of the downloaded file into our shared file system.
However, I am struggling to find a stable, reliable, and portable way of 'triggering' the script that will actually move the files, as well as a way of passing information about these files to the script so that it can sort them accordingly.
There are a few ways I thought I could go about this. I'm not sure which method is the most appropriate:
1) Watch Downloads Folder
This method would watch for changes to the shared downloads directory, then use the file name of the downloaded file to query downloads.sqlite, get the matching row, and finally pass the file's attributes into a bash script that sorts said file.
Difficulties: finding a way to reliably match the downloaded file with the appropriate record in the database. Files may have the same download name but need to be sorted differently, for example if they were downloaded from different URLs. Additionally, I'd like to get additional attributes, such as whether the file was downloaded in incognito mode.
2) Create Auxiliary 'Helper' File
Upon a file download event, the extension creates a 'helper' text file, named after the downloaded file plus some marker, which contains the additional file attributes:
/Downloads/
mydownload.pdf
mydownload-downloadhelper.txt
The server can then watch for the creation of a .txt file in the downloads directory and run the necessary shell script from this.
Difficulties: whilst this avoids using an SQLite database, it seems rather ungraceful and hacky, and I can see a multitude of ways in which this method would just break or not work.
3) Watch SQLite Database
This method writes to the shared SQLite database downloads.sqlite on the server, then watches, by some method, for a new row to be inserted into this database. This could either be done by watching the SQLite database for a new INSERT on a table, or by an SQLite trigger on INSERT that runs a bash script, passing the download information on to a shell script.
Difficulties: there doesn't seem to be any easy way to watch an SQLite database for a new row insert, and a trigger within SQLite doesn't seem to be able to launch an external script/program. I've searched high and low for a method of doing either of these two things, but I'm struggling to find any documented way to do it that I am able to understand.
What I would like is:
Some feedback on which of these methods is appropriate, or if there is a more appropriate method that I am overlooking.
An example of a system/program that does something similar to this.
Many thanks in advance.
It seems to me that you have put the cart before the horse:
Use cron to periodically check for new downloads. Process them on the command line instead of trying to trigger things from inside sqlite3:
a) Here is an approach using your shared sqlite3 database "downloads.sqlite":
Upfront once:
1) Add a table to your database containing just an integer as record counter and a timestamp field, e.g. "table_counter":
sqlite3 downloads.sqlite "CREATE TABLE table_counter (counter INTEGER PRIMARY KEY NOT NULL, timestamp DATETIME DEFAULT (datetime('now','UTC')));" 2>/dev/null
2) Insert an initial record into this new table, setting the "counter" to zero and recording a timestamp:
sqlite3 downloads.sqlite "INSERT INTO table_counter VALUES (0, datetime('now','UTC'));" 2>/dev/null
Every so often:
3) Query the table containing the downloads with a "SELECT COUNT(*)" statement:
sqlite3 downloads.sqlite "SELECT COUNT(*) from table_downloads;" 2>/dev/null
Result e.g., 20
4) Compare this number to the number stored in the record counter field:
sqlite3 downloads.sqlite "SELECT (counter) from table_counter;" 2>/dev/null
Result e.g., 17
5) If the result from 3) is greater than the result from 4), then you have downloaded more files than you have processed.
6) If so, query the table containing the downloads with a "SELECT" statement for the oldest not yet processed download, using a "subselect":
sqlite3 downloads.sqlite "SELECT * from table_downloads where rowid = (SELECT (counter+1) from table_counter);" 2>/dev/null
In my example this would SELECT all values for the data record with the rowid of 17+1 = 18;
Do your magic in regards to the downloaded file stored as record #18.
7) Increase the record counter in the "table_counter", again using a subselect:
sqlite3 downloads.sqlite "UPDATE table_counter SET counter = (SELECT (counter) from table_counter)+1;" 2>/dev/null
8) Finally, update the timestamp for the "table_counter":
Why? Shit happens on shared drives... This way you can always check how many download records have been processed and when that last happened.
sqlite3 downloads.sqlite "UPDATE table_counter SET timeStamp = datetime('now','UTC');" 2>/dev/null
If you want to have a log of this processing, then change the SQL statement in 4) to a "SELECT COUNT(*)" and the one in 7) to an "INSERT", with its subselect becoming "(SELECT (counter+1) from table_counter)", respectively...
Please note: the redirections "2>/dev/null" at the end of the SQL statements are just there to suppress the following kind of line, which newer versions of SQLite3 print before showing your query results:
-- Loading resources from /usr/home/bernie/.sqliterc
If you don't like timeStamps based on UTC then use localtime instead:
(datetime('now','localtime'))
Put steps 3) through 8) in a shell script and use a cron entry to run this query/comparison periodically...
Use the complete /path/to/sqlite3 in this shell script (just in case it runs on a shared drive; someone could be fooling around with paths and could surprise your cron job...).
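For example, here is a minimal sketch of such a script (the paths, the table_downloads layout, and the processing step are my assumptions, not part of the answer above):
#!/bin/sh
# Steps 3) through 8) as a cron-able script, using the full path to sqlite3.
SQLITE=/usr/bin/sqlite3
DB=/path/to/downloads.sqlite

downloaded=$("$SQLITE" "$DB" "SELECT COUNT(*) FROM table_downloads;" 2>/dev/null)
processed=$("$SQLITE" "$DB" "SELECT counter FROM table_counter;" 2>/dev/null)

while [ "$downloaded" -gt "$processed" ]; do
    # 6) fetch the oldest not yet processed download record
    row=$("$SQLITE" "$DB" "SELECT * FROM table_downloads WHERE rowid = (SELECT counter+1 FROM table_counter);" 2>/dev/null)
    # ... do your sorting magic with "$row" here ...
    # 7) and 8) increase the counter and refresh the timestamp
    "$SQLITE" "$DB" "UPDATE table_counter SET counter = (SELECT counter FROM table_counter)+1;" 2>/dev/null
    "$SQLITE" "$DB" "UPDATE table_counter SET timestamp = datetime('now','UTC');" 2>/dev/null
    processed=$((processed + 1))
done
A crontab entry such as */10 * * * * /path/to/process_downloads.sh would then run it every ten minutes.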
b) I will give you a simpler approach, using awk and some hash like md5, in a separate answer.
So it is easier for future readers and easier for you to "rate" :-)

Import Oracle User Schema

I've got an Oracle database with several users (Other Users?), and I would like to import a schema which is in a .sql file.
My question is how to specify in my .sql file that the import is for a specific user.
Thank you in advance.
Examine your sql file. If the commands in there specify a schema name, then you'll need to modify it before you can import it into a different schema.
For example, does it have commands like this:
CREATE TABLE scott.mytable (...)
or like:
CREATE TABLE mytable (...)
If the schema name (e.g. "scott") has been hard-coded, then you'll need to edit your sql script to carefully remove it.
If not, then you just need to log in as the target username and run your sql script.
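As a purely hypothetical illustration (the user name, password, connect identifier, and script name are placeholders), running the script as the target user with SQL*Plus would look something like:
sqlplus targetuser/password@ORCL @your_schema_script.sql
Unqualified object names in the script are then created in targetuser's schema.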
That depends on the content of your SQL file. You're not doing an import; you are running an SQL file, and that is a bit like "running a script": it can contain anything. So it's hard for us to tell from here how you should run a file whose content we know nothing about. There are many ways of defining the owner of an object; it can be done explicitly or implicitly. So that's the first thing to check: is a user (schema) specified IN the script? If it is, where is it specified, and how?
In the simplest case, people just write a script that connects and installs objects in the current schema, sometimes even without the connect. In that case you can call the script as any user you want the objects to be created in.
At the other extreme, you can have a script where a given owner is specified at each object reference. In that case, you'll probably end up doing a global search and replace.
So, let us know how your script works, and we can go into detail.

How to script out stored procedures to files?

Is there a way that I can find where stored procedures are saved so that I can just copy the files to my desktop?
Stored procedures aren't stored as files, they're stored as metadata and exposed to us peons (thanks Michael for the reminder about sysschobjs) in the catalog views sys.objects, sys.procedures, sys.sql_modules, etc. For an individual stored procedure, you can query the definition directly using these views (most importantly sys.sql_modules.definition) or using the OBJECT_DEFINITION() function as Nicholas pointed out (though his description of syscomments is not entirely accurate).
To extract all stored procedures to a single file, one option would be to open Object Explorer, expand your server > databases > your database > programmability and highlight the stored procedures node. Then hit F7 (View > Object Explorer Details). On the right-hand side, select all of the procedures you want, then right-click, script stored procedure as > create to > file. This will produce a single file with all of the procedures you've selected. If you want a single file for each procedure, you could use this method by only selecting one procedure at a time, but that could be tedious. You could also use this method to script all accounting-related procedures to one file, all finance-related procedures to another file, etc.
An easier way to generate exactly one file per stored procedure would be to use the Generate Scripts wizard - again, starting from Object Explorer - right-click your database and choose Tasks > Generate scripts. Choose Select specific database objects and check the top-level Stored Procedures box. Click Next. For output choose Save scripts to a specific location, Save to file, and Single file per object.
These steps may be slightly different depending on your version of SSMS.
Stored procedures are not "stored" as separate files that you're free to browse and read without the database. They're stored in the database they belong to, in a set of system tables. The table that contains the definition is called [sysschobjs], which isn't even accessible (directly) to any of us end users.
To retrieve the definition of these stored procedures from the database, I like to use this query:
select definition from sys.sql_modules
where object_id = object_id('sp_myprocedure')
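If you want the definitions of all procedures at once, a small variation of the same query (just a sketch) works too:
-- one row per stored procedure: its name and its full source
select object_name(object_id) as procedure_name, definition
from sys.sql_modules
where objectproperty(object_id, 'IsProcedure') = 1;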
But I like Aaron's answer. He gives some other nice options.
It depends on which version of SQL Server you're running. For recent versions, source code for stored procedures is available via the system view sys.sql_modules, but a simpler way to get the source for a stored procedure or user-defined function (UDF) is by using the system function object_definition() (which the view definition of sys.sql_modules uses):
select object_definition( object_id('dbo.my_stored_procedure_or_user_defined_function') )
In older versions, stored procedure and UDF source was available via the now-deprecated system view sys.syscomments.
And in older versions yet, it was available via the system table dbo.syscomments.
It should be noted that, depending on your access and how the database is configured, the source may not be available to you, or it may be encrypted, which makes it not terribly useful.
You can also get the source programmatically using SMO (Sql Server Management Objects).
http://technet.microsoft.com/en-us/library/hh248032.aspx
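For instance, a short PowerShell sketch using SMO (the server name, database name, and output folder are placeholders) could write one file per procedure:
# load SMO and connect to the server (adjust names/paths as needed)
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo") | Out-Null
$server = New-Object Microsoft.SqlServer.Management.Smo.Server "MYSERVER"
$db = $server.Databases["MyDatabase"]
foreach ($proc in $db.StoredProcedures | Where-Object { -not $_.IsSystemObject }) {
    # Script() emits the CREATE statement(s) for the procedure
    $proc.Script() | Out-File "C:\procs\$($proc.Schema).$($proc.Name).sql"
}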
I recently came across an issue with programmatically extracting stored procedure scripts to files. I started off using the routine_definition approach, but quickly realised that I hit the 4000-character limit... No matter what I tried, I couldn't find a way to get over that hump. (Still interested to know if there's a way around this!)
Instead, I stumbled across a powerful built-in helper: sp_helptext.
In short, for the purposes of extracting stored procedure scripts specifically, sp_helptext returns each line of source as a row in the output, i.e. 2000 lines of code = 2000 rows in the returned dataset. As long as your individual lines don't exceed the 4000-character limit, nothing will be clipped.
Of course, you can then write the entire table contents to file pretty easily either in SQL, or in my case SSIS.
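As a sketch of that capture step (the procedure name is a placeholder), the output of sp_helptext can be collected with INSERT ... EXEC and then written out however you like:
-- each row of #src holds one line of the procedure's source
CREATE TABLE #src (line nvarchar(4000));
INSERT INTO #src EXEC sp_helptext 'dbo.my_stored_procedure';
SELECT line FROM #src;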
In case someone comes across this problem: I guess the fastest way to extract all the items (stored procedures, views, user-defined tables, functions) is to create a database project in any solution, then import everything with Schema Compare, and voilà, you have all the items nicely created in corresponding folders.

Visual studio Database Project: Include If Exists checks for all the objects in the project

I have imported my database into a database project and so far everything looks good. I would like to know if there is any way to remove the suffix from the object file names.
For example: every table file is named 'SomeTable.table.sql' and every procedure is named 'SomeProcedure.proc.sql'. I want the file names to follow a simple naming convention: 'SomeObject.sql'.
Also, all the objects in the project have just a CREATE statement. I want to wrap each of them in an IF EXISTS check like:
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[SomeTableName]') AND type in (N'U'))
BEGIN
DROP TABLE [SomeTableName]
END
GO
CREATE TABLE SomeTableName.......
I tried searching a lot for this on the web, but couldn't find anything useful or any definitive answer.
As far as I'm aware it's not possible to configure the naming of the imported object files. However, at least in the most recent incarnation of the database project, you should be able to rename them yourself after the import, and you can additionally organize your files in folders as you see fit.
It isn't possible to store the object definition files using 'if exists' simply because this isn't their purpose. They are there to represent the objects, allowing you to view in your source control system how the objects have evolved over time. These scripts are not designed to be executed. If you want to deploy from a database project you need to use the Publish feature, or use schema compare. This generates a script that is designed to be run.
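For what it's worth, such a deployment script can also be generated from the command line with SqlPackage; a sketch, with hypothetical file and server names:
SqlPackage.exe /Action:Script /SourceFile:bin\Release\MyDatabase.dacpac /TargetServerName:MYSERVER /TargetDatabaseName:MyDatabase /OutputPath:deploy.sql
The generated deploy.sql contains the existence checks and DDL needed to bring the target database in line with the project.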

Using wix3 SqlScript to run generated temporary sql-script files

I am starting to write an installer which will use the SqlScript element.
That element takes a reference to the Binary table to specify which script to run.
I would like to dynamically generate the script during the installation.
I can see three possibilities:
Somehow get SqlScript to read its data from a file rather than a Binary entry.
Inject my generated script into the Binary table
Using SqlString
That would require placing some rather long strings into Properties, but I guess that shouldn't really be a problem.
Any advice?
Regards
Leif
(My reason, should anyone be interested, is that the database should have a job set up that calls an installed exe file. I prefer to create the job using SqlScript, and the path of that file is not known until InstallDir has been chosen.)
The way this is typically handled is to have the static stuff in SqlScript and use SqlString (which can contain formatted Properties) to execute the dynamic stuff. You can interleave the two with careful use of the Sequence attribute.
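As a rough illustration (the identifiers and the job-creation SQL are invented; the element and attribute names come from the WiX3 sql extension), interleaving the two might look like:
<!-- static objects come from a script stored in the Binary table -->
<sql:SqlScript Id="StaticObjects" BinaryKey="StaticSql" SqlDb="MyDb"
               ExecuteOnInstall="yes" Sequence="1" />
<!-- the dynamic part references [INSTALLDIR], which is resolved at install time -->
<sql:SqlString Id="CreateJobStep" SqlDb="MyDb" ExecuteOnInstall="yes" Sequence="2"
               SQL="EXEC msdb.dbo.sp_add_jobstep @job_name=N'MyJob', @step_name=N'Run tool', @subsystem=N'CMDEXEC', @command=N'&quot;[INSTALLDIR]MyTool.exe&quot;'" />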