Visual Studio Database Project: Include IF EXISTS checks for all the objects in the project

I have imported my database into a database project and so far everything looks good. I would like to know if there is any way to remove the suffix from the object file names.
For example: every table file is named 'SomeTable.table.sql' and every procedure 'SomeProcedure.proc.sql'. I want the file names to follow a simpler convention: 'SomeObject.sql'.
Also, all the objects in the project contain only a CREATE statement. I want to update them to include an IF EXISTS check, like:
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[SomeTableName]') AND type in (N'U'))
BEGIN
DROP TABLE [SomeTableName]
END
GO
CREATE TABLE SomeTableName ...
I searched a lot on the web, but couldn't find anything useful or a definitive answer.

As far as I'm aware it's not possible to configure the naming of the imported object files. However, at least in the most recent incarnation of the database project, you should be able to rename them yourself after the import, and you can additionally organize your files in folders as you see fit.
It isn't possible to store the object definition files with IF EXISTS checks, simply because that isn't their purpose. They are there to represent the objects, allowing you to view in your source control system how the objects have evolved over time. These scripts are not designed to be executed. If you want to deploy from a database project you need to use the Publish feature, or use schema compare. This generates a script that is designed to be run.
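For illustration, a generated deployment script is incremental rather than drop-and-recreate: the tool diffs the project against the target and emits only the changes. If you had, say, added a column to a table, the generated script would contain something along these lines (the column is hypothetical):
-- Illustrative fragment of a publish-generated deployment script
ALTER TABLE [dbo].[SomeTableName]
    ADD [NewColumn] INT NULL;
GO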

Related

Import Oracle User Schema

I've got an Oracle database with several users (Other Users?), and I would like to import a schema from an .sql file.
My question is how to specify in my .sql file that the import is for a specific user.
Thank you in advance.
Examine your SQL file. If the commands in it specify a schema name, you'll need to modify it before you can import it into a different schema.
For example, does it have commands like this:
CREATE TABLE scott.mytable (...)
or like:
CREATE TABLE mytable (...)
If the schema name (e.g. "scott") has been hard-coded, then you'll need to edit your sql script to carefully remove it.
If not, then you just need to log in as the target username and run your sql script.
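In that case, a minimal sketch of the two ways to run it from SQL*Plus (the user name and file name are illustrative):
-- Option 1: connect as the target user and run the script
CONNECT target_user/password
@myschema.sql
-- Option 2: stay connected as a privileged user, but make unqualified
-- names resolve to the target schema; this only changes name resolution,
-- creating objects there still requires the CREATE ANY privileges
ALTER SESSION SET CURRENT_SCHEMA = target_user;
@myschema.sql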
That depends on the content of your SQL file. You're not doing an import, you are running an SQL file, and that is a bit like "running a script": it can contain anything. So it's hard for us to tell, without knowing the content, how you should run the file. There are many ways of defining the owner of an object; it can be done explicitly or implicitly. So the first thing to check is: is a user (schema) specified IN the script? If it is, where and how is it specified?
In the simplest case, people just write a script that connects and installs objects in the current schema, sometimes even without the connect. In that case you can run the script as whatever user you want the objects to be created in.
At the other extreme, you can have a script where a given owner is specified at each object reference. In that case, you'll probably end up doing a global search and replace.
So, let us know how your script works, and we can go into detail.

SQL table content update from external data source

I am not sure how to ask this question, so please point me in the right direction if I am not using the appropriate terminology, but I can explain what I am currently doing. I would like to know if there is an easier way to update content in the database than the method I'm currently using.
(I'm using SQL Server 2008 BTW.)
I have a bunch of CSV files that I give to my client as a means to update content, which then gets imported into the DB (because the content is LARGE). The import works by running a Python script I wrote that uses a Jinja2 template to generate the SQL file needed to insert the CSV content into the database (in a from-scratch scenario). This is working fine.
Now when it comes to data migration (I need to migrate the data that exists in the DB to a new version of the schema) I have a lot of manual work to do: I hand-code it in the template; there is no SQL command or auto-generated code that can do this for me.
So let's say I have a list of hospitals in a CSV file and I already have a set of hospitals in the database (imported from the previous version of the CSV file). I create a copy of the Hospitals table (without the data) and call it HospitalsTemp. The new CSV hospitals are inserted into the HospitalsTemp table (at least that part is generated via the template).
The Hospitals table now gets detached from all its foreign keys and constraints. Then I go through all the tables referencing Hospitals (again, manually!) and replace each hospitalId that pointed to the old row with the new hospitalId (I can do a lookup from Hospitals to HospitalsTemp based on the hospital code to ensure that referential integrity is retained).
Then I delete the Hospitals table, rename HospitalsTemp to Hospitals, and put back the foreign keys and constraints on the new Hospitals table.
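In SQL terms, the remapping step looks roughly like this (the child table and its column names are purely illustrative):
-- Repoint one child table from the old hospital ids to the new ones,
-- joining old to new through the stable hospital code
UPDATE child
SET child.hospitalId = newH.hospitalId
FROM Admissions AS child
JOIN Hospitals AS oldH ON oldH.hospitalId = child.hospitalId
JOIN HospitalsTemp AS newH ON newH.hospitalCode = oldH.hospitalCode;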
I hope I explained it well enough for everyone to understand. I'm really hoping for a simpler way to do this.
How do you know which hospital becomes which? Do the names stay the same? Is there an Id that stays the same?
Have you looked at SSIS, and the Slowly Changing Dimension component? You can use it to update existing rows and add new rows: http://blogs.msdn.com/b/karang/archive/2010/09/29/slowly-changing-dimension-using-ssis.aspx
Also SSIS would be a good tool for the import, as it handles reading CSV files well.
By the sounds of it, you could replace the current logic with a simple SSIS package that's just a flat-file data source feeding the output of the SCD wizard.
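If you'd rather stay in plain T-SQL, SQL Server 2008's MERGE can do the same update-or-insert from the staging table. A rough sketch, assuming the hospital code is the stable business key (the non-key column names are illustrative); because existing rows keep their hospitalId, the foreign keys never need to be touched:
-- Upsert from the freshly loaded staging table into the live table
MERGE Hospitals AS target
USING HospitalsTemp AS source
    ON target.hospitalCode = source.hospitalCode
WHEN MATCHED THEN
    UPDATE SET target.name = source.name
WHEN NOT MATCHED BY TARGET THEN
    INSERT (hospitalCode, name)
    VALUES (source.hospitalCode, source.name);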

How to script out stored procedures to files?

Is there a way that I can find where stored procedures are saved so that I can just copy the files to my desktop?
Stored procedures aren't stored as files, they're stored as metadata and exposed to us peons (thanks Michael for the reminder about sysschobjs) in the catalog views sys.objects, sys.procedures, sys.sql_modules, etc. For an individual stored procedure, you can query the definition directly using these views (most importantly sys.sql_modules.definition) or using the OBJECT_DEFINITION() function as Nicholas pointed out (though his description of syscomments is not entirely accurate).
To extract all stored procedures to a single file, one option would be to open Object Explorer, expand your server > databases > your database > programmability and highlight the stored procedures node. Then hit F7 (View > Object Explorer Details). On the right-hand side, select all of the procedures you want, then right-click, script stored procedure as > create to > file. This will produce a single file with all of the procedures you've selected. If you want a single file for each procedure, you could use this method by only selecting one procedure at a time, but that could be tedious. You could also use this method to script all accounting-related procedures to one file, all finance-related procedures to another file, etc.
An easier way to generate exactly one file per stored procedure would be to use the Generate Scripts wizard - again, starting from Object Explorer - right-click your database and choose Tasks > Generate scripts. Choose Select specific database objects and check the top-level Stored Procedures box. Click Next. For output choose Save scripts to a specific location, Save to file, and Single file per object.
These steps may be slightly different depending on your version of SSMS.
Stored procedures are not "stored" as separate files that you're free to browse and read outside the database. They are stored in the database they belong to, in a set of system tables. The table that contains the definition is called [sysschobjs], which isn't even (directly) accessible to any of us end users.
To retrieve the definition of these stored procedures from the database, I like to use this query:
select definition from sys.sql_modules
where object_id = object_id('sp_myprocedure')
But I like Aaron's answer. He gives some other nice options.
It depends on which version of SQL Server you're running. For recent versions, source code for stored procedures is available via the system view sys.sql_modules, but a simpler way to get the source for a stored procedure or user-defined function (UDF) is the system function object_definition() (which the view definition of sys.sql_modules itself uses):
select object_definition( object_id('dbo.my_stored_procedure_or_user_defined_function') )
In older versions, stored procedure and UDF source was available via the now-deprecated system view sys.syscomments.
And in still older versions of SQL Server, it was available via the system table dbo.syscomments.
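For those older versions, the equivalent query looked something like this (long definitions span multiple rows, so order by colid and concatenate; the procedure name is illustrative):
-- Deprecated approach: text is split across rows of up to 4000 characters
SELECT c.text
FROM dbo.syscomments AS c
WHERE c.id = OBJECT_ID('dbo.my_stored_procedure')
ORDER BY c.colid;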
It should be noted that, depending on your access and how the database is configured, the source may not be available to you or it may be encrypted, which makes it not terribly useful.
You can also get the source programmatically using SMO (SQL Server Management Objects): http://technet.microsoft.com/en-us/library/hh248032.aspx
I recently came across an issue with programmatically extracting stored procedure scripts to file. I started off using the routine_definition approach, but quickly realised that I hit the 4000-character limit... No matter what I tried, I couldn't find a way to get over that hump. (Still interested to know if there's a way around this!)
Instead, I stumbled across a powerful built-in helper: sp_helptext.
In short, for the purpose of extracting stored procedure scripts specifically, sp_helptext extracts each line of source to a row in the output, i.e. 2000 lines of code = 2000 rows in the returned dataset. As long as your individual lines don't exceed the 4000-character limit, nothing will be clipped.
Of course, you can then write the entire table contents to file pretty easily either in SQL, or in my case SSIS.
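A minimal usage sketch (the procedure name is illustrative):
-- Each row of the result set is one line of the procedure's source
EXEC sp_helptext 'dbo.my_stored_procedure';
-- To post-process in SQL, capture the rows into a table first
CREATE TABLE #src (line NVARCHAR(255));
INSERT INTO #src EXEC sp_helptext 'dbo.my_stored_procedure';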
In case someone comes across this problem: I guess the fastest way to extract all the items (stored procedures, views, user-defined tables, functions) is to create a database project in any solution, then import everything with Schema Compare, and voilà, you have all the items nicely created in the corresponding folders.

create function sql server 2005 disable errors

I'm trying to write a *.bat file which runs all SQL scripts in a given folder (every file in the folder contains a create-function script):
for /r "%~dp0\Production\Functions" %%X in (*.sql) do (
    sqlcmd -S%1 -d%2 -b -i "%%X"
)
But some functions in the folder depend on others, so I get an "Invalid object name" error. Is there a way to disable this error?
Rename your files so that they're listed in the correct order of precedence. So, for example, if FuncA.sql uses FuncB.sql, then rename the files as 001-FuncB.sql, 002-FuncA.sql.
It is not possible to disable errors generated by SQL when you run (what I think of as) code-based objects: stored procedures, functions, views, triggers, and anything else that has to be the sole statement in its batch.
It is also awkward at best to work around this problem. Some options:
One way, as Joe Stefanelli recommends, is to name your files such that they get executed in proper order (by name, or perhaps by date created or something more esoteric).
Another way is to group related functions in single scripts, such that referenced objects must be created before referencing objects.
Or combine the above two, putting all your dependent objects in one script you can guarantee will always run first. Not so useful if you have nested references.
A last (and more kludgy) way is to iterate over your scripts several times (assuming your "create" script will properly deal with an object that already exists), until a given pass raises no errors.
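For that last option, each file needs a guard so that re-running it is harmless; a minimal sketch for a scalar function (the function name is illustrative, SQL Server 2005 syntax):
-- Drop-if-exists guard so the script can be executed repeatedly
IF OBJECT_ID(N'dbo.FuncA', N'FN') IS NOT NULL
    DROP FUNCTION dbo.FuncA;
GO
CREATE FUNCTION dbo.FuncA () RETURNS INT
AS
BEGIN
    RETURN 1; -- real body goes here
END
GO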
For development purposes, we store code-based objects in individual files, but when it comes time to wrap the code up for push to Production systems, I glom the files together, test it, and shuffle the contents around and retest until no more errors are generated.

Doctrine schema changes while keeping data?

We're developing a Doctrine backed website using YAML to define our schema. Our schema changes regularly (including fk relations) so we need to do a lot of:
Doctrine::generateModelsFromYaml(APPPATH . 'models/yaml', APPPATH . 'models', array('generateTableClasses' => true));
Doctrine::dropDatabases();
Doctrine::createDatabases();
Doctrine::createTablesFromModels();
We would like to keep existing data and store it back in the re-created database. So I copy the data into a temporary database before the main db is dropped.
How do I get the data from the "old-scheme DB copy" to the "new-scheme DB"? (the new scheme only contains NEW columns, NO COLUMNS ARE REMOVED)
NOTE:
This obviously doesn't work, because the column count doesn't match:
INSERT INTO newscheme.Table SELECT * FROM copy.Table
This obviously does work, but it takes too much time to write out for every table:
INSERT INTO newscheme.Table SELECT old.col, old.col2, old.col3, 'somenewdefaultvalue' FROM copy.Table AS old
Have you looked into Migrations? They allow you to alter your database schema in a programmatic way, without losing data (unless you remove columns, of course).
How about writing a script (using the Doctrine classes, for example) which parses the YAML schema files (both the previous version and the "next" version) and generates the SQL scripts to run? It would be a one-time job and shouldn't require that much work. The benefit of generating manual migration scripts is that you can easily store them in the version control system and replay the version steps later on. If that's not something you need, you can just gather up the changes in the code and apply them directly through the database driver.
Of course, the fancier your schema changes become, the harder the maintenance will get, i.e. column name changes, null to not null, etc.
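For example, when the new schema only adds columns, each generated (or hand-written) migration statement can stay as simple as this (the table, column, and default are illustrative):
-- Add the new column with a default so existing rows remain valid
ALTER TABLE some_table
    ADD COLUMN new_col VARCHAR(50) NOT NULL DEFAULT 'somenewdefaultvalue';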