I can't find the option to exclude constraints when using SQL Server Management Studio script generation - sql-server-2012

I have a situation where we are creating data interfaces for external customers.
Our managers push out sets of data to an external database from a secure internal database. The external database needs to have the exact table structure as the internal database, but it does not need any of the relational constraints since the data is being managed by an internal copy.
I have tried using Generate Scripts, but I can't seem to find the right 'Advanced Options' settings to keep the generated script from including the IDENTITY or CONSTRAINT definitions. The default seems to always include them.
I assume I am just not selecting the right options.
Edit:
Here is the option selection and an example of the result.
I could do a Find and then Replace All for the 'IDENTITY', but the 'CONSTRAINT' items seem a bit more cumbersome.

You need to set:
Advanced Scripting Options -> Table/View Options -> Script Foreign Keys -> False
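For illustration, here is roughly the before/after the asker is describing (the table and column names are made up, and the stripped version still needs the IDENTITY clauses removed by hand or via the scripting options):

```sql
-- With the default options, SSMS emits something like this:
CREATE TABLE dbo.Customer (
    CustomerID int IDENTITY(1,1) NOT NULL,
    Name nvarchar(100) NOT NULL,
    CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerID)
);

-- The desired output for the external copy: same structure,
-- no IDENTITY, no named constraints.
CREATE TABLE dbo.Customer (
    CustomerID int NOT NULL,
    Name nvarchar(100) NOT NULL
);
```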

Related

SQL Server: Scripts to perform Object Movement from a database.schema to another

I am new to SQL Server. Database: SQL Server 2012, size 2 TB.
We are planning to consolidate multiple database.dbo.* objects under a single database as different schemas (databaseN.schemaN.*). Thus we need to prepare scripts to move the database1.dbo.* objects into another database under a different schema (e.g. database2.schema2.*), including all the dependent objects (we need an exact replica). This needs to be done without using any tools (SSMS, ApexSQL etc.).
How should I go about scripting this? I was thinking along the lines of the approach below:
Extract complete metadata (including all constraints/triggers/indexes/keys/partitions etc.)
Extract data
Execute metadata scripts on target
Disable all relational constraints and triggers
Insert all the extracted data
Enable all the relational constraints and triggers
If this approach is fine, can I get some assistance with how to go about scripting it? Also, please suggest any other approach. Some tables are partitioned and 50-100 GB in size.
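The disable/re-enable steps above can be done per table with NOCHECK/CHECK in T-SQL; a minimal sketch (schema and table names are placeholders, run in the target database after the metadata scripts have been applied):

```sql
-- Disable FK/CHECK constraints and triggers before the bulk load
ALTER TABLE schema2.MyTable NOCHECK CONSTRAINT ALL;
DISABLE TRIGGER ALL ON schema2.MyTable;

-- ... insert the extracted data here ...

-- Re-enable; WITH CHECK revalidates the loaded rows so the
-- constraints remain trusted by the optimizer
ALTER TABLE schema2.MyTable WITH CHECK CHECK CONSTRAINT ALL;
ENABLE TRIGGER ALL ON schema2.MyTable;
```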

What is the best way to design, generate, and version a database schema script for MS SQL Server?

I have never really seen any questions (with answers) as general as this, so I'm hoping to get some useful feedback. The reason I'm asking is because I've done all of this before and I have my own ways, but sometimes I feel it's not the best practice.
Let's say, for example, that I can't afford better DB modeling tools and I only have SQL Server and SQL Server Management Studio (SSMS). What I do is:
I design all of the entities in my db (tables, primary keys, foreign keys, indexes, etc.) with SSMS,
then I just generate the schema script using the 'Generate Scripts...' command in SSMS. The script that's generated is rather large (using SQL Server Express 2012) and doesn't seem very well organized for maintenance.
Example: after all the table creation scripts, there's a bunch of ALTER TABLE commands to add all the constraints. This kind of thing seems like it would fit better in the table creation section, though maybe not. Also, for upgradeability, I normally add an 'IF NOT EXISTS' guard to each table creation section, so that the script doesn't throw an error when I need to re-run it after the db is updated with new tables, columns, etc.
Then for versioning, I generally have a separate script that I run to add the schema version in a VERSION table in the db itself (with just one row).
This allows me to do incremental upgrades when I run the script by adding 'if new-version > current-version' sort of thing.
It seems to have worked out for me in the past, but it just seems kind of, I don't know, not very sophisticated. Can a sql expert shed some light on this subject? It's something we all do for every data driven web app we create, over and over again. I'd like to see how other developers do it.
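The one-row version table and incremental guard described above can be sketched like this (table names and version numbers are illustrative):

```sql
-- One-row version table, created once
IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'VERSION')
BEGIN
    CREATE TABLE dbo.VERSION (SchemaVersion int NOT NULL);
    INSERT INTO dbo.VERSION (SchemaVersion) VALUES (0);
END

-- Incremental upgrade block: runs only if the db is older than this step
IF (SELECT SchemaVersion FROM dbo.VERSION) < 2
BEGIN
    ALTER TABLE dbo.Customer ADD Email nvarchar(256) NULL;
    UPDATE dbo.VERSION SET SchemaVersion = 2;
END
```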
To recap,
how do you go about designing your db model and generate scripts (do you do it with a design tool, write from scratch, etc?),
how do you manage incremental db changes over time?
How do you version control your database?
SQL Server Data Tools is ideal for this. It has all the design features you require and configurable scripting. It will also diff two databases and generate the change script for you. Oh - and it's free!

AS400 SQL query similar to CLRLIB (clear library) in native AS400

I'm working on an AS400 database and I need to manipulate libraries/collections with SQL.
I need to recreate something similar to the CLRLIB command, but I can't find a good way to do this.
Is there a way to delete all the tables from a library with an SQL query?
Maybe I can drop the collection and create a new one with the same name. But I don't know if this is a good way to clear the library.
RESOLVED:
Thanks to Buck Calabro for his solution.
I use the following query to call the CLRLIB in SQL :
CALL QSYS.QCMDEXC('CLRLIB LIB_NAME ASPDEV(ASP_NAME)', 0000000032.00000)
Where LIB_NAME is the name of the library I want to clear, ASP_NAME is the name of the ASP where the library resides, and 0000000032.00000 is the command length.
(note that the term COLLECTION has been deprecated, SCHEMA is the current term)
Since a library can contain both SQL and non-SQL objects, there's no SQL way to delete every possible object type.
Dropping the schema and recreating it might work. But note that if the library is in a job's library list, it will have a lock on it and you will not be able to drop it. Also, unless the library was originally created via CREATE SCHEMA (or CREATE COLLECTION) you're going to end up with differences.
CRTLIB creates an empty library, CREATE SCHEMA creates a library plus objects needed for automatic journaling and a dozen or so SQL system views.
Read Charles' answer - there may be objects in your schema that you want to keep (data areas, programs, display and printer files, etc.) If the problem is to delete all of the tables so you can re-build all of the tables, then look at the various system catalog tables: SYSTABLES, SYSVIEWS, SYSINDEXES, etc. The system catalog 'knows' about all of the SQL tables, indexes, views, stored procedures, triggers and so on. You could read the catalog and issue the appropriate SQL DROP statements.
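For the table-only case, the catalog-driven approach described above might look like this on DB2 for i (the library name is a placeholder; the generated statements still have to be executed, e.g. via dynamic SQL or QCMDEXC):

```sql
-- Generate one DROP statement per SQL base table in the library
SELECT 'DROP TABLE ' || RTRIM(TABLE_SCHEMA) || '.' || RTRIM(TABLE_NAME)
FROM QSYS2.SYSTABLES
WHERE TABLE_SCHEMA = 'MYLIB'
  AND TABLE_TYPE = 'T';   -- 'T' = base tables only, leaves views etc. alone
```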

Best practice SCRIPT Installation DataBase

I would like to have your opinions regarding best practices to adopt when writing an SQL script to install a database.
PROBLEM A)
In my script I have several batches to create tables.
The tables have many foreign keys to each other, and at the moment I must arrange the batches in the right order to avoid conflicts between FK tables.
I would like to know whether it would be good practice to create the tables and all their columns without FKs first, and at the end of the script ALTER those tables to add the FKs.
PROBLEM B)
My script will be used to create different DBs on different servers.
The database could have a different name in every installation.
Now in my script I create a Database using:
CREATE DATABASE NameX
and:
USE NameX
to use it.
Because of this I would need to update the script manually for every installation. I was thinking it would be great to have a CENTRALIZED way of naming the database inside the script.
That way, changing a single variable would set the database name used by CREATE DATABASE and by all the USE statements.
I tried to use LOCAL VARIABLES, but without success, because they go out of scope after GO statements.
I do not have any experience in using sqlcmd and variables there.
Any idea how to solve it inside my script?
PS: I use MS SQL 2008 and I will load my script in SSMS.
Thanks guys for your help, this community is great :-)
avoid using "USE DATABASE"
separate the database-creation script from the data-object creation scripts
use some code (setup, deploy) to execute the database-creation script, replacing #database_name with the real name
alternative:
use some replacement tool to prepare the scripts before deploying (it just replaces your ###database_name### with the real name)
use a bat file to prepare the scripts
alternative:
use a Database Project in Visual Studio. VS can generate some variables that setup projects can change during the deploy process.
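Another alternative the asker alludes to: sqlcmd scripting variables, which (unlike T-SQL local variables) survive GO batch separators. A minimal sketch, with a placeholder name; run it with sqlcmd -i, or in SSMS after enabling Query > SQLCMD Mode:

```sql
:setvar DatabaseName MyCustomerDb

CREATE DATABASE $(DatabaseName);
GO
USE $(DatabaseName);
GO
```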
Normally one starts with scripting all the tables, followed by the FK scripts, index scripts and the rest. This is normal practice, as you can't add relationships to tables that are not there...
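The pattern confirmed above (tables first, relationships afterwards) in miniature, with invented names:

```sql
-- Create tables with no FKs: batch order no longer matters
CREATE TABLE dbo.Customer (CustomerID int NOT NULL PRIMARY KEY);
GO
CREATE TABLE dbo.[Order] (
    OrderID int NOT NULL PRIMARY KEY,
    CustomerID int NOT NULL
);
GO
-- Final batch: add the relationships once every table exists
ALTER TABLE dbo.[Order]
    ADD CONSTRAINT FK_Order_Customer
    FOREIGN KEY (CustomerID) REFERENCES dbo.Customer (CustomerID);
GO
```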
As for your second problem - there is no way I am aware of for centralizing this. Your best option is a global search/replace of the database name on open files in SSMS.

Create a database from another database?

Is there an automatic way in SQL Server 2005 to create a database from several tables in another database? I need to work on a project and I only need a few tables to run it locally, and I don't want to make a backup of a 50 gig DB.
UPDATE
I tried Tasks -> Export Data in Management Studio, and while it created a new database with the tables I wanted, it did not copy over any table metadata, i.e. no PK/FK constraints and no identity data (even with Preserve Identity checked).
I obviously need these for it to work, so I'm open to other suggestions. I'll try that database publishing tool.
I don't have Integration Services available, and the two SQL Servers cannot directly connect to each other, so those are out.
Update of the Update
The Database Publishing Tool worked. The SQL it generated was slightly buggy, so a little hand editing was needed (it tried to reference nonexistent triggers), but once I did that I was good to go.
You can use the Database Publishing Wizard for this. It will let you select a set of tables with or without the data and export it into a .sql script file that you can then run against your other db to recreate the tables and/or the data.
Create your new database first. Then right-click on it and go to the Tasks sub-menu in the context menu. You should have some kind of import/export functionality in there. I can't remember exactly since I'm not at work right now! :)
From there, you will get to choose your origin and destination data sources and which tables you want to transfer. When you select your tables, click on the advanced (or options) button and select the check box called "preserve primary keys". Otherwise, new primary key values will be created for you.
I know this method can hardly be called automatic but why don't you use a few simple SELECT INTO statements?
Because I'd have to reconstruct the schema, constraints and indexes first. Thats the part I want to automate...Getting the data is the easy part.
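For completeness, this is what the SELECT INTO suggestion looks like, and why it falls short here: it copies column definitions and rows, but none of the keys, constraints, or indexes (the database and table names are placeholders):

```sql
-- Copies structure (columns/types) and data, but not PKs, FKs, or indexes
SELECT *
INTO NewDb.dbo.Customer
FROM OldDb.dbo.Customer;

-- Structure only, no rows, via a predicate that is never true
SELECT *
INTO NewDb.dbo.Customer_Empty
FROM OldDb.dbo.Customer
WHERE 1 = 0;
```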
Thanks for your suggestions everyone, looks like this is easy.
Integration Services can help accomplish this task. This tool provides advanced data transformation capabilities, so you will be able to get the exact subset of data you need from the large database.
Assuming the data is needed for testing/debugging, you may consider applying Row Sampling to reduce the amount of data exported.
Create new database
Right click on it,
Tasks -> Import Data
Follow instructions