SQL script with data from database - sql

In my project the database model changes periodically, and since the database contains test data, that data has to be re-entered each time.
A script that inserts the data quickly becomes relevant; at the moment it is written manually. How can this be done using SQL Server Management Studio?
I need a script with the data from the tables (to insert ready-made data into a new table); I already have the script for the database model.
For example: I have a table [dbo].[Users] with 3 columns [Id], [Login], [Email], and it currently contains only one user (Id = 1, Login = 'Anton', Email = 'fake@mail.com'). I create a script for my database, and the result should be something like this:
CREATE TABLE [dbo].[Users] (
[Id] int IDENTITY(1,1) NOT NULL,
[Login] nvarchar(1024) NOT NULL,
[Email] nvarchar(1024) NOT NULL
);
INSERT INTO Users([Id],[Login],[Email]) VALUES(1, 'Anton', 'fake@mail.com')

There's actually a built-in official tool for this: the Generate Scripts wizard in SQL Server Management Studio (right-click the database, then Tasks > Generate Scripts). Just follow the wizard and make sure you've set the option to script data as well as schema (in recent versions it's under Advanced > 'Types of data to script'). As far as I know, it comes included in SQL Server Management Studio 2008 or later.
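For the example table above, the generated script should look roughly like the following (a sketch of the expected output, not the wizard's exact formatting). Note that because [Id] is an IDENTITY column, the wizard wraps the row in SET IDENTITY_INSERT so the original key values survive the reload:
SET IDENTITY_INSERT [dbo].[Users] ON;
INSERT INTO [dbo].[Users] ([Id], [Login], [Email])
VALUES (1, N'Anton', N'fake@mail.com');
SET IDENTITY_INSERT [dbo].[Users] OFF;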

Related

Entity Framework Database First approach in Azure

I have a bunch of SQL scripts that create my database. I'd like to use the Database First approach with EF and Azure. I figured out that there are certain fields that my tables need to include:
CREATE TABLE dbo.Company
(
[Id] VARCHAR(50) NOT NULL CONSTRAINT DF_Company_Id DEFAULT '0',
CompanyName NVARCHAR(50) NOT NULL CONSTRAINT DF_Company_Name DEFAULT '',
-- Azure properties
[Version] TIMESTAMP NOT NULL,
[CreatedAt] DATETIMEOFFSET(7) NOT NULL,
[UpdatedAt] DATETIMEOFFSET(7) NULL,
[Deleted] BIT NOT NULL
)
I also figured out that I need certain constraints on these fields:
ALTER TABLE dbo.Company ADD CONSTRAINT PK_Company PRIMARY KEY NONCLUSTERED ([Id] ASC);
GO
ALTER TABLE dbo.Company ADD CONSTRAINT DF_Company_CreatedAt DEFAULT CONVERT(DATETIMEOFFSET(7), SYSUTCDATETIME(), 0) FOR [CreatedAt]
GO
ALTER TABLE dbo.Company ADD CONSTRAINT DF_Company_Deleted DEFAULT 0 FOR Deleted
GO
CREATE CLUSTERED INDEX IX_Company_CreatedAt ON dbo.Company([CreatedAt] ASC)
GO
CREATE TRIGGER [TR_Company_InsertUpdateDelete] ON dbo.Company
AFTER INSERT, UPDATE, DELETE AS
BEGIN
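-- Stamp UpdatedAt whenever a row is inserted or updated so sync clients can
-- detect changes; for DELETE the inserted pseudo-table is empty, so this
-- UPDATE touches nothing (deletes are presumably soft deletes via [Deleted])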
UPDATE dbo.Company
SET dbo.Company.[UpdatedAt] = CONVERT(DATETIMEOFFSET, SYSUTCDATETIME())
FROM INSERTED WHERE inserted.[Id] = dbo.Company.[Id]
END
GO
I'm wondering if there is any documentation for that. I'm not sure if I included all fields and constraints. Also, I'm not sure if creating tables using a SQL script like the above one is a good approach for EF with Azure.
Any advice?
===UPDATED TO CLARIFY THE QUESTION===
Maybe it would be clearer to explain my question step-by-step:
I have a bunch of SQL scripts that create database structure.
I'd like to make the database structure usable by Azure. For example, Azure has an Offline Data Sync feature that I know requires certain fields in the tables.
Through trial and error, I have found that Azure uses certain fields (Version, CreatedAt, UpdatedAt, and Deleted) and triggers to provide its features (such as Offline Data Sync).
I modified my SQL scripts to include these "Azure-specific" fields and triggers.
The problem is the "trial and error" part. It's just wrong to apply such an approach in production code that will potentially be used by many users. I'd like to find out what exactly Azure needs in the database. When the database is created using Code First, all the "plumbing" is created by Azure. With the Database First approach, I have to create this plumbing myself. I'm wondering whether I've missed anything.

JetBrains DataGrip 2017.1.3: force all columns to be exported when dumping data to an SQL inserts file

I have a SQL Server database with a lot of tables and data. I need to reproduce it locally in a Docker container.
I have successfully exported the schema and reproduced it. However, when I dump data to an SQL file, it does not export automatically generated fields (like IDs or UUIDs, for example).
Here is the schema for the user table:
create table [user] (
id_user bigint identity constraint PK_user primary key,
uuid uniqueidentifier default newsequentialid() not null,
id_salarie bigint constraint FK_user_salarie references salarie,
date_creation datetime,
login nvarchar(100)
)
When it exports an element from this table, I get this kind of insert:
INSERT INTO [user](id_salarie, date_creation, login) VALUES (1, null, 'example')
As a consequence, most of my inserts give me foreign key errors, because the ids generated by my new database are not the same as the ones in the old database. I can't change everything manually as there is way too much data.
Instead, I would like to have this kind of insert:
INSERT INTO [user](id_user, uuid, id_salarie, date_creation, login) VALUES (1, 'manuallyentereduuid', 1, null, 'example')
Is there any way to do this with DataGrip directly? Or maybe a specific SQL Server way of generating insert statements like this?
Don't hesitate to ask for more details in comments.
You need the 'Skip generated columns' option while configuring the INSERT extractor - make sure it is unchecked so that identity and other generated columns are included in the dump.
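Note that even with the identity column included in the dump, SQL Server will reject an explicit value for id_user unless identity insert is enabled for the table first. A minimal sketch of how such a dump has to be wrapped when it is replayed (table and columns from the question; the GUID is just a sample literal):
SET IDENTITY_INSERT [user] ON;
INSERT INTO [user] (id_user, uuid, id_salarie, date_creation, login)
VALUES (1, 'A0EEBC99-9C0B-4EF8-BB6D-6BB9BD380A11', 1, null, 'example');
SET IDENTITY_INSERT [user] OFF;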
It seems like DataGrip does not give you that possibility, so I used something else: DBeaver. It is free and based on the Eclipse environment.
The method is simple:
Select all the tables you want to export
Right click -> Export table data
From there you just have to follow the instructions. It outputs one file per table, which is a good thing if you have a large volume of data. I had trouble executing the whole script in one go and had to split it when using DataGrip.
Hope this helps anyone encountering the same problem. If you find the solution directly in DataGrip, I would like to know too.
EDIT: See the answer above.

Generate scripts in SQL Server via a stored procedure or written to a text file

I am aware that SQL Server can generate scripts for the metadata of tables in a database. I was wondering if there is a way to do this without using the wizard, whether by using a stored procedure to write to a .txt file or by using SSIS to generate the DDL commands. I just want something that can run as a job automatically overnight, so there is no manual intervention to run the wizard. Is there a way of doing this in SQL Server Management Studio using commands, or with SSIS?
Thank you
You could set up a database trigger that keeps a record of any change in your development database.
An example is this :
create TRIGGER trDDLHistory ON DATABASE
for CREATE_FUNCTION, ALTER_FUNCTION, DROP_FUNCTION,
CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE,
CREATE_TABLE, ALTER_TABLE, DROP_TABLE,
CREATE_TRIGGER, ALTER_TRIGGER, DROP_TRIGGER,
CREATE_VIEW, ALTER_VIEW, DROP_VIEW
-- there are more, of course
AS
BEGIN
INSERT INTO tblDDLHistory(DDLHistory)
VALUES(convert(nvarchar(max), EVENTDATA()))
END;
An example of tblDDLHistory is:
CREATE TABLE [dbo].[tblDDLHistory] (
[DDLHistoryID] INT IDENTITY(1,1) NOT NULL,
[DDLDate] DATETIME NOT NULL DEFAULT (getdate()),
[DDLHistory] XML NULL,
CONSTRAINT [PK_DDLHistoryID] PRIMARY KEY (DDLHistoryID)
)
Now you can pull all the changes from this table and use them in your production database.
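Since EVENTDATA() is stored as XML, the executed DDL statements can be read back out of the history table with an XQuery value() call - a small sketch against the table above:
SELECT DDLDate,
DDLHistory.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)') AS CommandText
FROM dbo.tblDDLHistory
ORDER BY DDLDate;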

How do I create tables from my .sql file in SQL Server

How do I create a table in SQL Server from a .sql file? I see the query statements and the data to be inserted into the table, but how do I create the actual tables?
If the statements to create the tables aren't in the .sql file, then you will need to know their structure and create them, either by using a handwritten query, another .sql file or SQL Server Management Studio (SSMS).
In SSMS you can expand a database in the Object Explorer and then right click on "Tables" to select "New Table..." then you will get a UI for defining the columns you need.
With the context of your previous question, you need to contact whoever supplied the .sql files and ask for a script to create the required tables. Or perhaps they should send you a copy (a backup) of the database.
You can run it from the command prompt using the command below:
sqlcmd -S myServer\instanceName -i C:\myScript.sql
http://msdn.microsoft.com/en-us/library/ms170572.aspx
This assumes that your .sql file contains the logic to create the needed tables.
You can create tables from a SQL script like so:
CREATE TABLE MyTable1 (
MyString1 NVARCHAR(50) NOT NULL,
MyInt1 INT NULL
)
GO
CREATE TABLE MyTable2 (
Id INT IDENTITY(1,1) NOT NULL,
Name NVARCHAR(50) NOT NULL,
PRIMARY KEY(Id)
)
GO
See http://msdn.microsoft.com/en-US/library/ms174979%28v=SQL.90%29.aspx for more info

SQL version control methodology

There are several questions on SO about version control for SQL and lots of resources on the web, but I can't find something that quite covers what I'm trying to do.
First off, I'm talking about a methodology here. I'm familiar with the various source control applications out there, I'm familiar with tools like Red Gate's SQL Compare, etc., and I know how to write an application to check things in and out of my source control system automatically. If there is a tool which would be particularly helpful in providing a whole new methodology or which has a useful and uncommon functionality, then great, but for the tasks mentioned above I'm already set.
The requirements that I'm trying to meet are:
The database schema and look-up table data are versioned
DML scripts for data fixes to larger tables are versioned
A server can be promoted from version N to version N + X where X may not always be 1
Code isn't duplicated within the version control system - for example, if I add a column to a table I don't want to have to make sure that the change is in both a create script and an alter script
The system needs to support multiple clients who are at various versions for the application (trying to get them all up to within 1 or 2 releases, but not there yet)
Some organizations keep incremental change scripts in their version control and to get from version N to N + 3 you would have to run scripts for N->N+1 then N+1->N+2 then N+2->N+3. Some of these scripts can be repetitive (for example, a column is added but then later it is altered to change the data type). We're trying to avoid that repetitiveness since some of the client DBs can be very large, so these changes might take longer than necessary.
Some organizations will simply keep a full database build script at each version level and then use a tool like SQL Compare to bring a database up to one of those versions. The problem here is that intermixing DML scripts is difficult. Imagine a scenario where I add a column, use a DML script to fill said column, and then in a later version that column name is changed.
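To make that concrete, here is a hypothetical sketch of the scenario (table and column names are invented for illustration):
-- version 2: add a column and backfill it with a DML script
ALTER TABLE dbo.Orders ADD Region nvarchar(50) NULL;
GO
UPDATE dbo.Orders SET Region = 'Unknown' WHERE Region IS NULL;
GO
-- version 5: the column is renamed; a full v5 build script only knows
-- SalesRegion, so the v2 backfill script no longer matches the schema
EXEC sp_rename 'dbo.Orders.Region', 'SalesRegion', 'COLUMN';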
Perhaps there is some hybrid solution? Maybe I'm just asking for too much? Any ideas or suggestions would be greatly appreciated though.
If the moderators think that this would be more appropriate as a community wiki, please let me know.
Thanks!
I struggled with this for several years before recently adopting a strategy that seems to work pretty well. Key points I live by:
The database doesn't need to be independently versioned from the app
All database update scripts should be idempotent
As a result, I no longer create any kind of version tables. I simply add changes to a numbered sequence of .sql files that can be applied at any given time without corrupting the database. If it makes things easier, I'll write a simple installer screen for the app to allow administrators to run these scripts whenever they like.
Of course, this method does impose a few requirements on the database design:
All schema changes are done through script - no GUI work.
Extra care must be taken to ensure all keys, constraints, etc. are named so they can be referenced by a later update script, if necessary.
All update scripts should check for existing conditions.
Examples from a recent project:
001.sql:
if object_id(N'dbo.Registrations') is null
begin
create table dbo.Registrations
(
[Id] uniqueidentifier not null,
[SourceA] nvarchar(50) null,
[SourceB] nvarchar(50) null,
[Title] nvarchar(50) not null,
[Occupation] nvarchar(50) not null,
[EmailAddress] nvarchar(100) not null,
[FirstName] nvarchar(50) not null,
[LastName] nvarchar(50) not null,
[ClinicName] nvarchar(200) not null,
[ClinicAddress] nvarchar(50) not null,
[ClinicCity] nvarchar(50) not null,
[ClinicState] nchar(2) not null,
[ClinicPostal] nvarchar(10) not null,
[ClinicPhoneNumber] nvarchar(10) not null,
[ClinicPhoneExtension] nvarchar(10) not null,
[ClinicFaxNumber] nvarchar(10) not null,
[NumberOfVets] int not null,
[IpAddress] nvarchar(20) not null,
[MailOptIn] bit not null,
[EmailOptIn] bit not null,
[Created] datetime not null,
[Modified] datetime not null,
[Deleted] datetime null
);
end
if not exists(select 1 from information_schema.table_constraints where constraint_name = 'pk_registrations')
alter table dbo.Registrations add
constraint pk_registrations primary key nonclustered (Id);
if not exists (select 1 from sysindexes where [name] = 'ix_registrations_created')
create clustered index ix_registrations_created
on dbo.Registrations(Created);
if not exists (select 1 from sysindexes where [name] = 'ix_registrations_email')
create index ix_registrations_email
on dbo.Registrations(EmailAddress);
if not exists (select 1 from sysindexes where [name] = 'ix_registrations_name_and_clinic')
create index ix_registrations_name_and_clinic
on dbo.Registrations (FirstName,
LastName,
ClinicName);
002.sql
/**********************************************************************
The original schema allowed null for these columns, but we don't want
that, so update existing nulls and change the columns to disallow
null values
*********************************************************************/
update dbo.Registrations set SourceA = '' where SourceA is null;
update dbo.Registrations set SourceB = '' where SourceB is null;
alter table dbo.Registrations alter column SourceA nvarchar(50) not null;
alter table dbo.Registrations alter column SourceB nvarchar(50) not null;
/**********************************************************************
The client wanted to modify the signup form to include a fax opt-in
*********************************************************************/
if not exists
(
select 1
from information_schema.columns
where table_schema = 'dbo'
and table_name = 'Registrations'
and column_name = 'FaxOptIn'
)
alter table dbo.Registrations
add FaxOptIn bit null
constraint df_registrations_faxoptin default 0;
003.sql, 004.sql, etc...
At any given time I can run the entire series of scripts against a database in any state and know that it will immediately be brought up to speed with the current version of the app. Because everything is scripted, it's much easier to build a simple installer to do this, and adding the schema changes to source control is no problem at all.
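Because the scripts are idempotent and numbered, the whole series can be replayed in order with nothing more than sqlcmd from a scheduled job or installer. A minimal sketch, run from a command prompt (server, database, and folder names are placeholders; use %%f inside a .bat file, and note this relies on the file names sorting in execution order):
for %f in (C:\db-scripts\*.sql) do sqlcmd -S myServer\instanceName -d MyDatabase -b -i "%f"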
You've got quite a rigorous set of requirements, and I'm not sure whether you'll find something that ticks all the boxes, especially the multiple concurrent schemas and the intelligent version control.
The most promising tool that I've read about that kind of fits is Liquibase.
Here are some additional links:
http://en.wikipedia.org/wiki/LiquiBase
http://www.ibm.com/developerworks/java/library/j-ap08058/index.html
Yes, you're asking for a lot, but they're all really pertinent points! Here at Red Gate we're moving towards a complete database development solution with our SQL Source Control SSMS extension and we're facing similar challenges.
http://www.red-gate.com/products/SQL_Source_Control/index.htm
For the upcoming release we're fully supporting schema changes, and supporting static data indirectly via our SQL Data Compare tool. All changes are saved as creation scripts, although when you're updating or deploying to a database, the tool will ensure that the changes are applied appropriately as an ALTER or CREATE.
The most challenging requirement that doesn't yet have a simple solution is version management and deployment, which you describe very clearly. If you make complex changes to the schema and data, it may be inevitable that a handcrafted migration script is constructed to get between two adjacent versions, as not all of the 'intent' is always saved alongside a newer version. Column renames are a prime example. The solution could be for a system to be devised that saves the intent, or if this is too complex, allows the user to supply a custom script to perform the complex change. Some sort of version management framework would manage these and "magically" construct deployment scripts from two arbitrary versions.
For this kind of issue, use Visual Studio Team System 2008 for version control of your SQL database.
In TFS there are a number of features available, such as:
Data Compare
Schema Compare
version control
About database version control: http://www.codinghorror.com/blog/2006/12/is-your-database-under-version-control.html
For more detail check: http://msdn.microsoft.com/en-us/library/ms364062(VS.80).aspx
We are using SQL Examiner for keeping the database schema under version control. I've tried VS2010 as well, but in my opinion the VS approach is too complex for small and mid-size projects. With SQL Examiner I mostly work in SSMS and use SQL Examiner to check in updates to SVN (TFS and SourceSafe are supported as well, but I never tried them).
Here is a description of SQL Examiner's approach: How to get your database under version control
Try DBSourceTools (http://dbsourcetools.codeplex.com).
It's open source, and specifically designed to script an entire database - tables, views, procs - to disk, and then re-create that database through a deployment target.
You can script all data, or just specify which tables to script data for.
Additionally, you can zip up the results for distribution.
We use it for source control of databases, and to test update patches for new releases.
In the back-end it's built around SMO, and thus supports SQL 2000, 2005 and 2008.
DBDiff is integrated, to allow for schema comparisons.
Have fun,
- Nathan.