JetBrains DataGrip 2017.1.3: force columns to be exported when dumping data to an SQL inserts file

I have an SQL server database with a lot of tables and data. I need to reproduce it locally in a docker container.
I have successfully exported the schema and reproduced it. When I dump data to an SQL file, it does not export automatically generated fields (like ids or uuids, for example).
Here is the schema for the user table:
create table user (
    id_user bigint identity constraint PK_user primary key,
    uuid uniqueidentifier default newsequentialid() not null,
    id_salarie bigint constraint FK_user_salarie references salarie,
    date_creation datetime,
    login nvarchar(100)
)
When it exports an element from this table, I get this kind of insert:
INSERT INTO user(id_salarie, date_creation, login) VALUES (1, null, 'example')
As a consequence, most of my inserts give me foreign key errors, because the ids generated by my new database are not the same as the ones in the old database. I can't change everything manually as there is way too much data.
Instead, I would like to have this kind of insert:
INSERT INTO user(id_user, uuid, id_salarie, date_creation, login) VALUES (1, 'manuallyentereduuid', 1, null, 'example')
Is there any way to do this with DataGrip directly? Or maybe a specific SQL Server way of generating insert statements this way?
Don't hesitate to ask for more details in comments.

You need to turn off the 'Skip generated columns' option while configuring the INSERT extractor.
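Note that even with the generated columns included in the dump, SQL Server will reject explicit values for the id_user identity column unless IDENTITY_INSERT is switched on for that table while the script runs. A minimal sketch, assuming the user table from the question (the uuid value is just a placeholder):
SET IDENTITY_INSERT [user] ON;  -- brackets needed because user is a reserved word

INSERT INTO [user] (id_user, uuid, id_salarie, date_creation, login)
VALUES (1, 'A0EEBC99-9C0B-4EF8-BB6D-6BB9BD380A11', 1, NULL, 'example');

SET IDENTITY_INSERT [user] OFF;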

It seems like DataGrip does not give you that possibility, so I used something else: DBeaver. It is free and based on the Eclipse environment.
The method is simple:
Select all the tables you want to export
Right click -> Export table data
From there you just have to follow the instructions. It outputs one file per table, which is helpful if you have a large volume of data. I had trouble executing the whole script and had to split it when using DataGrip.
Hope this helps anyone encountering the same problem. If you find the solution directly in DataGrip, I would like to know too.
EDIT: See the answer above.

Related

Create table or only add changed/new columns

I have several tables which are worked on within a development environment, then moved to production. If they don't already exist in production, it's fine to just generate the table creation script from SSMS and run it. However, there are occasions where the table already exists in production but all that's needed is an extra column or constraint. The problem is knowing exactly what has changed.
Is there a way to get SQL to compare my CREATE TABLE statement against the existing table and only apply what has changed? Essentially I am trying to do the below and SQL correctly complains that the table exists already.
I would have to manually write an ALTER query, which on a real example would be difficult due to the sheer volume of columns (see the sketch after the table definitions below). Is there a better/easier way to see what has changed? Note that this involves two separate database servers.
CREATE TABLE suppliers
( supplier_id int NOT NULL,
supplier_name char(50) NOT NULL,
contact_name char(50),
CONSTRAINT suppliers_pk PRIMARY KEY (supplier_id)
);
CREATE TABLE suppliers
( supplier_id int NOT NULL,
supplier_name char(50) NOT NULL,
contact_name char(50),
contact_number char(20), --this has been added
CONSTRAINT suppliers_pk PRIMARY KEY (supplier_id)
);
Also, dropping and recreating wouldn't be a possibility because data would be lost.
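For the example above, the manual change would be a single statement, shown here only for reference (the question is about avoiding having to work such changes out by hand across many columns):
-- The manual change for the suppliers example: add the new column in place
ALTER TABLE suppliers
ADD contact_number char(20);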
SSMS can generate the schema change script if you make the change in the table designer (right-click on the table in Object Explorer and select Design). Then, instead of applying the change immediately, from the menu select Table Designer-->Generate Change Script. Note that depending on the change, SSMS may need to recreate the table, although data will be retained. SSMS requires you uncheck the option to "prevent saving changes that require table re-creation" under Tools-->Options-->Designers-->Table and Database Designers. Review the script to make sure you're good with it.
SQL Server Data Tools (SSDT) and third-party tools (e.g. from Red-Gate and ApexSQL) have schema-compare features to generate the needed DDL after the fact. There are also features like migration scripts to facilitate continuous integration and source control integration as well. I suggest you keep database objects under source control and leverage database tooling as part of your development process.
Typically we use something like database migrations for this, as a feature outside of the database. For example, in several of our C# apps we have a tool called FluentMigrator. We write a script in code that adds the new columns we need to the dev database. When the project is debugged, FM runs the script and modifies the dev db, the dev code uses the new columns, and all is well. FM knows not to run the script again.
When the time comes to put something live, the FM script is part of the release: the app is put live onto the website, the migrations run again and update the live db, so the live code uses the new columns and all is still well.
If there is nothing outside of your SQL Server (not sure how you manage that, but...), then surely you must be writing scripts (or using the GUI to generate scripts) that alter the DB, right? So just keep those scripts and run them as part of the process of going live.
If you are looking at this from the perspective that these DBs already exist, created by someone else who threw away the scripts, then you can catch up one time using a database schema compare tool. Microsoft has one in SSDT; see here for more info on how it is used:
https://msdn.microsoft.com/en-us/library/hh272690(v=vs.103).aspx
If you don't have many constraints, I suggest you create a dynamic script to cast and import the data into your new tables. If this doesn't fail, you just drop the old tables and rename the newly created ones.
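A minimal sketch of that copy-and-swap approach, using the suppliers example from the question (the _new table would be created from the updated definition first; names are illustrative):
-- Copy the data into the new table, casting where column types changed
INSERT INTO suppliers_new (supplier_id, supplier_name, contact_name)
SELECT supplier_id,
       CAST(supplier_name AS char(50)),
       contact_name            -- contact_number stays NULL for existing rows
FROM suppliers;
-- If the copy succeeds, swap the tables
DROP TABLE suppliers;
EXEC sp_rename 'suppliers_new', 'suppliers';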

How to transfer data using SSIS

I am new to SSIS packages and just require assistance on how to transfer data from one data source into my own database.
Below is my data flow:
Now I have an ODBC Source (Http_Requests Source) where I take data from a PostgreSQL database table (see screenshot below for table columns and data):
Below is the OLE DB destination where it has the table I want to transfer the data to (this table is currently blank):
Now I tried to start debugging to extract the data but I get a few errors (displayed below):
I am a complete novice, so I would like some guidance on what I need to include in order to get this SSIS package to transfer data across. Would I need to include a merge statement, and how do I apply it? I heard you can write a merge as a proc and call the proc as a SQL command. Does that mean I will need to write a proc in SSMS and then call it within the OLE DB Destination?
If somebody can provide an example and screenshot then that would be very helpful as I am really new to SSIS.
Thank you,
Check the constraints on the destination table, or disable them before running it.
Below are the queries you can use.
-- Disable all table constraints
ALTER TABLE YourTableName NOCHECK CONSTRAINT ALL
-- Enable all table constraints
ALTER TABLE YourTableName CHECK CONSTRAINT ALL
Tick the 'keep identity' box or drop the primary key on the table. After you apply the changes, do not forget to refresh the metadata by opening the mappings in SSIS.
The error means that PerformanceId is an IDENTITY column on your destination table. IDENTITY columns are read-only unless you tell SQL Server otherwise. In T-SQL, to be able to insert into an IDENTITY column we would turn on IDENTITY_INSERT. Because you are in SSIS, you can accomplish the same thing by checking the "keep identity" box.
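For reference, the T-SQL equivalent looks roughly like this (the destination table name is an assumption based on the column mentioned in the error):
SET IDENTITY_INSERT dbo.Performance ON;   -- allow explicit PerformanceId values
-- ... the INSERT statements that supply PerformanceId go here ...
SET IDENTITY_INSERT dbo.Performance OFF;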
HOWEVER, whenever you get an error like this it is usually a sign that you should NOT be mapping ID to PerformanceId. The question you have to ask is: is the identity from your source supposed to be the identity of the destination table? Usually not; most of the time it would map to another column as a surrogate key. Then you have to work out whether it is even possible, because if there is a unique constraint or primary key, the identity cannot repeat, which means you have to know that your source's id column will not cause a duplicate primary key violation.
More than likely the actual fix is for you to unmap ID from the source and ignore the value.
The column PerformanceID (in the target) is almost certainly an identity column, and that is why it is not working. You may not want to transfer it (and have SQL Server generate values for PerformanceID), or you can check 'Keep Identity'.

Copy database with data and foreign keys without identity insert on

Scenario:
I have a set of test data that needs to be deployed to our build server daily (our build server database is first overwritten with the current live database, and has all data over a month old removed).
This test data has foreign key references within it which need to stay.
I can't simply switch on IDENTITY_INSERT as the primary keys may clash with data that is already in the database (because we aren't starting from a blank database).
The test data needs to be able to be regenerated fairly regularly, so the thought of going through the deploy script and fudging the id columns to be something outlandish (or a negative number, for instance), and then changing the related foreign key columns to be the same id every time we regenerate the data, doesn't thrill me.
Ideally I would like to know if there is a tool which can scan a database, pick up the foreign key constraints and generate the insert scripts accordingly, something like:
INSERT INTO MyTable VALUES('TEST','TEST');
DECLARE @Id INT;
SET @Id = (SELECT @@IDENTITY);
INSERT INTO MyRelatedTable VALUES(@Id,'TEST');
It sounds like you want to look into an ETL process that copes with the change in id. As you're using SQL Server, you can look at the OUTPUT clause: use it to build up temporary tables that map the "old" id to the "new" id for each primary key, and use that mapping for the foreign keys when migrating the "child" tables.
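A rough sketch of that idea, using the tables from the question (the staging tables and column names are assumptions for illustration):
-- Holds the mapping from the id in the test data to the identity generated on insert
DECLARE @IdMap TABLE (OldId INT, NewId INT);
-- MERGE with an always-false match condition is the usual trick that lets the
-- OUTPUT clause see both the source id and the newly generated identity value
MERGE INTO MyTable AS t
USING MyTable_Staging AS s
    ON 1 = 0
WHEN NOT MATCHED BY TARGET THEN
    INSERT (ColA, ColB) VALUES (s.ColA, s.ColB)
OUTPUT s.Id, inserted.Id INTO @IdMap (OldId, NewId);
-- Child rows keep their references by looking the new id up in the map
INSERT INTO MyRelatedTable (MyTableId, ColC)
SELECT m.NewId, s.ColC
FROM MyRelatedTable_Staging AS s
JOIN @IdMap AS m ON m.OldId = s.MyTableId;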

How to manually insert data in SQL Server with the following condition

I set Identity Increment on the primary key in one table, and now I want to insert data through the GUI in SQL Server.
However, the primary key column is now read-only and I am not able to edit it.
Any idea how to solve this problem? Thanks.
Just leave the primary key field untouched and SQL Server will generate the data for you.
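The T-SQL equivalent of that advice is to list only the non-identity columns in the INSERT (table and column names here are made up for illustration):
-- Omit the identity column and let SQL Server assign the key
INSERT INTO dbo.MyTable (SomeColumn)
VALUES ('some value');
SELECT SCOPE_IDENTITY() AS NewId;   -- optionally read back the generated key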

SQL script syntax for multiple databases under one user

This is probably stupid simple, but for some reason I'm having trouble getting it to work. I have a typical import script I'm trying to run on an MS SQL server with one master user (as opposed to a single user with access to only one database).
When I run the .SQL script, it creates the database and then starts to create tables. Here's where it gets interesting: it's not creating the tables under the DB I just made. It's throwing them under the "System Databases" view instead of restricting table creation to the DB that was just created.
I have tried:
CREATE TABLE table_name
CREATE TABLE database_name.table_name
Maybe I'm overlooking something really easy. I don't usually run into this with MySQL with a single user mapped to one database; I think that's because the user can only see that one database, so MySQL assumes it must be the one to work with.
The difference now is that I'm using MSSQL 2008 and maybe it works a little differently and I'm overlooking something. Thanks for your help!
Tried this too. No luck. It says the database doesn't exist when it tries to create the table. I would think that, being a top-down read of the query script, it would first create the database, then try to create the table afterwards.
CREATE DATABASE DATABASENAME;
CREATE TABLE DATABASENAME.dbo.TABLENAME
(
field_one VARCHAR(100) NOT NULL,
field_two INT NOT NULL,
PRIMARY KEY(field_one)
)
This is a working example after getting it all figured out. This syntax works well and I don't need to specify the dbo prefix before table names this way. It's cleaner and got me the results I was looking for. Thanks, everyone.
IF Db_id('DBNAME') IS NULL
CREATE DATABASE DBNAME;
GO
USE [DBNAME];
GO
CREATE TABLE TABLENAME
(
COL1 VARCHAR(100) NOT NULL,
COL2 INT NOT NULL,
PRIMARY KEY(COL2)
)
INSERT INTO TABLENAME
(COL1,
COL2)
VALUES('1234',1001),
('1234',1002),
('1234',1003),
('1234',1004)
It basically just checks that the database exists (creating it if not) before doing anything else, then sets USE to the database I'm working with. Everything else is just normal SQL, so have fun. Cheers!
You probably need to include the USE statement at the beginning of your script in order to indicate the database, as follows:
USE [database_name]
GO
By default, SQL Server uses the master DB, which is listed under System Databases.
Another way is to use the database prefix, including the owner:
INSERT INTO database_name.dbo.table_name
INSERT INTO database_name..table_name