SQL script syntax for multiple databases under one user

This is probably stupid simple, but for some reason I'm having trouble getting it to work. I have a typical import script I'm trying to run on an MS SQL Server with one master user (as opposed to a single user with access to only one database).
When I run the .SQL script, it creates the database and then starts to create tables. Here's where it gets interesting: it's not creating the tables under the DB I just made. It's throwing the tables under the "System Databases" view and not restricting the table creation to the DB that was just created.
I have tried:
CREATE TABLE table_name
CREATE TABLE database_name.table_name
Maybe I'm overlooking something really easy. I don't usually run into this with MySQL when a single user is mapped to one database; I assume that because the user can only see that one database, MySQL treats it as the one to work with.
The difference now is that I'm using MSSQL 2008, which maybe works a little differently, and I'm overlooking something. Thanks for your help!
Tried this too. No luck. It says the database doesn't exist when it tries to create the table. I would have thought that, since the script is read top-down, it would first create the database and then create the table afterwards.
CREATE DATABASE DATABASENAME;
CREATE TABLE DATABASENAME.dbo.TABLENAME
(
field_one VARCHAR(100) NOT NULL,
field_two INT NOT NULL,
PRIMARY KEY(field_one)
)
This is a working example after getting it all figured out. This syntax works well, and I don't need to prefix table names with the dbo schema this way. Cleaner, and it got me the results I was looking for. Thanks everyone.
IF Db_id('DBNAME') IS NULL
CREATE DATABASE DBNAME;
GO
USE [DBNAME];
GO
CREATE TABLE TABLENAME
(
COL1 VARCHAR(100) NOT NULL,
COL2 INT NOT NULL,
PRIMARY KEY(COL2)
)
INSERT INTO TABLENAME
(COL1,
COL2)
VALUES('1234',1001),
('1234',1002),
('1234',1003),
('1234',1004)
It basically just checks that the database exists before doing anything else, then issues USE for the database I'm working with. Everything else is just normal SQL, so have fun. Cheers!

You probably need to include a USE statement at the beginning of your script in order to indicate the database, as follows:
USE [database_name]
GO
By default, SQL Server uses the master DB, which is listed under System Databases.
Another way is to use the database prefix, but including the owner (schema):
INSERT INTO database_name.dbo.table_name
INSERT INTO database_name..table_name

Related

MSSQL Import/Export/Copy IDENTITY_INSERT problems

Using MS SQL Server Management Studio 2008.
I have a db (say ip 10.16.17.10 and called db1) and a second one (say ip 10.16.17.25 called db2).
I am trying to copy one table (and its contents) from db1 into db2.
I have the database on both (but empty in db2).
The problem is that no matter how I copy/export/import, and no matter what options I set in MS SQL Server Management Studio 2008, when I click 'table'->'Design' (on db2) it ALWAYS says 'Identity Specification: No', even though the db1 table has it on.
From db1 I go to 'Tasks'->'export'->'source/db' and 'destination/db'->'Edit Mapping'->'Enable identity Insert' and click it on.
But no joy. ALWAYS exports without it.
I try a similar thing with IMPORT on db2, and the same with COPY.
I have read MANY of the StackOverflow articles on this; they all suggest setting IDENTITY_INSERT to ON, but when I run the statement below:
SET IDENTITY_INSERT [dbo].[mytable] ON
The table either doesn't exist yet or has already been copied WITHOUT the identity setting on, so I see the error:
does not have the identity property. Cannot perform SET operation.
I have tried setting it as a property (under database properties) for db2, but it never works when I copy/import/export.
Would appreciate any help here, as lots of StackOverflow articles so far all seem to describe an easier time than I'm having.
I am planning on doing this for another 50 or so tables in this database, so I'm hoping to find a way which doesn't involve running scripts for each table.
thanks
The process of using the Export Data Wizard to copy the data from one table to another will NOT replicate all aspects of the schema (like identity and auto-increment). If you want to replicate the schema, script out your table into a create statement, change the name to db2, and create it. Then you should be able to run the export/import wizard with the identity insert option on and insert into your new table that replicates the schema of your old table.
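To illustrate, here is a minimal sketch of that approach; the table and column names are made up, and on the real tables you would use the scripted CREATE from db1 instead:
-- On db2: recreate the table from db1's scripted definition, keeping the IDENTITY property
CREATE TABLE [dbo].[mytable]
(
ID INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
Name VARCHAR(100) NOT NULL
)
-- Because the column is a real identity column, SET IDENTITY_INSERT works
-- and explicit ID values can be loaded (manually or via the wizard)
SET IDENTITY_INSERT [dbo].[mytable] ON
INSERT INTO [dbo].[mytable] (ID, Name)
VALUES (1, 'first row'), (2, 'second row')
SET IDENTITY_INSERT [dbo].[mytable] OFF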
Ended up sorting this out using MS SQL Management Studio.
Thanks to @kevin for the help regarding Import Data and Export Data. Schemas are NOT transferred across by those tools; however, they are the best means of transporting the data once the schema is up.
Found the best way to MASS import/export DB table schemas was the following (saved the SQL CREATE scripts to a file):
Tasks->Generate Scripts->All Tables To File->with Identity on
Ran 200kb SQL file on db2 for schema.
Then ran Import Data from db1 to db2.
Done, all Identity_Inserts maintained.
thanks for help
According to the error message, I think your table does not have an IDENTITY column. Make sure that [dbo].[mytable] does have an IDENTITY column before executing SET IDENTITY_INSERT.
SET IDENTITY_INSERT [dbo].[mytable] ON
DEMO1 (Trying to set identity ON when there is NO identity column)
--Error
Table 'T' does not have the identity property. Cannot perform SET operation.: SET IDENTITY_INSERT T ON
DEMO2 (Trying to set identity ON when there is identity column)
--No Errors
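A minimal sketch of what the two demos boil down to (the table names T and T2 are just examples):
-- DEMO1: no identity column, so SET IDENTITY_INSERT fails
CREATE TABLE T (ID INT NOT NULL, Val VARCHAR(10))
SET IDENTITY_INSERT T ON   -- error: 'T' does not have the identity property

-- DEMO2: identity column present, so SET IDENTITY_INSERT succeeds
CREATE TABLE T2 (ID INT IDENTITY(1,1) NOT NULL, Val VARCHAR(10))
SET IDENTITY_INSERT T2 ON
INSERT INTO T2 (ID, Val) VALUES (10, 'x')
SET IDENTITY_INSERT T2 OFF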
Follow these steps:
From db1, go to 'Tasks'->'Export'->'source/db' and 'destination/db'->'Edit Mapping'->'Enable identity insert', then 'Edit SQL' -> you will be able to see the query structure of the table.
In that query, change a column definition such as ID int NOT NULL to ID int NOT NULL IDENTITY(1,1).
Then proceed.
I bet it will work.
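For example (column names are only illustrative), the edited CREATE statement in the wizard would end up looking something like this, with the IDENTITY clause added by hand:
CREATE TABLE [dbo].[mytable]
(
ID int NOT NULL IDENTITY(1,1),
Name varchar(100) NOT NULL
)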

A special case when modifying the database

Sometimes I face the following case in my database design, and I want to know the best practice for handling it:
For example, I have a specific table, and after a while, when the database is in operation and some real data has already been entered, I need to add some required fields (that are not supposed to accept NULL).
What is the best practice in this situation?
Make the field accept NULL (since some data is already in the table, sacrificing the important constraint) and try to force the user to enter this field through some validation in the code?
Truncate all the entered data and re-enter it again (tedious work)?
Any other suggestions about this issue?
It depends on requirements. If the data to populate existing rows for the new column isn't available immediately then I would generally prefer to create a new table and just populate new rows when the data exists. If and when you have all the data for every row then put the new column into the original table.
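As a rough sketch of that idea (the table and column names here are hypothetical): keep the new required attribute in a side table keyed to the original, and only insert a row there once the value is actually known.
-- Original table stays untouched; the new required attribute lives in its own table,
-- with one row per customer for which the value exists
CREATE TABLE customer_tax_code
(
customer_id INT NOT NULL PRIMARY KEY REFERENCES customer (customer_id),
tax_code VARCHAR(20) NOT NULL
)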
If possible, I would set a default value for the new column.
e.g. for a varchar:
alter table table_name
add column_name varchar(10) not null
constraint column_name_default default ('Test')
After you have updated the data, you could then drop the default:
alter table table_name
drop constraint column_name_default
A lot will come down to your requirements.
It depends on your application, your database scheme, your entities.
The best way to go about it is to truncate the data and re-enter it again, but it need not be too tedious a task. Temporary tables and table variables can assist a great deal here. A simple procedure comes to mind:
In SQL Server Management Studio, right-click the table you wish to modify and select Script Table As > CREATE To > New Query Editor Window.
Add a # in front of the table name in the CREATE statement.
Move all records into the temporary table, using something to the effect of:
INSERT INTO #temp SELECT * FROM original
Then run the script so that all your records are kept in the temporary table.
Truncate your original table, and make any changes necessary.
Right-click the table and select Script Table As > INSERT To > Clipboard, paste it into your query editor window, and modify it to read records from the temporary table, using INSERT .. SELECT.
That's it. Admittedly not quite straightforward, but a well-kept database is almost always worth a slight hassle.
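As a rough end-to-end sketch of that procedure, assuming a hypothetical table my_table to which a NOT NULL column new_col is being added (existing_col stands in for the existing columns):
-- 1. Copy the data out of the way
SELECT * INTO #temp FROM my_table

-- 2. Empty the original and change its definition
TRUNCATE TABLE my_table
ALTER TABLE my_table ADD new_col VARCHAR(50) NOT NULL

-- 3. Reload, supplying a value for the new required column
INSERT INTO my_table (existing_col, new_col)
SELECT existing_col, 'some value' FROM #temp

DROP TABLE #temp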

SQL - create database and tables in one script

Sorry if already asked, but I can't find anything on this.
I am moving something over from MySQL to SQL Server. I want a .sql file to create a database and tables within that database. After working out syntax kinks I have gotten the files to work (almost).
If I run
IF db_id('dbname') IS NULL
CREATE DATABASE dbname
it works fine, and if I run
CREATE TABLE dbname.dbo.TABLE1 (
);
...
CREATE TABLE dbname.dbo.TABLEN (
);
it also works fine. But, if I run them in the same file I get this error
Database 'dbname' does not exist
Right now, the CREATE TABLE statements are not within the IF statement, which I would like, but I also cannot seem to find the syntax for that. ( { } does not work?)
So my big question is, how do I ensure a particular command in a .sql file is completed before another in SQL Server?
My second question is, how do I include multiple instructions within an IF clause?
To be clear, I have been running this into sqlcmd.
Put a GO command between queries.
IF db_id('dbname') IS NULL
CREATE DATABASE dbname
GO
CREATE TABLE dbname.dbo.TABLE1 (
);
CREATE TABLE dbname.dbo.TABLEN (
);
As for putting the table statements in the IF, you wouldn't be able to because of the GO command. You could create additional IF statements afterwards, to check for each table's pre-existence.
The syntax for a block if is:
IF condition
BEGIN
....
....
END
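Putting the two together, a sketch of guarding each table's creation after the database batch (the table definition here is only illustrative):
IF db_id('dbname') IS NULL
CREATE DATABASE dbname
GO
IF OBJECT_ID('dbname.dbo.TABLE1', 'U') IS NULL
BEGIN
CREATE TABLE dbname.dbo.TABLE1 (
id INT NOT NULL PRIMARY KEY
);
END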
Between creating the database and creating the tables you will need a USE statement.
USE dbname
This way the tables will be created in the correct place, without having to specify the DB name on everything.
Also, GO and BEGIN...END like everyone else is saying.
You have to separate the statements with the GO keyword:
sql query
GO
another sql query
GO
and so on
By placing a GO between statements (to create separate batches of statements)

How do I create a table alias in MySQL

I am migrating an MS Access application (which has linked tables to a MSSQL Server) to MySQL.
As a means of overcoming some MS Access table naming problems, I am seeking a way to add a MySQL table alias that will point to an existing table in the MySQL database. Ideally I would like to create the alias 'dbo_customers' in MySQL that would point to the customers table, also in MySQL.
To be clear I am not wanting to alias a table name inside a query like this:
SELECT * FROM customers AS dbo_customers
But rather I would like to be able to issue the following query:
SELECT * FROM dbo_customers
and have it return data from the customers table.
Off the top of my head
CREATE VIEW dbo_customers AS
SELECT * FROM customers
Maybe not the best solution, but it should work, since the view is updatable. It will definitely work for read-only access.
You can create a View.
CREATE VIEW dbo_customers AS SELECT * FROM customers;
If that doesn't work for you, you could try creating a shadow-copy of the table, and use Triggers to keep the tables synced.
For example:
CREATE TABLE t1( id serial primary key, field varchar(255) not null );
CREATE TABLE dbo_t1( id serial primary key, field varchar(255) not null );
-- Multi-statement trigger bodies need a temporary delimiter in the mysql client
DELIMITER $$

-- INSERT trigger
CREATE TRIGGER t1_dbo_insert AFTER INSERT ON t1
FOR EACH ROW BEGIN
INSERT INTO dbo_t1 SET field = NEW.field;
-- No need to specify the ID, it should stay in-sync
END$$

-- UPDATE trigger
CREATE TRIGGER t1_dbo_update AFTER UPDATE ON t1
FOR EACH ROW BEGIN
UPDATE dbo_t1 SET field = NEW.field WHERE id = NEW.id;
END$$

-- DELETE trigger
CREATE TRIGGER t1_dbo_delete AFTER DELETE ON t1
FOR EACH ROW BEGIN
DELETE FROM dbo_t1 WHERE id = OLD.id;
END$$

DELIMITER ;
Not exactly an 'alias', and far from perfect. But it is an option if all else fails.
There is a simpler solution for MySQL via the MERGE table engine.
Imagine we have a table named rus_vacancies and need its English equivalent:
create table eng_vacancies select * from rus_vacancies;
delete from eng_vacancies;
alter table eng_vacancies ENGINE=MERGE;
alter table eng_vacancies UNION=(rus_vacancies);
Now the table eng_vacancies is equivalent to the table rus_vacancies for any read-write operations.
One limitation: the original table must use ENGINE=MyISAM (which can easily be arranged with "alter table rus_vacancies ENGINE=MyISAM").
You could create a view named dbo_customers which is backed by the customers table.
@OMG Ponies said in a comment:
Why not rename the table?
...and it seems the obvious answer to me.
If you create an ODBC linked table for the MySQL table customers it will be called customers and then all you have to do is rename the table to dbo_customers. There is absolutely no need that I can see to create a view in MySQL for this purpose.
That said, I'd hate to have an Access app that was using SQL Server table names when the MySQL tables were not named the same thing -- that's just confusing and will lead to maintenance problems (i.e., it's simpler for the linked tables in the Access front end to have the same names as the MySQL tables, wherever possible). If I were in your position, I'd get a search and replace utility and replace all the SQL Server table names with the MySQL table names throughout the entire Access front end. You'd likely have to do it one table at a time, but in my opinion, the time it takes to do this now is going to be more than made up for in clarity going forward with development of the Access front end.
I always rename my "linked to SQL" tables in Access from
{dbo_NAME} to {NAME}.
The link creates the table name as {dbo_NAME}, but Access occasionally has problems with the dbo_ prefix.
Aliases would be nice, yet MySQL does NOT have such a feature.
One option that may serve your needs, besides creating a view, is to use the FEDERATED storage engine locally.
CREATE TABLE dbo_customers (
id INT(20) NOT NULL AUTO_INCREMENT,
name VARCHAR(32) NOT NULL DEFAULT '',
PRIMARY KEY (id)
)
ENGINE=FEDERATED
DEFAULT CHARSET=latin1
CONNECTION='mysql://fed_user@localhost:9306/federated/customers';
There are currently some limitations with the FEDERATED storage engine. Here are a couple especially important ones:
FEDERATED tables do not support transactions
FEDERATED tables do not work with the query cache
I'd like to mention a bad solution I explored (and abandoned), which was to use hardlinks on the .frm, .MYD and .MYI files corresponding to my table in /var/lib/mysql/{name_of_my_database}/.
It does, however, NOT work. For InnoDB tables, it simply cannot (even if you hardlink the .idb file) because tables are also referenced in ibdata1.
For MyISAM tables, it kind of works, except it doesn't because in memory, the tables are still distinct and thus do not share cache. So if you write a row to original_table, it won't immediately appear in aliased_table. You would have to flush tables first… which defeats the purpose and even causes data loss (if you insert a row in both the original and the alias before flushing, only one row is kept).
I thought my experiment was worth mentioning as a cautionary tale.

Copy tables from one database to another in SQL Server

I have a database called foo and a database called bar. I have a table in foo called tblFoobar that I want to move (data and all) to database bar from database foo. What is the SQL statement to do this?
SQL Server Management Studio's "Import Data" task (right-click on the DB name, then tasks) will do most of this for you. Run it from the database you want to copy the data into.
If the tables don't exist it will create them for you, but you'll probably have to recreate any indexes and such. If the tables do exist, it will append the new data by default but you can adjust that (edit mappings) so it will delete all existing data.
I use this all the time and it works fairly well.
On SQL Server, and on the same database server? Use three-part naming.
INSERT INTO bar..tblFoobar( *fieldlist* )
SELECT *fieldlist* FROM foo..tblFoobar
This just moves the data. If you want to move the table definition (and other attributes such as permissions and indexes), you'll have to do something else.
This should work:
SELECT *
INTO DestinationDB..MyDestinationTable
FROM SourceDB..MySourceTable
It will not copy constraints, defaults or indexes. The table created will not have a clustered index.
Alternatively you could:
INSERT INTO DestinationDB..MyDestinationTable
SELECT * FROM SourceDB..MySourceTable
If your destination table exists and is empty.
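If you go the SELECT ... INTO route and still want keys on the copy, they can be added afterwards; a small sketch, assuming the source has a NOT NULL ID column (the column name is illustrative):
SELECT *
INTO DestinationDB.dbo.MyDestinationTable
FROM SourceDB.dbo.MySourceTable

-- SELECT ... INTO does not carry constraints or indexes across, so add them afterwards
ALTER TABLE DestinationDB.dbo.MyDestinationTable
ADD CONSTRAINT PK_MyDestinationTable PRIMARY KEY CLUSTERED (ID)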
If it’s one table only then all you need to do is
Script table definition
Create new table in another database
Update rules, indexes, permissions and such
Import data (several insert into examples are already shown above)
One thing you'll have to consider is other updates, such as migrating other objects in the future. Note that your source and destination tables do not have the same name. This means that you'll also have to make changes if you have dependent objects such as views, stored procedures and the like.
With one or several objects you can go manually without any issues. However, when there are more than just a few updates, 3rd-party comparison tools come in very handy. Right now I'm using ApexSQL Diff for schema migrations, but you can't go wrong with any other tool out there.
Script the CREATE TABLE in Management Studio and run that script in bar to create the table. (Right-click the table in Object Explorer, Script Table As, CREATE To...)
INSERT bar.[schema].table SELECT * FROM foo.[schema].table
You can also use the Generate SQL Server Scripts Wizard to help guide the creation of SQL script's that can do the following:
copy the table schema
any constraints (identity, default values, etc)
data within the table
and many other options if needed
Good example workflow for SQL Server 2008 with screen shots shown here.
You may go this way (a general example):
insert into QualityAssuranceDB.dbo.Customers (columnA, ColumnB)
Select columnA, columnB from DeveloperDB.dbo.Customers
Also, if you need to generate the column names to put in the insert clause, use:
select (name + ',') as TableColumns from sys.columns
where object_id = object_id('YourTableName')
Copy the result and paste it into the query window to get your table's column names. The following version will even exclude the identity column:
select (name + ',') as TableColumns from sys.columns
where object_id = object_id('YourTableName') and is_identity = 0
Remember, the script to copy rows will only work if the databases belong to the same location (the same server/instance).
You can try this:
select * into <Destination_table> from <Servername>.<DatabaseName>.dbo.<sourceTable>
The server name is optional if both DBs are on the same server.
I give you three options:
If they are two databases on the same instance do:
SELECT * INTO My_New_Table FROM [AdventureWorks2012].[HumanResources].[Department];
If they are two databases on different servers and you have linked servers do:
SELECT * INTO My_New_Table FROM [ServerName].[AdventureWorks2012].[HumanResources].[Department];
If they are two databases on different servers and you don't have linked servers do:
SELECT * INTO My_New_Table
FROM OPENROWSET('SQLNCLI', 'Server=My_Remote_Server;Trusted_Connection=yes;',
'SELECT * FROM AdventureWorks2012.HumanResources.Department');