Creating a copy of a database - fails due to use of the 'Identity' specification

I've been trying to figure out how to properly handle 'Identity' columns when generating a script to re-create the database.
The reason I need to generate a script for this is that I have to 'downgrade' a SQL database to an older version. I know everything in the database (v10.5) is compatible with the older version (v10.0). The issue I'm facing is that all three methods of copying the database I've tried fail because they cannot maintain the original ID fields (which are identity columns).
Every table of mine has ID as its very first column: int, primary key, and identity. In many tables this column is not perfectly sequential, for example 1, 2, 3, 5, 8, 12, 13, etc., simply because records have been deleted over time. But it seems impossible to re-insert the original ID values exactly as they used to be...
So how do I copy (without backup/restore) a database in its entirety from Server A to Server B? NOTE: I can connect to both databases on both servers from the Management Studio. Also, the destination server is not mine, it is a shared hosted DB and I have access only to my database. I have no authority to change destination server settings.
I've tried the following:
Generate script for entire database option
Export database option
Backup/Restore database - fails because of version mismatch
I'm guessing that I may just have to temporarily 'disable' the identity specification on all the tables, insert the data, then switch identity back on again. But I'm terrible at writing scripts for manipulating database structure. The data itself I can handle, but for structural changes I've gotten so used to using tools that I've never taken the time to work with the scripts - and other than this particular scenario, I hope I never have to.

I actually figured it out. I already knew there must be a way to temporarily disable the identity specification, but the solution was a little different. Instead of 'disabling' and 're-enabling' the identity specification, there's a command (mentioned in a comment above) called IDENTITY_INSERT which, when switched on, allows inserting explicit values into an identity column - and you need to ensure it gets switched back off afterwards. The IDENTITY_INSERT switch is per connection session, so it does not affect other sessions. While IDENTITY_INSERT is on, you can insert records with a specific value for that identity field, as long as it still satisfies the primary key constraints.
The actual solution was NOT to write a script with SET IDENTITY_INSERT MyTableName ON, but rather, in the database export utility in SQL Server Management Studio, to select all the tables and choose the advanced setting to use IDENTITY_INSERT.
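For anyone who does want the scripted route, a minimal sketch might look like this; the table and column names are just placeholders, and note that only one table per session can have IDENTITY_INSERT switched on at a time:

-- Allow explicit values in the identity column for this session only.
SET IDENTITY_INSERT dbo.MyTableName ON;

-- The column list must be given explicitly and include the identity column.
INSERT INTO dbo.MyTableName (ID, SomeColumn)
VALUES (5, 'row that originally had ID 5'),
       (8, 'row that originally had ID 8');

-- Always switch it back off when done.
SET IDENTITY_INSERT dbo.MyTableName OFF;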

Related

Migrating legacy data from SQL Server 2000 to 2019, log block error - is there a painless way of moving over tables with autoinc identity columns?

I've been tasked with migrating data from an instance of SQL Server 2000 to 2019. There are a total of four databases to bring over, three of which I was able to backup/restore into 2008 and then into 2019 without any issues. Please note: I am not a DBA in any sense, though I'm the closest thing to one on hand.
The fourth and final database presented the following error that prevented moving from 2008 to 2019:
System.Data.SqlClient.SqlError: An error occurred while processing the log for database 'DbNameHere'. The log block version 2 is unsupported. This server supports log version 3 to 6. (Microsoft.SqlServer.SmoExtended)
Is there a simple fix for this problem that I'm missing in the various SSMS menus?
Alternatively, is there a way to copy raw data from one server to another via, for instance, a flat file, and preserve the identity columns as identity columns? That is, I don't want to just strip that column and bulk insert, as they are often used as foreign keys in other tables, and with twenty-some-odd years of data, something is bound to break in doing this.
An example of an ideal final result in this solution would be something like: legacy table X has 1000 rows, the last of which has an identity column value of 1000. Once the move is complete, new table X has 1000 rows, the last of which has an identity column value of 1000, and upon insert the next row automatically increments to 1001.
Apart from unsuccessfully messing around with flat files, I've also tried the "Copy Database" option in SSMS, which also failed.
I would attempt to get SQL Server to rebuild the transaction log. Based on the error message, that might sort out the situation.
First use sp_detach_db to detach the database. The ldf file will most likely not be needed for the subsequent attach, and rebuilding the log this way may resolve the error.
Then attach the database without the ldf file, using CREATE DATABASE with either the FOR ATTACH or the FOR ATTACH_REBUILD_LOG option.
I would do this on the 2008 instance, since from what I understand you got the database in there successfully. But feel free to play around regarding on which version (2000 or 2008) you do the detach and also on which version (2000, 2008, 2019) you do the attach.
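A rough sketch of that sequence, using the database name from the error message (the mdf path is a placeholder):

-- Detach the database (run on the instance that currently hosts it).
EXEC sp_detach_db @dbname = N'DbNameHere';

-- Reattach from the data file only, letting SQL Server rebuild the log.
CREATE DATABASE DbNameHere
    ON (FILENAME = N'C:\Data\DbNameHere.mdf')
    FOR ATTACH_REBUILD_LOG;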

How do I copy data from one Azure database table to a different Azure database table and also convert data types?

I have to copy data from one table to another; the tables are held in two different databases within Azure. I did a quick search for answers to this, and while the query itself seems fairly straightforward, i.e.
INSERT INTO table1 (make, model, type, serial)
SELECT the_make, the_model, the_type, ref_no
FROM database2.dbo.table2
I encountered issues because I'm using Azure.
Msg 40515, Level 15, State 1, Line 16 Reference to database and/or
server name in 'database2.dbo.table2' is not supported in this version of
SQL Server.
The above issue led me to the Cross-Database Queries articles. My requirements are a little more complicated than some of the scenarios provided and I need some help in making it work.
I also need to convert some columns, such as reg_no, which is a 'string', to an 'int' and then copy the value into the 'serial' column.
My question is: what is the best way to write a script that lets me reference both databases without errors, copy the data, and convert the columns at the same time? I tried the simple route of exporting and importing the data and editing the column mappings, but I found it didn't work well and caused problems all over the place.
Any guidance is appreciated on this.
You're getting this error because there's no linked server by default. You'll need to add one in order to access the secondary database server. Here's a link about how to do it:
https://www.sqlshack.com/create-linked-server-azure-sql-database/
In terms of the transformation, it depends on many factors, e.g. the number of rows, frequency, etc.
Usually the best alternative is an external (ETL) tool such as SSIS or Azure Data Factory, because you can schedule its execution and get the status of each run.
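If you'd rather stay in T-SQL, the elastic query feature behind the cross-database articles the question mentions looks roughly like this. This is only a sketch: the server name, credential, and remote column types are assumptions, and TRY_CAST handles the string-to-int conversion for the serial column:

-- Run in the destination database. Names, types and the password are placeholders.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';

CREATE DATABASE SCOPED CREDENTIAL RemoteCred
    WITH IDENTITY = '<sql login>', SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE RemoteDb WITH
(
    TYPE = RDBMS,
    LOCATION = '<yourserver>.database.windows.net',
    DATABASE_NAME = 'database2',
    CREDENTIAL = RemoteCred
);

-- Local "window" onto database2.dbo.table2; the column list must match the remote table.
CREATE EXTERNAL TABLE dbo.table2_remote
(
    the_make  nvarchar(100),
    the_model nvarchar(100),
    the_type  nvarchar(100),
    ref_no    nvarchar(50)
)
WITH (DATA_SOURCE = RemoteDb, SCHEMA_NAME = 'dbo', OBJECT_NAME = 'table2');

-- Copy and convert in one pass; TRY_CAST returns NULL for values that aren't valid ints.
INSERT INTO table1 (make, model, type, serial)
SELECT the_make, the_model, the_type, TRY_CAST(ref_no AS int)
FROM dbo.table2_remote;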

SQL Server 2008: Cannot insert a new column in the middle position or change a data type

My OS is Windows Server 2008.
I've already installed SQL Server Express 2008.
I have several problems:
I can't insert a new column in a middle position; if I add it as the last column, I can save the table design.
I can change a column name, but I can't change its data type.
I get this error message:
Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created. You have either made changes to a table that can't be recreated or enabled the option Prevent saving changes that require the table to be re-created.
Example:
I have ID, Name, Phone, and Status columns. I am unable to add Address between Name and Phone.
But I can add Address if I place it after Status.
Is there any way to solve this problem?
Thanks in advance.
In SSMS Tools -> Options -> Designers you would need to uncheck the option "Prevent Saving Changes that require table re-creation" to allow you to do this in SSMS.
This will rebuild the table and so generally isn't worth the hassle if the table is at all large and will make deployment to production trickier.
If there are columns which logically you would prefer to have next to each other to make writing queries easier you can create a View with the desired column order.
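For example, assuming the table from the question is called dbo.MyTable, something like:

-- A view that presents the columns in the preferred order without touching the table.
CREATE VIEW dbo.MyTableOrdered
AS
SELECT ID, Name, Address, Phone, Status
FROM dbo.MyTable;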
Column order doesn't matter either in the designer or in sys.columns.
The on disk storage will be the same regardless: Inside the Storage Engine - Anatomy of a record.
There is no performance benefit either.
I don't think this is possible with a query, but you can do it through the SSMS UI: right-click the selected column in the table designer and choose Insert Column wherever you want.
I don't think the column order matters, though.
If you want a script to do this, all you need to do is select the data out into a temporary table, drop the table, recreate it with the columns in your preferred order and then reinsert the data from the temporary table in the right order.
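A sketch of that approach, using the example table from the question; the table name and column types are assumptions, so add identity properties and constraints to match your real definition:

-- Park the existing data in a temporary table.
SELECT ID, Name, Phone, Status
INTO #Backup
FROM dbo.MyTable;

-- Drop and recreate the table with Address in the desired position.
DROP TABLE dbo.MyTable;

CREATE TABLE dbo.MyTable
(
    ID      int NOT NULL PRIMARY KEY,
    Name    nvarchar(100),
    Address nvarchar(200),
    Phone   nvarchar(30),
    Status  nvarchar(20)
);

-- Reload the original rows; Address stays NULL until populated.
INSERT INTO dbo.MyTable (ID, Name, Phone, Status)
SELECT ID, Name, Phone, Status
FROM #Backup;

DROP TABLE #Backup;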

How do I use SQL to Drop a Column from a MS ACCESS Database if that column is a replication ID?

I had a notion to use a database column of type replication ID, but have since changed my approach and want to use this column for another purpose.
However, I'm unable to use SQL to drop the column to remove it from my database.
My SQL is:
ALTER TABLE foo_bar DROP COLUMN theFoo;
However, I get a "syntax error" and I'm assuming this has something to do with this column being a replication ID.
I'd rather not download the file and edit it directly using the MS Access application, but not sure if that's my only recourse.
Thanks so much in advance.
Regards,
Kris
If you have access to the database in a command shell, Michael Kaplan's Replication System Removal Fields utility should do the trick. However, I've found that in some circumstances, it's unable to do the job. Also note that the utility will only work with a Jet 4 format database (MDB), not ACE format (ACCDB).
If all else fails, you can recreate the table structure and append the existing data to it. That can get messy if you have referential integrity defined, but it will get the job done, and most of it is likely scriptable (if not all possible using just DDL).
Here is a link that may help you. I had a similar idea, but while browsing the web I found this:
AccessMonster - Replication-ID-Field-size
EDIT: I don't have much time, but my first thought was whether you could alter the column to make it something different (not a replication ID) and then drop it - two separate actions. I have not tested this, though.
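In Access DDL, that untested idea would be roughly the following (TEXT(50) is just an arbitrary intermediate type):

-- Step 1: change the column to a non-replication type.
ALTER TABLE foo_bar ALTER COLUMN theFoo TEXT(50);
-- Step 2: then drop it.
ALTER TABLE foo_bar DROP COLUMN theFoo;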

How do I create and synchronize a combined reporting-only db from two live dbs?

I need to quickly implement a read-only database containing data pulled from two identically structured live databases.
The live dbs are actually company dbs from a Dynamics accounting system so I'm happy for any Dynamics specific advice but this is mostly a SQL question. It's a fairly old version of Dynamics from before Great Plains was acquired by Microsoft. This is on SQL Server 2000.
We have reports and applications which access the Dynamics data. These apps are designed to look at one company db. Now we need to add another. It's appropriate that most of these reports and apps see combined data. They don't really care which company an order or invoice exists in. They only look at a small number of the tables.
It seems to me that the simplest solution is to create a reports only db with combined data. Preferably, we need an efficient way to update this db with changes several times a day.
I'm a developer, not a DB expert, but here's my plan:
Create the combined reporting db with the required tables initially with the same table structure as the live dbs.
All Dynamics tables seem to have an int identity column called DEX_ROW_ID. I'm not sure what it's used for (it's not indexed), but it seems like the obvious generic way to uniquely identify rows. On the reporting db I will change it to a plain int (not an identity). I will create a unique index on DEX_ROW_ID in all dbs.
Dynamics does not have timestamps so I will add a timestamp column to tables in the live dbs and a corresponding binary(8) column in the reporting db. I'm assuming and hoping that Dynamics won't be upset by the additional index and column.
Add an int CompanyId column to the reporting db tables and add it to the end of any unique indexes. Most data will be naturally unique even without that, i.e. order and invoice numbers etc. will differ between the two live dbs. We may need to make some minor changes to the applications, but I'm not expecting to do much other than point them at the new reporting db.
Assuming my reporting db is called Reports, the live dbs are Live1 and Live2, the timestamp column is called TS and all dbs are on the same server ... here's my first attempt at an update script for copying the changes in one table called MyTable in Live1 to the reporting db.
USE Reports

CREATE TABLE #Changes
(
    ReportId int,
    LiveId int
)

/* Collect in a temp table the ids of rows which have been deleted or changed
   in the live db. L.DEX_ROW_ID will be null if the row has been deleted. */
INSERT INTO #Changes
SELECT R.DEX_ROW_ID, L.DEX_ROW_ID
FROM MyTable R LEFT OUTER JOIN Live1.dbo.MyTable L ON L.DEX_ROW_ID = R.DEX_ROW_ID
WHERE R.CompanyId = 1 AND (L.DEX_ROW_ID IS NULL OR L.TS <> R.TS)

/* Delete rows that have been deleted or changed on the live db.
   I wonder if using join syntax would run better than the subquery. */
DELETE FROM MyTable
WHERE CompanyId = 1 AND DEX_ROW_ID IN (SELECT ReportId FROM #Changes)

/* Recopy rows that have changed in the live db */
INSERT INTO MyTable
SELECT 1 AS CompanyId, * FROM Live1.dbo.MyTable L
WHERE L.DEX_ROW_ID IN (SELECT ReportId FROM #Changes WHERE LiveId IS NOT NULL)

/* Copy the rows that are new in the live db */
INSERT INTO MyTable
SELECT 1 AS CompanyId, * FROM Live1.dbo.MyTable
WHERE DEX_ROW_ID > (SELECT MAX(DEX_ROW_ID) FROM MyTable WHERE CompanyId = 1)
Then do the same for the Live2 db, and repeat for every table in Reports. I know I should use a parameter @CompanyId instead of the literal, but I can't do that for the live db name, so I might generate these scripts dynamically with a C# program or something.
I'm looking for any advice, suggestions or critique on what I'm doing here. I know it won't be atomic. Things could be happening on the live db while this script runs. I think we can live with that. We'll probably do a full copy either nightly or weekly when nothing is happening on the live dbs.
We need to favor performance over elegance or perfection. Some initial testing has the first query with the TS comparisons running at about 30 seconds for the biggest table so I'm optimistic that this is going to work but I'd also like to know if I'm missing something obvious or not seeing the forest for the trees.
We don't really want to deal with log files on the reporting db. Can we just set that to simple recovery model and forget about logs?
Thanks
I think there are a couple open questions here.
Do you need these reports to be near-real-time, or is this the sort of reporting that could live with daily updates? Let's assume you need up-to-the-minute data.
Have you considered querying the databases directly and merging the data per-report on the fly? You'll have to do a lot of reporting to duplicate the effort that's going to go into designing, creating, and supporting a real-time merged replicated database.
Thirty seconds is (IMHO) unacceptable for any single query against a production database. There could be any number of tuning-related reasons for taking this long, but it at least means you're going to need serious professional SQL Server optimization resources (i.e. people). And if this is a problem for the queries for reports, it doesn't bode well for the queries to maintain a separate database for reporting.
Tuck into the back of your mind the consideration that, if you need to consolidate to a single database, it's worth considering whether you should make it an OLAP database rather than a mirror. The mirror will be quicker and easier, but the OLAP would be far more flexible and powerful in the long term; and it might be well to go the whole way from the beginning.
The last thing I'd want to do is write a custom update script. Try these bulletproof methods first:
Let's hope your production databases are backed up. Restore those backups every night to the reporting server. You can automate restores with the RESTORE command, which will work with a file on a network server (see the sketch at the end of this answer).
Use SQL Server replication to push data from the live servers to the backend.
Schedule a DTS package every night to import the entire production database.
This might seem like brute force. But since you're copying a 2000-era database, brute force cannot be a problem with today's hardware. As an added advantage, these methods can be supported by a sysadmin instead of a developer.
Method 1 has the added advantage of serving as backup verification. :)
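For method 1, a minimal restore sketch might look like the following; the database name, backup path, and logical file names are placeholders you'd read from RESTORE FILELISTONLY:

-- Refresh the reporting copy from last night's full backup of Live1.
RESTORE DATABASE ReportsLive1
FROM DISK = N'\\backupserver\sqlbackups\Live1_full.bak'
WITH REPLACE,
     MOVE N'Live1_Data' TO N'D:\Data\ReportsLive1.mdf',
     MOVE N'Live1_Log'  TO N'E:\Logs\ReportsLive1.ldf';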