Migrating legacy data from SQL Server 2000 to 2019, log block error - is there a painless way of moving over tables with autoinc identity columns?

I've been tasked with migrating data from an instance of SQL Server 2000 to 2019. There are a total of four databases to bring over, three of which I was able to backup/restore into 2008 and then into 2019 without any issues. Please note: I am not a DBA in any sense, though I'm the closest thing to one on hand.
The fourth and final database presented the following error that prevented moving from 2008 to 2019:
System.Data.SqlClient.SqlError: An error occurred while processing the log for database 'DbNameHere'. The log block version 2 is unsupported. This server supports log version 3 to 6. (Microsoft.SqlServer.SmoExtended)
Is there a simple fix for this problem that I'm missing in the various SSMS menus?
Alternatively, is there a way to copy raw data from one server to another via, for instance, a flat file, and preserve the identity columns as identity columns? That is, I don't want to just strip that column and bulk insert, as they are often used as foreign keys in other tables, and with twenty-some-odd years of data, something is bound to break in doing this.
An example of an ideal final result in this solution would be something like: legacy table X has 1000 rows, the last of which has an identity column value of 1000. Once the move is complete, new table X has 1000 rows, the last of which has an identity column value of 1000, and upon insert the next row automatically increments to 1001.
Apart from unsuccessfully messing around with flat files, I've also tried the "Copy Database" option in SSMS, which also failed.

I would attempt to get SQL Server to rebuild the transaction log. Based on the error message, that might sort out the situation.
First use sp_detach_db to detach the database. Assuming the database was shut down cleanly, the ldf file isn't needed for a subsequent attach, and rebuilding the log this way may resolve the error.
Then attach the database without the ldf file, using CREATE DATABASE with either the FOR ATTACH or FOR ATTACH_REBUILD_LOG option, as sketched below.
I would do this on the 2008 instance, since from what I understand you got the database in there successfully. But feel free to experiment with which version (2000 or 2008) you detach on, and which version (2000, 2008, 2019) you attach on.
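A minimal sketch of the detach/attach sequence (the database name and file path are made up; adjust for your environment):

USE master;
GO
-- Detach on the source instance (@skipchecks skips UPDATE STATISTICS)
EXEC sp_detach_db @dbname = N'DbNameHere', @skipchecks = N'true';
GO
-- Copy the mdf to the target instance, then attach it there without
-- the ldf; SQL Server builds a brand-new transaction log
CREATE DATABASE DbNameHere
ON (FILENAME = N'C:\Data\DbNameHere.mdf')
FOR ATTACH_REBUILD_LOG;
GO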

Related

MS SQL Server - Cannot Create Same Index Name on Different Tables

I am creating indexes on two separate tables in the same database (MS SQL Server), and I got an error saying that an index already exists.
The error does not come up if I change the index name to another.
Please help. Many thanks.
[Screenshot from Microsoft SQL Server Management Studio]
I'd strongly suggest that the visual designer is leading you astray. IIRC, indexes used to have schema-scoped names (back in the 7.0 or 2000 era, I think, before user/schema separation) and later gained the ability to only need to be unique at an individual table level.¹
If you try to create a duplicate index manually, you receive the error:
The operation failed because an index or statistics with name '<name>' already exists on table '<table name>'.
Since that's clearly not the error you're seeing, I strongly suspect that it's old code in the visual designer and yet another reason not to use it.
¹ Unfortunately, we're in an area where historic documentation from the right period is no longer available from the Microsoft website. It used to be easier to verify these recollections because you could still find the "What's new in SQL Server 2000", etc. pages there.
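To see the per-table scoping in action, here is a quick sketch you can run yourself (table and index names are invented):

-- The same index name can be reused on different tables
CREATE TABLE dbo.TableA (ID INT NOT NULL);
CREATE TABLE dbo.TableB (ID INT NOT NULL);
CREATE INDEX IX_ID ON dbo.TableA (ID); -- succeeds
CREATE INDEX IX_ID ON dbo.TableB (ID); -- also succeeds
-- Reusing the name on the SAME table is what fails:
CREATE INDEX IX_ID ON dbo.TableA (ID);
-- The operation failed because an index or statistics with name 'IX_ID'
-- already exists on table 'dbo.TableA'.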

Azure SQL in SQL Server Management Studio 17 - Can't edit data manually

I have a table on which I have clicked Edit Top 200 Rows, as I wish to flip a cell in one of my smallint columns from a 0 to a 1. Every time I change the cell's data from a 0 to a 1, it is automatically changed back to a 0.
It seems that all of my columns are immutable in this way. What am I missing so that I can edit the data in my SQL database table manually for testing? I am using SQL Server Management Studio 17.
When using Microsoft SQL Server Migration Assistant, which converts a MySQL database to an MSSQL or Azure SQL database, triggers are generated for you during the migration and added to your SQL tables.
In my case, these were stopping me from updating and inserting into my table, so I deleted them.
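If you suspect the same cause, a quick way to list the triggers in the database and drop an offending one (the trigger and table names below are hypothetical):

-- List DML triggers and the tables they belong to
SELECT t.name AS trigger_name,
       OBJECT_NAME(t.parent_id) AS table_name,
       t.is_disabled
FROM sys.triggers AS t
WHERE t.parent_class_desc = 'OBJECT_OR_COLUMN';

-- Drop (or just disable) a trigger once identified
DROP TRIGGER dbo.trg_MyTable_Update;
-- DISABLE TRIGGER dbo.trg_MyTable_Update ON dbo.MyTable;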

Azure SQL "select" query not showing all rows

I just used the SQLAzureMW (SQL Azure Migration Wizard Tool) to migrate my SQL Server database to Azure SQL. It went off without a hitch - all my tables are there, the website is running fine off it, etc.
Here's what's odd: if I execute a simple SELECT statement against my tables, I get only a few of the rows. I assumed they were missing, but my website is using some of those records as if they're there. So I queried with a WHERE clause and BAM - they showed up. How the... what the... why isn't my select showing me everything? This applies to many of the tables I've tested.
[Screenshots comparing the query results: SQL Azure vs. on-premise]
I gave up on MS SQL Management Studio and am instead using SQL Server Object Explorer from Visual Studio 2012/2013. It functions properly and allows inline editing of data.
Consider this SELECT statement:
SELECT
SvcTimeID,
LoginName,
MeanSeconds,
MedianSeconds,
RequestCount,
StdDevSeconds,
SvcDate,
CAST (TS AS INT) AS TS
FROM dbo.SvcTime
WHERE SvcDate >= @SvcDate
Where the parameter is set:
cmd.Parameters["@SvcDate"].Value = DateTime.UtcNow - new TimeSpan(31, 0, 0, 0);
Executing that statement in an Azure Web Role brought back, say, 24 rows.
Now, insert two new rows; wait at least one minute; execute the statement again. Do the recently inserted rows appear? In my case, they did not. Note: the default value of SvcDate in the database is getutcdate().
Move the SQL Azure database from the web edition to the standard (S2) edition. Rows magically appear.
Here is my theory. The issue you had was not with MS SQL Management Studio but with SQL Azure itself where, under certain circumstances, the same query will return the original rows from a cache someplace and will miss the new rows in the database.
This has blown any remaining confidence I had with Azure.
I was scared at first, but I think this has an explanation:
If you inserted some rows in connection "A" and can't find them in other sessions, you may have an uncommitted transaction. By default, in on-premise SQL Server, your second connection would hang until the transaction is committed or rolled back (isolation level READ COMMITTED).
Somehow, using the same isolation level, Azure acts differently. It seems to work in some cases like snapshot isolation. Because of that, you can read from the table, but the results are not updated. Or maybe the locks are set in a different way.
To solve this, check sysprocesses for sessions with open_tran > 0, or just be careful committing transactions. In the example, running COMMIT in your session "A" should do it.
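A quick way to run that check (sysprocesses is the legacy compatibility view, but it still works):

-- Find sessions that are holding open transactions
SELECT spid, loginame, hostname, open_tran
FROM sys.sysprocesses
WHERE open_tran > 0;

-- Then, in the session that owns the transaction ("A" in the example):
COMMIT TRANSACTION; -- or ROLLBACK TRANSACTION;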
Good luck!

Identity column value suddenly jumps to 1001 in SQL Server

I am using SQL Server 2012 (Denali). I wonder why all identity column values suddenly start from 1001 and onwards. At the beginning, the identity column starts from 1, 2 and so on and increments smoothly, but then it jumps to 1001, 1002 and onwards for all the tables in the database containing an identity column. What could be the reason? Please assist.
Microsoft changed the way identity values are handled in SQL Server 2012, and as a result you can see identity gaps between your records after rebooting your SQL Server instance or your server machine. There may be other reasons for these gaps as well, such as an automatic server restart after installing an update.
You can use one of the two options below.
Use trace flag 272. This causes a log record to be generated for each generated identity value; the performance of identity generation may be impacted by turning this trace flag on.
Use a sequence generator with the NO CACHE setting (see the sketch after the steps below).
To set trace flag 272 on SQL Server 2012:
Open "SQL Server Configuration Manager".
Click "SQL Server Services" in the left pane.
Right-click your SQL Server instance name in the right pane (default: SQL Server (MSSQLSERVER)).
Click "Properties".
Click "Startup Parameters".
In the "Specify a startup parameter" textbox, type "-T272".
Click "Add".
Confirm the changes.
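For the second option, a minimal sketch of a sequence-backed key with NO CACHE (object names are made up):

-- A sequence with NO CACHE persists every generated value, so values
-- are not lost on an unexpected restart (at a performance cost)
CREATE SEQUENCE dbo.OrderNumbers
    AS INT
    START WITH 1
    INCREMENT BY 1
    NO CACHE;

-- Use it as the default for a key column instead of IDENTITY
CREATE TABLE dbo.Orders
(
    OrderID INT NOT NULL
        CONSTRAINT DF_Orders_OrderID DEFAULT (NEXT VALUE FOR dbo.OrderNumbers)
        CONSTRAINT PK_Orders PRIMARY KEY,
    OrderDate DATETIME2 NOT NULL
);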
I believe you have the explanation in a comment to this connect item: Failover or Restart Results in Reseed of Identity.
To boost performance on high-end machines, we introduced preallocation for identity values in 2012. This feature can be disabled by using TF 272 (then you will get the behaviour from 2008 R2).
The identity properties are stored separately in metadata. If a value is used in identity and increment is called, then the new seed value will be set. No operation, including Rollback, Failover, ..., can change the seed value except DBCC reseed. Failover applies to the table object, but not to the identity object. So for failover, you can call checkpoint before a manual failover, but you may see a gap in unplanned cases. If the gap is a concern, then I suggest you use TF 272.
For control manager shutdown, we have a fix for the next version (with another TF). This fix will take care of most control manager shutdown cases.
I guess you could use a sequence instead; a sequence gives you 100% control and is in many ways far superior to identity...
Identity is just so damn easy and convenient.
http://msdn.microsoft.com/en-us/library/ff878091.aspx
As far as I know, when an insert with identity fails, the identity value is consumed anyway (verified; see the repro below).
With a sequence you can make it "fill" gaps using CYCLE.
Although, as Amy Barrett points out, the value is generated out of the scope of the transaction.
There is also a performance optimization when you are using CACHE that might be useful.
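To illustrate the point about failed or rolled-back inserts consuming identity values, a small repro (the table is purely illustrative):

CREATE TABLE dbo.Demo (ID INT IDENTITY(1,1) PRIMARY KEY, Val INT NOT NULL);

BEGIN TRANSACTION;
INSERT INTO dbo.Demo (Val) VALUES (1); -- consumes identity value 1
ROLLBACK TRANSACTION;                  -- the row goes away; the value does not

INSERT INTO dbo.Demo (Val) VALUES (2);
SELECT ID, Val FROM dbo.Demo;          -- one row, with ID = 2, not 1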

Import tables and stored procedures

I am trying to export the tables [around 40] and stored procedures [around 120+] in SQL Server 2008 R2 from a dev server to a prod server.
I have created a .sql file [right-clicking the database in SSMS and choosing Tasks -> Generate Scripts], but when I try to import the tables and stored procedures into the prod server [right-clicking the database in SSMS, opening a New Query, and pasting the content in], it gives me a long list of errors.
Mostly:
There is already an object named 'tblMyTable' in the database
Violation of PRIMARY KEY constraint 'PK_MyTable'. Cannot insert duplicate key in object 'dbo.tblMyTable'
Any idea what I am doing wrong or what should be done? Thanks in advance.
The problem with your current technique is that it assumes your target is an empty database. The script reinserts everything with no attempt at merging data, and this is what causes your duplicate primary keys. If you use Management Studio, you have to do all the merging of data yourself.
My recommendation is first to look into Redgate; it's not free, but all the time you save will make it worth it. You will need both SQL Compare and SQL Data Compare ( http://www.red-gate.com/products/sql-development/sql-data-compare/ ).
Another alternative is Visual Studio 2010 Premium, if you have it ( http://msdn.microsoft.com/en-us/library/aa833435.aspx and http://msdn.microsoft.com/en-us/library/dd193261.aspx ). It offers both a data compare and a schema compare option. It is not as good as Redgate, but I found it works most of the time.
If you are looking for free alternatives, check out this Stack Overflow post: https://stackoverflow.com/questions/377388/are-there-any-free-alternatives-to-red-gates-tools-like-sql-compare
If you are importing the whole database to production, you might as well do a restore with replace over the production database.
120 stored procedures and 40 tables sound like the whole database, so a restore WITH REPLACE should do it.
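A minimal sketch of that approach (the database name, backup path, logical file names, and target locations are all made up):

-- On the dev server, take a full backup
BACKUP DATABASE MyAppDb
TO DISK = N'C:\Backups\MyAppDb.bak'
WITH INIT;

-- On the prod server, overwrite the existing database with it
RESTORE DATABASE MyAppDb
FROM DISK = N'C:\Backups\MyAppDb.bak'
WITH REPLACE,
     MOVE N'MyAppDb' TO N'D:\Data\MyAppDb.mdf',
     MOVE N'MyAppDb_log' TO N'E:\Logs\MyAppDb_log.ldf';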