We are in the process of moving on-prem MSSQL servers to Google CloudSQL SQL Server 2017 Standard. Out of 200 transfers so far, we have come across 2 that began having issues with UPDATE statements. Between the 2, the same tables are not always affected. New records can get created, but updates fail with the error below. The on-prem instances are SQL Server 2012 and 2014.
SQL Error on Update
-21472 Row cannot be located for updating. Some values may have been changed since it was last read
We use an ADODB Connection with ADODB Recordsets:
RecordSet.CursorLocation = adUseClient
Provider=MSOLEDBSQL; initial catalog= GeoLogicServer; Data Source=10.1.0.149; User ID=NOTAUser; password=NOTAPW;Persist Security Info=True
The only workaround we have found is to export the tables from one instance to another, losing the identity and index settings in the process. After resetting the identities, the tables update without issue.
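For reference, resetting an identity after that kind of export can be done with DBCC CHECKIDENT; a minimal sketch, where the table name and seed value are placeholders:
-- dbo.SomeTable is a placeholder; 1000 is whatever the current highest key value is
DBCC CHECKIDENT ('dbo.SomeTable', RESEED, 1000);
-- or just report the current identity value without changing anything
DBCC CHECKIDENT ('dbo.SomeTable', NORESEED);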
Any recommendation on settings we can review?
We are still testing restores to other instances and backups taken from different versions of SQL Server. Putting the backups on another on-prem server still works fine.
The issue was that the bad databases were the older 2012 and 2014 ones. Restoring to, and then backing up from, a SQL Server 2016 instance seems to have fixed the issue when subsequently restoring to the Google 2017 SQL instance.
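One thing that may be worth checking after each restore (an assumption on my part, not a confirmed root cause) is the database compatibility level, since a restored database keeps the level it had on the source instance:
-- Check the compatibility level the restored database is running under
SELECT name, compatibility_level FROM sys.databases WHERE name = 'GeoLogicServer';
-- Optionally raise it to the SQL Server 2017 level
ALTER DATABASE GeoLogicServer SET COMPATIBILITY_LEVEL = 140;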
I've been tasked with migrating data from an instance of SQL Server 2000 to 2019. There are a total of four databases to bring over, three of which I was able to backup/restore into 2008 and then into 2019 without any issues. Please note: I am not a DBA in any sense, though I'm the closest thing to one on hand.
The fourth and final database presented the following error that prevented moving from 2008 to 2019:
System.Data.SqlClient.SqlError: An error occurred while processing the log for database 'DbNameHere'. The log block version 2 is unsupported. This server supports log version 3 to 6. (Microsoft.SqlServer.SmoExtended)
Is there a simple fix for this problem that I'm missing in the various SSMS menus?
Alternatively, is there a way to copy raw data from one server to another via, for instance, a flat file, and preserve the identity columns as identity columns? That is, I don't want to just strip that column and bulk insert, as they are often used as foreign keys in other tables, and with twenty-some-odd years of data, something is bound to break in doing this.
An example of an ideal final result in this solution would be something like: legacy table X has 1000 rows, the last of which has an identity column value of 1000. Once the move is complete, new table X has 1000 rows, the last of which has an identity column value of 1000, and upon insert the next row automatically increments to 1001.
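For what it's worth, explicit identity values can usually be preserved when copying rows with SET IDENTITY_INSERT, and the seed checked afterwards. A rough sketch with hypothetical table, column, and staging names:
-- On the target server: keep the source identity values while copying
SET IDENTITY_INSERT dbo.TableX ON;
INSERT INTO dbo.TableX (Id, SomeColumn)
SELECT Id, SomeColumn FROM dbo.TableX_Staging;  -- staging table loaded from the flat file
SET IDENTITY_INSERT dbo.TableX OFF;
-- Verify the current identity value so the next insert gets 1001
DBCC CHECKIDENT ('dbo.TableX', NORESEED);
-- DBCC CHECKIDENT ('dbo.TableX', RESEED, 1000);  -- only if it needs correcting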
Apart from unsuccessfully messing around with flat files, I've also tried the "Copy Database" option in SSMS, which also failed.
I would attempt to get SQL Server to rebuild the transaction log. Based on the error message, that might sort out the situation.
First use sp_detach_db to detach the database. It is very likely that the ldf file won't be needed for the subsequent attach, and rebuilding the log this way may sort out the situation.
Then attach the database without the ldf file, using CREATE DATABASE with either the FOR ATTACH or the FOR ATTACH_REBUILD_LOG option.
I would do this on the 2008 instance, since from what I understand you got the database in there successfully. But feel free to experiment with which version (2000 or 2008) you do the detach on, and which version (2000, 2008, 2019) you do the attach on.
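A minimal sketch of that sequence, assuming the database is called DbNameHere and its data file sits in a hypothetical path:
-- On the instance that currently holds the database
EXEC sp_detach_db @dbname = 'DbNameHere';
-- Attach from the mdf only and let SQL Server rebuild the log
CREATE DATABASE DbNameHere
    ON (FILENAME = 'C:\SQLData\DbNameHere.mdf')
    FOR ATTACH_REBUILD_LOG;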
I'm trying to create an Always On AG between SQL Server 2017 and SQL Server 2019 instances with a very simple database (just a single table test database for proof of concept).
Everything appears to work fine except the secondary replica is stuck in "Synchronizing / In Recovery" status in SSMS.
I cannot connect to the replica even though I marked it as allowing all connections.
If I make changes to the database and fail over to the replica, the changes are there. Everything LOOKS OK, but I cannot connect to the replica, either with a normal connection string or with read-only intent.
I saw the topic below and see that the resolution was to use the same version of SQL Server on both replicas. I would ideally like to use 2017 on my primary and 2019 on the secondary, as that is our current production environment. If I can get the AG up, that will ultimately allow me to bring the SQL Server 2017 instance up to SQL Server 2019 without downtime.
Databases stuck in “Synchronized / In Recovery” mode after adding server to SQL Availability Group
(Screenshot: SSMS status)
(Screenshot: AG status query results)
When the replicas are on different versions like this, the secondary can't run recovery on the database, so it never becomes readable. If recovery did run, the database would be upgraded to the newer version's format and the AG would then be useless, because you could no longer fail back. And without recovery having run, you can't access the copy on the secondary.
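To confirm what state the secondary copy is in, a query along these lines against the AG DMVs should show it (just a sketch; run it on either replica):
-- Synchronization state of each database in the AG, per replica
SELECT ag.name AS ag_name,
       ar.replica_server_name,
       DB_NAME(drs.database_id) AS database_name,
       drs.synchronization_state_desc,
       drs.is_primary_replica
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar ON ar.replica_id = drs.replica_id
JOIN sys.availability_groups AS ag ON ag.group_id = drs.group_id;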
With SSDT in VS 2017 I cannot do a schema comparison of two databases. Steps to reproduce:
Tools >> SQL Server >> New Schema Comparison
Select a source (a SQL 2017 database)
Select a target (a SQL 2017 database)
Compare
Get error: Value cannot be null.
Parameter name: identifierGroup
I've also tried in VS 2015 which was still installed and it failed there too.
This error starts after the identity cache is disabled:
ALTER DATABASE SCOPED CONFIGURATION SET IDENTITY_CACHE = OFF
After re-enabling it, the error disappears:
ALTER DATABASE SCOPED CONFIGURATION SET IDENTITY_CACHE = ON
Don't forget to restart the SQL Server service.
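To see how the setting is currently configured on the database you are comparing, a quick check like this should work:
-- Shows the current value of IDENTITY_CACHE for the current database
SELECT name, value
FROM sys.database_scoped_configurations
WHERE name = 'IDENTITY_CACHE';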
I had the same issue and have not yet found a solution; I suppose there is some relation to orphaned users.
I did a workaround that worked fine for me:
Create a new empty database.
Compare the origin database with this new one.
Update the new database with the resulting script.
Make this new database the new origin and compare it with the destination database.
Update the destination database.
Maybe this helps!
I found a different answer which has worked for me several times so far, using SSDT / Visual Studio 2019 against SQL Server 2019 databases.
https://hendrikbulens.com/2018/10/03/how-to-fix-sql-server-data-tools-error-unexpected-exception-caught-during-population-of-source-model-in-visual-studio-2017/
Hope this helps someone. Ideally Microsoft would patch this issue, which looks like a longstanding bug at this point. I looked for such a patch but did not find one; if anyone finds one, please link it here.
I just used the SQLAzureMW (SQL Azure Migration Wizard Tool) to migrate my SQL Server database to Azure SQL. It went off without a hitch - all my tables are there, the website is running fine off it, etc.
Here's what's odd: if I execute a simple SELECT statement against my tables, I get only a few of the rows. I assumed they were missing, but my website is using some of those records as if they're there. So I queried with a WHERE clause and BAM - they showed up. How the... what the... why isn't my select showing me everything? This applies to many of the tables I've tested.
(Screenshot: results against SQL Azure)
(Screenshot: results against the on-premise server)
I gave up on MS SQL Management Studio and am instead using SQL Server Object Explorer from Visual Studio 2012/2013. It functions properly and allows inline editing of data.
Consider this SELECT statement:
SELECT
SvcTimeID,
LoginName,
MeanSeconds,
MedianSeconds,
RequestCount,
StdDevSeconds,
SvcDate,
CAST (TS AS INT) AS TS
FROM dbo.SvcTime
WHERE SvcDate >= @SvcDate
Where the parameter is set:
cmd.Parameters["@SvcDate"].Value = DateTime.UtcNow - new TimeSpan(31, 0, 0, 0);
Execute that statement in an Azure Web Role; it brought back, say, 24 rows.
Now, insert two new rows; wait at least one minute; execute the statement again. Do the recently inserted rows appear? In my case, they did not. Note: the default value of SvcDate in the database is getutcdate().
Move the SQL Azure database from the web edition to the standard (S2) edition. Rows magically appear.
Here is my theory. The issue you had was not with MS SQL Management Studio but with SQL Azure itself: under certain circumstances, the same query will return the original rows from a cache somewhere and miss the new rows in the database.
This has blown any remaining confidence I had with Azure.
I was scared at first, but I think this has an explanation:
If you inserted some rows in connection "A" and can't find them in other sessions, maybe you have an uncommitted transaction. By default, in on-premise SQL Server, your second connection would hang until the transaction is committed or rolled back (read committed isolation level).
Somehow, using the same isolation level, Azure acts differently. It seems to work in some cases like snapshot isolation. Because of that, you can read from the table, but the results are not updated. Or maybe the locks are taken in a different way.
To solve this, check sysprocesses for sessions with open_tran > 0 (see the query below), or just be careful committing transactions. In the example, running COMMIT in your session "A" should do it.
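A quick way to look for those sessions, sketched with the legacy sysprocesses view mentioned above:
-- Sessions that currently have an open transaction
SELECT spid, status, loginame, hostname, open_tran, cmd
FROM master..sysprocesses
WHERE open_tran > 0;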
Good luck!
OK, here is the thing:
I have an old MS SQL 2000 server, and this one will keep running.
However, for a new website I have a SQL 2008 server.
I need 3 tables from the old server (let's call it www.oldserver.com) to be on the new server too. The data on the old server still changes daily.
I would like to update the tables immediately when something changes on the old server.
How do you do this? I looked at mirroring, but that doesn't seem to be the way to go. I've also checked the Import function in SQL Server Management Studio, but I don't want to import the data all the time; one import, then daily updates, would be OK. So I guess I need to 'write a query to specify the data to transfer', but I have no idea how that query should look.
The import will go into an SSIS package so it can be scheduled.
What is the best practice here? How should I do it?
You could set up the old server as a Linked Server in the new server.
Then, you could create the tables on the new server not as tables, but as views, directly selecting from the tables on the old server.
Like this (on the new server):
CREATE VIEW OldTableOnNewServer AS
SELECT * FROM OldServer.OldDatabase.dbo.OldTable;
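Setting up the linked server itself might look roughly like this (a sketch; the linked server name, provider, and login mapping are assumptions to adapt):
-- On the new server: register the old SQL 2000 box as a linked server
EXEC sp_addlinkedserver
     @server = 'OldServer',
     @srvproduct = '',
     @provider = 'SQLNCLI',
     @datasrc = 'www.oldserver.com';
-- Map logins (here: every local login uses one fixed remote login)
EXEC sp_addlinkedsrvlogin
     @rmtsrvname = 'OldServer',
     @useself = 'false',
     @locallogin = NULL,
     @rmtuser = 'remote_login',
     @rmtpassword = 'remote_password';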
Advantages:
- No replication/updating necessary; the data comes directly from the tables on the old server.
Disadvantages:
- Network traffic: each time someone selects from the view, the new server will access the old server over the network.
- Availability: if the old server is not available, the views on the new server won't work at all.