Identity Management in a Pull Merge Replication at Subscriber

I am facing a SQL Server replication issue
(identity management in a pull merge replication at the subscriber).
Replication situation:
The Distributor and the Publisher are on one server running Windows Server 2012 Standard and SQL Server 2012 Standard
One Subscriber PC running Windows 7 Professional and SQL Server 2012 Express Edition
Both are connected over the internet through a VPN
The Problem:
The Subscriber has an article (table) [DocumentItems] whose identity column [DocumentItemsID] is managed by replication and was assigned the following range:
([DocumentItemsID]>(280649) AND [DocumentItemsID]<=(290649) OR [DocumentItemsID]>(290649) AND [DocumentItemsID]<=(300649))
The server lost power several times.
Every time the Subscriber PC comes back up, the [DocumentItemsID] column picks identity values outside its range, such as 330035, when new rows are inserted.
The issue happened 3 times.
I fixed the problem by a manual reseed:
DBCC CHECKIDENT('DocumentItems' , RESEED, xxxx)
Where xxxx is the MAX existing value for [DocumentItemsID] + 1
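For what it's worth, a minimal sketch of that reseed, computing the seed from the current data (table and column names are the ones from the question; the MAX + 1 choice follows the description above, not a general recommendation):
DECLARE @NewSeed INT;
SELECT @NewSeed = MAX([DocumentItemsID]) + 1 FROM dbo.[DocumentItems];
-- After RESEED, the next inserted row gets @NewSeed + 1.
DBCC CHECKIDENT('dbo.DocumentItems', RESEED, @NewSeed);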
Whenever the power is cut again, the same problem recurs.
Does anybody have any idea what is happening?
And why was the [DocumentItemsID] field assigned values out of its range?
Thanks

OK, I finally figured out what was going on.
It is an issue introduced in SQL Server 2012: when the SQL Server instance is restarted, the table's identity value jumps (an int column jumps by 1000, a bigint column by 10000).
To stop this jump, register -T272 as a SQL Server startup parameter.
This solved the problem.
Thanks to the Code Project article by S. M. Ahasan Habib; I was totally in the dark until I read it.
For details on how to register the startup parameter, read the article. It shows how to reproduce the issue and provides two solutions.
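To confirm the flag is actually active after the restart, a standard check is:
DBCC TRACESTATUS(272, -1);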

Related

Migrating legacy data from SQL Server 2000 to 2019, log block error - is there a painless way of moving over tables with autoinc identity columns? [migrated]

I've been tasked with migrating data from an instance of SQL Server 2000 to 2019. There are a total of four databases to bring over, three of which I was able to backup/restore into 2008 and then into 2019 without any issues. Please note: I am not a DBA in any sense, though I'm the closest thing to one on hand.
The fourth and final database presented the following error that prevented moving from 2008 to 2019:
System.Data.SqlClient.SqlError: An error occurred while processing the log for database 'DbNameHere'. The log block version 2 is unsupported. This server supports log version 3 to 6. (Microsoft.SqlServer.SmoExtended)
Is there a simple fix for this problem that I'm missing in the various SSMS menus?
Alternatively, is there a way to copy raw data from one server to another via, for instance, a flat file, and preserve the identity columns as identity columns? That is, I don't want to just strip that column and bulk insert, as they are often used as foreign keys in other tables, and with twenty-some-odd years of data, something is bound to break in doing this.
An example of an ideal final result in this solution would be something like: legacy table X has 1000 rows, the last of which has an identity column value of 1000. Once the move is complete, new table X has 1000 rows, the last of which has an identity column value of 1000, and upon insert the next row automatically increments to 1001.
Apart from unsuccessfully messing around with flat files, I've also tried the "Copy Database" option in SSMS, which also failed.
I would attempt to get SQL Server to rebuild the transaction log. Based on the error message, that might sort out the situation.
First use sp_detach_db to detach the database. The ldf file is very likely not needed for a subsequent attach, and rebuilding the log this way may resolve the error.
Then attach the database without the ldf file, using CREATE DATABASE with either the FOR ATTACH or FOR ATTACH_REBUILD_LOG option.
I would do this on the 2008 instance, since from what I understand you got the database in there successfully. But feel free to experiment with which version (2000 or 2008) you do the detach on and which version (2000, 2008, 2019) you do the attach on.
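A minimal sketch of that approach, assuming the database name from the error message and a placeholder data-file path:
EXEC sp_detach_db @dbname = N'DbNameHere';
-- Attach using only the data file; the log is rebuilt rather than read.
CREATE DATABASE [DbNameHere]
    ON (FILENAME = N'D:\Data\DbNameHere.mdf')  -- placeholder path, adjust to your environment
    FOR ATTACH_REBUILD_LOG;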

Google Cloud SQL - -21472 Row cannot be located for updating

We are in the process of moving on-prem SQL Servers to Google Cloud SQL (SQL Server 2017 Standard). Out of 200 transfers so far, we have come across 2 that began having issues with UPDATE statements. Between the 2, the same tables are not always affected. New records can be created, but updates fail with the error below. The on-prem instances are SQL Server 2012 and 2014.
SQL Error on Update
-21472 Row cannot be located for updating. Some values may have been changed since it was last read
We use ADODB Connection with ADODB Recordsets
RecordSet.CursorLocation = adUseClient
Provider=MSOLEDBSQL; initial catalog= GeoLogicServer; Data Source=10.1.0.149; User ID=NOTAUser; password=NOTAPW;Persist Security Info=True
The only workaround we have found is to export the tables from one instance to another, losing identity and index settings in the process. After resetting the identities, the tables update without issue.
Any recommendation on settings we can review?
We are still testing restoring to other instances and backing up from different versions of SQL Server. Putting the backups on another on-prem server still works fine.
The issue was that the problematic databases came from the older 2012 and 2014 instances. Restoring to, and backing up from, a SQL Server 2016 instance seems to have fixed the issue when the result is then restored to the Google Cloud SQL 2017 instance.

SQL Server Always On Availability Group stuck in "Synchronizing / In Recovery"

I'm trying to create an Always On AG between SQL Server 2017 and SQL Server 2019 instances with a very simple database (just a single table test database for proof of concept).
Everything appears to work fine except the secondary replica is stuck in "Synchronizing / In Recovery" status in SSMS.
I cannot connect to the replica even though I marked it as allowing all connections.
If I make changes to the database and fail over to the replica, the changes are there. Everything LOOKS OK, but I cannot connect to the replica with either a normal connection string or one with read-only intent.
I saw the topic below and see that the resolution was to use the same version of SQL Server on both replicas. I would ideally like to use 2017 on my primary and 2019 on the secondary, as that is our current production environment. If I can get the AG up, that will ultimately allow me to bring the SQL Server 2017 instance up to SQL Server 2019 without downtime.
Databases stuck in "Synchronized / In Recovery" mode after adding server to SQL Availability Group
(Screenshots in the original post: SSMS status and the output of an AG status query.)
When you have different versions like this, the secondary can't run recovery. If it did, the AG would then be useless, as you couldn't go back to the lower version. And until recovery runs, you can't access the database on the secondary.
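If it helps while testing, the recovery and synchronization state of each database copy can be checked with the standard DMVs (nothing here is specific to this particular setup):
SELECT ag.name AS ag_name,
       ar.replica_server_name,
       drs.synchronization_state_desc,
       drs.database_state_desc
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar ON ar.replica_id = drs.replica_id
JOIN sys.availability_groups AS ag ON ag.group_id = drs.group_id;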

SQL Replication Publisher thinks Subscriber is wrong version

I have two SQL Server 2008 instances, one running Workgroup Edition (publisher) and the other Standard (subscriber).
I am trying to replicate a database, but I am getting errors when it tries to create the database at the subscriber because it thinks the subscriber is running SQL Server 2005 for some reason.
Has anyone had this issue before?
I am getting this error
Column Location in object Members contains type Geography, which
is not supported in the target server version, SQL Server 2005.
Have you checked the compatibility level of the databases?
For example:
SELECT compatibility_level
FROM sys.databases WHERE name = 'YourDBName';
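If it still reports the SQL Server 2005 level (90), raising it is one possible next step; 100 is the SQL Server 2008 level, and 'YourDBName' is the same placeholder as above:
ALTER DATABASE [YourDBName] SET COMPATIBILITY_LEVEL = 100;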

Identity column value suddenly jumps to 1001 in sql server [duplicate]

This question already has answers here: Identity increment is jumping in SQL Server database (6 answers). Closed 7 years ago.
I am using SQL Server 2012 (Denali). I wonder why all identity column values suddenly start from 1001 and onwards. At the beginning the identity column starts from 1, 2, and so on and increments smoothly, but suddenly it jumps to 1001, 1002 and onwards for every table in the database that contains an identity column. What could be the reason? Please assist.
Microsoft changed the way identity values are handled in SQL Server 2012, and as a result you can see identity gaps between your records after rebooting your SQL Server instance or your server machine. There can be other reasons for these gaps as well, for example an automatic server restart after installing an update.
You can use either of the two choices below:
Use trace flag 272. This will cause a log record to be generated for each identity value generated; the performance of identity generation may be impacted by turning on this trace flag.
Use a sequence generator with the NO CACHE setting (a sketch follows the steps below).
To set trace flag 272 on SQL Server 2012, which is what you want here:
Open "SQL Server Configuration Manager"
Click "SQL Server Services" on the left pane
Right-click your SQL Server instance name in the right pane (default: SQL Server (MSSQLSERVER))
Click "Properties"
Click "Startup Parameters"
On the "specify a startup parameter" textbox type "-T272"
Click "Add"
Confirm the changes
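A minimal sketch of the second choice above, the NO CACHE sequence (object names are illustrative, not taken from the original question):
CREATE SEQUENCE dbo.OrdersSeq
    AS INT
    START WITH 1
    INCREMENT BY 1
    NO CACHE;
-- Used as a column default instead of IDENTITY:
CREATE TABLE dbo.OrdersDemo
(
    OrderID INT NOT NULL
        CONSTRAINT DF_OrdersDemo_OrderID DEFAULT (NEXT VALUE FOR dbo.OrdersSeq),
    Payload NVARCHAR(100) NULL
);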
I believe you have the explanation in a comment on this Connect item: Failover or Restart Results in Reseed of Identity.
To boost performance for high-end machines, we introduced preallocation of identity values in 2012. This feature can be disabled by using TF 272 (then you will get the behaviour from 2008 R2).
The identity properties are stored separately in metadata. If a value is used for the identity and the increment is called, then the new seed value will be set. No operation, including rollback, failover, and so on, can change the seed value except DBCC reseed. Failover applies to the table object, but not to the identity object. So for failover, you can call a checkpoint before a manual failover, but you may see a gap for unplanned cases. If the gap is a concern, then I suggest you use TF 272.
For control manager shutdown, we have a fix for the next version (with another TF). This fix will take care of most control manager shutdown cases.
I guess you could use a sequence instead; a sequence gives you complete control and is in many ways far superior to identity...
Identity is just so damn easy and convenient.
http://msdn.microsoft.com/en-us/library/ff878091.aspx
As far as I know, when an insert with an identity column fails, the identity value is consumed anyway (verified).
With a sequence you can make it "fill" gaps using CYCLE.
Although, as Amy Barrett points out, the sequence value is created outside the scope of the transaction.
There is also a performance optimization when you are using CACHE that might be useful.
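A hedged illustration of the CYCLE and CACHE options mentioned above (names and sizes are illustrative; tune them to your workload):
CREATE SEQUENCE dbo.DemoCycleSeq
    AS INT
    START WITH 1
    INCREMENT BY 1
    MINVALUE 1
    MAXVALUE 1000000
    CYCLE       -- restarts at MINVALUE after MAXVALUE is reached
    CACHE 50;   -- preallocates 50 values for speed; an unexpected restart can skip the cached ones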