I'm trying to create an Always On availability group between a SQL Server 2017 instance and a SQL Server 2019 instance with a very simple database (just a single-table test database as a proof of concept).
Everything appears to work fine, except that the secondary replica is stuck in "Synchronizing / In Recovery" status in SSMS.
I cannot connect to the replica even though I marked it as allowing all connections.
If I make changes to the database and fail over to the replica, the changes are there. Everything looks OK, but I cannot connect to the replica with either a normal connection string or one with read-only intent.
I saw the topic below, where the resolution was to use the same version of SQL Server on both replicas. I would ideally like to use 2017 on my primary and 2019 on the secondary, as that matches our current production environment. If I can get the availability group up, it will ultimately allow me to upgrade the SQL Server 2017 instance to SQL Server 2019 without downtime.
Databases stuck in “Synchronized / In Recovery” mode after adding server to SQL Availability Group
(Screenshots: SSMS status and the AG status query results)
When the replicas are on different versions like this, the secondary cannot run recovery. If it did, the database would be upgraded to the newer version and could never go back to the older primary, which would make the AG useless. And without recovery, the database cannot be accessed, which is why it sits in "Synchronizing / In Recovery" and refuses connections.
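As a sanity check, this query against the standard availability-group DMVs (nothing here is specific to the poster's setup) shows what each replica reports for the database:

-- Synchronization state of each database replica in every AG
SELECT ag.name AS ag_name,
       ar.replica_server_name,
       drs.synchronization_state_desc,
       drs.database_state_desc
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar ON ar.replica_id = drs.replica_id
JOIN sys.availability_groups AS ag ON ag.group_id = drs.group_id;

The synchronization_state_desc and database_state_desc columns roughly correspond to the two halves of the "Synchronizing / In Recovery" label SSMS displays.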
We are in the process of moving on-premises MS SQL Servers to Google Cloud SQL for SQL Server 2017 Standard. Out of 200 transfers so far, we have come across 2 databases that began having issues with UPDATE statements. Between the two, the same tables are not always affected. New records can be created, but updates fail with the error below. The on-premises instances are SQL Server 2012 and 2014.
SQL Error on Update
-21472 Row cannot be located for updating. Some values may have been changed since it was last read
We use an ADODB Connection with ADODB Recordsets:
RecordSet.CursorLocation = adUseClient
Provider=MSOLEDBSQL; initial catalog= GeoLogicServer; Data Source=10.1.0.149; User ID=NOTAUser; password=NOTAPW;Persist Security Info=True
The only workaround we have found is to export the tables from one instance to the other, losing the identity and index settings in the process. After resetting the identities, the tables update without issue.
Any recommendations on settings we can review?
We are still testing restoring to other instances and backing up from different versions of SQL Server. Putting the backups on another on-premises server still works fine.
The issue was that the bad DBs came from the older 2012 and 2014 instances. Restoring to, and then backing up from, a 2016 instance seems to have fixed the issue when restoring to the Google Cloud SQL 2017 instance.
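A sketch of that intermediate hop, using the database name from the connection string above (paths are placeholders; adjust file locations and add WITH MOVE clauses as needed for your environment):

-- 1. On the 2012/2014 source instance:
BACKUP DATABASE GeoLogicServer TO DISK = N'C:\Backups\GeoLogicServer.bak';

-- 2. On a 2016 instance: restore, then back up again so the backup
--    file is written by the 2016 engine:
RESTORE DATABASE GeoLogicServer FROM DISK = N'C:\Backups\GeoLogicServer.bak' WITH RECOVERY;
BACKUP DATABASE GeoLogicServer TO DISK = N'C:\Backups\GeoLogicServer_2016.bak';

-- 3. Restore GeoLogicServer_2016.bak to the Google Cloud SQL 2017 instance.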
I'm seeing the SSN masked on PROD but not masked on DEV, with the exact same code. I looked up the stored procedures it is calling, and they are simple SELECT statements. I asked the DB admin and he said they are not doing any masking; since they are using SQL Server 2012, he is pretty sure it is not happening on the SQL Server side (SQL Server 2016 and later can do masking on the DB side).
I need suggestions on what else I can debug. Is it even possible to do masking on SQL Server 2012?
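Dynamic Data Masking does not exist in SQL Server 2012, so if PROD really is on 2012 the masking has to be happening in the application or reporting layer. If there is any doubt about the PROD version, this check settles it (sys.masked_columns is a real catalog view, but it only exists on 2016 and later):

SELECT @@VERSION;  -- confirm the engine version first

-- On SQL Server 2016+ this lists every column with Dynamic Data Masking applied;
-- on 2012 the view simply does not exist.
SELECT OBJECT_NAME(object_id) AS table_name,
       name AS column_name,
       masking_function
FROM sys.masked_columns;

If that returns nothing (or errors out on 2012), the next place to look is the data access layer or any middleware sitting between the app and the two environments.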
I am facing a SQL Server replication issue
(identity management in a pull merge replication at the subscriber).
Replication situation:
The Distributor and the Publisher are on one server running Windows Server 2012 Standard and SQL Server 2012 Standard.
One subscriber PC runs Windows 7 Professional and SQL Server 2012 Express Edition.
They are connected over the internet through a VPN.
The Problem:
The Subscriber has an article (a table, [DocumentItems]) whose identity column [DocumentItemsID] is managed by replication and was assigned the following range:
([DocumentItemsID]>(280649) AND [DocumentItemsID]<=(290649)) OR ([DocumentItemsID]>(290649) AND [DocumentItemsID]<=(300649))
The server lost power several times.
Every time the subscriber PC comes back up, the [DocumentItemsID] column picks an identity value out of its range, such as 330035, when inserting new rows.
This has happened 3 times.
I fixed the problem with a manual reseed:
DBCC CHECKIDENT('DocumentItems', RESEED, xxxx)
where xxxx is the MAX existing value for [DocumentItemsID] + 1.
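The same reseed without hard-coding the number, as a small sketch (the variable is mine; everything else mirrors the fix above):

-- Reseed from the current maximum instead of a hand-computed xxxx
DECLARE @NewSeed INT;
SELECT @NewSeed = MAX([DocumentItemsID]) + 1 FROM dbo.DocumentItems;
DBCC CHECKIDENT('DocumentItems', RESEED, @NewSeed);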
Once the power is lost again, the same problem occurs.
Does anybody have any idea what is happening?
And why was the [DocumentItemsID] column assigned values out of its range?
Thanks
OK, I finally figured out what was going on.
It is an issue introduced in SQL Server 2012: when the SQL Server instance is restarted, the table's identity value jumps (an int column jumps by 1,000, a bigint by 10,000).
To stop this increment, register -T272 as a SQL Server startup parameter.
This solved the problem.
Thanks to the Code Project article by S. M. Ahasan Habib; I was totally in the dark before I read it.
For details on how to register the startup parameter, read the article. It shows how to reproduce the issue and provides two solutions.
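For reference, the other workaround commonly mentioned alongside the trace flag is to take key generation away from the identity column and use a SEQUENCE created with NO CACHE, which is durable per value and so cannot skip a cached block after a hard shutdown. A sketch (the sequence name and start value are placeholders, not from the article):

-- SQL Server 2012+: a sequence with NO CACHE survives power loss without jumping
CREATE SEQUENCE dbo.DocumentItemsSeq
    AS INT
    START WITH 300650     -- placeholder: pick the next value valid for your range
    INCREMENT BY 1
    NO CACHE;

-- Used at insert time instead of relying on IDENTITY:
-- INSERT INTO dbo.DocumentItems (DocumentItemsID, ...)
-- VALUES (NEXT VALUE FOR dbo.DocumentItemsSeq, ...);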
We have 2 servers: one production, one test/development. I wanted to run some SQL checks and updates against production data but write the changes to the test/development server, so people wouldn't see the changes.
Using SQL Server Management Studio, I ran the cursor with the checks and updates in it. I was actively connected to test/development. However, I wrote my queries as follows:
SELECT * FROM [Production_Server].[Production_DB].[schema].[table]
I was under the impression this would read from the production server; however, it did not. It reads from the test/development server. I have access/rights in both environments.
Is there something I overlooked permission-wise to get this to work? Or is this just how it is intended to work?
"[Production_Server]" must be the wrong linked server name.
Run the sproc below to find the correct value to use:
exec sp_linkedservers
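If the production server does not appear in that list, it can be registered as a linked server. A minimal sketch (the server name is the placeholder from the question; with @srvproduct = N'SQL Server', the name must be the real network name of the production instance):

EXEC sp_addlinkedserver @server = N'Production_Server', @srvproduct = N'SQL Server';

-- After that, the four-part name resolves to the remote server:
SELECT TOP (10) * FROM [Production_Server].[Production_DB].[schema].[table];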
OK, here is the thing:
I have an old MS SQL Server 2000 server, and this one will keep running.
However, for a new website I have a SQL Server 2008 server.
I need 3 tables from the old server (let's call it www.oldserver.com) to be on the new server too. The data on the old server still changes daily.
I would like to update the tables immediately when something changes on the old server.
How do you do this? I looked at mirroring, but that doesn't seem to be the way to go. I've also checked the Import function in SQL Server Management Studio, but I don't want to import all the data every time; one import and then daily updates would be fine. So I guess I need to 'write a query to specify the data to transfer', but I have no idea how that query should look.
The import will go into an SSIS package so it can be scheduled.
What is the best practice here? How should I do it?
You could set up the old server as a linked server on the new server.
Then you could create the tables on the new server not as tables, but as views that select directly from the tables on the old server.
Like this (on the new server):
create view OldTableOnNewServer as
select * from OldServer.OldDatabase.dbo.OldTable
Advantages:
- No replication/updating necessary: the data comes directly from the tables on the old server.
Disadvantages:
- Network traffic: each time someone selects from the view, the new server accesses the old server over the network.
- Availability: if the old server is not available, the views on the new server won't work at all.
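For completeness, a sketch of the linked-server setup the view depends on, run on the new server (the name 'OldServer' and the login are placeholders):

EXEC sp_addlinkedserver @server = N'OldServer', @srvproduct = N'SQL Server';

-- Map local logins to a remote SQL login on the old server:
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'OldServer',
    @useself    = 'FALSE',
    @rmtuser    = N'readonly_login',   -- placeholder
    @rmtpassword = N'...';             -- placeholder

If the daily-refresh route is preferred over live views, the same linked server also works as the source of a scheduled INSERT ... SELECT into real tables on the new server.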