I would like to understand what happens in the following Lotus Domino server-to-server replication scenario:
Server A has a replica of a database.
Server B has a replica of the same database.
Both servers have manager access on the database, including the delete document privilege.
The replicator process has just replicated A and B and all is in sync.
The database contains a note that has a reader field where both servers are mentioned.
On server A the entry for server B is removed from the readers field.
Server A initiates the replication with B.
In this scenario I expect that server A will remove the document from server B. There are variations on the scenario: server C replicating with B, or B initiating the replication with A.
I have an application that is built around this expectation, and it has worked well most of the time. But there are notes that remain on server B and are excluded from the replication process. The OID remains different. There are instances where the DSN is updated on both notes without any effect on the replication process.
Actually, I disagree with AndrewB's answer. In my experience, it should work as per your expectations. Using readernames fields to control replication has been part of my standard arsenal for 15+ years, and I have found it far more reliable than the alternative of selective replication -- which is evil and should be avoided at all costs, but that's another story!
It is true that once the readernames field no longer contains the entry for serverB the note itself is invisible to serverB, but the fact that the note has changed is not invisible to the replicator. The replicator should notice this, determine that serverB no longer has rights to the document, and remove it -- without leaving a stub.
Have you tried clearing the replication history on both sides?
This is an easy trap to fall into: you should not use Reader fields to control replication between servers. They are fantastic for controlling users and groups, but all servers in a replication group should always have access to everything.
The reason the documents are left unchanged on server B is that removing server B from the reader field makes the document invisible to that server, so it has no idea the document has changed or been deleted. The reason a deletion on server A is picked up by server B is that deletion converts the document into a deletion stub, which is little more than the UNID, so the readers field goes away too, making the deletion 'visible' to server B. You can't even force server A to write to server B, because server A knows that server B isn't allowed to see the document, so a push replication will ignore the document in question.
IBM has created an SPR for this:
Problem: Normally, when a server is removed from the reader field of a document, the document is deleted from that server after scheduled replication takes place, since the server no longer has access to the document. In some cases, when a secondary server is removed from the reader field of a document residing on a primary server, the document is not deleted from the secondary server after replication occurs between the two servers, as would be expected. Enabling replication debugging reveals the following error on the source server: "You are not authorized to perform that operation". Clearing the replication history and initiating replication from both servers does not resolve the problem. Upon further investigation, it was determined that the document on the secondary server had a higher sequence number, which implies that it was updated more recently than the document on the primary server.
Normally, when a document does not contain a reader field, or when both servers involved in the replication are listed in the reader field of both copies of the document, a replication conflict will be generated if the document is modified on both servers before replication takes place. However, in this particular situation, since the secondary server does not have access to the document on the primary server, replication fails as expected, and no replication conflict is generated, because for a conflict document to be generated both servers need to have access to the document.
Resolving the problem:
1.) A short-term solution would be to modify the document on the primary server so that its sequence number is higher than that of the document on the secondary server. After replication takes place, the changes should replicate to the secondary server and the document should be deleted from the secondary server as expected.
2.) A more permanent solution would be to prevent users and servers from making changes to the document on both servers at around the same time. Also, replicating more often should help reduce the chances of such a condition occurring, since changes made on one server will possibly replicate out before changes are also made on the other server. This issue is being tracked under SPR MKHS8MLQVD.
We had something like this happen when we were consolidating servers, and it didn't work out very well for us. To use your server A/server B scenario, what happened for us was that server B replicated with server A and the document disappeared from server B. Unfortunately this was tracked as a deletion, so when A and B replicated again, the documents were then removed from server A as well.
Luckily we had backups.
While analyzing Dynamic Management Views captured against a failover server, I observed that the DMVs are getting flushed, or that the SQL engine is resetting the statistics.
In the production environment we are not allowed to flush/clear the DMVs, so I am calculating the delta between captures. While calculating the delta I noticed that the previous value is often greater than the current value.
My question is: say database A is configured in AG1 with two servers as primary and secondary; when failing over from primary to secondary, will the primary server's stats be reset, and what are the different reasons that could cause a DMV to be reset?
Also, what happens in the case of a recompilation of a particular procedure: is that going to reset the DMV stats?
When the failover occurs, you're moving from one server to another server. sys.dm_exec_procedure_stats provides information about procedures that are currently in cache. Since you changed servers, there is nothing in cache for that database after the failover. Therefore, you're going to see radical differences from one server to another after a failover.
It's not a reset of the information. It's simply that the information in the procedure cache of one server is not the same as the information in the procedure cache of another server.
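For example, you can compare what each server currently has in its procedure cache by running a quick diagnostic query on both replicas after a failover (nothing here is server-specific):

-- Shows which procedure plans are currently cached on this server and
-- when they entered the cache; after a failover the new primary will
-- show recent cached_time values and low execution counts.
SELECT
    DB_NAME(ps.database_id)                   AS database_name,
    OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
    ps.cached_time,
    ps.execution_count,
    ps.total_worker_time
FROM sys.dm_exec_procedure_stats AS ps
ORDER BY ps.cached_time DESC;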
There are some features in our LOB application that allow users to define their own queries to retrieve data for reports and listings within the app. The problem we are encountering is that the queries they write are sometimes really heavy (and sometimes erroneous) and cause massive load on the server.
Removing these features is out of the question, but I want to know if there is a way to create some type of sandbox within SQL Server, so that the queries they execute are only allotted a certain amount of resources and therefore don't get the chance to cause any damage to anyone else using the system. Any ideas?
The Resource Governor has already been mentioned in the comments. One other solution I can think of is using SQL Server AlwaysOn Availability Groups.
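To expand on the Resource Governor route first: if you're on SQL Server 2008 or later (Enterprise edition), a minimal sketch looks something like the following. The pool, group, and login names are made up for illustration; tune the limits to your workload.

-- Run in master. Cap CPU and memory for user-defined report queries.
CREATE RESOURCE POOL ReportPool
    WITH (MAX_CPU_PERCENT = 20, MAX_MEMORY_PERCENT = 20);

CREATE WORKLOAD GROUP ReportGroup
    WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 10)
    USING ReportPool;
GO

-- Classifier: route sessions from the (hypothetical) reporting login
-- into the restricted group; everything else stays in 'default'.
CREATE FUNCTION dbo.rg_classifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'report_user'  -- assumed login name
        RETURN N'ReportGroup';
    RETURN N'default';
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;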
The last place I worked had this kind of setup: a primary server which takes in all the transactions that write to the database, with a secondary in case the primary fails. Added to this, we also had read-only replicas in the availability group.
The main purpose of this is that, in the event your main server goes down, you are automatically transferred to another replica. When you connect your application to the database, you connect it to the availability group listener rather than to a specific server; then, if a server goes down, you are automatically transferred to a secondary server instead. However, it can also be used to optimise application functionality that only needs read-only access, by taking load off the primary server.
For any functionality that we knew only needed read-only access, we could connect to the availability group and add ApplicationIntent=ReadOnly to the connection string, which means we use a read-only replica rather than the primary, leaving the primary for regular transactions. (IIRC, by default the primary will accept any read/write connection, so you have to configure the primary not to accept read-intent connections; see the sketch below.)
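As a sketch of that configuration (availability group, replica, and host names below are placeholders; SQL Server 2012+ syntax), run this on the primary:

-- Allow read-intent connections on the secondary and route them there.
ALTER AVAILABILITY GROUP [MyAG]
    MODIFY REPLICA ON N'SQLNODE2'
    WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY,
                          READ_ONLY_ROUTING_URL = N'TCP://sqlnode2.example.com:1433'));

-- Tell the primary where to send ApplicationIntent=ReadOnly sessions.
ALTER AVAILABILITY GROUP [MyAG]
    MODIFY REPLICA ON N'SQLNODE1'
    WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (N'SQLNODE2')));

Clients then connect to the listener with ApplicationIntent=ReadOnly in the connection string and are routed to the read-only replica.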
Anyway, the kicking off point for reading up about this is here: https://msdn.microsoft.com/en-us/library/ms190202.aspx
The latest Windows 10 1903 upgrade has a built-in Windows Sandbox feature, where you can run SQL Server within its own sandbox. I don't think SQL Server itself has an inbuilt sandbox environment, as it would be practically impossible to manage within a normal Windows server that is not using a sandbox, if you know what I mean.
I have set up an Azure database instance which supposedly replicates into a 'read only' secondary database using standard geo-replication. In the Azure portal I can see the status of the replication is 'online' and 'Secondary type' is 'Offline', which appears to be normal.
My question is, is there a way for me to see the actual contents of the secondary database, to ensure the replication is actually working as planned?
I cannot 'Manage' the database in the portal. I can connect to the instance in SQL Management Studio, where I can see the database but expanding tables / stored procedures shows nothing (a bit like connecting to a secure database using the non-secure connection string). I am also not able to run any queries against it as it gives me 'Connection to an offline secondary database is not allowed.'
I've searched this site and did a web search for an answer but can't seem to find one. Am I supposed to blindly rely on the fact that Azure is performing the replication correctly (with no way to double-check), or am I missing something here?
Many thanks in advance for any light you are able to shed on this.
Standard Geo-Replicated Secondary DBs are offline copies that do not accept client connections (so there is no way to query the data directly). If you need a readable Geo-Replicated Secondary then you must use the Active Geo-Replication available for Premium DBs.
Even though you can't query Standard Geo-Replicated DBs directly, you can use the DMVs in the master database to determine if the continuous copy is working correctly.
On the master, try the following:
SELECT * FROM sys.dm_database_copies
SELECT * FROM sys.dm_continuous_copy_status
I hope this helps!
For more information about Standard Geo-Replication, Active Geo-Replication, or checking the activity of the continuous copy, use the following links:
Standard Geo-Replication: https://msdn.microsoft.com/en-us/library/azure/Dn758204.aspx
Active Geo-Replication: https://msdn.microsoft.com/en-us/library/azure/dn741339.aspx
Continuous Copy DMV Blog: http://www.sqlservercentral.com/blogs/pie-in-the-sky/2014/12/25/monitoring-geo-replication-in-sql-azure-using-dmvs/
I tried to repro your situation and I think I understand the confusion.
When the Secondary Type = "Offline", it is a Standard Geo-Replicated Secondary. The Primary Databases page is confusing, but clicking the link to the secondary should show that it is offline.
As for checking whether the continuous copy is working, run the script below against the primary (I was mistaken last time, sorry).
SELECT * FROM sys.dm_continuous_copy_status
You should see the linked server, database, and replication state.
As before, if you need to read from your secondary, you will have to create a Premium Active Geo-Replicated Secondary.
Hope this helps!
Scenario
In our replication scheme we replicate a number of tables, including a photos table that contains binary image data. All other tables replicate as expected, but the photos table does not. I suspect this is because of the larger amount of data in the photos table, or perhaps because the image data is stored in a varbinary field. However, using smaller varbinary fields did not help.
Config Info
Here is some config information:
Each image could be anywhere from 65 to 120 KB
A revision and an approved copy are stored along with thumbnails, so a single row may approach ~800 KB
I once had trouble with the "max text repl size" configuration option, but I have set that to the maximum value using sp_configure and RECONFIGURE WITH OVERRIDE (see the sketch after this list)
Photos are filtered based on a “published” field, but so are other working tables
The databases are using the same local db server (in the development environment) and are configured for transactional replication
The replicated database uses a “push” subscription
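For reference, this is the "max text repl size" change mentioned in the list above; the value is in bytes, and 2147483647 is the maximum on SQL Server 2005:

-- Raise the maximum size of text/image/varbinary data that replication
-- will propagate for a single column value.
EXEC sp_configure 'max text repl size', 2147483647;
RECONFIGURE WITH OVERRIDE;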
Also, I noticed that sometimes regenerating the snapshot and reinitializing the subscription caused the images to replicate. Taking this into consideration, I configured the snapshot agent to regenerate the snapshot every minute or so for debugging purposes (obviously this is overkill for a production environment). However, this did not help things.
The Question
What is causing the photos table not to replicate while all the other tables replicate without a problem? Is there a way around this? If not, how would I go about debugging further?
Notes
I have used SQL Server Profiler to look for errors as well as the Replication Monitor. No errors exist. The operation just fails silently as far as I can tell.
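One more place worth checking, assuming the default distribution database name, is the distributor's error table, which sometimes records failures that the Replication Monitor does not surface:

-- Recent errors logged by the replication agents.
USE distribution;
SELECT TOP (50) time, error_type_id, error_text
FROM dbo.MSrepl_errors
ORDER BY time DESC;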
I am using SQL Server 2005 with Service Pack 3 on Windows Server 2003 Service Pack 2.
[update]
I have found out the hard way that Philippe Grondier is absolutely right in his answer below. Images, videos and other binary files should not be stored in the database. IIS handles these files much more efficiently than I can.
I do not have a straight answer to your problem, as our standard policy has always been 'never store (picture) files in (database) fields'. Our solution, which applies not only to pictures but to any kind of file or document, is now standard:
We have a "document" table in our database, where document/file names and relative folders are stored (in order to get unique document/file names, we generate them from the primary key/uniqueIdentifier value of the 'Document' table).
This "document" table is replicated among our different subscribers, like all other tables.
We have a "document" folder, with subfolders, available on each of our database servers.
Document folders are then replicated independently of the database, with file-and-folder synchronisation software (Allway Sync is an option).
The main publisher's folders are fully accessible through FTP, so a user trying to read a document that is (still) unavailable on his local server is offered a download from the main server through FTP client software (such as CoreFTP and its command-line options).
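As a minimal sketch, a "document" table of the kind described above might look like this (column names are illustrative, not the poster's actual schema):

-- Store only metadata in the database; the file itself lives on disk.
CREATE TABLE dbo.Document
(
    DocumentId   uniqueidentifier NOT NULL
                 CONSTRAINT PK_Document PRIMARY KEY
                 CONSTRAINT DF_Document_Id DEFAULT NEWID(),
    OriginalName nvarchar(260) NOT NULL,  -- name as uploaded
    RelativePath nvarchar(400) NOT NULL,  -- folder under the shared root
    -- Unique on-disk file name derived from the primary key:
    StoredName   AS (CONVERT(nvarchar(36), DocumentId) + N'.bin')
);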
With an images table like that, have you considered moving that article to a one-way (or two-way, if you like) merge publication? That may alleviate some of your issues.
I am having a problem with one database on my SQL Server 2005 production server.
A number of databases are already set up for mirroring; however, when I right-click and go to Properties in SSMS, on one particular database there is no "Mirroring" property page available.
I have done the normal tasks, such as setting Full Recovery model, running a full backup of the database and backing up the logs.
I can't see that this is a server specific issue as other databases are happily mirroring.
I've looked around and I can't see that I'm missing a setting, any help would be appreciated.
Thanks.
EDIT: This is nothing to do with the mirror database yet; I can't get as far as specifying the mirror database, because I cannot see the "Mirroring" page on the principal.
EDIT: I have managed to set up mirroring using T-SQL commands. However, I am still unable to see the "Mirroring" page.
UPDATE: This applies to the Transaction Log Shipping option as well. I can successfully set it up in T-SQL but not through SSMS.
Check these items:
2. The mirror database has to be created from a full backup of the principal database and should be restored WITH NORECOVERY. That is followed by a restore of a transaction log backup of the principal database, so that the log sequence numbers of the mirror and the principal are in sync with each other (see the sketch after this list).
3. The mirror database must have the same name as the principal database.
...
8. Database mirroring is available in the Enterprise, Developer and Standard editions; however, please refer to the Microsoft website for a comparison chart, as some features are not available in the Standard edition. SQL Server Workgroup and Express editions can only be used as witness servers.
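A minimal sketch of item 2 (database and backup file names are placeholders):

-- On the mirror server: restore the principal's full backup and a
-- subsequent log backup, both WITH NORECOVERY, so the database stays
-- in the restoring state that mirroring requires.
RESTORE DATABASE MyDb
    FROM DISK = N'C:\Backups\MyDb_full.bak'
    WITH NORECOVERY;

RESTORE LOG MyDb
    FROM DISK = N'C:\Backups\MyDb_log.trn'
    WITH NORECOVERY;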
Database Mirroring in Microsoft SQL Server 2005
Test monitoring with sp_dbmmonitorresults (Transact-SQL)
I don't have the answer, but I ran across the same symptom yesterday, and I remembered your question here, hahaha. My problem was that I set up database mirroring using the wizards, but one of the systems had a firewall blocking the mirroring port. The wizard setup went all the way to the final part of enabling database mirroring, and then errored out - but at that point, mirroring was already set up. Mirroring worked great, but there was something in the database metadata that wasn't set quite right. Even when I removed the firewall, parts of SSMS acted as if mirroring wasn't set up for that particular database, even though it was.
I then set up additional databases for mirroring (with the firewall off) and they worked great. My solution was to remove mirroring on that database and then add it again, and it worked fine. Doesn't sound like that's worked for you, though.
I ended up having to open a Microsoft Support call for the problem I was facing. Anyway, after some time and a number of support sessions, they worked out that the problem database had an ID of 4 in sys.databases. IDs 1-4 are usually reserved for the system databases, and if a database has any of these IDs, the T-log shipping or Mirroring property pages are not displayed. So somehow our database got the ID 3, and now I had better get on and detach and reattach some databases to reassign IDs.
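For anyone who wants to check for the same condition, the IDs are easy to inspect with a plain catalog query:

-- IDs 1-4 normally belong to master, tempdb, model and msdb; a user
-- database in that range indicates the problem described above.
SELECT database_id, name
FROM sys.databases
ORDER BY database_id;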