Galera Donor and switching to another host for insert statement - galera

We have an application that connects to a Galera Cluster using a connection string with multiple hosts, e.g. with MariaDB Connector/J. If the first host in the connection string becomes a donor and we issue an INSERT statement, the INSERT hangs until the donor stops being a donor and resumes as a regular node. Since we are using MariaDB Connector/J, is there any way to automatically switch to another host when the initial host becomes a donor? We tried using the keyword sequential, but it did not work in this case. Is there a setting in Galera that will allow a donor to immediately execute an INSERT statement and return? How is this normally handled? Resync can take up to 20 minutes, and our program cannot pause that long.
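For reference, a multi-host Connector/J connection string with the sequential failover mode looks roughly like this (host and database names are placeholders):

jdbc:mariadb:sequential://db1:3306,db2:3306,db3:3306/appdb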

I think I might have figured it out. I changed the setting wsrep_sst_donor_rejects_queries to ON. Combined with the keyword sequential, this allowed the driver to skip the donor and move to the next available host. If anyone has an alternative solution, please post it. I thought I should post my solution so it can help others facing a similar challenge.
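For anyone who lands here later, the change is a one-liner (it can also be set in the server config file):

SET GLOBAL wsrep_sst_donor_rejects_queries = ON;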

Related

Galera cluster replication

I built a Galera cluster out of three Pis. The users are automatically copied to all servers, and so are the database names, table names, and column names. But the tables are always empty, so a SELECT on a table only returns entries on the server where they were created. I am absolutely new to the cluster area. Thank you if someone can help. Best regards
I assume that you are using MariaDB.
Make sure there is a Primary Component in the cluster; you can check using the status variable wsrep_cluster_status.
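For example, on any node (the expected value is 'Primary'):

SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status';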
Also, please always check the server log if something goes wrong. The log is your friend.

MySql NetScaler DataStream Content Switching failing to detect select

We are using the new DataStream feature introduced in NetScaler 9 (we're on v10) to do content switching (described here: http://support.citrix.com/proddocs/topic/netscaler/ns-dbproxy-wrapper-con.html). We have a read-only virtual server that balances across several read-only MySql slaves. We use our Content Switching to send all "Selects" over to the read-only server.
The policy is configured as such:
mysql.req.query.command.contains("select")
Our users send multi-part queries to our database server. Most often they are simple, like:
use database;
select col1 from table1;
Sometimes they will put comments at the head of the query. For example:
-- this is my query
select col1 from table1;
What we've found is that if the query simply starts with a SELECT, everything works swimmingly. However, in the cases where there is a USE statement or comments preceding the query, the content switcher fails to detect that this is a SELECT query and it bypasses our read-only virtual server.
I am about to tell all of our developers that they must fully qualify every table in every query and avoid USE statements (yes, this is a good thing anyway), and also that they cannot use comments in their SQL (that's just silly).
Does anyone know how I can configure my NetScaler DataStream Content Switching to ignore comments and use statements?
The decision on where to send the query is made on the first line received after successful authentication, so ignoring the comment won't work.
You could set up a responder policy that sends back an error message saying "Please don't use SQL comments in commands sent to the load-balanced VIP". A bit draconian, but your devs would get the message fairly quickly. There's no way to ignore the comment but still base a decision on the SELECT statement. However, I was under the impression that the SELECT statement extends up to the first semicolon, so in your example above it should (in theory) still find the SELECT statement. I'd need to test that to be certain of the behaviour, however.
Also - the USE statement is critical: it sets the database on which all subsequent commands are issued.
It would be best practice NOT to use a USE statement, but instead to change the select statement to:
select col1 from database.table1;
Once a USE statement is seen, it prevents any subsequent commands from being pipelined down the same connection. So if there are a lot of USE statements, you will not get to enjoy the connection multiplexing functionality that comes with DataStream.
We learned that block-level comments are acceptable, but single-line comments are not.
This is properly ignored:
/* my comment */
These comment styles are treated as part of the query:
-- my comment
# my comment
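So, assuming the behaviour above, a query can keep its comment and still route correctly by switching to the block style, e.g.:

/* this is my query */
select col1 from database.table1;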
Kind of ridiculous, when having SET autocommit=0 is perfectly reasonable. What about that situation?

SQL Server: Replication Error - the row was not found at the Subscriber when applying the replicated command

I got the following error in Activity Monitor:
The row was not found at the Subscriber when applying the replicated command. (Source: SQL Server, Error number: 20598)
After some research, I found out the error occurs because it's trying to delete records that do not exist on the subscriber (they do not exist on the publisher either):
{CALL [dbo].[sp_MSdel_testtable] (241)}
I can manually insert a matching record and the replication will move on. The problem I have right now is that I don't know how many bad records there are. Is there any fast way to do this? I have already spent hours and inserted about 20 records.
Thank you
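For reference, the manual fix amounts to giving the pending DELETE a row to delete, along these lines (the key column name is a guess; 241 is the value from the failing command above):

INSERT INTO dbo.testtable (testtable_id) VALUES (241);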
Re-initialize the subscription via Replication Monitor in SSMS, and generate a new snapshot during re-initialization. This should clear up the missing-record issues.
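The re-initialization can also be scripted; a sketch with hypothetical names, run at the publisher on the published database:

EXEC sp_reinitsubscription
    @publication = N'MyPublication',
    @article = N'all',
    @subscriber = N'SUBSCRIBERSERVER';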
Dude, I [literally] just solved this a few minutes ago. It pushed me about a fifty-cent cab ride short of crazy. The replicated database was part of a third-party solution to offload their app's reporting. As it turns out, one of their app servers was pointed at our reporting database rather than the live database. Hence "row not found at subscriber", because they were inserting and deleting records in the subscriber table.
Use distribution.dbo.sp_helpsubscriptionerrors to find the xact_seqno, then you can use distribution.dbo.sp_browsereplcmds and this query
SELECT *
FROM distribution.dbo.MSarticles
WHERE article_id IN (
    SELECT article_id
    FROM distribution.dbo.MSrepl_commands
    WHERE xact_seqno = 0x /* the xact_seqno found above */)
to get more info. Use distribution.dbo.sp_setsubscriptionxactseqno if you want to get that single transaction "un-stuck", or just re-initialize.
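A sketch of the first step, run at the distributor (all parameter values are hypothetical):

EXEC distribution.dbo.sp_helpsubscriptionerrors
    @publisher = N'PUBSERVER',
    @publisher_db = N'PubDB',
    @publication = N'MyPublication',
    @subscriber = N'SUBSERVER',
    @subscriber_db = N'SubDB';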
I ended up running SQL Profiler on both the publisher and subscriber, then keyed off the tables I found with the queries above. That's when I spotted the DML on the replicated table.
Good luck.
This query will give you all the pending commands for that article that still need to be sent from the distributor to the subscriber.
You will get multiple records with the same xact_seqno_start and xact_seqno_end but with different command ids.
If you can check that all of them are DELETE commands, you can take the primary key column value from each and insert matching rows manually on the subscriber server.
EXEC sp_browsereplcmds
    @article_id = 813,
    @command_id = 1,
    @xact_seqno_start = '0x00099979000038D60001',
    @xact_seqno_end = '0x00099979000038D60001',
    @publisher_database_id = 1

Oracle error: Application failover does not support non-single-SELECT statement

We are getting the following error using Oracle:
[Oracle JDBC Driver]Application failover does not support non-single-SELECT statement
The error occurs when we try to run a DELETE or INSERT over a large number of rows (tens of millions of rows).
I know that the script works, because it had been working for almost a year before these error messages started to pop up.
We know that no one changed any database configuration, so we figure the problem must be the volume of processed data (the row count keeps growing as time goes by...).
But we had never seen that kind of error before! What does it mean? It seems that a failover engine tries to recover from an error, but when Oracle is 'taken over' by this engine, it enters a more restricted state where some kinds of queries do not work (like Windows Safe Mode...).
Well, if this is what is happening, how can I get the real error message? The one that triggered the failover mechanism?
BTW, below is one of the deletes that triggers the error:
delete from odf_ca_rnv_av_snapshot_week
(we tried this one just to test the simplest delete we could think of... a truncate won't help us with the real deal :) )
Check this link.
The error seems to come not from Oracle or JDBC, but from "Progress" (the driver vendor). It means that the driver can only recover SELECT statements, not DML.
You'll have to figure out why the failover occurs in the first place.
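In the meantime, one way to sidestep very large single DML statements is to delete in committed batches, so each statement stays short; a PL/SQL sketch against the table from the question (the batch size is an arbitrary assumption):

BEGIN
  LOOP
    DELETE FROM odf_ca_rnv_av_snapshot_week
     WHERE ROWNUM <= 100000;  -- delete in chunks of 100k rows
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/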

Empty XML Columns during SQL Server replication

We have a merge replication setup on SQL Server that goes like this: one SQL Server at the office, another SQL Server traveling around the world. The publisher is the SQL Server at the office.
In about 1% of cases, two of our tables with a column of XML data type (not bound to a schema) are replicated with rows containing empty XML columns. (This only happens when data is sent from the "traveling server" back home, but then again, data seems to be changed more often there.) We only see this in the production environment (WAN replication).
Things I have verified:
The row is replicated, as the last-modification date on the row is refreshed, but the XML column is empty. Of course, it is not empty on the other SQL Server.
No conflicts are displayed in the replication conflicts UI.
It is not caused by the size of the data inside the XML column, as some are very small.
Usually, the problem occurs in batches (the XML column of 8-9 consecutive rows will be empty).
The problem occurs whether a row was inserted or updated. No pattern there.
The problem seems to occur when the connection is weaker, though this is pure speculation on my part. (We've seen it happen more often when the server was far away than when it was close by.)
Sorry if I have confused some things; I am not really a DBA, more of a dev with knowledge of SQL. But since the application using the database keeps getting blamed for the problems ("the XML column must not be empty!!"), I have taken it to heart to try to find the problem instead of just manually patching the data each time. (What's the use of replication if you have to do that?)
If anyone could help out with this problem, or at least suggest some ways to debug/investigate it, it would be greatly appreciated.
I did search a lot on Google and I did find this: Hot Fix. But we do have the latest service pack, and the problem seems a bit different.
FYI: we have a replication setup locally here, but the problem never occurs. We will be trying a WAN simulator on it as well to see if that helps.
Thanks
Edit: a hotfix is now available for my issue: http://support.microsoft.com/kb/2591902
After logging this issue with Microsoft, we were able to reproduce the problem without a slow link (big thanks to the competent escalation engineer at Microsoft). The repro is a bit different from our scenario, but it highlights the timing issue we were getting perfectly.
Create 2 tables – one parent, one child, with a PK-FK relationship (a sketch of the schema follows these steps)
Insert 2 rows in the parent table
Set up replication – configure the merge agent to run ON DEMAND
Sync
Once all is replicated:
On the PUBLISHER: delete one row from the parent table
On the SUBSCRIBER: insert 2 rows of data that reference the parent id you deleted above
Insert 5 rows of data that reference the parent id that will stay in the table
Sync; the merge agent will fail. Sync again; the merge agent will succeed
The XML data is missing on the publisher for the 5 rows.
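A minimal sketch of the schema from the first step (all names are hypothetical; the XML column is the one that ends up empty):

CREATE TABLE Parent (
    ParentId INT NOT NULL PRIMARY KEY,
    Name NVARCHAR(50)
);
CREATE TABLE Child (
    ChildId INT NOT NULL PRIMARY KEY,
    ParentId INT NOT NULL REFERENCES Parent (ParentId),
    Payload XML
);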
It seems to be a bug in SQL Server 2005/2008 and 2008 R2.
It will be addressed in a hotfix for 2008 and up (as SQL Server 2005 is no longer being altered).
Cheers.
You may want to start out by slapping a band-aid on this perplexing situation to buy some time to fully investigate and fix it (or, more likely, get MS to fix it). SQL Data Compare is an excellent tool that might help.
Figured I'd put an update here, as this issue got me a few gray hairs and I am somewhat closer to a solution now.
I finally had some time to work on this and managed to reproduce the issue in our test environment, using a WAN simulator, slowing down the link and injecting some random packet loss (to best simulate the production environment, where the server is overseas on a really bad line).
After doing some SQL tracing and some verbose logging, here are my conclusions:
When replicating a row with an XML column, the process is done in two steps. First, an insert is done of the full row, but with an empty string in the XML column. Right after, an update is done, this time with the XML column containing the data. Since the link is slow, in some situations a foreign key violation occurred.
In this scenario, Table2 depends on Table1. After finishing replicating Table1, and while replicating Table2 (the enumeration of inserts/updates, which takes time on a slow link), some entries were added to Table1 and Table2. Some inserts on Table2 therefore failed because the Table1 entries were not yet in the database and were only going to be replicated in the next batch. The next time replication ran, there were no more foreign key violations; however, when it tried to insert the row that had previously failed in Table2 (the row with the XML column), the update part was missing (I could see that in SQL Profiler), and that is why the row ended up with an empty XML column after all was done.
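To illustrate the two-step behaviour described above (table and column names are hypothetical):

INSERT INTO Table2 (Id, Table1Id, XmlCol) VALUES (42, 7, '');        -- step 1: the row arrives with an empty XML column
UPDATE Table2 SET XmlCol = N'<payload>...</payload>' WHERE Id = 42;  -- step 2: the XML data follows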
Setting "Enforce for replication" to false on the foreign keys seems to address the problem, however I do still think that this whole process should work with the option set to true.
I logged a support call with Microsoft for this. I have sent the traces and logs to Microsoft and will see what they have to say.
I've read this article: http://msdn.microsoft.com/en-us/library/ms152529(v=SQL.90).aspx. But for me, setting this option to false is kind of a workaround, no?
What do you guys think?
PS: I hope this is clear; I tried to explain it as best I could. English is not my first language.