Using Firebird Trace and Audit service for database replication

Firebird has supported the Trace and Audit services since version 2.5.
The audit log can be configured to filter SQL statements, which is useful for database replication.
Here is a sample log:
INSERT INTO RDB$BACKUP_HISTORY(RDB$BACKUP_ID, RDB$TIMESTAMP, RDB$BACKUP_LEVEL, RDB$GUID, RDB$SCN, RDB$FILE_NAME) VALUES(NULL, 'NOW', ?, ?, ?, ?)
param0 = integer, "0"
param1 = varchar(38), "{855507A3-C794-477C-7E8E-BF5381BB6132}"
param2 = integer, "39"
param3 = varchar(255), "r:\backup.nbk"
If we write a program that parses the committed INSERT, UPDATE and DELETE statements from the log and replays those SQL statements on the replicated database, it seems we could produce some kind of replication strategy here.
However, I can't find any discussion of using the Trace and Audit services for database replication. Is it advisable to do so?
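For reference, statement entries like the sample above are produced by enabling statement logging in the trace/audit configuration. A minimal sketch of such a configuration follows; it is based on the Firebird 2.5-style fbtrace.conf format, and the exact directive names should be checked against the default configuration file shipped with your installation:

<database>
	enabled true
	log_statement_finish true
	time_threshold 0
	max_sql_length 4096
</database>

With log_statement_finish enabled, every completed statement (including its parameter values, as in the log above) is written to the trace output, which is what a replication parser would consume.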

Related

Concurrency: Spanner Java stacktrace on `bq query` UPDATE of a column in a partitioned table

I've created SQL that updates all values in one column:
UPDATE `Blackout_POC2.measurements_2020`
SET visitor.customerId_enc = enc.encrypted
FROM `Blackout_POC2.encrypted` AS enc
WHERE dateAmsterdam="2020-01-05"
AND session.visitId = enc.visitId
AND visitor.customerId = enc.plain
where dateAmsterdam is the partition key of the measurements_2020 table, and encrypted is a non-partitioned table that holds visitId, plain and encrypted fields. The code sets all values in the customerId_enc column with values from the encrypted table.
The code works perfectly fine when I run it one day at a time, but when I run days in parallel, I occasionally (1% or so) get a stacktrace from my bq query <sql> (see below).
I thought that I could modify a partitioned table in parallel as long as each job touched a different partition, but that occasionally seems not to be the case. Could someone point me to where this behaviour is documented, and preferably how to avoid it?
I can probably just rerun that query again, since it is idempotent, but I would like to know why this happens.
Thanks
Bart van Deenen, data-engineer Bol.com
Error in query string: Error processing job 'bolcom-dev-blackout-339:bqjob_r131fa5b3dfd24829_0000016faec5e5da_1': domain: "cloud.helix.ErrorDomain"
code: "QUERY_ERROR" argument: "Could not serialize access to table bolcom-dev-blackout-339:Blackout_POC2.measurements_2020 due to concurrent update"
debug_info: "[CONCURRENT_UPDATE] Table modified by concurrent UPDATE/DELETE/MERGE DML or truncation at 1579185217979. Storage set job_uuid:
03d3d5ec-2118-4e43-9fec-1eae99402c86:20200106, instance_id: ClonedTable-1579183484786, Reason: code=CONCURRENT_UPDATE message=Could not serialize
access to table bolcom-dev-blackout-339:Blackout_POC2.measurements_2020 due to concurrent update debug=Table modified by concurrent UPDATE/DELETE/MERGE
DML or truncation at 1579185217979. Storage set job_uuid: 03d3d5ec-2118-4e43-9fec-1eae99402c86:20200106, instance_id: ClonedTable-1579183484786
errorProto=domain: \"cloud.helix.ErrorDomain\"\ncode: \"QUERY_ERROR\"\nargument: \"Could not serialize access to table bolcom-dev-blackout-339:Blackout_POC2.measurements_2020 due to concurrent update\"\ndebug_info: \"Table modified by concurrent UPDATE/DELETE/MERGE DML or truncation at 1579185217979. Storage set job_uuid: 03d3d5ec-2118-4e43-9fec-1eae99402c86:20200106, instance_id: ClonedTable-1579183484786\"\n\n\tat
com.google.cloud.helix.common.Exceptions$Public.concurrentUpdate(Exceptions.java:381)\n\tat
com.google.cloud.helix.common.Exceptions$Public.concurrentUpdate(Exceptions.java:373)\n\tat
com.google.cloud.helix.server.metadata.StorageTrackerData.verifyStorageSetUpdate(StorageTrackerData.java:224)\n\tat
com.google.cloud.helix.server.metadata.AtomicStorageTrackerSpanner.validateUpdates(AtomicStorageTrackerSpanner.java:1133)\n\tat
com.google.cloud.helix.server.metadata.AtomicStorageTrackerSpanner.updateStorageSets(AtomicStorageTrackerSpanner.java:1310)\n\tat
com.google.cloud.helix.server.metadata.AtomicStorageTrackerSpanner.updateStorageSets(AtomicStorageTrackerSpanner.java:1293)\n\tat
com.google.cloud.helix.server.metadata.MetaTableTracker.updateStorageSets(MetaTableTracker.java:2274)\n\tat
com.google.cloud.helix.server.job.StorageSideEffects$1.update(StorageSideEffects.java:1123)\n\tat
com.google.cloud.helix.server.job.StorageSideEffects$1.update(StorageSideEffects.java:976)\n\tat
com.google.cloud.helix.server.metadata.MetaTableTracker$1.update(MetaTableTracker.java:2510)\n\tat
com.google.cloud.helix.server.metadata.StorageTrackerSpanner.lambda$atomicUpdate$7(StorageTrackerSpanner.java:165)\n\tat
com.google.cloud.helix.server.metadata.AtomicStorageTrackerSpanner$Factory$1.run(AtomicStorageTrackerSpanner.java:3775)\n\tat
com.google.cloud.helix.server.metadata.AtomicStorageTrackerSpanner$Factory.lambda$performJobWithCommitResult$0(AtomicStorageTrackerSpanner.java:3792)\n\tat
com.google.cloud.helix.server.metadata.persistence.SpannerTransactionContext$RetryCountingWork.run(SpannerTransactionContext.java:1002)\n\tat
com.google.cloud.helix.server.metadata.persistence.SpannerTransactionContext$Factory.executeWithResultInternal(SpannerTransactionContext.java:840)\n\tat
com.google.cloud.helix.server.metadata.persistence.SpannerTransactionContext$Factory.executeOptimisticWithResultInternal(SpannerTransactionContext.java:722)\n\tat
com.google.cloud.helix.server.metadata.persistence.SpannerTransactionContext$Factory.lambda$executeOptimisticWithResult$1(SpannerTransactionContext.java:716)\n\tat
com.google.cloud.helix.server.metadata.persistence.SpannerTransactionContext$Factory.executeWithMonitoring(SpannerTransactionContext.java:942)\n\tat
com.google.cloud.helix.server.metadata.persistence.SpannerTransactionContext$Factory.executeOptimisticWithResult(SpannerTransactionContext.java:715)\n\tat
com.google.cloud.helix.server.metadata.AtomicStorageTrackerSpanner$Factory.performJobWithCommitResult(AtomicStorageTrackerSpanner.java:3792)\n\tat
com.google.cloud.helix.server.metadata.AtomicStorageTrackerSpanner$Factory.performJobWithCommitResult(AtomicStorageTrackerSpanner.java:3720)\n\tat
com.google.cloud.helix.server.metadata.StorageTrackerSpanner.atomicUpdate(StorageTrackerSpanner.java:159)\n\tat
com.google.cloud.helix.server.metadata.MetaTableTracker.atomicUpdate(MetaTableTracker.java:2521)\n\tat
com.google.cloud.helix.server.metadata.StatsRequestLoggingTrackers$LoggingStorageTracker.lambda$atomicUpdate$8(StatsRequestLoggingTrackers.java:494)\n\tat
com.google.cloud.helix.server.metadata.StatsRequestLoggingTrackers$StatsRecorder.record(StatsRequestLoggingTrackers.java:181)\n\tat
com.google.cloud.helix.server.metadata.StatsRequestLoggingTrackers$StatsRecorder.record(StatsRequestLoggingTrackers.java:158)\n\tat
com.google.cloud.helix.server.metadata.StatsRequestLoggingTrackers$StatsRecorder.access$500(StatsRequestLoggingTrackers.java:123)\n\tat
com.google.cloud.helix.server.metadata.StatsRequestLoggingTrackers$LoggingStorageTracker.atomicUpdate(StatsRequestLoggingTrackers.java:493)\n\tat
com.google.cloud.helix.server.job.StorageSideEffects.apply(StorageSideEffects.java:1238)\n\tat
com.google.cloud.helix.server.rosy.MergeStorageImpl.commitChanges(MergeStorageImpl.java:936)\n\tat
com.google.cloud.helix.server.rosy.MergeStorageImpl.merge(MergeStorageImpl.java:729)\n\tat
com.google.cloud.helix.server.rosy.StorageStubby.mergeStorage(StorageStubby.java:937)\n\tat
com.google.cloud.helix.proto2.Storage$ServiceParameters$21.handleBlockingRequest(Storage.java:2100)\n\tat
com.google.cloud.helix.proto2.Storage$ServiceParameters$21.handleBlockingRequest(Storage.java:2098)\n\tat
com.google.net.rpc3.impl.server.RpcBlockingApplicationHandler.handleRequest(RpcBlockingApplicationHandler.java:28)\n\tat
....
BigQuery DML operations don't support multi-statement transactions; nevertheless, you can run certain combinations of statements concurrently:
UPDATE and INSERT
DELETE and INSERT
INSERT and INSERT
For example, if you execute two UPDATE statements simultaneously against the table, only one of them will succeed.
Keeping this in mind, and given that UPDATE can run concurrently with INSERT but not with another UPDATE, a likely cause is that you are executing multiple UPDATE statements against the table simultaneously.
You could try using the scripting feature to manage the execution flow and avoid concurrent DML, as sketched below.
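As a rough illustration only (the table, column and date values are taken from the question; your real day list would differ), a script that walks through the days sequentially ensures that only one UPDATE runs against the partitioned table at a time:

DECLARE days ARRAY<DATE> DEFAULT [DATE '2020-01-05', DATE '2020-01-06', DATE '2020-01-07'];
DECLARE i INT64 DEFAULT 0;

WHILE i < ARRAY_LENGTH(days) DO
  -- Each UPDATE finishes before the next one starts, so no two UPDATE jobs
  -- touch measurements_2020 concurrently.
  UPDATE `Blackout_POC2.measurements_2020`
  SET visitor.customerId_enc = enc.encrypted
  FROM `Blackout_POC2.encrypted` AS enc
  WHERE dateAmsterdam = days[OFFSET(i)]
    AND session.visitId = enc.visitId
    AND visitor.customerId = enc.plain;
  SET i = i + 1;
END WHILE;

This trades parallelism for serial execution, which is the point: the CONCURRENT_UPDATE error comes from two mutating jobs racing on the same table, so serializing them (or simply retrying the failed, idempotent job) avoids it.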

Hibernate - First Sql insert takes a long time

I'm trying to insert a record in DB using Hibernate.
The data gets saved into multiple tables in the DB. On the Hibernate side I have a parent entity class with one-to-one and one-to-many mappings to other entity classes.
In debug mode, I could see that the save operation results in multiple sql inserts.
The first insert sql takes a long time, approximately 300 milliseconds.
Please note: This does not include time taken for Session Initialisation, Obtaining JDBC connection etc.
10:46:24.132 [main] DEBUG org.hibernate.SQL - insert into MY_SCHEMA_NAME.PARENT_ENTITY (COLUMN1, COLUMN2, COLUMN3, COLUMN4, COLUMN5, COLUMN6, COLUMN7, COLUMN8, COLUMN9, COLUMN10, COLUMN11, COLUMN12, COLUMN13, COLUMN14) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
If I execute the same SQL from another tool (Oracle SQL Developer), it takes about 20 milliseconds.
The subsequent SQL inserts executed by Hibernate take only about 15-20 milliseconds.
The question is: why does the first SQL insert issued through Hibernate take so much time, nearly 10 times as long as the subsequent inserts?
To answer this question you need to learn:
how the query is processed on the database side - see: Oracle Database tuning guide - SQL processing
how the query is cached on the client side - see: Oracle Database JDBC developer's guide - Statement and resultset caching
In short: when the SQL query comes from the client to the database for the very first time, the database performs some additional steps before executing the statement (see the first link). After this first execution, the SQL plan for the statement is placed into the shared pool (a kind of cache), and the database can skip the most time-consuming tasks (hard parsing, i.e. optimization and row source generation) for all subsequent requests for this particular query; this is the "soft parse" path in the diagram from the link above.
If the shared pool is cleared (for example after a database restart), these steps must be repeated for the first incoming query, and this takes additional time.
The statement is also flushed from the cache when some table or view referenced by the query is changed (for example by an ALTER TABLE or CREATE/DROP INDEX command); the database then performs a hard parse again, which again takes additional time.
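If you want to confirm this on the database side, one possible check (assuming you can query the V$SQL view) is to look at the parse and load counters for the statement; LOADS roughly tracks how often the cursor had to be loaded and hard parsed, while PARSE_CALLS and EXECUTIONS grow with every use:

-- Inspect parse/load statistics for the insert from the question
-- (the table name is taken from the Hibernate log above; adjust the filter as needed).
SELECT sql_id, parse_calls, loads, executions, first_load_time
FROM   v$sql
WHERE  sql_text LIKE 'insert into MY_SCHEMA_NAME.PARENT_ENTITY%';

If LOADS stays at 1 while EXECUTIONS climbs, the statement is being soft parsed after the first, expensive execution.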
On the client side, when the statement is executed for the first time it is placed into the statement cache (see the second link), and this also takes some additional time. For all subsequent invocations the statement is retrieved from the cache, which improves performance.
When the database connection is closed (for example on an application restart), the cache is cleared, and the next invocation of the statement again takes additional time.
You can explicitly disable statement caching (see the second link for detailed instructions), and most likely you will see that all executed statements then take more time.

OLE DB provider "SQLNCLI" for linked server was unable to begin a distributed transaction

I am trying to call a stored procedure in SQL Server 2008 and store the fetched data into a local temp table.
When I try to run it, I receive the following error:
The operation could not be completed because OLE DB provider "SQLNCLI"
for linked server was unable to begin a distributed transaction
My code is as follows:
create table #temp(
col1 int,
col2 varchar(50)
)
insert into #temp
exec [192.168.0.9].[db1].[dbo].[tablename] @usr_id=3
You can prevent the linked server call from requiring a distributed transaction by setting the server option 'remote proc transaction promotion' to 'false':
EXEC sp_serveroption 'servername', 'remote proc transaction promotion', 'false'
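If you want to confirm the option took effect (SQL Server 2008 and later), you can check the corresponding flag in sys.servers; this is just a sanity check and is not required:

SELECT name, is_remote_proc_transaction_promotion_enabled
FROM sys.servers
WHERE name = 'servername';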
Here's the same issue
"Linked server was unable to begin a distributed transaction" errors are caused by issues with MSDTC (the Microsoft Distributed Transaction Coordinator). Issues can arise from a number of problems, including MSDTC not running, being blocked by a firewall, and others.
If you require transactions, you pretty much have to debug the problem yourself, since it is environmental. If you can rewrite the code to avoid requiring a distributed transaction, your life will be simpler. Just to make sure it is an MSDTC issue, write a simple query that does not depend on MSDTC, e.g.:
create table #temp( col1 int, col2 varchar(50) )
insert into #temp
select col1, col2 from [192.168.0.9].[db1].[dbo].[tablename] where usr_id=3
If this works, it's definitely MSDTC (and perhaps an avoidable problem).
Edit: I spent a little while looking for MSDTC debugging resources. http://www.sqlwebpedia.com/content/msdtc-troubleshooting was pretty good, as was http://www.mssqltips.com/sqlservertip/2083/troubleshooting-sql-server-distributed-transactions-part-1-of-2/; together they cover just about everything I can recall having to debug for MSDTC problems (and some others too).

IDbCommand's not being properly enlisted in NHibernate transaction

I have two IDbCommand objects that are created from an NHibernate session, and they are enlisted in a transaction via the NHibernate session. The first database command inserts a value into an Oracle global temporary table and the second command reads values from the table. With an Oracle GTT, a transaction is needed for both commands in order to preserve the data in the GTT.
The strange thing is that the second command reads values from the GTT as expected when run on one server, but the exact same code doesn't work on the other server. What's even stranger is that the first request on the non-working server works if it happens immediately after the IIS worker processes have been recycled. Each request after that does not work: specifically, the values in the GTT are not maintained after being inserted.
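For background, how long the rows survive depends on how the temporary table was declared. The definition below is only hypothetical (the question does not show the real DDL for TEMP_TABLE), but with ON COMMIT DELETE ROWS the inserted row is only visible to commands that share the transaction that inserted it, which is why both commands must be enlisted in the same transaction:

-- Hypothetical DDL for illustration; the real table definition is not shown in the question.
-- ON COMMIT DELETE ROWS removes the rows as soon as the inserting transaction ends,
-- so the insert and the select must run inside one shared transaction.
CREATE GLOBAL TEMPORARY TABLE TEMP_TABLE (
    value NUMBER
) ON COMMIT DELETE ROWS;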
ISession session = sessionFactory.OpenSession();
ITransaction transaction = session.BeginTransaction();
IDbCommand cmdInsert = session.Connection.CreateCommand();
transaction.Enlist(cmdInsert);
cmdInsert.CommandText = "insert into TEMP_TABLE values (1)";
cmdInsert.ExecuteNonQuery();
IDbCommand cmdRead = session.Connection.CreateCommand();
transaction.Enlist(cmdRead);
cmdRead.CommandText = "select * from TEMP_TABLE";
// Nothing is returned here after the second request
IDataReader reader = cmdRead.ExecuteReader();
transaction.Commit();
Why would the transaction that is created from an NHibernate session not properly enlist IDbCommands after the first request to an IIS server?
We ended up using the Oracle Data Provider for .NET (ODP.NET) driver and replacing the deprecated Microsoft System.Data.OracleClient driver. This fixed the transaction support. Not sure why the deprecated driver worked on one server and not the other, but I guess it's deprecated, so I'm not going to investigate it further.

Sybase ASE: "Your server command encountered a deadlock situation"

When running a stored procedure (from a .NET application) that does an INSERT and an UPDATE, I sometimes (but not that often, really) get this error seemingly at random:
ERROR [40001] [DataDirect][ODBC Sybase Wire Protocol driver][SQL Server]Your server command (family id #0, process id #46) encountered a deadlock situation. Please re-run your command.
How can I fix this?
Thanks.
Your best bet for solving your deadlocking issue is to set "print deadlock information" to on using
sp_configure "print deadlock information", 1
Every time there is a deadlock, this will print information about which processes were involved and what SQL they were running at the time of the deadlock.
If your tables are using allpages locking, switching to datarows or datapages locking can reduce deadlocks. If you do this, make sure to gather new statistics on the tables and to recreate the indexes, views, stored procedures and triggers that access the changed tables. If you don't, you will either get errors or not see the full benefit of the change, depending on which ones are not recreated.
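A rough sketch of what that change looks like in Sybase ASE follows; 'mytable1' is just a placeholder for your own table, and you would repeat the steps for each affected table:

-- Switch from allpages to row-level locking (datapages is the other option)
alter table mytable1 lock datarows
go
-- Regather statistics after the locking scheme change
update index statistics mytable1
go
-- Mark dependent stored procedures for recompilation
exec sp_recompile mytable1
go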
I have a set of long-running apps which occasionally overlap in their table access, and Sybase will throw this error when that happens. If you check the Sybase server log, it will give you the complete information on why it happened: the SQL that was involved and the two processes trying to get a lock, usually one trying to read and the other doing something like a delete. In my case the apps run in separate JVMs, so they can't synchronize; I just have to clean up periodically.
Assuming that your tables are properly indexed (and that you are actually using those indexes - always worth checking via the query plan) you could try breaking the component parts of the SP down and wrapping them in separate transactions so that each unit of work is completed before the next one starts.
begin transaction
update mytable1
set mycolumn = "test"
where ID=1
commit transaction
go
begin transaction
insert into mytable2 (mycolumn) select mycolumn from mytable1 where ID = 1
commit transaction
go