I am in the process of rewriting an MS Access database to SQL Server and have found a strange issue in Access that I am hoping someone can help with.
I have a table, let's call it 'Main', with a primary key on Account that is indexed and doesn't allow duplicates. Seems simple enough, but my issue occurs when data is inserted.
My INSERT query is (the number of fields has been limited for brevity):
INSERT INTO Main (Account, SentDate, Amount)
SELECT C.Account, C.SentDate, C.Amount
FROM
(CALLS C LEFT JOIN Bals B ON C.Account = B.ACCT_ID)
LEFT JOIN AggAnt A ON C.Account = A.Account
The issue is this: if I run the SELECT portion of my query I get 2365 records, but when I run the INSERT I get 2364 records. So I did some checking and found that one Account is duplicated; the difference between the two records is the SentDate and the Amount. But Access is inserting only one of the records and not throwing any kind of error message. There is nothing in the query that says to select the most recent date, etc.
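A duplicate check along these lines (a sketch reusing the same FROM clause as the INSERT above) shows which accounts will collide with the primary key on Main:
SELECT C.Account, COUNT(*) AS DupeCount
FROM
(CALLS C LEFT JOIN Bals B ON C.Account = B.ACCT_ID)
LEFT JOIN AggAnt A ON C.Account = A.Account
GROUP BY C.Account
HAVING COUNT(*) > 1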
Sample Data:
Account SentDate Amount
12345678 8/1/2011 123.00
23456789 8/1/2011 45678.00
34567890 8/1/2011 7850.00
45678912 8/1/2011 635.00
45678912 5/1/2011 982.00
56789123 8/1/2011 2639.00
In the sample, one account is duplicated: 45678912. When I run my INSERT, I get no errors and only the record from 8/1/2011 is inserted.
Why is Access not throwing an error when this violates the PK on the table? Is there some quirk in Access that selects one record and just skips the other?
I am totally stumped by this issue so any help would be great.
How are you running the query? If you're using DoCmd.RunSQL, switch to using the .Execute method of a DAO database object, and use dbFailOnError.
Dim db As DAO.Database
Dim strInsert As String
strInsert = "your insert statement"
Set db = CurrentDb
db.Execute strInsert, dbFailOnError
Set db = Nothing
Edit: If Main is an ODBC link to a SQL Server table, I would examine the Errors Collection (DAO) after db.Execute strInsert, dbFailOnError.
After HansUp pointed me in the direction of checking for SetWarnings = False, I found it buried in my code, which is why there was no warning message about the records not being inserted due to primary key violations.
A word of caution: make sure you really want these messages suppressed.
Is there some quirk in Access to [update] one record and just skip the other?
Yes, you can control this behaviour at the engine level (also at the recordset level if using OLE DB).
For OLE DB (e.g. ADO) the setting is Jet OLEDB:Global Partial Bulk Ops:
determines the behavior of the Jet database engine when SQL DML bulk
operations fail. When set to allow partial completion of bulk
operations, inconsistent changes can occur because operations on some
records could succeed and others could fail. When set to allow no
partial completion of bulk operations, all changes are rolled back if
a single error occurs. The Jet OLEDB:Global Partial Bulk Ops
property setting can be overridden on a per-Recordset basis by
setting the Jet OLEDB:Partial Bulk Ops property in the
Properties collection of a Recordset object.
Note the default is to allow no partial completion of bulk operations.
Related
I've inherited an application that uses Access as the database. I have a field (Date/Time, not required) with some values I need to set to null after a specific date.
To update it I have a little program that runs the query and tells me how many rows were affected. (Access isn't installed on the server and the mdb is constantly locked, so I can't download, update, and replace it. But I can use a simple VB program.)
Anyway I need to set some values to null, and to do that I use the following query:
UPDATE [AppPosting] SET [approvedTime] = NULL WHERE [approvedTime] >= #25/10/2022 00:00:00#
Running it gives me "82 affected rows", and running it again gives me the same number of affected rows. If I open Access and look in the (local copy of the) database I can see they haven't been updated. If I run the same query in Access I also get 82 affected rows, but then the values really are set to null.
So what gives? My update through OleDbConnection says it updates, but doesn't. Whereas through Access it says it updated, and it actually does?
I've created SQL that updates all values in one column:
UPDATE `Blackout_POC2.measurements_2020`
SET visitor.customerId_enc = enc.encrypted
FROM `Blackout_POC2.encrypted` AS enc
WHERE dateAmsterdam="2020-01-05"
AND session.visitId = enc.visitId
AND visitor.customerId = enc.plain
where dateAmsterdam is the partition key of the measurements_2020 table, and encrypted is a non-partitioned table that holds the visitId, plain, and encrypted fields. The code sets all values in the customerId_enc column from the encrypted table.
The code works perfectly fine when I run it one day at a time, but when I run days in parallel, I occasionally (1% or so) get a stack trace from my BigQuery query (see below).
I thought that I could modify a partitioned table in parallel as long as each job stays within its own partition, but that seems to occasionally not be the case. Could someone point me to where this is documented, and preferably how to avoid it?
I can probably just rerun the query, since it is idempotent, but I would like to know why this happens.
Thanks
Bart van Deenen, data-engineer Bol.com
Error in query string: Error processing job 'bolcom-dev-blackout-339:bqjob_r131fa5b3dfd24829_0000016faec5e5da_1': domain: "cloud.helix.ErrorDomain"
code: "QUERY_ERROR" argument: "Could not serialize access to table bolcom-dev-blackout-339:Blackout_POC2.measurements_2020 due to concurrent update"
debug_info: "[CONCURRENT_UPDATE] Table modified by concurrent UPDATE/DELETE/MERGE DML or truncation at 1579185217979. Storage set job_uuid:
03d3d5ec-2118-4e43-9fec-1eae99402c86:20200106, instance_id: ClonedTable-1579183484786, Reason: code=CONCURRENT_UPDATE message=Could not serialize
access to table bolcom-dev-blackout-339:Blackout_POC2.measurements_2020 due to concurrent update debug=Table modified by concurrent UPDATE/DELETE/MERGE
DML or truncation at 1579185217979. Storage set job_uuid: 03d3d5ec-2118-4e43-9fec-1eae99402c86:20200106, instance_id: ClonedTable-1579183484786
errorProto=domain: \"cloud.helix.ErrorDomain\"\ncode: \"QUERY_ERROR\"\nargument: \"Could not serialize access to table bolcom-dev-
blackout-339:Blackout_POC2.measurements_2020 due to concurrent update\"\ndebug_info: \"Table modified by concurrent UPDATE/DELETE/MERGE DML or
truncation at 1579185217979. Storage set job_uuid: 03d3d5ec-2118-4e43-9fec-1eae99402c86:20200106, instance_id: ClonedTable-1579183484786\"\n\n\tat
com.google.cloud.helix.common.Exceptions$Public.concurrentUpdate(Exceptions.java:381)\n\tat
com.google.cloud.helix.common.Exceptions$Public.concurrentUpdate(Exceptions.java:373)\n\tat
com.google.cloud.helix.server.metadata.StorageTrackerData.verifyStorageSetUpdate(StorageTrackerData.java:224)\n\tat
com.google.cloud.helix.server.metadata.AtomicStorageTrackerSpanner.validateUpdates(AtomicStorageTrackerSpanner.java:1133)\n\tat
com.google.cloud.helix.server.metadata.AtomicStorageTrackerSpanner.updateStorageSets(AtomicStorageTrackerSpanner.java:1310)\n\tat
com.google.cloud.helix.server.metadata.AtomicStorageTrackerSpanner.updateStorageSets(AtomicStorageTrackerSpanner.java:1293)\n\tat
com.google.cloud.helix.server.metadata.MetaTableTracker.updateStorageSets(MetaTableTracker.java:2274)\n\tat
com.google.cloud.helix.server.job.StorageSideEffects$1.update(StorageSideEffects.java:1123)\n\tat
com.google.cloud.helix.server.job.StorageSideEffects$1.update(StorageSideEffects.java:976)\n\tat
com.google.cloud.helix.server.metadata.MetaTableTracker$1.update(MetaTableTracker.java:2510)\n\tat
com.google.cloud.helix.server.metadata.StorageTrackerSpanner.lambda$atomicUpdate$7(StorageTrackerSpanner.java:165)\n\tat
com.google.cloud.helix.server.metadata.AtomicStorageTrackerSpanner$Factory$1.run(AtomicStorageTrackerSpanner.java:3775)\n\tat com.google.cloud.helix.se
rver.metadata.AtomicStorageTrackerSpanner$Factory.lambda$performJobWithCommitResult$0(AtomicStorageTrackerSpanner.java:3792)\n\tat
com.google.cloud.helix.server.metadata.persistence.SpannerTransactionContext$RetryCountingWork.run(SpannerTransactionContext.java:1002)\n\tat com.googl
e.cloud.helix.server.metadata.persistence.SpannerTransactionContext$Factory.executeWithResultInternal(SpannerTransactionContext.java:840)\n\tat com.goo
gle.cloud.helix.server.metadata.persistence.SpannerTransactionContext$Factory.executeOptimisticWithResultInternal(SpannerTransactionContext.java:722)\n
\tat com.google.cloud.helix.server.metadata.persistence.SpannerTransactionContext$Factory.lambda$executeOptimisticWithResult$1(SpannerTransactionContex
t.java:716)\n\tat
com.google.cloud.helix.server.metadata.persistence.SpannerTransactionContext$Factory.executeWithMonitoring(SpannerTransactionContext.java:942)\n\tat co
m.google.cloud.helix.server.metadata.persistence.SpannerTransactionContext$Factory.executeOptimisticWithResult(SpannerTransactionContext.java:715)\n\ta
t com.google.cloud.helix.server.metadata.AtomicStorageTrackerSpanner$Factory.performJobWithCommitResult(AtomicStorageTrackerSpanner.java:3792)\n\tat
com.google.cloud.helix.server.metadata.AtomicStorageTrackerSpanner$Factory.performJobWithCommitResult(AtomicStorageTrackerSpanner.java:3720)\n\tat
com.google.cloud.helix.server.metadata.StorageTrackerSpanner.atomicUpdate(StorageTrackerSpanner.java:159)\n\tat
com.google.cloud.helix.server.metadata.MetaTableTracker.atomicUpdate(MetaTableTracker.java:2521)\n\tat com.google.cloud.helix.server.metadata.StatsRequ
estLoggingTrackers$LoggingStorageTracker.lambda$atomicUpdate$8(StatsRequestLoggingTrackers.java:494)\n\tat
com.google.cloud.helix.server.metadata.StatsRequestLoggingTrackers$StatsRecorder.record(StatsRequestLoggingTrackers.java:181)\n\tat
com.google.cloud.helix.server.metadata.StatsRequestLoggingTrackers$StatsRecorder.record(StatsRequestLoggingTrackers.java:158)\n\tat
com.google.cloud.helix.server.metadata.StatsRequestLoggingTrackers$StatsRecorder.access$500(StatsRequestLoggingTrackers.java:123)\n\tat
com.google.cloud.helix.server.metadata.StatsRequestLoggingTrackers$LoggingStorageTracker.atomicUpdate(StatsRequestLoggingTrackers.java:493)\n\tat
com.google.cloud.helix.server.job.StorageSideEffects.apply(StorageSideEffects.java:1238)\n\tat
com.google.cloud.helix.server.rosy.MergeStorageImpl.commitChanges(MergeStorageImpl.java:936)\n\tat
com.google.cloud.helix.server.rosy.MergeStorageImpl.merge(MergeStorageImpl.java:729)\n\tat
com.google.cloud.helix.server.rosy.StorageStubby.mergeStorage(StorageStubby.java:937)\n\tat
com.google.cloud.helix.proto2.Storage$ServiceParameters$21.handleBlockingRequest(Storage.java:2100)\n\tat
com.google.cloud.helix.proto2.Storage$ServiceParameters$21.handleBlockingRequest(Storage.java:2098)\n\tat
com.google.net.rpc3.impl.server.RpcBlockingApplicationHandler.handleRequest(RpcBlockingApplicationHandler.java:28)\n\tat
....
BigQuery DML operations don't support multi-statement transactions; nevertheless, you can execute some statement combinations concurrently:
UPDATE and INSERT
DELETE and INSERT
INSERT and INSERT
For example, if you execute two UPDATE statements simultaneously against the table, only one of them will succeed.
Keeping this in mind, since only the combinations above can run concurrently, another possible cause is that you are executing multiple UPDATE statements simultaneously.
You could try using the scripting feature to manage the execution flow and prevent concurrent DML.
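For instance, a minimal scripting sketch (the list of days here is just an assumption) that runs the per-day UPDATE from the question sequentially inside a single script, so no two DML statements hit the table at the same time:
-- Assumed list of partition days to process; in practice this could be built dynamically.
DECLARE days ARRAY<DATE> DEFAULT [DATE '2020-01-05', DATE '2020-01-06', DATE '2020-01-07'];
DECLARE i INT64 DEFAULT 0;
WHILE i < ARRAY_LENGTH(days) DO
  -- Same statement as in the question, executed one day at a time within one job.
  UPDATE `Blackout_POC2.measurements_2020`
  SET visitor.customerId_enc = enc.encrypted
  FROM `Blackout_POC2.encrypted` AS enc
  WHERE dateAmsterdam = days[OFFSET(i)]
    AND session.visitId = enc.visitId
    AND visitor.customerId = enc.plain;
  SET i = i + 1;
END WHILE;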
Suppose I fetch a ResultSet (RS) based on certain conditions and start looping through it; then, in certain situations, I update, insert, or delete records, which may have been part of this RS, using separate prepared statements.
How does this affect the result set? My inclination is to think that since the Statement which fetched this RS was executed earlier in the process, the RS will now be blind to the changes made by my prepared statements.
Pseudocode:
Prepare Statement ps1
execute ps1 -> get Result Set rs1
loop through rs1
{
Update or delete records using other prepared statements
}
Read Consistency
Oracle guarantees that the set of data seen by a statement is consistent with respect to a single point in time and does not change during statement execution (statement-level read consistency)
That is why, if you have a query such as
insert into t
select * from t;
Oracle will simply duplicate all rows without going into an infinite loop or raising an error.
There are other implications because of this.
1) Oracle reads from the rollback segment to provide you with this read-consistent image of your data. So, if your rollback segments are not correctly sized, or you commit across fetches, you'll get the "Snapshot too old" error, since your rollback data is no longer available.
OK, so if that is the case, is it possible to refresh it while making updates? I mean, aside from making the cursor updatable and using the built-in functions of the result set.
2) Each query sees the data as of the point in time it began. If by refresh you mean re-firing the query, then the data you see might be different again, if you commit in your PL/SQL body or within a PL/SQL loop, or if other transactions are running in your system concurrently.
It doesn't. The result set of a query/cursor is kept by the database, even if you alter or remove the rows that are the basis of this result set. So you are correct: it is blind to changes made after the statement is executed.
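As a small illustration (a sketch; t is the table from the insert example above and its id column is a placeholder), a cursor loop can delete the very rows it is iterating over and still sees every row as it existed when the cursor was opened:
BEGIN
  -- The cursor is opened here and is read-consistent from this point on.
  FOR r IN (SELECT id FROM t) LOOP
    -- Deleting the row does not remove it from the set the cursor keeps fetching.
    DELETE FROM t WHERE id = r.id;
  END LOOP;
  COMMIT;
END;
/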
I have this code in a trigger.
if isnull(@d_email,'') <> isnull(@i_email,'')
begin
    update server2.database2.dbo.Table2
    set email = @i_email
    where user_id = (select user_id from server2.database2.dbo.Table1 where login = @login)
end
I would like to update a table on another DB server; both are MSSQL. The query above works for me, but it is taking over 10 seconds to complete. Table2 has over 200k records. When I run the execution plan it says that the remote scan has a 99% cost.
Any help would be appreciated.
First, the obvious: check the indexes on the linked server. If I saw this problem without the linked-server aspect, that would be the first thing I would check.
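For instance (index names and the exact columns are assumptions), the columns used by the trigger's lookup and join are the ones worth covering, run directly on server2:
-- Run on server2, in database2.
create index IX_Table1_login on dbo.Table1 (login)
create index IX_Table2_user_id on dbo.Table2 (user_id)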
Suggestion:
Instead of embedding the UPDATE in the server 1 trigger, create a stored procedure on the linked server and update the records by calling the stored procedure.
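A rough sketch of that idea (the procedure name and column sizes are assumptions; the tables are the ones from the question): the join and the UPDATE run locally on server2, and the trigger only makes a single remote procedure call (which requires RPC Out to be enabled on the linked server).
-- On server2, in database2: do the work locally.
create procedure dbo.UpdateEmailForLogin
    @login varchar(50),
    @email varchar(255)
as
begin
    set nocount on;
    update t2
    set t2.email = @email
    from dbo.Table2 t2
    inner join dbo.Table1 t1 on t1.user_id = t2.user_id
    where t1.login = @login;
end
go
-- On server1, inside the trigger, replace the cross-server UPDATE with:
exec server2.database2.dbo.UpdateEmailForLogin @login = @login, @email = @i_email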
Try to remove the sub-query from the UPDATE:
if isnull(@d_email,'') <> isnull(@i_email,'')
begin
    update t2
    set email = @i_email
    from server2.database2.dbo.Table2 t2
    inner join server2.database2.dbo.Table1 t1
        on t1.user_id = t2.user_id
    where t1.login = @login
end
Whoa, bad trigger! Never, and I mean never, never write a trigger assuming only one record will be inserted, updated, or deleted. You SHOULD NOT use variables this way in a trigger. Triggers operate on batches of data; if you assume one record, you will create integrity problems with your database.
What you need to do is join to the inserted table rather than using a variable for the value.
Also, updating a remote server from a trigger may not be such a dandy idea. If the remote server goes down, you can't insert anything into the original table. If the data can be somewhat less than real time, the normal technique is to have the trigger write to a table on the same server and then have a job pick up the new info every 5-10 minutes. That way, if the remote server is down, the records can still be inserted and are stored until the job can pick them up and send them to the remote server.
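A rough sketch of both points together (dbo.Users, its user_id key, and the queue table are hypothetical names): the trigger is set-based, joins to the inserted and deleted pseudo-tables instead of relying on single-row variables, and only writes to a local queue table that a scheduled job would later push to server2.
-- Local staging table on the same server as the trigger.
create table dbo.EmailSyncQueue
(
    login     varchar(50)  not null,
    email     varchar(255) null,
    queued_at datetime     not null default (getdate())
)
go
create trigger trg_Users_EmailChanged
on dbo.Users
after update
as
begin
    set nocount on;
    -- Handles any number of updated rows; no single-row assumption.
    insert into dbo.EmailSyncQueue (login, email)
    select i.login, i.email
    from inserted i
    inner join deleted d on d.user_id = i.user_id
    where isnull(d.email, '') <> isnull(i.email, '');
end
go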
When running a stored procedure (from a .NET application) that does an INSERT and an UPDATE, I sometimes (but not that often, really) randomly get this error:
ERROR [40001] [DataDirect][ODBC Sybase Wire Protocol driver][SQL Server]Your server command (family id #0, process id #46) encountered a deadlock situation. Please re-run your command.
How can I fix this?
Thanks.
Your best bet for solving your deadlocking issue is to set "print deadlock information" to on using:
sp_configure "print deadlock information", 1
Every time there is a deadlock, this will print information about which processes were involved and what SQL they were running at the time of the deadlock.
If your tables are using allpages locking, switching to datarows or datapages locking can reduce deadlocks. If you do this, make sure to gather new statistics on the tables and recreate the indexes, views, stored procedures, and triggers that access the changed tables. If you don't, you will either get errors or not see the full benefit of the change, depending on which ones are not recreated.
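A minimal sketch of that change (mytable1 is just the example table name used further down): switch the locking scheme, then refresh statistics so the optimizer has current information.
-- Switch from allpages to row-level locking for the table involved in the deadlocks.
alter table mytable1 lock datarows
go
-- Regenerate statistics after the locking scheme change.
update statistics mytable1
go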
I have a set of long-running apps whose table access occasionally overlaps, and Sybase will throw this error. If you check the Sybase server log, it gives you the complete information on why it happened: the SQL involved in the two processes trying to get a lock, usually one trying to read and the other doing something like a delete. In my case the apps run in separate JVMs, so I can't synchronize them; I just have to clean up periodically.
Assuming that your tables are properly indexed (and that you are actually using those indexes - always worth checking via the query plan), you could try breaking the component parts of the SP down and wrapping them in separate transactions, so that each unit of work is completed before the next one starts:
begin transaction
update mytable1
set mycolumn = "test"
where ID=1
commit transaction
go
begin transaction
insert into mytable2 (mycolumn) select mycolumn from mytable1 where ID = 1
commit transaction
go