I've inherited an application that uses Access as the database. I have a field (Date/Time, not required) with some values I need to set to null after a specific date.
To update it I have a little program that runs the query and tells me how many rows were affected. (Access isn't installed on the server, and the .mdb is constantly locked, so I can't download, update, and replace it. But I can use a simple VB program.)
Anyway I need to set some values to null, and to do that I use the following query:
UPDATE [AppPosting] SET [approvedTime] = NULL WHERE [approvedTime] >= #25/10/2022 00:00:00#
Running it gives me "82 affected rows", and running it again gives me the same number of affected rows. If I open Access and look in the (local copy of the) database I can see they haven't been updated. If I run the same query in Access I also get 82 affected rows, but there the values really are set to null.
So what gives? My update through OleDbConnection says it updated rows but doesn't actually change anything, whereas through Access it reports the same count and actually updates?
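(One variable worth ruling out is the date literal: Jet reads #...# literals month-first where it can, so a year-first literal removes the ambiguity. A sketch using the question's table and column names; the count is a sanity check to run over the same OleDbConnection before and after the update:)
SELECT COUNT(*) AS PendingRows
FROM [AppPosting]
WHERE [approvedTime] >= #2022-10-25 00:00:00#

UPDATE [AppPosting]
SET [approvedTime] = NULL
WHERE [approvedTime] >= #2022-10-25 00:00:00#
If the count still comes back as 82 after a "successful" update, the connection string is almost certainly pointing at a different copy of the .mdb than the one being opened in Access.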
Related
I'm trying to improve the current auditing I have with one of my databases. Currently this is done with Access data macros; however, I use VB.NET as a front end.
Most of the updates use a data adapter and use the following command to update the backend
CurrentDataAdapter.Update
For the purposes of inserting the information into an audit table, I would like to be able to list the SQL commands that take place with this. Using the command text just gives a single SQL command with the parameter placeholders:
CurrentDataAdapter.UpdateCommand.CommandText
Gives
UPDATE [Table] Set F1=#P1 WHERE ID=#P2
However, I'm more after a list of
UPDATE [Table] SET F1=a WHERE ID=1
UPDATE [Table] Set F2=b WHERE ID=2
UPDATE [Table] SET F3=c WHERE ID=3
Is this possible? (Multiple SQL statements in one command are not supported with an Access backend.)
Many thanks
What is the DB2 equivalent of SQL Server's SET NOCOUNT ON?
"From the SQL Server documentation:
SET NOCOUNT ON... Stops the message that shows the count of the number of rows affected by a Transact-SQL statement or stored procedure from being returned as part of the result set...
For stored procedures that contain several statements that do not return much actual data, or for procedures that contain Transact-SQL loops, setting SET NOCOUNT to ON can provide a significant performance boost, because network traffic is greatly reduced."
My problem is that if I update a row in a table, a trigger runs that updates another row in a different table.
In Hibernate I get this error: "Batch update returned unexpected row count from update; actual row count: 2; expected: 1".
I think DB2 returns 2 instead of 1 because of the trigger, which is correct. However, is there any way to make DB2 return 1 without removing the trigger, or can I disable the check in Hibernate?
How do I handle this issue?
Can anyone please tell me the equivalent of SET NOCOUNT ON (SQL Server) in DB2?
There is no equivalent to SET NOCOUNT in DB2 because DB2 does not produce any informational messages after a DML statement has completed successfully. Instead, the DB2 driver stores that type of information in a local, connection-specific data structure called the SQL communications area (SQLCA). It is up to the application (or whatever database framework or API the application is using) to decide which SQLCA variables to examine after executing each statement.
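For example, in a DB2 SQL PL routine the affected-row count is something the code asks for explicitly rather than a message the server sends back. A minimal sketch with hypothetical table and procedure names:
-- DB2 SQL PL sketch; "accounts" and "deactivate_account" are made-up names.
-- (From the CLP you would create this with an alternate statement terminator.)
CREATE PROCEDURE deactivate_account (IN p_id INTEGER)
LANGUAGE SQL
BEGIN
  DECLARE v_rows INTEGER;

  UPDATE accounts SET active = 0 WHERE id = p_id;

  -- the affected-row count is read on demand; nothing is pushed to the client
  GET DIAGNOSTICS v_rows = ROW_COUNT;
END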
In your case, your application has delegated its database interaction to Hibernate, which compares the number of affected rows reported by DB2 in the SQLCA with the number of rows Hibernate expected its UPDATE statement to change. Since Hibernate isn't aware of the AFTER UPDATE trigger you created, it expects the update statement to affect only one row, but the SQLCA shows that two rows were updated (one by Hibernate's update statement, and one by the AFTER UPDATE trigger on that table), so Hibernate throws an exception to complain about the discrepancy.
This leaves you with two options:
Drop the trigger from that table and instead define an equivalent followup action in Hibernate. This is not an ideal solution if other applications that don't use Hibernate are also updating the table in question, but that's the sort of decision a team gets to make when they inflict Hibernate on a database.
Keep the AFTER UPDATE trigger where it is in DB2, and examine your options for defining Hibernate object mappings to determine if there's a way to at least temporarily disable Hibernate's row count verification logic. One approach that looks particularly encouraging is to specify the ResultCheckStyle.NONE option as part of a custom @SQLUpdate annotation.
For SQL Server and Sybase, there appears to be a third option: Hide the activity of an AFTER UPDATE trigger from Hibernate by activating SET NOCOUNT ON within the trigger. Unfortunately, there is no equivalent in DB2 (or Oracle, for that matter) that allows an application to selectively skip certain activities when tallying the number of affected rows.
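For reference, that SQL Server workaround looks roughly like this (table, column, and trigger names are made up for the sketch):
CREATE TRIGGER trg_Orders_Audit ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    -- Suppresses the extra "rows affected" count produced by this trigger's
    -- own statements, so the client (Hibernate) only sees the outer UPDATE's count.
    SET NOCOUNT ON;

    INSERT INTO dbo.OrdersAudit (OrderID, ChangedAt)
    SELECT i.OrderID, GETDATE()
    FROM inserted AS i;
END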
I'm using SQL Server 2012, and I'm debugging a stored procedure that does some INSERT INTO #temporal table SELECT statements.
Is there any way to view the data selected by the command (the subquery of the INSERT INTO)?
Is there any way to view the data inserted and/or the temp table where the insert made the changes?
It doesn't matter if it's the total set of rows rather than one by one.
UPDATE:
Requirements from AT Compliance and company policy restrict the modifications that can be made during the test process, and it's probable this will be managed by another team. Is there any way to avoid any change to the script?
The main idea is that the AT user checks the outputs on their own desktop and copies and pastes them, without making any change to the environment or the product.
Thanks and kind regards.
If I understand your question correctly, then take a look at the OUTPUT clause:
Returns information from, or expressions based on, each row affected
by an INSERT, UPDATE, DELETE, or MERGE statement. These results can be
returned to the processing application for use in such things as
confirmation messages, archiving, and other such application
requirements.
For instance:
INSERT INTO #temporaltable
OUTPUT inserted.*
SELECT *
FROM ...
This will give you all the rows that the INSERT statement inserted into the temp table, i.e. the rows selected from the other table.
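If you also want to keep those rows around (the "archiving" use mentioned in the documentation), OUTPUT ... INTO can capture them in a table variable. A sketch with made-up column names:
CREATE TABLE #temporaltable (OrderID int, Amount money);
DECLARE @audit TABLE (OrderID int, Amount money);

INSERT INTO #temporaltable (OrderID, Amount)
OUTPUT inserted.OrderID, inserted.Amount INTO @audit (OrderID, Amount)
SELECT o.OrderID, o.Amount
FROM dbo.Orders AS o;

-- inspect (or log) exactly what was inserted, without touching the rest of the proc
SELECT * FROM @audit;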
Is there any reason you can't just do this: SELECT * FROM #temporal? (And debug it in SQL Server Management Studio, passing in the same parameters your application is passing in).
It's a quick and dirty way of doing it, but one reason you might want to do it this way over the other (cleaner/better) answer is that you get a bit more control here. And if you're in a situation where you have multiple inserts to your temp table (hopefully you aren't), you can just do a single select to see all of the inserted rows at once.
I would still probably do it the other way though (now that I know about it).
I know of no way to do this without changing the script. However, for the future, you should never write a complex stored proc or script without a debug parameter that allows you to put in the data tests you will want. Make it the last parameter with a default value of 0 and you won't even have to change your current code that calls the proc.
Then you can add statements like the one below everywhere you want to check intermediate results. Further, in debug mode you might always roll back any transactions so that a bug will not affect the data.
IF #debug = 1
BEGIN
SELECT * FROM #temp
END
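A sketch of what that can look like in the procedure itself (object names are illustrative), including the rollback idea for debug runs:
CREATE PROCEDURE dbo.usp_LoadOrders
    @CutoffDate datetime,
    @debug bit = 0                   -- last parameter, defaults to off
AS
BEGIN
    CREATE TABLE #temp (OrderID int);

    BEGIN TRANSACTION;

    INSERT INTO #temp (OrderID)
    SELECT OrderID
    FROM dbo.Orders
    WHERE SentDate >= @CutoffDate;

    IF @debug = 1
        SELECT * FROM #temp;         -- intermediate results, only in debug mode

    IF @debug = 1
        ROLLBACK TRANSACTION;        -- debug runs leave the data untouched
    ELSE
        COMMIT TRANSACTION;
END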
I am in the process of re-writing an MS Access database to SQL Server and have found a strange issue in Access that I am hoping someone can help with.
I have a table, let's call it 'Main', with a primary key on the Account field that is indexed and doesn't allow duplicates. Seems simple enough, but my issue occurs when data is getting inserted.
My INSERT query is (the number of fields has been limited for brevity):
INSERT INTO Main (Account, SentDate, Amount)
SELECT C.Account, C.SentDate, C.Amount
FROM
(CALLS C LEFT JOIN Bals B ON C.Account = B.ACCT_ID)
LEFT JOIN AggAnt A ON C.Account = A.Account
The issue is this: if I run the SELECT portion of my query I get 2365 records, but when I run the INSERT I get 2364 records. So I did some checking and found that one Account is duplicated; the difference between the two records is the SentDate and the Amount. But Access inserts only one of the records and doesn't throw any kind of error message or anything. There is nothing in the query that says select the most recent date, etc.
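(A duplicate check along these lines, reusing the same joins, is one way to surface the colliding account before the INSERT runs; it's only a sketch:)
SELECT C.Account, COUNT(*) AS HowMany
FROM
(CALLS C LEFT JOIN Bals B ON C.Account = B.ACCT_ID)
LEFT JOIN AggAnt A ON C.Account = A.Account
GROUP BY C.Account
HAVING COUNT(*) > 1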
Sample Data:
Account SentDate Amount
12345678 8/1/2011 123.00
23456789 8/1/2011 45678.00
34567890 8/1/2011 7850.00
45678912 8/1/2011 635.00
45678912 5/1/2011 982.00
56789123 8/1/2011 2639.00
In the sample I have one account that is duplicated, 45678912. When I run my INSERT, I get no errors and I get the record from 8/1/2011.
Why is Access not throwing an error when this violates the PK on the table? Is there some quirk in Access to select one record and just skip the other?
I am totally stumped by this issue so any help would be great.
How are you running the query? If you're using DoCmd.RunSQL, switch to using the .Execute method of a DAO database object, and use dbFailOnError.
Dim db As DAO.Database
Dim strInsert As String
strInsert = "your insert statement"
Set db = CurrentDb
db.Execute strInsert, dbFailOnError
Set db = Nothing
Edit: If Main is an ODBC link to a SQL Server table, I would examine the Errors Collection (DAO) after db.Execute strInsert, dbFailOnError
After HansUp pointed me in the direction of checking for SetWarnings = False, I found it buried in my code, which is why there was no warning message about the records not being inserted due to primary key violations.
A word of caution would be to make sure you want these messages suppressed.
Is there some quirk in Access to [update] one record and just skip the
other?
Yes, you can control this behaviour at the engine level (also at the recordset level if using OLE DB).
For OLE DB (e.g. ADO) the setting is Jet OLEDB:Global Partial Bulk Ops:
determines the behavior of the Jet database engine when SQL DML bulk
operations fail. When set to allow partial completion of bulk
operations, inconsistent changes can occur because operations on some
records could succeed and others could fail. When set to allow no
partial completion of bulk operations, all changes are rolled back if
a single error occurs. The Jet OLEDB:Global Partial Bulk Ops
property setting can be overridden on a per-Recordset basis by
setting the Jet OLEDB:Partial Bulk Ops property in the
Properties collection of a Recordset object.
Note the default is to allow no partial completion of bulk operations.
I have a SQL script running on a server (ServerA).
This server has a linked server set up (ServerB), which is located off-site in a datacenter.
This query works relatively speedily:
SELECT OrderID
FROM [ServerB].[DBName].[dbo].[MyTable]
WHERE Transferred = 0
However, when updating the same table using this query:
UPDATE [ServerB].[DBName].[dbo].[MyTable]
SET Transferred = 1
It takes > 1 minute to complete (even if there's only 1 row where Transferred = 0).
Is there any reason this would be acting so slowly?
Should I have an index on MyTable for the "Transferred" column?
If you (I mean SQL Server) cannot use an index on the remote side to select records, such a remote update in fact reads all records (primary key and other needed fields) from the remote side, updates them locally, and sends the updated records back. If your link is slow (say 10 Mbit/s or less), this scenario takes a lot of time.
I've used a stored procedure on the remote side; this way you only call that procedure remotely (with a set of optional parameters). If your updateable subset is small, then proper indexes may help too, but a stored procedure is usually faster.
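A sketch of that approach (the procedure name is made up); the filtering and the update both stay on ServerB, and only the call crosses the link:
-- On ServerB, in DBName:
CREATE PROCEDURE dbo.MarkOrdersTransferred
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE dbo.MyTable
    SET Transferred = 1
    WHERE Transferred = 0;
END
GO

-- From ServerA (the linked server needs "RPC Out" enabled):
EXEC [ServerB].[DBName].[dbo].[MarkOrdersTransferred];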
UPDATE [ServerB].[DBName].[dbo].[MyTable]
SET Transferred = 1
WHERE Transferred = 0 -- missing this condition?
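As for the index question: if only a small share of rows have Transferred = 0, an index on that column on ServerB helps the filtered UPDATE (and the original SELECT) avoid scanning the whole table. The index name below is just illustrative:
-- On ServerB:
CREATE INDEX IX_MyTable_Transferred
ON dbo.MyTable (Transferred);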
How often is this table being used?
If this table is used by many users at the same time, you may have a problem with locking/blocking.
Every time some process updates a table without filtering the records, the entire table is locked by the transaction, and the other processes that need to update the table are left waiting.
In this case, you may be waiting for some other process to unlock the table.