Access SQL Server Insert blocked by Select

OK, we have a multi-user (25 users) Access 2013 FE and a SQL Server 2012 BE. Up until yesterday the whole system was working FINE, and now it has completely stopped.
If user A has a record open via a straightforward SELECT query reading from table Z, and user B then tries to do an insert on table Z, they receive a timeout message. When I go to SQL Server and run sp_who2, it states user B is blocked by user A. When I then investigate the command that is blocking user B, it is just a simple SELECT statement.
Does anyone know why this would be?
The form that User A has open has Record Locks = No Locks and Recordset Type = Dynaset
The record source is a SELECT, retrieving two fields where the key field is a parameter based on the value of another.
However, nothing has changed on this system for months, so I'm confused as to why this would happen.
Thanks for any help.

OK, we solved it. If anyone else has the same issue: we archived 9,400 records into a new table and now we can do inserts again. It has bought us time, so going forward I will normalize table Z further and auto-archive records that meet a set of criteria.
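In case it helps anyone scripting the same thing, the archive step amounts to something like this (table and column names here are made up, since I haven't posted our schema):

-- Move old rows and delete them in one atomic statement (assumes
-- dbo.TableZ_Archive already exists with the same columns and no identity column).
DELETE FROM dbo.TableZ
OUTPUT DELETED.* INTO dbo.TableZ_Archive
WHERE CreatedDate < DATEADD(month, -6, GETDATE());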

It may be the same or a similar issue as in "MS Access holds locks on table rows indefinitely".
Access only fetches the first x rows of the big record source, leaving the table in an ASYNC_NETWORK_IO wait state, i.e. locked.
Possible solutions are:
Don't have forms or queries that select all records. It usually doesn't make too much sense to scroll through 20k+ records.
Force Access to fetch all records, to release the lock. You can do this with Me.RecordsetClone.MoveLast, e.g. in Form_Load(). Only advisable with a fast network connection.
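If you want to confirm from the server side that this is what's happening, the standard DMVs (available since SQL Server 2005) will show which sessions are sitting in that wait and the statement they are running:

SELECT r.session_id,
       r.wait_type,
       r.wait_time,                  -- ms spent waiting for the client to fetch rows
       t.text AS sql_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.wait_type = 'ASYNC_NETWORK_IO';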

Related

Best way to do a long-running schema change (or data update) in MS SQL Server?

I need to alter the size of a column on a large table (millions of rows). It will be set to an nvarchar(n) rather than nvarchar(max), so from what I understand it will not be a long change. But since I will be doing this on production, I wanted to understand the ramifications in case it does take long.
Should I just hit F5 from SSMS like I execute normal queries? What happens if my machine crashes? Or goes to sleep? What's the general best practice for doing long-running updates? Should it be scheduled as a job on the server, maybe?
Thanks
Please DO NOT just hit F5. I did this once and lost all the data in the table. Depending on the change, the update statement that is created for you actually stores the data in memory, drops the table, creates the new one with the change you want, and repopulates the data from memory. However, in my case one of the changes I made was adding a unique constraint, so the population failed, and once the statement was over, the data in memory was discarded. This left me with the new, empty table.
I would create the table you are changing, with the change(s) you want, as a new table. Then SELECT * INTO the new table, then rename the tables in a single statement. If there is potential for data to be entered into the table while this is running and that is an issue, you may want to lock the table.
Depending on the size of the table and the duration of the statement, you may want to save the locking and renaming for later: after the initial population of the new table, do a differential population of the new data and then rename the tables.
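A rough sketch of the whole approach (all names are placeholders; shrinking a Body column is just an example, and this simple catch-up only covers inserts, not updates or deletes made during the copy):

-- 1. New table with the smaller column.
CREATE TABLE dbo.BigTable_New (
    Id   int            NOT NULL PRIMARY KEY,
    Body nvarchar(4000) NULL
);

-- 2. Initial population; LEFT() makes the truncation explicit.
INSERT INTO dbo.BigTable_New (Id, Body)
SELECT Id, LEFT(Body, 4000)
FROM dbo.BigTable;

-- 3. Differential population of rows added since, then the rename,
--    all in one short transaction.
BEGIN TRAN;
INSERT INTO dbo.BigTable_New (Id, Body)
SELECT s.Id, LEFT(s.Body, 4000)
FROM dbo.BigTable AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.BigTable_New AS n WHERE n.Id = s.Id);
EXEC sp_rename 'dbo.BigTable', 'BigTable_Old';
EXEC sp_rename 'dbo.BigTable_New', 'BigTable';
COMMIT;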
Sorry for the long post.
Edit:
Also, if the connection times out due to the duration, run the insert statement locally on the DB server. You could also create a job and run that; however, it is essentially the same thing.

MSSQL Automatic Merge Database

I have a PC that has an MSSQL database with 800 variables being populated every second. I need that database to merge/back up to a second database on another server PC at least every 10 minutes. Additionally, the first database needs to be wiped clean once per week, in order to save local drive space, so that only one week's worth of data is stored on that first database at any given time; meanwhile, the second database keeps everything intact and never gets cleared, only being added to by the merges that occur every 10 minutes.
To my knowledge, this means I cannot rely on database mirroring, since the first one will be wiped every week. So from what I have gathered, this means I have to have scheduled merges going on every 10 minutes.
I will readily admit I know next to nothing about SQL. So my two questions are:
How do I set up scheduled merges to occur from one database to another in 10 minute frequencies?
How do I set a database to be scheduled/scripted so that it gets cleared every week?
(Note: both databases are running on MS SQL Server 2012 Standard.)
Assuming you can create a linked server on server A that connects to server B (Here's a guide)
Then create a trigger on your table, for example table1:
CREATE TRIGGER trigger1
ON table1
AFTER INSERT
AS
INSERT INTO ServerB.databaseB.dbo.table1
SELECT *
FROM inserted
More on triggers here.
For part 2, you can schedule a job to truncate the table on whatever schedule you would like. How to create a scheduled job.
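If you'd rather script the job than click through the wizard, the msdb job procedures can create it directly. A minimal sketch (job, database, and table names are examples, not from the question):

EXEC msdb.dbo.sp_add_job @job_name = N'WeeklyWipe';
EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'WeeklyWipe',
    @step_name = N'Truncate table1',
    @subsystem = N'TSQL',
    @database_name = N'databaseA',
    @command = N'TRUNCATE TABLE dbo.table1;';
-- freq_type 8 = weekly, freq_interval 1 = Sunday, every 1 week; 10000 = 01:00:00
EXEC msdb.dbo.sp_add_jobschedule
    @job_name = N'WeeklyWipe',
    @name = N'EverySundayAt1am',
    @freq_type = 8,
    @freq_interval = 1,
    @freq_recurrence_factor = 1,
    @active_start_time = 10000;
EXEC msdb.dbo.sp_add_jobserver @job_name = N'WeeklyWipe';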
The trigger only fires on inserts, so deleting rows does nothing to the table on server B.
How is the purging/deleting of the data happening, via a stored proc? If so, you could also try transactional replication, and replicate the execution of that particular stored proc, but dummy the proc on the subscriber, so when the proc gets replicated and executed on the subscriber, nothing will get deleted/purged.
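For what it's worth, "dummying" the proc on the subscriber just means giving it an empty body, so the replicated EXEC does nothing there. Something like this (the proc name is hypothetical):

-- Subscriber-side version of the purge proc: a deliberate no-op.
CREATE PROCEDURE dbo.PurgeOldData
AS
BEGIN
    RETURN;  -- the publisher's version does the real delete/purge
END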

SQL Server bulk load into a table

I need to update a table in SQL Server 2008 along the lines of the MERGE statement (deletes, inserts, updates). The table is 700k rows, and I need users to still have read access to it, assuming an isolation level of read committed.
I tried things like ALTER TABLE table SET (LOCK_ESCALATION=DISABLE) to no avail. I tested by doing a SELECT TOP 50000 * from another window; obviously read uncommitted worked :). Is there any way around this without changing the users' isolation level while retaining an 'all or nothing' transaction behaviour?
My current solution of a cursor that commits in batches of n may allow users to work, but it loses the transactional behaviour. Perhaps I could just make the bulk update fast enough to always take less than 30 seconds (to stay under the timeout). The problem is that the users' target DBs are on very slow machines with only 512 MB RAM. I'm not sure about the processor, but I assume it is really slow, and I don't have access to those machines at this time!
I created a test that causes an update statement to need to run against all 700k rows:
I tried an update with a left join on my dev box (quite fast) and it was 17 seconds
The merge statement was 10 seconds
The FORWARD ONLY cursor was slower than both
These figures are acceptable on my machine but I would feel more comfortable if I could get the query time down to less than 5 seconds before allowing locks.
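For context, the MERGE I'm timing is shaped roughly like this (placeholder names, not my real schema):

MERGE dbo.Target AS t
USING dbo.Staging AS s
    ON t.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET t.Value = s.Value
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Value) VALUES (s.Id, s.Value)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;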
Any ideas on preventing any locking on the table/rows or making it faster still?
It sounds like this table may be queried a lot but not updated much. If it really is a true read-only table for everyone else, but you want to update it extremely quickly, you could create a script that uses this method (could even wrap it in a transaction, I believe, although it's probably unnecessary):
Make a copy of the table named TABLENAME_COPY
Update the copy table
Rename the original table to TABLENAME_ORIG
Rename the copy table to TABLENAME
You would only experience downtime in between steps 3 and 4, and a scripted table rename would probably be quicker than an update of so many rows.
This does all assume that no one else can update the table while your process is running, but they will still be able to read it fully at any point except between steps 3 and 4.
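A sketch of those four steps in T-SQL (keeping the TABLENAME placeholders from above; note that SELECT INTO copies data but not indexes, constraints, or triggers, so script those separately):

-- 1. Make the copy.
SELECT *
INTO dbo.TABLENAME_COPY
FROM dbo.TABLENAME;

-- 2. Update the copy here while readers keep using dbo.TABLENAME.

-- 3 & 4. Swap the names; only this brief transaction affects anyone.
BEGIN TRAN;
EXEC sp_rename 'dbo.TABLENAME', 'TABLENAME_ORIG';
EXEC sp_rename 'dbo.TABLENAME_COPY', 'TABLENAME';
COMMIT;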

How to delete a large record from SQL Server?

In a database for a forum I mistakenly set the body to nvarchar(MAX). Well, someone posted the Encyclopaedia Britannica, of course. So now there is a forum topic that won't load because of this one post. I have identified the post and run a delete query on it, but for some reason the query just sits and spins. I have let it go for a couple of hours and it just sits there. Eventually it will time out.
I have tried editing the body of the post as well, but that also sits and hangs. When I sit and let my query run, the entire database hangs, so I shut down the site in the meantime to prevent further requests while it does its thinking. If I cancel my query, the site resumes as normal, and all queries for records that don't involve the one in question work fantastically.
Has anyone else had this issue? Is there an easy way to smash this evil record to bits?
Update: Sorry, the version of SQL Server is 2008.
Here is the query I am running to delete the record:
DELETE FROM [u413].[replies] WHERE replyID=13461
I have also tried deleting the topic itself which has a relationship to replies and deletes on topics cascade to the related replies. This hangs as well.
Option 1. Depends on how big the table itself is and how big the rows are.
Copy data to a new table:
SELECT *
INTO tempTable
FROM replies WITH (NOLOCK)
WHERE replyID != 13461
Although it will take time, the table should not be locked during the copy process.
Drop old table
DROP TABLE replies
Before you drop:
- script current indexes and triggers so you are able to recreate them later
- script and drop all the foreign keys to the table
Rename the new table
sp_rename 'tempTable', 'replies'
Recreate all the foreign keys, indexes and triggers.
Option 2. Partitioning.
Add a new bit column, called let's say 'Partition', set to 0 for all rows except the bad one. Set it to 1 for the bad one.
Create a partition function so that there are two partitions, one for 0 and one for 1.
Create a temp table with the same structure as the original table.
Switch the partition holding value 1 from the original table to the new temp table.
Drop temp table.
Remove partitioning from the source table and remove new column.
The partitioning topic is not simple. There are some examples on the internet, e.g. Partition switching in SQL Server 2005.
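A compressed sketch of Option 2 (lots of caveats: the clustered index has to be rebuilt on the partition scheme with the new column in its key, every index must be aligned, and the switch target must be an empty table of identical structure on the same filegroup; index and table names here are assumptions):

ALTER TABLE dbo.replies ADD [Partition] bit NOT NULL CONSTRAINT DF_replies_Partition DEFAULT 0;
UPDATE dbo.replies SET [Partition] = 1 WHERE replyID = 13461;

-- RANGE LEFT with boundary 0: partition 1 holds 0, partition 2 holds 1 (the bad row).
CREATE PARTITION FUNCTION pfReplies (bit) AS RANGE LEFT FOR VALUES (0);
CREATE PARTITION SCHEME psReplies AS PARTITION pfReplies ALL TO ([PRIMARY]);

-- Rebuild the clustered index on the scheme (assumes the existing clustered index
-- is named IX_replies and is not a PK constraint; a PK would have to be dropped
-- and recreated instead).
CREATE UNIQUE CLUSTERED INDEX IX_replies ON dbo.replies (replyID, [Partition])
    WITH (DROP_EXISTING = ON) ON psReplies ([Partition]);

-- Create dbo.replies_bad first (same columns and clustered index, on [PRIMARY]),
-- then switch the bad row out and drop it.
ALTER TABLE dbo.replies SWITCH PARTITION 2 TO dbo.replies_bad;
DROP TABLE dbo.replies_bad;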
Start by checking whether your transaction is being blocked by another process. To do this, you can run this command:
SELECT * FROM sys.dm_os_waiting_tasks WHERE session_id = {spid}
Replace {spid} with the correct spid number of the connection running your DELETE command. To get that value, run SELECT @@SPID before the DELETE command.
If the column sys.dm_os_waiting_tasks.blocking_session_id has a value, you can use activity monitor to see what that process is doing.
To open activity monitor, right-click on the server name in SSMS' Object Explorer and choose Activity Monitor. The Processes and Resource Waits sections are the ones you want.
Since you're having issues deleting the record and recreating the table, have you tried updating the record?
Something like (changing "body" field name to whatever it is in the table):
update [u413].[replies] set body='' WHERE replyID=13461
Once you clear out the text from that single reply record you should be able to alter the data type of the column to set an upper bound. Something like:
alter table [u413].[replies] alter column body nvarchar(100)

Access sometimes jumps to existing record on save new record - Access2k FE/SQL2005 BE

I am really posting this out of desperation after searching around a lot for an answer and trying a few different things with no success.
I have an Access database where I have recently migrated the tables to SQL 2005, Access continues to function to the users as a front-end providing forms, reports, and queries.
However, since moving to the Access FE/SQL BE setup, the users have been reporting that sometimes, when they are entering a new record and click into a subform (saving the record) or click save on the menu itself, Access jumps to an existing record. The new record has been saved, but for some reason Access switches to a different record as it refreshes. The user then has to close out, find the saved record, and continue editing it.
Scenario: A user is entering a quote and fills out all the quote details, customer,
date, etc, then clicks in the line-items subform to add a product (or clicks save in the menu), and suddenly
the quote form (and line-item subform) is showing the details of some random quote. The random quote could be recent, or from years ago, and has nothing in common with the quote they were entering.
This weird behavior only happens on inserting a new record, never on editing an existing record. Users tell me that it happens 'more often' when they go to add a new (quote, customer, whatever) after opening the database.
I have noticed it is only happening on forms that have subforms, so my first thought was that it had to do with Access sending through the subform data before the form data is saved, causing a PK violation. But this doesn't appear to be happening: there are no errors on the SQL Server, and the record is successfully saved. Forcing the users to save the main form record before adding subform records (i.e. on a quote, forcing them to save the quote before they can add line items) didn't work; it just causes the jump (sometimes) on the save.
It isn't VBA running on the save or on current; I have set breakpoints on all the event handlers as it jumps, and no VBA is being executed. Some of the 'jumping' forms have no VBA on the form at all. But all have subforms. I suspect it has to do with record locking.
The server running the tables is SQL Server 2005; the users are on a mix of Access 2000 and 2003, mostly XP SP3 with a couple of old Win2k boxes. They are using merge replication, and a couple of users are running replicated SSEE2005 editions and subscribing to the main server. Most users are not replicated, just connecting directly to the server via ODBC or SQL Native Client connections. But I have verified that this is happening to all users, usually once or twice a day, and it has happened to me before. So it isn't a user issue.
The worst part about this behavior is that it only happens some of the time and I haven't managed to find a scenario that will always cause it to happen.
If anyone has experienced anything like this before, please let me know how you sorted it out, or even suggestions would be welcome.
Update:
(1/10/09) Problem solved, thanks to David Fenton. Setting the form to Data Entry mode (Form.DataEntry = True) before opening it to add records does indeed prevent the jumping. The client reports no issues at all since I changed this a week ago.
A client is reporting occasional similar problems. It started immediately after they started using merge replication.
I've informed several contacts within the Microsoft Access product group as well as my fellow Access and SQL Server MVPs.
Please email me your email address so I can forward that to my contacts at Microsoft as I would assume they would want to contact you directly. tony at granite.ab.ca
BTW, excellent troubleshooting and detailed problem description.
It definitely sounds like a record locking issue. Are you using autonumbers as PKs? Have you tried two computers adding a record on the same form at the same time (meaning one of them fires the insert event while the other has added a new record on the form but is still editing it)?
Could you check, one way or another, whether the PK of the inserted record after insertion into the table stays the same as the PK given before the insertion (by adding, for example, a few Debug.Print statements to your code)?
A scenario could be two pending inserts being given the same PK by the machine, with the second one then automatically changed at insert time, resulting in your form losing the 'active' record.
I'm wondering about the scenario where you are using a form to add records that has any other records in it for the user to jump to.
That is, I don't believe in using the same form to edit records as is used to create them.
Instead, I use an unbound dialog to collect all the required fields, insert the record in SQL, then open the main editing form to that single record (not a form with the whole table navigated to the record that was just added).
Keep in mind that in a main form/subform scenario, creating a record in the subform when the parent form is unsaved causes the parent record to be saved. You might want to check if there is any code in the Insert and Update events of the main form that would cause a requery of the main form on the insert of a new record (triggered by editing the subform).
But I would still suggest that the best architecture is to avoid this kind of possible scenario by loading only single records, so there is no other record to jump to. That would certainly limit the possibilities of where the user could end up when the problem occurs.
I have seen behavior 'like' this when there are multiple ways of doing the same thing (e.g. tabbing out of the textbox triggering the LostFocus event vs. clicking a button). So make sure that this isn't the case, if you haven't already.
This problem is caused by the merge replication trigger. (The problem starts with SQL Server 2005; on SQL Server 2000 it does not cause problems.) Inside this trigger, replication inserts some data into replication tables that have identity columns, and Access picks up that identity number instead of the identity from the form's actual insert. I have read that Access uses @@IDENTITY instead of SCOPE_IDENTITY(), and this is the problem. To avoid it, you should change the merge insert trigger so that at the beginning you save the current value of @@IDENTITY in a variable, and at the end of the trigger you insert a value into a temp table with an identity column whose seed is the saved value. This corrects @@IDENTITY, and Access will get the right value.
At the beginning of the trigger:
DECLARE @identity int
DECLARE @strsql varchar(128)
SET @identity = @@IDENTITY
At the end, something like:
SET @strsql = 'select identity(int,' + CAST(@identity as varchar(15)) + ',1) as id into #temp'
EXEC(@strsql)
At the end, it should be placed between
if @@error <> 0
goto FAILURE
and
return
The problem in Access will show up not only on forms but directly in ODBC linked tables too.
I'm looking for a way to add this automatically to the merge replication triggers (mainly insert).
This is a bug in the communication between Access and SQL Server. Access takes the identity of a new record from @@IDENTITY, and when you finish entering the record it reloads the data based on that @@IDENTITY value from SQL Server. With SQL Server 2000, the inserted merge trigger and Access usually work OK. From SQL Server 2005 on, the merge trigger has a part in which data is entered into a merge replication table that also has an identity column, which changes the value of @@IDENTITY from that of the record newly entered from Access.
One solution is to change all merge insert triggers to save @@IDENTITY in a variable at the beginning, and at the end of the trigger insert a dummy record into a #temp table with an identity column whose seed is the previously saved value.
I found this solution somewhere on the net a week ago, when I was affected by this problem too. I was moving a database from SQL Server 2000 to SQL Server 2008, and then I found this identity problem in Access. I suspected replication, because when I removed one of the subscriptions everything started to work well, but after recreating it the problem appeared again.
I use this to solve the problem (taken from somewhere on the net).
At the beginning of the merge insert trigger:
DECLARE @identity int
DECLARE @strsql varchar(128)
SET @identity = @@IDENTITY
And at the end of the merge insert trigger:
SET @strsql = 'select identity(int,' + CAST(@identity as varchar(15)) + ',1) as id into #temp'
EXEC(@strsql)
The last code should be placed at the spot marked /*insert end on this place */ in the merge replication code:
if @@error <> 0
goto FAILURE
/*insert end on this place */
return
But I'm searching for a way to do that automatically for all existing merge triggers on the publication and on all merge triggers on existing and future subscriptions.
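To see the @@IDENTITY restore trick in isolation, outside replication, here is a small self-contained demo (throwaway table and trigger names; run it in a scratch database):

CREATE TABLE dbo.demo_main (id int IDENTITY(1,1), val int)
CREATE TABLE dbo.demo_side (sid int IDENTITY(1000,1), note varchar(10))
GO
CREATE TRIGGER trg_demo_main ON dbo.demo_main AFTER INSERT AS
BEGIN
    DECLARE @identity int
    DECLARE @strsql varchar(128)
    SET @identity = @@IDENTITY                     -- value from the user's insert
    INSERT INTO dbo.demo_side (note) VALUES ('x')  -- side insert clobbers @@IDENTITY,
                                                   -- like the replication bookkeeping insert
    -- seed a throwaway temp table with the saved value to restore @@IDENTITY
    SET @strsql = 'select identity(int,' + CAST(@identity as varchar(15)) + ',1) as id into #t'
    EXEC(@strsql)
END
GO
INSERT INTO dbo.demo_main (val) VALUES (1)
SELECT @@IDENTITY   -- returns 1 (the demo_main row), not 1000 (the demo_side row)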