I ran a script provided by a Microsoft employee to find out which indexes need to be rebuilt or reorganized, depending on their average fragmentation. I got back a reasonable list, but while trying to rebuild some of them on a specific database I kept receiving errors.
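For context, the check was along these lines (a minimal sketch using sys.dm_db_index_physical_stats; the actual script was more elaborate, and the 10% threshold here is illustrative):

-- list indexes in the current database with noticeable fragmentation
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10  -- illustrative threshold
ORDER BY ips.avg_fragmentation_in_percent DESC;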
The first idea I had was to set the database to single-user mode, rebuild the indexes, and then bring it back to life. That did not help, because the database is being populated by a Windows service that, ironically, uses the same user I am connected with, and that is the only user available to me with enough permissions. I am working in a corporate environment, so the moon is a bit closer than getting another user's credentials. I also cannot stop the service while executing my tasks, because it is used for many other things.
My question is simple: how can I force single-user mode so that mine is the single connection? In other words, how can I hide the database, or even the whole SQL Server, from the service? The service will correctly handle the absence as a network issue, so I don't have to worry about that part.
I found a good solution that might help others. I start by getting the list of transactions holding locks on the table in question, using:
USE [Your DB Name]

-- sessions holding object-level locks on the table
SELECT request_mode, request_type, request_session_id
FROM sys.dm_tran_locks
WHERE resource_type = 'OBJECT'
  AND resource_associated_entity_id = OBJECT_ID('YourTableName');
The request_session_id is the ID of the session that holds the lock on the table. I then run EXEC sp_who2 to make sure the SPID belongs to the expected service. All I needed to do at the end was KILL <SPID> and rebuild the index. You might need to do this multiple times if you are rebuilding more than one index, as the lock can be taken again.
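For example (the session ID and index name are illustrative; double-check the SPID in sp_who2 before killing it):

KILL 53;  -- the SPID that holds the lock
ALTER INDEX IX_YourIndex ON dbo.YourTableName REBUILD;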
There is an ONLINE = ON/OFF option available when rebuilding indexes in SQL Server 2005 and above, which controls how users can access the underlying table during the rebuild and which may solve your problem.
http://msdn.microsoft.com/en-us/library/ms188388(v=sql.110).aspx
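A minimal sketch (the index and table names are illustrative; note that ONLINE = ON requires Enterprise Edition):

ALTER INDEX IX_YourIndex ON dbo.YourTableName
REBUILD WITH (ONLINE = ON);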
Your problem is that the interface will only wait a certain amount of time before deciding to fail. I run into this all the time.
You can try scripting the change and then running it manually. This lets you simply wait until all of the locks are released by the users currently using the index. You will have to be careful, though: an index rebuild locks the index for the time it is running (unless, of course, you have Enterprise Edition, where rebuilds are online, and everything is made of money).
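A sketch of what the manually run script might look like (the index and table names are illustrative):

SET LOCK_TIMEOUT -1;  -- wait for locks indefinitely rather than failing after a fixed interval
ALTER INDEX IX_YourIndex ON dbo.YourTableName REBUILD;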
I run a heavy query on IBM i. The first time, it takes a long time; subsequent runs are much faster. It seems to be creating a temporary index. How can I remove this index so I can re-test as if it were the first run?
Use the Visual Explain (VE) tool in the Run SQL Scripts component of ACS to see the differences between runs.
If indeed the issue is a system-maintained temporary index (MTI), you can track it down via the schema's tooling in ACS and delete it if you so desire.
However, an MTI only gets deleted by the system when the system reboots (IPL).
So if you are seeing differences without rebooting the server, I suspect the differences are caused by pseudo-closing. By default, once the DB sees the same query a few times (3 is the default), instead of hard closing its cursors, it will pseudo-close them.
Again, VE will show "hard opens" and "pseudo opens".
To get the pseudo-closed cursors to hard close, simply disconnect and reconnect.
I'm starting to assess RavenDB for our company, for storing some things that don't really belong in a relational database (we're traditionally a SQL Server shop). I installed RavenDB locally on my machine, created a database, added a document. Nice!
Being a DBA, I decided to see how backups/restores work. I backed up my database, deleted it, then restored it from the backup. After refreshing my admin screen, I saw my database. I clicked on it, and got a message that the database doesn't exist.
After a couple of hours, I tried again. Still doesn't exist. A full day later, I walk into work and try again. This time the database works. I've had similar situations with updating documents. An update seems to take anywhere between one second and several hours to show up...
Is this normal for RavenDB? Am I completely misconfigured? I run SQL Server on my local machine and it's lightning-fast, so I can't imagine updating a single document could take that long. As-is, I can't imagine recommending we use RavenDB for anything.
Are you querying using indexes or loading documents by ID? Document loads should reflect updates immediately (they are ACID). If indexes are slow to update (check their status using RavenDB Studio), it could be a configuration problem, or something external, like antivirus software, could be causing them to update slowly.
Apparently, at least for the document-update latency, query caching is enabled by default, so I was getting cached results.
Jeffery,
No, that isn't normal by a long shot. You should be able to see what was changed immediately.
Note that certain AV products will interfere with the HTTP pipeline and can affect RavenDB's usage. The Studio will also auto-refresh things only every 5 seconds (to reduce UI jitter), but that is about it.
Restoring a database (from the same machine) should take only as long as it takes to copy the files (a purely I/O-bound operation).
If this is from another machine using a different version of Windows, we might need to run a check on the file, which can take a bit of time, but that doesn't sound like your scenario.
I am working on a VB.NET application.
As per the nature of the application, one module has to monitor the database (a SQLite DB) every second. This monitoring is done by a simple SELECT statement that checks data against some condition.
Other modules perform SELECT, INSERT, and UPDATE statements on the same SQLite DB.
Concurrent SELECT statements work fine on SQLite, but I'm having a hard time figuring out why it is not allowing INSERT and UPDATE.
I understand SQLite uses a file-based lock, but is there any way to get this done?
Each module, in fact each statement, opens and closes its own connection to the DB.
I've restricted the user to running a single statement at a time through the GUI design.
Any help will be appreciated.
If your database file is not on a network, you could allow a certain amount of read/write concurrency by enabling WAL mode.
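For example, a sketch of enabling it (the busy_timeout value is illustrative):

PRAGMA journal_mode=WAL;   -- readers no longer block the writer, and vice versa
PRAGMA busy_timeout=5000;  -- wait up to 5 s for a lock instead of failing immediately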
But perhaps you should use only a single connection and do your own synchronization for all DB accesses.
You can use a locking mechanism to make sure the database works in a multithreaded situation. Since your application is read-intensive, according to what you said, you could consider using a ReaderWriterLock or ReaderWriterLockSlim (see the MSDN documentation for details).
If you have only one database, creating just one instance of the lock is OK; if you have more than one database, each of them can be assigned its own lock. Every time you read or write, enter the lock (via EnterReadLock() for ReaderWriterLockSlim, or AcquireReaderLock() for ReaderWriterLock) before you do the work, and exit the lock after you're done. Note that you should place the exit of the lock in a finally clause lest you forget to release it.
The strategy above is used in our production applications. Funnelling everything through a single thread is not as good a fit in your case, because you have to take performance into account.
We are using replication and seem to be having endless problems with it. It shuts down for unknown reasons. It needs to be shut down to remove a column, and it only starts back up half the time. Does anyone have any advice on how to use replication properly, or on some alternatives to it?
Edit:
We are using SQL Server 2005. We cannot use database mirroring because we use the other database for reporting. As far as I am aware, you cannot query a mirrored database.
If you need just a couple of tables from your DB for reports, replication is more useful; but you can also set up log shipping with the secondary server in STANDBY mode (especially if you need a significant part of your data for reports), and then run reports on the secondary server. Just remember that log shipping will interfere with transaction log backups, so you have to use the same folder of log backup files for both processes.
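A sketch of the restore that log shipping performs on the secondary when it is kept readable (the database name and paths are illustrative):

RESTORE LOG YourReportDB
FROM DISK = N'\\backupshare\logs\YourReportDB.trn'
WITH STANDBY = N'D:\standby\YourReportDB_undo.dat';  -- readable between restores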
I would think the combination of database mirroring and database snapshots will solve your issues.
First, database mirroring is very easy to set up, and I have never had any problems with it (I have been using it for the past 4+ years).
Second, creating a database snapshot on your failover server will allow you to run reports. You can set up a SQL Agent job to drop and re-create the snapshot on whatever interval is acceptable to you.
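A sketch of what that agent job would run (the database, logical file, and path names are illustrative):

DROP DATABASE YourDB_Reporting;  -- drop the previous snapshot
CREATE DATABASE YourDB_Reporting ON
    ( NAME = YourDB_Data,  -- logical name of the mirror database's data file
      FILENAME = N'D:\snapshots\YourDB_Reporting.ss' )
AS SNAPSHOT OF YourDB;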
Of course, this all depends on whether you need your reports to run against real-time data or whether they can be somewhat delayed.
Here is a list of the problems I have had to resolve to get replication working:
1) Replication sometimes lies to me and reports the following even when it's working fine: "The server 'Bob' is not a Subscriber. (.Net SqlClient Data Provider)". I have tried to re-initialise it thinking that it was broken, and it never was...
2) It can take a little while to restart itself, especially if your remote DB is on the other side of the planet, as it is in my case. If you are on a slow network connection, or one that is not 100% reliable, then you can have problems. Also, the jobs which restart the process can sometimes take a while to run, which delays things further.
3) Some changes require a full re-initialisation, which involves sending out a new snapshot. If you don't have your permissions quite right, you may be able to re-initialise manually, but it won't happen automatically, and this can be another source of problems.
We have a SQL transactional replication setup which runs perfectly happily. You seem to be saying that it is when you make schema changes to the publisher that you get problems. Each time we do a schema change, we drop the publication, the subscription, and the subscription database; make the change; then re-build it all. We can do this because we can tolerate the time it takes to re-apply the snapshot. There are ways to apply schema changes to the publication and have them propagate to the subscriber. Take a look at sp_register_custom_scripting. We have made this work once, so I can give some more information about it if you need.
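The teardown step, run at the publisher in the publication database, is roughly this (a sketch; the publication name is illustrative, and your parameter values may differ):

EXEC sp_dropsubscription @publication = N'YourPublication',
                         @article = N'all',
                         @subscriber = N'all';
EXEC sp_droppublication @publication = N'YourPublication';
-- ...apply the schema change, then re-create the publication,
-- articles, and subscriptions, and generate a new snapshot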
As @Jason says, you can report from a mirrored database by using a snapshot. Beware that the snapshot will take up space and cause more work for the mirror server, although how much space depends on how much data is changing and how big your original database is. We use a snapshot on a mirrored database for occasional reports because our entire database is not replicated.
Log shipping: http://msdn.microsoft.com/en-us/library/ms187103.aspx
What version of SQL Server are you using?
We're using replication now for a particular solution, and it seems to just work, day in, day out.
I would examine your event logs and SQL Server logs to see if you can determine why it is shutting down, and why it doesn't start up.
Are you possibly patching the servers, or are you having network errors?
The alternatives to replication are log shipping, or database mirroring.
I personally prefer database mirroring, but it really depends on what you're trying to do, as some of these options aren't appropriate for certain situations.
We also used SQL transactional replication. We had the same pains with updating the schema, which requires dropping the publication on all servers, performing the updates, and then reinitializing replication and hoping for the best. Sometimes it would not initialize, or a node would fall behind, and we'd get little warning of it. A few times we even lost all the stored procedure execute permissions, causing pretty much total failure on the websites.
We have a rather large database so reinitialization could take quite some time, meaning all updates had to be done at 2am on Sunday - not exactly when we're awake and alert and able to use all our faculties to deal with a problem that might arise.
We are ditching replication in favor of failover clustering on SQL 2008, but it can still be done all the way back to SQL 2000.
http://technet.microsoft.com/en-us/library/cc917693.aspx
I'm working on a project where we are thinking of using SQLCacheDependency with SQL Server 2005/2008 and we are wondering how this will affect the performance of the system.
So we are wondering about the following questions:
Can the number of SQLCacheDependency objects (query notifications) have a negative effect on SQL Server performance, i.e. on insert, update, and delete operations on the affected tables?
What effect (performance-wise) would, for example, 50,000 different query notifications on a single table have in SQL Server 2005/2008 on insertion and deletion on that table?
Are there any recommendations on how to use SQLCacheDependencies? Any official do's and don'ts? We have found some information on the internet but haven't found anything on the performance implications.
If there is anyone here that has some answers to these questions that would be great.
SQL cache dependency using the polling mechanism should not put a significant load on the SQL Server or the application server.
Let's look at the steps needed for SqlCacheDependency to work, and analyze them:
1. The database is enabled for SqlCacheDependency.
2. A table, say 'Employee', is enabled for SqlCacheDependency (this can be any number of tables).
3. Web.config is updated to enable SqlCacheDependency.
4. The page where you are using SQL cache dependency is configured.
That's it.
Internally:
Step 1 creates a table, 'AspNet_SqlCacheTablesForChangeNotification', in the database, which will store the name of each table for which SqlCacheDependency is enabled (here, 'Employee'), and adds some stored procedures as well.
Step 2 inserts an 'Employee' entry into the 'AspNet_SqlCacheTablesForChangeNotification' table, and also creates an insert/update/delete trigger on the 'Employee' table.
Step 3 enables the application for SqlCacheDependency by providing the connection string and the poll time.
Whenever there is a change in the 'Employee' table, the trigger fires, which in turn updates the 'AspNet_SqlCacheTablesForChangeNotification' table.
The application then polls the database, say every 5000 ms, and checks for any changes to the 'AspNet_SqlCacheTablesForChangeNotification' table. If there are any changes, the respective cache entries are removed from memory.
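Roughly, the generated trigger and each poll amount to something like this (a simplified sketch; the exact objects the tooling creates differ in detail):

-- simplified sketch of the change-notification trigger on 'Employee'
CREATE TRIGGER dbo.Employee_SqlCacheNotification
ON dbo.Employee
FOR INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- bump the change counter for this table
    UPDATE dbo.AspNet_SqlCacheTablesForChangeNotification
    SET changeId = changeId + 1
    WHERE tableName = 'Employee';
END
GO

-- what each poll effectively does: read the counters and compare them
-- with the values seen on the previous poll
SELECT tableName, changeId
FROM dbo.AspNet_SqlCacheTablesForChangeNotification;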
The great benefit is caching combined with freshness of data (at most, data can be 5 seconds stale). The polling is handled by a background process, which should not be a performance hurdle, because, as you can see from the list above, the tasks involved are not CPU-demanding.
SQLCacheDependency is implemented like an indexed view, and every time the table is modified this view's index gets changed. So many views (SQLCacheDependency objects) on the same table mean quite a performance hit for modifications. However, if you have one view (SQLCacheDependency object) per table, you should have no problems.
The cache-changed notification is asynchronous and is triggered when the server has resources.
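As a side note, queries used for notifications carry indexed-view-like restrictions; for example (among other rules, and with illustrative names), a notification-eligible query must use two-part table names and an explicit column list:

SELECT EmployeeID, EmployeeName
FROM dbo.Employee;  -- 'SELECT *' or a one-part table name would not qualify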
You're right, not much information on this is available, but there's a phrase related to your question on this page: http://msdn.microsoft.com/en-us/library/ms178604%28VS.80%29.aspx
"The database operations associated with SQL cache dependency are simple and therefore do not incur a heavy processing cost on the server."
Hope this helps, although your question is a little bit old already.
This page appears to have some good info on setup and on which technique to use where (granted, I did just skim it).
All I can provide is anecdotal evidence for performance, but we use SqlCacheDependency as a sort of "messaging solution" for a large enterprise application that processes on the order of ten thousand messages per hour.
The basic architecture is that our company uses Perforce for source control, and we have a "subscription service" that receives messages from a trigger web-service call that gets invoked on every p4 commit and inserts a record into a SQL database. Our application has the dependency set up to send subscription notifications for every changelist that affects a branch or path that you are monitoring.
The performance is fine. The trigger runs on the order of 200 ms, and we have never had a complaint about the latency of relaying the messages to end users.
As always, your mileage may vary.