I keep getting a timeout in Microsoft SQL Server Management Studio. I'm not running any code; all I am doing is trying to look at the tables within a database in the Object Explorer. I always get:
Execution Timeout Expired
I've looked at some of my settings and the execution timeout is set to 0, meaning it should be unlimited time. Again, I'm not running any queries, just trying to understand what's in my database by going through the Object Explorer.
Thanks!
It depends on your work environment, but in most cases I suspect it is related to the database server rather than to Management Studio itself.
If you are working on a server that many other clients reach over the network, it could be:
a transient network problem,
a high load of requests on the server,
a deadlock or other problems related to multiprocess contention.
I suggest you troubleshoot the server during idle time and, if possible, detach the databases one by one to see which database is causing the problem. Once you have found it, go through its stored procedures and functions and try to improve their performance.
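If you suspect blocking or contention, a quick first check (a minimal sketch, assuming you have VIEW SERVER STATE permission) is to look at what is blocked while the timeout is happening:

    -- Sketch: list requests that are currently blocked, with the statement text.
    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.wait_time,          -- milliseconds
           t.text AS sql_text
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.blocking_session_id <> 0;

If Object Explorer's metadata queries show up here as blocked, the culprit is whichever session is holding the locks, not SSMS.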
I have recently been reading about configuring jobs within SQL Server to carry out specific tasks.
I recently had issues whereby all the DB indexes were more than 75% fragmented, and I wondered if there was a way to have SQL Server automatically manage itself.
Now, when reading about setting up and configuring jobs, the documentation mentions the SQL Server Agent.
On the DB server I was looking at, the SQL Server Agent was switched off.
This made me think that having a "job" to handle the rebuilding/reorganising of indexes may not be great if this agent can simply be disabled...
Is there anything at a DB level which can be configured to do this, or is this still really in the hands of a "DBA"?
So to summarise, my question is, what is the best way to handle rebuilding/reorganising indexes?
A job calling some stored procedures could be your answer.
Automation of this task depends on your DB: volume of data, fragmentation degree, batch updates, etc.
I recommend checking your index fragmentation regularly before applying an automated solution, for example with the sketch below.
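As a rough sketch of that check (the 5% and 30% thresholds are the commonly cited guidelines, not hard rules, and MyIndex/MyTable below are placeholders):

    -- Sketch: report fragmentation per index in the current database.
    SELECT OBJECT_NAME(ips.object_id)        AS table_name,
           i.name                            AS index_name,
           ips.avg_fragmentation_in_percent,
           CASE WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
                WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
                ELSE 'leave alone' END       AS suggested_action
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.index_id > 0   -- skip heaps
    ORDER BY ips.avg_fragmentation_in_percent DESC;

    -- The job would then act on the result, e.g.:
    ALTER INDEX MyIndex ON dbo.MyTable REORGANIZE;   -- moderate fragmentation
    ALTER INDEX MyIndex ON dbo.MyTable REBUILD;      -- heavy fragmentation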
Also, you can programmatically check if SQL Server Agent is running.
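For example (sys.dm_server_services is available from SQL Server 2008 R2 SP1 onwards):

    -- Sketch: check whether the SQL Server Agent service is running.
    SELECT servicename, status_desc, startup_type_desc
    FROM sys.dm_server_services
    WHERE servicename LIKE 'SQL Server Agent%';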
Please advise what best suits my problem. I have a high-load web app hosted on the same server where SQL Server is hosted. I also have SQL Server Reporting Services running on the same server, generating user reports.
So my server is basically limited by disk read/write speed. I'm going to get another server and install another SQL Server instance on it in order to host SSRS there. My criterion is to get data that is as fresh as possible.
I've looked at a couple of solutions. Currently I take a backup via jobs, copy it to the second server, and restore it there, also via jobs, but that's not the best solution.
All the replication mechanisms (transactional, merge, snapshot) affect the publisher database by locking its tables, which is unacceptable for me.
So I wonder, is there any possibility to create a replica with read-only access that would be synced periodically without affecting the main DB? I would put all the report load on that replica and have my primary DB used only by the web app.
What solution might suit my problem? As I'm not a DBA, I'd appreciate a direction to start investigating. Thanks.
Transactional Replication is typically used to off-load reporting to another server/instance and can be near real-time in a best case scenario. The benefit of Transactional Replication is you can place different indexes on the subscriber(s) to optimize reporting. You can also choose to replicate only a portion of the data if only a subset is needed for reporting.
The only time locking occurs with Transactional Replication is when you generate a snapshot. With concurrent snapshot processing, which is the default for Transactional Replication, the shared locks are only held for a short period of time, so users are able to continue working uninterrupted. Either way, this shouldn't be an issue since you'll likely be generating the snapshot during a period of low user activity anyway.
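For a sense of what the setup looks like in T-SQL, here is a minimal sketch with placeholder names (the distributor must already be configured, and most people drive this through the SSMS replication wizard instead):

    -- Sketch: enable the database for publishing and create a transactional publication.
    EXEC sp_replicationdboption
         @dbname  = N'MyAppDb',            -- placeholder
         @optname = N'publish',
         @value   = N'true';

    EXEC sp_addpublication
         @publication = N'ReportingPub',   -- placeholder
         @sync_method = N'concurrent',     -- concurrent snapshot: shared locks held only briefly
         @repl_freq   = N'continuous',
         @status      = N'active';

    -- Publish only the tables the reports actually need.
    EXEC sp_addarticle
         @publication   = N'ReportingPub',
         @article       = N'Orders',       -- placeholder
         @source_owner  = N'dbo',
         @source_object = N'Orders';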
I am trying to open a database in SQL Server 2012, but it shows the following error:
Failed to retrieve data for this request. (Microsoft.SqlServer.Management.Sdk.Sfc)
Additional Information:
An exception occurred while executing a Transact-SQL statement or batch.
(Microsoft.SqlServer.ConnectionInfo)
There is insufficient system memory in resource pool 'default' to run this query.
(Microsoft SQL Server, Error: 701)
Any suggestions to resolve the problem?
You get this error when the engine hits an out-of-memory condition while trying to perform an action. There isn't much you can do about this programmatically, because you are bumping up against the physical constraints of the machine that is hosting the SQL Server instance. Look at your system statistics: if you still have uncommitted memory, chances are you just need to increase the memory pool limit of the SQL engine. You will need an account with admin privileges to do this.
I have also run into this issue where queries get blocked at the server and queue up; after a certain point you run out of memory to do anything and have to restart the server. So it's probably worth checking the jobs pane and making sure you don't have a bunch of queries in the WAIT state.
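If raising the memory limit is the route you take, a minimal sketch (the 8192 MB figure is a placeholder; size it to your host, leaving headroom for the OS):

    -- Sketch: raise the engine's memory cap. Requires sysadmin/serveradmin rights.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)', 8192;   -- placeholder value
    RECONFIGURE;

    -- And to look for requests queuing up in a wait state:
    SELECT session_id, status, wait_type, wait_time
    FROM sys.dm_exec_requests
    WHERE status IN ('suspended', 'runnable');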
I have been trying to use SQL Azure as the data storage for an on-premises Atlassian Crowd for a few days, and I am encountering huge performance issues.
For example, the Crowd admin application is extremely slow, almost unusable.
I was wondering if someone who has successfully set up this kind of solution could give me some advice.
What I have done so far:
setting up an on-premises Atlassian Crowd 2.4.2 on an on-premises SQL Server 2008 R2 database
scripting the database and running the script on an Azure database (I could not set it up directly on Azure, as the setup scripts miss an Azure-mandatory clustered index on the table hibernate_unique_key)
adding the mandatory clustered index to the Azure hibernate_unique_key table
setting up the JDBC connection with SSL
I encounter no problem connecting Crowd to the DB, but everything is very slow. Crowd startup takes something like 5 minutes, when it takes something like 20 seconds with an on-premises SQL Server.
Every round trip to the Crowd admin web console takes something like 30 seconds.
My database is less than 1 MB in size. The query execution summary in Azure does not show any problematic query.
I forgot to mention that the SQL Azure DB is very responsive with SQL Server Management Studio or with an on-premises .NET web app.
I tried both the jTDS JDBC driver and the MS JDBC Driver 4.0, both with data encryption. I tried both SQL dialects offered by Crowd. It stays desperately slow.
I tried setting the special registry keys for Azure as stated for the MS JDBC 4.0 driver (HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\KeepAliveTime, KeepAliveInterval, TcpMaxDataRetransmission).
Maybe it comes from:
the fact that I don't start from a clean setup done by Crowd on Azure (because of the clustered index problem)
SQL Azure using UTC time, making "something" expire every time.
I would be glad if someone had advice on this problem.
Sorry - I don't have direct experience with Crowd.
I may be going out on a limb here, but 9 times out of 10, client applications installed remotely fail basic performance tests against SQL Database (that's what it's called nowadays) when the application layer is exceedingly chatty (or exceedingly chunky), with dozens or more roundtrips for every screen/function, or returning all the records all the time. This usually comes into play because SQL Database is far away, behind a network link that is usually slower than a local network, and on top of that the traffic is always encrypted (meaning there are more packets to transfer).
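To put rough, hypothetical numbers on it: 300 sequential queries at about 1 ms per roundtrip on a local network cost about 0.3 seconds, while the same 300 queries at about 50 ms per roundtrip over an encrypted WAN link cost about 15 seconds, which is the order of magnitude of the 30-second page loads described above.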
The only way to work around this type of issue, other than rewriting the application with a better design, would be to deploy your Crowd console in a VM in the cloud, in the same data center as your SQL Database instance. At that point, your console will be on the same network as your database and should, if my theory holds, be much faster.
Every time I run a query, my database does not respond to an immediate second query and complains that it is in recovery mode (though it does not show anything besides the database name). This happens for about 5-10 minutes, after which everything goes back to normal.
I am expecting a major crash, so I am copying the tables into a different database, but does anyone know why this could happen, or whether there is a permanent fix?
Normally, a database is only in "Recovery" mode during startup - when SQL Server starts up the database. If your database goes into Recovery mode because of a SQL statement, you almost definitely have some sort of corruption.
This corruption can take one of many forms and can be difficult to diagnose. Before you do anything, you need to check a few things.
1. Make sure you have good backups of your database, copied onto a separate file system/server.
2. Check the Windows Event Log and look for errors. If any critical errors are found, contact Microsoft.
3. Check the SQL Server ERRORLOG and look for errors. If any critical errors are found, contact Microsoft.
4. Run chkdsk on all the hard drives on the server.
5. Run DBCC CHECKDB against your database. If any errors are found, you can attempt to fix the database with the REPAIR_REBUILD option (a sketch follows this list). If any errors could not be fixed, contact Microsoft.
6. Restore a backup copy of your database onto a different server. This will confirm whether it is a problem within your database or with the SQL Server/machine.
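A minimal sketch of step 5 ('YourDb' is a placeholder; REPAIR_REBUILD needs single-user mode, and you should only attempt a repair with a good backup in hand):

    -- Sketch: check the database for corruption.
    DBCC CHECKDB (N'YourDb') WITH NO_INFOMSGS, ALL_ERRORMSGS;

    -- If repairable errors are reported:
    ALTER DATABASE YourDb SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
    DBCC CHECKDB (N'YourDb', REPAIR_REBUILD);
    ALTER DATABASE YourDb SET MULTI_USER;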
After steps 4, 5, and 6, run your queries again to see if you can still cause the database to go into Recovery mode. Unfortunately, corruption can occur for an untold number of reasons, but more important than anything is the data. Step 6 in particular will confirm whether the problem lies with your data or elsewhere: as long as you have backups that can be restored to a different SQL Server, and the restored copy does not continually go into Recovery mode, you don't have to worry too much.
I always put number 6 last because setting up a separate server with SQL Server and moving/restoring a large database can take an extensive amount of time; but if you already have a backup/test server in place, this might be a good first option. After all, it won't cause any downtime on your live server.
Finally, don't be afraid to contact Microsoft over this. Databases are often mission-critical, and Microsoft has plenty of tools at their disposal to diagnose problems just like this.
Late answer...
Does your database have AUTO_CLOSE set to true? When it is set, the database shuts down after the last connection closes, and the DBMS has to bring it back online for the next one, which may account for your symptoms.
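A quick way to check, and to turn it off if it is on ('YourDb' is a placeholder):

    -- Sketch: see which databases have AUTO_CLOSE enabled, then disable it.
    SELECT name, is_auto_close_on FROM sys.databases;
    ALTER DATABASE YourDb SET AUTO_CLOSE OFF;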
This can happen when the SQL Server service has gone down hard in the middle of write operations, and sometimes during server startup. Follow the query in this link to monitor it:
http://errorbank.blogspot.com/2012/09/mssql-server-database-in-recovery.html
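I can't vouch for the exact query on that page, but one common approach (a sketch; xp_readerrorlog is undocumented but widely used, and 'YourDb' is a placeholder) is to search the error log for the percent-complete messages SQL Server writes while recovery runs:

    -- Sketch: search the current error log (0) of the SQL Server log type (1).
    EXEC xp_readerrorlog 0, 1, N'Recovery of database', N'YourDb';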
I've only had this happen when the server (or the SQL Server service) has gone down hard in the middle of write operations. Once it came back, everything was fine.
However, if this is happening often, then I would suspect a disk-level failure of some sort. I would make sure the database is fully backed up, and move it to another server while you run diagnostics on, or rebuild, the problem server.