I use a Windows application and I want to fire a trigger after a user's session times out.
How can I detect whether a user aborted or their session timed out, using SQL Server 2008?
SQL Server never times out sessions or requests. A query issued against SQL Server may run uninterrupted for hours, even days. You may be under the wrong impression that queries against SQL Server time out and get aborted because the ADO.NET client chooses to abort queries after 30 seconds, that being the default value of SqlCommand.CommandTimeout:
The time in seconds to wait for the command to execute. The default is 30 seconds.
However, aborting queries is client-specific behavior that SQL Server is not involved in. Other clients (e.g. JDBC) use different policies.
Similarly, a SQL Server session never times out, even if not used for days. The application has to explicitly close the connection for its session to terminate. While it is true that there are administrative ways to disconnect sessions (the KILL command), these should never be used except as a last-resort administrative measure.
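For completeness, a minimal sketch of that last-resort path; the session ID 53 below is a made-up placeholder you would look up first:

-- Find the session to terminate (last resort only)
SELECT session_id, login_name, status
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;

-- Terminates the session and rolls back its open transaction
KILL 53;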
So the good news is that you don't have to do anything: what you're asking for doesn't exist, or shouldn't be done in the first place.
Related
I say cancel a "request" rather than a query because typically I am able to continue using Management Studio while I am running queries. However, I have remote access to an Azure database that shuts off after an hour of no activity. When I send it a query while it's shut down, it takes a really long time to fire up, and I'm not able to continue working during this time because Management Studio completely freezes.
I am literally still waiting for the request that prompted me to write this post to complete, and it has been several minutes now. In this case I actually ran my query on the wrong connection and didn't even mean to hit the Azure database, so it's especially annoying that I have to wait this long, lol.
If I didn't know better I would think it was permanently locked, but this has happened enough times now that I know it will eventually return control.
By your description, you have a SERVERLESS version of Azure SQL Database.
When it shuts off the compute due to lack of activity, it completely removes the compute portion of the service, leaving just your database on storage. When you then query it again for the first time, it needs to allocate some compute to the database, start that up (with the redundant copies that Azure SQL provides), connect to your database, and ensure the gateway is up to date to direct your connection; only THEN will it accept a connection.
So, with Management Studio, it is waiting on that response, and I believe it also has some degree of retry, so it keeps checking until the connection is established.
You could change to a PROVISIONED service tier, where the database is available all the time (your billing will change, so be sure it is what you need), and this behavior will stop; or you could have a PowerShell script or similar that you run to ensure the database is available before connecting from SSMS.
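If you go the provisioned route, here is a minimal sketch of the tier change in T-SQL, run against the master database (the database name and the 'S0' objective are placeholders, not a sizing recommendation):

-- Move a serverless database to a provisioned service objective
ALTER DATABASE [MyAzureDb] MODIFY (SERVICE_OBJECTIVE = 'S0');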
While it is waiting for the response back from the service that it has started OK, there isn't a session available to KILL, so your only option is to kill the client, i.e. use Task Manager to shut down SSMS.
In one of our systems we experience random delays when opening a connection to SQL Server.
The system is running Windows Server 2012 R2 Standard and SQL Server 2012, located on the same physical machine as our application.
Even when our application is idle, it is executing DB operations once every few seconds on average.
DB operations our application executes usually consist of 3 steps:
open a connection to SQL server
run a stored procedure
close the connection
Normally the first step takes a tiny fraction of a second, while running a stored procedure may take much longer, depending on many factors.
The problem: opening a connection may randomly take 5-13 seconds. This happens only rarely, once every few hours or even just once a day.
In other words, it could happen once per several thousand DB operations. We have not detected any discernible pattern in the timing of these delays.
There is nothing suspicious in the SQL Server log files.
Running SQL Server profiler does not seem practical, as the fault may not be exhibited for 10-20 hours.
We have not seen this phenomenon on any other machine.
It looks like we've fixed the problem. Somewhere I read a recommendation to try SQL Server authentication instead of Windows authentication. The problem discussed there was not exactly the same as ours, but somewhat similar. Since the connection string is used in every open-connection operation, I decided to give it a try. As a result, our application has now been running for 3 days in a row without a single incident of slow connection opening. To put this in context: before the fix we averaged several incidents per 24 hours, and had not had a single incident-free 24-hour period in the last two months.
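For anyone trying the same switch, a minimal sketch of the server-side half, i.e. creating a SQL login for the application (all names and the password are placeholders); the connection string then uses User ID/Password instead of Integrated Security:

-- Create a SQL Server authentication login and map it into the app database
CREATE LOGIN app_user WITH PASSWORD = 'StrongP@ssw0rd!';
GO
USE MyAppDb;
GO
CREATE USER app_user FOR LOGIN app_user;
ALTER ROLE db_datareader ADD MEMBER app_user;
ALTER ROLE db_datawriter ADD MEMBER app_user;
GO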
I am working on a Rails app with Postgres on Ubuntu. Unfortunately for me, this legacy app uses some heavyweight stored procedures in the DB. What's more, the DB is quite large (5GB) and my computer is not particularly fast. Every now and then, if I pass some bad parameters from my code to the DB, my computer becomes so slow that I cannot even get to a console and kill the postgres process. I assume this is due to some very expensive DB query. My only solution is to hard-reset my laptop. So my question is: is there a way to forcefully kill a long-running query? Or perhaps, is there a way to limit the CPU and RAM the DB is allowed to use, so that I still have some resources left to go and manually restart Postgres?
You can set a maximum time for statements to take with the statement_timeout configuration option:
Abort any statement that takes more than the specified number of milliseconds, starting from the time the command arrives at the server from the client. If log_min_error_statement is set to ERROR or lower, the statement that timed out will also be logged. A value of zero (the default) turns this off.
You can set this option a variety of ways, such as in postgresql.conf for everyone, per session with the SET command, or even per database or per role. More information on setting options is in the documentation.
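A minimal sketch of those variants (the five-minute value and the database and role names are placeholders):

-- For the current session only
SET statement_timeout = '5min';

-- For every connection to one database
ALTER DATABASE myapp_dev SET statement_timeout = '5min';

-- For every connection made by one role
ALTER ROLE rails_app SET statement_timeout = '5min';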
We have an app with around 200-400 users, and once a day or every other day we get the dreaded SQL exception:
"Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding".
Once we get this, it happens several times for different users, and then all users are stuck: they can't perform any operations.
I don't have the full specs of the boxes right in front of me but we have:
IIS and SQL Server running on separate boxes
each box has 64gb of memory with multiple cores
We get nothing in the SQL Server logs (as would be expected), and our application catches the SqlException, so we just see the timeout error there, on an UPDATE. In the database we have only a few key tables; the timeout happens on one of the tables, which has 30k rows.
We have run Profiler on these queries, hitting the UI against a copy of production of the same size, and made sure we have all of the right indexes (clustered/non-clustered). In a local environment (smaller box, same size database) everything runs fast, and for most of the day the system runs fast for the users too. The exact same query (which hit the timeout error in production) ran in less than a second.
We did change our command timeout from 30 seconds to 300 seconds (I know that 0 is unlimited, and I guess we could use that, but it seems like that's just masking the real problem).
We had Profiler running in production, but unfortunately it wasn't fully enabled the last time it happened. We are setting it up correctly now.
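In the meantime, here is a sketch of the kind of query we plan to run the moment it happens again, to see what the server is actually waiting on (standard sys.dm_exec_* DMVs, nothing specific to our app):

-- Show active requests, their wait types, and any blocking session
SELECT r.session_id, r.status, r.wait_type, r.blocking_session_id,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;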
Any ideas on what this might be?
I am using SQL Express 2008 as the backend for a web application. The problem is that the web application is used during business hours, so sometimes, during lunch or break time when no users are logged in for a 20-minute period, SQL Express will kick into idle mode and free its cache.
I am aware of this because it logs something like:
Server resumed execution after being idle 9709 seconds
or
Starting up database 'xxxxxxx'
in the event log
I would like to avoid this idle behavior. Is there any way to configure SQL Express to stop idling, or at least to widen the time window to longer than 20 minutes? Or is my only option to write a service that polls the DB every 15 minutes to keep it warm?
After reading articles like this, it doesn't look too promising, but maybe there is a hack or registry setting someone knows about.
That behavior is not configurable.
You do have to implement a method to poll the database every so often. Also, as the article you linked to says, set the AUTO_CLOSE property to false.
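A minimal sketch of that setting, with 'xxxxxxx' standing in for the database name from the log message above:

ALTER DATABASE [xxxxxxx] SET AUTO_CLOSE OFF;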
Just a short SQL query like this every few minutes will prevent SQL Server from going idle:
SELECT TOP 0 NULL
FROM [master].[dbo].[MSreplication_options]
GO
Write a thread that does a simple query every few minutes. Start the thread in your global.asax Application_Start and you should be done!
Here is a good explanation: https://blogs.msdn.microsoft.com/sqlexpress/2008/02/22/understanding-sql-express-behavior-idle-time-resource-usage-auto_close-and-user-instances/
However, I do not know the exact time after which SQL Express goes idle. I suggest running the script below every 10 minutes (for example via Task Scheduler).
This will prevent SQL Server Express from going idle:
SELECT TOP 0 NULL
FROM [master].[dbo].[MSreplication_options]
GO
Also make sure every database's AUTO_CLOSE property is set to FALSE.
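A quick way to check is the standard sys.databases catalog view:

-- List databases that still have AUTO_CLOSE enabled
SELECT name FROM sys.databases WHERE is_auto_close_on = 1;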