Any way to cancel a request in SSMS?

I say cancel a "request" rather than a query because typically I am able to continue using Management Studio while I am running queries. However, I have remote access to an Azure database that shuts off after an hour of no activity. When I send it a query and it's shut down, it takes a really long time to fire up, and I'm not able to continue working during this time because Management Studio completely freezes.
I am literally still waiting for the request that prompted me to write this post to complete, and it has been several minutes now. In this case, I actually ran my query on the wrong connection and did not even mean to hit the Azure database, so it's especially annoying that I have to wait this long, lol.
If I didn't know better I would think it was permanently locked, but this has happened enough times now that I know it will eventually return control.

By your description, you have a SERVERLESS version of Azure SQL Database.
When this shuts off the compute due to lack of activity, it completely removes the compute portion of the service - simply leaving your database on storage. When you then query it again for the first time, it needs to allocate some compute to the database, start that up (with the redundant copies that Azure SQL provides), connect to your database, ensure the gateway is up to date to direct your connection and only THEN will it accept a connection.
So, with Management Studio, it is waiting on the response and I believe that it also has some degree of retry so that it keeps checking until the connection is established.
You could change to a PROVISIONED service tier, where the compute is available all the time (your billing will change, so be sure it is what you need), and this delay will stop; or you could have a PowerShell script or similar that you run to ensure the database is available before connecting from SSMS, along the lines of the sketch below.
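As an illustration of that second option, here is a minimal sketch in Python with pyodbc rather than PowerShell; the server, database, credentials, and retry timings are all placeholders:

```python
# Sketch: wake a serverless Azure SQL database before opening it in SSMS.
# Assumes pyodbc and the ODBC Driver 18 for SQL Server are installed;
# server, database, and credentials below are placeholders.
import time
import pyodbc

CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myserver.database.windows.net,1433;"  # placeholder server
    "Database=mydb;"                                  # placeholder database
    "Uid=myuser;Pwd=mypassword;"                      # placeholder credentials
    "Encrypt=yes;"
)

def wake_database(max_attempts=10, delay_seconds=15):
    """Keep trying to connect until the serverless compute has resumed."""
    for attempt in range(1, max_attempts + 1):
        try:
            conn = pyodbc.connect(CONN_STR, timeout=30)
            conn.execute("SELECT 1")  # trivial query proves compute is up
            conn.close()
            print(f"Database is online (attempt {attempt}).")
            return True
        except pyodbc.Error as exc:
            print(f"Attempt {attempt}: not ready yet ({exc}). Retrying...")
            time.sleep(delay_seconds)
    return False

if __name__ == "__main__":
    if not wake_database():
        print("Database did not resume in time; check the portal.")
```

Run this first, and only open the connection in SSMS once it reports the database is online.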
When it is waiting for the response back from the service that it has started OK, there isn't a session available to KILL - so your only scope there would be to kill the client - i.e. use task manager to shut down SSMS.

Related

Cannot access SQL Azure

Just had a bizarre issue with SQL Azure, and it happened in a small phase just before full go-live, with some users doing some data entry.
"Database 'dbname' on server 'xxx' is not currently available. Please retry the connection later. If the problem persists, contact customer support."
When I tried to connect via SQL Azure database website I got:
"Firewall check failed.
Resource ID : 1. The request minimum guarantee is 0,
maximum limit is 180 and the current usage for the database is 0.
However, the server is currently too busy to support request greater than 0 for this database."
Looking at the databases section of the Azure Management website the site reported it couldn't access the DB, but I didn't capture the exact error message unfortunately.
Bizarrely, a couple of my users were still able to log in to our system website that accesses the DB, and view and save data. Eventually they lost connection too, however.
After an hour or so, the databases came back to life and we could fully access them again.
I have looked at the server's master db event table using queries from here and there were a couple of connection failures but nothing interesting. No throttling or deadlocks; a couple of failed connections that said "Client may have timed out when establishing connection. Try increasing the connection timeout." in the description.
Any ideas where else to look?
Business users have had a massive drop in confidence because of this.
What you're describing normally occurs because of:
1) The SQL connection limit being hit. Assuming you don't see this often, it is unlikely to be the cause, but it is worth checking; putting a limit on your connection pool can help.
2) Your neighbours being extremely noisy, so the node re-adjusts.
3) Hardware failure and Microsoft bringing your database back online on a different node. This can take some time.
Normally I have seen this when Microsoft have throttled or had problems with a box and had to recover everyone over. Because you are on a shared system, you have to keep in mind that they are recovering everyone else in that node as well, and sometimes this takes time.
The best bet, if you are worried and need to get a resolution for the business, is to open a support ticket with MS and give them the time and the error message you saw. They will investigate, and generally they have really good back-end telemetry that will point to a reason. This will allow you to give the business a resolution, and then you can make a call on future plans and contingencies. You have to keep in mind, though, that SQL Azure is a shared system and transient errors can happen; you might need to design more failover into your designs.
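On the "where else to look" question: the "queries from here" link is not reproduced above, but the usual place to check on Azure SQL is the sys.event_log view in the logical server's master database. A hedged sketch of that kind of query, run from Python with pyodbc (the column list is from memory of that view; trim it to whatever your server actually exposes, and the server and credentials are placeholders):

```python
# Sketch: pull recent connectivity/availability events from the Azure SQL
# logical server's master database (connect to 'master', not the user db).
import pyodbc

MASTER_CONN = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:xxx.database.windows.net,1433;Database=master;"  # placeholder
    "Uid=myuser;Pwd=mypassword;Encrypt=yes;"                     # placeholder
)

# Columns assumed from the sys.event_log view; adjust as needed.
QUERY = """
SELECT start_time, end_time, database_name,
       event_category, event_type, event_subtype_desc,
       severity, event_count, description
FROM sys.event_log
WHERE start_time > DATEADD(hour, -24, GETUTCDATE())
ORDER BY start_time DESC;
"""

with pyodbc.connect(MASTER_CONN) as conn:
    for row in conn.cursor().execute(QUERY).fetchall():
        print(row)
```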

Run a SQL command in the event my connection is broken? (SQL Server)

Here's the sequence of events my hypothetical program makes...
1) Open a connection to server.
2) Run an UPDATE command.
3) Go off and do something that might take a significant amount of time.
4) Run another UPDATE that reverses the change in step 2.
5) Close connection.
But oh-no! During step 3, the machine running this program literally exploded. Other machines querying the same database will now think that the exploded machine is still working and doing something.
What I'd like to do, is just as the connection is opened, but before any changes have been made, tell the server that should this connection close for whatever reason, to run some SQL. That way, I can be sure that if something goes wrong, the closing update will run.
(To pre-empt the answer, I'm not looking for table/record locks or transactions. I'm not doing resource claims here.)
Many thanks, billpg.
I'm not sure there's anything built in, so I think you'll have to do some bespoke stuff...
This is totally hypothetical and straight off the top of my head, but:
1) Take the SPID of the connection you opened and store it in some temp table, with the text of the reversal update.
2) Use a background process (either SSIS or something else) to monitor the temp table and check that the SPID is still present as an open connection.
3) If the connection dies, then the background process can execute the stored revert command.
4) If the connection completes properly, then the SPID can be removed from the temp table so that the background process no longer reverts it when the connection closes.
Comments or improvements welcome!
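As an illustration only, here is a rough sketch of that background monitor in Python; the dbo.ConnectionWatchdog table and its columns are invented for the example, and the real thing could just as well be an SSIS package as described above:

```python
# Sketch of the "background process" idea: watch a table of (spid, revert_sql)
# rows and run the revert statement once the owning session has disappeared.
# The dbo.ConnectionWatchdog table and its columns are invented for this example.
import time
import pyodbc

CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=myserver;Database=mydb;Trusted_Connection=yes;"  # placeholder
)

def monitor(poll_seconds=10):
    conn = pyodbc.connect(CONN_STR, autocommit=True)
    cur = conn.cursor()
    while True:
        # Rows whose recorded session id is no longer an open connection.
        cur.execute("""
            SELECT w.watchdog_id, w.revert_sql
            FROM dbo.ConnectionWatchdog AS w
            WHERE NOT EXISTS (SELECT 1 FROM sys.dm_exec_sessions AS s
                              WHERE s.session_id = w.spid)
        """)
        for watchdog_id, revert_sql in cur.fetchall():
            cur.execute(revert_sql)  # run the stored reversal update
            cur.execute("DELETE FROM dbo.ConnectionWatchdog WHERE watchdog_id = ?",
                        watchdog_id)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    monitor()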
I'll expand on my comment. In general, I think you should reconsider your approach. All database access code should open a connection, execute a query, then close the connection, relying on connection pooling to mitigate the expense of opening lots of database connections.
If we are talking about a single SQL command whose rows should not change while it operates, that is a problem that should be handled by the transaction isolation level. For that you might investigate the Snapshot isolation level in SQL Server 2005+.
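Purely as an illustration of that suggestion, a minimal sketch of enabling and using snapshot isolation; the MyDb database and dbo.Orders table are placeholders:

```python
# Sketch: enable and use SNAPSHOT isolation (SQL Server 2005+).
# Database and table names are placeholders for the example.
import pyodbc

conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};Server=myserver;"
    "Database=MyDb;Trusted_Connection=yes;",  # placeholder connection
    autocommit=True,
)
cur = conn.cursor()

# One-off database setting (requires appropriate permissions).
cur.execute("ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;")

# Per-session: reads inside the transaction see a consistent snapshot,
# unaffected by other writers, without holding shared locks on the rows.
cur.execute("SET TRANSACTION ISOLATION LEVEL SNAPSHOT;")
cur.execute("BEGIN TRANSACTION;")
cur.execute("SELECT COUNT(*) FROM dbo.Orders;")  # placeholder table
print(cur.fetchone()[0])
cur.execute("COMMIT TRANSACTION;")
```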
If we are talking about a series of queries that are part of a long running transaction, that is more complicated and can be handled via storage of a transaction state which other connections read in order to determine whether they can proceed. Going down this road, you need to provide users with tools where they can cancel a long running transaction that might no longer be applicable.
Assuming it's even possible... this will only help you if the client machine explodes during the transaction. Also, there's a risk of false positives - the connection might get dropped for a few seconds due to network noise.
The approach that I'd take is to start a process on another machine that periodically pings the first one to check if it's still on-line, then takes action if it becomes unreachable.
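For illustration only, a tiny sketch of such a watcher; the host, port, failure threshold, and the recovery action are all placeholder assumptions:

```python
# Sketch: watchdog on a second machine that checks whether the first machine
# is still reachable and takes a recovery action once it stops responding.
import socket
import time

HOST, PORT = "worker-machine", 3389  # placeholder host and port
FAILURE_THRESHOLD = 3                # consecutive failures before acting

def is_reachable(host, port, timeout=5):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

failures = 0
while True:
    if is_reachable(HOST, PORT):
        failures = 0
    else:
        failures += 1
        if failures >= FAILURE_THRESHOLD:
            print("Machine unreachable; run the compensating UPDATE here.")
            break  # placeholder for the actual recovery action
    time.sleep(30)
```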

SQL 2008 Express slow on first request? Goes to sleep?

I have read somewhere about SQL Express running as a user instance or something... and as such, the instance/service "goes to sleep" if not used for x time... (don't know the actual timings etc.)
So the scenario is:
If my website (in this case) doesn't have anyone using it for "a few hours", SQL Express "seems" to go to sleep.
The next time someone comes along (after the pause for however long), the initial response takes quite a few seconds more to action.
Subsequent requests directly after the initial one seem very fast.. again until there is a pause "for a few hours" or whatever the timing is?
Any ideas? if so, any examples/directions of what to do?
Thanks!
David.
Yes, there is the so-called RANU instance, which is what you get when you specify User Instance=True in the connection string. Read more about this in SQL Server 2005 Express Edition User Instances. I would recommend you stay as far away as possible from anything related to User Instances. They are impossible to debug and troubleshoot when things go wrong, they sometimes have a ramp-up time of minutes to create the new instance, and they really offer no advantage in the real world. Besides, they are deprecated in SQL Server 2008 Express.
If you're using SQL Express 2008 and you do not specify User Instance=True in your connection string, then you do not get a user instance, so the first-request delay probably comes from the IIS app pool warm-up, as others have suggested. It may also occur due to ordinary process working-set attrition that causes the SQL buffer pools to go cold. You can easily identify whether it is IIS or SQL by monitoring appropriate performance counters on your system.
This isn't the database going to sleep, this is the Application Pool in IIS. If no users are connected to or using the website, then the application pool will reset and the session(s) will shut down. Then, when a user comes to the website, it has to restart the website.
There is a term called Database Warmup pal, you can find out more here and probably this is your solution
Are you sure it's the database going to sleep and not IIS? IIS will unload websites after a certain period of inactivity, and they can be very slow to reload.

Continuously checking database from a Windows service

I am making a Windows service which needs to continuously check for database entries that can be added at any time to tell it to execute some code. It is looking to see if its status is set to pending, and its execute time entry is greater than the current time. Is the only way to do this to just run select statements over and over? It might need to execute the code every minute, which means I need to run the select statement every minute looking for entries in the database. I'm trying to avoid unnecessary CPU time because I'm probably going to end up paying for CPU cycles on the hosting provider.
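For reference, a minimal sketch of the plain polling approach the question describes; the table and column names are invented, the execute-time comparison is written the way the question states it, and the answers below suggest ways to avoid this polling:

```python
# Sketch of the plain polling approach from the question: once a minute, look
# for pending rows and run the associated work. dbo.ScheduledTasks and its
# columns are invented for this example.
import time
import pyodbc

CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=myserver;Database=mydb;Trusted_Connection=yes;"  # placeholder
)

def poll_once(cur):
    # Comparison kept as stated in the question (execute time > current time).
    cur.execute("""
        SELECT task_id FROM dbo.ScheduledTasks
        WHERE status = 'Pending' AND execute_time > SYSUTCDATETIME();
    """)
    for (task_id,) in cur.fetchall():
        print(f"Would execute task {task_id} here")
        cur.execute("UPDATE dbo.ScheduledTasks SET status = 'Done' WHERE task_id = ?",
                    task_id)

def main(poll_seconds=60):
    conn = pyodbc.connect(CONN_STR, autocommit=True)
    cur = conn.cursor()
    while True:
        poll_once(cur)
        time.sleep(poll_seconds)  # this loop is the CPU cost the question worries about

if __name__ == "__main__":
    main()
```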
Be aware that Notification Services is only for SQL 2005, and has been dropped from SQL 2008.
Rather than polling the database for changes, I would recommend writing a CLR stored procedure that is called from a trigger, which is raised when an appropriate change occurs (e.g. insert or update). The CLR sproc alerts your service which then performs its work.
Sending the service alert via a TCP/IP or HTTP channel is a good choice since you can deploy your service anywhere, just by modifying some configuration parameter that is read by the sproc. It also makes it easy to test the service.
I would use an event driven model in your service. The service waits on an auto-reset event, starting a block of work when the event is raised. The sproc communications channel runs on another thread and sets the event on each incoming request.
Assuming the service is doing a block of work and a set of multiple pending requests are outstanding, this design ensures that those requests trigger just 1 more block of work when the current one is finished.
You can also have multiple workers waiting on the same event if overlapping processing is desired.
Note: for external network access the CREATE ASSEMBLY statement will require the PERMISSION_SET option to be set to EXTERNAL_ACCESS.
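To make the shape of that service concrete, here is a rough sketch of the listener-plus-worker arrangement in Python; the port and the one-connection-equals-one-poke convention are assumptions for the example, and the sender would be the trigger-fired CLR sproc described above:

```python
# Sketch of the event-driven service side: a listener thread sets an event when
# an alert arrives; the worker thread waits on the event and runs one block of
# work per wake-up. Port and "poke" convention are invented for this example.
import socket
import threading

work_pending = threading.Event()
LISTEN_PORT = 9099  # placeholder port

def alert_listener():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("0.0.0.0", LISTEN_PORT))
    srv.listen()
    while True:
        client, _ = srv.accept()  # any incoming connection counts as a poke
        client.close()
        work_pending.set()        # signal the worker

def do_block_of_work():
    print("Processing all currently pending requests...")

def worker():
    while True:
        work_pending.wait()   # block until poked
        work_pending.clear()  # clear before working, so pokes that arrive during
                              # the work trigger exactly one more pass afterwards
        do_block_of_work()

threading.Thread(target=alert_listener, daemon=True).start()
worker()
```

Clearing the event before starting the work is what gives the behaviour described above: any number of requests arriving mid-block collapse into a single extra pass.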
Given you talk about the service provider, I suspect one of the main alternatives will not be open to you: notification services. It allows you to register for data-changed events and be notified, without the need to poll the database. It does, however, require Service Broker to be enabled for it to work, and that could potentially be a problem if it is hosted - some companies keep it switched off.
The question is not tagged with a specific database, just SQL; notification services is a SQL Server facility.
If you're using SQL Server and open to a different approach, check out SQL Server Notification Services.
Oracle also provides notifications; they call it Database Change Notification.

Stop Monitoring SQL Services for Registered Servers in SSMS

Question: Is it possible to stop SSMS from monitoring the service status of registered servers?
Details:
SSMS 2008 monitors the service status of every registered server. From what I have seen, it seems to reach out to every registered server every minute or so to check its status; in my case that is over 100 servers. This process has raised issues with our Security and Network departments. Network identified it initially as suspicious traffic, because it appeared as if an unknown utility was scanning the network for SQL Servers. Security was concerned because the Security Event Logs on each server are being filled up with my logon events.
I have looked all over for a setting but can't seem to find one. Am I missing it somewhere?
TIA,
Brian
I finally found an answer!!
While it is not possible (at least that I've found) to stop SSMS from checking the service status of registered servers it is possible to change the interval at which it checks it.
The short version is to create the following registry keys (DWORD):
(SQL Server 2008)
HKLM\Software\Microsoft\Microsoft SQL Server\100\Tools\Shell | PollingInterval = 600 (decimal)
(SQL Server 2005)
HKLM\Software\Microsoft\Microsoft SQL Server\90\Tools\Shell | PollingInterval = 600 (decimal)
This will make SSMS connect automatically every minute instead of every few seconds.
See this MS Connect Post for details.
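If you would rather script the change than edit the registry by hand, a small sketch using Python's winreg module (run it elevated; the key path is the SSMS 2008 one from the answer above, and since SSMS 2008 is a 32-bit application, the 32-bit registry view may apply on 64-bit Windows):

```python
# Sketch: create/set the PollingInterval value described above for SSMS 2008.
# Writing under HKLM requires an elevated (administrator) process.
# On 64-bit Windows you may also need winreg.KEY_WOW64_32KEY in the access
# mask so the value lands in the 32-bit view that SSMS 2008 reads.
import winreg

KEY_PATH = r"Software\Microsoft\Microsoft SQL Server\100\Tools\Shell"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
    # DWORD value, 600 decimal, as in the registry keys listed above.
    winreg.SetValueEx(key, "PollingInterval", 0, winreg.REG_DWORD, 600)

print("PollingInterval set to 600.")
```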
Since it doesn't appear that there's any way to stop these status checks by SSMS, can you focus on helping them to see their harmlessness?
Can the network group allow certain exceptions to this particular rule (pinging servers on port 1433) in their scanning software, which would allow you and your group to monitor SQL Server uptime? Even if you weren't using SSMS, this type of sweeping monitoring activity is pretty common, and you'll know the requests will only ever come from a handful of workstations.
I don't think these SQL status checks generate any more events in the security log than any other activity, so maybe they were just concerned because it was something they weren't expecting. Could the security group be convinced that these events aren't dangerous, again as long as they're coming from certain approved workstations?
If neither of these is an option (or even if it is), you could help mitigate the problem by not connecting to all your SQL servers at once. Maybe just connect to the ones you need at the time - it looks like loading the entire list actively connects to each of them, but just connecting to the ones you intend to use in that session might help reduce the number of network sessions open.
I hope this helps - if it doesn't, or you've got some additional input that might help find a workaround, please post it!