Stored procedures in Azure SQL Database time out randomly once every two months

Scenario: a system running in an Azure VM, using MVC and an Azure SQL Database (not in the VM), working under normal conditions for two or three months. Suddenly, stored procedures called from my MVC web app or from SQL Server Management Studio time out, while queries like SELECT * FROM Table work perfectly.
EDIT: Timeouts also happen while executing ALTER or CREATE PROCEDURE statements.
No proper solutions or explanations found.
Workaround: restore an old backup into a new SQL Database and change the connection string to point to the new database. While the system is running on the backup, try to back up the database that has the issue (first close all connections to it, e.g. from Management Studio). It may take some time and several retries. After the backup is done, restore it into a new database and change the connection string back. You will lose a few minutes of data and incur some downtime, but you will have your system working again in Azure.
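One way to script the "copy the problem database into a new database" step on the same Azure SQL logical server is a T-SQL database copy. A minimal sketch with placeholder database names, offered only as a related alternative to the backup/restore steps described above:
-- Run against the master database of the Azure SQL logical server.
-- ProblemDb and ProblemDb_Copy are placeholder names.
CREATE DATABASE ProblemDb_Copy AS COPY OF ProblemDb;

-- The copy runs asynchronously; poll until state_desc shows ONLINE.
SELECT name, state_desc FROM sys.databases WHERE name = N'ProblemDb_Copy';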
Any ideas about this issue with stored procedures in Azure?

At first glance, this smells like a parameter sniffing issue; it is probably not related to Azure.
Check this thread for details on what the issue is, and how to resolve it: Parameter Sniffing (or Spoofing) in SQL Server
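If parameter sniffing does turn out to be the cause, a common mitigation is to recompile the statement (or optimize for an unknown value) so that a plan built for an atypical parameter value isn't reused. A minimal sketch, with an invented procedure, table and parameter purely for illustration:
-- dbo.GetOrders, dbo.Orders and @CustomerId are illustrative names.
CREATE PROCEDURE dbo.GetOrders
    @CustomerId INT
AS
BEGIN
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
    OPTION (RECOMPILE);   -- or OPTION (OPTIMIZE FOR UNKNOWN)
END;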

Related

UPDATE of an Azure SQL table fails from on-prem SQL Server using a linked server

I have an on-premises SQL Server database that is the backend for our project management software, an Azure SQL table that contains limited data used for reporting with Power BI, and a linked server to connect the two. Both databases have a specific user/password account just for this, which is stored in the linked server. Here's the problem:
When I run a SQL Server Agent job to update the Azure table from the on-prem table using the linked server, everything works fine.
When I manually run the same SQL UPDATE statement from an open window in SSMS, everything works fine.
When I use a workflow in the project management software to trigger a stored procedure that executes the same code (update Azure from the on-prem database), I get the following error:
The OLE DB provider "SQLNCLI11" for linked server "LinkedServerName" reported an error. One or more arguments were reported invalid by the provider.
The operation could not be performed because OLE DB provider "SQLNCLI11" for linked server "LinkedServerName" was unable to begin a distributed transaction.
OLE DB provider "SQLNCLI11" for linked server "LinkedServerName" returned message "The parameter is incorrect.". Error occurred in: STORED_PROCEDURE_NAME
Error occurred on line 23
There's nothing on line 23, and like I mentioned earlier, if I manually run the same update statement it works and if I have a SQL Server Agent Job run the same statement it works. Why does it fail when the code is executed by the project management software? Anyone have experience with this?
This is the code to insert the data from on-prem into Azure:
INSERT INTO [LinkedServerName].DatabaseName.SchemaName.TableName ([ProjectNumber], [CreateDate], [SyncDate])
SELECT [ProjectNumber], [CreateDate], [SyncDate] FROM dbo.SourceTable; -- the source table name here is illustrative
I'm not sure about this with Azure, but I had a similar issue with a remote server and had to disable the promotion of distributed transactions. It might not be the best thing to do in a production environment, so read up carefully on the implications of doing this.
I'm only suggesting trying this to narrow down what the real issue is.
Change this setting and test.
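For reference, the linked server option in question is "remote proc transaction promotion"; a sketch, assuming the linked server is named LinkedServerName as in the error messages:
-- Stops calls over this linked server from being promoted to a
-- distributed (MSDTC) transaction. Test the implications carefully first.
EXEC master.dbo.sp_serveroption
    @server   = N'LinkedServerName',
    @optname  = N'remote proc transaction promotion',
    @optvalue = N'false';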
I ended up taking a different strategy. We know that using a scheduled SQL Agent job to insert data into Azure works; it just wouldn't work in any script run by our software and the user it uses to access the on-prem database. So I created a stored procedure in the on-prem database that the software executes through a built-in workflow. The SP saves the data to a staging table and then executes the SQL Agent job, which reads from the staging table and inserts the data into an Azure table.
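The shape of that approach, sketched with placeholder names (the staging table, procedure, columns and job name are all assumptions):
-- Called by the project management software's workflow.
CREATE PROCEDURE dbo.QueueProjectSync
    @ProjectNumber NVARCHAR(50)
AS
BEGIN
    -- 1. Save the row to a local staging table.
    INSERT INTO dbo.ProjectSyncStaging ([ProjectNumber], [CreateDate], [SyncDate])
    VALUES (@ProjectNumber, GETDATE(), NULL);

    -- 2. Start the existing SQL Agent job, which reads the staging table
    --    and inserts into the Azure table over the linked server.
    EXEC msdb.dbo.sp_start_job @job_name = N'Sync Projects To Azure';
END;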
Everything worked in the testing environment, but when I replicated all the scripts into production I got a permissions error. After doing a lot of research and testing adjustments to the user, I ended up getting it to work by assigning the TargetServersRole and db_ddladmin roles to the user in the msdb database.
Below are the two articles that led me to this conclusion:
Article 1
Article 2

SQL Server Stored Procedure RPC VS SSMS

I have a stored procedure that takes 1 parameter. When I run the stored procedure from SQL Server Management Studio, it runs in 2-4 seconds. When I call it with a console application, it takes 30+ seconds. The SQL Server is remote and both SSMS and my application are being run from my local machine so I don't think it's a networking issue.
I've run SQL Server Profiler to try to track down the issue, and one thing I'm seeing is that when it's run from SSMS it starts the statement, recompiles it, then starts it over again, then completes it, like this:
SP:StmtStarting
SP:Recompile
SQL:StmtRecompile
SP:StmtStarting
SP:StmtCompleted
The 2 recompile entries have an EventSubClass of "2 - Statistics changed"
From the app I only see entries for SP:StmtStarting & SP:StmtCompleted, no recompile entries.
I'm calling exactly the same stored procedure with the same parameter value. Why does SSMS recompile based on statistics but my console app does not?
After researching and troubleshooting, it appears to be entirely due to SET ARITHABORT. SSMS defaults this to ON while the .NET SQL client defaults it to OFF, so the two were getting different execution plans, although I'm not entirely sure why the two plans are so drastically different.
I overrode the OpenConnection() method to set ARITHABORT to ON right after opening the connection, and my application then had the same performance as SSMS. I hope this helps anyone else who stumbles upon this.
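If you hit the same symptom, a quick way to test it is to issue the SET right after the connection opens, and to compare the session-level setting against what SSMS uses. A sketch:
-- Run once per connection from the application, right after opening it.
SET ARITHABORT ON;

-- Compare the option across current sessions to confirm the mismatch.
SELECT session_id, program_name, arithabort
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;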

MSSQL database on external hard drive shows Recovery Pending

I have created a database in SQL Server 2012 with the mdf and ldf files pointing to an external hard drive attached to my machine. I created tables, stored procedures, populated tables, etc.
I removed the hard drive at the end of the day.
Today, when I attached the hard drive and tried to access the DB in Management Studio, I see the name of the database with (Recovery Pending).
What does this mean? I see the mdf and ldf files in the D drive.
What worked for me was to take the database offline, then back online - no RESTORE DATABASE was necessary in this case, so far as I can tell.
In SQL Server Management Studio:
right-click on the database
select Tasks / Take Offline ... breathe deeply, cross fingers...
right-click on the database again
select Tasks / Bring Online
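The same offline/online cycle can also be done from a query window; a sketch, with the database name as a placeholder:
-- Force the database offline (rolling back any open connections),
-- then bring it back online.
ALTER DATABASE [MyDb] SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE [MyDb] SET ONLINE;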
When you removed the drive, you forcefully disconnected the database from the SQL Server service. SQL Server does not like that.
SQL Server is designed by default so that any database created is automatically kept open until either the computer shuts down, or the SQL Server service is stopped. Prior to removing the drive, you should have "Detached" the database, or stopped the SQL Server service.
You "may" be able to get the database running by executing the following command in a query window: RESTORE DATABASE [xxx] WITH RECOVERY;
You could, although I would not normally recommend this, alter the database to automatically close after there are no active connections.
To accomplish this, you would execute the following query:
ALTER DATABASE [xxx] SET AUTO_CLOSE ON WITH NO_WAIT;
Another way that works is to "Restart" the Database Engine. If feasible and/or practical for this server, it may be faster whenever you have several databases on the external drive.
In SQL Server Management Studio:
Attach the external drive
right-click on the database engine: Server Name (SQL Server 12.0.2000 ... etc.)
Select "Restart"
Answer Yes when asked if you want to proceed
Below worked for me:
Run SQL Server Management Studio as Administrator (right-click the SQL Server Management Studio icon and select 'Run As')
Take the database offline
Detach the database using the DROP option
Attach the database (a rough T-SQL sketch of the detach/attach steps is below)
If you were using this database with a web app running on IIS, then you may need to restart the IIS server
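A rough T-SQL equivalent of the detach/attach steps above, with a placeholder database name and file paths:
-- Detach the database (removes it from the instance but keeps the files),
-- then re-attach it from the files on the external drive.
EXEC sp_detach_db @dbname = N'MyDb';

CREATE DATABASE MyDb
ON (FILENAME = N'D:\Data\MyDb.mdf'),
   (FILENAME = N'D:\Data\MyDb_log.ldf')
FOR ATTACH;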
Hope this helps someone
If SQL Server knows that database recovery needs to be run but something is preventing it from starting, it marks the database as being in the 'Recovery Pending' state. This is different from the SUSPECT state, because it cannot be said that recovery is going to fail – it just hasn't started yet.
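You can confirm the state from a query window; a quick sketch (the database name is a placeholder):
-- state_desc shows ONLINE, RECOVERY_PENDING, SUSPECT, etc.
SELECT name, state_desc
FROM sys.databases
WHERE name = N'MyDb';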
Check this thread: How to fix Recovery Pending State in SQL Server Database?

SQL Server stored procedures automatically recompiled?

If I use a web application (Web Data Administrator) and I edit a stored procedure's SQL query, does it recompile on its own? (I'm new to SQL Server and this side of database development.)
MSSQL Server does maintain a cache of query plans, but this is not the same as compiled code.
SQL Server manages this cache, and it can be the source of some pain if it caches a plan that is non-optimal. Though this has happened to me fewer than 5 times in 15 years (and that seemed to be a problem with a particular server), it's best to let SQL Server handle this and not touch it.
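If you're curious what is actually sitting in the plan cache for a given procedure, something like this shows it (the procedure name is a placeholder):
-- Cached plans whose text mentions the procedure, with reuse counts.
SELECT cp.usecounts, cp.objtype, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE N'%MyProcedure%';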
You can force SQL Server to recompile by supplying the WITH RECOMPILE option. The same caveat applies: unless you have a substantial reason to, don't.
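For completeness, the recompile options look roughly like this; the procedure name and body are illustrative:
-- Recompile on every execution (the plan is never cached for reuse).
CREATE PROCEDURE dbo.MyProcedure
WITH RECOMPILE
AS
    SELECT 1;

-- Or flag an existing procedure so its plan is rebuilt on its next use.
EXEC sp_recompile N'dbo.MyProcedure';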
SQL is effectively a scripting language here: the procedure text you write isn't compiled ahead of time into a binary. Rather, it is stored on the server to be used later (and compiled into an execution plan when it runs).
When you edit a stored procedure, you can execute an ALTER script, or a DROP then CREATE script. This sends the text in your Web Data Admin (or SSMS) window to the server, issuing a command that tells the server to store this new query as a procedure for later use.
So, in short, yes, if you execute an ALTER script.
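In other words, editing a procedure in any of these tools boils down to sending something like this to the server (the name and body are illustrative):
-- Replaces the stored text of the procedure; a new execution plan is
-- compiled the next time the procedure runs.
ALTER PROCEDURE dbo.MyProcedure
AS
    SELECT 2;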

Strange SQL Server 2005 behavior

Background:
I have a site built in ASP.NET with SQL Server 2005 as its database. The site is the only site on a Windows Server 2003 box sitting in my client's server room. The client is a local school district, so for data security reasons there is no Remote Desktop access and no remote SQL Server connection; if I have to service the database I have to be at the terminal. I do have FTP access to update ASP code.
Problem:
I was contacted yesterday about an issue with the system. When I looked in to it, it seems a bug that I had solved nearly a year ago had returned. I have a stored procedure that used to take an int as a parameter but a year ago we changed the structure of the system and updated the stored procedure to take an nvarchar(10). The stored procedure somehow changed back to taking an int instead of an nvarchar.
There is an external hard drive connected to the server that copies data periodically and has the ability to restore the server in case of failure. I would have assumed that somehow an older version of the database had been restored, but data that I know was inserted 7 days and 1 day before the bug occurred is still in the database.
Question:
Is there any way that the structure of a SQL Server 2005 database can revert to a previous version, or be restored to a previous version, without touching the actual data? No one else should have access to the server, so I'm going a little insane trying to figure out how this even happened.
Any ideas?
Using SQL Server's built-in backup and restore mechanism, there is no way to pick only certain objects to restore. With transaction log backups, you can restore to a point in time, which might be before a certain transaction or ALTER statement was made, but that's the closest you get. There are tools that will let you pick certain objects to restore; however, they work by either restoring the database to a copy and copying over the objects you want, or by reading the backup directly and copying out those objects. In other words, this is not something that could have happened accidentally using the built-in tools. My guess is that someone accidentally ran an old script of the stored proc(s) that reverted it.
It would be trivial to change a stored procedure without touching any data, or any other stored procedure. How, who, why, when: that's the problem.
One suggestion: run
SELECT name, create_date, modify_date FROM sys.procedures ORDER BY modify_date DESC;
and check the create_date and modify_date columns, for both your problem procedure and all other procedures in the database.
I've witnessed similar things happening with an app I have installed at one client location. Every so often the sprocs revert to an older version.
It's just the one client; the app is installed at several others that have never had this issue, and this client happens to be a school district as well. It happens about once every 3 months or so, and no one should be touching that machine. I'm not even sure they have anyone in-house who would know how to open Enterprise Manager.
Out of curiosity, what backup software is your client using? And, after checking the creation/modify dates on the procedures, did a server reboot occur around that time?
The reason I ask is that my client has backup software that does some really weird things on that server. For example, on reboot it has to "play back" changes, including file operations, since the last successful backup. Also, is it installed in a VM?
Through Data Transformation Services (DTS)? Or perhaps the scripts that set up the database are available someplace and were re-run.