We're running Azure SQL Single Database (serverless tier), and our development-environment SQL servers appear not to pause even though the databases are out of use and auto-pause is correctly configured.
We've narrowed it down to SSMS running the following SQL query against the database whenever it has a query window open, but we have no idea how to prevent it.
(@type int)SELECT file_id, name, size AS size_8KB, max_size AS max_size_8KB,
    ISNULL(FILEPROPERTY(name, 'SpaceUsed'), size) AS space_used_8KB
FROM sys.database_files
WHERE type = @type
ORDER BY size DESC
This query is run every 5 to 7 minutes while SSMS is open, which is causing us considerable headache and cost.
Does anyone know what feature of SSMS is calling this query and how to turn it off?
As far as I know about the serverless tier, the database can be paused when it is inactive. But while SSMS or a query editor is open, the connection to the SQL database stays open, which means the database is always considered active, so the auto-pause configuration won't take effect.
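To see what is keeping the database active, you can list the open sessions against it. A minimal sketch using the sys.dm_exec_sessions DMV (run it against the serverless database itself); any user session shown, including an idle SSMS query window, will prevent auto-pause:

-- List user sessions holding the database active (each one blocks auto-pause)
SELECT session_id, login_name, program_name, status, last_request_end_time
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;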
See this document: https://learn.microsoft.com/en-us/azure/azure-sql/database/serverless-tier-overview#performance-configuration
HTH.
I just used the SQLAzureMW (SQL Azure Migration Wizard Tool) to migrate my SQL Server database to Azure SQL. It went off without a hitch - all my tables are there, the website is running fine off it, etc.
Here's what's odd: if I execute a simple SELECT statement against my tables, I get only a few of the rows. I assumed they were missing, but my website is using some of those records as if they're there. So I queried with a WHERE clause and BAM - they showed up. How the... what the... why isn't my select showing me everything? This applies to many of the tables I've tested.
(Screenshots: the same query's results in SQL Azure vs. on-premise SQL Server.)
I gave up on MS SQL Management Studio and am instead using SQL Server Object Explorer from Visual Studio 2012/2013. It functions properly and allows inline editing of data.
Consider this SELECT statement:
SELECT
SvcTimeID,
LoginName,
MeanSeconds,
MedianSeconds,
RequestCount,
StdDevSeconds,
SvcDate,
CAST (TS AS INT) AS TS
FROM dbo.SvcTime
WHERE SvcDate >= @SvcDate
Where the parameter is set:
cmd.Parameters["@SvcDate"].Value = DateTime.UtcNow - new TimeSpan(31, 0, 0, 0);
Execute that statement in an Azure Web Role: it brought back, say, 24 rows.
Now, insert two new rows; wait at least one minute; execute the statement again. Do the recently inserted rows appear? In my case, they did not. Note: the default value of SvcDate in the database is getutcdate().
Move the SQL Azure database from the web edition to the standard (S2) edition. Rows magically appear.
Here is my theory: the issue you had was not with MS SQL Management Studio but with SQL Azure itself, where, under certain circumstances, the same query will return the original rows from a cache somewhere and miss the new rows in the database.
This has blown any remaining confidence I had with Azure.
I was scared at first, but I think this has an explanation:
If you inserted some rows in connection "A" and can't find them in other sessions, maybe you have an uncommitted transaction. By default, in on-premise SQL Server, your second connection would hang until the transaction is committed or rolled back (isolation level READ COMMITTED).
Somehow, using the same isolation level, Azure acts differently. It seems to work in some cases like snapshot isolation: you can read from the table, but the results are not updated. Or maybe the locks are taken in a different way.
To solve this, check sysprocesses for sessions with open_tran > 0, or just be careful committing transactions. In the example, running COMMIT in your session "A" should do it.
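A minimal sketch of that check against the legacy sysprocesses view:

-- Find sessions that still have an open transaction
SELECT spid, loginame, program_name, open_tran, status
FROM sys.sysprocesses
WHERE open_tran > 0;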
Good luck!
I used the below command to create a database snapshot in SQL Server 2008 R2:
CREATE DATABASE dbss ON
(
    NAME = "os-file-name",
    FILENAME = 'path'
)
AS SNAPSHOT OF dbName;
GO
I got this error:
Database Snapshot is not supported on Standard Edition (64-bit).
Does anyone know how I can create a database snapshot in SQL Server 2008 R2?
Database Snapshot is a feature of the Enterprise Edition and the 2008 Developer Edition.
Besides that, there is little use for snapshots for a "common user"; most things can be done with a backup too.
The main purpose of snapshots is running expensive queries against rapidly changing data.
If you have a huge database and need to execute a long-running query for a report, there is the danger that the data may change while the query or procedure fetches it. In that case you need snapshots: you can query all your data without being affected by concurrent changes.
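On Standard Edition, a point-in-time copy for reporting can be approximated with a plain backup and restore instead. A minimal sketch, with hypothetical database, file, and path names:

-- Back up the source and restore it as a separate reporting copy
-- (database, logical file, and path names here are hypothetical)
BACKUP DATABASE SalesDb TO DISK = 'C:\Backups\SalesDb.bak' WITH COPY_ONLY;

RESTORE DATABASE SalesDb_Report
FROM DISK = 'C:\Backups\SalesDb.bak'
WITH MOVE 'SalesDb_Data' TO 'C:\Data\SalesDb_Report.mdf',
     MOVE 'SalesDb_Log'  TO 'C:\Data\SalesDb_Report.ldf';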
I have a rather large (many gigabytes) table of data in SQL Server that I wish to move to a table in another database on the same server.
The tables are the same layout.
What would be the most efficient way of going about doing this?
This is a one off operation so no automation is required.
Many thanks.
If it is a one-off operation, why care about top efficiency so much?
SELECT * INTO OtherDatabase..NewTable FROM ThisDatabase..OldTable
or
INSERT OtherDatabase..NewTable
SELECT * FROM ThisDatabase..OldTable
...and let it run overnight. I would dare to say that using SELECT/INSERT INTO on the same server is not far from the best efficiency you can get anyway.
Or you could use the "SQL Import and Export Wizard" found under "Management" in Microsoft SQL Server Management Studio.
I'd go with Tomalak's answer.
You might want to temporarily put your target database into bulk-logged recovery mode before executing a 'select into' to stop the log file exploding...
If it's SQL Server 7 or 2000 look at Data Transformation Services (DTS). For SQL 2005 and 2008 look at SQL Server Integration Services (SSIS)
Definitely put the target DB into bulk-logged mode. This will minimally log the operation and speed it up.
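A minimal sketch of the whole sequence, using the database and table names from the answers above (assuming the target normally runs in FULL recovery; take a log backup afterwards to restart the log chain):

-- Switch the target database to bulk-logged recovery for minimal logging
ALTER DATABASE OtherDatabase SET RECOVERY BULK_LOGGED;

-- Copy the table (SELECT INTO is minimally logged under bulk-logged recovery)
SELECT * INTO OtherDatabase..NewTable
FROM ThisDatabase..OldTable;

-- Switch back to the original recovery model
ALTER DATABASE OtherDatabase SET RECOVERY FULL;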
We have a database running on SQL Server 2005. One of the stored procedures looks up a user's email address from Active Directory using a linked server. The call to the linked server occurs in a database function.
I'm able to call it successfully from my ASP.NET application the first time, but periodically after that it fails with the following error:
{"The requested operation could not be performed because OLE DB provider \"ADsDSOObject\" for linked server \"ADSI\" does not support the required transaction interface."}
It appears that the amount of time between calls to the function affects whether the linked server query will work correctly. I am not using any transactions. When I try calling the function in a quick makeshift SQL script, it runs fine every time (even when tested in quick succession).
Is there some sort of transaction being left open that naturally dies if I don't try calling the procedure again? I'm at a loss here.
Here is the simple call in the stored procedure:
DECLARE @email varchar(50)
SELECT @email = LEFT(mail, 50)
FROM OPENQUERY (
    ADSI,
    'SELECT mail, sAMAccountName FROM ''LDAP://DC=Katz,DC=COM'' WHERE objectCategory = ''Person'' AND objectClass = ''User'''
)
WHERE sAMAccountName = CAST(@LoginName AS varchar(35))
RETURN @email
I've worked with SQL Server linked servers often, though rarely LDAP queries... but I got curious and read the Microsoft support page linked in Ric Tokyo's previous post. Towards the bottom it reads:
It is typical for a directory server to enforce a server limitation on the number of objects that will be returned for a given query. This is to prevent denial-of-service attacks and network overloading. To properly query the directory server, large queries should be broken up into many smaller ones. One way to do this is through a process called paging. While paging is available through ADSI's OLEDB provider, there is currently no way available to perform it from a SQL distributed query. This means that the total number of objects that can be returned for a query is the server limit. In the Windows 2000 Active Directory, the default server limit is 1,000 objects.
I'm thinking that the reason it fails for you (or not) depending on whether you call it from the app or from a "quick makeshift SQL script" (as you put it) might be related to the security context under which the operation is executing. Depending on how the linked server connection was set up, the operation could be executed under a variety of possible credentials depending on how you initiate the query.
I don't know, but that's my best guess. I'd look at the linked server configuration, in particular the settings for which credentials are used as the security context under which operations executed across the linked server run.
Rather than query Active Directory through a linked server, you might be better off caching your AD data into a SQL database and then querying that instead. You could use Integration Services by creating an OLE DB connection using the "OLE DB Provider for Microsoft Directory Services" and having a DataReader source with a query like:
SELECT physicalDeliveryOfficeName, department, company, title, displayName, SN,
givenName, sAMAccountName, manager, mail, telephoneNumber, mobile
FROM 'LDAP://DC=SOMECO,DC=COM'
WHERE objectClass='User' and objectCategory = 'Person'
order by mail
Using this method you will still run into the 1,000-row limit for results from an AD query (note it is NOT advisable to try to increase this limit in AD; it is there to prevent the domain controller from becoming overloaded). Sometimes it's possible to use a combination of queries to return the full data set, e.g. names A to L and M to Z.
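A hedged sketch of that splitting approach through the linked server, filtering on the sn (surname) attribute; the domain and filters are the ones from the query above, and I have not verified the range comparison against every directory setup:

-- Combine two sub-1,000-row ranges to work around the server-side limit
SELECT * FROM OPENQUERY(ADSI,
    'SELECT mail, sAMAccountName FROM ''LDAP://DC=SOMECO,DC=COM''
     WHERE objectCategory = ''Person'' AND objectClass = ''User'' AND sn < ''M''')
UNION ALL
SELECT * FROM OPENQUERY(ADSI,
    'SELECT mail, sAMAccountName FROM ''LDAP://DC=SOMECO,DC=COM''
     WHERE objectCategory = ''Person'' AND objectClass = ''User'' AND sn >= ''M''');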
Alternatively you could use the CSVDE command line utility in Windows Server to export your directory information to a CSV file and then import it into a SQL database (see http://computerperformance.co.uk/Logon/Logon_CSVDE_Export.htm for more info on exporting AD data with CSVDE).
Please read the support page from Microsoft.
I suspect that it might be the cached query plan, due to your statement that "When I try calling the function in a quick makeshift SQL script, it runs fine every time (even when tested in quick succession)."
Could you try executing your stored procedure like so:
EXEC usp_MyProcedure WITH RECOMPILE
This question appears at the top of the first Google results page when searching for the error string, but has no valid answer.
This error happens intermittently when the isolation level is not specified in the .NET code or in the stored procedure.
This error also happens in SQL Server 2008.
The fix is to force SET TRANSACTION ISOLATION LEVEL READ COMMITTED (or READ UNCOMMITTED), because any higher isolation level is not supported by the Active Directory provider, and SQL Server is trying to use SERIALIZABLE.
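A minimal sketch of the fix, reusing the procedure name from the earlier answer:

-- Force a low isolation level before the linked-server call;
-- the ADSI provider does not support SERIALIZABLE
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
EXEC usp_MyProcedure;  -- the procedure containing the OPENQUERY call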
Now, since this error is intermittent: why do ADO.NET or SQL Server switch the default isolation level to SERIALIZABLE sometimes and sometimes not? What triggers this switching?