Azure SQL "select" query not showing all rows - sql

I just used the SQLAzureMW (SQL Azure Migration Wizard Tool) to migrate my SQL Server database to Azure SQL. It went off without a hitch - all my tables are there, the website is running fine off it, etc.
Here's what's odd: if I execute a simple SELECT statement against my tables, I get only a few of the rows. I assumed they were missing, but my website is using some of those records as if they're there. So I queried with a WHERE clause and BAM - they showed up. How the... what the... why isn't my select showing me everything? This applies to many of the tables I've tested.
(Screenshots comparing the same query's results on SQL Azure and on-premises omitted.)

I gave up on MS SQL Management Studio and am instead using SQL Server Object Explorer from Visual Studio 2012/2013. It functions properly and allows inline editing of data.

Consider this SELECT statement:
SELECT
SvcTimeID,
LoginName,
MeanSeconds,
MedianSeconds,
RequestCount,
StdDevSeconds,
SvcDate,
CAST (TS AS INT) AS TS
FROM dbo.SvcTime
WHERE SvcDate >= @SvcDate
Where the parameter is set:
cmd.Parameters["#SvcDate"].Value = DateTime.UtcNow - new TimeSpan(31, 0, 0, 0);
Execute that statement in an Azure Web Role; it brought back, say, 24 rows.
Now, insert two new rows; wait at least one minute; execute the statement again. Do the recently inserted rows appear? In my case, they did not. Note: the default value of SvcDate in the database is getutcdate().
Move the SQL Azure database from the web edition to the standard (S2) edition. Rows magically appear.
Here is my theory. The issue you had was not with MS SQL Management Studio but with SQL Azure itself where, under certain circumstances, the same query will return the original rows from a cache someplace and will miss the new rows in the database.
This has blown any remaining confidence I had in Azure.

I was scared at first, but I think this has an explanation:
If you inserted some rows in connection "A" and can't find them from other sessions, you may have an uncommitted transaction. By default, in on-premises SQL Server, your second connection would hang until the transaction is committed or rolled back (READ COMMITTED isolation level).
Somehow, under the same isolation level, Azure acts differently. It seems to work in some cases like snapshot isolation: you can read from the table, but the results are not updated. Or maybe the locks are taken in a different way.
To solve this, check sysprocesses for sessions with open_tran > 0, or just be careful about committing transactions. In the example, running COMMIT in your session "A" should do it.
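For example (just a sketch; these are standard system views, but spotting which session is the culprit is up to you), you can look for open transactions and also confirm whether the database is really running with snapshot-style reads:
-- Sessions that still have an open transaction
SELECT spid, loginame, hostname, program_name, open_tran
FROM sys.sysprocesses
WHERE open_tran > 0;

-- Does the database use read-committed snapshot / snapshot isolation?
-- (Azure SQL Database has READ_COMMITTED_SNAPSHOT enabled by default.)
SELECT name, is_read_committed_snapshot_on, snapshot_isolation_state_desc
FROM sys.databases
WHERE name = DB_NAME();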
Good luck!

Related

Migrating legacy data from SQL Server 2000 to 2019, log block error - is there a painless way of moving over tables with autoinc identity columns?

I've been tasked with migrating data from an instance of SQL Server 2000 to 2019. There are a total of four databases to bring over, three of which I was able to backup/restore into 2008 and then into 2019 without any issues. Please note: I am not a DBA in any sense, though I'm the closest thing to one on hand.
The fourth and final database presented the following error that prevented moving from 2008 to 2019:
System.Data.SqlClient.SqlError: An error occurred while processing the log for database 'DbNameHere'. The log block version 2 is unsupported. This server supports log version 3 to 6. (Microsoft.SqlServer.SmoExtended)
Is there a simple fix for this problem that I'm missing in the various SSMS menus?
Alternatively, is there a way to copy raw data from one server to another via, for instance, a flat file, and preserve the identity columns as identity columns? That is, I don't want to just strip that column and bulk insert, as they are often used as foreign keys in other tables, and with twenty-some-odd years of data, something is bound to break in doing this.
An example of an ideal final result in this solution would be something like: legacy table X has 1000 rows, the last of which has an identity column value of 1000. Once the move is complete, new table X has 1000 rows, the last of which has an identity column value of 1000, and upon insert the next row automatically increments to 1001.
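For reference, the kind of flat-file load I'm imagining would look something like this (table name, file path and delimiters are made up; KEEPIDENTITY keeps the original identity values, and the CHECKIDENT afterwards makes sure the next insert continues from the highest loaded value):
-- Load exported rows while keeping the original identity values
BULK INSERT dbo.LegacyTableX
FROM 'C:\export\LegacyTableX.dat'
WITH (KEEPIDENTITY, FIELDTERMINATOR = '|', ROWTERMINATOR = '\n');

-- Make the next generated value continue from the highest loaded one
DBCC CHECKIDENT ('dbo.LegacyTableX', RESEED);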
Apart from unsuccessfully messing around with flat files, I've also tried the "Copy Database" option in SSMS, which also failed.
I would attempt to get SQL Server to rebuild the transaction log. Based on the error message, that might sort out the situation.
First use sp_detach_db to detach the database. The ldf file is then very likely not needed for the subsequent attach, and rebuilding the log this way may sort out the situation.
Then attach the database without the ldf file, using CREATE DATABASE with either the FOR ATTACH or the FOR ATTACH_REBUILD_LOG option.
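Roughly like this (the database name and file path are placeholders for your environment):
-- Detach on the instance that currently hosts the database
EXEC sp_detach_db @dbname = N'DbNameHere';

-- Re-attach from the data file only; SQL Server builds a fresh log file
CREATE DATABASE DbNameHere
ON (FILENAME = N'C:\Data\DbNameHere.mdf')
FOR ATTACH_REBUILD_LOG;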
I would do this on the 2008 instance, since from what I understand you got the database in there successfully. But feel free to play around with which version (2000 or 2008) you do the detach on, and which version (2000, 2008, 2019) you do the attach on.

Azure SQL Single DB (Serverless) Autopause vs SSMS (SQL Server Management Studio)

We're running Azure SQL Single Database (Serverless tier) and are having problems with our development environment SQL servers appearing not to pause despite the DBs being out of use and autopause being correctly configured.
We've narrowed it down to SSMS running the following SQL query against the DB if it has a query window open, but we have no idea how to prevent it.
(@type int)SELECT file_id, name, size AS size_8KB, max_size AS max_size_8KB, ISNULL(FILEPROPERTY(name, 'SpaceUsed'), size) AS space_used_8KB
FROM sys.database_files
WHERE type = @type ORDER BY size DESC
This query is run every 5 - 7 minutes while SSMS is open. This is causing us considerable headache and cost.
Does anyone know what feature of SSMS is calling this query and how to turn it off?
As far as I know about serverless, the database can be paused when it is inactive. But when SSMS or the query editor is open, the connection to the SQL database stays open, which means the database is always considered active, so the autopause configuration won't take effect.
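A quick way to check what is keeping the database awake is to list the open sessions yourself (just a sketch; run it against the serverless database):
SELECT session_id, login_name, program_name, host_name, last_request_end_time
FROM sys.dm_exec_sessions
WHERE is_user_process = 1;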
Ref this document: https://learn.microsoft.com/en-us/azure/azure-sql/database/serverless-tier-overview#performance-configuration
HTH.

SQL Server SSIS Transaction Handling

My organization is upgrading to SQL Server 2008 from 2000 (yay!) and I am unfamiliar with the inner workings of Integration Services.
I currently manage very large databases that store business process transactions amounting to between 500,000 and 1,000,000 transactions daily. We've had very poor management of the databases in the past, and they have thus grown to an unmaintainable size. I'm working to provide daily archival of the databases so that the working databases are more manageable. I wrote several stored procedures to do an archive and subsequently prune the working databases. However, in dabbling with Integration Services, I've found great built-in functionality for the job that my SPs currently do.
What I've created are several SSIS packages that perform an export/import. Since I'm only interested in certain data, I use a custom query in the packages that is of the form:
DELETE TransactionTable
OUTPUT DELETED.*
WHERE (EventTimestamp >= DATEADD(D, 0, DATEDIFF(D, 0, (SELECT MIN(EventTimestamp) FROM TransactionTable))))
  AND (EventTimestamp < DATEADD(HH, 0, (DATEADD(YY, -1, DATEDIFF(D, 1, GETDATE())))));
This query grabs the data I'm interested in and deletes it from the working table. Using SSIS, this query produces the output that is placed into the archive table.
My question(s) are:
Since I want to import records into my archive and delete those records from the working database within the same SSIS package to ensure consistency, this query seems to be the way to do it. However, I'm concerned about the structure of the transaction. I'm deleting records from my working database as output to be inserted into my archive database. How does SQL Server handle errors in this case? Is running this package safe? What happens if the output generated by the statement is invalid and an error occurs? Does the statement get rolled back? Will the DELETE only be committed if all of the output was able to be transferred to the archive? If not, how might I be able to achieve a fail-safe condition?
The good news is, you can set the SSIS package to roll back a set of tasks if any failure occurs.
There is a TransactionOption property that is available on pretty much every container/task, including the package itself. You can set it to Required, Supported, and NotSupported.
You can find details on each option here: http://msdn.microsoft.com/en-us/library/ms137690.aspx
Obviously play around with this a bit, forcing errors on different steps to see what the result is.
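If you'd rather make the archive-and-prune atomic at the T-SQL level instead of (or in addition to) the SSIS/DTC transaction, a rough sketch looks like this (the archive database, table and column names are made up, and the date filter is simplified):
BEGIN TRY
    BEGIN TRANSACTION;

    -- Copy the year-old rows into the archive first...
    INSERT INTO ArchiveDb.dbo.TransactionTable (TransactionID, EventTimestamp, Payload)
    SELECT TransactionID, EventTimestamp, Payload
    FROM dbo.TransactionTable
    WHERE EventTimestamp < DATEADD(YEAR, -1, CAST(GETDATE() AS DATE));

    -- ...then delete them from the working table in the same transaction.
    DELETE dbo.TransactionTable
    WHERE EventTimestamp < DATEADD(YEAR, -1, CAST(GETDATE() AS DATE));

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Any failure rolls back both statements, so nothing is lost or duplicated.
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;

    DECLARE @msg nvarchar(2048) = ERROR_MESSAGE();
    RAISERROR(@msg, 16, 1);
END CATCH;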

mysql query listener

Do you know of a tool that will let me see which queries were run against the database?
Thanks for the help.
You can use the built-in MySQL Query Profiler.
The new profiler became available in version 5.0.37 of the MySQL Community Server.
And:
To begin profiling one or more SQL queries, simply issue the following command:
mysql> set profiling=1;
Query OK, 0 rows affected (0.00 sec)
Two things happen once you issue this command. First, any query you issue from this point on will be traced by the server with various performance diagnostics being created and attached to each distinct query. Second, a memory table named profiling is created in the INFORMATION_SCHEMA database for your particular session (not viewable by any other MySQL session) that stores all the SQL diagnostic results. This table remains persistent until you disconnect from MySQL at which point it is destroyed.
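Once a few queries have been run with profiling on, you can pull the results back out. For example (the query number will differ in your session):
mysql> SHOW PROFILES;
mysql> SHOW PROFILE FOR QUERY 1;
mysql> SELECT QUERY_ID, SEQ, STATE, DURATION
    ->     FROM INFORMATION_SCHEMA.PROFILING
    ->     ORDER BY QUERY_ID, SEQ;
SHOW PROFILES lists the traced queries, SHOW PROFILE FOR QUERY n breaks one of them down step by step, and the last statement reads the raw diagnostics from the INFORMATION_SCHEMA.PROFILING table mentioned above.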

How to find recent sql update operations acting upon a certain table (SQL Server 2005)

Say I want to find the latest added or updated rows in table XX (an UPDATE by any user, not necessarily the one executing the query).
You would need to use a Transaction Log reader tool. There are several free ones available as well as commercial offerings.
ApexSQL Log
You could also try this undocumented command:
DBCC LOG(<database name>[,{0|1|2|3|4}]).
If you're using SQL Server 2000, RedGate have a free tool called SQL Log Rescue.
EDIT: Documentation for DBCC LOG:
(1) (2)
Please refer to the SQL Server docs and look for the OUTPUT clause (which you can use with UPDATE/INSERT to get the affected records).
http://msdn.microsoft.com/en-us/library/ms177564.aspx
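For instance (a sketch with made-up table and column names), an UPDATE can report exactly which rows it touched:
UPDATE dbo.XX
SET Status = 'Processed'
OUTPUT inserted.ID, deleted.Status AS OldStatus, inserted.Status AS NewStatus
WHERE Status = 'Pending';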
SQL Server Profiler will allow you to track hits to the database in real time. You can set filters on a number of properties to get the output you need.