I currently have an Azure S2 database running via the new Azure Portal.
I noticed my billing was higher than it should be, and after investigating further I found that new databases were appearing every day and then disappearing.
Basically, something is running a CreateDatabase and DeleteDatabase event every evening, and I'm being charged an extra hour each day.
Microsoft's response is:
"Our Operations Team investigated the issue and found that these databases did indeed exist in a 1 hour windows at midnight PST every day. It looks like you may have some workload which is doing this unknowingly or an application with permissions which is unknowingly creating these databases and then dropping them. "
I haven't set up any scripts to do this, and I have no apps running that could be doing this.
How can I find out what's happening?
Regards
Ben
We're trying to migrate to Azure SQL, and have built a prod and a test SQL server (using Azure DevOps, Bicep and PowerShell). We have a requirement for a manual process in an Azure DevOps pipeline (it needs to be manual because we need a steady state in test when getting ready for a release) to copy the prod databases over the top of the test ones when we need to refresh the data. As the prod databases may not be consistent during the day, when this is triggered the database we want to restore is as it was at 4am that morning.
We originally attempted this with a nightly pipeline that ran New-AzSqlDatabaseCopy to copy the prod databases to a serverless backup copy on the test server (I couldn't use the elastic pool the test databases sit in, as it's at the limit of the number of databases it can hold). We could then drop each test database and do a create-as-copy-of to recreate it as needed. This performed really nicely, but it resulted in us running up a massive bill (think six times the bill for the whole company). We're still trying to understand why with the support team, but I suspect it's down to the interplay between the retention period of deleted Azure databases and us doing a delete and restore every night.
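For reference, the original nightly approach boiled down to something like this (a rough sketch assuming the Az.Sql PowerShell module; the resource group, server, database and pool names are placeholders):

# Step 1 (nightly): copy the prod database to a staging copy on the test server
# (in our setup this staging copy sat outside the elastic pool, on serverless compute)
New-AzSqlDatabaseCopy -ResourceGroupName "rg-prod" -ServerName "sql-prod" -DatabaseName "AppDb" `
    -CopyResourceGroupName "rg-test" -CopyServerName "sql-test" -CopyDatabaseName "AppDb-staging"

# Step 2 (on demand): drop the current test database and recreate it from the staging copy
Remove-AzSqlDatabase -ResourceGroupName "rg-test" -ServerName "sql-test" -DatabaseName "AppDb-test" -Force
New-AzSqlDatabaseCopy -ResourceGroupName "rg-test" -ServerName "sql-test" -DatabaseName "AppDb-staging" `
    -CopyResourceGroupName "rg-test" -CopyServerName "sql-test" -CopyDatabaseName "AppDb-test" `
    -ElasticPoolName "pool-test"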
Ideally, I'd like to do a point-in-time restore of the prod database over the top of the existing database on the test server, but combinations of New-AzSqlDatabaseCopy and Restore-AzSqlDatabase don't seem to be able to get me there. I'd also need to be sure that this approach wouldn't slow down the prod databases, wouldn't cost an excessive amount, and would be reasonably performant.
I’d be comfortable with detaching the backup from the restore, and running the backup step early every morning as a fallback, again as long as it didn’t cost an excessive amount.
In terms of speed, I'm not too fussed about how long the backup step takes as long as it's detached from the restore, but the restore step needs to be as efficient as possible, as it puts our test instance out of action for as long as it runs.
Has anyone got to a solution like this that works effectively and efficiently? Any help gratefully received!
Sort of is the honest answer! We never worked out a way of doing it across two servers and Microsoft support ended up saying they didn't think it was feasible, but we got to a nice compromise.
We created a single server for both sets of databases, but placed them in two elastic pools. As the server is just a logical arrangement, and the thing we wanted to protect against was overwhelming the compute, the elastic pools ring-fenced the live compute nicely.
We could then do point-in-time restores from live into test using PowerShell, restoring live as at last night without the need for a separate backup step. This approach does mean that secrets are shared between the two, but it covered off our needs well.
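For anyone after the mechanics, the restore step ends up looking roughly like this (a sketch assuming the Az.Sql module and a single logical server with separate live and test elastic pools; all names are placeholders):

# Grab the live database so its ResourceId can be used as the restore source
$live = Get-AzSqlDatabase -ResourceGroupName "rg-sql" -ServerName "sql-shared" -DatabaseName "AppDb"

# Drop the existing test copy
Remove-AzSqlDatabase -ResourceGroupName "rg-sql" -ServerName "sql-shared" -DatabaseName "AppDb-test" -Force

# Point-in-time restore of live (as at 4am today) straight into the test elastic pool
Restore-AzSqlDatabase -FromPointInTimeBackup `
    -PointInTime (Get-Date).Date.AddHours(4) `
    -ResourceId $live.ResourceId `
    -ResourceGroupName "rg-sql" -ServerName "sql-shared" `
    -TargetDatabaseName "AppDb-test" `
    -ElasticPoolName "pool-test"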
I'm making a financial app and I've run into some problems with recurring payments like fixed payments, salary, bank savings and so on. I tried to add these payments on a certain day by comparing the current day with the payment day. The code is something like this:
If Date.Now.Day = GetPayDate(payDate) Then
    ' apply the recurring payment here
End If
It's in a start-up event and it works, but the problem is that if users don't open the app on that day, the payment is skipped and nothing gets added.
I'm using ADO.NET with a SQL database. It's a local client app without real-time data.
For it to work correctly, users don't have to log on but the app must be running, so I tried to fix it by adding an auto-start function. But that's not an option either, because users may not use the computer for a few days.
Is there any other way to resolve this problem? I just need some solutions or ideas, so that even if users don't use the app for 2 or 3 months, it still calculates everything once they log on.
Sounds like you really need a Windows service that runs on startup, or a scheduled task. A Windows service is a type of C# / VB.Net application that's designed to run in the background and has no UI. The Windows Task Scheduler can start a program on a regular basis.
For more info about Windows services, see https://msdn.microsoft.com/en-us/library/zt39148a%28v=vs.110%29.aspx. For more information on scheduled tasks, see http://www.7tutorials.com/task-scheduler. For a discussion about which is better, see Which is better to use for a recurring job: Service or Scheduled Task?
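If you go the scheduled-task route, registering one from PowerShell only takes a few lines (a rough sketch; the executable path, task name and time are placeholders, and depending on the options it may need an elevated prompt):

# Run the payment-processing executable every day at 6:00, whether or not the user opens the app
$action  = New-ScheduledTaskAction -Execute "C:\MyApp\ProcessPayments.exe"
$trigger = New-ScheduledTaskTrigger -Daily -At 6am
Register-ScheduledTask -TaskName "ProcessRecurringPayments" -Action $action -Trigger $trigger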
Or you could compare the current date to >= the pay date if you don't mind paying a few days late.
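Building on that, a catch-up loop on startup handles even multi-month gaps. A conceptual sketch (in PowerShell for brevity; the real app would do the same in .NET against its own tables, and the dates and pay day below are invented):

# Replay every day between the last processed date and today, applying any pay dates that were missed
$lastProcessed = Get-Date "2015-01-15"    # in the real app, loaded from the database
$payDay        = 25                       # day of month the payment falls due
$today         = (Get-Date).Date

$d = $lastProcessed.AddDays(1)
while ($d -le $today) {
    if ($d.Day -eq $payDay) {
        # insert the recurring payment row for $d here
    }
    $d = $d.AddDays(1)
}
# finally store $today as the new last-processed date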
I've taken over maintenance of an aspdotnetstorefront website whose admin-panel maintenance scripts appear not to have been run for several years.
I've since run each of them, but this has only freed up roughly 50 MB. There is a table in the database (dbo.SecurityLog) that has rows going back several years (to 2012). I assume these rows are what is displayed under Maintenance > Security, but because of the sheer size of the table, the application times out (Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.)
Would deleting the older rows from 2012 be safe, or could this cause issues with the application?
So I resolved this based on the following link:
Is my SQL Server Express 2008 R2 database about to hit the size limit and crash my website?
The contributors there discuss several issues with aspdotnetstorefront regarding database size growth, along with methods for clearing rows, one of which covers the SecurityLog table. I cleared this table up to a specified date and all is well, freeing up 300 MB+.
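If anyone else hits the same timeout, the purge can also be done outside the admin panel by deleting in batches directly against the database. A rough sketch using Invoke-Sqlcmd from the SqlServer module; the instance, database and cutoff date are placeholders, and CreatedOn is an assumed column name, so check the real schema first:

$query = @"
-- delete SecurityLog rows older than the cutoff, 50,000 at a time, to keep each transaction small
WHILE 1 = 1
BEGIN
    DELETE TOP (50000) FROM dbo.SecurityLog
    WHERE CreatedOn < '2014-01-01';   -- CreatedOn is an assumption; use the table's real date column
    IF @@ROWCOUNT = 0 BREAK;
END
"@

Invoke-Sqlcmd -ServerInstance ".\SQLEXPRESS" -Database "StoreDb" -Query $query -QueryTimeout 3600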
I have a problem that has puzzled me for a while now. Once in a while, say 4-5 times a week, we get timeouts from the database at HH:03 (or sometimes HH:02, I think).
I've been digging into the scheduled tasks on the server to investigate whether something is bringing the server to its knees performance-wise, without any findings.
I've even gone so far as to build a watchdog for the application: when a query has only 1 second left of its maximum query time, it checks the database's process list and emails it to me. The process list always contains just one entry, and that's the query that is about to get a timeout exception.
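For what it's worth, the watchdog's process-list snapshot boils down to a query like the one below; including the wait columns from sys.dm_exec_requests is often more telling than the bare process list (a sketch via Invoke-Sqlcmd; the instance and database names are placeholders, and it needs VIEW SERVER STATE permission):

$query = @"
SELECT r.session_id, r.status, r.wait_type, r.wait_resource,
       r.blocking_session_id, r.cpu_time, r.total_elapsed_time, t.text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID;
"@
Invoke-Sqlcmd -ServerInstance ".\SQLEXPRESS" -Database "master" -Query $query | Format-Table -AutoSize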
To add to the complexity, we have many customers on this application, but only one of them gets this timeout. All customers run the same code but have different databases and different application pools with different application pool identities.
The application is an ASP.NET application. The database is Microsoft SQL Server 2008 R2 Express Edition.
Has anyone heard of something like this? Can anyone give me any pointers about what to investigate in order to resolve this issue?
Kind regards
We do not use our Azure storage account for anything except standard Azure infrastructure concerns (i.e. no application data). For example, the only tables we have are the WAD (Windows Azure Diagnostics) ones, and our only blob containers are for vsdeploy, iislogfiles, etc. We do not use queues in the app either.
14 cents per gigabyte isn't breaking the bank yet, but after several months of logging WAD info to these tables, the storage account is quickly nearing 100 GB.
We've found that deleting rows from these tables is painful, with continuation tokens, etc., because some contain millions of rows (we have been logging diagnostics info since June 2011).
One idea I have is to "cycle" storage accounts. Since they contain diagnostic data used by MS to help us debug unexpected exceptions and errors, we could log the WAD info to storage account A for a month, then switch to account B for the following month, then C.
By the time we get to the 3rd month, it's a pretty safe bet that we no longer need the diagnostics data from storage account A, and can safely delete it, or delete the tables themselves rather than individual rows.
Has anyone tried an approach like this? How do you keep WAD storage costs under control?
Account rotation would work, if you don't mind the manual work of updating your configurations and redeploying every month. That would probably be the most cost-effective route, as you wouldn't have to pay for all the transactions needed to query and delete the logs.
There are some tools that will purge logs for you. Azure Diagnostics Manager from Cerebrata [which is currently showing me an ad to the right :) ] will do it, though it's a manual process too. I think they have some PowerShell cmdlets to do it as well.
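If you'd rather script it than use a GUI tool, the tables can also be dropped wholesale with the general-purpose Azure Storage PowerShell cmdlets (a rough sketch with the current Az.Storage module rather than Cerebrata's cmdlets; the account name and key are placeholders, and my understanding is that the diagnostics agent recreates the tables the next time it writes):

# Connect to the diagnostics storage account
$key = "<storage account key>"   # placeholder
$ctx = New-AzStorageContext -StorageAccountName "mydiagstorage" -StorageAccountKey $key

# Drop the WAD tables outright instead of deleting millions of rows one batch at a time
Get-AzStorageTable -Context $ctx |
    Where-Object { $_.Name -like "WAD*" } |
    ForEach-Object { Remove-AzStorageTable -Name $_.Name -Context $ctx -Force }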