What are Azure SQL Database automatic growth rates? - azure-sql-database

On a normal SQL Server we can tell it how to grow. The default is 10% each time, so the database grows by 10% of its current size. Do we have any insight into how an Azure SQL database grows, other than that it grows automatically?
Would Azure SQL Database allow us to configure the database to grow in fixed chunks, e.g. 20 MB?
thanks,
sakaldeep

You can use PowerShell, T-SQL, the CLI, or the portal to increase or decrease the maximum size of a database, but Azure SQL Database does not support setting autogrow. You can vote for this feature to be made available in the future at this URL.
If you run the following query on the database, you will see the growth is set to 2048 KB.
SELECT name, growth
FROM sys.database_files;
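While autogrow itself cannot be configured, the maximum size can be changed from T-SQL. A minimal sketch, assuming a hypothetical database named MyDb and the Standard-tier 250 GB cap as the target:
-- Raise the size cap; the database name and size here are placeholders.
ALTER DATABASE MyDb MODIFY (MAXSIZE = 250 GB);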

Related

How to increase DWUs on Azure Synapse Dedicated SQL Pool

I inherited an Azure Dedicated SQL Pool at my current firm. The DWU was set at 100; is there a way to increase the DWUs? It would appear I would have to create a new Dedicated SQL Pool to increase the DWUs.
Also, is there a guide showing which DWUs are best for particular environments? For example, Microsoft recommends a minimum of 1100 DWUs for production, but I'm not sure what production environment that is based on.
In the Azure portal, click the dedicated SQL pool resource where you need to increase DWUs.
Then click Scale.
Then increase the DWU based on your requirement.
You can also use a T-SQL command or PowerShell to change DWUs, as the sketch below shows.
Reference: Microsoft document on Changing DWUs
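A minimal T-SQL sketch, run while connected to the master database of the logical server; the pool name mySampleDataWarehouse and the DW200c target are placeholders:
-- Scale the dedicated SQL pool to the DW200c service objective.
ALTER DATABASE mySampleDataWarehouse
MODIFY (SERVICE_OBJECTIVE = 'DW200c');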

Azure DTUs for a medium size application

I am trying to migrate my ASP (IIS) + SQL Server application from SQL Server Express Edition to Azure SQL Database. Currently, we only have one dedicated server with both IIS and SQL Server Express Edition on it. The planned setup will be ASP (IIS) on an Azure virtual machine plus Azure SQL Database.
From my search on Google, it seems SQL Server Express Edition has performance issues which are resolved in the Standard and Enterprise editions. The DTU calculator indicates that I should move to 200 DTUs. However, that is based on a test run on the SQL Express setup with IIS on the same dedicated server.
Some more information:
The database size is around 5 GB currently including backup files.
Total users are around 500.
Concurrent usage is limited, say around 30-40 users at a time.
Bulk usage happens for report retrieval during a certain time frame only by a limited number of users.
I am skeptical about moving to 300 DTUs given the low number of total users. I am initially assuming 100 DTUs is good enough, but I am looking for advice from someone who has dealt with this before.
Database size and number of users isn't a solid way to estimate DTU usage. A poorly indexed database with a handful of users can consume ridiculous amounts of DTUs. A well-tuned database with lively traffic can consume a comparatively small number of DTUs. At one of my clients, we have a database that handles several million CRUD ops per day over 3,000+ users that rarely breaks 40 DTUs.
That being said, don't agonize over your DTU settings. It is REALLY easy to monitor and change. You can scale up or scale down without interrupting service, as the sketch below shows. I'd make a best guess, over-allocate slightly, then move your allocated DTUs up or down based on what you see.
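A minimal sketch of rescaling from T-SQL, run from the server's master database; MyDb and the S2 target tier are placeholders:
-- Move the database to the Standard S2 service objective; the operation is online.
ALTER DATABASE MyDb MODIFY (SERVICE_OBJECTIVE = 'S2');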
it seems SQL Server Express Edition has performance issues
This is not correct. There are certain limitations, such as the 10 GB database size cap and limited CPU and memory usage, and some features are disabled.
I am initially assuming 100 DTUs is good enough, but I am looking for advice from someone who has dealt with this before.
I would go with the advice of the DTU calculator, but if you want to go with 100 DTUs, that is fine; just evaluate performance consistently.
The query below reports DTU metrics for your database; if any one of the metrics is consistently over 90% for a period of time, I would first try to tune for that metric, and upgrade to a new tier only if that is unsuccessful.
DTU query
SELECT start_time, end_time,
       (SELECT MAX(v)
        FROM (VALUES (avg_cpu_percent),
                     (avg_physical_data_read_percent),
                     (avg_log_write_percent)) AS value(v)) AS [avg_DTU_percent]
FROM sys.resource_stats
WHERE database_name = '<your db name>'
ORDER BY end_time DESC;
-- Run the following SELECT on the Azure SQL database you are using
SELECT MAX(avg_cpu_percent),
       MAX(avg_data_io_percent),
       MAX(avg_log_write_percent)
FROM sys.resource_stats
WHERE database_name = 'Database_Name';

How do I change the online status of an Azure SQL Database to offline

I want to change the status of my Azure SQL Database to offline, but I can't see a way to do it from the management portal.
Thanks for reading :-)
You can add a firewall setting to deny all the IP addresses; you won't get double billed and your database stays intact. The other option is renaming the database, but I wouldn't go with that unless needed.
https://msdn.microsoft.com/en-us/library/dn270017.aspx
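A minimal sketch of the firewall approach in T-SQL, run while connected to the server's master database; the rule name AllowAll below is hypothetical:
-- List the current server-level firewall rules.
SELECT name, start_ip_address, end_ip_address FROM sys.firewall_rules;
-- Delete a rule by name; repeat for each rule so no external IP can connect.
EXECUTE sp_delete_firewall_rule @name = N'AllowAll';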
For those who are looking to pause the Azure SQL Database to save money, now there is also a vCore-based purchase model for SQL databases in Azure.
After choosing vCore-based billing, you need to choose between:
Provisioned: Compute resources are pre-allocated. Billed per hour based on the number of vCores configured.
or
Serverless: Compute resources are auto-scaled. Billed per second based on the number of vCores used.
If you choose "Serverless" you can enable "auto-pause", where the database automatically pauses if it is inactive for the specified time period (e.g. 1 hour or more) and automatically resumes when database activity recurs.
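A minimal sketch of moving an existing database to the serverless tier from T-SQL; MyDb is a placeholder, and the auto-pause delay itself is set through the portal, PowerShell, or the CLI rather than in this statement:
-- Switch MyDb to a serverless General Purpose service objective (Gen5, max 1 vCore).
ALTER DATABASE MyDb MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_S_Gen5_1');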
Source: https://portal.azure.com
Currently, there is no way to take a database "offline" without deleting it. A few alternatives are deleting the database and then restoring it at a later date (within 7, 14, or 35 days, depending on whether the database edition is Basic, Standard, or Premium, respectively), or exporting the database to Azure Storage and then restoring it at a later date.
For the purpose of reducing expenses, you can downscale your database to the S0 tier, which allows the same 250 GB as S3 and costs just $15 per month.
If you have a Premium database larger than 250 GB, then you can export it to a .bacpac and just delete / re-import it. But this actually takes a lot of time and is hard to automate.
Denying all IPs will not prevent billing, AFAIK.

Will autogrowth overcome SQL Server Express database size limit?

I've hit the 4 GB limit in my database and I need a bit of breathing space before upgrading it to a SQL Server 2008 database, and I just wondered if increasing autogrowth would give me more space inside the database beyond the 4 GB limit that SQL Server 2005 Express imposes.
I know the pitfalls of autogrowth with regard to performance, as data will be fragmented across the disk and thus make querying slower, but is there an advantage in granting it say 50-100 MB of room for autogrowth while the migration process is planned out or an alternative is sought?
Disk space is not an issue, and it would only be a temporary measure anyway.
No. Express Edition will not grow, nor attach or restore, a database over its size limit, no matter what the errorlog viewer tells you. Not even temporarily.
But you have an easy solution: SQL Server 2008 R2 Express Edition has raised the limit to 10 GB.
No, it won't. SQL Server Express Edition is for creating database-oriented applications without the need to purchase an official SQL Server license. However, you cannot use it in a production environment, for more reasons than just the file size limit.
No; once you've reached 10,240 MB (10 × 1024 MB), you're out of luck (technically, not exactly 10 GB).
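A minimal sketch for checking how close a database is to the Express size cap; only data files count toward the limit, not the log:
-- size is reported in 8 KB pages; convert to MB.
SELECT name, size * 8 / 1024 AS size_mb
FROM sys.database_files
WHERE type_desc = 'ROWS';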

Memory Issues with MS SQL Server Backup Jobs

I will preface this by stating we are using MS SQL Server 2008 R2.
We're having issues where, while our database backups are running, SQL Server takes all of the available memory and never releases it. Our current high-water mark of memory usage is about 60%. When the backup job runs, it goes to 99% and never releases unless we restart the SQL service. This leads me to two questions:
Dealing with memory allocation: Is there a way to accurately limit memory usage of SQL Server? We are limiting the "Maximum server memory" value to 85%, but it consistently exceeds that value.
What is the best method of backing up the database? We are currently relying on our provider to maintain the database backups, and it seems like the "home grown" method they use through a stored procedure and commands is the cause of the memory issues, although it is working for other customers of theirs. Should we look at using maintenance plans as a replacement?
Any help with this would be great.
Is there a way to accurately limit memory usage of SQL Server?
Yes, there is. How to: Set a Fixed Amount of Memory (SQL Server Management Studio)
Use the default settings to allow SQL Server to change its memory requirements dynamically based on available system resources. The default setting for min server memory is 0, and the default setting for max server memory is 2147483647 megabytes (MB). The minimum amount of memory you can specify for max server memory is 16 MB.
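As a T-SQL alternative to SSMS, max server memory can be capped with sp_configure. Note that it takes an absolute value in MB, not a percentage, and on SQL Server 2008 R2 it governs the buffer pool only, so total process memory can still exceed it somewhat; the 6144 below is just an example value:
-- Expose advanced options, then cap the buffer pool at 6 GB.
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 6144;
RECONFIGURE;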
What is the best method of backing up the database?
You can get the answer here: Select the Most Optimal Backup Methods for Server
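If you replace the provider's home-grown procedure, a plain native backup is a reasonable baseline to compare against. A minimal sketch with a hypothetical database name and path; WITH COMPRESSION requires 2008 R2 Standard Edition or higher:
-- Full, compressed, checksummed backup to disk (names and path are placeholders).
BACKUP DATABASE MyDb
TO DISK = N'D:\Backups\MyDb_Full.bak'
WITH COMPRESSION, CHECKSUM, INIT;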