Maximum TPS supported by PostgreSQL - postgresql-9.5

We require 200K TPS from a PostgreSQL database. How much TPS can PostgreSQL support, and what hardware and configuration are required to achieve this TPS?

Related

What is the Azure SQL Database automatic growth rate?

On a normal SQL Server we can tell it how to grow. The default is 10% each time, so the database grows by 10% of its current size. Do we have any insight into how an Azure SQL database grows, other than that it grows automatically?
Would Azure SQL allow us to configure the database to grow in fixed chunks, e.g. 20 MB?
thanks,
sakaldeep
You can use PowerShell, T-SQL, the CLI, or the portal to increase or decrease the maximum size of a database, but Azure SQL Database does not support setting autogrow. You can vote for this feature to be made available in the future on this URL.
If you run the following query on the database, you will see the growth is set to 2048 KB.
SELECT name, growth
FROM sys.database_files;
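While autogrow itself cannot be configured, the maximum size can be changed with T-SQL, as mentioned above. A minimal sketch (the database name and target size are placeholders; in Azure SQL Database this is typically run while connected to the master database):

```sql
-- Raise the size cap of an Azure SQL database (hypothetical name and size)
ALTER DATABASE [MyDatabase] MODIFY (MAXSIZE = 250 GB);
```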

Azure SQL Database or SQL Data Warehouse

I am working on a solution architecture and am having a hard time choosing between Azure SQL DB and SQL DW.
The current scope involves developing a real-time BI reporting solution based on multiple sources, but in the long run the solution may be extended into a full-fledged EDW and marts.
I initially thought of using SQL DW so that the MPP capabilities could be used for the future scope. But when I spoke to a mate who recently used SQL DW, he explained that development in SQL DW is not similar to SQL DB.
I have previously worked on real-time reporting with no scope for an EDW, and we successfully used SQL DB. With SQL DB we can also create facts, dimensions, and marts.
Is there a strong case where I should be choosing SQL DW over SQL DB?
I think the two most important data points you can have here are the volume of data you're processing and the number of concurrent queries that you need to support. When talking about processing large volumes of data, and by large I mean more than 3 TB (which is not even really large, but large enough), Azure SQL Data Warehouse becomes a juggernaut. The parallel processing is simply amazing (it's amazing at smaller volumes too, but then you're paying a lot of money for overkill). However, the one issue can be the simultaneous query limit. It currently has a limit of 128 concurrent queries with a limit of 1,000 queries queued (read more here). If you're using the Data Warehouse as a data warehouse to process large amounts of data and then feed them into data marts where the majority of the querying takes place, this isn't a big deal. If you're planning to open it up to large-volume querying, it quickly becomes problematic.
Answer those two questions, query volume and data volume, and you can more easily decide between the two.
Additional factors can include the issues around the T-SQL currently supported. It is less than traditional SQL Server. Again, for most purposes around data warehousing, this is not an issue. For a full blown reporting server, it might be.
Most people successfully implementing Azure SQL Data Warehouse are using a combination of the warehouse for processing and storage and Azure SQL Database for data marts. There are exceptions when dealing with very large data volumes that need the parallel processing, but don't require lots of queries.
The 4 TB size limit of Azure SQL Database may be an important factor to consider when choosing between the two options. Queries can be faster with Azure SQL Data Warehouse since it is an MPP solution. You can pause Azure SQL DW to save costs; with Azure SQL Database you can scale down to the Basic tier (when possible).
Azure SQL DB can support up to 6,400 concurrent queries and 32k active connections, whereas Azure SQL DW can only support up to 32 concurrent queries and 1,024 active connections. So SQL DB is a much better solution if you are building something like a dashboard with thousands of users.
As for developing against them, Azure SQL Database supports Entity Framework but Azure SQL DW does not.
I also want to give you a quick glimpse of how the two compare in terms of performance: 1 DWU is approximately 7.5 DTUs (Database Throughput Units, used to express the horsepower of an OLTP Azure SQL Database) in capacity, although they are not exactly comparable. More information about this comparison is available here.
Thanks for your responses, Grant and Alberto. They have cleared the air a lot and made the choice easier.
Since the data would be subject to dashboarding and querying, I am leaning towards SQL Database instead of SQL DW.
Thanks again.

Azure DTUs for a medium size application

I am trying to migrate my ASP (IIS) + SQL Server application from SQL Server Express Edition to Azure SQL Database. Currently we have only one dedicated server, with both IIS and SQL Server Express Edition on it. The planned setup is ASP (IIS) on an Azure virtual machine plus Azure SQL Database.
Per my search on Google, it seems SQL Server Express Edition has performance issues which are resolved in the Standard and Enterprise editions. The DTU calculator indicates that I should move to 200 DTUs. However, that is based on a test run on the SQL Server Express Edition setup with IIS on the same dedicated server.
Some more information:
The database size is around 5 GB currently including backup files.
Total users are around 500.
Concurrent usage is limited, say around 30-40 users at a time.
Bulk usage happens for report retrieval during a certain time frame only by a limited number of users.
I am skeptical about moving to 200 DTUs given the low number of total users. I am initially assuming 100 DTUs is good enough, but I am looking for advice from someone who has dealt with this before.
Database size and number of users aren't a solid way to estimate DTU usage. A poorly indexed database with a handful of users can consume ridiculous amounts of DTUs, while a well-tuned database with lively traffic can consume a comparatively small number. At one of my clients, we have a database that handles several million CRUD operations per day across 3,000+ users and rarely breaks 40 DTUs.
That being said, don't agonize over your DTU setting. It is REALLY easy to monitor and change, and you can scale up or down without interrupting service. I'd make a best guess, over-allocate slightly, then move your allocated DTUs up or down based on what you see.
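As a sketch of how cheap that change is, the service objective can be switched with a single T-SQL statement (the database name and target tier below are placeholders, not a recommendation for your workload):

```sql
-- Scale a database to the Standard S3 service objective (hypothetical target)
ALTER DATABASE [MyDatabase]
MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');
```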
it seems SQL server Express Edition has performance issues
This is not correct. There are certain limitations, such as the 10 GB database size cap and use of only one CPU core, and some features are disabled.
I am initially assuming 100 DTUs is good enough but looking for some advice on someone who has dealt with this before.
I would go with the advice of the DTU calculator, but if you want to start with 100 DTUs, that is fine as long as you consistently evaluate performance.
The query below can give you DTU metrics for your database. If any one of the metrics is consistently over 90% over a period of time, I would first try to tune that metric, and upgrade to a new tier only if I am not successful.
DTU query
SELECT start_time, end_time,
       (SELECT MAX(v)
        FROM (VALUES (avg_cpu_percent),
                     (avg_physical_data_read_percent),
                     (avg_log_write_percent)) AS value(v)) AS [avg_DTU_percent]
FROM sys.resource_stats
WHERE database_name = '<your db name>'
ORDER BY end_time DESC;
-- Run the following SELECT on the Azure database you are using
SELECT MAX(avg_cpu_percent),
       MAX(avg_data_io_percent),
       MAX(avg_log_write_percent)
FROM sys.resource_stats
WHERE database_name = 'Database_Name';

Azure database performance is very bad

I am facing performance issues with a database hosted in Azure. If I run the same query on my own server, the response time is 4 seconds, but on the Azure database it takes around 12 seconds and sometimes times out. Basically it takes 3x the response time of the server hosted in my company environment. We were using the Shared plan, after which we tried upgrading the plan to S01 Standard and then to Default1 (Basic: 1 Large), but upgrading the plan didn't make any difference in performance. Can you please help make the performance faster?
Thanks.
Azure SQL Database charges based on performance, so if you want more performance, you should move to a higher-priced performance tier.
What performance level are you currently in?
Also, comparing against your desktop/laptop is an apples-to-oranges comparison. A typical laptop has 4 or 6 cores, 4 GB or 8 GB of memory, and more; only a larger Premium performance level could compare to that potential laptop performance. Lastly, your laptop doesn't have any HA available, while SQL DB has a 99.99% SLA.
I hope this helps.

Using HSQL as an in-memory data store

I am planning to use HSQL as an in-memory datastore (in-memory only, no disk backup). I will then take a periodic backup of HSQL every x minutes (e.g. 15 minutes) so that I can restore the data in case the box goes down for some reason.
A few doubts:
1) Is HSQL good for storing a large amount of data (e.g. 15 GB)?
2) Will search be fast? I guess yes, since it is in-memory.
3) Any other concerns?
4) Have you used HSQL for such a purpose?
5) Is there any other open source database which supports SQL-like queries? I know of MemSQL, but it is not open source.
Yes, I have used HSQLDB as an in-memory database. I run a stand-alone Java application with a 20 GB heap and HSQLDB, and I have faced no problems. The application processes nearly 1 million records every day and holds around 12 GB of data.
I do not see major concerns with HSQLDB, but concurrent access and locking are things which have to be handled properly.
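For the periodic snapshot the question mentions, one option with an all-in-memory (mem:) HSQLDB database is the SCRIPT statement, which writes out the schema and data as a SQL script that can later be replayed to rebuild the database. A sketch (the path is a placeholder; for file-backed databases, BACKUP DATABASE TO ... BLOCKING is the alternative):

```sql
-- Dump schema + data of the current HSQLDB database to a script file
SCRIPT '/backups/hsqldb-snapshot.sql';
```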
There are many open source in-memory databases: Derby (Java DB), H2, and HSQLDB.
You can also use MySQL, PostgreSQL, or Oracle Coherence (recently released).
Here is a nice article which may help you with your decision.
What I think:
I used HSQLDB intensively 2 years ago in in-memory mode, though not with a very large amount of data (2-3 GB), and it performed well enough. However, it was said that it is not the best solution for production usage.
HSQLDB doesn't support full-text search; use H2 for that. H2 also has built-in support for clustering and replication, according to their comparison.
If you don't need ACID in your application and feel that key-value storage is enough, I certainly recommend Redis. It is designed to perform well on in-memory data and can certainly handle millions of rows.