How do I create a database in an elastic pool using Bicep?

I'm trying to set up a database in an elastic pool using Bicep. So far I've created a SQL server and a related elastic pool successfully. When I try to then create a database that refers to these parts, I come unstuck with a helpful error from Azure:
'The language expression property array index '1' is out of bounds.'
I'm really unclear on what settings I need to put in the SKU and other properties of the sqlServer configuration. So far I have the following:
resource sqlDatabase 'Microsoft.Sql/servers/databases@2022-05-01-preview' = {
  parent: sqlServer
  name: databaseName
  location: location
  sku: {
    name: databaseSku
  }
  properties: {
    elasticPoolId: elasticPoolId
    collation: collation
    maxSizeBytes: maxDatabaseSizeInBytes
    catalogCollation: collation
    zoneRedundant: zoneRedundant
    readScale: 'Disabled'
    requestedBackupStorageRedundancy: 'Zone'
  }
}
I want to use the Standard elastic pool with 50 DTUs as the limit, and I've tried passing that as the databaseSku. But the sku has capacity, family, size and tier properties, and from PowerShell I get these sorts of options:
Sku           Edition    Family  Capacity  Unit  Available
------------  ---------  ------  --------  ----  ---------
StandardPool  Standard           50        DTU   True
StandardPool  Standard           100       DTU   True
StandardPool  Standard           200       DTU   True
StandardPool  Standard           300       DTU   True
So how do I map my SQL database onto my SQL server in that pool using the 50 DTU StandardPool settings? Capacity appears to be a string as well on this template!

I found out that, firstly, you don't supply a sku on the database at all, as it inherits the SKU information from the pool (which makes sense). Secondly, in my reference to the elastic pool above I was using the following syntax:
resource elasticPool 'Microsoft.Sql/servers/elasticPools@2022-05-01-preview' existing = {
  name: 'mything-pool'
}
and had excluded the parent for the pool, so the correct reference to the pool would have been:
resource elasticPool 'Microsoft.Sql/servers/elasticPools@2022-05-01-preview' existing = {
  name: 'mything-pool'
  parent: dbServer
}
which then fixed my obscure error.
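Putting it together, a minimal sketch of the working resources (a sketch based on the question's parameter names; dbServer is assumed to be the existing Microsoft.Sql/servers resource, and the database takes its elasticPoolId from the pool's symbolic reference rather than a separately passed id):

resource elasticPool 'Microsoft.Sql/servers/elasticPools@2022-05-01-preview' existing = {
  name: 'mything-pool'
  parent: dbServer
}

resource sqlDatabase 'Microsoft.Sql/servers/databases@2022-05-01-preview' = {
  parent: dbServer
  name: databaseName
  location: location
  // No sku block here: the database inherits its SKU from the pool
  properties: {
    elasticPoolId: elasticPool.id
    collation: collation
    maxSizeBytes: maxDatabaseSizeInBytes
    catalogCollation: collation
    zoneRedundant: zoneRedundant
    readScale: 'Disabled'
    requestedBackupStorageRedundancy: 'Zone'
  }
}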

Related

SSAS tabular model timeout raised during processing

When doing a Full Process on a tabular model against an Azure Analysis Services instance, I get the following error about 10 minutes into the processing:
Failed to save modifications to the server. Error returned: 'Microsoft SQL: Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding.. The exception was raised by the IDbCommand interface.
Technical Details:
RootActivityId: cd0cfc78-416a-4039-a79f-ed7fe9836906
Date (UTC): 2/27/2018 1:25:58 PM
The command has been canceled.. The exception was raised by the IDbCommand interface.
The command has been canceled.. The exception was raised by the IDbCommand interface.
The command has been canceled.. The exception was raised by the IDbCommand interface.
The command has been canceled.. The exception was raised by the IDbCommand interface.
The data source for the model is Azure Data Warehouse and SSAS authenticates to it via SQL authentication. When the Timeout occurs some partitions have retrieved all their rows but the others are still processing. The model contains 11 tables each with a single partition.
I get the error both when processing with Visual Studio 2015 and SSMS 2017. I can't see any SSAS server properties with a 10 minute (600 second) timeout. Individual table processing can be done without the timeout issue since individually they all complete in under 10 minutes.
I've tried setting the timeout property in the dataSources.connectionDetails object in my Tabular Model Scripting Language json file (i.e. Model.bim). But editing it drops the authentication credentials, and then resetting the credentials drops the timeout property. So I don't know if that property is even relevant to the timeout error issue.
An example of a partition query expression I'm using:
let
    Source = #"SQL/resourcename database windows net;DatabaseName",
    MyQuery =
        Value.NativeQuery(
            Source,
            "SELECT * FROM [dbo].[MyTable]"
        )
in
    MyQuery
So thanks to GregGalloway's prompting I've figured out that the timeout can be set on a per-partition basis using the Power Query M language.
The data access parts of my TMSL object now look like this. The model.dataSources entry is:
"dataSources": [
{
"type": "structured",
"name": "MySource",
"connectionDetails": {
"protocol": "tds",
"address": {
"server": "serverName.database.windows.net",
"database": "databaseName"
},
"authentication": null,
"query": null
},
"options": {},
"credential": {
"AuthenticationKind": "UsernamePassword",
"Username": "dbUsername",
"EncryptConnection": true
}
}
]
And the individual partition queries look like this (note the CommandTimeout parameter):
let
    Source = Sql.Database(
        "serverName.database.windows.net",
        "databaseName",
        [CommandTimeout = #duration(0, 2, 0, 0)]
    ),
    MyQuery =
        Value.NativeQuery(
            Source,
            "SELECT * FROM [dbo].[MyTable]"
        )
in
    MyQuery
So now I'm explicitly setting a timeout of 2 hours for the Partition query.
Alternatively, increasing the Command timeout under Data Source -> Options (default 600 seconds) will also do the trick.

Apache Ignite sql query returns only cache contents, not complete results from database

My Ignite nodes (2 server nodes - let's call them A and B) are configured as follows:
ccfg.setCacheMode(CacheMode.PARTITIONED);
ccfg.setAtomicityMode(CacheAtomicityMode.TRANSACTIONAL);
ccfg.setReadThrough(true);
ccfg.setWriteThrough(true);
ccfg.setWriteBehindEnabled(true);
ccfg.setWriteBehindBatchSize(10000);
Node A is started first, from command line as follows:
apache-ignite-fabric-2.2.0-bin>bin/ignite.bat config/default-config.xml
Node B is started from java code by running
public static void main(String[] args) throws Exception {
    Ignite ignite = Ignition.start(ServerConfigurationFactory.createConfiguration());
    ignite.cache("MyCache").loadCache(null);
    ...
}
(The jar containing ServerConfigurationFactory is put in the apache-ignite-fabric-2.2.0-bin\libs directory so that Node A and B are on the same cluster; otherwise there is an error.)
I have a query that is supposed to return 9061 results from the database. After the cache loading process in Node B, I went to the Web Console and ran a simple count SQL statement against the caches. There is a button "Execute on selected node" that allows you to choose a specific cache to query. I queried Node A and got a count of 2341, and on Node B I get a count of 2064. If I just use the "Execute" button I get 4405 which is just the total of node A and B. Obviously they are missing 4656 records (9061 total records in db - 4405 in nodes A and B). I also ran the same count query in Java code using SqlFieldsQuery and I also get 4405.
Since readThrough is set to true I expected Ignite to also return results that are not in memory. But this is not the case because it just returns whatever is on the cache. Am I doing something wrong here? Thank you.
Read-through works only for key-value APIs, so the SQL engine assumes that all required data is preloaded from the database prior to running a query.
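The difference is easy to see in code. In this minimal sketch (the cache name is from the question; the Person table and the key are illustrative), the get() can fall through to the database, but the SQL count only reflects what has already been loaded:

import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;

public class ReadThroughVsSql {
    public static void main(String[] args) {
        Ignite ignite = Ignition.start("config/default-config.xml");
        IgniteCache<Long, Object> cache = ignite.cache("MyCache");

        // Key-value API: a cache miss invokes the configured CacheStore (read-through)
        Object value = cache.get(42L);

        // SQL API: counts only entries already in memory, never consults the store
        List<List<?>> rows = cache.query(
                new SqlFieldsQuery("select count(*) from Person")).getAll();
        System.out.println("rows in memory: " + rows.get(0).get(0));
    }
}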
If your data set doesn't fit in memory and you can't preload all the data, you can use Ignite's native persistence storage: https://apacheignite.readme.io/docs/distributed-persistent-store
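A minimal sketch of enabling it in code, using the 2.2-era API (PersistentStoreConfiguration was later renamed to DataStorageConfiguration, so check the docs for your version):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.configuration.PersistentStoreConfiguration;

public class PersistentNode {
    public static void main(String[] args) {
        IgniteConfiguration cfg = new IgniteConfiguration();
        // Keep cache data on disk so SQL can run over more data than fits in RAM
        cfg.setPersistentStoreConfiguration(new PersistentStoreConfiguration());

        Ignite ignite = Ignition.start(cfg);
        // With persistence enabled the cluster starts inactive and must be activated
        ignite.active(true);
    }
}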

AWS RDS PostgreSQL error "remaining connection slots are reserved for non-replication superuser connections"

In the dashboard I see there are currently 22 open connections to the DB instance, blocking new connections with the error:
remaining connection slots are reserved for non-replication superuser connections.
I'm accessing the DB from a web service API running on an EC2 instance and always follow the best practice of:
Class.forName(DB_CLASS); // load the JDBC driver before opening connections
Connection connection = DriverManager.getConnection(URL, USER_NAME, PASSWORD);
Statement statement = connection.createStatement();
ResultSet resultSet = statement.executeQuery(SQL_Query_String);
...
resultSet.close();
statement.close();
connection.close();
Can I do something else in the code?
Should I do something else in the DB management?
Is there a way to periodically close connections?
Amazon sets the maximum number of connections for each instance model based on the memory that model provides. (The defaults below are from MySQL-based RDS, as innodb_buffer_pool_size suggests, but the same principle applies to PostgreSQL.)
MODEL max_connections innodb_buffer_pool_size
--------- --------------- -----------------------
t1.micro 34 326107136 ( 311M)
m1-small 125 1179648000 ( 1125M, 1.097G)
m1-large 623 5882511360 ( 5610M, 5.479G)
m1-xlarge 1263 11922309120 (11370M, 11.103G)
m2-xlarge 1441 13605273600 (12975M, 12.671G)
m2-2xlarge 2900 27367833600 (26100M, 25.488G)
m2-4xlarge 5816 54892953600 (52350M, 51.123G)
But if you want, you can change the maximum number of connections to a custom value: from the RDS Console > Parameter Groups > Edit Parameters, change the value of the max_connections parameter.
For closing the connections periodically you can set up a cron job, something like this (on PostgreSQL 9.2 and later the columns are pid, state and query; older versions used procpid and current_query = '<IDLE>'):
select pg_terminate_backend(pid)
from pg_stat_activity
where usename = 'yourusername'
  and state = 'idle'
  and query_start < current_timestamp - interval '5 minutes';
I'm using Amazon RDS, Scala, PostgreSQL and Slick. First of all, the number of available connections in RDS depends on the amount of available RAM, i.e. the size of the RDS instance. It's best not to change the default connection limit.
You can check the max connection number by executing the following SQL statement on your RDS DB instance:
show max_connections;
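And to see how many of those slots are currently in use:

select count(*) from pg_stat_activity;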
Check your Slick configuration to see how many threads (and therefore pooled connections) you're spawning:
database {
  dataSourceClass = org.postgresql.ds.PGSimpleDataSource
  properties = {
    url = "jdbc:postgresql://test.cb1111.us-east-2.rds.amazonaws.com:6666/dbtest"
    user = "youruser"
    password = "yourpass"
  }
  numThreads = 90
}
All of the connections are made upon application initialization, so beware not to cross the RDS limit. That includes other services that connect to the DB. In this case the number of connections will be 90+.
The current limit for db.t2.small is 198 (4GB of RAM)
You can change idle_in_transaction_session_timeout in the parameter group to remove idle-in-transaction connections.
idle_in_transaction_session_timeout (integer)
Terminate any session with an open transaction that has been idle for longer than the specified duration in milliseconds. This allows any locks held by that session to be released and the connection slot to be reused; it also allows tuples visible only to this transaction to be vacuumed. See Section 24.1 for more details about this.
The default value of 0 disables this feature.
The current value in AWS RDS is 86400000 ms, which converted to hours (86400000 / 1000 / 60 / 60) is 24 hours.
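You can check the value your instance is actually running with from any connected session:

show idle_in_transaction_session_timeout;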
You can change max_connections in the parameter group for your RDS instance; try increasing it. Or you can try upgrading your instance, as max_connections defaults to {DBInstanceClassMemory/31457280}.
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_WorkingWithParamGroups.html

Tasks in SQL Server and multiple worker role instances

Consider the following table in SQL Server: Tasks (Payload nvarchar, DateToExecute datetime, DateExecuted datetime null).
Now we have two worker processes (2 Azure worker role instances in our case). Both of them periodically try to get records where DateExecuted IS NULL AND DateToExecute <= GETDATE(). Then they process that record and set (SQL update) DateExecuted to current date.
The problem is that a single task should be processed only once by a single worker instance.
What's the best way to provide synchronization or locking for implementing such scenario?
The easiest way to do locking over multiple roles/instances in Windows Azure is by using blob leases. Steve Marx created a great class for this called AutoRenewLease (source, NuGet, blog post). If you already have a timer or while loop, you can write code like this:
using (var arl = new AutoRenewLease(leaseBlob))
{
    if (arl.HasLease)
    {
        // Query Tasks table and do work....
    }
    else
    {
        // Other worker is busy....
    }
}
Or you could use the DoEvery method which allows you to schedule your code every X minutes:
AutoRenewLease.DoEvery(leaseBlob, TimeSpan.FromMinutes(15), () =>
{
    // Query Tasks table and do work....
});
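The claim itself can also be made atomic on the SQL side, so that even two concurrent pollers cannot pick up the same row. A minimal T-SQL sketch against the question's Tasks table (the UPDLOCK/READPAST hints are an assumption of a common queue pattern, not part of the original answer); it selects and marks one due task in a single statement and returns its payload:

UPDATE t
SET DateExecuted = GETDATE()
OUTPUT inserted.Payload
FROM (
    SELECT TOP (1) *
    FROM Tasks WITH (UPDLOCK, READPAST)
    WHERE DateExecuted IS NULL
      AND DateToExecute <= GETDATE()
    ORDER BY DateToExecute
) AS t;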

Should I set max pool size in database connection string? What happens if I don't?

This is my database connection string. I did not set max pool size until now.
public static string srConnectionString =
"server=localhost;database=mydb;uid=sa;pwd=mypw;";
So currently how many connections does my application support? What is the correct syntax for increasing the connection pool size?
The application is written in C# 4.0.
Currently your application supports up to 100 connections in the pool, which is the default. Here is what the connection string will look like if you want to increase it to 200:
public static string srConnectionString =
"server=localhost;database=mydb;uid=sa;pwd=mypw;Max Pool Size=200;";
You can investigate how many connections to the database your application uses by executing the sp_who procedure against your database. In most cases the default connection pool size will be enough.
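For example, a quick way to count current connections per database (a sketch using the legacy sys.sysprocesses view, which is also what sp_who reports from):

SELECT DB_NAME(dbid) AS database_name, COUNT(*) AS connection_count
FROM sys.sysprocesses
WHERE dbid > 0
GROUP BY dbid;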
"currently yes but i think it might cause problems at peak moments"
I can confirm, that I had a problem where I got timeouts because of peak requests. After I set the max pool size, the application ran without any problems.
IIS 7.5 / ASP.Net
For Spring YAML config:
spring:
  datasource:
    url: ${DB-URL}
    driverClassName: com.microsoft.sqlserver.jdbc.SQLServerDriver
    username: ${DB-USERNAME}
    password: ${DB-PASSWORD}
    hikari:
      auto-commit: true
      maximum-pool-size: 200
or in the application-prod.properties file:
spring.datasource.hikari.maximum-pool-size=200
We can define the maximum pool size in the following way (this example is a JBoss/WildFly-style datasource pool definition):
<pool>
  <min-pool-size>5</min-pool-size>
  <max-pool-size>200</max-pool-size>
  <prefill>true</prefill>
  <use-strict-min>true</use-strict-min>
  <flush-strategy>IdleConnections</flush-strategy>
</pool>