I am able to create a SQL server, SQL database, and SQL elastic pool successfully using ARM templates. But when I try to create a new database with an existing elastic pool name, I get the error below.
Without the elastic pool id, the database is created successfully.
Both the SQL elastic pool and the database use the same location, tier, edition, etc. Also, when I tried it in the Azure portal, it was created successfully.
"error": {
"code": "ResourceDeploymentFailure",
"message": "The resource operation completed with terminal provisioning state 'Failed'.",
"details": [
{
"code": "ElasticPoolSkuCombinationInvalid",
"message": "Elastic pool 'sqlsamplepool' and sku 'Basic' combination is invalid."
}
]
}
ARM Template:
{
"$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"collation": {
"type": "string",
"metadata": {
"description": "The collation of the database."
},
"defaultValue": "SQL_Latin1_General_CP1_CI_AS"
},
"skutier": {
"type": "string",
"metadata": {
"description": "The edition of the database. The DatabaseEditions enumeration contains all the
valid editions. e.g. Basic, Premium."
},
"allowedValues": [ "Basic", "Standard", "Premium" ],
"defaultValue": "Basic"
},
"resourcelocation": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Location for all resources."
}
},
"sqlservername": {
"type": "string",
"metadata": {
"description": "The name of the sql server."
}
},
"zoneRedundant": {
"type": "bool",
"metadata": {
"description": "Whether or not this database is zone redundant, which means the replicas of this database will be spread across multiple availability zones."
},
"defaultValue": false
},
"sqlElasticPoolName": {
"type": "string",
"metadata": {
"description": "The Elastic Pool name."
}
},
"databaseName": {
"type": "string"
}
},
"functions": [],
"variables": { },
"resources": [
{
"type": "Microsoft.Sql/servers/databases",
"apiVersion": "2020-08-01-preview",
"name": "[concat(parameters('sqlservername'),'/',parameter('databaseName'))]",
"location": "[parameters('resourcelocation')]",
"sku": {
"name": "[parameters('skutier')]",
"tier": "[parameters('skutier')]"
},
"properties": {
"collation": "[parameters('collation')]",
"zoneRedundant": "[parameters('zoneRedundant')]",
"elasticPoolId":"[concat('/subscriptions/',subscription().subscriptionId,'/resourceGroups/',resourceGroup().name,'/providers/Microsoft.Sql/servers/',parameters('sqlservername'),'/elasticPools/',parameters('sqlElasticPoolName'))]"
}
}
]
}
I am not sure what is wrong with the "2020-08-01-preview" version, but it works fine with a stable version. Below is the partial ARM template code that works.
I changed to the 2014-04-01 API version.
"comments": "If Elastic Pool Name is defined, then curent database will be added to elastic pool.",
"type": "Microsoft.Sql/servers/databases",
"apiVersion": "2014-04-01",
"name": "[concat(parameters('sqlservername'),'/',variables('dbname'))]",
"location": "[parameters('resourcelocation')]",
"properties": {
"collation": "[parameters('collation')]",
"zoneRedundant": "[parameters('zoneRedundant')]",
"elasticPoolName":"[if(not(empty(parameters('sqlElasticPoolName'))),parameters('sqlElasticPoolName'),'')]",
"edition": "[parameters('skutier')]"
}
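If you want to stay on the 2020-08-01-preview API version, one untested possibility (an assumption based on the error message, not something I have verified) is to drop the per-database sku block so the database simply inherits the pool's sku through elasticPoolId, for example:
{
"comments": "Sketch only (assumption): sku omitted so the database inherits the elastic pool's sku.",
"type": "Microsoft.Sql/servers/databases",
"apiVersion": "2020-08-01-preview",
"name": "[concat(parameters('sqlservername'),'/',parameters('databaseName'))]",
"location": "[parameters('resourcelocation')]",
"properties": {
"collation": "[parameters('collation')]",
"zoneRedundant": "[parameters('zoneRedundant')]",
"elasticPoolId": "[resourceId('Microsoft.Sql/servers/elasticPools', parameters('sqlservername'), parameters('sqlElasticPoolName'))]"
}
}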
I'm trying to use the BigQuery Execute function in Cloud Data Fusion (Google). The component validates fine, the SQL checks out but I get this non-meaningful error with every execution:
02/11/2022 12:51:25 ERROR Pipeline 'test-bq-execute' failed.
02/11/2022 12:51:25 ERROR Workflow service 'workflow.default.test-bq-execute.DataPipelineWorkflow.<guid>' failed.
02/11/2022 12:51:25 ERROR Program DataPipelineWorkflow execution failed.
I can see nothing else to help me debug this. Any ideas? The SQL in question is a simple DELETE from dataset.table WHERE ds = CURRENT_DATE()
This was the pipeline
{
"name": "test-bq-execute",
"description": "Data Pipeline Application",
"artifact": {
"name": "cdap-data-pipeline",
"version": "6.5.1",
"scope": "SYSTEM"
},
"config": {
"resources": {
"memoryMB": 2048,
"virtualCores": 1
},
"driverResources": {
"memoryMB": 2048,
"virtualCores": 1
},
"connections": [],
"comments": [],
"postActions": [],
"properties": {},
"processTimingEnabled": true,
"stageLoggingEnabled": false,
"stages": [
{
"name": "BigQuery Execute",
"plugin": {
"name": "BigQueryExecute",
"type": "action",
"label": "BigQuery Execute",
"artifact": {
"name": "google-cloud",
"version": "0.18.1",
"scope": "SYSTEM"
},
"properties": {
"project": "auto-detect",
"sql": "DELETE FROM GCPQuickStart.account WHERE ds = CURRENT_DATE()",
"dialect": "standard",
"mode": "batch",
"dataset": "GCPQuickStart",
"table": "account",
"useCache": "false",
"location": "US",
"rowAsArguments": "false",
"serviceAccountType": "filePath",
"serviceFilePath": "auto-detect"
}
},
"outputSchema": [
{
"name": "etlSchemaBody",
"schema": ""
}
],
"id": "BigQuery-Execute",
"type": "action",
"label": "BigQuery Execute",
"icon": "fa-plug"
}
],
"schedule": "0 1 */1 * *",
"engine": "spark",
"numOfRecordsPreview": 100,
"maxConcurrentRuns": 1
}
}
I was able to catch the error using Cloud Logging. To enable Cloud Logging in Cloud Data Fusion, you can use the GCP documentation, and then follow its steps to view the Data Fusion logs in Cloud Logging. Replicating your scenario, this is the error I found:
"logMessage": "Program DataPipelineWorkflow execution failed.\njava.util.concurrent.ExecutionException: com.google.cloud.bigquery.BigQueryException: Cannot set destination table in jobs with DML statements\n at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)\n at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1908)\n at io.cdap.cdap.internal.app.runtime.distributed.AbstractProgramTwillRunnable.run(AbstractProgramTwillRunnable.java:274)\n at org.apache.twill.interna..."
}
What we did to resolve the error "Cannot set destination table in jobs with DML statements" was to leave the Dataset Name and Table Name empty in the pipeline properties, since there is no need to specify a destination table for a DML statement.
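For reference, this is a sketch of the corrected stage properties from the pipeline above with the Dataset Name and Table Name omitted (the equivalent of leaving them empty in the UI); everything else is unchanged from the original pipeline:
"properties": {
"project": "auto-detect",
"sql": "DELETE FROM GCPQuickStart.account WHERE ds = CURRENT_DATE()",
"dialect": "standard",
"mode": "batch",
"useCache": "false",
"location": "US",
"rowAsArguments": "false",
"serviceAccountType": "filePath",
"serviceFilePath": "auto-detect"
}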
I want to set up the conditional validation in my schema. I saw an example here on SO.
I have a similar setup, where I would like to validate whether the field public is set to the string "public". If it is set to "public", then I want to make the fields description, attachmentUrl and tags required. If the field is not set to "public", then these fields are not required.
{
"$schema": "http://json-schema.org/draft-04/schema#",
"title": "Update todo",
"type": "object",
"properties": {
"public": {
"type": "string"
},
"description": {
"type": "string",
"minLength": 3
},
"tags": {
"type": "array",
"items": {
"type": "string"
},
"uniqueItems": true,
"minItems": 1
},
"attachmentUrl": {
"type": "string"
}
},
"anyOf": [
{
"not": {
"properties": {
"public": { "const": "public" }
},
"required": ["public"]
}
},
{ "required": ["description", "tags", "attachmentUrl"] }
],
"additionalProperties": false
}
But, when I try to deploy it like that, I get the following error:
Invalid model specified: Validation Result: warnings : [], errors :
[Invalid model schema specified. Unsupported keyword(s): ["const"]]
The "const" keyword wasn't added until draft 06. You should upgrade to an implementation that supports at least that version.
https://json-schema.org/draft-06/json-schema-release-notes.html#additions-and-backwards-compatible-changes
Otherwise, you can use "enum" with a single value: "enum": ["public"]
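For example, keeping the draft-04 schema above otherwise unchanged, the anyOf block becomes:
"anyOf": [
{
"not": {
"properties": {
"public": { "enum": ["public"] }
},
"required": ["public"]
}
},
{ "required": ["description", "tags", "attachmentUrl"] }
]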
I'm trying to send a message to my broker using an Avro schema, but I'm always getting this error:
2020-02-01 11:24:37.189 [nioEventLoopGroup-4-1] ERROR Application -
Unhandled: POST - /api/orchestration/
org.apache.kafka.common.errors.SerializationException: Error
registering Avro schema: "string" Caused by:
io.confluent.kafka.schemaregistry.client.rest.exceptions.RestClientException:
Schema being registered is incompatible with an earlier schema; error
code: 409
Here is my Docker container:
connect:
image: confluentinc/cp-kafka-connect:5.4.0
hostname: confluentinc-connect
container_name: confluentinc-connect
depends_on:
- zookeeper
- broker
- schema-registry
ports:
- "8083:8083"
environment:
CONNECT_BOOTSTRAP_SERVERS: 'broker:29092'
CONNECT_REST_ADVERTISED_HOST_NAME: connect
CONNECT_REST_PORT: 8083
CONNECT_GROUP_ID: confluentinc-connect
CONNECT_CONFIG_STORAGE_TOPIC: confluentinc-connect-configs
CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
CONNECT_OFFSET_FLUSH_INTERVAL_MS: 10000
CONNECT_OFFSET_STORAGE_TOPIC: confluentinc-connect-offsets
CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
CONNECT_STATUS_STORAGE_TOPIC: confluentinc-connect-status
CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
CONNECT_KEY_CONVERTER_SCHEMAS_ENABLE: "true"
CONNECT_VALUE_CONVERTER: io.confluent.connect.avro.AvroConverter
CONNECT_VALUE_CONVERTER_SCHEMA_REGISTRY_URL: 'http://schema-registry:8081'
CONNECT_INTERNAL_KEY_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
CONNECT_INTERNAL_VALUE_CONVERTER: "org.apache.kafka.connect.json.JsonConverter"
CONNECT_ZOOKEEPER_CONNECT: 'zookeeper:2181'
CONNECT_LOG4J_ROOT_LOGLEVEL: "INFO"
CONNECT_LOG4J_LOGGERS: "org.apache.kafka.connect.runtime.rest=WARN,org.reflections=ERROR"
CONNECT_PRODUCER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringProducerInterceptor"
CONNECT_CONSUMER_INTERCEPTOR_CLASSES: "io.confluent.monitoring.clients.interceptor.MonitoringConsumerInterceptor"
CONNECT_LOG4J_LOGGERS: org.apache.zookeeper=ERROR,org.I0Itec.zkclient=ERROR,org.reflections=ERROR
CONNECT_PLUGIN_PATH: "/usr/share/java,/usr/share/extras"
My producer (written in Kotlin):
val prop: HashMap<String, Any> = HashMap()
prop[BOOTSTRAP_SERVERS_CONFIG] = bootstrapServers
prop[KEY_SERIALIZER_CLASS_CONFIG] = StringSerializer::class.java.name
prop[VALUE_SERIALIZER_CLASS_CONFIG] = KafkaAvroSerializer::class.java.name
prop[SCHEMA_REGISTRY_URL] = schemaUrl
prop[ENABLE_IDEMPOTENCE_CONFIG] = idempotence
prop[ACKS_CONFIG] = acks.value
prop[RETRIES_CONFIG] = retries
prop[MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION] = requestPerConnection
prop[COMPRESSION_TYPE_CONFIG] = compression.value
prop[LINGER_MS_CONFIG] = linger
prop[BATCH_SIZE_CONFIG] = batchSize.value
return KafkaProducer(prop)
My Avro Schema:
{
"type": "record",
"namespace": "com.rjdesenvolvimento",
"name": "create_client_value",
"doc": "Avro Schema for Kafka Command",
"fields": [
{
"name": "id",
"type": "string",
"logicalType": "uuid",
"doc": "UUID for indentifaction command"
},
{
"name": "status",
"type": {
"name": "status",
"type": "enum",
"symbols": [
"Open",
"Closed",
"Processing"
],
"doc": "Can be only: Open, Closed or Processing"
},
"doc": "Status of the command"
},
{
"name": "message",
"type": {
"type": "record",
"name": "message",
"doc": "Avro Schema for insert new client",
"fields": [
{
"name": "id",
"type": "string",
"logicalType": "uuid",
"doc": "UUID for indentifaction client transaction"
},
{
"name": "active",
"type": "boolean",
"doc": "Soft delete for client"
},
{
"name": "name",
"type": "string",
"doc": "Name of the client"
},
{
"name": "email",
"type": "string",
"doc": "Email of the client"
},
{
"name": "document",
"type": "string",
"doc": "CPF or CPNJ of the client"
},
{
"name": "phones",
"doc": "A list of phone numbers",
"type": {
"type": "array",
"items": {
"name": "phones",
"type": "record",
"fields": [
{
"name": "id",
"type": "string",
"logicalType": "uuid",
"doc": "UUID for indentifaction of phone transaction"
},
{
"name": "active",
"type": "boolean",
"doc": "Soft delete for phone number"
},
{
"name": "number",
"type": "string",
"doc": "The phone number with this regex +xx xx xxxx xxxx"
}
]
}
}
},
{
"name": "address",
"type": "string",
"logicalType": "uuid",
"doc": "Adrres is an UUID for a other address-microservice"
}
]
}
}
]
}
And my POST:
{
"id" : "9ec818da-6ee0-4634-9ed8-c085248cae12",
"status" : "Open",
"message": {
"id" : "9ec818da-6ee0-4634-9ed8-c085248cae12",
"active" : true,
"name": "name",
"email": "email#com",
"document": "document",
"phones": [
{
"id" : "9ec818da-6ee0-4634-9ed8-c085248cae12",
"active" : true,
"number": "+xx xx xxxx xxxx"
},
{
"id" : "9ec818da-6ee0-4634-9ed8-c085248cae12",
"active" : true,
"number": "+xx xx xxxx xxxx"
}
],
"address": "9ec818da-6ee0-4634-9ed8-c085248cae12"
}
}
What am I doing wrong?
github project: https://github.com/rodrigodevelms/kafka-registry
UPDATE =====
Briefly:
I'm not generating my classes using the Gradle Avro plugin.
In this example, my POST sends a Client object. And in the service, it assembles a Command-type object as follows:
id: same client id
status: open
message: the POST that was sent.
So I send this to Kafka, and in Connect (JDBC sink to Postgres) I set fields.whitelist to only the attributes of the message (the client), so I don't get either the command id or the status.
On GitHub, the only classes that matter for understanding the code are:
1 - https://github.com/rodrigodevelms/kafka-registry/blob/master/kafka/src/main/kotlin/com/rjdesenvolvimento/messagebroker/producer/Producer.kt
2 - https://github.com/rodrigodevelms/kafka-registry/blob/master/kafka/src/main/kotlin/com/rjdesenvolvimento/messagebroker/commnad/Command.kt
3 - https://github.com/rodrigodevelms/kafka-registry/blob/master/src/client/Controller.kt
4 - https://github.com/rodrigodevelms/kafka-registry/blob/master/src/client/Service.kt
5 - docker-compose.yml, insert-client-value.avsc, postgresql.json
If I set the compatibility mode of the Avro schema to "none", I can send a message, but some unknown characters are shown, as in the photo below.
I suspect that you're trying to do multiple things and you've not been cleaning up state after previous attempts. You should not get that error in a fresh installation.
Schema being registered is incompatible with an earlier schema
Your data has changed in a way that the schema in the registry is not compatible with the one you're sending.
You can send an HTTP DELETE request to http://registry:8081/subjects/[name]/ to delete all versions of the schema, then you can restart your connector.
I'm looking for a valid property to retrieve the FQDN of a managed Azure SQL server from a deployment of a linked template. The one below does not seem to be valid:
"[reference(variables('sqlDeployment')).outputs.fullyQualifiedDomainName.value]"
And where can I find all supported parameters? It seems challenging to find enough info in the Microsoft Docs.
It looks like your linked template does not have an output property named 'fullyQualifiedDomainName'.
To get an output value from a linked template, retrieve the property value with syntax like "[reference('deploymentName').outputs.propertyName.value]", as explained here -> https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-linked-templates#get-values-from-linked-template
Please find below sample parent and linked templates to accomplish your requirement of retrieving the FQDN of a managed Azure SQL server.
Parent template named as "parenttemplate.json":
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Location for all resources."
}
}
},
"variables": {
"sqlserverName": "gttestsqlserver",
"sqlAdministratorLogin": "gttestuser",
"sqlAdministratorLoginPassword": "gttestpassword2#",
"sqlDeployment": "linkedTemplate"
},
"resources": [
{
"apiVersion": "2017-05-10",
"name": "[variables('sqlDeployment')]",
"type": "Microsoft.Resources/deployments",
"properties": {
"mode": "Incremental",
"templateLink": {
"uri": "[uri(deployment().properties.templateLink.uri, 'linkedtemplate.json')]",
"contentVersion": "1.0.0.0"
}
}
}
],
"outputs": {
"messageFromLinkedTemplate": {
"type": "string",
"value": "[reference(variables('sqlDeployment')).outputs.MessageOne.value]"
}
}
}
Linked template named as "linkedtemplate.json":
{
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {
"location": {
"type": "string",
"defaultValue": "[resourceGroup().location]",
"metadata": {
"description": "Location for all resources."
}
}
},
"variables": {
"sqlserverName": "gttestsqlserver",
"sqlAdministratorLogin": "gttestuser",
"sqlAdministratorLoginPassword": "gttestpassword2#"
},
"resources": [
{
"name": "[variables('sqlserverName')]",
"type": "Microsoft.Sql/servers",
"location": "[parameters('location')]",
"tags": {
"displayName": "gttestsqlserver"
},
"apiVersion": "2014-04-01",
"properties": {
"administratorLogin": "[variables('sqlAdministratorLogin')]",
"administratorLoginPassword": "[variables('sqlAdministratorLoginPassword')]",
"version": "12.0"
}
}
],
"outputs": {
"MessageOne": {
"type" : "string",
"value": "[reference(variables('sqlserverName')).fullyQualifiedDomainName]"
}
}
}
Both of the above templates are placed in a storage blob container.
Deployment:
Illustration of retrieval of FQDN from the deployment:
In the above example and illustration, the output property in the linked template is named "MessageOne", and since we need the FQDN of the managed Azure SQL server, the value of that "MessageOne" output property references "fullyQualifiedDomainName".
And regarding finding all the supported parameters, one of the easiest ways is to get all the properties of any resource by using 'Get-Member', as shown in the example below.
Hope this helps!! Cheers!!
I have created a pipeline to load data from S3 to an RDS MySQL instance. I can save the pipeline without any errors, but on activation I get the error "No value specified for parameter 1". My online search so far suggests that the insert statement parameters need to be defined somewhere. If this is correct, how do I do so?
The following is the script generated in the process
{
"objects": [
{
"output": {
"ref": "DestinationRDSTable"
},
"input": {
"ref": "S3InputDataLocation"
},
"dependsOn": {
"ref": "RdsMySqlTableCreateActivity"
},
"name": "DataLoadActivity",
"id": "DataLoadActivity",
"runsOn": {
"ref": "Ec2Instance"
},
"type": "CopyActivity"
},
{
"*password": "#{*myRDSPassword}",
"name": "rds_mysql",
"jdbcProperties": "allowMultiQueries=true",
"id": "rds_mysql",
"type": "RdsDatabase",
"rdsInstanceId": "#{myRDSInstanceId}",
"username": "#{myRDSUsername}"
},
{
"instanceType": "t1.micro",
"name": "Ec2Instance",
"actionOnTaskFailure": "terminate",
"securityGroups": "#{myEc2RdsSecurityGrps}",
"id": "Ec2Instance",
"type": "Ec2Resource",
"terminateAfter": "2 Hours"
},
{
"database": {
"ref": "rds_mysql"
},
"name": "RdsMySqlTableCreateActivity",
"runsOn": {
"ref": "Ec2Instance"
},
"id": "RdsMySqlTableCreateActivity",
"type": "SqlActivity",
"script": "#{myRDSTableInsertSql}"
},
{
"database": {
"ref": "rds_mysql"
},
"name": "DestinationRDSTable",
"insertQuery": "#{myRDSTableInsertSql}",
"id": "DestinationRDSTable",
"type": "SqlDataNode",
"table": "#{myRDSTableName}",
"selectQuery": "select * from #{table}"
},
{
"escapeChar": "\\",
"name": "DataFormat1",
"columnSeparator": "|",
"id": "DataFormat1",
"type": "TSV",
"recordSeparator": "\\n"
},
{
"directoryPath": "#{myInputS3Loc}",
"dataFormat": {
"ref": "DataFormat1"
},
"name": "S3InputDataLocation",
"id": "S3InputDataLocation",
"type": "S3DataNode"
},
{
"failureAndRerunMode": "CASCADE",
"resourceRole": "DataPipelineDefaultResourceRole",
"role": "DataPipelineDefaultRole",
"pipelineLogUri": "s3://logs3tords/",
"scheduleType": "ONDEMAND",
"name": "Default",
"id": "Default"
}
],
"parameters": [
{
"description": "RDS MySQL password",
"id": "*myRDSPassword",
"type": "String"
},
{
"watermark": "security group name",
"helpText": "The names of one or more EC2 security groups that have access to the RDS MySQL cluster.",
"description": "RDS MySQL security group(s)",
"isArray": "true",
"optional": "true",
"id": "myEc2RdsSecurityGrps",
"type": "String"
},
{
"description": "RDS MySQL username",
"id": "myRDSUsername",
"type": "String"
},
{
"description": "Input S3 file path",
"id": "myInputS3Loc",
"type": "AWS::S3::ObjectKey"
},
{
"helpText": "The SQL statement to insert data into the RDS MySQL table.",
"watermark": "INSERT INTO #{table} (col1, col2, col3) VALUES(?, ?, ?) ;",
"description": "Insert SQL query",
"id": "myRDSTableInsertSql",
"type": "String"
},
{
"helpText": "The name of an existing table or a new table that will be created based on the create table SQL query parameter below.",
"description": "RDS MySQL table name",
"id": "myRDSTableName",
"type": "String"
},
{
"watermark": "CREATE TABLE pet IF NOT EXISTS (name VARCHAR(20), owner VARCHAR(20), species VARCHAR(20), gender CHAR(1), birth DATE, death DATE);",
"helpText": "The idempotent SQL statement to create the RDS MySQL table if it does not already exist.",
"description": "Create table SQL query",
"optional": "true",
"id": "myRDSCreateTableSql",
"type": "String"
},
{
"watermark": "DB Instance",
"description": "RDS Instance ID",
"id": "myRDSInstanceId",
"type": "String"
}
],
"values": {
"myRDSInstanceId": "instance name",
"myRDSUsername": "user",
"myRDSTableInsertSql": "Insert into Ten.MD_ip_hp (ID, NAME, ADDRESS1, ADDRESS2, CITY, STATE, ZIP, DS ) VALUES(?,?,?,?,?,?,?,?);",
"*myRDSPassword": "password",
"myInputS3Loc": "log location",
"myRDSTableName": "MD_ip_hp"
}
}
UPDATE:
So I specified 'script argument' 1 to 8 on the SQL activity node, which changed my error to "No value specified for parameter 2". How do I now get each number read as a different parameter? >:x
Such a silly thing!
I was able to resolve it by creating a separate script argument corresponding to each parameter in my query. In layman's terms, a script argument for each of the ?s in my query.
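For illustration, a sketch of what the SqlActivity object might look like with one scriptArgument entry per ? placeholder in the insert query above; the argument values here are hypothetical placeholders and would normally come from your own data or pipeline parameters:
{
"name": "RdsMySqlTableCreateActivity",
"id": "RdsMySqlTableCreateActivity",
"type": "SqlActivity",
"database": {
"ref": "rds_mysql"
},
"runsOn": {
"ref": "Ec2Instance"
},
"script": "#{myRDSTableInsertSql}",
"scriptArgument": [
"ID_VALUE",
"NAME_VALUE",
"ADDRESS1_VALUE",
"ADDRESS2_VALUE",
"CITY_VALUE",
"STATE_VALUE",
"ZIP_VALUE",
"DS_VALUE"
]
}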