ARM - How can I get the access key from a storage account to use in AppSettings later in the template? - azure-storage

I'm creating an Azure Resource Manager template that instantiates multiple resources, including an Azure storage account and an Azure App Service with a Web App.
I'd like to be able to capture the primary access key (or the full connection string, either way is fine) from the newly-created storage account, and use that as a value for one of the AppSettings for the Web App.
Is that possible?

Use the listKeys helper function.
"appSettings": [
{
"name": "STORAGE_KEY",
"value": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]).keys[0].value]"
}
]
This quickstart does something similar:
https://azure.microsoft.com/en-us/documentation/articles/cache-web-app-arm-with-redis-cache-provision/
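For context, here is a minimal sketch of how that setting could sit in a nested Microsoft.Web/sites/config resource; the API version, parameter names, and dependsOn entries are illustrative assumptions, not taken from the question:
{
  "apiVersion": "2015-08-01",
  "type": "config",
  "name": "appsettings",
  "dependsOn": [
    "[resourceId('Microsoft.Web/sites', parameters('webSiteName'))]",
    "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]"
  ],
  "properties": {
    "STORAGE_KEY": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2016-01-01').keys[0].value]"
  }
}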

The syntax has changed since the other answer was accepted. The error you will now hit is: Template language expression property 'key1' doesn't exist, available properties are 'keys'.
Keys are now represented as an array of keys, and the syntax is now:
"StorageAccount": "[Concat('DefaultEndpointsProtocol=https;AccountName=',variables('StorageAccountName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('StorageAccountName')), providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]).keys[0].value)]",
See: http://samcogan.com/retrieve-azure-storage-key-in-arm-script/

I have run into this issue twice: first in 2015 and again today, in May 2017.
I need to add connection strings to the Web App automatically, from the resources generated during deployment of the ARM template, so that the values don't have to be added manually later.
The first time, I used the old version of the listKeys function (the old version appears to return the result as a plain value rather than an object):
"AzureWebJobsStorage": {
"type": "Custom",
"value": "[concat(variables('storageConnectionString'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2015-05-01-preview').key1)]"
},
Today, the latest working version of the template is:
"resources": [
{
"apiVersion": "2015-08-01",
"type": "config",
"name": "connectionstrings",
"dependsOn": [
"[resourceId('Microsoft.Web/Sites/', parameters('webSiteName'))]"
],
"properties": {
"DefaultConnection": {
"value": "[concat('Data Source=tcp:', reference(resourceId('Microsoft.Sql/servers/', parameters('sqlserverName'))).fullyQualifiedDomainName, ',1433;Initial Catalog=', parameters('databaseName'), ';User Id=', parameters('administratorLogin'), '#', parameters('sqlserverName'), ';Password=', parameters('administratorLoginPassword'), ';')]",
"type": "SQLServer"
},
"AzureWebJobsStorage": {
"type": "Custom",
"value": "[concat(variables('storageConnectionString'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2016-01-01').keys[0].value)]"
},
"AzureWebJobsDashboard": {
"type": "Custom",
"value": "[concat(variables('storageConnectionString'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2016-01-01').keys[0].value)]"
}
}
},
Thanks.
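For reference, the storageConnectionString variable used in the snippets above is not shown in that answer; it would typically be the connection string prefix, something like this (a hedged sketch, assuming the account name parameter is storageName as above):
"variables": {
  "storageConnectionString": "[concat('DefaultEndpointsProtocol=https;AccountName=', parameters('storageName'), ';AccountKey=')]"
}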

Below is an example of adding a storage account to ADLA (Azure Data Lake Analytics):
"storageAccounts": [
{
"name": "[parameters('DataLakeAnalyticsStorageAccountname')]",
"properties": {
"accessKey": "[listKeys(variables('storageAccountid'),'2015-05-01-preview').key1]"
}
}
],
In the variables section you can keep:
"variables": {
"apiVersion": "[providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]]",
"storageAccountid": "[concat(resourceGroup().id,'/providers/','Microsoft.Storage/storageAccounts/', parameters('DataLakeAnalyticsStorageAccountname'))]"
},
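Note that the variables block defines an apiVersion variable that the accessKey expression above does not use; it hardcodes the 2015-05-01-preview version, whose response exposes key1. With a newer API version the keys come back as an array (as described in the earlier answers), so the expression would look roughly like this hedged sketch:
"accessKey": "[listKeys(variables('storageAccountid'), '2016-01-01').keys[0].value]"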

Related

How can i custom config CHANGELOG.md using standard-version npm package?

I'm using the standard-version command each time I want to publish a new version, but the resulting entries in CHANGELOG.md look like this:
### [10.1.9](https://github.com/my-project-name/compare/v10.1.8...v10.1.9) (2021-03-29)
### [10.1.8](https://github.com/my-project-name/compare/v10.1.7...v10.1.8) (2021-03-29)
### [10.1.7](https://github.com/my-project-name/compare/v10.1.6...v10.1.7) (2021-03-29)
First, the links do not work - the GitHub URL is not correct and I want to point it at the right URL. Second, I'd like to configure which commit types are shown in the changelog file.
I tried to use this documentation but didn't find anything that could help me:
https://github.com/conventional-changelog/conventional-changelog
So how do I configure the way standard-version generates CHANGELOG.md? Can someone provide an example?
Yes. According to the docs:
You can configure standard-version either by:
Placing a standard-version stanza in your package.json (assuming your project is JavaScript).
Creating a .versionrc, .versionrc.json or .versionrc.js.
If you are using a .versionrc.js your default export must be a configuration object, or a function returning a configuration object.
Any of the command line parameters accepted by standard-version can instead be provided via configuration.
Please refer to the conventional-changelog-config-spec for details on available configuration options.
example:
.versionrc
{
"types": [
{
"type": "feat",
"section": "Features"
},
{
"type": "fix",
"section": "Bug Fixes"
},
{
"type": "chore",
"hidden": true
},
{
"type": "docs",
"hidden": true
},
{
"type": "style",
"hidden": true
},
{
"type": "refactor",
"section": "Refactor"
},
{
"type": "perf",
"section": "Performance"
},
{
"type": "test",
"hidden": true
}
]
}
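For the broken compare links specifically: standard-version normally derives the host/owner/repository from the repository field in package.json, and the conventional-changelog-config-spec also exposes URL templates such as compareUrlFormat and commitUrlFormat that you can override in .versionrc. A hedged sketch (the placeholders come from the spec; check it for the full list):
{
  "compareUrlFormat": "{{host}}/{{owner}}/{{repository}}/compare/{{previousTag}}...{{currentTag}}",
  "commitUrlFormat": "{{host}}/{{owner}}/{{repository}}/commit/{{hash}}"
}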

Schema evolution when adding new field

Imagine there are two separate apps: a producer and a consumer.
The code of producer:
import os
from confluent_kafka import avro
from confluent_kafka.avro import AvroProducer
avsc_dir = os.path.dirname(os.path.realpath(__file__))
value_schema = avro.load(os.path.join(avsc_dir, "basic_schema.avsc"))
config = {'bootstrap.servers': 'localhost:9092', 'schema.registry.url': 'http://0.0.0.0:8081'}
producer = AvroProducer(config=config, default_value_schema=value_schema)
producer.produce(topic='testavro', value={'first_name': 'Andrey', 'last_name': 'Volkonsky'})
basic_schema.avsc file is located within producer app. Its content:
{
"name": "basic",
"type": "record",
"doc": "basic schema for tests",
"namespace": "python.test.basic",
"fields": [
{
"name": "first_name",
"doc": "first name",
"type": "string"
},
{
"name": "last_name",
"doc": "last name",
"type": "string"
}
]
}
For now it does not matter what's inside the consumer.
We run the producer once and everything is fine. Then I want to add an age field:
basic_schema.avsc:
{
"name": "basic",
"type": "record",
"doc": "basic schema for tests",
"namespace": "python.test.basic",
"fields": [
{
"name": "first_name",
"doc": "first name",
"type": "string"
},
{
"name": "last_name",
"doc": "last name",
"type": "string"
},
{
"name": "age",
"doc": "age",
"type": "int"
}
]
}
Here I got an error:
confluent_kafka.avro.error.ClientError: Incompatible Avro schema:409
They say here https://docs.confluent.io/platform/current/schema-registry/avro.html#summary that for compatibility type == BACKWARD, consumers should be updated first.
I don't understand what that means technically. Do I have to copy the basic_schema.avsc file to the consumer and run it?
If you registered the schema with BACKWARD compatibility (the default), Confluent Schema Registry simply won't allow you to make an incompatible change such as adding a mandatory field.
You can add an optional field (one with a default value) or use forward compatibility; see the sketch after this answer.
The rule about which side should be upgraded first holds regardless of which changes the compatibility mode actually allows you to make.
Edit - additional info:
Don't switch to forward compatibility simply because you might need to add mandatory fields. Use the compatibility mode that makes sense for your case based on who can update first; for example, it may be impossible to make all producers upgrade at the same time.
So if you are using backward compatibility AND need to add a mandatory field, you probably need a new version of the service, e.g. topic.v1 and topic.v2, where v2 of the service uses the schema with the new mandatory field, and you can then deprecate the v1 service.
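As an example of an optional field: giving the new field a default keeps the change backward compatible, so the producer can register the new schema without the 409. A minimal sketch of the age field, assuming a nullable int is acceptable:
{
  "name": "age",
  "doc": "age",
  "type": ["null", "int"],
  "default": null
}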

How do I automate adding a custom Iot Hub Endpoint (and route to it)?

In order to receive Azure IotHub Device Twin change notifications, it appears that it's necessary to create a custom endpoint and create a route to send notifications to that endpoint. This seems straightforward enough on the Azure Portal, but as one might expect we want to automate it.
I haven't been able to find any documentation for the az CLI or even the REST API, though I might have missed something. I didn't find anything promising in the SDKs either.
How do I automate adding a custom endpoint and then setting up the route for device twin notifications?
You can check the IotHubs ARM template reference to see if it helps.
Route:
"routing": {
"endpoints": {
"serviceBusQueues": [
{
"connectionString": "string",
"name": "string",
"subscriptionId": "string",
"resourceGroup": "string"
}
]
},
"routes": [
{
"name": "string",
"source": "string",
"condition": "string",
"endpointNames": [
"string"
],
"isEnabled": boolean
}
],
Consumer group:
{
"apiVersion": "2016-02-03",
"type": "Microsoft.Devices/IotHubs/eventhubEndpoints/ConsumerGroups",
"name": "[concat(parameters('hubName'), '/events/cg1')]",
"dependsOn": [
"[concat('Microsoft.Devices/Iothubs/', parameters('hubName'))]"
]
},
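For device twin change notifications specifically, the route source is TwinChangeEvents. A minimal, hedged sketch of a filled-in routing block (the endpoint name and placeholder values are illustrative):
"routing": {
  "endpoints": {
    "serviceBusQueues": [
      {
        "name": "twin-updates",
        "connectionString": "<service bus queue connection string>",
        "subscriptionId": "<subscription id>",
        "resourceGroup": "<resource group>"
      }
    ]
  },
  "routes": [
    {
      "name": "TwinChangeRoute",
      "source": "TwinChangeEvents",
      "condition": "true",
      "endpointNames": [
        "twin-updates"
      ],
      "isEnabled": true
    }
  ]
}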
For more detailed information you can reference:
Microsoft.Devices/IotHubs template reference
Create an IoT hub using Azure Resource Manager template (PowerShell)

How to version an API in Azure API Management using Azure Resource Manager

When creating a new API in an Azure API Management Service using the portal, you can specify whether you would like the API to be versioned. However, I can't find a way to replicate this when creating an API in the Management service using ARM. Is this not currently supported, or am I missing something?
I have tried creating a versioned API in the portal and comparing the created template to the template of a non-versioned API and can't see a difference.
Thanks in advance.
To achieve this through ARM scripts you'll need to create an ApiVersionSet resource first:
{
"name": "[concat(variables('ManagementServiceName'), '/', variables('VersionSetName'))]",
"type": "Microsoft.ApiManagement/service/api-version-sets",
"apiVersion": "2017-03-01",
"properties": {
"description": "Api Description",
"displayName": "Api Name",
"versioningScheme": "Segment"
}
}
Then update the apiVersionSetId property on the Microsoft.ApiManagement/service/apis resource:
{
"type": "Microsoft.ApiManagement/service/apis",
"name": "[concat(variables('ManagementServiceName'), '/', variables('ApiName'))]",
"apiVersion": "2017-03-01",
"dependsOn": [
"[resourceId('Microsoft.ApiManagement/service/api-version-sets', variables('ManagementServiceName'), variables('VersionSetName'))]"
],
"properties": {
"displayName": "string",
"apiRevision": "1",
"description": "",
"serviceUrl": "string",
"path": "string",
"protocols": [
"https"
],
"isCurrent": true,
"apiVersion": "v1",
"apiVersionName": "v1",
"apiVersionDescription": "string",
"apiVersionSetId": "[concat('/api-version-sets', variables('VersionSetName'))]"
}
}
The resource for the api-version-sets:
"name": "my-api-version-sets",
"type": "api-version-sets",
"apiVersion": "2018-01-01",
"properties": {
"displayName": "Provider API",
"versioningScheme": "Segment"
},
"dependsOn": [
"[concat('Microsoft.ApiManagement/service/', variables('apiManagementServiceName'))]"
]
Then add the following to the apis resource:
"apiVersion": "2018-01-01",
"type": "apis",
"properties": {
....
"isCurrent": true,
"apiVersion": "v1",
"apiVersionSetId": "/api-version-sets/my-api-version-sets"
You can specify the version in the new Azure portal (new APIM) in the path, in a header, or as a query string. The old Azure API Management portal does not support built-in versioning; there you can only include the version in the Web API URL suffix.
If you still have any issue, kindly add an image and describe the problem.
Azure ARM Portal (New APIM)
Azure APIM Portal (OLD)
Thanks,
Infaaz

AWS Data pipeline CSV data from S3 to DynamoDB

I am trying to transfer CSV data from an S3 bucket to DynamoDB using AWS Data Pipeline. The following is my pipeline script; it is not working properly.
CSV file structure
Name, Designation,Company
A,TL,C1
B,Prog, C2
DynamoDB: N_Table, with Name as the hash key
{
"objects": [
{
"id": "Default",
"scheduleType": "cron",
"name": "Default",
"role": "DataPipelineDefaultRole",
"resourceRole": "DataPipelineDefaultResourceRole"
},
{
"id": "DynamoDBDataNodeId635",
"schedule": {
"ref": "ScheduleId639"
},
"tableName": "N_Table",
"name": "MyDynamoDBData",
"type": "DynamoDBDataNode"
},
{
"emrLogUri": "s3://onlycsv/error",
"id": "EmrClusterId636",
"schedule": {
"ref": "ScheduleId639"
},
"masterInstanceType": "m1.small",
"coreInstanceType": "m1.xlarge",
"enableDebugging": "true",
"installHive": "latest",
"name": "ImportCluster",
"coreInstanceCount": "1",
"logUri": "s3://onlycsv/error1",
"type": "EmrCluster"
},
{
"id": "S3DataNodeId643",
"schedule": {
"ref": "ScheduleId639"
},
"directoryPath": "s3://onlycsv/data.csv",
"name": "MyS3Data",
"dataFormat": {
"ref": "DataFormatId1"
},
"type": "S3DataNode"
},
{
"id": "ScheduleId639",
"startDateTime": "2013-08-03T00:00:00",
"name": "ImportSchedule",
"period": "1 Hours",
"type": "Schedule",
"endDateTime": "2013-08-04T00:00:00"
},
{
"id": "EmrActivityId637",
"input": {
"ref": "S3DataNodeId643"
},
"schedule": {
"ref": "ScheduleId639"
},
"name": "MyImportJob",
"runsOn": {
"ref": "EmrClusterId636"
},
"maximumRetries": "0",
"myDynamoDBWriteThroughputRatio": "0.25",
"attemptTimeout": "24 hours",
"type": "EmrActivity",
"output": {
"ref": "DynamoDBDataNodeId635"
},
"step": "s3://elasticmapreduce/libs/script-runner/script-runner.jar,s3://elasticmapreduce/libs/hive/hive-script,--run-hive-script,--hive-versions,latest,--args,-f,s3://elasticmapreduce/libs/hive/dynamodb/importDynamoDBTableFromS3,-d,DYNAMODB_OUTPUT_TABLE=#{output.tableName},-d,S3_INPUT_BUCKET=#{input.directoryPath},-d,DYNAMODB_WRITE_PERCENT=#{myDynamoDBWriteThroughputRatio},-d,DYNAMODB_ENDPOINT=dynamodb.us-east-1.amazonaws.com"
},
{
"id": "DataFormatId1",
"name": "DefaultDataFormat1",
"column": [
"Name",
"Designation",
"Company"
],
"columnSeparator": ",",
"recordSeparator": "\n",
"type": "Custom"
}
]
}
While executing the pipeline, two of the four steps finish, but the pipeline does not complete.
Currently (2015-04) the default import pipeline template does not support importing CSV files.
If your CSV file is not too big (under 1 GB or so) you can create a ShellCommandActivity to convert the CSV to DynamoDB JSON format first and then feed that to the EmrActivity that imports the resulting JSON file into your table.
As a first step you can create a sample DynamoDB table including all the field types you need, populate it with dummy values, and then export the records using a pipeline (Export/Import button in the DynamoDB console). This will give you an idea of the format expected by the Import pipeline. The type names are not obvious, and the Import activity is very sensitive about the correct case (e.g. you should have bOOL for a boolean field).
Afterwards it should be easy to create an awk script (or any other text converter; at least with awk you can use the default AMI image for your shell activity), which you can feed to your ShellCommandActivity, as in the sketch below. Don't forget to enable the "staging" flag, so your output is uploaded back to S3 for the Import activity to pick it up.
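A hedged sketch of what such a ShellCommandActivity object might look like in the pipeline definition; the ids and script location are placeholders, and with "stage": "true" the input and output are exposed to the script via ${INPUT1_STAGING_DIR} and ${OUTPUT1_STAGING_DIR}:
{
  "id": "CsvToDdbJsonActivity",
  "type": "ShellCommandActivity",
  "runsOn": { "ref": "MyEc2Resource" },
  "input": { "ref": "MyS3CsvInput" },
  "output": { "ref": "MyS3JsonOutput" },
  "stage": "true",
  "scriptUri": "s3://my-bucket/scripts/csv-to-ddb-json.sh"
}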
If you are using the template data pipeline for importing data from S3 to DynamoDB, these data formats won't work. Instead, use the format described in the link below for the input S3 data file: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-importexport-ddb-pipelinejson-verifydata2.html
This is the format of the output file generated by the template data pipeline that exports data from DynamoDB to S3.
Hope that helps.
I would recommend using the CSV data format provided by Data Pipeline instead of a custom one; a sketch follows below.
For debugging errors on the cluster, you can look up the job flow in the EMR console and check the log files for the tasks that failed.
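For example, the question's Custom data format could be swapped for the built-in CSV format, roughly like this (a minimal sketch):
{
  "id": "DataFormatId1",
  "name": "CsvDataFormat",
  "type": "CSV",
  "column": [
    "Name",
    "Designation",
    "Company"
  ]
}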
See the link below for a solution that works (in the question section), albeit for EMR 3.x. Just change the delimiter to "columnSeparator": ",". Personally, I wouldn't use CSV unless you are certain the data is sanitized correctly.
How to upgrade Data Pipeline definition from EMR 3.x to 4.x/5.x?