How to enforce a naming pattern such as "*-*-asp" using Azure Policy?

I am trying to enforce an Azure resource naming pattern for prod/dev/uat environments. The suggested pattern is [service name]-[environment]-[resource short name]. Is there a way to enforce this using Azure Policy? It appears that Azure Policy (the like/match conditions) does not support regex. Please suggest a workaround.
Note: [service name], [environment], and [resource short name] are of variable length.
Thanks.

The code block below should address the requirement of the *-*-asp pattern. I have not put it through extensive testing, but I hope it is of help to anyone looking to enforce naming conventions through policies. It would also be interesting to know whether there is a better solution than the one provided here.
Azure Policy's like/match conditions do not support regex, and the complexity of the solution below only highlights the need for such a feature. There is a User Voice request for this; please vote for it if you see the relevance of a regex feature in Azure Policy - link here.
{
    "if": {
        "allOf": [
            {
                "field": "type",
                "in": "[parameters('listOfResourceTypes')]"
            },
            {
                "not": {
                    "allOf": [
                        {
                            "value": "[equals(length(split(parameters('namePattern'), '-')), length(split(field('name'), '-')))]",
                            "equals": true
                        },
                        {
                            "value": "[equals(toLower(last(split(parameters('namePattern'), '-'))), toLower(last(split(field('name'), '-'))))]",
                            "equals": true
                        }
                    ]
                }
            }
        ]
    },
    "then": {
        "effect": "[parameters('policyEffect')]"
    }
}
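For completeness, the rule above expects three parameters to be declared alongside it. A minimal sketch of what that parameters section could look like (the display names, allowed values and defaults here are my own assumptions, not part of the original policy):
"parameters": {
    "listOfResourceTypes": {
        "type": "Array",
        "metadata": {
            "displayName": "Resource types",
            "description": "Resource types the naming pattern applies to, e.g. Microsoft.Web/serverfarms.",
            "strongType": "resourceTypes"
        }
    },
    "namePattern": {
        "type": "String",
        "metadata": {
            "displayName": "Name pattern",
            "description": "Dash-separated pattern whose last segment is the required suffix, e.g. *-*-asp."
        }
    },
    "policyEffect": {
        "type": "String",
        "allowedValues": [ "Audit", "Deny", "Disabled" ],
        "defaultValue": "Audit"
    }
}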

Related

OPA authorization policies with scopes and roles

I'm using Open Policy Agent as an authorization component together with OIDC-enabled apps.
I have input from the apps in the format:
{
    "token": {
        "scopes": [
            "read:books",
            "write:books"
        ]
    },
    "principal": {
        "roles": [
            "user",
            "moderator"
        ]
    },
    "context": {
        "action": "read",
        "resource": "books"
    }
}
Then I have data with access mapping in the format:
{
    "user": [
        "read:books"
    ],
    "moderator": [
        "read:books",
        "write:books"
    ],
    "administrator": [
        "read:books",
        "write:books",
        "read:store",
        "write:store"
    ]
}
And the policy currently looks like this:
package whatever.authz

context_scope := concat(":", [input.context.action, input.context.resource])

default allow = false

allow {
    token_has_context_scope
    principal_has_resource_access
}

token_has_context_scope {
    context_scope == input.token.scopes[_]
}

principal_has_resource_access {
    principal_role := input.principal.roles[_]
    context_scope == data[principal_role][_]
}
This produces the following error:
2 errors occurred:
policy.rego:16: rego_recursion_error: rule principal_has_resource_access is recursive: principal_has_resource_access -> principal_has_resource_access
policy.rego:7: rego_recursion_error: rule allow is recursive: allow -> principal_has_resource_access -> allow
It is the recursive lookup in the principal_has_resource_access function that is causing the error.
I need to check if one of the roles of the principal is allowed to access the resource as specified by the context. Since roles is an array, I need to find the union of all access scopes in the data and see if one of them matches the context scope. What am I doing wrong in the policy?
The snippet can be found in the Rego Playground https://play.openpolicyagent.org/p/KhovLRgMup
OPA stores all data under the data path, including policies and rules. There's no way for the compiler to know that the data you're providing isn't referencing the policy itself (i.e. data["whatever"]), which would be recursive. The easiest way to work around this is to simply use a top-level attribute for your data that differs from your policy (i.e. the package name), like this:
{
    "attributes": {
        "user": [
            "read:books"
        ],
        "moderator": [
            "read:books",
            "write:books"
        ],
        "administrator": [
            "read:books",
            "write:books",
            "read:store",
            "write:store"
        ]
    }
}
And update your policy to reference this:
context_scope == data["attributes"][principal_role][_]
Since data.attributes != data.whatever.authz there is no risk of recursion, and the compiler won't complain. You might want a better name than "attributes", but I'll leave that to you :)
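Putting it together, the rule from the question then becomes (only the data reference changes):
principal_has_resource_access {
    principal_role := input.principal.roles[_]
    context_scope == data.attributes[principal_role][_]
}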

Understanding JSON Schema errors using ajv

I have the following schema and JSON to validate using ajv.
const Ajv = require("ajv");
// ajv v6; jsonPointers gives the '/countries/1/type' style dataPath seen in the output below
const ajv = new Ajv({ jsonPointers: true });

const schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "type": "object",
    "required": [ "countries" ],
    "definitions": {
        "europeDef": {
            "type": "object",
            "required": ["type"],
            "properties": { "type": { "const": "europe" } }
        },
        "asiaDef": {
            "type": "object",
            "required": ["type"],
            "properties": { "type": { "const": "asia" } }
        }
    },
    "properties": {
        "countries": {
            "type": "array",
            "items": {
                "oneOf": [
                    { "$ref": "#/definitions/europeDef" },
                    { "$ref": "#/definitions/asiaDef" }
                ]
            }
        }
    }
};
const data = {
    "countries": [
        { "type": "asia" },
        { "type": "europe1" }
    ]
};
const isValid = ajv.validate(schema, data); // (schema, data)
if (!isValid) {
    console.log(ajv.errors);
}
and the error is:
[
    {
        keyword: 'const',
        dataPath: '/countries/1/type',
        schemaPath: '#/definitions/europeDef/properties/type/const',
        params: { allowedValue: 'europe' },
        message: 'should be equal to constant'
    },
    {
        keyword: 'const',
        dataPath: '/countries/1/type',
        schemaPath: '#/definitions/asiaDef/properties/type/const',
        params: { allowedValue: 'asia' },
        message: 'should be equal to constant'
    },
    {
        keyword: 'oneOf',
        dataPath: '/countries/1',
        schemaPath: '#/properties/countries/items/oneOf',
        params: { passingSchemas: null },
        message: 'should match exactly one schema in oneOf'
    }
]
I know why the error appears (I have used 'europe1', which does not conform to the schema).
I have the following questions about this error output:
1. Since I provided 'asia' as a valid const, why does the error still mention 'asia' as part of the second entry in the array? Why is it reported as an error even though the schema is perfectly satisfied from the 'asia' perspective? Is this because 'oneOf' is being used? In other words, it is very hard to tell what is actually in error and what is not.
2. For 'asia', the message 'should be equal to constant' (the second item of the array) is misleading IMO. It gives the impression that there are still some problems with 'asia'.
3. How should I parse this error: on the basis of schemaPath or dataPath? Either way, it still gives the impression that there is a problem with 'asia' (and actually there is not).
4. How do I explain the above error output to a novice, who will still ask why 'asia' appears in the errors even though it is correct?
5. If the schema becomes more complex, using oneOf/anyOf/allOf or if-then-else, the ajv.errors output becomes even harder to understand and explain (when certain conditions are satisfied but still appear as errors, such as 'asia' here).
6. Are there any theories/documentation/guidelines for understanding the errors in a better way?
For JSON Schema draft 2019-09, we created several standardised output formats. ajv provides one of the most useful outputs from a draft-07 schema in comparison to many libraries.
When looking at the errors, what you might be overlooking is the dataPath value.
In answer to 1: the errors reported all apply to data path /countries/1; /countries/0 is fine, as you say. Arrays in JavaScript start at 0.
I think knowing that also answers all your other questions.
I think you may have assumed that arrays start at 1, and that the data path was referring to the 'asia' object, while it's actually targeting the 'europe1' object.
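To see this concretely, here is a small sketch of my own (assuming the JSON-pointer style dataPath shown in the error output above): resolving each error's dataPath against the validated data shows that every error points at the 'europe1' entry, never at 'asia'.
// Hypothetical helper, not part of ajv: walk each dataPath down to the offending value.
const failing = ajv.errors.map(err => {
    const segments = err.dataPath.split('/').filter(Boolean); // '/countries/1/type' -> ['countries', '1', 'type']
    const offendingValue = segments.reduce((node, key) => node[key], data);
    return { dataPath: err.dataPath, offendingValue, message: err.message };
});
console.log(failing); // every entry resolves to the 'europe1' element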
Please do comment if I'm missing something or you're still confused on this.

How do I automate adding a custom Iot Hub Endpoint (and route to it)?

In order to receive Azure IotHub Device Twin change notifications, it appears that it's necessary to create a custom endpoint and create a route to send notifications to that endpoint. This seems straightforward enough on the Azure Portal, but as one might expect we want to automate it.
I haven't been able to find any documentation for the az cli or even the REST API, though I might have missed something. I didn't find anything promising in the SDKs either.
How do I automate adding a custom endpoint and then setting up the route for device twin notifications?
You can check the IotHubs ARM template reference to see if it helps.
Route:
"routing": {
"endpoints": {
"serviceBusQueues": [
{
"connectionString": "string",
"name": "string",
"subscriptionId": "string",
"resourceGroup": "string"
}
]
},
"routes": [
{
"name": "string",
"source": "string",
"condition": "string",
"endpointNames": [
"string"
],
"isEnabled": boolean
}
],
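As a hedged illustration (the endpoint, queue and parameter names below are made up), a route that forwards device twin change notifications to a Service Bus queue endpoint could look like this; "TwinChangeEvents" is the route source for twin change notifications:
"routing": {
    "endpoints": {
        "serviceBusQueues": [
            {
                "name": "twin-changes-queue",
                "connectionString": "[parameters('serviceBusQueueConnectionString')]",
                "subscriptionId": "[subscription().subscriptionId]",
                "resourceGroup": "[resourceGroup().name]"
            }
        ]
    },
    "routes": [
        {
            "name": "TwinChangeRoute",
            "source": "TwinChangeEvents",
            "condition": "true",
            "endpointNames": [
                "twin-changes-queue"
            ],
            "isEnabled": true
        }
    ]
}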
Consumer group:
{
    "apiVersion": "2016-02-03",
    "type": "Microsoft.Devices/IotHubs/eventhubEndpoints/ConsumerGroups",
    "name": "[concat(parameters('hubName'), '/events/cg1')]",
    "dependsOn": [
        "[concat('Microsoft.Devices/Iothubs/', parameters('hubName'))]"
    ]
},
For more detailed information you can reference:
Microsoft.Devices/IotHubs template reference
Create an IoT hub using Azure Resource Manager template (PowerShell)

ARM - How can I get the access key from a storage account to use in AppSettings later in the template?

I'm creating an Azure Resource Manager template that instantiates multiple resources, including an Azure storage account and an Azure App Service with a Web App.
I'd like to be able to capture the primary access key (or the full connection string, either way is fine) from the newly-created storage account, and use that as a value for one of the AppSettings for the Web App.
Is that possible?
Use the listKeys helper function.
"appSettings": [
{
"name": "STORAGE_KEY",
"value": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]).keys[0].value]"
}
]
This quickstart does something similar:
https://azure.microsoft.com/en-us/documentation/articles/cache-web-app-arm-with-redis-cache-provision/
The syntax has changed since the other answer was accepted. The error you will now hit is: "Template language expression property 'key1' doesn't exist, available properties are 'keys'."
Keys are now represented as an array of keys, and the syntax is now:
"StorageAccount": "[Concat('DefaultEndpointsProtocol=https;AccountName=',variables('StorageAccountName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('StorageAccountName')), providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]).keys[0].value)]",
See: http://samcogan.com/retrieve-azure-storage-key-in-arm-script/
I have faced this issue twice: first in 2015, and most recently in May 2017.
I needed to add connection strings to the Web App, and I wanted them populated automatically from the resources generated during deployment of the ARM template, so those values don't have to be added manually later.
The first time, I used the old version of the listKeys function (it looks like the old version returns the result as a value rather than as an object):
"AzureWebJobsStorage": {
"type": "Custom",
"value": "[concat(variables('storageConnectionString'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2015-05-01-preview').key1)]"
},
Today, the latest version of the working template is:
"resources": [
{
"apiVersion": "2015-08-01",
"type": "config",
"name": "connectionstrings",
"dependsOn": [
"[resourceId('Microsoft.Web/Sites/', parameters('webSiteName'))]"
],
"properties": {
"DefaultConnection": {
"value": "[concat('Data Source=tcp:', reference(resourceId('Microsoft.Sql/servers/', parameters('sqlserverName'))).fullyQualifiedDomainName, ',1433;Initial Catalog=', parameters('databaseName'), ';User Id=', parameters('administratorLogin'), '#', parameters('sqlserverName'), ';Password=', parameters('administratorLoginPassword'), ';')]",
"type": "SQLServer"
},
"AzureWebJobsStorage": {
"type": "Custom",
"value": "[concat(variables('storageConnectionString'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2016-01-01').keys[0].value)]"
},
"AzureWebJobsDashboard": {
"type": "Custom",
"value": "[concat(variables('storageConnectionString'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2016-01-01').keys[0].value)]"
}
}
},
Thanks.
Below is an example of adding a storage account to Azure Data Lake Analytics (ADLA):
"storageAccounts": [
{
"name": "[parameters('DataLakeAnalyticsStorageAccountname')]",
"properties": {
"accessKey": "[listKeys(variables('storageAccountid'),'2015-05-01-preview').key1]"
}
}
],
In the variables section you can keep:
"variables": {
"apiVersion": "[providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]]",
"storageAccountid": "[concat(resourceGroup().id,'/providers/','Microsoft.Storage/storageAccounts/', parameters('DataLakeAnalyticsStorageAccountname'))]"
},
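Note that '2015-05-01-preview' is an older API version that still returns key1/key2 directly; with a newer storage API version (as discussed above), the same accessKey would presumably use the keys array syntax, for example:
"accessKey": "[listKeys(variables('storageAccountid'), '2016-01-01').keys[0].value]"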

API Pagination Standards

I have been working on an API where pagination is required; only 25 elements will be returned in each request. I was looking around for standards and I seem to see two different approaches:
The Link Header
Link: https://www.rfc-editor.org/rfc/rfc5988
Example:
Link: <https://api.github.com/user/repos?page=3&per_page=100>; rel="next",
<https://api.github.com/user/repos?page=50&per_page=100>; rel="last"
In the JSON response
Link: API pagination best practices
Example:
"paging": {
"previous": "http://api.example.com/foo?since=TIMESTAMP"
"next": "http://api.example.com/foo?since=TIMESTAMP2"
}
Question:
Should I do both? And that being said, is "paging" the correct key, or should it be "links" or "pagination"?
I would say it depends on the structure of the data you return (and may return in the future).
If you never have nested objects that need their own links, then using the Link header is (mildly) preferable, because it's more correct. The issue with nested objects is that you can't nest Link headers.
Consider the following collection entity:
{
    "links": {
        "collection": "/cards?offset=0&limit=25"
    },
    "data": [
        {
            "cardName": "Island of Wak-Wak",
            "type": "Land",
            "links": {
                "set": "/cards?set=Arabian Knights"
            }
        },
        {
            "cardName": "Mana Drain",
            "type": "Interrupt",
            "links": {
                "set": "/cards?set=Legends"
            }
        }
    ]
}
There's no good way to include links for the cards in the headers.
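If you do decide to do both, here is a rough sketch of my own (an Express-style handler; the route and field names are purely illustrative): collection-level pagination goes in the Link header, while per-item links stay in the body, where they can nest.
const express = require('express');
const app = express();

app.get('/cards', (req, res) => {
    // Collection-level pagination in the Link header (RFC 5988 style).
    res.set('Link', '</cards?offset=25&limit=25>; rel="next", </cards?offset=0&limit=25>; rel="first"');
    // Per-item links stay in the body, since headers cannot express nesting.
    res.json({
        data: [
            { cardName: 'Island of Wak-Wak', type: 'Land', links: { set: '/cards?set=Arabian Knights' } },
            { cardName: 'Mana Drain', type: 'Interrupt', links: { set: '/cards?set=Legends' } }
        ]
    });
});

app.listen(3000);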