I'm using the standard-version command each time I want to publish a new version, but the resulting entries in CHANGELOG.md look like this:
### [10.1.9](https://github.com/my-project-name/compare/v10.1.8...v10.1.9) (2021-03-29)
### [10.1.8](https://github.com/my-project-name/compare/v10.1.7...v10.1.8) (2021-03-29)
### [10.1.7](https://github.com/my-project-name/compare/v10.1.6...v10.1.7) (2021-03-29)
First, the links do not work: the GitHub URL is not correct, and I want to configure it to point at the right URL. Second, I'd like to configure what is shown in the changelog file (the commit types, for instance).
I tried to use this documentation but didn't find anything that could help me:
https://github.com/conventional-changelog/conventional-changelog
So how do I configure the way standard-version writes CHANGELOG.md? Can someone provide an example?
Yes. According to the docs:
You can configure standard-version either by:
Placing a standard-version stanza in your package.json (assuming your project is JavaScript).
Creating a .versionrc, .versionrc.json or .versionrc.js.
If you are using a .versionrc.js your default export must be a configuration object, or a function returning a configuration object.
Any of the command line parameters accepted by standard-version can instead be provided via configuration.
Please refer to the conventional-changelog-config-spec for details on available configuration options.
Example .versionrc:
{
  "types": [
    {
      "type": "feat",
      "section": "Features"
    },
    {
      "type": "fix",
      "section": "Bug Fixes"
    },
    {
      "type": "chore",
      "hidden": true
    },
    {
      "type": "docs",
      "hidden": true
    },
    {
      "type": "style",
      "hidden": true
    },
    {
      "type": "refactor",
      "section": "Refactor"
    },
    {
      "type": "perf",
      "section": "Performance"
    },
    {
      "type": "test",
      "hidden": true
    }
  ]
}
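This controls which commit types appear in the changelog and under which headings. For the broken compare links, the config spec linked above also exposes URL templates; normally the URL is derived from the repository field in your package.json, so it's worth checking that first. A minimal sketch, where my-org/my-project-name is just a placeholder for your real repository path:
{
  "compareUrlFormat": "https://github.com/my-org/my-project-name/compare/{{previousTag}}...{{currentTag}}",
  "commitUrlFormat": "https://github.com/my-org/my-project-name/commit/{{hash}}"
}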
Does the Azure DevOps REST API allow me to expand multiple levels? When using the release definitions, I specifically need the workflowTasks, which are buried a couple of lists deep.
More context:
I'm optimizing an Azure DevOps extension that scans pipelines for compliance. Right now there's a rule that scans the workflowTasks. To get the required information for all relevant pipelines, we make the following call to the Azure DevOps API for each release definition:
https://vsrm.dev.azure.com/{Organization}/{Project}/_apis/release/definitions/{definitionID}?api-version=6.0
This returns a completely decked-out release definition including environments like this:
"environments": [{
"id": 10,
"name": "Stage 1",
... etc
"deployPhases": [{
"deploymentInput": {
"parallelExecution": {
"parallelExecutionType": "none"
},
...etc
"rank": 1,
...etc
"workflowTasks": [{
"environment": {},
"taskId": "obfuscated",
"version": "2.*",
"name": "obfuscated",
"refName": "",
"enabled": true,
"alwaysRun": false,
"continueOnError": false,
"timeoutInMinutes": 0,
"retryCountOnTaskFailure": 0,
"definitionType": "task",
"overrideInputs": {},
"condition": "succeeded()",
"inputs": {
"template": "obfuscated",
"assets": "obfuscated",
"duration": "60",
"title": "",
"description": "",
"implementationPlan": "obfuscated"
}
}, {
"environment": {},
"taskId": "obfuscated",
"version": "2.*",
"name": "obfuscated",
"refName": "",
"enabled": true,
"alwaysRun": false,
"continueOnError": false,
"timeoutInMinutes": 0,
"retryCountOnTaskFailure": 0,
"definitionType": "task",
"overrideInputs": {},
"condition": "succeeded()",
"inputs": {
"changeClosureCode": "1",
"changeClosureComments": "Successful implementation",
"changeId": ""
}
}
]
}
],
...etc
}
],
But when I try to get the list as a whole, using the following URL:
https://vsrm.dev.azure.com/{organization}/{project}/_apis/release/definitions?$expand=environments&api-version=6.0
the Environments arrays I get back (there is one for each definition, obviously) look nothing like the previous one. They don't include deployPhases (not even as an empty array).
Since we have 2300 release definitions, you can imagine how inconvenient it is to call the release/definitions/{definitionID} endpoint instead of the release/definitions endpoint that fetches all of them at the same time.
Is there a way to expand the release/definitions call to fetch all environments including workflowTasks and maybe other stuff? Is there a syntax that allows for this? Something like $expand=environments>deployPhases>workflowTasks?
I am afraid there is no syntax that allows you to fetch all environments including workflowTasks in a single call.
You could use the REST API with a PowerShell script to fetch the environments, including workflowTasks, for each definition.
Sample PowerShell script:
# Query a single release definition; this endpoint returns the fully expanded
# environments, including deployPhases and workflowTasks.
$url = "https://vsrm.dev.azure.com/{Organization}/{Project}/_apis/release/definitions/{definitionID}?api-version=6.0"
Write-Host "URL: $url"
$pipeline = Invoke-RestMethod -Uri $url -Method Get -Headers @{
    Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN"
}
Write-Host "workflowTasks = $($pipeline.environments.deployPhases.workflowTasks | ConvertTo-Json -Depth 100)"
The output is the list of workflowTasks of that definition, serialized as JSON.
Running code in an Edge extension throws a TypeError: "Unable to get property 'create' of undefined or null reference".
I have tried running it in both the popup and background scripts. I have the notifications permission in the manifest. I did see that some APIs require being run in a content script, but since I'm not interacting with tabs or web pages, I don't think that applies to me...?
Manifest:
{
  "name": "xxx",
  "author": "xxx",
  "version": "1.1",
  "options_page": "options.html",
  "background": {
    "scripts": ["jquery-3.3.1.min.js", "background.js"],
    "persistent": true
  },
  "permissions": [
    "xxx",
    "background",
    "notifications",
    "storage"
  ],
  "offline_enabled": true,
  "browser_action": {
    "default_title": "xxx",
    "default_popup": "popup.html",
    "default_icon": "32.png"
  },
  "manifest_version": 2
}
Background script:
try {
  browser.notifications.create("test", {
    "type": "basic",
    "title": "Test",
    "iconUrl": "48.png",
    "message": "This is a test"
  });
} catch (e) {
  alert(e);
}
Based on your description, first you could try upgrading the Edge browser to the latest version and then try the browser.notifications.create method again.
But in my opinion, I prefer to display notifications using the Web Notifications API; you could check this article.
In order to receive Azure IotHub Device Twin change notifications, it appears that it's necessary to create a custom endpoint and create a route to send notifications to that endpoint. This seems straightforward enough on the Azure Portal, but as one might expect we want to automate it.
I haven't been able to find any documentation for the az CLI or even the REST API, though I might have missed something. I didn't find anything promising looking in the SDKs either.
How do I automate adding a custom endpoint and then setting up the route for device twin notifications?
You can check the IotHubs ARM template reference to see if it helps.
Route:
"routing": {
"endpoints": {
"serviceBusQueues": [
{
"connectionString": "string",
"name": "string",
"subscriptionId": "string",
"resourceGroup": "string"
}
]
},
"routes": [
{
"name": "string",
"source": "string",
"condition": "string",
"endpointNames": [
"string"
],
"isEnabled": boolean
}
],
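For device twin change notifications specifically, the route's source value in the template is TwinChangeEvents. A minimal sketch of a filled-in routes entry (the route and endpoint names here are made-up placeholders; the endpoint name must match one of the endpoints you define above):
"routes": [
  {
    "name": "twin-change-route",
    "source": "TwinChangeEvents",
    "condition": "true",
    "endpointNames": [
      "my-servicebus-queue"
    ],
    "isEnabled": true
  }
]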
Consumer group:
{
  "apiVersion": "2016-02-03",
  "type": "Microsoft.Devices/IotHubs/eventhubEndpoints/ConsumerGroups",
  "name": "[concat(parameters('hubName'), '/events/cg1')]",
  "dependsOn": [
    "[concat('Microsoft.Devices/Iothubs/', parameters('hubName'))]"
  ]
},
For more detailed information, you can refer to:
Microsoft.Devices/IotHubs template reference
Create an IoT hub using Azure Resource Manager template (PowerShell)
I'm trying to make a storage plugin for Hadoop (HDFS) and Apache Drill.
Actually, I'm confused: I don't know what to set as the port for the hdfs:// connection, and what to set for the location.
This is my plugin:
{
  "type": "file",
  "enabled": true,
  "connection": "hdfs://localhost:54310",
  "workspaces": {
    "root": {
      "location": "/",
      "writable": false,
      "defaultInputFormat": null
    },
    "tmp": {
      "location": "/tmp",
      "writable": true,
      "defaultInputFormat": null
    }
  },
  "formats": {
    "psv": {
      "type": "text",
      "extensions": [
        "tbl"
      ],
      "delimiter": "|"
    },
    "csv": {
      "type": "text",
      "extensions": [
        "csv"
      ],
      "delimiter": ","
    },
    "tsv": {
      "type": "text",
      "extensions": [
        "tsv"
      ],
      "delimiter": "\t"
    },
    "parquet": {
      "type": "parquet"
    },
    "json": {
      "type": "json"
    },
    "avro": {
      "type": "avro"
    }
  }
}
So, is it correct to set localhost:54310? I got that with the command:
hdfs getconf -nnRpcAddresses
Or should it be :8020?
Second question: what do I need to set for the location? My Hadoop folder is in /usr/local/hadoop, and there you can find /etc, /bin, /lib, /log and so on. So, do I need to set the location to something on my datanode, or what?
Third question. When I'm connecting to Drill, I go through sqlline and then connect to my ZooKeeper, like:
!connect jdbc:drill:zk=localhost:2181
My question here is: after I make the storage plugin and connect to Drill via ZooKeeper, can I query HDFS files?
I'm very sorry if this is a noob question, but I haven't found anything useful on the internet, or at least it hasn't helped me.
If you are able to explain some of this to me, I'll be very grateful.
As per the Drill docs:
{
  "type": "file",
  "enabled": true,
  "connection": "hdfs://10.10.30.156:8020/",
  "workspaces": {
    "root": {
      "location": "/user/root/drill",
      "writable": true,
      "defaultInputFormat": null
    }
  },
  "formats": {
    "json": {
      "type": "json"
    }
  }
}
In "connection",
put namenode server address.
If you are not sure about this address.
Check fs.default.name or fs.defaultFS properties in core-site.xml.
Coming to "workspaces",
you can save workspaces in this. In the above example, there is a workspace with name root and location /user/root/drill.
This is your HDFS location.
If you have files under /user/root/drill hdfs directory, you can query them using this workspace name.
Example: abc.csv is under this directory:
select * from dfs.root.`abc.csv`
After successfully creating the plugin, you can start Drill and start querying.
You can query any directory irrespective of workspaces.
Say you want to query employee.json in the /tmp/data HDFS directory.
The query is:
select * from dfs.`/tmp/data/employee.json`
I had a similar problem: Drill could not read the dfs server. Finally, it turned out the problem was caused by the namenode port.
The default address of namenode web UI is http://localhost:50070/.
The default address of namenode server is hdfs://localhost:8020/.
I'm creating an Azure Resource Manager template that instantiates multiple resources, including an Azure storage account and an Azure App Service with a Web App.
I'd like to be able to capture the primary access key (or the full connection string, either way is fine) from the newly-created storage account, and use that as a value for one of the AppSettings for the Web App.
Is that possible?
Use the listKeys helper function.
"appSettings": [
{
"name": "STORAGE_KEY",
"value": "[listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]).keys[0].value]"
}
]
This quickstart does something similar:
https://azure.microsoft.com/en-us/documentation/articles/cache-web-app-arm-with-redis-cache-provision/
The syntax has changed since the other answer was accepted. The error you will now hit is: "Template language expression property 'key1' doesn't exist, available properties are 'keys'".
Keys are now represented as an array of keys, and the syntax is now:
"StorageAccount": "[Concat('DefaultEndpointsProtocol=https;AccountName=',variables('StorageAccountName'),';AccountKey=',listKeys(resourceId('Microsoft.Storage/storageAccounts', variables('StorageAccountName')), providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]).keys[0].value)]",
See: http://samcogan.com/retrieve-azure-storage-key-in-arm-script/
I have faced this issue twice: first in 2015, and most recently today, in May 2017.
I need to add connection strings to the Web App, and I want to add them automatically from the resources generated during deployment of the ARM template, so that I don't have to add these values manually later.
The first time, I used the old version of the listKeys function (it looks like the old version returns the result not as an object but as a plain value):
"AzureWebJobsStorage": {
"type": "Custom",
"value": "[concat(variables('storageConnectionString'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName')), '2015-05-01-preview').key1)]"
},
Today, the latest version of the working template is:
"resources": [
{
"apiVersion": "2015-08-01",
"type": "config",
"name": "connectionstrings",
"dependsOn": [
"[resourceId('Microsoft.Web/Sites/', parameters('webSiteName'))]"
],
"properties": {
"DefaultConnection": {
"value": "[concat('Data Source=tcp:', reference(resourceId('Microsoft.Sql/servers/', parameters('sqlserverName'))).fullyQualifiedDomainName, ',1433;Initial Catalog=', parameters('databaseName'), ';User Id=', parameters('administratorLogin'), '#', parameters('sqlserverName'), ';Password=', parameters('administratorLoginPassword'), ';')]",
"type": "SQLServer"
},
"AzureWebJobsStorage": {
"type": "Custom",
"value": "[concat(variables('storageConnectionString'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2016-01-01').keys[0].value)]"
},
"AzureWebJobsDashboard": {
"type": "Custom",
"value": "[concat(variables('storageConnectionString'), listKeys(resourceId('Microsoft.Storage/storageAccounts', parameters('storageName')), '2016-01-01').keys[0].value)]"
}
}
},
Thanks.
Below is an example of adding a storage account to ADLA (Azure Data Lake Analytics):
"storageAccounts": [
{
"name": "[parameters('DataLakeAnalyticsStorageAccountname')]",
"properties": {
"accessKey": "[listKeys(variables('storageAccountid'),'2015-05-01-preview').key1]"
}
}
],
In the variables section you can keep:
"variables": {
"apiVersion": "[providers('Microsoft.Storage', 'storageAccounts').apiVersions[0]]",
"storageAccountid": "[concat(resourceGroup().id,'/providers/','Microsoft.Storage/storageAccounts/', parameters('DataLakeAnalyticsStorageAccountname'))]"
},
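With those variables, the access key could also be retrieved with the newer array-based syntax shown in the earlier answers; a sketch that reuses the apiVersion variable defined above (assuming the latest Microsoft.Storage API version is acceptable):
"accessKey": "[listKeys(variables('storageAccountid'), variables('apiVersion')).keys[0].value]"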