APIM named values create/update through PowerShell/CLI

Is there a way to create or update APIM named values through PowerShell or the Azure CLI?
I tried searching the Microsoft documentation but was not able to find anything related.
Is there a cmdlet for this? I'm trying to create new named values or update existing ones through PowerShell or the Azure CLI.

As far as some quick research told me, there is no dedicated Azure CLI command for this at the moment.
However, you can achieve the same result using the az rest command:
az rest \
  --method put \
  --uri "https://management.azure.com/subscriptions/$CURRENT_SUBSCRIPTION/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.ApiManagement/service/$APIM/namedValues/$NAME?api-version=2019-12-01" \
  --body "{\"properties\":{\"displayName\":\"$DISPLAYNAME\",\"value\":\"$VALUE\",\"secret\":$SECRET_BOOLEAN}}"
Replace $CURRENT_SUBSCRIPTION, $RESOURCE_GROUP, $APIM, $NAME, $DISPLAYNAME, $VALUE, and $SECRET_BOOLEAN with your values or shell variables. Note that the body uses double quotes (with the inner quotes escaped) so the shell actually expands the variables; inside single quotes they would be sent literally.
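If escaping all the inner quotes gets unwieldy, you can build the body with a here-document and let the shell expand the variables; a minimal sketch with made-up values (the resulting $BODY is what you would pass to az rest --body):

```shell
# Made-up values for illustration
DISPLAYNAME="MyNamedValue"
VALUE="some-value"
SECRET_BOOLEAN=false

# No escaped quotes needed inside a here-document;
# the shell expands the variables directly
BODY=$(cat <<EOF
{"properties":{"displayName":"$DISPLAYNAME","value":"$VALUE","secret":$SECRET_BOOLEAN}}
EOF
)
echo "$BODY"
```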

How to detach or remove a data catalog tag from a BigQuery table using gcloud command

Could anyone please share an ETL tag template in GCP Data Catalog?
I'd like to refresh a tag value with its ETL status every time a BigQuery table is updated. I'm trying to use gcloud commands to create a tag template. I need to remove the tag from the template using the gcloud command and add another tag to that template, so that I can maintain the ETL status through automation.
I am able to remove the tag manually through the UI; I need the corresponding gcloud command.
The procedure is explained in the Data Catalog documentation:
Suppose that you have the following tag template created:
gcloud data-catalog tag-templates create demo_template \
--location=us-central1 \
--display-name="Demo Tag Template" \
--field=id=source,display-name="Source of data asset",type=string,required=TRUE \
--field=id=num_rows,display-name="Number of rows in the data asset",type=double \
--field=id=has_pii,display-name="Has PII",type=bool \
--field=id=pii_type,display-name="PII type",type='enum(EMAIL_ADDRESS|US_SOCIAL_SECURITY_NUMBER|NONE)'
You need to look up the Data Catalog entry created for your BigQuery table:
ENTRY_NAME=$(gcloud data-catalog entries lookup '//bigquery.googleapis.com/projects/PROJECT_ID/datasets/DATASET/tables/TABLE' --format="value(name)")
Once you have entry name you can:
Create a tag if it wasn't created on the entry or if it was deleted earlier:
cat > tag_file.json << EOF
{
"source": "BigQuery",
"num_rows": 1000,
"has_pii": true,
"pii_type": "EMAIL_ADDRESS"
}
EOF
gcloud data-catalog tags create --entry=${ENTRY_NAME} --tag-template=demo_template --tag-template-location=us-central1 --tag-file=tag_file.json
The command returns (among other things) the tag name, which can be used with update or delete.
Delete a tag from entry:
gcloud data-catalog tags delete TAG_NAME
Update a tag, so that you don't have to delete and recreate it:
gcloud data-catalog tags update --entry=${ENTRY_NAME} --tag-template=demo_template --tag-template-location=us-central1 --tag-file=tag_file.json
If you lost your tag name, use the list command:
gcloud data-catalog tags list --entry=${ENTRY_NAME}

Azcopy command issue with parameters

I'm using Azcopy within a shell script to copy blobs within a container from one storage account to another on Azure.
Using the following command -
azcopy copy "https://$source_storage_name.blob.core.windows.net/$container_name/?$source_sas" "https://$dest_storage_name.blob.core.windows.net/$container_name/?$dest_sas" --recursive
I'm generating the SAS token for both source and destination accounts and passing them as parameters in the command above along with the storage account and container names.
On execution, I keep getting this error:
failed to parse user input due to error: the inferred source/destination combination could not be identified, or is currently not supported
When I manually enter the storage account names, container name and SAS tokens, the command executes successfully and storage data gets transferred as expected. However, when I use parameters in the azcopy command I get the error.
Any suggestions on this would be greatly appreciated.
Thanks!
You can use the PowerShell script below:
param
(
[string] $source_storage_name,
[string] $source_container_name,
[string] $dest_storage_name,
[string] $dest_container_name,
[string] $source_sas,
[string] $dest_sas
)
.\azcopy.exe copy "https://$source_storage_name.blob.core.windows.net/$source_container_name/?$source_sas" "https://$dest_storage_name.blob.core.windows.net/$dest_container_name/?$dest_sas" --recursive=true
To execute the above script you can run the below command.
.\ScriptFileName.ps1 -source_storage_name "<XXXXX>" -source_container_name "<XXXXX>" -source_sas "<XXXXXX>" -dest_storage_name "<XXXXX>" -dest_container_name "<XXXXXX>" -dest_sas "<XXXXX>"
I generated the SAS tokens for both storage accounts from the Azure portal. Make sure to select all the required permissions when generating them.
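When a command works with literal values but fails once variables are substituted, the variable contents are the first thing to check; printing the assembled URL before handing it to azcopy usually reveals an empty or unexpanded variable right away (all values below are placeholders):

```shell
# Placeholder values; in a real script these come from parameters
source_storage_name="srcaccount"
source_container_name="mycontainer"
source_sas="sv=2021-06-08&sig=abc123"

# Assemble and inspect the URL before calling azcopy with it
src_url="https://$source_storage_name.blob.core.windows.net/$source_container_name/?$source_sas"
echo "$src_url"
```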

az devops powershell passing space delimited values

The Azure DevOps command
az pipelines variable-group create takes --variables, documented as "Variables in format key=value space separated pairs".
I need to create a variable group with 20+ key=value pairs. I'm finding it hard to do so as a variable.
This command line works
az pipelines variable-group create --name myname --variables "owner=Firstname Lastname" "application=cool-name"
I'm trying to do something like this:
$tags ={owner='Firstname Lastname' application='cool-name'}
az pipelines variable-group create --name owgroup --variables $tags
I looked at this article, which showed promise, but the recommended answers don't work:
Space separated values; how to provide a value containing a space
Any ideas how this can be accomplished?
You could try the following.
$tags = ("owner=Firstname Lastname", "application=cool-name")
az pipelines variable-group create --name owgroup --variables $tags
I don't have Azure DevOps to test against, but I validated a similar scenario with the Azure CLI in PowerShell ISE. If you are running the Azure CLI in Bash, you could follow your linked answer.
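For completeness, the Bash equivalent of that PowerShell array is a quoted array expansion; the printf below just demonstrates that each key=value pair reaches the command as a single argument, spaces included (the group name and pairs are made up):

```shell
# Each element holds one key=value pair, spaces and all
tags=("owner=Firstname Lastname" "application=cool-name")

# The brackets show the argument boundaries the command would see
printf '[%s]\n' "${tags[@]}"
```

With the array in place, az pipelines variable-group create --name owgroup --variables "${tags[@]}" receives the pairs intact.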

DynamoDB Local to DynamoDB AWS

I've built an application using DynamoDB Local and now I'm at the point where I want to setup on AWS. I've gone through numerous tools but have had no success finding a way to take my local DB and setup the schema and migrate data into AWS.
For example, I can get the data into a CSV format but AWS has no way to recognize that. It seems that I'm forced to create a Data Pipeline... Does anyone have a better way to do this?
Thanks in advance
As was mentioned earlier, DynamoDB Local is there for testing purposes. However, you can still migrate your data if you need to. One approach is to save the data in some format, such as JSON or CSV, store it in S3, and then use something like Lambda functions or your own server to read from S3 and write into your new DynamoDB table. As for setting up the schema, you can use the same code you used to create your local table to create the remote table via the AWS SDK.
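A minimal sketch of that export/import path using the AWS CLI and jq; the table name, sample item, and file names are made up, and the actual aws calls are left as comments so the reshaping step can be seen on its own:

```shell
# Stand-in for exporting the local table (not executed here):
#   aws dynamodb scan --table-name MyTable \
#     --endpoint-url http://localhost:8000 > scan.json
cat > scan.json <<'EOF'
{"Items": [{"id": {"S": "1"}, "name": {"S": "alice"}}]}
EOF

# Reshape the scan output into batch-write-item's request format;
# note batch-write-item accepts at most 25 items per call
jq '{"MyTable": [.Items[] | {PutRequest: {Item: .}}]}' scan.json > batch.json
cat batch.json

# Then load the items into the real table (not executed here):
#   aws dynamodb batch-write-item --request-items file://batch.json
```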
You can create a standalone application to get the list of tables from the local DynamoDB and create them in your AWS account; after that you can read all the data for each table and save it.
I'm not sure which language you're familiar with, but I'll explain some APIs in Java that might help:
DynamoDB.listTables();
DynamoDB.createTable(CreateTableRequest);
An example of how to create a table using the above API:
// mapper is a DynamoDBMapper, client a DynamoDB document-API instance
ProvisionedThroughput provisionedThroughput = new ProvisionedThroughput(1L, 1L);
try {
    CreateTableRequest groupTableRequest = mapper.generateCreateTableRequest(Group.class); //1
    groupTableRequest.setProvisionedThroughput(provisionedThroughput); //2
    // groupTableRequest.getGlobalSecondaryIndexes().forEach(index -> index.setProvisionedThroughput(provisionedThroughput)); //3
    Table groupTable = client.createTable(groupTableRequest); //4
    groupTable.waitForActive(); //5
} catch (ResourceInUseException e) {
    log.debug("Group table already exists");
}
1 - creates the CreateTableRequest from the mapped class
2 - sets the provisioned throughput; this will vary depending on your requirements
3 - if the table has a global secondary index you can use this line (optional)
4 - the actual table is created here
5 - the thread blocks until the table becomes active
I didn't mention the data-access APIs (insert, etc.); I assume you're familiar with those since you already use them with local DynamoDB.
I did a little work setting up my local dev environment. I use SAM to create the DynamoDB tables in AWS. I didn't want to do the work twice, so I ended up copying the schema from AWS to my local instance. The same approach can work the other way around.
aws dynamodb describe-table --table-name chess_lobby \
| jq '.Table' \
| jq 'del(.TableArn)' \
| jq 'del(.TableSizeBytes)' \
| jq 'del(.TableStatus)' \
| jq 'del(.TableId)' \
| jq 'del(.ItemCount)' \
| jq 'del(.CreationDateTime)' \
| jq 'del(.GlobalSecondaryIndexes[].IndexSizeBytes)' \
| jq 'del(.ProvisionedThroughput.NumberOfDecreasesToday)' \
| jq 'del(.GlobalSecondaryIndexes[].IndexStatus)' \
| jq 'del(.GlobalSecondaryIndexes[].IndexArn)' \
| jq 'del(.GlobalSecondaryIndexes[].ItemCount)' \
| jq 'del(.GlobalSecondaryIndexes[].ProvisionedThroughput.NumberOfDecreasesToday)' > chess_lobby.json
aws dynamodb create-table \
--cli-input-json file://chess_lobby.json \
--endpoint-url http://localhost:8000
The first command uses the AWS CLI's describe-table to get the schema as JSON. jq then deletes all the unneeded read-only keys, since create-table is strict with its parameter validation, and create-table creates the table in the local environment via the --endpoint-url flag.
You can instead use the --endpoint-url parameter on the first command to fetch your local schema, then run create-table without --endpoint-url to create the table directly in AWS.

How to get a list of internal IP addresses of GCE instances

I have a bunch of instances running in GCE. I want to programmatically get a list of the internal IP addresses of them without logging into the instances (locally).
I know I can run:
gcloud compute instances list
But are there any flags I can pass to just get the information I want?
e.g.
gcloud compute instances list --internal-ips
or similar? Or am I going to have to dust off my sed/awk brain and parse the output?
I also know that I can get the output in JSON using --format=json, but I'm trying to do this in a bash script.
The simplest way to programmatically get a list of internal IPs (or external IPs) without a dependency on any tools other than gcloud is:
$ gcloud --format="value(networkInterfaces[0].networkIP)" compute instances list
$ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances list
This uses --format=value which also requires a projection which is a list of resource keys that select resource data values. For any command you can use --format=flattened to get the list of resource key/value pairs:
$ gcloud --format=flattened compute instances list
A few things here.
First, gcloud's default output format for listing is not guaranteed to be stable, and new columns may be added in the future. Don't script against this!
Second, the three output modes accessible with the format flag, --format=json, --format=yaml, and --format=text, are based on key/value pairs and can be scripted against even if new fields are introduced in the future.
Two good ways to do what you want are to use JSON and the jq tool,
gcloud compute instances list --format=json \
| jq '.[].networkInterfaces[].networkIP'
or text format and line-oriented tools such as grep and sed,
gcloud compute instances list --format=text \
| grep '^networkInterfaces\[[0-9]\+\]\.networkIP:' | sed 's/^.* //g'
I hunted around and couldn't find a straight answer, probably because efficient tools weren't available when others replied to the original question. GCP constantly updates its libraries and APIs, and we can use filters and projections to extract targeted attributes.
Here I outline how to reserve an external static IP, see how its attributes are named and organised, and then export the external IP address so that I can use it in other scripts (e.g. assign it to a VM instance or authorise that IP address on a Cloud SQL instance).
Reserve a static IP in a region of your choice
gcloud compute --project=[PROJECT] addresses create [NAME] --region=[REGION]
[Informational] View the details of the regional static IP that was reserved
gcloud compute addresses describe [NAME] --region [REGION] --format=flattened
[Informational] Extract just the address value from the static IP's attributes
gcloud compute addresses describe [NAME] --region [REGION] --format='value(address)'
Export the desired value (e.g. the external IP address) into a variable
export STATIC_IP=$(gcloud compute addresses describe [NAME] --region [REGION] --format='value(address)')
Use the exported parameter in other scripts
echo $STATIC_IP
The best option is to have a ready-made gcloud command to use as and when needed.
This can be achieved using the table() format option with gcloud, as below:
gcloud compute instances list --format='table(id,name,status,zone,networkInterfaces[0].networkIP :label=Internal_IP,networkInterfaces[0].accessConfigs[0].natIP :label=External_IP)'
What does it do for you?
It gets your data in a clean format.
It gives you the option to add or remove columns.
Need additional columns? How do you find a column's name before you even run the above command?
Execute the following, which gives you the data in raw JSON with each value and its name; copy those names and add them to your table() list. :-)
gcloud compute instances list --format=json
Plus point: this is pretty much the same syntax you can use to fetch data for any GCP resource, whether with gcloud, kubectl, etc.
As far as I know you can't filter on specific fields in the gcloud tool.
Something like this will work for a Bash script, but it still feels a bit brittle:
gcloud compute instances list --format=yaml | grep " networkIP:" | cut -c 14-100
I agree with @Christiaan. Currently there is no automated way to get the internal IPs using the gcloud command.
You can use the following command to print the internal IPs (4th column):
gcloud compute instances list | tail -n+2 | awk '{print $4}'
or the following one if you want to have the pair <instance_name> <internal_ip> (1st and 4th column)
gcloud compute instances list | tail -n+2 | awk '{print $1, $4}'
I hope it helps.