How to generate SAS token using Access policy for a container of ADLS gen 2 - azure-storage

How do I generate a SAS token using an access policy for a folder in a container of ADLS Gen2,
exactly like in the image below but for ADLS Gen2 containers or folders? Thank you in advance.

To generate a SAS token using an access policy on an ADLS Gen2 container, you need to create the access policy first. You can create the access policy through the Azure portal (see this link) or through Storage Explorer.
Based on your attached screenshot you are using Microsoft Azure Storage Explorer, so here are the steps to create an access policy:
1) Go to your container and right-click on it.
2) Select Manage Access Policies.
3) Click Add. Provide the access policy ID and tick the permissions you need on the container, such as Read and Write, then click Save.
4) Once the access policy is created, you can create a SAS based on that access policy: right-click on the container, select Get Shared Access Signature, choose the access policy from the dropdown, and click Create.
Generate a SAS using Terraform:
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 2.65"
    }
  }
  required_version = ">= 0.14.9"
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "rg" {
  name     = "terraformtest"
  location = "West Europe"
}

resource "azurerm_storage_account" "storage" {
  name                     = "storage name" # placeholder - use a valid, globally unique storage account name
  resource_group_name      = azurerm_resource_group.rg.name
  location                 = azurerm_resource_group.rg.location
  account_tier             = "Standard"
  account_replication_type = "GRS"
  allow_blob_public_access = true
}

resource "azurerm_storage_container" "container" {
  name                  = "terraformcont"
  storage_account_name  = azurerm_storage_account.storage.name
  container_access_type = "private"
}

data "azurerm_storage_account_blob_container_sas" "example" {
  connection_string = azurerm_storage_account.storage.primary_connection_string
  container_name    = azurerm_storage_container.container.name
  https_only        = true

  start  = "Date" # placeholder - use an ISO 8601 timestamp such as "2021-09-01T00:00:00Z"
  expiry = "Date" # placeholder - use an ISO 8601 timestamp

  permissions {
    read   = true
    add    = true
    create = false
    write  = false
    delete = true
    list   = true
  }
}
output "sas_url_query_string" {
value = data.azurerm_storage_account_blob_container_sas.example.sas
sensitive = true
}
After running terraform apply, the generated SAS appears in the terraform.tfstate file; since the output is marked sensitive, you can also read it by naming it explicitly with terraform output sas_url_query_string (or with terraform output -json).
For more information check this link.

Related

azure runbook PowerShell script content is not importing in terraform properly in azure automation account

I have created an Azure Automation account using Terraform. I have saved my existing runbook PowerShell script files locally. I successfully uploaded all the script files at once while creating the automation account with the code below:
resource "azurerm_automation_runbook" "example" {
for_each = fileset("Azure_Runbooks/", "*")
name = split(".", each.key)[0]
location = var.location
resource_group_name = var.resource_group
automation_account_name = azurerm_automation_account.example.name
log_verbose = var.log_verbose
log_progress = var.log_progress
runbook_type = var.runbooktype
content = each.value
}
After running the terraform apply command, all the script files upload successfully to the automation account, but the content of the PowerShell scripts is not being uploaded. I have checked the runbooks in the automation account and there is no content inside the files; I am seeing only the file names.
Can someone please help me with the above issue?
You are assuming that fileset(path, pattern) returns the contents of each file as each.value, but that is not the case; each.value is just the file name (relative to the directory you passed in).
You need something like:
resource "azurerm_automation_runbook" "example" {
for_each = fileset("Azure_Runbooks/", "*")
name = split(".", each.key)[0]
location = var.location
resource_group_name = var.resource_group
automation_account_name = azurerm_automation_account.example.name
log_verbose = var.log_verbose
log_progress = var.log_progress
runbook_type = var.runbooktype
content = file(format("%s%s", "Azure_Runbooks/", each.value)
}
I hope this helps.
I have fixed the issue with the correct code:
resource "azurerm_automation_runbook" "example" {
for_each = fileset("Azure_Runbooks/", "*")
name = split(".", each.key)[0]
location = var.location
resource_group_name = var.resource_group
automation_account_name = azurerm_automation_account.example.name
log_verbose = var.log_verbose
log_progress = var.log_progress
runbook_type = var.runbooktype
content = file(format("%s%s" , "Azure_Runbooks/" , each.key))
}
Thanks @YoungGova for your help.
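As a side note, the same content expression can also be written with plain string interpolation instead of format(); a minimal, equivalent sketch:
content = file("Azure_Runbooks/${each.key}") # same as file(format("%s%s", "Azure_Runbooks/", each.key))
Both forms read each runbook's source from the Azure_Runbooks/ directory relative to where Terraform is run.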

Multi S3 bucket Definition with versioning limitations

I was reading this post: Terraform - creating multiple buckets
and was wondering how I could add a filter to enable bucket versioning on one of the buckets and disable versioning on the rest, using Terraform conditionals or anything else that would make it work.
I was trying something like this, but it is not working:
variable "s3_bucket_name" {
type = "list"
default = ["prod_bucket", "stage-bucket", "qa_bucket"]
}
resource "aws_s3_bucket" "henrys_bucket" {
count = "${length(var.s3_bucket_name)}"
bucket = "${var.s3_bucket_name[count.index]}"
acl = "private"
force_destroy = "true"
var.s3_bucket_name[count.index] != "target-bucket-name" versioning { enabled = true } : versioning { enabled = false }
}
You can use a list of objects instead of just a list of bucket names. Each object can contain the bucket name and a versioning_enabled flag; then use both the bucket name and the flag.
Something like:
bucket = var.s3_buckets[count.index].bucket_name
And for versioning, add a dynamic block based on var.s3_buckets[count.index].versioning_enabled, like below:
dynamic "versioning" {
for_each = var.s3_buckets[count.index].versioning_enabled== true ? [1] : []
content {
enabled = true
}
}
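Putting the answer together, a minimal sketch of the list-of-objects approach could look like the following. The variable name s3_buckets, the bucket names, and the flag values are illustrative, and it assumes an AWS provider version (pre-4.0) where acl and the inline versioning block are still supported on aws_s3_bucket:
variable "s3_buckets" {
  type = list(object({
    bucket_name        = string
    versioning_enabled = bool
  }))
  default = [
    { bucket_name = "prod-bucket", versioning_enabled = true },
    { bucket_name = "stage-bucket", versioning_enabled = false },
    { bucket_name = "qa-bucket", versioning_enabled = false },
  ]
}
resource "aws_s3_bucket" "henrys_bucket" {
  count         = length(var.s3_buckets)
  bucket        = var.s3_buckets[count.index].bucket_name
  acl           = "private"
  force_destroy = true
  # The versioning block is only rendered for buckets whose flag is true.
  dynamic "versioning" {
    for_each = var.s3_buckets[count.index].versioning_enabled ? [1] : []
    content {
      enabled = true
    }
  }
}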

Terraform - Conditional expression for an AWS role

I have the following in my module to create a role:
resource "aws_iam_role" "default" {
name = var.name
assume_role_policy = var.assume_role_policy
permissions_boundary = var.account_id != "" ? var.permissions_boundary : "arn:aws:iam::${data.aws_caller_identity.default.account_id}:policy/BoundedPermissionsPolicy"
}
Problem - I want to be able to set the permissions_boundary argument to use the account ID if it is present, and if it is not specified then fall back to var.permissions_boundary (arn:aws:iam::${data.aws_caller_identity.default.account_id}:policy/BoundedPermissionsPolicy).
The code above does not work when I try to use the account ID.
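One way to read that requirement is: build the boundary ARN from var.account_id when it is supplied, and otherwise fall back to var.permissions_boundary. A minimal sketch under that assumption (it reuses the policy name BoundedPermissionsPolicy from the question; whether the supplied account ID should point at a policy of the same name is an assumption):
resource "aws_iam_role" "default" {
  name               = var.name
  assume_role_policy = var.assume_role_policy
  # Sketch: use the supplied account ID to build the boundary ARN when it is set,
  # otherwise fall back to the permissions boundary ARN passed in directly.
  permissions_boundary = var.account_id != "" ? "arn:aws:iam::${var.account_id}:policy/BoundedPermissionsPolicy" : var.permissions_boundary
}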

Terraform update access policies

I am facing a problem with a Terraform workflow that automates the creation of storage accounts, key vaults and access policies.
What I am trying to achieve is as follows:
I have a storage account resource that runs with a for_each loop:
//==================================================
// Automation storage accounts
//==================================================
resource "azurerm_storage_account" "storage-foreach" {
  for_each                 = var.storage-foreach
  access_tier              = "Hot"
  account_kind             = "StorageV2"
  account_replication_type = "LRS"
  account_tier             = "Standard"
  location                 = var.location
  name                     = each.value
  resource_group_name      = azurerm_resource_group.tenant-testing-hamza.name
  depends_on               = [azurerm_key_vault_key.client-key]

  identity {
    type = "SystemAssigned"
  }

  lifecycle {
    prevent_destroy = false
  }
}
This storage account resource loops through the following variable to create the storage accounts:
variable "storage-foreach" {
type = map(string)
default = { "storage1" = "storage1", "storage2" = "storage2", "storage3" = "storage3", "storage4" = "storage4"}
}
So far everything works smoothly. Then I wanted to add those storage accounts' object IDs to my key vault access policy, as follows:
resource "azurerm_key_vault_access_policy" "storage" {
for_each = var.storage-foreach
key_vault_id = azurerm_key_vault.tenantsnbshared.id
tenant_id = "<tenant-id"
object_id = azurerm_storage_account.storage-foreach[each.key].identity.0.principal_id
key_permissions = ["get", "Create", "List", "Restore", "Recover", "Unwrapkey", "Wrapkey", "Purge", "Encrypt", "Decrypt", "Sign", "Verify"]
secret_permissions = ["get", "set", "list", "delete", "recover"]
}
So far everything works just fine while creating the resources, and I have all the access policies in place. If I remove, for example, storage1 from my variable, the storage account gets deleted along with the access policy related to that specific storage account, which is good.
And here is the main issue I am facing. If I add the same storage account back to the variable and run terraform apply, the 3 existing access policies get removed and the access policy for the re-added storage account gets created. If I run terraform apply one more time the logic gets inverted: it deletes the first storage account's access policy and adds the other 3.
I can't find a solution to simply update my access policies according to the elements I have set in my variable.

VM creation using Terraform in vSphere gives "An error occurred while customizing VM"

provider "vsphere" {
vsphere_server = "myserver"
user = "myuser"
password = "mypass"
allow_unverified_ssl = true
version = "v1.21.0"
}
data "vsphere_datacenter" "dc" {
name = "pcloud-datacenter"
}
data "vsphere_datastore_cluster" "datastore_cluster" {
name = "pc-storage"
datacenter_id = data.vsphere_datacenter.dc.id
}
data "vsphere_compute_cluster" "compute_cluster" {
name = "pcloud-cluster"
datacenter_id = data.vsphere_datacenter.dc.id
}
data "vsphere_network" "network" {
name = "u32c01p26-1514"
datacenter_id = data.vsphere_datacenter.dc.id
}
data "vsphere_virtual_machine" "vm_template" {
name = "first-terraform-vm"
datacenter_id = data.vsphere_datacenter.dc.id
}
resource "vsphere_virtual_machine" "vm" {
count = 1
name = "first-terraform-vm-1"
resource_pool_id = data.vsphere_compute_cluster.compute_cluster.resource_pool_id
datastore_cluster_id = data.vsphere_datastore_cluster.datastore_cluster.id
num_cpus = 2
memory = 1024
wait_for_guest_ip_timeout = 2
wait_for_guest_net_timeout = 0
guest_id = data.vsphere_virtual_machine.vm_template.guest_id
scsi_type = data.vsphere_virtual_machine.vm_template.scsi_type
network_interface {
network_id = data.vsphere_network.network.id
adapter_type = data.vsphere_virtual_machine.vm_template.network_interface_types[0]
}
disk {
name = "disk0.vmdk"
size = data.vsphere_virtual_machine.vm_template.disks.0.size
eagerly_scrub = data.vsphere_virtual_machine.vm_template.disks.0.eagerly_scrub
thin_provisioned = data.vsphere_virtual_machine.vm_template.disks.0.thin_provisioned
}
folder = "virtual-machines"
clone {
template_uuid = data.vsphere_virtual_machine.vm_template.id
customize {
linux_options {
host_name = "first-terraform-vm-1"
domain = "localhost.localdomain"
}
network_interface {
ipv4_address = "10.10.14.100"
ipv4_netmask = 24
}
ipv4_gateway = "10.10.14.1"
}
}
}
Running the Terraform script throws the error below:
Error:
Virtual machine customization failed on "/pcloud-datacenter/vm/virtual-machines/first-terraform-vm-1":
An error occurred while customizing VM first-terraform-vm-1. For details reference the log file <No Log> in the guest OS.
The virtual machine has not been deleted to assist with troubleshooting. If
corrective steps are taken without modifying the "customize" block of the
resource configuration, the resource will need to be tainted before trying
again. For more information on how to do this, see the following page:
https://www.terraform.io/docs/commands/taint.html
on create_vm.tf line 34, in resource "vsphere_virtual_machine" "vm":
34: resource "vsphere_virtual_machine" "vm" {
Somehow the generated VM "first-terraform-vm-1" doesn't have the Connected box checked in its network settings, while my template "first-terraform-vm" does have the network Connected box checked.
I see a similar post on GitHub: https://github.com/hashicorp/terraform-provider-vsphere/issues/951
But I am not sure why this issue is still surfacing.
vSphere version: 6.7
Terraform v0.12.28
provider.vsphere v1.21.0
Is there anything wrong with my template, or am I missing something? Can anyone help, please? I have been stuck on this for the last 2 days.
The problem looks to be with the template that I used. The Linux template should have NetworkManager installed and running; it looks like Terraform (via vSphere guest customization) uses NetworkManager to assign the IP address to the newly created VM.