Azure CLI command to make changes to storage accounts - azure-storage

I was looking to get help with writing an Azure CLI command to make changes to storage accounts:
Storage accounts to use private link
Storage account public access to be blocked
Storage accounts should restrict network access using VNET rules
Firewall and Private Endpoint should be configured on key vault

Storage accounts to use private link
To set or approve a private endpoint connection for an Azure Storage account, the AZ CLI command is az storage account private-endpoint-connection approve.
To manage the private-link resources of a storage account, use az storage account private-link-resource.
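For example, a minimal sketch (the resource group, account, and connection names are placeholders, not values from the question):
# List the private-link resources (group IDs) available on the storage account
az storage account private-link-resource list --account-name mystorageaccount --resource-group myRg
# Approve a pending private endpoint connection by name
az storage account private-endpoint-connection approve --account-name mystorageaccount --resource-group myRg --name myPrivateEndpointConnection --description "Approved"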
Storage account public access to be blocked
There are two kinds of public access you can allow or disallow on an Azure Storage account:
--public-network-access: accepts Enabled or Disabled and controls whether the storage account endpoint accepts traffic from the public internet.
--allow-blob-public-access: accepts true or false and controls whether anonymous public read access to blobs or containers in the storage account is permitted.
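For example, both settings can be applied with az storage account update (the resource group and account names are placeholders):
# Disable public network access and anonymous blob access on the account
az storage account update -g myRg -n mystorageaccount --public-network-access Disabled --allow-blob-public-access false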
Storage accounts should restrict network access using VNET rules
To allow the storage account within a specific address-range:
az storage account network-rule add -g myRg --account-name mystorageaccount --ip-address 23.45.1.0/24
To allow the access of storage account for a subnet:
az storage account network-rule add -g myRg --account-name mystorageaccount --vnet-name myvnet --subnet mysubnet
Note: --subnet is the name or ID of the subnet. If a name is supplied, --vnet-name (the name of the virtual network) must also be supplied.
Refer here for more information.
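Note that the network rules only take effect once the account's default action is set to deny (placeholder names again):
# Deny traffic that does not match a network rule or the allowed exceptions
az storage account update -g myRg -n mystorageaccount --default-action Deny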
Firewall and Private Endpoint should be configured on key vault
There are several AZ CLI commands on Key Vault for approving, listing, deleting, and otherwise managing private endpoint connections, such as:
az keyvault private-endpoint-connection
az keyvault private-endpoint-connection approve
az keyvault private-endpoint-connection delete
az keyvault private-endpoint-connection list
az keyvault private-endpoint-connection reject
az keyvault private-endpoint-connection show
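For example, a pending connection can be approved by name; a minimal sketch with placeholder names:
# Approve a pending private endpoint connection on the vault
az keyvault private-endpoint-connection approve --vault-name mykeyvault --name myPrivateEndpointConnection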
To control public access when creating or updating a Key Vault, use the --public-network-access parameter of az keyvault create or az keyvault update; its values are Enabled and Disabled. This property specifies whether the vault accepts traffic from the public internet.
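A minimal sketch (the vault name is a placeholder):
# Block all public internet traffic to the vault
az keyvault update --name mykeyvault --resource-group myRg --public-network-access Disabled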
Refer here for more information on AZ key Vault commands.
Note: Complete list of Azure CLI Commands on Storage Accounts

Related

Vault Hashicorp: Passing aws dynamic secret to a script

1/ Every day at 3am, we run a script alfa.sh on server A in order to send some backups to AWS (an S3 bucket).
As a requirement we had to configure AWS (aws configure) on the server, which means the Secret Key and Access Key are stored on this server. We would now like to use short-TTL credentials valid only from 3am to 3:15am. HashiCorp Vault does that very well.
2/ On server B we have HashiCorp Vault installed, and we managed to generate short-TTL dynamic secrets for our S3 bucket (access key / secret key).
3/ We would now like to pass the daily generated dynamic secrets to our alfa.sh. Any idea how to achieve this?
4/ Since we are generating a new Secret Key and Access Key, I understand that a new AWS configuration (aws configure) will have to be performed on server A in order to be able to perform the backup. Any experience with this?
DISCLAIMER: I have no experience with aws configure so someone else may have to answer this part of the question. But I believe it's not super relevant to the problem here, so I'll give my partial answer.
First things first - solve your "secret zero" problem. If you are using the AWS secrets engine, it seems unlikely that your server is running on AWS, as in that case you could skip the middle man and just give your server an IAM policy that allows direct access to the S3 resource. So find the best Vault auth method for your use case. If your server runs in a cloud like AWS, Azure, or GCP, in a container platform like K8s or CF, or has a JWT token delivered along with a JWKS endpoint Vault can trust, target one of those; if all else fails, use AppRole authentication, delivering a wrapped token via a trusted CI solution.
Then, log into Vault in your shell script using those credentials. The login will look different depending on the auth method chosen. You can also leverage Vault Agent to automatically handle the login for you, and cache secrets locally.
#!/usr/bin/env bash
## Dynamic Login
vault login -method="${DYNAMIC_AUTH_METHOD}" role=my-role
## OR AppRole Login
resp=$(vault write -format=json auth/approle/login role-id="${ROLE_ID}" secret-id="${SECRET_ID}")
VAULT_TOKEN=$(echo "${resp}" | jq -r .auth.client_token)
export VAULT_TOKEN
Then, pull down the AWS dynamic secret. Each time you read a creds endpoint you will get a new credential pair, so it is important not to make multiple API calls here, and instead cache the entire API response, then parse the response for each necessary field.
#!/usr/bin/env bash
resp=$(vault read -format=json aws/creds/my-role)
AWS_ACCESS_KEY_ID=$(echo "${resp}" | jq -r .data.access_key)
export AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY=$(echo "${resp}" | jq -r .data.secret_key)
export AWS_SECRET_ACCESS_KEY
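Because the AWS CLI reads AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY from the environment, the backup commands in alfa.sh can then run without re-running aws configure on server A; for example (the bucket and path are placeholders):
# The AWS CLI picks the exported credentials up from the environment
aws s3 cp /backups/nightly.tar.gz s3://my-backup-bucket/nightly.tar.gz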
This is a very general answer establishing a pattern. Your environment particulars will determine manner of execution. You can improve this pattern by leveraging features like CIDR binds, number of uses of auth credentials, token wrapping, and delivery via CI solution.

GCP: how to edit Cloud API access scopes on running nodes in GKE

I have an issue: on an existing cluster the selected Cloud API access scope is "Storage Read Only", but a cronjob needs to push backups to Cloud Storage and gets a 403 error. So how can I change it?

Access Azure Datalake from Datafactory using Service Principal

We are trying to access datalake from datafactory using Service principal.
So as part of it, I created an AD Group and a Service Principal, and added the SP to the AD Group.
I used the AD Group to create ACL roles in the Azure Data Lake, but this does not work as we get a 'This request is not authorized' error.
If I add the Service Principal to the 'Storage Blob Contributor' RBAC role it works.
Any idea on how to get this working? TIA.
Role assignments are used by Azure RBAC to apply sets of permissions to security principals. A security principal is an Azure Active Directory (AD) object that represents a user, group, service principal, or managed identity. A permission set can grant a security principal "coarse-grain" access, for example to all of the data in a storage account or all of the data in a container.
Examples are the Storage Blob Data Owner, Contributor, or Reader roles.
ACLs let you apply a "finer-grain" level of access to directories and files. An ACL is a permission construct containing a sequence of ACL entries, and each ACL entry associates a security principal with a certain access level. See Access control lists (ACLs) in Azure Data Lake Storage Gen2 for additional information.
Hence, you need to set up both kinds of access, as sketched below.
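For example, a minimal Azure CLI sketch, assuming a Data Lake Storage Gen2 account named mydatalake, a filesystem named data, a directory raw/, the service principal's object ID in $SP_OBJECT_ID, and the AD group's object ID in $GROUP_OBJECT_ID (all placeholders, not values from the question):
# RBAC: grant the service principal data-plane access at the account scope
az role assignment create --assignee-object-id "$SP_OBJECT_ID" --assignee-principal-type ServicePrincipal --role "Storage Blob Data Contributor" --scope "/subscriptions/<sub-id>/resourceGroups/myRg/providers/Microsoft.Storage/storageAccounts/mydatalake"
# ACL: merge a read/execute entry for the AD group onto a directory and its children
az storage fs access update-recursive --account-name mydatalake -f data -p raw/ --acl "group:$GROUP_OBJECT_ID:r-x" --auth-mode login
Remember that with ACLs the principal also needs execute (x) permission on every parent directory up to the container root.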

Access Key Vault from local Service Fabric cluster with User Assigned Manged Identity(MSI)

I want to access the Key Vault from my Service Fabric application via Managed Service Identity (MSI). I have enabled MSI on the virtual machine scale set in the Azure Portal and given it access to my Key Vault resource. This works like a charm up in the cloud. However, I am having problems with my local develop environment.
As far as I understand, I can grant myself access to the Key Vault and run az login in Azure CLI. Alas, this doesn't work when running the application in a local Service Fabric cluster.
I am using .NET Core 2.1 in Service Fabric and am getting the exception below.
Azure.Identity.AuthenticationFailedException: DefaultAzureCredential failed to retrieve a token from the included credentials.
EnvironmentCredential authentication unavailable. Environment variables are not fully configured.
ManagedIdentityCredential authentication unavailable. No Managed Identity endpoint found.
SharedTokenCacheCredential authentication failed: Persistence check failed. Inspect inner exception for details
Visual Studio Token provider can't be accessed at C:\Users\Default\AppData\Local.IdentityService\AzureServiceAuth\tokenprovider.json
VisualStudioCodeCredential authentication failed: A specified logon session does not exist. It may already have been terminated.
Services are likely running under the built-in 'NetworkService' account, which cannot access the Azure CLI credentials because az login was run in your user session.
Try creating machine level environment variables to access the vault:
Create a service principal with a password. Follow steps here to create a service principal and grant it permissions to the Key Vault.
Set an environment variable named AzureServicesAuthConnectionString to RunAs=App;AppId=AppId;TenantId=TenantId;AppKey=Secret. You need to replace AppId, TenantId, and Secret with the actual values from step #1.
Run the application in your local development environment. No code change is required. AzureServiceTokenProvider will use this environment variable and the service principal to authenticate to Azure AD.
Don't forget to restart, so the environment variables are added to all processes.
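For step #1, a minimal Azure CLI sketch (the service principal and vault names are placeholders; az ad sp create-for-rbac prints the appId, password, and tenant values that go into the connection string):
# Create a service principal with a password for local development
az ad sp create-for-rbac --name my-local-dev-sp
# Grant it permission to read secrets from the vault
az keyvault set-policy --name my-vault --spn <appId-from-previous-output> --secret-permissions get list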
As the documentation on DefaultAzureCredential shows, the Environment and Managed Identity credentials are intended for deployed services, while the Azure CLI credential requires logging in with your Azure account via the az login command.
So, Environment or Managed Identity authentication is appropriate for you. For example, using the environment approach requires setting the environment variables first (AZURE_CLIENT_ID, AZURE_TENANT_ID, and AZURE_CLIENT_SECRET for a service principal with a secret); see here. Then you could create a secret client using the DefaultAzureCredential.
// Create a secret client using the DefaultAzureCredential
var client = new SecretClient(new Uri("https://myvault.vault.azure.net/"), new DefaultAzureCredential());
I was able to get this working with local Service Fabric development by opening Services.msc on my local development machine and configuring the 'Service Fabric Host Service' to run as my local user account rather than the default local service account.
Only then would DefaultAzureCredential work for picking up the Az CLI login.

Azure Storage - Allowed Microsoft Service when Firewall is set

I am trying to connect a public logic app (not ISE environment) to a storage account that is restricted to a Vnet.
According to the Storage account documentation, access should be possible using a system-assigned managed identity.
However I just tried in 3 different subscriptions and the result is always the same:
{
  "status": 403,
  "message": "This request is not authorized to perform this operation.\r\nclientRequestId: 2ada961e-e4c5-4dae-81a2-520397f277a6",
  "error": {
    "message": "This request is not authorized to perform this operation."
  },
  "source": "azureblob-we.azconn-we-01.p.azurewebsites.net"
}
I already provided access with different IAM roles, including Owner. It feels like the service that should be allowed according to the documentation is not actually being allowed.
The Allow trusted Microsoft services... setting also allows a particular instance of the below services to access the storage account, if you explicitly assign an RBAC role to the system-assigned managed identity for that resource instance. In this case, the scope of access for the instance corresponds to the RBAC role assigned to the managed identity.
Azure Logic Apps (Microsoft.Logic/workflows): enables logic apps to access storage accounts.
https://learn.microsoft.com/en-us/azure/storage/common/storage-network-security#exceptions
What am I doing wrong?
Added screenshots:
https://i.stack.imgur.com/CfwJK.png
https://i.stack.imgur.com/tW7k9.png
https://i.stack.imgur.com/Lxyqd.png
https://i.stack.imgur.com/Sp7ZV.png
https://i.stack.imgur.com/Hp9JG.png
https://i.stack.imgur.com/rRbau.png
For authenticating access to Azure resources by using managed identities in Azure Logic Apps, you can follow the document. The Logic App should be registered in the same subscription as your storage account. If you want to access a blob in an Azure Storage container, you can add the Storage Blob Data Contributor role (used to grant read/write/delete permissions to Blob storage resources) to the Logic App's system-assigned identity on the storage account, as sketched below.
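For example, a minimal Azure CLI sketch (the principal ID, subscription, and resource names are placeholders; the Logic App's system-assigned identity principal ID is shown on its Identity blade):
# Grant the Logic App's system-assigned identity data access to the storage account
az role assignment create --assignee-object-id <logicapp-principal-id> --assignee-principal-type ServicePrincipal --role "Storage Blob Data Contributor" --scope "/subscriptions/<sub-id>/resourceGroups/myRg/providers/Microsoft.Storage/storageAccounts/mystorageaccount"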
Update
From your screenshots, I found that you have not used the system-assigned managed identity in the Create blob action, but an API connection instead.
To validate connecting a public logic app to a storage account with the Allow trusted Microsoft services... setting enabled, design your logic app to use the managed identity in the trigger or action through the Azure portal. To specify the managed identity in a trigger or action's underlying JSON definition, see Managed identity authentication.
For more details, please read these steps in Authenticate access with managed identity.