Mount an individual file in Azure Container Instances?

I'm attempting to mount a single file in an Azure Container Instance, in this case the SSH host key file, as described for this Docker image: https://github.com/atmoz/sftp
However, from my experiments, Azure Container Instances via ARM / Azure CLI seem to support mounting only folders.
If I attempt to mount a file, I suspect it is actually mounted as a folder: the image's startup script fails to detect that the key already exists, and then errors when it tries to write to it.
Are there any undocumented features to mount individual files? I'm hoping to avoid customising the Docker image, as that would defeat my objective of using a ready-made image. :-(

You can mount files using Key Vault. If you deploy your ACI container group with an ARM template, you can integrate it with an Azure Key Vault instance and mount a Key Vault "secret" as a single file within a directory of your choosing. Refer to the ACI ARM template reference for more details.
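For reference, a secret volume in an ACI ARM template is declared at the container-group level and mounted into the container. A minimal sketch, going from the ARM template reference rather than a tested deployment; the volume name, group name, and the `pubKey` parameter are placeholders of mine:

```json
{
  "type": "Microsoft.ContainerInstance/containerGroups",
  "apiVersion": "2019-12-01",
  "name": "sftp-group",
  "location": "[resourceGroup().location]",
  "properties": {
    "osType": "Linux",
    "containers": [
      {
        "name": "sftp",
        "properties": {
          "image": "docker.io/atmoz/sftp:alpine-3.7",
          "resources": { "requests": { "cpu": 1, "memoryInGB": 0.5 } },
          "volumeMounts": [
            { "name": "host-key", "mountPath": "/home/foo/.ssh/keys" }
          ]
        }
      }
    ],
    "volumes": [
      {
        "name": "host-key",
        "secret": {
          "id_rsa.pub": "[base64(parameters('pubKey'))]"
        }
      }
    ]
  }
}
```

The `pubKey` parameter can then be sourced from Key Vault via a Key Vault reference in the parameter file, so the key content never appears in the template itself.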

You can do it via Azure Container Instances secrets.
Either with the Azure CLI:
az container create \
  --resource-group myResourceGroup \
  --name secret-volume-demo \
  --image mcr.microsoft.com/azuredocs/aci-helloworld \
  --secrets id_rsa.pub="<file-content>" \
  --secrets-mount-path /home/foo/.ssh/keys
or with terraform:
resource "azurerm_container_group" "aci_container" {
  name                = ""
  resource_group_name = ""
  location            = ""
  ip_address_type     = "public"
  dns_name_label      = "dns_endpoint"
  os_type             = "Linux"

  container {
    name   = "sftp"
    image  = "docker.io/atmoz/sftp:alpine-3.7"
    cpu    = "1"
    memory = "0.5"

    ports {
      port     = 22
      protocol = "TCP"
    }

    // option 1: mount key as Azure Container Instances secret volume
    volume {
      name       = "user-pub-key"
      mount_path = "/home/foo/.ssh/keys"
      secret = {
        "id_rsa.pub" = base64encode("<public-key-content>")
      }
    }

    // option 2: mount ssh public key as Azure File share volume
    // Note: this option works for user keys (authentication), but not for
    // host keys, since atmoz/sftp changes file permissions and Azure File
    // shares do not support that POSIX feature
    volume {
      name                 = "user-pub-key"
      mount_path           = "/home/foo/.ssh/keys"
      read_only            = true
      share_name           = "share-name"
      storage_account_name = "storage-account-name"
      storage_account_key  = "storage-account-key"
    }
  }
}
In both cases, you will have a file /home/foo/.ssh/keys/id_rsa.pub with the given content.

Related

How to store IP-address of VM created by terraform in a text file

I am creating a Windows VM using the vSphere provider from Terraform, and I am fetching the IP address of the created VM:
output "ipv4" {
  value = vsphere_virtual_machine.vm.guest_ip_addresses[0]
}
This prints the IP address on the command prompt, but I want to store it in a file and use it in another Python function later on.
Can someone help?
You can use terraform output:
terraform output -json ipv4 > myfile.json
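Since the goal is to consume the address from Python later, here is a minimal sketch of the consuming side, assuming the file was produced by the terraform output command above (with -json, a single string output is written as a quoted JSON string). The helper name is hypothetical:

```python
import json

def read_tf_output(path="myfile.json"):
    # Parse the file written by: terraform output -json ipv4 > myfile.json
    # For a single string output, the file holds a quoted JSON string,
    # e.g. "10.20.30.40", which json.load turns back into a Python str.
    with open(path) as f:
        return json.load(f)
```

Any other Python code can then take the returned string, e.g. `ip = read_tf_output()`.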

how to invoke gcloud commands using terraform

Since Terraform does not have Redis TLS support yet, I was planning to invoke a gcloud command to create a TLS-enabled Redis instance through Terraform. I am new to Terraform, so I was looking for some resource on the web but couldn't find much. Can someone please help with a working sample? Any gcloud command invocation would work.
Thanks
Terraform builds upon the underlying REST APIs and does not use the Cloud SDK (gcloud) directly. For this reason, it's not straightforward to invoke gcloud commands when no provider supports the resource.
I'm unfamiliar with Terraform, but I expect that invoking shell commands directly (e.g. gcloud redis instances create ...) is discouraged. And while it's likely also possible to call the REST APIs directly, you would then need to handle authentication yourself.
That said, the google-beta provider supports TLS via transit_encryption_mode = "SERVER_AUTHENTICATION":
terraform {
  required_providers {
    google-beta = {
      source  = "hashicorp/google-beta"
      version = "3.58.0"
    }
  }
}

variable "project" {}
variable "region" {}
variable "zone" {}
variable "key" {}
variable "instance" {}

provider "google-beta" {
  credentials = file(var.key)
  project     = var.project
  region      = var.region
  zone        = var.zone
}

resource "google_redis_instance" "cache" {
  provider                = google-beta
  name                    = var.instance
  tier                    = "BASIC"
  memory_size_gb          = 1
  location_id             = var.zone
  transit_encryption_mode = "SERVER_AUTHENTICATION"
}
Then, to confirm TLS is enabled:
gcloud redis instances describe ${INSTANCE} \
  --region=${REGION} \
  --project=${PROJECT} \
  --format="value(transitEncryptionMode)"
SERVER_AUTHENTICATION

How do you SSH into a Google Compute Engine VM instance with Python rather than the CLI?

I want to SSH into a GCE VM instance using the google-api-client. I am able to start an instance using google-api-client with the following code:
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
service = discovery.build('compute', 'v1', credentials=credentials)
project = 'my_project'
zone = 'us-west2-a'
instance = 'my_instance'
request = service.instances().start(project=project, zone=zone, instance=instance)
response = request.execute()
The command-line equivalent of the above code is:
gcloud compute instances start my_instance
Similarly, to SSH into a GCE VM instance from the command line, one writes:
gcloud init && gcloud compute ssh my_instance --project my_project --verbosity=debug --zone=us-west2-a
I've already got the SSH keys set up and all that.
I want to know how to write the above command line in Google Api Client or Python.
There is no official REST API method to connect to a Compute Engine instance with SSH. But assuming you have the SSH keys configured as per the documentation, in theory, you could use a third-party tool such as Paramiko. Take a look at this post for more details.
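To make that concrete, here is a rough, untested sketch combining the discovery client (to look up the instance's external IP) with Paramiko (a third-party library, pip install paramiko) for the SSH session itself. The helper names are mine, and it assumes your public key is already in the instance metadata:

```python
def get_external_ip(service, project, zone, instance):
    # Look up the instance's ephemeral external IP via the Compute API.
    data = service.instances().get(
        project=project, zone=zone, instance=instance).execute()
    return data['networkInterfaces'][0]['accessConfigs'][0]['natIP']

def run_remote(host, user, key_path, command):
    # Third-party dependency: pip install paramiko
    import paramiko
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(hostname=host, username=user, key_filename=key_path)
    _, stdout, _ = client.exec_command(command)
    output = stdout.read().decode()
    client.close()
    return output
```

For example, something like run_remote(get_external_ip(service, project, zone, instance), 'me', '/home/me/.ssh/google_compute_engine', 'hostname') would run a single command over SSH.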

I am working on a Databricks notebook running with ADLS Gen2 using a service principal ID but receive the following error after mounting my drive

I am working on a Databricks notebook running with ADLS Gen2 using a service principal ID, but I receive the following error after mounting my drive:
StatusCode=403
StatusDescription=This request is not authorized to perform this operation using this permission.
configs = {"dfs.adls.oauth2.access.token.provider.type": "ClientCredential",
           "dfs.adls.oauth2.client.id": "78jkj56-2ght-2345-3453-b497jhgj7587",
           "dfs.adls.oauth2.credential": dbutils.secrets.get(scope = "DBRScope", key = "AKVsecret"),
           "dfs.adls.oauth2.refresh.url": "https://login.microsoftonline.com/bdef8a20-aaac-4f80-b3a0-d9a32f99fd33/oauth2/token"}

dbutils.fs.mount(source = "adl://<accountname>.azuredatalakestore.net/tempfile",
                 mount_point = "/mnt/tempfile",
                 extra_configs = configs)

%fs ls mnt/tempfile
The URI for your lake is a Gen1 URI, not Gen2. Either way, your service principal does not have permission to access the lake. As a test, make it a resource owner; then remove it and work out which permissions are missing.
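If the lake is actually Gen2, the mount would use the abfss scheme with the ABFS OAuth configs rather than the dfs.adls.* ones. A hedged sketch with placeholders throughout; dbutils only exists inside a Databricks notebook, so this is not runnable elsewhere:

```python
# ADLS Gen2 mount sketch -- Databricks notebook only (dbutils is provided
# by the Databricks runtime). <application-id>, <tenant-id>, <container>,
# and <accountname> are placeholders.
configs = {
    "fs.azure.account.auth.type": "OAuth",
    "fs.azure.account.oauth.provider.type":
        "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
    "fs.azure.account.oauth2.client.id": "<application-id>",
    "fs.azure.account.oauth2.client.secret":
        dbutils.secrets.get(scope="DBRScope", key="AKVsecret"),
    "fs.azure.account.oauth2.client.endpoint":
        "https://login.microsoftonline.com/<tenant-id>/oauth2/token",
}

dbutils.fs.mount(
    source="abfss://<container>@<accountname>.dfs.core.windows.net/tempfile",
    mount_point="/mnt/tempfile",
    extra_configs=configs,
)
```

Note that the service principal also needs a data-plane role on the storage account (e.g. Storage Blob Data Contributor); a 403 like the one in the question is typical when only a management-plane role has been assigned.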

Stackdriver Node.js Logging not showing up

I have a Node.js application running inside a Docker container and logging events using Stackdriver. It runs Express.js and uses Winston for logging with a StackDriverTransport.
When I run this container locally, everything is logged correctly and shows up in the Cloud Console. When I run the same container, with the same environment variables, in a GCE VM, the logs don't show up.
What do you mean exactly by locally? Are you running the container in Cloud Shell, or on an instance? Keep in mind that if a container or instance has to do something that needs privileges (like using the Stackdriver Logging client library), it won't work unless the instance has a service account with the required role.
You mentioned that you use the same environment variables; I take it one of them points to your JSON key file. Is the key file present at that path on the instance?
From the Winston documentation, it looks like you need to specify the key file location for the service account:
const winston = require('winston');
const Stackdriver = require('@google-cloud/logging-winston');

winston.add(Stackdriver, {
  projectId: 'your-project-id',
  keyFilename: '/path/to/keyfile.json'
});
Have you checked that this is configured with the key of a service account that has a logging role?