How to store the IP address of a VM created by Terraform in a text file

I am creating a Windows VM using the vSphere provider in Terraform, and I am fetching the IP address of the created VM:
output "ipv4" {
value= vsphere_virtual_machine.vm.guest_ip_addresses[0]
}
This prints the IP address on the command prompt, but I want to store it in a file and use it in a Python function later on. Can someone help?

You can use terraform output:
terraform output -json ipv4 > myfile.json
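If you then want to use that IP address from Python, here is a minimal sketch of reading it back, either from the file written above or by calling terraform directly (myfile.json, the output name ipv4, and terraform being on PATH are just the assumptions carried over from this example):
import json
import subprocess

# Option A: read the file produced by `terraform output -json ipv4 > myfile.json`.
with open("myfile.json") as f:
    ip_address = json.load(f)  # for a string output this is simply "x.x.x.x"

# Option B: skip the intermediate file and ask Terraform directly
# (run this from the Terraform working directory).
ip_address = json.loads(subprocess.check_output(["terraform", "output", "-json", "ipv4"]))

print(ip_address)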

Related

How do you SSH into a Google Compute Engine VM instance with Python rather than the CLI?

I want to SSH into a GCE VM instance using the google-api-client. I am able to start an instance using google-api-client with the following code:
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
service = discovery.build('compute', 'v1', credentials=credentials)
project = 'my_project'
zone = 'us-west2-a'
instance = 'my_instance'
request = service.instances().start(project=project, zone=zone, instance=instance)
response = request.execute()
On the command line, the above code is equivalent to:
gcloud compute instances start my_instance
Similarly, to SSH into a GCE VM instance with the command line one writes:
gcloud init && gcloud compute ssh my_instance --project my_project --verbosity=debug --zone=us-west2-a
I've already got the SSH keys set up and all that.
I want to know how to write the above command using the Google API client or Python.
There is no official REST API method to connect to a Compute Engine instance with SSH. But assuming you have the SSH keys configured as per the documentation, in theory, you could use a third-party tool such as Paramiko. Take a look at this post for more details.
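For example, a rough Paramiko sketch; the IP address, username, and key path below are placeholders, and it assumes the instance accepts SSH from your network:
import paramiko

# Placeholder values: replace with your instance's external IP, your Linux
# username, and the private key gcloud generated for you.
host = "203.0.113.10"
user = "your_username"
key_file = "/home/you/.ssh/google_compute_engine"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # accept unknown host keys
client.connect(hostname=host, username=user, key_filename=key_file)

# Run a command on the VM, roughly what `gcloud compute ssh` lets you do interactively.
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())

client.close()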

Mount an individual file in Azure Container Instances?

I'm attempting to mount a single file in an Azure Container Instance, in this case the SSH host key file described by this Docker image: https://github.com/atmoz/sftp
However, from my experiments, Azure Container Instances via ARM / the Azure CLI seem to support mounting only folders.
If I attempt to mount it as a file, I suspect it actually mounts as a folder, as the image's built-in bash script appears to miss the fact that it already exists and then errors when it tries to write to it.
Are there any undocumented features to mount individual files? I'm hoping not to have to resort to customising the Docker image, as that would defeat my objective of using a ready-made image. :-(
You can mount files using Key Vault. If you are deploying your ACI container group using an ARM template, you can integrate it with an instance of Azure Key Vault. It is possible to mount a key vault "secret" as a single file within a directory of your choosing. Refer to the ACI ARM template reference for more details.
You can do it via Azure Container Instance secrets.
Either with the Azure CLI:
az container create \
--resource-group myResourceGroup \
--name secret-volume-demo \
--image mcr.microsoft.com/azuredocs/aci-helloworld \
--secrets id_rsa.pub="<file-content>" \
--secrets-mount-path /home/foo/.ssh/keys
or with terraform:
resource "azurerm_container_group" "aci_container" {
name = ""
resource_group_name = ""
location = ""
ip_address_type = "public"
dns_name_label = "dns_endpoint"
os_type = "Linux"
container {
name = "sftp"
image = "docker.io/atmoz/sftp:alpine-3.7"
cpu = "1"
memory = "0.5"
ports {
port = 22
protocol = "TCP"
}
// option 1: mount key as Azure Container Instances secret volume
volume {
name = "user-pub-key"
mount_path = "/home/foo/.ssh/keys"
secret = {
"id_rsa.pub" = base64encode("<public-key-content>")
}
}
// option 2: mount ssh public key as Azure File share volume
// Note: This option will work for user keys to auth, but not for the host keys
// since atmoz/sftp logic is to change files permission,
// but Azure File share does not support this POSIX feature
volume {
name = "user-pub-key"
mount_path = "/home/foo/.ssh/keys"
read_only = true
share_name = "share-name"
storage_account_name = "storage-account-name"
storage_account_key = "storage-account-key"
}
}
In both cases, you will have a file /home/foo/.ssh/keys/id_rsa.pub with the given content.

Can someone please tell me how to define a check_disk service with check_nrpe in icinga 2?

I'm trying to check the disk status of a client Ubuntu 16.04 instance from an Icinga 2 master server. Here I tried to use the NRPE plugin to check the disk status. I ran into trouble when defining the service in the service.conf file. Please, can someone tell me which files should be changed when using NRPE? I'm new to Icinga and NRPE.
I was able to find the solution to my problem. I'm putting it here because it may help someone else.
Here I use the check_load example to explain.
First of all, you need to create a .conf file (name: 192.168.30.40-host.conf) for the client server that you are going to monitor using Icinga 2. It should be placed in the /etc/icinga2/conf.d/ folder.
/etc/icinga2/conf.d/192.168.30.40-host.conf
object Host "host1" {
import "generic-host"
display_name = "host1"
address = "192.168.30.40"
}
Then you should create a service file for your client.
/etc/icinga2/conf.d/192.168.30.40-service.conf
object Service "LOAD AVERAGE" {
import "generic-service"
host_name = "host1"
check_command = "nrpe"
vars.nrpe_command = "check_load"
}
This is the important part of the problem. You should add this line to your nrpe.cfg file on the Nagios server:
/etc/nagios/nrpe.cfg file
command[check_load]=/usr/lib64/nagios/plugins/check_load -w 15,10,5 -c 20,15,10
Finally, make sure to restart the Icinga 2 and Nagios services after making any change.
You could also use an icinga2 agent instead of nrpe. The agent will be able to receive its configuration from a master or satellite, and perform local checks on the server.

How do I connect to Neptune using Java?

I have the following code based on the docs...
import org.apache.tinkerpop.gremlin.driver.Cluster;
import org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversal;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.util.empty.EmptyGraph;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
@RequestMapping("neptune")
public class NeptuneEndpoint {

    @GetMapping("")
    @ResponseBody
    public String test() {
        Cluster.Builder builder = Cluster.build();
        builder.addContactPoint("...endpoint...");
        builder.port(8182);
        Cluster cluster = builder.create();

        GraphTraversalSource g = EmptyGraph.instance()
                .traversal()
                .withRemote(DriverRemoteConnection.using(cluster));

        GraphTraversal t = g.V().limit(2).valueMap();
        t.forEachRemaining(e -> System.out.println(e));

        cluster.close();
        return "Neptune Up";
    }
}
But when I try to run I get ...
java.util.concurrent.TimeoutException: Timed out while waiting for an available host - check the client configuration and connectivity to the server if this message persists
Also, how would I add the secret key from an AWS IAM account?
Neptune doesn't allow you to connect to the DB instance from your local machine. You can only connect to Neptune from an EC2 instance inside the same VPC as Neptune (AWS documentation).
Try making a runnable JAR of this code and run it inside an EC2 instance; the code should work fine. If you're trying to debug something from your local system, use SSH tunnelling (e.g. with PuTTY) through the EC2 instance, which will then forward traffic to the Neptune cluster.
Have you created an instance with IAM auth enabled?
If yes, you will have to sign your request using SigV4. More information (and examples) on how to connect using SigV4 is available at https://docs.aws.amazon.com/neptune/latest/userguide/iam-auth-connecting-gremlin-java.html
The examples given in the documentation above also contain information on how to use your IAM credentials to connect to a Neptune cluster.
I just had the same issue, and the root cause was a dependency version conflict with Netty, which is unfortunately a very pervasive dependency. Gremlin 3.3.2 uses io.netty/netty-all version 4.0.56.Final. Your project might depend on another Netty jar such as io.netty/netty or io.netty/netty-handler, both of which can cause issues, so you will need to exclude them from other dependencies in your POM or use managed dependencies to set a project-level Netty version.
Another option is to use an AWS SigV4 signing proxy that acts as a bridge between Neptune and your local development environment. One such proxy is https://github.com/monken/aws4-proxy
npm install --global aws4-proxy
# have your credentials exported as AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
aws4-proxy --service neptune-db --endpoint cluster-die4eenu.cluster-eede5pho.eu-west-1.neptune.amazonaws.com --region eu-west-1
wscat localhost:3000/gremlin
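Once the proxy is running, any Gremlin client can be pointed at it instead of wscat; for instance, a hypothetical gremlinpython sketch against the local proxy:
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.structure.graph import Graph

# The aws4-proxy started above listens on localhost:3000 and signs each request
# with your AWS credentials before forwarding it to Neptune.
conn = DriverRemoteConnection("ws://localhost:3000/gremlin", "g")
g = Graph().traversal().withRemote(conn)

print(g.V().limit(2).valueMap().toList())  # same traversal as the Java snippet
conn.close()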
Refer to this.
Note: you need to be in the same VPC to access the Neptune cluster.

Configuring a Google Cloud bucket as the Airflow log folder

We just started using Apache Airflow in our project for our data pipelines. While exploring the features, we came to know about configuring a remote folder as the log destination in Airflow. For that we:
Created a Google Cloud Storage bucket.
From the Airflow UI, created a new GS connection.
I am not able to understand all the fields. I just created a sample GS bucket under my project from the Google console and gave that project ID to this connection, leaving the key file path and scopes blank.
Then I edited the airflow.cfg file as follows:
remote_base_log_folder = gs://my_test_bucket/
remote_log_conn_id = test_gs
After these changes I restarted the web server and the scheduler, but my DAGs are still not writing logs to the GS bucket. I can see the logs being created in base_log_folder, but nothing is created in my bucket.
Is there any extra configuration needed on my side to get this working?
Note: Using Airflow 1.8. (I faced the same issue with Amazon S3 as well.)
Updated on 20/09/2017: I tried the GS method (screenshot attached), but I am still not getting logs in the bucket.
Thanks
Anoop R
I advise you to use a DAG to connect Airflow to GCP instead of the UI.
First, create a service account on GCP and download the JSON key.
Then execute this DAG (you can modify the scope of your access):
import json

from airflow import DAG, settings
from airflow.models import Connection
from airflow.operators.python_operator import PythonOperator
from datetime import datetime


def add_gcp_connection(ds, **kwargs):
    """Add an Airflow connection for GCP"""
    new_conn = Connection(
        conn_id='gcp_connection_id',
        conn_type='google_cloud_platform',
    )
    scopes = [
        "https://www.googleapis.com/auth/pubsub",
        "https://www.googleapis.com/auth/datastore",
        "https://www.googleapis.com/auth/bigquery",
        "https://www.googleapis.com/auth/devstorage.read_write",
        "https://www.googleapis.com/auth/logging.write",
        "https://www.googleapis.com/auth/cloud-platform",
    ]
    conn_extra = {
        "extra__google_cloud_platform__scope": ",".join(scopes),
        "extra__google_cloud_platform__project": "<name_of_your_project>",
        "extra__google_cloud_platform__key_path": '<path_to_your_json_key>'
    }
    conn_extra_json = json.dumps(conn_extra)
    new_conn.set_extra(conn_extra_json)

    session = settings.Session()
    if not (session.query(Connection)
            .filter(Connection.conn_id == new_conn.conn_id)
            .first()):
        session.add(new_conn)
        session.commit()
    else:
        msg = '\n\tA connection with `conn_id`={conn_id} already exists\n'
        msg = msg.format(conn_id=new_conn.conn_id)
        print(msg)


dag = DAG('add_gcp_connection', start_date=datetime(2016, 1, 1), schedule_interval='@once')

# Task to add a connection
AddGCPCreds = PythonOperator(
    dag=dag,
    task_id='add_gcp_connection_python',
    python_callable=add_gcp_connection,
    provide_context=True)
Thanks to Yu Ishikawa for this code.
Yes, you need to provide additional information for both the S3 and the GCP connection.
S3
Configuration is passed via the extra field as JSON. You can provide only a profile:
{"profile": "xxx"}
or credentials:
{"profile": "xxx", "aws_access_key_id": "xxx", "aws_secret_access_key": "xxx"}
or a path to a config file:
{"profile": "xxx", "s3_config_file": "xxx", "s3_config_format": "xxx"}
In the case of the first option, boto will try to detect your credentials itself.
Source code - airflow/hooks/S3_hook.py:107
GCP
You can either provide key_path and scope (see Service account credentials) or credentials will be extracted from your environment in this order:
Environment variable GOOGLE_APPLICATION_CREDENTIALS pointing to a file with stored credentials information.
Stored "well known" file associated with gcloud command line tool.
Google App Engine (production and testing)
Google Compute Engine production environment.
Source code - airflow/contrib/hooks/gcp_api_base_hook.py:68
The reason logs are not being written to your bucket could be related to the service account rather than the Airflow configuration itself. Make sure it has access to the bucket in question; I had the same problem in the past.
Try adding more generous permissions to the service account, e.g. even project-wide Editor, and then narrow them down. You could also try using the GCS client with that key and see if you can write to the bucket.
For me personally this scope works fine for writing logs: "https://www.googleapis.com/auth/cloud-platform"
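As a minimal sketch of that write check with the google-cloud-storage client (the key path and bucket name below are placeholders):
from google.cloud import storage

# Placeholders: point these at your service account key file and the bucket
# configured as remote_base_log_folder.
client = storage.Client.from_service_account_json("/path/to/key.json")
bucket = client.bucket("my_test_bucket")

# If this upload succeeds, the service account has enough access to write logs.
blob = bucket.blob("airflow-logs/connectivity-check.txt")
blob.upload_from_string("airflow log bucket write test")
print("wrote", blob.name)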