Terraform: provide a list of available images in your cloud - terraform-provider-openstack

Is there any command that can provide you with a list of images available in your infrastructure?
Something like: terraform show providers images ?

How do I add GCS credentials to tensorflow?

I'm trying to train a model on Kaggle and dump TensorBoard logs into a GCS bucket. I'm hesitant to allow anonymous read/write on my project and would like to be able to have TensorFlow use a custom service account with limited quotas for all GCP / gfile.GFile operations. Is there any way to provide TensorFlow with a service account JSON to use?
Is my best bet just security by obscurity?
I am not experienced with Kaggle and I do not really understand which limits you want to apply to the service account, but you can follow the next steps to set up service account access to Google Cloud Storage while using TensorFlow:
Follow this guide to implement GCS custom FileSystem in Tensorflow.
Check the Python client library to instantiate the client.
The service account permissions required for storage are listed here.
To grant roles to a service account, follow this guide.
Check the snippet in Federico's post here, based on this documentation, to implement the service account in your Python code.
Snippet:
from google.oauth2 import service_account
SERVICE_ACCOUNT_FILE = 'service.json'
credentials = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE)
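To make the storage client actually use those credentials, you can pass them in explicitly when instantiating it. A minimal sketch with the google-cloud-storage client (the project ID, bucket name, and object path below are placeholders):
from google.cloud import storage
from google.oauth2 import service_account

SERVICE_ACCOUNT_FILE = 'service.json'
credentials = service_account.Credentials.from_service_account_file(
    SERVICE_ACCOUNT_FILE)

# Placeholder project ID and bucket name.
client = storage.Client(project='my-project', credentials=credentials)
bucket = client.bucket('my-bucket')
blob = bucket.blob('logs/hello.txt')
blob.upload_from_string('hello from the service account\n')
print(blob.download_as_bytes())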
If you have service account credentials in a JSON file, you can point the GOOGLE_APPLICATION_CREDENTIALS environment variable at it so that TensorFlow is able to read/write to GCS via gs:// URLs.
You can test it out by running the following in bash (it downloads a smoke-test script from TensorFlow's repository and runs it against your bucket URL with your credentials):
wget https://raw.githubusercontent.com/tensorflow/tensorflow/master/tensorflow/tools/gcs_test/python/gcs_smoke.py
GOOGLE_APPLICATION_CREDENTIALS=my_credentials.json python gcs_smoke.py --gcs_bucket_url=gs://my_bucket/test_tf
This should create some dummy records in GCS and read from them. After this, you'd want to clean up the remaining temporary outputs to avoid further charges:
gsutil rm -r gs://my_bucket/test_tf
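Beyond the smoke test, the same environment variable is all TensorFlow needs from your own Python code. A rough sketch (the credentials file and bucket path are placeholders, and older TensorFlow versions use tf.gfile.GFile instead of tf.io.gfile.GFile):
import os
import tensorflow as tf

# Point TensorFlow's GCS filesystem at the service account key before the
# first gs:// access; the key file and bucket below are placeholders.
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'my_credentials.json'

log_path = 'gs://my_bucket/test_tf/hello.txt'

with tf.io.gfile.GFile(log_path, 'w') as f:
    f.write('hello from tensorflow\n')

with tf.io.gfile.GFile(log_path, 'r') as f:
    print(f.read())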

Where and how to store files uploaded by a user using a REST API?

Currently I'm using shared storage (Azure File Storage) to store profile pictures, company logos, and also some custom Python scripts uploaded by admins. My REST services are running in a Docker Swarm cluster where all the nodes have access to the shared location. Are there any drawbacks to this kind of design? I'm currently saving the files to that location, creating a URL for each file, and serving it as a static resource using my nginx reverse proxy/load balancer. So I was curious to know whether there are any drawbacks to this design and how I can make it better.
There are several ways to access, store, and manipulate files in Azure File Storage using the REST API:
The Azure File service offers the following four resources: the storage account, shares, directories, and files. Shares provide a way to organize sets of files and also can be mounted as an SMB file share that is hosted in the cloud.
More info here
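If you end up driving the File service from code rather than hand-rolling the REST calls, a minimal upload sketch with the azure-storage-file-share Python package could look like the following (the connection string, share name, and file paths are placeholders):
from azure.storage.fileshare import ShareFileClient

# Placeholder connection string, share name, and file paths.
conn_str = 'DefaultEndpointsProtocol=https;AccountName=myaccount;AccountKey=mykey;EndpointSuffix=core.windows.net'
file_client = ShareFileClient.from_connection_string(
    conn_str, share_name='profile-pictures', file_path='users/123/avatar.png')

# Upload a local file into the share at the given path.
with open('avatar.png', 'rb') as source:
    file_client.upload_file(source)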
When it comes to the design, it will depend on what kind of concerns your customers may have: slow connectivity, whether they are going to need these files permanently, etc.

Terraform Shared State

Terraform 0.9.5.
I am in the process of putting together a group of modules that our infrastructure team and automation team will use to create resources in a standard fashion and in turn create stacks to provision different envs. All working well.
Like all teams using Terraform, shared state becomes a concern. I have configured Terraform to use an S3 backend that is versioned and encrypted, and added a lock via a DynamoDB table. Perfect. All works with local accounts... Okay, the problem...
We have multiple AWS accounts: one for IAM, one for billing, one for production, one for non-production, one for shared services, etc... you get where I am going. My problem is as follows.
I authenticate as a user in our IAM account and assume the required role. This has been working like a dream until I introduced the Terraform backend configuration to utilise S3 for shared state. It looks like the backend config within Terraform requires default credentials to be set within ~/.aws/credentials. It also looks like these have to be for a user that is local to the account where the S3 bucket was created.
Is there a way to get the backend configuration set up in such a way that it will use the creds and role configured within the provider? Is there a better way to configure shared state and locking? Any suggestions welcome :)
Update: Got this working. I created a new user within the account where the S3 bucket is created. Created a policy to allow that new user just s3:DeleteObject, GetObject, PutObject, ListBucket and dynamodb:* on the specific S3 bucket and DynamoDB table. Created a custom credentials file and added a default profile with the access and secret keys assigned to that new user. Used a backend config similar to:
terraform {
  required_version = ">= 0.9.5"

  backend "s3" {
    bucket                  = "remote_state"
    key                     = "/NAME_OF_STACK/terraform.tfstate"
    region                  = "us-east-1"
    encrypt                 = "true"
    shared_credentials_file = "PATH_TO_CUSTOM_CREDENTIALS_FILE"
    lock_table              = "MY_LOCK_TABLE"
  }
}
It works but there is an initial configuration that needs to happen within your profile to get it working. If anybody knows of a better setup or can identify problems with my backend config please let me know.
Terraform expects backend configuration to be static and does not allow it to include interpolated variables, as is possible elsewhere in the config, because the backend must be initialized before any other work can be done.
Due to this, applying the same config multiple times using different AWS accounts can be tricky, but is possible in one of two ways.
The lowest-friction way is to create a single S3 bucket and DynamoDB table dedicated to state storage across all environments, and use S3 permissions and/or IAM policies to impose granular access controls.
Organizations adopting this strategy will sometimes create the S3 bucket in a separate "administrative" AWS account, and then grant restrictive access to the individual state objects in the bucket to the specific roles that will run Terraform in each of the other accounts.
This solution has the advantage that once it has been set up correctly in S3, Terraform can be used routinely without any unusual workflow: configure the single S3 bucket in the backend, and provide appropriate credentials via environment variables to allow them to vary. Once the backend is initialized, use workspaces (known as "state environments" prior to Terraform 0.10) to create a separate state for each of the target environments of a single configuration.
The disadvantage is the need to manage a more-complicated access configuration around S3, rather than simply relying on coarse access control with whole AWS accounts. It is also more challenging with DynamoDB in the mix, since the access controls on DynamoDB are not as flexible.
There is a more complete description of this option in the Terraform S3 backend documentation, under Multi-account AWS Architecture.
If a complex S3 configuration is undesirable, the complexity can instead be shifted into the Terraform workflow by using partial configuration. In this mode, only a subset of the backend settings are provided in config and additional settings are provided on the command line when running terraform init.
This allows options to vary between runs, but since it requires extra arguments to be provided most organizations adopting this approach will use a wrapper script to configure Terraform appropriately based on local conventions. This can be just a simple shell script that runs terraform init with suitable arguments.
This then allows, for example, the custom credentials file to vary by providing it on the command line. In this case, state environments are not used, and instead switching between environments requires re-initializing the working directory against a new backend configuration.
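As a sketch of such a wrapper, here is a rough Python equivalent of the shell script mentioned above (the environment names, bucket names, and credentials paths are made up for illustration); it simply forwards per-environment settings to terraform init as -backend-config key/value pairs:
import subprocess
import sys

# Hypothetical per-environment backend settings; adjust to local conventions.
ENVIRONMENTS = {
    'prod': {
        'bucket': 'remote_state_prod',
        'shared_credentials_file': '/home/me/.aws/prod_credentials',
    },
    'nonprod': {
        'bucket': 'remote_state_nonprod',
        'shared_credentials_file': '/home/me/.aws/nonprod_credentials',
    },
}

env = sys.argv[1]  # e.g. python init_backend.py prod
args = ['terraform', 'init']
for key, value in ENVIRONMENTS[env].items():
    args.append('-backend-config={}={}'.format(key, value))

# Runs `terraform init -backend-config=bucket=... -backend-config=...`
subprocess.check_call(args)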
The advantage of this solution is that it does not impose any particular restrictions on the use of S3 and DynamoDB, as long as the differences can be represented as CLI options.
The disadvantage is the need for unusual workflow or wrapper scripts to configure Terraform.

Syncing buckets between two S3 Storage Providers

I am currently using Riak CS as an S3 provider but I want to change to Scality S3. Therefore, I need to migrate the existing data from Riak to Scality. Is there a quick and easy way of syncing buckets between the two different storage providers? I have got two Docker containers running, containing the Docker images for the two.
One way of doing it would be to simply download the contents of the buckets to a local folder and then upload to Scality using s3cmd or a similar tool. However, I was hoping there was a direct route between the buckets.
Any ideas?
There would not be a "direct route between the buckets".
While the Amazon S3 CopyObject command can copy objects between different Amazon S3 buckets (even if they are in different regions), it will not work with a non-Amazon endpoint.
Your only hope is if Riak/Scality have some built-in connectivity with each other.
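If downloading everything to a local folder first is impractical, one workaround is a small script that streams each object through the machine running it, talking to both S3-compatible endpoints. A rough sketch with boto3 (the endpoint URLs, credentials, and bucket names are placeholders; each object is buffered in memory, so very large objects would need multipart handling):
import boto3

# Placeholder endpoints and credentials for the two S3-compatible providers.
src = boto3.resource(
    's3', endpoint_url='http://riak-cs:8080',
    aws_access_key_id='SRC_KEY', aws_secret_access_key='SRC_SECRET')
dst = boto3.resource(
    's3', endpoint_url='http://scality:8000',
    aws_access_key_id='DST_KEY', aws_secret_access_key='DST_SECRET')

src_bucket = src.Bucket('my-bucket')
dst_bucket = dst.Bucket('my-bucket')

for obj in src_bucket.objects.all():
    body = obj.get()['Body'].read()                 # download from the source
    dst_bucket.put_object(Key=obj.key, Body=body)   # upload to the destination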

Permissions to create Entities in Google Datastore via Cloud Console

I'm managing a project running in Google Cloud and have a team working on it. The team members are organized in a Google group that has permission to edit the project. Each team member can start instances, create Container Engine clusters, etc., but it's not possible to create Datastore entities.
When I add a team member directly as an editor to the project (not via the Google group), he is able to create Datastore entities. But I like managing members via the Google group, because I can give selected team members the permission to add team members without giving them the owner role of the project.
Is there anything I missed? Or is it just not possible to give project editors added via a Google group the permission to create entities in Datastore?