Is there a way to get the Location value of a zone/region via the gcloud cli? I'm looking for the values listed here: https://cloud.google.com/compute/docs/regions-zones
I'm hoping to use these values to mark locations on a geochart.
Thanks!
You can get a zone's location via the gcloud CLI using the following command: gcloud compute zones list.
You can check here for the other parameters that you can use.
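For reference, a typical invocation looks like this; note that the default table shows name/region/status style columns, and there is no human-readable Location field in it (the region filter is just an example):
# List the zones of one region; note the absence of a Location column.
gcloud compute zones list --filter="region:us-central1"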
I'm maintaining my own list of locations by referring to: https://cloud.google.com/compute/docs/regions-zones
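If you end up doing the same, a minimal sketch of such a hand-maintained lookup as a shell function (entries abbreviated; the location strings are copied from the regions-zones page):
# Map a region name to the Location value published on the regions-zones page.
region_location() {
  case "$1" in
    us-central1)  echo "Council Bluffs, Iowa, USA" ;;
    europe-west2) echo "London, UK" ;;
    asia-east1)   echo "Changhua County, Taiwan" ;;
    *)            echo "unknown" ;;  # extend with the remaining regions
  esac
}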
I have an EKS cluster running purely on Fargate and I'm trying to set up logging to CloudWatch.
I have a lot of [OUTPUT] sections that could be unified using some variables. I'd like to unify the logs of each deployment into a single log_stream and separate the log_streams by environment (namespace). Using a couple of variables, I'd need to write just a single [OUTPUT] section.
From what I understand, the new Fluent Bit plugin cloudwatch_logs doesn't support templating, but the old plugin cloudwatch does.
I've tried to set up a section like in the documentation example:
[OUTPUT]
    Name cloudwatch
    Match *container_name*
    region us-east-1
    log_group_name /eks/$(kubernetes['namespace_name'])
    log_stream_name test_stream
    auto_create_group on
This generates a log_group called fluentbit-default, which according to the README.md is the fallback name used when the variables cannot be parsed.
The old cloudwatch plugin is supported (though not mentioned in the AWS documentation): if I replace the variable $(kubernetes['namespace_name']) with any plain string, it works perfectly.
Fluent Bit on Fargate manages the INPUT section automatically, so I don't really know which variables are sent to the OUTPUT section; I suppose the kubernetes variable is not there, or it has a different name or a different structure.
So my questions are:
Is there a way to get the list of variables (or inputs) that Fargate + Fluent Bit are generating?
Can I solve this in a different way? (I don't want to write more than 30 different OUTPUT sections, one for each service/log_stream_name; it would also be difficult to maintain.)
Thanks!
After a few days of tests, I've realised that you need to enable the kubernetes filter for the kubernetes variables to reach the cloudwatch plugin.
This is the result; now I can generate the log_group depending on the environment label and the log_stream depending on the container and namespace names.
filters.conf: |
    [FILTER]
        Name kubernetes
        Match *
        Merge_Log Off
        Buffer_Size 0
        Kube_Meta_Cache_TTL 300s
output.conf: |
    [OUTPUT]
        Name cloudwatch
        Match *
        region eu-west-2
        log_group_name /aws/eks/cluster/$(kubernetes['labels']['app.environment'])
        log_stream_name $(kubernetes['namespace_name'])-$(kubernetes['container_name'])
        default_log_group_name /aws/eks/cluster/others
        auto_create_group true
        log_key log
Please note that app.environment is not a "standard" label; I've added it to all my deployments. The default_log_group_name is necessary in case that value is not present.
Please note also that if you use log_retention_days and new_log_group_tags, the system is not going to work. To be honest, log_retention_days never worked for me even with the new cloudwatch_logs plugin.
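For completeness, a sketch of how these snippets get loaded, assuming the standard EKS Fargate logging setup in which Fluent Bit reads its configuration from a ConfigMap named aws-logging in the aws-observability namespace (adjust names and file paths to your cluster):
# The filenames become the ConfigMap keys (filters.conf, output.conf).
kubectl create configmap aws-logging \
    --namespace aws-observability \
    --from-file=filters.conf \
    --from-file=output.conf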
I am using Ansible Tower 3.4.3.
As part of one of my jobs, I need to generate a log file, and the log file name should contain the Tower job ID so I can easily recognize which log was generated by which Tower job.
I guess there will be some global variable like "ansible_tower_job_id", but I am unable to find any documentation or the variable name.
Can someone help with how to capture the current running job ID in Ansible Tower?
The callback link contains the ID in it.
From the docs: "The ‘1’ in this sample URL is the job template ID in Tower."
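As a hedged aside, Tower's documentation also describes extra variables (e.g. tower_job_id) that are injected into each job run; if that holds for your version, a shell task could template the ID straight into the log file name. A sketch of such a task's command line (path and message are illustrative):
# {{ tower_job_id }} is expanded by Ansible before the shell command runs.
echo "generated by Tower job {{ tower_job_id }}" > "/var/log/myjob_{{ tower_job_id }}.log"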
I'm trying to list info about some instances using cloud sdk, but for some reason the createTime field is not returned. Any idea why?
$ gcloud compute instances list --format="table(name,createTime)" --filter="name:florin*"
NAME CREATE_TIME
florin-ubuntu-18-playground
This should work according to https://cloud.google.com/sdk/gcloud/reference/topic/filters
The command gcloud compute instances list does not show the instance creation time by default, and the field is named creationTimestamp rather than createTime. The gcloud --format flag can add it to the displayed output:
gcloud compute instances list --format="table(name,creationTimestamp)"
It is also possible to retrieve the instance creation time with the gcloud compute instances describe command:
gcloud compute instances describe yourInstance --zone=yourInstanceZone | grep creationTimestamp
Or:
gcloud compute instances describe yourInstance --zone=yourInstanceZone --format="value(creationTimestamp)"
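Combining the corrected field name with the filter from the question should then print both columns:
# List matching instances with their creation timestamps.
gcloud compute instances list --filter="name:florin*" --format="table(name,creationTimestamp)"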
In our infrastructure, we set multiple grains on the minions, including an "environment" and an "app" grain.
When we use the CLI, we can get the correct minions using:
salt -C "G#app:middle_tier_1 and G#environment:dev" test.ping
But if we try to use the CherryPy API, we only get a result if we set a single target, like:
[{"client":"local","tgt_type":"grain","fun":"test.ping","tgt":"G#app:middle_tier_1"}]
or
[{"client":"local","tgt_type":"grain","fun":"test.ping","tgt":"G#environment:dev"}]
With multiple targets, we don't get any results:
[{"client":"local","tgt_type":"grain","fun":"test.ping","tgt":"G#app:middle_tier_1 and G#environment:dev"}]
[{"client":"local","tgt_type":"grain","fun":"test.ping","tgt":["G#app:middle_tier_1","G#environment:dev"]}]
According to the documentation, I can use a list in the tgt parameter.
I have looked through their documentation fairly extensively and have found no examples of this type of minion targeting.
Is this even possible, and if so, how would I go about doing it?
Extra info:
salt-master 2018.3.2 (Oxygen)
salt-api 2018.3.2 (Oxygen)
Thanks in advance!
If you want to match on multiple grains, the tgt_type is compound, not grain.
See: https://docs.saltstack.com/en/latest/ref/clients/#salt-s-client-interfaces, https://docs.saltstack.com/en/latest/topics/targeting/compound.html
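Applied to the request from the question, the corrected lowstate would look like this (note the compound matcher's G@ prefix):
[{"client":"local","tgt_type":"compound","fun":"test.ping","tgt":"G@app:middle_tier_1 and G@environment:dev"}]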
I know committer_email, author_name, and a load of other variables are part of the notification event. Is it possible to get access to them in earlier steps, like before_script or after_script?
I would like to get access to this information and add it directly to my test results. Having build information, test result information, and GitHub repo information in the same file would be great.
You can extract the committer e-mail, author name, etc. into environment variables using git log with --pretty, e.g.
export COMMITTER_EMAIL="$(git log -1 $TRAVIS_COMMIT --pretty="%cE")"
export AUTHOR_NAME="$(git log -1 $TRAVIS_COMMIT --pretty="%aN")"
On Travis, one would put this in the before_install or before_script stage.
The TRAVIS_COMMIT environment variable is provided by default.
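From there, stamping the metadata into your results is just another line in the same stage; a minimal sketch, assuming an illustrative test-results.txt (TRAVIS_BUILD_NUMBER is another variable Travis provides by default):
# Append build, commit, and author metadata to the test results file.
echo "build=$TRAVIS_BUILD_NUMBER commit=$TRAVIS_COMMIT author=$AUTHOR_NAME email=$COMMITTER_EMAIL" >> test-results.txt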