How to get a list of internal IP addresses of GCE instances - awk

I have a bunch of instances running in GCE. I want to programmatically get a list of the internal IP addresses of them without logging into the instances (locally).
I know I can run:
gcloud compute instances list
But are there any flags I can pass to just get the information I want?
e.g.
gcloud compute instances list --internal-ips
or similar? Or am I going to have to dust off my sed/awk brain and parse the output?
I also know that I can get the output in JSON using --format=json, but I'm trying to do this in a bash script.

The simplest way to programmatically get a list of internal IPs (or external IPs) without a dependency on any tools other than gcloud is:
$ gcloud --format="value(networkInterfaces[0].networkIP)" compute instances list
$ gcloud --format="value(networkInterfaces[0].accessConfigs[0].natIP)" compute instances list
This uses --format=value, which also requires a projection: a list of resource keys that select resource data values. For any command you can use --format=flattened to get the list of resource key/value pairs:
$ gcloud --format=flattened compute instances list
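A projection can also combine several keys; for example, a sketch along the same lines (assuming the default tab separator between values is acceptable) that pairs each instance name with its internal IP:
$ gcloud --format="value(name, networkInterfaces[0].networkIP)" compute instances list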

A few things here.
First, gcloud's default output format for listing is not guaranteed to be stable, and new columns may be added in the future. Don't script against this!
The three output modes that are accessible with the format flag, --format=json, --format=yaml, and --format=text, are based on key/value pairs and can be scripted against even if new fields are introduced in the future.
Two good ways to do what you want are to use JSON and the jq tool,
gcloud compute instances list --format=json \
| jq '.[].networkInterfaces[].networkIP'
or text format and grep plus other line-oriented tools,
gcloud compute instances list --format=text \
| grep '^networkInterfaces\[[0-9]\+\]\.networkIP:' | sed 's/^.* //g'
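If you also want the instance name next to each address, a small variation on the jq approach (a sketch, using the same field names that appear in the JSON output) could look like:
gcloud compute instances list --format=json \
| jq -r '.[] | "\(.name) \(.networkInterfaces[0].networkIP)"'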

I hunted around and couldn't find a straight answer, probably because efficient tools weren't available when others replied to the original question. GCP constantly updates its libraries and APIs, and we can use filters and projections to extract targeted attributes.
Here I outline how to reserve an external static IP, see how its attributes are named and organised, and then export the external IP address so that I can use it in other scripts (e.g. assign it to a VM instance, or authorise this network (IP address) on a Cloud SQL instance).
Reserve a static IP in a region of your choice
gcloud compute --project=[PROJECT] addresses create [NAME] --region=[REGION]
[Informational] View the details of the regional static IP that was reserved
gcloud compute addresses describe [NAME] --region [REGION] --format=flattened
[Informational] Print just the address attribute of the static IP
gcloud compute addresses describe [NAME] --region [REGION] --format='value(address)'
Extract the desired value (e.g. external IP address) as a parameter
export STATIC_IP=$(gcloud compute addresses describe [NAME] --region [REGION] --format='value(address)')
Use the exported parameter in other scripts
echo $STATIC_IP
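For example (a sketch only; the Cloud SQL instance name is a placeholder, and note that --authorized-networks replaces the existing list), the exported address could be authorised on a Cloud SQL instance roughly like this:
gcloud sql instances patch [SQL_INSTANCE] --authorized-networks="${STATIC_IP}/32"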

The most convenient approach is to have a ready-made gcloud command that you can use as and when needed.
This can be achieved using table() format option with gcloud as per below:
gcloud compute instances list --format='table(id,name,status,zone,networkInterfaces[0].networkIP:label=Internal_IP,networkInterfaces[0].accessConfigs[0].natIP:label=External_IP)'
What does it do for you?
Gets you the data in a clean format
Gives you the option to add or remove columns
Need additional columns? How do you find the column names even before you run the above command?
Execute the following, which will give you the data in raw JSON format, consisting of each value and its key name; copy those key names and add them to your table() list. :-)
gcloud compute instances list --format=json
Bonus: you can tweak pretty much the same syntax to fetch data for any GCP resource, whether with gcloud, kubectl, etc.
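The table() projection also combines naturally with --filter if you only want a subset of instances; for example, a sketch reusing the same label idea as above:
gcloud compute instances list --filter='status=RUNNING' --format='table(name,networkInterfaces[0].networkIP:label=Internal_IP)'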

As far as I know you can't filter on specific fields in the gcloud tool.
Something like this will work for a Bash script, but it still feels a bit brittle:
gcloud compute instances list --format=yaml | grep " networkIP:" | cut -c 14-100

I agree with @Christiaan. Currently there is no automated way to get the internal IPs using the gcloud command.
You can use the following command to print the internal IPs (4th column):
gcloud compute instances list | tail -n+2 | awk '{print $4}'
or the following one if you want the pair <instance_name> <internal_ip> (1st and 4th columns):
gcloud compute instances list | tail -n+2 | awk '{print $1, $4}'
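If you want to avoid hard-coding the column position, a rough sketch (assuming the default table output still carries an INTERNAL_IP header) that looks the column up from the header row:
gcloud compute instances list | awk 'NR==1 {for (i=1; i<=NF; i++) if ($i == "INTERNAL_IP") c = i; next} {print $1, $c}'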
I hope it helps.

Related

Get value by position in Redis

When getting all the keys from Redis, like this:
redis.server.com:6379> keys *
1) "z13235jxby03knne1w1gucl5"
Instead of manually copying the long key to execute get z13235jxby03knne1w1gucl5, I'd like to run something like get $(1) (pseudo code) to get the value at position 1, as output by the keys command.
Is this possible, if not, is there any workaround to not have to manually copy paste?
Note: I don't want to solve this with a script; in that case I'd rather just copy and paste it.
Off the top of my head, I'm not aware of a way from inside the CLI.
But, regardless of any performance implications, you can pipe the CLI commands together; you just have to do it from the shell:
redis-cli --raw KEYS "*" | sed -n 1p | xargs redis-cli GET
where the 1 in:
sed -n 1p
is the line number (1-based index) in the KEYS output.
But you still need to do your own validation, like making sure the index is within the number of keys returned by the KEYS command, and that all keys are of simple string type; not sets, hash maps, etc...
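If you find yourself doing this often, a small shell sketch (variable names are just illustrative) that parameterises the index:
N=1
KEY=$(redis-cli --raw KEYS "*" | sed -n "${N}p")
redis-cli GET "$KEY"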

DynamoDB Local to DynamoDB AWS

I've built an application using DynamoDB Local and now I'm at the point where I want to setup on AWS. I've gone through numerous tools but have had no success finding a way to take my local DB and setup the schema and migrate data into AWS.
For example, I can get the data into a CSV format but AWS has no way to recognize that. It seems that I'm forced to create a Data Pipeline... Does anyone have a better way to do this?
Thanks in advance
As was mentioned earlier, DynamoDB Local is there for testing purposes. However, you can still migrate your data if you need to. One approach would be to save data into some format, like JSON or CSV, store it in S3, and then use something like Lambdas or your own server to read from S3 and save into your new DynamoDB table. As for setting up the schema, you can use the same code you used to create your local table to create the remote table via the AWS SDK.
You can create a standalone application to get the list of tables from the local DynamoDB and create them in your AWS account; after that you can get all the data for each table and save it.
I'm not sure which language you're familiar with, but I'll explain some APIs in Java that might help you.
DynamoDB.listTables();
DynamoDB.createTable(CreateTableRequest);
Example of how to create a table using the above API:
ProvisionedThroughput provisionedThroughput = new ProvisionedThroughput(1L, 1L);
try {
    CreateTableRequest groupTableRequest = mapper.generateCreateTableRequest(Group.class); //1
    groupTableRequest.setProvisionedThroughput(provisionedThroughput); //2
    // groupTableRequest.getGlobalSecondaryIndexes().forEach(index -> index.setProvisionedThroughput(provisionedThroughput)); //3
    Table groupTable = client.createTable(groupTableRequest); //4
    groupTable.waitForActive(); //5
} catch (ResourceInUseException e) {
    log.debug("Group table already exists");
}
1 - creates the CreateTableRequest from the mapped class
2 - sets the provisioned throughput, which will vary depending on your requirements
3 - if the table has a global secondary index you can use this line (optional)
4 - the actual table is created here
5 - the thread blocks until the table becomes active
I didn't mention the APIs related to data access (insert, etc.); I assume you're familiar with them since you already use them with DynamoDB Local.
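If the tables are small, a rough CLI-only sketch of the same migration idea (the table name Group is just an example, and batch-write-item accepts at most 25 items per request, so larger tables need chunking):
aws dynamodb scan --table-name Group --endpoint-url http://localhost:8000 \
  | jq '{"Group": [.Items[] | {PutRequest: {Item: .}}]}' > group-items.json
aws dynamodb batch-write-item --request-items file://group-items.json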
I did a little work setting up my local dev environment. I use SAM to create the dynamodb tables in AWS. I didn't want to do the work twice so I ended up copying the schema from AWS to my local instance. The same approach can work the other way around.
aws dynamodb describe-table --table-name chess_lobby \
| jq '.Table' \
| jq 'del(.TableArn)' \
| jq 'del(.TableSizeBytes)' \
| jq 'del(.TableStatus)' \
| jq 'del(.TableId)' \
| jq 'del(.ItemCount)' \
| jq 'del(.CreationDateTime)' \
| jq 'del(.GlobalSecondaryIndexes[].IndexSizeBytes)' \
| jq 'del(.ProvisionedThroughput.NumberOfDecreasesToday)' \
| jq 'del(.GlobalSecondaryIndexes[].IndexStatus)' \
| jq 'del(.GlobalSecondaryIndexes[].IndexArn)' \
| jq 'del(.GlobalSecondaryIndexes[].ItemCount)' \
| jq 'del(.GlobalSecondaryIndexes[].ProvisionedThroughput.NumberOfDecreasesToday)' > chess_lobby.json
aws dynamodb create-table \
--cli-input-json file://chess_lobby.json \
--endpoint-url http://localhost:8000
The first command uses the describe-table AWS CLI capability to get the schema JSON. Then I use jq to delete all the unneeded keys, since create-table is strict with its parameter validation. Finally I use create-table to create the table in the local environment by pointing at it with the --endpoint-url parameter.
You can instead use the --endpoint-url parameter on the first command to fetch your local schema, and then run create-table without --endpoint-url to create the table directly in AWS.
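In other words, the reverse direction is roughly (a sketch, omitting the GSI clean-up for brevity):
aws dynamodb describe-table --table-name chess_lobby --endpoint-url http://localhost:8000 \
  | jq '.Table | del(.TableArn, .TableSizeBytes, .TableStatus, .TableId, .ItemCount, .CreationDateTime, .ProvisionedThroughput.NumberOfDecreasesToday)' > chess_lobby.json
aws dynamodb create-table --cli-input-json file://chess_lobby.json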

In Lettuce(4.x) for Redis how to reduce round trips and use output of one command as input for another command, especially for Georadius

I have seen this pass results to another command in redis
and using via command line this command works well :
src/redis-cli keys '*' | xargs src/redis-cli mget
However, how can we achieve the same effect via Lettuce (I started trying out 4.0.2.Final)?
Also a solution to this is particularly important in the following scenario :
Say we are using geolocation capabilities, and we add a set of locations of "my-location-category"
using GEOADD
GEOADD "category-1" 8.6638775 49.5282537 "location-id:1" 8.3796281 48.9978127 "location-id:2" 8.665351 49.553302 "location-id:3"
Next, say we do a GeoRadius to get locations within 10 km radius of 8.6582361 49.5285495 for "category-1"
Now when we get "location-id:1" & "location-id:3"
Given that I already set values for above keys "location-id:1" & "location-id:3"
I want to pipe commands to do the GEORADIUS as well as do mget on all the matching results.
Does Redis provide a feature to do that?
And/or how can we achieve this via the Lettuce client library without first manually iterating through the results of GEORADIUS and then doing a manual MGET?
That would give better performance for the program that uses it.
Does anyone know how we can do this ?
Update
This is the piped command for the scenario I discussed above :
src/redis-cli GEORADIUS "category-1" 8.6582361 49.5285495 10 km | xargs src/redis-cli mget
Now we need to know how to do this via Lettuce
IMPORTANT: never use KEYS, always use SCAN instead if you must.
This isn't really a question about Lettuce nor Java so I can actually answer it :)
What you're trying to do is use the results from a read operation (GEORADIUS) as input (key names) for another read operation (MGET). This type of flow can't be pipelined, well, just because of that - pipelining means that you don't need the answers to operations right away, but in your case you do.
However.
Since you're reading String keys with MGET, you might as well just denormalize everything (remember, we're NoSQL) and store the contents of these keys in the Sorted Set's members, e.g.:
GEOADD "category-1" 8.6638775 49.5282537 "location-id:1:moredata:evenmoredata:{maybe a JSON document here}:orperhapsmsgpack"
This will allow you to get the locations and their "data" with one GEORADIUS call. Of course, any updates to location:1's data will need to be done across all categories.
A note about Lua scripts: while a Lua script could definitely save on the back and forth in this case, any such script will be against best practices/not cluster safe.
After digging around and studying Lua script, my conclusion is that removing round-trips in such a way can only be done via Lua scripts as suggested by Itamar Haber.
I ended up creating a lua script file (myscript.lua) as below
local locationKeys = redis.call('GEORADIUS', 'category-1', '8.6582361', '49.5285495', '10', 'km')
if unpack(locationKeys) == nil then
    return nil
else
    return redis.call('MGET', unpack(locationKeys))
end
** Of course we should be passing parameters into this... this is just a PoC :)
Now you can execute it via the command:
src/redis-cli EVAL "$(cat myscript.lua)" 0
Then, to reduce the network overhead of sending the entire script to Redis for execution, we have the option of registering the script with Redis.
Redis will give us back a sha1 digest of the script, which can be used for subsequent calls to it.
This can be done as below :
src/redis-cli SCRIPT LOAD "$(cat myscript.lua)"
This should give back a sha1 code, something like this: 49730aa2ed3034ee48f818e486b2bdf1b500b19e
Subsequent calls can then be made using this sha1 code, e.g.
src/redis-cli evalsha 49730aa2ed3034ee48f818e486b2bdf1b500b19e 0
The sad part here, however, is that the sha1 digest is remembered only as long as the Redis instance is running. If it is restarted, the sha1 digest is lost and you have to do the SCRIPT LOAD once again. If nothing changes in the script, the sha1 digest will be the same.
Ideally, when going through a client API, we should first attempt EVALSHA; if that returns a "No matching script" (NOSCRIPT) error, then as a fallback do SCRIPT LOAD, procure the sha1 code once again, keep an internal map of it, and use that code for further calls.
This can be done via Lettuce; I was able to find the methods for it. Hope this gives a good insight into a solution for the problem.
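A shell-level sketch of that load-then-evalsha flow, capturing the sha1 in a variable rather than hard-coding it:
SHA1=$(src/redis-cli SCRIPT LOAD "$(cat myscript.lua)")
src/redis-cli EVALSHA "$SHA1" 0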

Is it possible to query data from Whisper (Graphite DB) from console?

I have configured Graphite to monitor my application metrics, and I configured Zabbix to monitor my servers' CPU and other metrics.
Now I want to pass some critical Graphite metrics to Zabbix to add triggers for them.
So I want to do something like
$ whisper get prefix1.prefix2.metricName
> 155
Is it possible?
P.S. I know about Graphite-API project, I don't want to install extra app.
You can use the whisper-fetch program which is provided in the whisper installation package.
Use it like this:
whisper-fetch /path/to/dot.wsp
Or to get e.g. data from the last 5 minutes:
whisper-fetch --from=$(date +%s -d "-5 min") /path/to/dot.wsp
Defaults will result in output like this:
1482318960 21.187000
1482319020 None
1482319080 21.187000
1482319140 None
1482319200 21.187000
You can change it to json using the --json option.
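To hand a single recent value to Zabbix, one rough option (a sketch against the plain-text output shown above) is to pick the last non-None datapoint:
whisper-fetch --from=$(date +%s -d "-5 min") /path/to/dot.wsp | awk '$2 != "None" {v = $2} END {print v}'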
OK! I found it myself: http://graphite.readthedocs.io/en/latest/render_api.html?highlight=rawJson (I can use curl and get back CSV or JSON).
The answer was found here: custom querying in graphite
Also see: https://github.com/graphite-project/graphite-web/blob/master/docs/render_api.rst
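For reference, a minimal curl call against the render API (the hostname and target are placeholders) might look like:
curl "http://graphite.example.com/render?target=prefix1.prefix2.metricName&from=-5min&format=json"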

Categorize the output of ec2-describe-images or ec2-describe-instances

Is there any command/tool/script to categorize the bulky output of ec2-describe-images or ec2-describe-instances?
I have a list of around 100 servers, each with every detail. I want to categorize them under suitable headings like RESERVATION, INSTANCE, BLOCKDEVICE, TAG (whatever categories are available in the output).
Did you get this sorted? If not...
Run ec2-describe-images with the --headers option to give you the categories:
ec2-describe-images --private-key ~/private.key --cert ~/my.crt --region us-west-1 --headers
If you just want certain fields, then pipe the output of the above through the Linux command cut, picking the fields (columns) you are after. Say you want ImageID, Name and Architecture; these would be fields 2, 3 and 8 of the output above. Example:
ec2-describe-images --private-key ~/private.key --cert ~/my.crt --region us-west-1 --headers | cut -f2,3,8 -s
Doing the same for ec2-describe-instances would be similar.
It's a job for awk (or perl, python, or another general-purpose scripting language).
awk can handle records with varying record/field lengths, can create associative arrays, and it's a reporting language that is usually installed on every *nix.
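For example, a minimal awk sketch (assuming the record type is the first field of each line) that splits the output into one file per category:
ec2-describe-instances | awk '{f = tolower($1) ".txt"; print > f}'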
Add this to your ~/.bashrc or ~/.bash_profile:
ez-ec2-describe-instances() {
ec2-describe-instances $* --headers | egrep '(ReservationID|running|pending)'|cut -f 2,3,4,6,7,10,12;
}
Logout/login or run ". ~/.bashrc". Then you can use:
$ ez-ec2-describe-instances
ReservationID Owner Groups
i-6f194113 ami-1624987f ec2-107-20-75-13.compute-1.amazonaws.com running t1.micro us-east-1a
You can pass arguments to ez-ec2-describe-instances, just like you would pass them to the regular ec2-describe-instances. E.g.:
$ ez-ec2-describe-instances --region eu-west-1
ReservationID Owner Groups
i-e4fd6eaf ami-c37474b7 ec2-54-246-38-35.eu-west-1.compute.amazonaws.com pending t1.micro eu-west-1a