How do I rename a Google Compute Engine VM instance?
I created a new LAMP server and I'd like to rename it in the "VM Instances" dashboard.
I've tried renaming the Custom metadata, but that didn't seem to replicate to the dashboard.
I tried the solution provided by @Marius I. It works, but I lost my description, my metadata, the tags, and the permissions I'd set on the old instance. I had to copy my metadata over, make sure the zone for the new instance was the same as the original, and check that the pricing was the same.
I think it's best to just create a clone of your original instance; this way you don't have to manually copy/set all of that on the new instance.
As @Marius said, create a snapshot of your disk (DO NOT skip this part: you may lose all your files/configuration)
Make sure you completed step 1.
Clone your instance (“Create similar” button)
Name your cloned instance the way you want.
Make sure to select the snapshot of your disk created in step 1 (and make sure you select the same type of disk as well: if your original disk was SSD, for example, the new disk has to be SSD too)
Make sure your IPs are set correctly
You're done :)
Another way to do this is:
snapshot the disk of the existing instance
create a new disk from that snapshot
create a new instance with that disk and give it the name you would like
It sounds time-consuming, but in reality it should take 5 minutes.
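If you prefer gcloud over the console, the same three steps look roughly like this (the disk, snapshot, and instance names and the zone are placeholders; treat this as a sketch and double-check the flags with gcloud help):

gcloud compute disks snapshot OLD_DISK --zone=ZONE --snapshot-names=rename-snap
gcloud compute disks create NEW_DISK --source-snapshot=rename-snap --zone=ZONE
gcloud compute instances create NEW_NAME --disk=name=NEW_DISK,boot=yes --zone=ZONE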
You can't! Once a VM is created, you can't change the instance name.
There's now a "native" way to do this. The feature is currently in Beta and only available with gcloud and via the API. With gcloud you can run:
$ gcloud beta compute instances set-name CURRENT_NAME --zone=ZONE --new-name=NEW_NAME
Some caveats:
You'll need to shut down the VM first
The Developer Console UI won't be aware of the rename until you do a browser refresh
See the official documentation for more details.
Apart from the hacks above, it's not possible.
Yet it has been requested on UserVoice, where it has received 593 votes (as of 2018). Currently, it's the topmost "planned" item.
I got lost in the instructions, so I thought I'd include screenshots, because the navigation is confusing. I hope this helps you.
Stop your instance
Click on the stopped instance name
In VM Instance Details, scroll down and click on the disk
Click on Create snapshot
Give it a name like snapshot-1 (or your new instance name)
Click the Create button
Click on the newly created snapshot
Click on Create Instance
Give your instance the new name and configure the rest of the VM.
When dealing with a robust system, it's necessary to have a way to bring a system back up quickly when it goes down. This could be via custom scripts, Salt, Ansible, etc.
So, if you want to change your instance name, delete the instance, create a new one with the correct name, and run your script again :)
To answer your question directly: you cannot edit the VM instance name.
However, you can create a new VM instance from your old disk, with the VM instance name that you want.
The procedure is as follows:
Go to the Compute Engine page
Go to the Disks page
Select the disk of the VM instance that you want to snapshot
Click the three-dot menu on the same line as your disk
Select +Create Snapshot (you will be taken to the Create Snapshot page) and name your snapshot (e.g. "backup")
Just click Create.
Once you have created a snapshot of your VM instance's disk, you can proceed to create your new instance from that snapshot, even pointing it at another region such as us-central1, us-west1, or us-west2. The procedure is as follows:
Go to the Snapshots page
Select the snapshot "backup" (you should be on the Snapshot details page)
Click Create Instance (choose the best name for your new VM instance)
Select the region that best fits you (us-central1, us-west1, or us-west2), except us-east1.
Lastly, click Create.
Machine images are now in pre-GA!
This is currently the easiest way to clone an instance without losing your instance configuration; check this comparison table.
Detailed steps:
Go to Compute Engine > Virtual Machines > Machine Images
Click on Create Machine Image
Select your current instance under Source VM instance and click Create
Once the image is ready, go to Machine image details and click on Create Instance
The form will be populated from your existing instance's configuration, and you'll be able to change it before creating the instance!
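If you prefer the command line, the equivalent should look roughly like this (machine images were still in beta at the time of writing; the image name, instance names, and zone are placeholders, so verify the flags with gcloud help):

gcloud beta compute machine-images create my-machine-image --source-instance=OLD_INSTANCE --source-instance-zone=ZONE
gcloud beta compute instances create NEW_INSTANCE --source-machine-image=my-machine-image --zone=ZONE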
Sorry to resurrect this thread after so long, but when I searched for an answer I kept ending up in this article... :-)
The Cloud SDK now allows renaming an instance directly, provided it's stopped:
The command looks like this:
gcloud beta compute instances set-name INSTANCE_NAME --new-name=NEW_NAME [--zone=ZONE] [GCLOUD_WIDE_FLAG …]
This is not available yet in the UI.
The following worked for me:
gcloud beta compute instances set-name currentname --new-name=newname
Googler checking in. We're rolling out this feature (renaming a VM) to all users in the cloud console. Public documentation.
In the Google Cloud console:
Go to the VM instances page.
In the Name column, click the name of the VM.
Click Stop.
Click Edit.
In Basic information > Rename > VM instance name, enter a new name for the VM.
Click Save.
Click Start / Resume.
Using the gcloud Command Line Interface:
gcloud compute instances stop INSTANCE_NAME
gcloud beta compute instances set-name INSTANCE_NAME --new-name=NEW_INSTANCE_NAME
I also wanted to let you all know that we monitor these forums and use your feedback to influence our roadmap. Thank you for your engagement!
I was trying to do this in 03/2019 and saw a new option in the panel:
Click the instance link
In the top menu you will see "Create Similar"
This can work if you need the same machine without its data (it solved my case).
If you need a full copy, then you should create a snapshot and clone it.
This is now possible via the web console.
Until now I used QEMU/KVM and was able to start a VM from a kernel image and an initrd file (skipping the bootloader). I want to start using virt-manager to manage my VMs, but it looks like there is no longer an option to use this method, only to create VMs from ISO images etc.
Is there any way to make it work, or do I misunderstand something?
There is a way, but you have to trick it a little bit. Select 'Import', and give it any file as a disk image, just to get past the page. Click through to the end of the wizard and select 'Customize before install'. Under the Boot page, you'll see an option to specify a kernel + initrd. Then remove the disk with the fake disk image.
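If you'd rather avoid the trick entirely, virt-install (the CLI that virt-manager builds on) can pass a kernel and initrd directly. A sketch, assuming the guest needs no disk; the name, paths, and kernel arguments are placeholders:

virt-install --name kernel-boot-test --memory 1024 --disk none \
  --boot kernel=/path/to/vmlinuz,initrd=/path/to/initrd.img,kernel_args="console=ttyS0"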
My client is in need of an AWS spring cleaning!
Before we can terminate EC2 instances, we need to find out who provisioned them and ask whether they are still using the instance before we delete it. AWS doesn't seem to provide an out-of-the-box feature for reporting who the 'owner'/'provisioner' of an EC2 instance is; as I understand it, I need to parse through gobs of archived, gzipped log files residing in S3.
Problem is, their automation makes use of STS AssumeRole to provision instances. This means the RunInstances event in the logs doesn't trace back to an actual user (correct me if I'm wrong; please, I hope I am wrong).
An AWS blog post tells the story of a fictional character, Alice, and her steps tracing a TerminateInstances event back to a user, which involves two log events: the TerminateInstances event itself and an AssumeRole event "somewhere around that time" containing the actual user details. Is there a pragmatic approach one can take to correlate these two events?
Here's my POC that parses a CloudTrail log from S3:
import boto3
import gzip
import json

# Use the profile that can read the CloudTrail bucket
boto3.setup_default_session(profile_name=<your_profile_name>)
s3 = boto3.resource('s3')
s3.Bucket(<your_bucket_name>).download_file(<S3_path>, "test.json.gz")

# CloudTrail log files are gzipped JSON with a top-level "Records" list
with gzip.open('test.json.gz', 'rt') as fin:
    json_data = json.load(fin)

for record in json_data['Records']:
    if record['eventName'] == "RunInstances":
        # userName is missing for assumed-role identities, hence .get()
        user = record['userIdentity'].get('userName')
        principalid = record['userIdentity']['principalId']
        for instance in record['responseElements']['instancesSet']['items']:
            print("instance id: " + instance['instanceId'])
            print("user name: " + str(user))
            print("principalid " + principalid)
However, the details are generic, since these roles are shared by many groups. How can I find the details of the user from before they assumed the role, in a script?
UPDATE: Did some research, and it looks like I can correlate the RunInstances event to an AssumeRole event through a shared accessKeyId, which should show me the account name before it assumed the role. Tricky, though: not all RunInstances events contain this accessKeyId, for example if invokedBy was an autoscaling event.
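A minimal sketch of that correlation idea, assuming records holds the combined 'Records' arrays from all the log files parsed as above (AssumeRole events expose the temporary key under responseElements.credentials; field names follow the CloudTrail schema):

# Index AssumeRole events by the temporary access key they issued.
assume_role_by_key = {}
for record in records:
    if record['eventName'] == 'AssumeRole':
        creds = (record.get('responseElements') or {}).get('credentials', {})
        if 'accessKeyId' in creds:
            assume_role_by_key[creds['accessKeyId']] = record

# Trace each RunInstances call back to the identity that assumed the role.
for record in records:
    if record['eventName'] == 'RunInstances':
        key_id = record['userIdentity'].get('accessKeyId')
        source = assume_role_by_key.get(key_id)
        if source:
            print("launched by: " + source['userIdentity']['arn'])
        else:
            print("no matching AssumeRole (e.g. invokedBy was a service)")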
Direct answer:
For the solution you are proposing, you are unfortunately out of luck. Take a look at http://docs.aws.amazon.com/IAM/latest/UserGuide/cloudtrail-integration.html#w28aac22b9b4b7b3b1. In the 4th row, it says that AssumeRole saves only the role identity for all subsequent calls.
I'd contact AWS support to make sure of this, as I might very well be mistaken.
What I would do in your case:
First, wait a couple of days in case someone has a better idea, or in case I was mistaken and AWS support answers with an out-of-the-box solution.
Create an AWS Config rule that deletes all instances that have a certain tag. Then tell your developers to tag all instances that they are sure should be deleted; those will then get deleted.
Tag all the production instances, and the still-needed development instances, with a tag of their own.
Run a script that tags all of the untagged instances with a separate tag (see the sketch after this list). Double- and triple-check these instances.
Back up and turn off the instances tagged in step 3 (without deleting the instances).
If someone complains about something not being on, that means they missed an instance in step 1 or 2. Tag this instance correctly and turn it on again.
After a while (a week or so), delete the instances that are still stopped (keep the backups).
After a couple of months, delete the backups that were not restored.
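As an illustration of the tagging step above, a minimal boto3 sketch that marks every instance with no tags at all for review (the needs-review tag name is just an example):

import boto3

ec2 = boto3.client('ec2')

# Tag every instance that currently has no tags at all.
paginator = ec2.get_paginator('describe_instances')
for page in paginator.paginate():
    for reservation in page['Reservations']:
        for instance in reservation['Instances']:
            if not instance.get('Tags'):
                ec2.create_tags(
                    Resources=[instance['InstanceId']],
                    Tags=[{'Key': 'needs-review', 'Value': 'true'}],
                )
                print("tagged " + instance['InstanceId'])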
Note that this isn't foolproof; there's the possibility of human error and of downtime. So double- and triple-check, make a clone of the same environment and test on that (if you have a development environment that already has such a configuration, that would be the best scenario), take it slow so you can monitor everything, and be sure to keep backups of everything.
Good luck, and please tell me what your solution ended up being.
General guidelines for the future:
Note: The following points are very opinionated; they are general rules that I abide by because I find they save me a load of trouble from time to time. Read them, dismiss what you find unfit for you, and take the things you find reasonable.
Don't use AssumeRole that often, as it obfuscates user access. If a script runs on a developer's PC, let it run with their own username. If it runs on a server, keep it with the role it was created with. The amount of management is less that way, as you cut out the middleman (the assumed role) and don't need to create roles anymore; just assign the permissions to the correct group/user. See below for when I'd consider AssumeRole a necessity.
Automate deletions. The first thing you should build is automation that keeps the AWS account as clean as possible, as this saves both money and debugging pain. Tags, and scripts that act on those tags, are very powerful tools. So if a developer needs an instance for a day to try out something new, they can create a tag that times the instance out, and a script cleans it up when the time comes (see the sketch after these guidelines). Such scripts are project-specific, and not everyone needs all of them, so assess what your project needs and act on that.
What I'd recommend is giving permissions to the users themselves in the development environment, as it makes tracing things to their root, and finding the most knowledgeable person to solve a problem, easier. As for the production environment, everything should be automated anyway (creation when needed and deletion when no longer needed), and no one should have any write access to that account, ever.
As for AssumeRole, I only use it when I want to give read-only access to production logs on another account. Another case would be something that really shouldn't happen often, if at all, but that some users still need access to. So, as an extra layer of protection against the 'I did it by mistake', I make them switch roles to do it, and I never have a script that automatically switches roles and performs the action, in an attempt to keep it as deliberate as possible (think deleting a database and such). Another case would be accessing sensitive information (a credit-card database, etc.). Many more scenarios can occur, and there it comes down to your judgement.
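To illustrate the timed-instance idea from the automation point above: a sketch that assumes instances carry a hypothetical expire-at tag holding an ISO 8601 timestamp (the tag name and format are my invention, not a convention):

import boto3
from datetime import datetime, timezone

ec2 = boto3.client('ec2')
now = datetime.now(timezone.utc)

# Find instances carrying the (hypothetical) 'expire-at' tag and terminate
# the ones whose expiry timestamp is in the past.
resp = ec2.describe_instances(Filters=[{'Name': 'tag-key', 'Values': ['expire-at']}])
for reservation in resp['Reservations']:
    for instance in reservation['Instances']:
        tags = {t['Key']: t['Value'] for t in instance.get('Tags', [])}
        # Assumes tag values like 2019-04-01T00:00:00+00:00
        if datetime.fromisoformat(tags['expire-at']) < now:
            ec2.terminate_instances(InstanceIds=[instance['InstanceId']])
            print("terminated " + instance['InstanceId'])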
Again, good luck.
We are using two servers, one as pre-production and the other as production. When we migrate jobs or transformations from pre-prod to prod, the migration copies their connection properties as well, and this affects our production job execution.
Can someone let me know how to migrate transformations without copying their connections to the other server?
From the Tools -> Options menu, there are two checkboxes that affect PDI's import behavior: "Replace existing objects on open/import" and "Ask before replacing objects".
Normally when migrating between environments, I set the first option to false. That way, if a connection definition already exists, it is silently not replaced. The other way to go is to check both options and answer 'No' when asked to replace an existing definition.
In this way, a transform/job that runs on pre-prod can simply be exported and imported into prod without changing anything, and it runs against prod in the new environment as long as the connections are named the same.
The only thing to watch out for is importing a new connection definition for the first time. There will be no warning that a new connection object is being created, and after the import it will still point to pre-prod. After each new connection import, you need to change the connection definition to point to the new environment. The good news is you only have to do that once.
I wish they had an option, or just an info dialog to show all new connection objects created as a result of the import; that way you would know exactly what you need to change. But alas -- earwax.
If by 'connection' you mean 'database connection', JNDI allows you to give them a symbolic name independent of your environment: it is when you configure your environment (e.g. biserver or baserver) that you specify which database (JDBC driver, IP and port, ...) the symbolic name is related to.
So your transformations don't contain any reference to a server address, and you can deploy them "as is".
I use JNDI for my CDE dashboards in biserver too : to deploy a dashboard, I just export it from the dev environment and import it in the preprod environment without modifying anything.
There are a lot of resources on the web about JNDI. Check the Pentaho documentation too.
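For PDI specifically, the symbolic names live in simple-jndi/jdbc.properties under the data-integration folder. A sketch of one entry (the connection name, driver, and credentials are examples, not your actual settings):

mydb/type=javax.sql.DataSource
mydb/driver=org.postgresql.Driver
mydb/url=jdbc:postgresql://prod-db-host:5432/mydb
mydb/user=etl_user
mydb/password=change_me

Each environment gets its own copy of this file pointing at its own server, while the transformations keep referring to the name mydb.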
We have some hyper-v server images that if possible we would like to not rebuild. However the original machine they were installed on is no longer available.
So we have 1 vhdx and 8 avhdx files. The vhdx is the virtual disk, and the avhdx files are checkpoints, if I remember correctly.
The process I used to build these machines was: base install... checkpoint... install SQL Server... checkpoint... install Visual Studio... checkpoint, etc.
I created a new VM on my Windows 8 box, and in the create wizard I told it to use an existing vhdx, then pointed it at my previous one.
The system boots and accepts the correct password. However, it comes up at the base-install point. The checkpoints do not show in Hyper-V Manager, and consequently they do not get applied to the VM.
Now I have found that there is a sort of checkpoint database (XML) and several related folders in
C:\ProgramData\Microsoft\Windows\Hyper-V
I was thinking maybe there are some pointers I could realign, but it seems editing them is not supported.
I also tried, under the settings option of the test VM, changing the checkpoint location. That seems to be a place for checkpoints to be stored, and it doesn't pick them up if any already exist. This behavior makes sense, as the pointer XML/database is where that information should come from.
So, is there a way to get these checkpoints associated/recreated, and make use of these existing Hyper-V machines?
Thank You
I have created a process in IBM UCD to deploy a .Net application.
My scenario is that I should be able to provide a different application name at run time each time I run the process. How can I do this using a property in IBM UCD?
I have tried enabling the "Prompt on use" option, and I also created a component property and mapped it to a parameter, say ${p:component/application.name}, but that doesn't seem to work. Maybe I'm missing some sequence of steps.
It would be great if I could get detailed steps to make this work.
I take it that you are on version 4.x (uDeploy)?
I would steer clear of the prompt-on-use approach; that feature was removed in 6.x. While there is a migration in place, it's simpler to just avoid it.
Using a property on the component process itself is the way to go. So go to your process configuration, and go to the properties / configuration tab. Create a property there. You'll be prompted for a value whenever you run an application process that uses this component process.
If the property is named "iis.app.name", you would reference it with just ${p:iis.app.name}.
Don't use the property "application.name". That is an automatically created property that holds the name of the UCD application you are deploying. If you ever can't figure out the right way to reference a property, look at your executed process (at the component/application level). The normal view that lists all the steps that ran and how long they took sits on a tab called "Log". Right next to it is the "Properties" tab. Click that and you'll see which properties were available to the process.
Also, you'll have better luck getting fast answers about UC Deploy using their own forum: https://developer.ibm.com/answers/?community=urbancode
Did you try using a process plugin for updating the property file?
Application >> Process >> Select Process >> Process Editor. From the left panel you can find the Utility plugins; try the Update Property option.