Cannot connect to Compute Engine CentOS Virtual Machine

I am new to Virtual Machines and CLI so please bear with me.
I have a CentOS 6.5 running on Compute Engine.
I ran yum update (without creating a snapshot of the previous disk - yes, I am an idiot) and now I cannot connect to the machine using its IP address.
I tried the following steps.
Tried to connect through FileZilla - didn't work.
Tried through PuTTY - didn't work.
Tried through the browser-based SSH option in the Compute Engine console - didn't work.
I even tried creating a snapshot and starting up another VM with the snapshot - didn't work.
If anyone knows how I can get the files and folders out from the previous disk, I can start up a new VM and transfer everything again.
I do not have the latest database and this is important.
Please help!
Thanks
Warren

The way to recover is to delete your VM without deleting the disk, then create another VM with its own boot disk, attach and mount the original disk, and recover any data that you need from it.
First things first: on the VM instances page, click on the instance name that is currently running with that disk, and uncheck the box "Delete boot disk when instance is deleted". Then delete the instance.
Now, create a new instance with its own boot disk. To differentiate this new disk from the original boot disk:
use a different OS (or a different version of the OS) for the new disk, e.g. if using Ubuntu, try a different version or use Debian; if using RHEL, try CentOS, or vice versa
check which one is mounted at / — that should be the new disk
Mount the original disk as read-only and recover any information you need. Once you have a backup of your data, you can remount it with read-write access and try to fix it (but back up the data first!).
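For reference, the attach-and-mount step looks roughly like this once the original disk is attached to the new instance (the device name and paths are assumptions - check with lsblk first, as the original disk usually shows up as /dev/sdb or similar):
lsblk                                  # identify the attached original disk, e.g. /dev/sdb1
mkdir /mnt/olddisk                     # create a mount point
mount -o ro /dev/sdb1 /mnt/olddisk     # mount read-only to protect the data
cp -a /mnt/olddisk/path/to/data /some/backup/location/   # copy out whatever you need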

I finally solved this problem - thanks to Misha for pointing me in the right direction.
The steps are below for anyone who has the same issue.
Problem:
After updating the CentOS server with yum update, I was unable to connect back to it.
I tried all possible combinations but had no luck. This seems to be a known issue, as there is some material on the Compute Engine site regarding it.
Solution:
I followed the steps Misha suggested: I started up another VM with its own boot disk and then attached the original disk with read-write access.
Note: I was unable to mount the disk as read-only.
The commands were
mkdir /mnt/sdb1
mount /dev/sdb1 /mnt/sdb1
Once I mounted the disk, I copied the files from the html folder on the sdb1 disk to the html folder on sda1 (the new boot disk).
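(For reference, that copy was essentially the following - the html paths are an assumption based on a typical Apache document root:)
cp -a /mnt/sdb1/var/www/html/. /var/www/html/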
The database was a bit more challenging.
I tried quite a few times, but copying the files from /var/lib/mysql on the old disk (/mnt/sdb1/var/lib/mysql) into the new disk's mysql folder was not working.
I found some tutorials but nothing helped.
Finally I downloaded the files from /mnt/sdb1/var/lib/mysql and put them in the data folder of my local Windows MySQL installation.
Remember that you have to download everything, which includes ib_logfile0, ib_logfile1 and ibdata1, as well as the folders containing the *.frm files.
Then I opened localhost/phpmyadmin and voila... the files were there.
The rest was pretty simple... Exporting and uploading the SQL scripts back to the server.
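(The export/import was just the standard MySQL client tools - the database name here is a placeholder:)
mysqldump -u root -p mydatabase > mydatabase.sql      # run locally against the recovered copy
mysql -u root -p mydatabase < mydatabase.sql          # run on the new server after creating the empty database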
This took me about 12 hours to figure out.
Thanks again Misha.

Related

Image file on root node for a virtual machine - can it be moved?

I am using Proxmox and created a virtual machine yesterday. Today I noticed that there is hardly any space left on my root node's /dev/mapper disk, which causes the VM to stop. I found out that there is an image file (extension .qcow2) in the directory /var/lib/vz/images that belongs to the newly created VM and consumes quite a lot of space.
I know that images can be used to install operating systems from, and I asked myself whether this image file is a necessary component for the VM to work or whether it is only created as a kind of backup. If it is a backup file, I could save it on another disk to solve my problem.
Thanks for your help.
It's your virtual machine's disk; you cannot just remove it. You can create the VM disk with "Thin provision" checked in the storage configuration on the hypervisor, so it will consume only what you actually use instead of allocating all the space at once. Use Clonezilla or dd to clone all the data over to the new disk.
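If you go the dd route, a minimal sketch looks like this (device names are placeholders - double-check them with lsblk first, because dd will happily overwrite the wrong disk):
lsblk                                   # confirm which device is the source and which is the target
dd if=/dev/sda of=/dev/sdb bs=4M status=progress conv=noerror,sync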

VMWARE - Migration went wrong - failure

Need your help:
Short story: I tried to migrate an ESXi VM to a new ESXi server and made a huge mistake - instead of copying the VM, I moved it to a shared NFS folder. When I tried to add it to the inventory it was added, but it would not start. I recreated the vmx file and got the VM working, but now I have another problem: the VM is more than a year old, so it is missing its snapshot. This is where my doubts start, and I can't figure it out after many hours. I have added the snapshot file and the VM, but I can't figure out how to put it all back to work.
This is a picture of the file structure:
[screenshot: the VM working folder and a "ZZ Temp" folder containing the snapshot files]
The image has two parts: the first is the VM working folder, and the other folder, "ZZ Temp", holds the snapshot and the VM(?). I had to create this folder because I could not copy the files into the VM folder (they already exist there), although I was later able to copy "Windows Server 2008 R2-000001-delta.vmdk" into the VM folder.
In conclusion, this is all a mess and I can't figure out how to reassemble it. Can someone help me with this?
1 - Can I copy the files back to the old server and reassemble it there? I have the same issue: it won't let me copy all the files into the VM folder.
2 - How can I tell whether the 300 GB "Windows Server 2008 R2-flat.vmdk" file is the base disk of the server?
Do I need it, or do I just need these two files from the ZZ Temp folder: "Windows Server 2008 R2-flat.vmdk" and "Windows Server 2008 R2-000001-delta.vmdk"?
I'm completely lost and need your guidance.

Proposal to Migrate OpenNebula Datastore from Local FS to NFS

I have an OpenNebula instance with 2 nodes running KVM and a local file store. This means no live migration or failover, as VM images are scp'd to each node.
I would like to implement NFS shared storage and move the VMs from the local FS datastore to the NFS shared storage datastore. OpenNebula supports migrating VMs between datastores, but only between datastores of the same type, i.e. 'ssh' to 'ssh' but not 'ssh' to 'shared'.
I am working on a method of achieving this, and would love some feedback as to why this is a good or a bad idea.
Thanks
OpenNebula doesn't currently support migrating VMs from one type of datastore to a different type of datastore. I have been working on a method that works, and I want to document it here to get some feedback and opinions on it.
A datastore type is identified primarily by the Transfer Manager driver 'TM_MAD' setting. This setting cannot be changed, either through Sunstone or through the CLI, so we need a way to do just that. This is what I did. I started with a fresh install of OpenNebula 5.4.13 in one VM, plus 2 node VMs, all running Debian 9 inside VMware virtual machines (don't forget to enable virtualisation in the VM CPU options).
NOTE: This is an experimental process, so make sure you back up everything first!
Steps
To migrate to a different store, there are a few steps we need to do. They are as follows:
Set up the NFS share exports,
Move the VM images to the NFS share and mount the datastore,
Change the datastore types,
Configure the nodes for the NFS share.
Set Up the NFS Server
The first thing we want to do is set up the NFS shares that we want to use. I'm using a single share for the base datastore folder, but you could use separate shares for each datastore ID, even from different NFS servers.
On the NFS server, create the datastore folder, e.g. mkdir /share/one_datastore,
Add the datastore path to /etc/exports (see the example export line below) and export the new share: exportfs -rav,
Confirm the share is available: showmount -e localhost
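(For reference, the export line looks something like this in /etc/exports - the options shown are only an example, adjust them to your environment:)
# /etc/exports on the NFS server
/share/one_datastore    *(rw,sync,no_subtree_check)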
Prepare to Migrate
Before we modify the datastores there are a few things to do first:
Shut down any running VMs and undeploy them (CLI sketch below). This saves the machines' state and copies the images back to the image store,
Stop the Sunstone and OpenNebula services: systemctl stop opennebula && systemctl stop opennebula-sunstone.
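(The CLI equivalent is roughly the following - the VM ID is a placeholder; repeat the undeploy for each running VM:)
onevm list            # see which VMs are running and note their IDs
onevm undeploy 3      # undeploy VM 3; its state and disks go back to the datastores on the frontend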
Migrate Data
Shared storage makes the VM disk images available to all the nodes, so they can access the same data. Copy the VM data to the NFS share ready for mounting.
From the Sunstone frontend server, confirm the NFS shares: showmount -e [nfs-server],
Create a temporary folder to mount the share in: mkdir /mnt/datastore,
Temporarily mount the NFS folder: mount [nfs-server]:/share/one_datastore /mnt/datastore,
Move the datastore folders to the share: mv /var/lib/one/datastores/* /mnt/datastore/
The OpenNebula datastore folders now live on the NFS server: ls /mnt/datastore should list folders 0, 1 and 2,
Mount the NFS share over the OpenNebula datastore folder: mount [nfs-server]:/share/one_datastore /var/lib/one/datastores,
Confirm the folders are available: ls /var/lib/one/datastores should list our 3 folders 0, 1 and 2,
Add the mount to /etc/fstab to persist it across boots (see the example entry below).
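(The fstab entry is along these lines - the server name is a placeholder:)
# /etc/fstab on the frontend
nfs-server:/share/one_datastore  /var/lib/one/datastores  nfs  defaults  0  0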
The OpenNebula frontend is now configured to access the datastore folders from the NFS share. Next we want to change the datastore type from ssh to shared.
Change Datastore Types
The datastore configuration is stored in the OpenNebula database /var/lib/one/one.db. We can change the driver type by editing the datastore configuration data, which tells OpenNebula which drivers to use and how to handle the datastore data. By default OpenNebula uses an SQLite database, with MySQL as an option. I'm using SQLite, but the same approach works for MySQL.
Open the OpenNebula database: sqlite3 /var/lib/one/one.db,
View all tables with .tables; datastore_pool is the table we want to modify,
List all the records in the table: select * from datastore_pool; will result in a screenful of configuration data. Each record has an identifier oid which matches the datastore ID, like this (the first 0 is the datastore ID of the default SYSTEM datastore):
0|system|<DATASTORE><ID>0</ID><UID>0</UID><GID>0</GID><UNAME>oneadmin</UNAME><GNAME>oneadmin</GNAME><NAME>system</NAME><PERMISSIONS><OWNER_U>1</OWNER_U><OWNER_M>1</OWNER_M><OWNER_A>0</OWNER_A><GROUP_U>1</GROUP_U><GROUP_M>0</GROUP_M><GROUP_A>0</GROUP_A><OTHER_U>0</OTHER_U><OTHER_M>0</OTHER_M><OTHER_A>0</OTHER_A></PERMISSIONS><DS_MAD><![CDATA[-]]></DS_MAD><TM_MAD><![CDATA[ssh]]></TM_MAD><BASE_PATH><![CDATA[/var/lib/one//datastores/0]]></BASE_PATH><TYPE>1</TYPE><DISK_TYPE>0</DISK_TYPE><STATE>0</STATE><CLUSTERS><ID>0</ID></CLUSTERS><TOTAL_MB>0</TOTAL_MB><FREE_MB>0</FREE_MB><USED_MB>0</USED_MB><IMAGES></IMAGES><TEMPLATE><ALLOW_ORPHANS><![CDATA[NO]]></ALLOW_ORPHANS><DISK_TYPE><![CDATA[FILE]]></DISK_TYPE><DS_MIGRATE><![CDATA[YES]]></DS_MIGRATE><RESTRICTED_DIRS><![CDATA[/]]></RESTRICTED_DIRS><SAFE_DIRS><![CDATA[/var/tmp]]></SAFE_DIRS><SHARED><![CDATA[NO]]></SHARED><TM_MAD><![CDATA[ssh]]></TM_MAD><TYPE><![CDATA[SYSTEM_DS]]></TYPE></TEMPLATE></DATASTORE>|0|0|1|1|0
Now to change the datastore type. Grab the data from the 3rd column, body
(you can run select body from datastore_pool where oid=0;), and copy it into your favourite text editor (that's the chunk starting with <DATASTORE> and ending with </DATASTORE>). Find and replace:
Find: <TM_MAD><![CDATA[ssh]]></TM_MAD>
Replace with: <TM_MAD><![CDATA[shared]]></TM_MAD>
Find: <SHARED><![CDATA[NO]]></SHARED>
Replace with: <SHARED><![CDATA[YES]]></SHARED>
Now update the SYSTEM datastore record. Run the following command on the database, replacing [datastore-config] with the text block you just modified: update datastore_pool set body='[datastore-config]' where oid=0; (a one-statement alternative using SQLite's replace() function is sketched below),
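(If you would rather skip the copy/paste into a text editor, the same edit can be done in one statement with SQLite's built-in replace() function - this is just an alternative sketch, so re-check the row afterwards with select body from datastore_pool where oid=0;:)
update datastore_pool
   set body = replace(replace(body,
       '<TM_MAD><![CDATA[ssh]]></TM_MAD>', '<TM_MAD><![CDATA[shared]]></TM_MAD>'),
       '<SHARED><![CDATA[NO]]></SHARED>', '<SHARED><![CDATA[YES]]></SHARED>')
 where oid = 0;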
Updating the IMAGE datastore is a little different. There is no SHARED option, but we want to use either the shared or the qcow2 driver. I used qcow2. So: select body from datastore_pool where oid=1;
Find: <TM_MAD><![CDATA[ssh]]></TM_MAD>
Replace with: <TM_MAD><![CDATA[qcow2]]></TM_MAD>
Update the record: update datastore_pool set body='[datastore-config]' where oid=1;,
Update the FILES datastore (oid=2) by replacing <TM_MAD><![CDATA[ssh]]></TM_MAD> with <TM_MAD><![CDATA[shared]]></TM_MAD>, then update the record using the same method as above.
Now that the datastores have been updated to use the shared driver, let's start Sunstone and check that the datastores show up.
systemctl start opennebula && systemctl start opennebula-sunstone
Jump into the Sunstone web UI and go to Datastores. Open each datastore to check that SHARED is enabled and that the correct driver shows, i.e. shared or qcow2 (you can also check from the CLI, as sketched below).
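(A quick CLI check, if you prefer it over the web UI:)
onedatastore list                     # the TM column should now show shared / qcow2
onedatastore show 0 | grep -i tm_mad  # confirm the driver on the SYSTEM datastore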
~DON'T DO ANYTHING YET~ We still need to configure the nodes!
Configure the Nodes
Because we stopped and undeployed the VMs, there shouldn't be any data in the node datastores, so we can simply mount the NFS share over the datastores folder on each node. Confirm the folders are empty first and make sure to take backups! This is an experimental process, so be warned! Right, let's get on with it:
Check the contents of /var/lib/one/datastores. If you are mounting each datastore-ID folder onto its own NFS share, you can do that instead of mounting the entire datastores folder: empty the 0, 1 and 2 folders. Otherwise, remove all folders from the datastores folder,
If not already installed: apt-get install nfs-common,
Check for NFS shares: showmount -e [nfs-server],
Mount the NFS share onto the datastores folder: mount [nfs-server]:/share/one_datastore /var/lib/one/datastores,
Confirm the mount, e.g. with df,
Edit /etc/fstab, adding the mount so it is mounted on the next boot.
Restart the node to confirm the datastore NFS mount persists (and to give it a fresh start!)
Repeat on all host nodes (a consolidated sketch of the node-side commands follows).
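(Putting the node-side steps together, the whole thing is roughly the following - [nfs-server] is a placeholder, as above:)
apt-get install nfs-common
showmount -e [nfs-server]                                        # confirm the export is visible from the node
mount [nfs-server]:/share/one_datastore /var/lib/one/datastores  # mount the shared datastores folder
df -h /var/lib/one/datastores                                    # confirm the mount
echo '[nfs-server]:/share/one_datastore /var/lib/one/datastores nfs defaults 0 0' >> /etc/fstab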
Test it Out
In Sunstone, go to the Hosts tab and check that the hosts are up and running. Next, grab a VM and deploy it. It should deploy without any issues and start booting.
Once it is up and running, I like to keep a continuous ping going against the VM while testing live migration. So start a ping (ping [vm-ip] -t on Windows) and then, in Sunstone, open the VM and do a 'Live Migrate' to another node. Watch the ping and check the logs to make sure it succeeded. I found I had to refresh the display and go to the Hosts tab to check that the VM had migrated; after that it showed correctly, but I think that was a caching issue in my browser. After the live migration you should still see the ping rolling along, with maybe one failed ping in the results.
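(The CLI equivalent of the Sunstone 'Live Migrate' action is roughly the following - the VM ID and target host name are placeholders:)
onevm migrate --live 3 node2      # live-migrate VM 3 to the host named node2
onevm show 3 | grep -i host       # confirm which host the VM is now running on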
Conclusion
So that's the process I used to migrate from ssh local storage to shared storage. I've tested it and it is working without any issues. However, if you do have any issues or have an opinion on this process, please let me know. If there are any pitfalls I have overlooked, please let me know as well.
Ok, have fun with it. I'm off to try moving the shared storage over to some kind of shared cluster like Ceph or GlusterFS!

Errors adding users to Mongodb on Ubuntu linux

I am trying to add admin users to a MongoDB instance running on Ubuntu Linux on AWS.
Working from the mongo shell, I first run 'use admin'; then, when I run db.addUser("admin", "password"),
the command fails saying "Can't take a write lock while out of disk space".
I checked disk space and there is 1 GB remaining - any help?
I have been working with EC2 instances for some years and I have seen similar errors from software that has nothing in common with MongoDB, so I bet this is a problem related to EC2 disk volume management rather than a MongoDB issue.
In my opinion it should be one of the following problems:
You have started MongoDB with a user that cannot modify the MongoDB data directory. Are you sure you started MongoDB using a user with write permissions on the data directory?
The MongoDB data directory points to a full disk volume (this is quite common when software is installed with apt, yum or another package manager on Amazon EC2 instances). Check your MongoDB data directory configuration and use df -h on the command line to see how much disk space is actually available on that volume (a quick check is sketched below).
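(A quick way to check both - the config file and data directory shown are the Ubuntu package defaults and may differ on your setup:)
grep -i dbpath /etc/mongodb.conf   # where is the data directory? (dbPath in /etc/mongod.conf on newer versions)
df -h /var/lib/mongodb             # free space on the volume holding the data directory
ls -ld /var/lib/mongodb            # the owner should match the user mongod runs as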

Locked SQL Server Data Files

I have an SQL Server database where I have the data and log files stored on an external USB drive. I switch the external drive between my main development machine in my office and my laptop when not in my office. I am trying to use sp_detach_db and sp_attach_db when moving between desktop and laptop machines. I find that this works OK on the desktop - I can detach and reattach the database there no problems. But on the laptop I cannot reattach the database (the database was actually originally created on the laptop and the first detach happened there). When I try to reattach on the laptop I get the following error:
Unable to open the physical file "p:\SQLData\AppManager.mdf". Operating system error 5: "5(error not found)"
I find a lot of references to this error, all stating that it is a permissions issue. So I went down this path and made sure that the SQL Server service account has appropriate permissions. I have also created a new database on the same path and been able to successfully detach and reattach it. So I am confident permissions are not the issue.
Further investigation reveals that I cannot rename, copy or move the data files as Windows thinks they are locked - even when the SQL Server service is stopped. Process Explorer does not show up any process locking the files.
How can I find out what is locking the files, and how can I unlock them?
I have verified that the databases do not show up in SSMS - so SQL Server does not still think they exist.
Update 18/09/2008
I have tried all of the suggested answers to date with no success. However trying these suggestions has helped to clarify the situation. I can verify the following:
I can successfully detach and reattach the database only when the external drive is attached to the server that the copy of the database was restored to - effectively the server where the database was "created" - let's call this the "Source Server".
I can move, copy or rename the data and log files, after detaching the database, while the external drive is still attached to the Source Server.
As soon as I move the external drive to another machine, the data and log files are "locked", although the two tools that I have tried - Process Explorer and Unlocker - both find no locking handles attached to the files.
NB. After detaching the database I tried both stopping the SQL Server service and shutting down the Source Server prior to moving the external drive - still with no success.
So at this stage, all I can do to move data between desktop and laptop is make a backup of the database onto the external drive, move the external drive, and restore the database from the backup. It works OK but takes a bit more time, as the database is a reasonable size (1 GB). Anyway, this is the only choice I have at this stage, even though I was trying to avoid going down this path.
Crazy as it sounds, did you try manually granting yourself permissions on the files via right-click / Properties / Security? I think SQL Server 2005 sets permissions on a detached file exclusively for the principal that did the detach (maybe your account, maybe the account under which the SQL Server service runs), and no-one else can manipulate the file. To get around this I have had to manually grant myself file permissions on MDF and LDF files before moving or deleting them. See also the blog post at onupdatecascade.com.
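(From an elevated command prompt that would be something like the following - the MDF path is taken from the error message above, the log file name is a guess based on the default naming convention, and the account name is a placeholder:)
icacls "P:\SQLData\AppManager.mdf" /grant "YOURDOMAIN\YourUser":F
icacls "P:\SQLData\AppManager_log.ldf" /grant "YOURDOMAIN\YourUser":F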
Can you copy the files? I'd be curious to know whether you can copy the files to your laptop and then attach them there. I would also guess it is some kind of permissions error, but it sounds like you've done the work to rule that out.
Are there any attributes set on the files?
Update: if you can't copy the files, then something must be locking them. I would check out Unlocker, which I haven't tried but which sounds like a good starting point. You might also try taking ownership of the files under the file permissions.
When you are in Enterprise Manager or SSMS, can you see the name of the database that you are talking about? There might be a leftover database in a funky state. I'd make sure that you have a backup or a copy of the mdf somewhere safe. If this is the case, maybe try dropping the database and then re-attaching it.
I would try backing up the database on the desktop, and then see whether it restores successfully on the laptop. That doesn't explain your issue, but at least you can move forward.
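(In T-SQL the round trip is roughly the following - the database name is inferred from the MDF name in the question, the logical file names and paths are placeholders, and WITH MOVE is only needed if the file locations differ on the laptop:)
-- on the desktop
BACKUP DATABASE AppManager TO DISK = 'P:\Backups\AppManager.bak' WITH INIT;
-- on the laptop
RESTORE DATABASE AppManager FROM DISK = 'P:\Backups\AppManager.bak'
WITH MOVE 'AppManager' TO 'C:\SQLData\AppManager.mdf',
     MOVE 'AppManager_log' TO 'C:\SQLData\AppManager_log.ldf';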
Run sqlservr.exe in debug mode with the /c switch and see what happens during startup. Any locking or permissions issue can be put to bed by making a copy of the file and transferring the copy back over the original.
Also check the associated log file (.ldf). If that file is missing or unavailable, you will not be able to mount the database in any sane/consistent state without resorting to emergency bypass mode.
I've had a similar issue. Nothing seemed to resolve it - I even tried rebooting the machine completely, restarting the SQL services, etc. ProcMon and Process Explorer showed nothing, so I figured the "lock" was held by the OS.
I resolved it by DELETING the file and restoring it from the drive mounted under another drive letter.
PS: My database file was not on a USB drive but on a TrueCrypt drive (in a sense you could say it's a "removable drive" as well).
Within SQL Server Configuration Manager, look in SQL Server Services. For each of your SQL Server instances, look at which account is selected on the Log On tab ("Log on as"). I've found, for instance, that changing it to the Local System account resolves the issue you've had. It was the only thing that actually worked for me - and certainly no shortage of people have had the same problem.
It's a file-level security issue: you detached the database under one credential and are attaching it under another. Have a look at the article http://www.sqlservermanagementstudio.net/2013/12/troubleshooting-with-attaching-and.html
and try copying the files to a different location.
I solved a similar issue by granting the administrator full permissions:
right-click > Properties,
Security tab,
in "Group or user names" click Edit,
click Add > Advanced,
click Find Now to list all available accounts,
choose Administrator and add it to the list,
grant it full permissions.
I had the same issue. Someone had detached the files and left, and we were unable to move them to another drive. But after taking ownership of the files (Security --> Advanced --> take ownership with your login ID), then adding my login ID on the Security tab and granting access on the files, I was able to move them.