Host Disk Usage: Warning message regarding disk usage - ambari

I've downloaded version HDF_3.0.2.0_vmware of the Hortonworks Sandbox. I am using VMware Player version 6.0.7 on my laptop. Shortly after startup and logging into Ambari, I see this alert:
The message that is cut off reads: "Capacity Used: [60.11%, 32.3 GB], Capacity Total: [53.7 GB], path=/usr/hdp". I'd hoped to focus on NiFi/Storm development rather than administering the sandbox itself; however, it looks like the VM is undersized. Here are the VM settings I have for storage. How do I go about correcting the underlying issue prompting the alert?

I had a similar issue; it's about node partitioning and the directories mounted for data under HDFS -> Configs -> Settings -> DataNode.
You can check your node partitioning using the command below:
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
Often the HDFS NameNode or DataNode directories point to the root partition. You can change the alert threshold values temporarily, but for a permanent solution you should add additional data directories.
The links below can be helpful for doing this.
https://community.hortonworks.com/questions/21212/configure-storage-capacity-of-hadoop-cluster.html
From the above link: I think your partitioning is wrong and you are not using "/" for the HDFS directory. If you want to use the full disk capacity, you can create a folder under "/" (for example /data/1) on every data node with "mkdir -p /data/1", add it to dfs.datanode.data.dir, and restart the HDFS service.
https://hadooptips.wordpress.com/2015/10/16/fixing-ambari-agent-disk-usage-alert-critical/
https://community.hortonworks.com/questions/21687/how-to-increase-the-capacity-of-hdfs.html
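As a rough sketch of those steps on the sandbox (the /data/1 path and the hdfs:hadoop ownership are just example assumptions), you could check the layout and prepare an extra data directory like this:
# see how the disk is partitioned and how full the alerting path is
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
df -h /usr/hdp
# example only: create a directory on a larger partition for DataNode data
mkdir -p /data/1
chown -R hdfs:hadoop /data/1
Then append /data/1 to dfs.datanode.data.dir under HDFS -> Configs in Ambari and restart HDFS.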

I am not currently able to replicate this, but based on the screenshots the warning just means there is less space available than recommended. If that is the case, everything should still work.
Given that this is a sandbox that should never be used for production, feel free to ignore the warning.
If you want to get rid of the warning sign, a quick fix may be to change the warning threshold via the alert definition.
If this is still not sufficient, or you want to leverage more storage, please follow the steps outlined by #manohar.
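If you go the threshold route, the easiest way is the Ambari web UI (Alerts, select the Host Disk Usage alert, and edit the thresholds). It can also be scripted against the Ambari REST API; a rough sketch, assuming the sandbox's default admin/admin credentials and using CLUSTERNAME and <id> as placeholders:
# list alert definitions and note the id of the "Host Disk Usage" one
curl -u admin:admin -H 'X-Requested-By: ambari' http://localhost:8080/api/v1/clusters/CLUSTERNAME/alert_definitions
# fetch that definition, raise the warning/critical values in the returned JSON,
# and PUT the modified document back to the same URL
curl -u admin:admin -H 'X-Requested-By: ambari' http://localhost:8080/api/v1/clusters/CLUSTERNAME/alert_definitions/<id>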

S3Fs directory listings slow - caching somehow possible?

I'm wondering if there is any way to practically speed up directory listings of an s3fs mount. I have a WebDAV server, used only for read operations, that basically accesses my s3fs mount. The problem is that listing directories is slow, while transfer speed is fine.
So I started to look around the web a bit and stumbled across "JuiceFS"; sadly this was not an option for several reasons. Then I tried "vmtouch" to index the mounted S3 storage into local memory, but this also does not work, as it's a shared resource managed by the FUSE kernel extension.
Even using the s3fs built-in cache does not solve the issue; instead it makes things even worse, as each file first gets downloaded from S3 into the local cache and is then served via WebDAV ...
Is there no way to just speed up directory listings on S3? Basically, that is all I need in the end, not a fancy POSIX-compatible block device like JuiceFS, which essentially creates its own logic on top of your S3 bucket ... Not what I was searching for.
Unfortunately s3fs 1.91 has poor readdir performance. There are a few open issues and pull requests that track future improvements:
1. Option to not use head requests
2. Consider changing -o notsup_compat_dir default
3. Consider changing -o noobj_cache default
4. Increase -o multireq_max
5. Issue parallel requests in get_object_attribute
You can toggle #2-4 via command-line flags today, but #5 is still in progress. #1 is the big win that would give a 100x speedup, but it trades off POSIX compatibility, e.g., no UID/GID and no permissions. One alternative that you can try today is goofys, which implements #1.
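For reference, a hedged example of what toggling those options looks like (exact option names can vary between s3fs releases, so check the s3fs man page for your version), plus the goofys alternative:
# s3fs with the readdir-related options mentioned above
s3fs mybucket /mnt/s3 -o notsup_compat_dir -o enable_noobj_cache -o multireq_max=100
# goofys trades some POSIX fidelity for much faster listings
goofys mybucket /mnt/s3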

VMWare resize disk size using vcenter api

I have been trying to solve this for the past week.
I'm using the vCenter API to add a new disk to an existing VM:
https://vdc-repo.vmware.com/vmwb-repository/dcr-public/1cd28284-3b72-4885-9e31-d1c6d9e26686/71ef7304-a6c9-43b3-a3cd-868b2c236c81/doc/operations/com/vmware/vcenter/vm/hardware/disk.create-operation.html
and was able to do it successfully.
But I cannot figure out how to resize an existing VM disk.
https://vdc-repo.vmware.com/vmwb-repository/dcr-public/1cd28284-3b72-4885-9e31-d1c6d9e26686/71ef7304-a6c9-43b3-a3cd-868b2c236c81/doc/operations/com/vmware/vcenter/vm/hardware/disk.update-operation.html
This disk update operation does not allow updating the "capacity" attribute, so I'm not sure how to resolve this unless I use an SDK.
Can someone please point me in the right direction?
I'm not 100% up to speed on the latest version, but there are several things the REST API cannot do compared to the "old" SDK, which is based on SOAP/WSDL.
The documentation on that page also states that the call only: "Updates the configuration of a virtual disk. An update operation can be used to detach the existing VMDK file and attach another VMDK file to the virtual machine." So there is no mention of changing the size (which is pretty lame, I have to say...).
So unfortunately it seems like you can either:
wait for a new version and hope this gets included, or
use the good old SDK.
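If waiting is not an option and you would rather not code directly against the WSDL, one hedged workaround is to drive the SOAP API through a higher-level client such as govc (from the govmomi project) or PowerCLI. For example, something along these lines (the VM name and disk label are placeholders, and flags may differ slightly between govc versions):
# grow an existing virtual disk to 100 GB via the SOAP-based API
govc vm.disk.change -vm my-vm -disk.label "Hard disk 1" -size 100G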

Way to pass parameters or share a directory/file to a qemu-kvm launched VM on Centos 7.0

I need to be able to pass some parameters to my virtual machine during its boot so it sets itself up properly. To do that I either have to bake the info into the image or somehow pass it as parameters to my qemu-kvm command. There are just a few of these parameters, and if this were VMware we would pass them as OVA params and, when the VM launches, call the OVA environment to get them. But when launching from qemu-kvm I have no such option. I did some homework and found that I could use the virtio-9p driver for sharing files between host and guest. Unfortunately RHEL/CentOS has decided not to support 9p.
With no option of rebuilding my RHEL kernel with the 9p options enabled, how do I solve the above problem? Either solution would work: pass/share some kind of JSON file to the VM (pre-populated on the host), which it will read and use for its setup, OR set some kind of "environment variables" which I can query from within the VM to get these params and continue with setup. Any pointers would help.
If your version of QEMU supports it, you could use its -fw_cfg option to pass information to the guest. If that guest is running a Linux kernel with CONFIG_FW_CFG_SYSFS enabled, you will be able to read out the information from sysfs. An example:
If you launch your VM like so:
qemu-system-x86_64 <OPTIONS> -fw_cfg name=opt/com.example.test,string=qwerty
From inside the guest, you can then get the value back from sysfs:
cat /sys/firmware/qemu_fw_cfg/by_name/opt/com.example.test/raw
There appears to be some driver for Windows as well, but I've never used it.
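Since the question mentions a pre-populated JSON file, note that -fw_cfg can also take a file instead of an inline string; a small sketch (the entry name and path are just examples):
qemu-system-x86_64 <OPTIONS> -fw_cfg name=opt/com.example.config,file=/path/to/params.json
cat /sys/firmware/qemu_fw_cfg/by_name/opt/com.example.config/raw    # inside the guest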
When you boot your guest with -kernel and -initrd you should be able to pass environment variables with -append.
The downside is that you have to keep track of your current kernel and initrd outside of your disk image.
Other possibilities could be a small prepared disk image (as you said), the network/DHCP, a serial link into your guest, ... it really depends on your environment.
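A minimal sketch of that approach (the kernel/initrd paths and the MY_SETTING name are made up for illustration); unrecognized name=value entries on the kernel command line end up in init's environment, and the whole line is visible in /proc/cmdline:
qemu-system-x86_64 <OPTIONS> -kernel ./vmlinuz -initrd ./initrd.img -append "root=/dev/vda1 console=ttyS0 MY_SETTING=qwerty"
cat /proc/cmdline    # inside the guest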
I was just searching to see if this situation had improved and came across this question. Apparently it has not improved.
What I do is output my variable data to a temp file (e.g. /tmp/xxFoo). Usually I write text or a tar archive straight to that file, then truncate it to a minimal size that is a 512-byte multiple, like 64K, otherwise the disk controller won't configure it. Then the VM is started with that file as a raw drive. After the VM has started, the temp file is deleted. From within the guest you can read/cat the raw block device and get the variable data (on BSD, use the c partition as the raw drive).
In Windows guests it's tricky to get at the data. In theory you can read \\.\PhysicalDriveN, but I have never been able to get that to work. Cygwin can do it, and there it works like Linux. The other option is to make your temp file a partitioned and formatted image, but that's a pain to create and update.
As far as sharing a folder goes, I use Samba, which works with just about anything. I usually run several instances of smbd with different configurations.
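A hedged sketch of that trick, assuming a Linux guest where the extra drive shows up as /dev/vdb (device names will vary):
# on the host: write the payload, pad the file to a 512-byte multiple, attach it raw
echo 'role=webserver color=blue' > /tmp/xxFoo
truncate -s 64K /tmp/xxFoo
qemu-system-x86_64 <OPTIONS> -drive file=/tmp/xxFoo,format=raw,if=virtio
# inside the guest: read the raw block device and strip the zero padding
tr -d '\0' < /dev/vdb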
One option is to create an ISO file and pass it as a parameter. This works for both Windows and Ubuntu hosts and guests. You can read the mounted CD-ROM inside the guest OS:
>>qemu-system-x86_64 -drive file=c:/qemuiso/winlive1.qcow2,format=qcow2 -m 8G -drive file=c:\qemuiso\sample.iso,index=1,media=cdrom
On a Linux guest, mount the CD-ROM (Ubuntu):
>>blkid    # to check that the media is there
>>sudo mkdir /mnt/cdrom
>>sudo mount /dev/sr0 /mnt/cdrom    # this step can also be put in crontab
>>cd /mnt/cdrom
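The commands above assume sample.iso already exists; on a Linux host it can be built with genisoimage (mkisofs on some systems). A small sketch, with the source directory as a placeholder:
>>genisoimage -o sample.iso -J -r -V CONFIG /path/to/config-dir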

Cannot connect to Compute Engine CentOS Virtual Machine

I am new to Virtual Machines and CLI so please bear with me.
I have a CentOS 6.5 running on Compute Engine.
I ran yum update (without creating a snapshot of the previous disk - yes, I am an idiot) and now I cannot connect to the machine using the IP address.
I tried the following steps.
Tried to connect through FileZilla - didn't work.
Tried through PuTTY - didn't work.
Tried through the browser option given by the Compute Engine console - didn't work.
I even tried creating a snapshot and starting up another VM with the snapshot - didn't work.
If anyone knows how I can get the files and folders off the previous disk, I can start up a new VM and transfer everything again.
I do not have a backup of the latest database, and this is important.
Please help!
Thanks
Warren
The way to recover is to delete your VM without deleting the disk, then create another VM with its own boot disk, attach and mount the original disk, and recover any data that you need from it.
First things first: on the VM instances page, click on the instance name that is currently running with that disk, and uncheck the box "Delete boot disk when instance is deleted". Then delete the instance.
Now, create a new instance with its own boot disk. To differentiate this new disk from the original boot disk:
use a different OS (or OS version) for the new disk, e.g., if using Ubuntu, try a different version or use Debian; if using RHEL, try CentOS, or vice versa
check which disk is mounted at /; this should be the new disk
Mount the original disk as read-only and recover any information you need. Once you have a backup of your data, you can remount it with read-write access and try to fix it (but back up the data first!).
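If you prefer the command line over the console, a rough equivalent with gcloud looks like this (instance, disk, and zone names are placeholders):
# delete the broken instance but keep its boot disk
gcloud compute instances delete broken-vm --zone=us-central1-a --keep-disks=boot
# create a rescue instance with its own boot disk, then attach the old disk to it
gcloud compute instances create rescue-vm --zone=us-central1-a
gcloud compute instances attach-disk rescue-vm --disk=broken-vm --zone=us-central1-a
# inside rescue-vm: mount the old disk read-only and copy your data off
sudo mkdir -p /mnt/recovery
sudo mount -o ro /dev/sdb1 /mnt/recovery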
I finally solved this problem; thanks to Misha for sending me in the right direction.
The steps are below for anyone who has the same issue.
Problem:
After running yum update on the CentOS server, I was unable to connect back to it.
I tried all possible combinations but had no luck. This seems to be a known issue, as there is some material on the Compute Engine site about it.
Solution:
I followed the steps as Misha suggested. I started up another VM with its own boot disk and then attached the original disk with read write access.
Note: I was unable to mount the disk as just read only.
The commands were
mkdir /mnt/sdb1
mount /dev/sdb1 /mnt/sdb1
Once I had mounted the disk, I copied the files from the html folder on the sdb1 disk to the html folder on sda1 (the new boot disk).
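For reference, that copy was essentially the following (web-root paths are assumptions; adjust them to your layout):
sudo cp -a /mnt/sdb1/var/www/html/. /var/www/html/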
The database was a bit more challenging.
I tried quite a few times, but copying the files from /mnt/sdb1/var/lib/mysql into the mysql folder on the new disk was not working.
I found some tutorials but nothing helped.
Finally I downloaded the files from /mnt/sdb1/var/lib/mysql and put them in the data folder of my local Windows MySQL installation.
Remember you have to download everything, including ib_logfile0, ib_logfile1, and ibdata1, as well as the folder containing the *.frm files.
Then I opened localhost/phpmyadmin and voila... the files were there.
The rest was pretty simple: exporting the databases and uploading the SQL scripts back to the server.
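That export/upload step is the standard mysqldump round trip; a sketch with a placeholder database name:
# on the local machine, once the recovered database shows up
mysqldump -u root -p mydatabase > mydatabase.sql
# on the new server, after creating the empty database
mysql -u root -p mydatabase < mydatabase.sql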
This took me about 12 hours to figure out.
Thanks again Misha.

Rsnapshot without hard links?

I'm using Rsnapshot to back up all my servers onto an EncFS-encrypted partition. The partition was created with the default paranoia mode offered by EncFS, so it doesn't support hard links.
I'm able to run Rsnapshot the first time (creating daily.0, weekly.0, monthly.0) but not the second time.
Is there a way to use Rsnapshot without the hard-linking feature? I know it sounds a bit silly, but my rsnapshot.conf is very well configured and I don't want to switch to other software or to erase and recreate the EncFS volume.
Thank you
Look for this section in the /etc/rsnapshot.conf file:
# If your version of rsync supports --link-dest, consider enabling this.
# This is the best way to support special files (FIFOs, etc) cross-platform.
# The default is 0 (off).
#
#link_dest 0
Make sure "link_dest" is disabled. It is used as a flag when the rsync command is called in the background. As per the man page for rsync:
--link-dest=DIR hardlink to files in DIR when unchanged
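A minimal sketch of the change, plus a dry run to confirm rsnapshot no longer passes --link-dest to rsync (note that rsnapshot.conf requires tabs, not spaces, between keyword and value):
link_dest	0
# validate the configuration and preview the rsync commands that would run
rsnapshot configtest
rsnapshot -t daily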