Resize bound NFS volume in OpenShift Origin - nfs

I have a Jenkins server running on OpenShift Origin 1.1
The pod uses persistent storage over NFS. We have a PV of 3GB and a PVC bound to it, and Jenkins is using it. But when we run sudo du -sh /folder, we see that the folder is actually 15GB. So we want to resize our persistent volume while it's still in use. How can we do this?
EDIT: Or is the best way to recreate the PV and PVC on the same folder as before, so all the data will remain in that folder?

This will be a manual process and is entirely dependent on the storage provider and the filesystem with which the volume is formatted.
If the underlying NFS export has enough space (say 15GB) and your PV is declared as only 3GB, you can simply edit the PV to increase its size.
"oc edit pv [name]" works, and you can edit the size of the volume there.

Related

Image file on root node for a virtual machine - can it be moved?

I am using Proxmox and created a virtual machine yesterday. Today I noticed that there is hardly any space left on my root node's /dev/mapper disk, which causes the VM to stop. I found out that there is an image file (extension .qcow2) in the directory /var/lib/vz/images which belongs to the newly created VM and consumes quite a lot of disk space.
I know that images can be used to install operating systems from, and I asked myself whether this image file is a necessary component for the VM to work or whether it is only created as a kind of backup. If it is a backup file, I could move it to another disk to solve my problem.
Thanks for your help.
It's your virtual machine's disk; you cannot just remove it. You can create the VM disk with "Thin provision" checked in the storage configuration on the hypervisor, so it only consumes what you actually use instead of allocating all of the space at once. Use Clonezilla or dd to clone the data over to the new disk.
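For the cloning step, a minimal sketch with dd, assuming the old disk appears as /dev/sda and the new thin-provisioned disk as /dev/sdb when booted from a live environment (hypothetical device names; verify with lsblk before running, as dd overwrites the target):
lsblk
dd if=/dev/sda of=/dev/sdb bs=4M status=progress conv=noerror,sync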

Raspbian Swapping Log into RAM

I am trying to set up a Raspberry Pi for 24/7 operation. I am using Raspbian with a GUI. In this context, I want to move the directory /var/log into RAM.
I tried to add the following entry to fstab to achieve this:
none /var/log tmpfs size=10M,noatime 0 0
How can I verify that the directory is now held in RAM?
Did I forget something?
Do you have any ideas how I can move a directory into RAM?
Thank you for your help.
You can do that with the useful log2ram tool (https://github.com/azlux/log2ram).
It mounts the /var/log folder into RAM and archives your logs to the folder /var/hdd.log once a day, or whenever you want by modifying the cron script. This is very useful for reading your logs after an unexpected stop of your RPi!
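Whichever route you take (the fstab entry or log2ram), a quick way to confirm that /var/log is really backed by RAM is to check the mount table; these are standard tools on Raspbian:
findmnt /var/log
# or
df -h /var/log
mount | grep -w /var/log
# in each case you should see a tmpfs filesystem listed for /var/log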

Host Disk Usage: Warning message regarding disk usage

I've downloaded version HDF_3.0.2.0_vmware of the Hortonworks Sandbox. I am using VMWare Player version 6.0.7 on my laptop. Shortly after startup/logging into Ambari, I see this alert:
The message that is cut off reads: "Capacity Used: [60.11%, 32.3 GB], Capacity Total: [53.7 GB], path=/usr/hdp". I had hoped to focus on NiFi/Storm development rather than administering the sandbox itself; however, it looks like the VM is undersized. Here are the VM settings I have for storage. How do I go about correcting the underlying issue prompting the alert?
I had a similar issue; it comes down to node partitioning and the directories mounted for data under HDFS -> Configs -> Settings -> DataNode.
You can check your node partitioning using the command below:
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
Often the HDFS namenode or datanode directories point to the root partition. You can change the alert threshold values as a temporary fix; for a permanent solution, add additional data directories.
The links below can be helpful for doing this (a short sketch of the commands follows after the links).
https://community.hortonworks.com/questions/21212/configure-storage-capacity-of-hadoop-cluster.html
From the link above: I think your partitioning is wrong and you are not using "/" for the HDFS directory. If you want to use the full disk capacity, you can create a folder under "/" (for example /data/1) on every data node with "mkdir -p /data/1", add it to dfs.datanode.data.dir, and restart the HDFS service.
https://hadooptips.wordpress.com/2015/10/16/fixing-ambari-agent-disk-usage-alert-critical/
https://community.hortonworks.com/questions/21687/how-to-increase-the-capacity-of-hdfs.html
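A minimal sketch of that permanent fix, assuming a new directory /data/1 and the default Ambari data path /hadoop/hdfs/data (both hypothetical; adjust to your layout):
# on every DataNode
mkdir -p /data/1/hdfs/data
chown -R hdfs:hadoop /data/1/hdfs/data
# then in Ambari (HDFS -> Configs) append the new path to dfs.datanode.data.dir, e.g.
#   /hadoop/hdfs/data,/data/1/hdfs/data
# and restart the HDFS service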
I am not currently able to replicate this, but based on the screenshots the warning just means there is less space available than recommended. If that is the case, everything should still work.
Given that this is a sandbox that should never be used for production, feel free to ignore the warning.
If you want to get rid of the warning sign, a quick fix may be to raise the warning threshold via the alert definition.
If this is still not sufficient, or you want to leverage more storage, please follow the steps outlined by @manohar.

Where should a dockerized web application store uploaded files?

I'm building a web application that needs to allow users to upload profile pictures. I want the application to be self-contained, so that people don't need an S3 or other cloud storage account.
It's best to keep Docker containers as disposable as possible, so I guess I should create a volume. I want the volume to be created automatically, so people don't have to specify one when running the container, but the documentation for the VOLUME instruction in Dockerfiles confuses me.
The VOLUME instruction creates a mount point with the specified name and marks it as holding externally mounted volumes from native host or other containers.
What does it mean to be marked as such? The data is going to be written by the application; it's not coming from an external source.
When you mark a volume in the Dockerfile, say VOLUME /site/uploads, it makes it very easy to later run another container with --volumes-from <container-name> and have /site/uploads available in the new container with all the data that has been written there (and that will be written, if the first container is still running).
Also, you'll be able to see that volume with docker volume ls after you start the container the first time.
The only problem you might have if you delete the container is that you will lose the mapping provided by docker inspect <container-name> that tells you which volume your container created. To see the volume your container created really clearly and quickly, try docker inspect <container-name> | jq '.[].Mounts' if you have jq installed. Otherwise, docker inspect <container-name> | grep Mounts -A 10 might be enough when you only have one volume (you can also just wade through all the JSON yourself).
Even if you remove the container that created the volume, the volume will remain on your system, viewable with docker volume ls, unless you run docker volume rm <volume-name>.
Note: I'm using docker version 1.10.3
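A minimal sketch of this pattern, assuming a hypothetical image called myapp whose Dockerfile declares the upload directory as a volume:
# Dockerfile (hypothetical base image)
FROM nginx:alpine
VOLUME /site/uploads

# build and run; Docker creates an anonymous volume for /site/uploads automatically
docker build -t myapp .
docker run -d --name app1 myapp
docker volume ls

# a second container can reuse the same volume
docker run -d --name app2 --volumes-from app1 myapp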
You will not have problems with that; the images will be written to the mounted filesystem just fine.
You may have to adjust the permissions on the uploads folder so that the application can write to it (see the sketch below).
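A minimal sketch of that permission fix, assuming the application inside the container runs as the (hypothetical) user www-data and the container is named app1:
docker exec app1 chown -R www-data:www-data /site/uploads
# or bake it into the Dockerfile:
#   RUN mkdir -p /site/uploads && chown -R www-data:www-data /site/uploads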

Snapshot vs. Volume Size

I am using a public dataset snapshot in Amazon EC2. The data in the snapshot is roughly 150GB and the snapshot itself is 180GB. I knew that by performing operations on the dataset I would need more than 30GB of free space, so I put the snapshot in a 300GB volume. When I look at my stats, though (unfortunately while a process is running, so I think I am about to run out of room), it appears that the usable space is still limited to 180GB.
Is there a way to expand its size to the size of the volume without losing my work?
Is there a possibility that the snapshot is actually continuous with another drive (e.g. /dev/sdb)? (A girl can hope, right?)
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.9G 1.1G 8.4G 11% /
none 34G 120K 34G 1% /dev
none 35G 0 35G 0% /dev/shm
none 35G 56K 35G 1% /var/run
none 35G 0 35G 0% /var/lock
none 35G 0 35G 0% /lib/init/rw
/dev/sdb 827G 201M 785G 1% /mnt
/dev/sdf 174G 162G 2.6G 99% /var/lib/couchdb/0.10.0
My instance is running Ubuntu 10.
Is there a way to expand its size to the size of the volume without
losing my work?
That depends on whether you can live with a few minutes downtime for the computation, i.e. whether stopping the instance (hence the computation process) is a problem or not - Eric Hammond has written a detailed article about Resizing the Root Disk on a Running EBS Boot EC2 Instance, which addresses a different but pretty related problem:
[...] what if you have an EC2 instance already running and you need to
increase the size of its root disk without running a different
instance?
As long as you are ok with a little down time on the EC2 instance (few
minutes), it is possible to change out the root EBS volume with a
larger copy, without needing to start a new instance.
You have already done most of the steps he describes and created a new 300GB volume from the 180GB snapshot, but apparently you have missed the last required step indeed, namely resizing the file system on the volume - here are the instructions from Eric's article:
Connect to the instance with ssh (not shown) and resize the root file
system to fill the new EBS volume. This step is done automatically at
boot time on modern Ubuntu AMIs:
# ext3 root file system (most common)
sudo resize2fs /dev/sda1
#(OR)
sudo resize2fs /dev/xvda1
# XFS root file system (less common):
sudo apt-get update && sudo apt-get install -y xfsprogs
sudo xfs_growfs /
So the details depend on the file system in use on that volume, but there should be a respective resize command available for all but the most esoteric or outdated ones, none of which I'd expect in a regular Ubuntu 10 installation.
Good luck!
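In this particular case the full data volume appears to be /dev/sdf (mounted at /var/lib/couchdb/0.10.0 in the df output above) rather than the root device, so a minimal sketch, assuming it is formatted with ext3/ext4 (check first; the device name is taken from the question):
# confirm the filesystem type
df -hT /var/lib/couchdb/0.10.0
# grow an ext3/ext4 filesystem to fill the 300GB volume (works while mounted)
sudo resize2fs /dev/sdf
# verify
df -h /var/lib/couchdb/0.10.0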
Appendix
Is there a possibility that the snapshot is actually continuous with
another drive (e.g. /dev/sdb)?
Not just like that - this would require a RAID setup of sorts, which is unlikely to be available on a stock Ubuntu 10, unless somebody provided you with a suitably customized AMI. The size of /dev/sdb does actually hint towards this being your Amazon EC2 Instance Storage:
When an instance is created from an Amazon Machine Image (AMI), in
most cases it comes with a preconfigured block of pre-attached disk
storage. Within this document, it is referred to as an instance store;
it is also known as an ephemeral store. An instance store provides
temporary block-level storage for Amazon EC2 instances. The data on
the instance store volumes persists only during the life of the
associated Amazon EC2 instance. The amount of this storage ranges from
160GiB to up to 3.3TiB and varies by Amazon EC2 instance type. [...] [emphasis mine]
Given that this storage is not persisted on instance termination (in contrast to the EBS storage we have all gotten used to - the difference in behavior is detailed in Root Device Storage), it should be treated with appropriate care (i.e. never store anything on instance storage that you couldn't afford to lose).