Is tmpfs (Linux/Ubuntu) on disk or in RAM? - tmpfs

On Wikipedia it says that tmpfs is stored in volatile memory rather than on a persistent storage device. I have done some experiments and come across what looks like a contradiction.
(My environment: Mac OS X host running Ubuntu 16.04 x64 in Parallels)
Typing
free -m
gives back output with a "shared" column, and I do not know what "shared" means. Typing
man free
gives back a description of that column. Does that mean the "shared" column shows what Shmem is in /proc/meminfo? Continuing with
vim /proc/meminfo
shows the Shmem value, so part of tmpfs does seem to be in RAM.
But when I type
df -lh
tmpfs shows up as mounted filesystems with sizes, which makes it look like part of tmpfs is on disk.
I feel confused! Can someone tell me how tmpfs is implemented on Linux? Is it on disk or in RAM, or neither?

tmpfs is a temporary file system that keeps its contents in volatile memory (RAM); it can spill pages to swap under memory pressure, but it never writes to a regular disk partition. The mounts you see in df are real mount points, but the space they report is backed by that memory, not by disk. Please take a look at the manual page:
http://man7.org/linux/man-pages/man5/tmpfs.5.html
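If it helps, here is a quick experiment you could run to see this yourself. It is only a sketch, assuming root access; /mnt/tmptest, the 64M size and the 32M test file are just example values:

# create and mount a small tmpfs
sudo mkdir -p /mnt/tmptest
sudo mount -t tmpfs -o size=64M tmpfs /mnt/tmptest

# note the "shared" column and Shmem before and after writing a file
free -m; grep Shmem /proc/meminfo
sudo dd if=/dev/zero of=/mnt/tmptest/file bs=1M count=32
free -m; grep Shmem /proc/meminfo

# df reports the mount like any filesystem, but the 32M you just wrote came out of RAM
df -h /mnt/tmptest

# clean up
sudo umount /mnt/tmptest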

Raspbian Swapping Log into RAM

I am trying to set up a Raspberry Pi for 24/7 operation. I am using Raspbian with a GUI. In this context, I want to move the directory /var/log into RAM.
I tried to add the following entry to fstab to meet my requirement:
none /var/log tmpfs size=10M,noatime 0 0
How can I verify that the directory really is held in RAM?
Did I forget something?
Do you have any ideas how I can move a directory into RAM?
Thank you for your help.
You can do that with the useful log2ram tool (https://github.com/azlux/log2ram).
It mounts the /var/log folder in RAM and archives your logs into the folder /var/hdd.log once a day, or whenever you want by modifying its cron script. This is very useful for reading your logs after an unexpected stop of your RPi!
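To answer the "how can I verify" part: once the mount is active (whether via your fstab entry or via log2ram), you can confirm that /var/log really is a tmpfs mount, i.e. RAM-backed. A minimal sketch using standard tools:

# should report the filesystem type as tmpfs
findmnt /var/log
df -hT /var/log

# after a reboot, confirm the mount is still there
mount | grep /var/log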

Host Disk Usage: Warning message regarding disk usage

I've downloaded version HDF_3.0.2.0_vmware of the Hortonworks Sandbox. I am using VMware Player version 6.0.7 on my laptop. Shortly after starting up and logging into Ambari, I see this alert:
The message that is cut off reads: "Capacity Used: [60.11%, 32.3 GB], Capacity Total: [53.7 GB], path=/usr/hdp". I had hoped to focus on NiFi/Storm development rather than administering the sandbox itself; however, it looks like the VM is undersized. Here are the VM settings I have for storage. How do I go about correcting the underlying issue prompting the alert?
I had a similar issue; it comes down to node partitioning and the directories mounted for data under HDFS -> Configs -> Settings -> DataNode.
You can check your node partitioning using the command below:
lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
Usually the HDFS NameNode or DataNode directories point to the root partition. You can change the alert threshold values as a temporary measure; for a permanent solution, add additional data directories.
The links below can be helpful for doing that:
https://community.hortonworks.com/questions/21212/configure-storage-capacity-of-hadoop-cluster.html
From the link above: I think your partitioning is wrong; you are not using "/" for the HDFS directory. If you want to use the full disk capacity, you can create a folder under "/" (for example /data/1) on every DataNode with the command "mkdir -p /data/1", add it to dfs.datanode.data.dir, and restart the HDFS service.
https://hadooptips.wordpress.com/2015/10/16/fixing-ambari-agent-disk-usage-alert-critical/
https://community.hortonworks.com/questions/21687/how-to-increase-the-capacity-of-hdfs.html
I am not currently able to replicate this, but based on the screenshots the warning just means that there is less space available than recommended. If that is the case, everything should still work.
Given that this is a sandbox that should never be used for production, feel free to ignore the warning.
If you want to get rid of the warning sign, it may be possible to do a quick fix by changing the warning threshold via the alert definition.
If this is still not sufficient, or you want to leverage more storage, please follow the steps outlined by @manohar.
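For completeness, a rough sketch of the "add another data directory" route described above. The path and ownership below are assumptions based on a typical HDP layout, so adjust them to your sandbox:

# create a data directory on a partition that has free space
sudo mkdir -p /data/1/hdfs/data
sudo chown -R hdfs:hadoop /data/1/hdfs/data

# then, in Ambari: HDFS -> Configs -> Settings -> DataNode,
# append /data/1/hdfs/data to the DataNode directories (dfs.datanode.data.dir)
# and restart the HDFS service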

What happens when the AOF file for Redis is too big, even after rewriting?

I am reading about Redis persistence; right now two mechanisms are available:
AOF
RDB
OK, I will skip the basic meanings of "AOF" and "RDB". My question is about AOF: what will happen when the AOF file for Redis is too big, even after being rewritten? I have searched on Google without result. Someone said that redis-server will fail to start up when the size of the AOF file reaches 3G or 4G. Could anyone tell me? Thanks a lot.
Redis doesn't limit the size of the AOF file. You can safely have a large AOF file. One of my Redis instances writes an AOF file of 95G, and it can reload the file successfully. Of course, it takes a very long time.
Someone said the redis-server will fail to start up when the size of the AOF file reaches 3G or 4G
I'm not sure, but the problem that person ran into may have been a limit of the file system: on some old file systems, a single file cannot exceed 2G or 4G. On modern file systems, that limit has been removed.
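For reference, when Redis triggers an automatic AOF rewrite is configurable in redis.conf, and you can also trigger one by hand; a sketch using the stock directives (the values shown are the usual defaults):

# redis.conf
appendonly yes
# rewrite once the AOF has grown by 100% since the last rewrite...
auto-aof-rewrite-percentage 100
# ...but don't bother for files smaller than this
auto-aof-rewrite-min-size 64mb

You can also force a rewrite at any time with redis-cli BGREWRITEAOF; the rewrite compacts the file, but as noted above a large rewritten AOF is still perfectly valid.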

Resize bound NFS volume in OpenShift Origin

I have a Jenkins server running on OpenShift Origin 1.1.
The pod uses persistent storage over NFS. We have a PV of 3GB and a PVC bound to this volume, and Jenkins is using it. But when we run
sudo du -sh /folder
we see our folder is 15GB. So we want to resize our persistent volume while it's still in use. How can we do this?
EDIT: or is the best approach to recreate the PV and PVC on the same folder as before, so that all the data remains in that folder?
This will be a manual process and is entirely dependent on the storage provider and the filesystem with which the volume is formatted.
Suppose your NFS export has enough space, say 15GB, while your PV is only 3GB; then you can simply edit the PV to increase its declared size.
"oc edit pv [name]" works, and you can change the capacity of the volume there.

Snapshot vs. Volume Size

I am using a public dataset snapshot in Amazon EC2. The data in the snapshot is roughly 150GB and the snapshot itself is 180GB. I knew that by performing operations on the dataset I would need more than 30GB of free space, so I put the snapshot on a 300GB volume. When I look at my stats, though (unfortunately while a process is running, so I think I am about to run out of room), it appears that I am still limited to the snapshot's 180GB.
Is there a way to expand its size to the size of the volume without losing my work?
Is there a possibility that the snapshot is actually continuous with another drive (e.g. /dev/sdb)? (A girl can hope, right?)
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 9.9G 1.1G 8.4G 11% /
none 34G 120K 34G 1% /dev
none 35G 0 35G 0% /dev/shm
none 35G 56K 35G 1% /var/run
none 35G 0 35G 0% /var/lock
none 35G 0 35G 0% /lib/init/rw
/dev/sdb 827G 201M 785G 1% /mnt
/dev/sdf 174G 162G 2.6G 99% /var/lib/couchdb/0.10.0
My instance is running Ubuntu 10.
Is there a way to expand its size to the size of the volume without
losing my work?
That depends on whether you can live with a few minutes of downtime for the computation, i.e. whether stopping the instance (and hence the computation process) is a problem or not. Eric Hammond has written a detailed article about Resizing the Root Disk on a Running EBS Boot EC2 Instance, which addresses a different but closely related problem:
[...] what if you have an EC2 instance already running and you need to
increase the size of its root disk without running a different
instance?
As long as you are ok with a little down time on the EC2 instance (few
minutes), it is possible to change out the root EBS volume with a
larger copy, without needing to start a new instance.
You have already done most of the steps he describes and created a new 300GB volume from the 180GB snapshot, but apparently you have missed the last required step, namely resizing the file system on the volume. Here are the instructions from Eric's article:
Connect to the instance with ssh (not shown) and resize the root file
system to fill the new EBS volume. This step is done automatically at
boot time on modern Ubuntu AMIs:
# ext3 root file system (most common)
sudo resize2fs /dev/sda1
#(OR)
sudo resize2fs /dev/xvda1
# XFS root file system (less common):
sudo apt-get update && sudo apt-get install -y xfsprogs
sudo xfs_growfs /
So the details depend on the file system in use on that volume, but there should be a respective resize command available for all but the most esoteric or outdated ones, none of which I'd expect in a regular Ubuntu 10 installation.
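Applied to your case, and assuming the 300GB volume is the one showing up as /dev/sdf mounted on /var/lib/couchdb/0.10.0 with an ext3/ext4 file system (do check first), the resize would look roughly like this:

# confirm the device and file system type
df -hT /var/lib/couchdb/0.10.0

# grow the file system to fill the volume; ext3/ext4 support growing while mounted
sudo resize2fs /dev/sdf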
Good luck!
Appendix
Is there a possibility that the snapshot is actually continuous with
another drive (e.g. /dev/sdb)?
Not as such; this would require a RAID setup of sorts, which is unlikely to be available on a stock Ubuntu 10 unless somebody provided you with a customized AMI. The size of /dev/sdb actually hints towards this being your Amazon EC2 Instance Storage:
When an instance is created from an Amazon Machine Image (AMI), in
most cases it comes with a preconfigured block of pre-attached disk
storage. Within this document, it is referred to as an instance store;
it is also known as an ephemeral store. An instance store provides
temporary block-level storage for Amazon EC2 instances. The data on
the instance store volumes persists only during the life of the
associated Amazon EC2 instance. The amount of this storage ranges from
160GiB to up to 3.3TiB and varies by Amazon EC2 instance type. [...] [emphasis mine]
Given that this storage does not persist across instance termination (in contrast to the EBS storage we have all come to enjoy; the different behavior is detailed in Root Device Storage), it should be treated with appropriate care (i.e. never store anything on instance storage that you could not afford to lose).
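If you want to check which of your devices are instance store and which are EBS, the instance metadata service lists the block device mapping; a quick sketch, run from inside the instance (on an instance of that era the plain metadata endpoint should respond without extra setup):

# entries named ephemeralN are instance store; the others map to EBS or the root device
curl http://169.254.169.254/latest/meta-data/block-device-mapping/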