How to shrink/resize APFS? - macos-high-sierra

I want to shrink the boot file system (formatted as APFS). How can I do that?
In macOS Sierra, an HFS+ volume could be shrunk without a reboot or remount (in Disk Utility), but in macOS High Sierra this seems impossible.

The command you are looking for is diskutil apfs resizeContainer:
Usage:  diskutil APFS resizeContainer <inputDisk> <newSize> [<triple>*]
        where <inputDisk> = A Container Reference DiskIdentifier (preferred)
                            or a Physical Store DiskIdentifier
              <newSize>   = the desired new Container or Physical Store size
              <triple>    = a { fileSystemPersonality, name, size } tuple

Resize an APFS Container. One of the Container's Physical Store disks will be
resized, and therefore its Container will be resized by an equal amount. You
can specify a new size of zero to request an automatic grow-to-fit operation.
If the new size implies a shrink, you can specify ordered triples in the same
manner as diskutil partitionDisk, etc, to fill the partition map's free space
gap that would otherwise result. If there is more than one Physical Store and
you specify a Container Reference, the appropriate Physical Store will be
chosen automatically. Ownership of the affected disks is required.

Example: diskutil apfs resizeContainer disk5 110g
         diskutil apfs resizeContainer disk5 110g jhfs+ foo 10g ms-dos BAR r
         diskutil apfs resizeContainer disk0s2 90g jhfs+ foo 10g ms-dos BAR r
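For example, a minimal sketch of shrinking the container on a typical single-disk Mac (the identifier disk1 and the sizes below are illustrative; check yours with diskutil list first):

diskutil list                                    # find the Container Reference, e.g. disk1
sudo diskutil apfs resizeContainer disk1 200g    # shrink the container to 200 GB, leaving free space after it
sudo diskutil apfs resizeContainer disk1 200g jhfs+ Extra r   # or fill the freed space with a JHFS+ volume named "Extra"

The shrink can only succeed if the APFS volumes inside the container fit into the new size; otherwise diskutil reports that there is not enough free space.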

Related

Why is the Flink container vcore size always 1?

I am running Flink on YARN (more precisely, in an AWS EMR YARN cluster).
From the Flink documentation and source code I understand that, by default, each TaskManager container requests as many vcores from YARN as the number of slots per TaskManager.
I also confirmed this from the source code:
// Resource requirements for worker containers
int taskManagerSlots = taskManagerParameters.numSlots();
int vcores = config.getInteger(ConfigConstants.YARN_VCORES,
        Math.max(taskManagerSlots, 1));
Resource capability = Resource.newInstance(containerMemorySizeMB,
        vcores);
resourceManagerClient.addContainerRequest(
        new AMRMClient.ContainerRequest(capability, null, null,
                priority));
When I start Flink with -yn 1 -ys 3, I expect YARN to allocate 3 vcores to the single TaskManager container, but when I check the number of vcores per container in the YARN ResourceManager web UI, it is always 1. The ResourceManager logs also show 1 vcore.
I debugged the Flink source code down to the lines pasted above, and the value of vcores there is 3.
This really confuses me. Can anyone help clarify it? Thanks.
An answer from Kien Truong:
Hi,
You have to enable CPU scheduling in YARN; otherwise it always shows that only 1 vcore is allocated to each container, regardless of how many Flink tries to allocate. You should add (or edit) the following property in capacity-scheduler.xml:
<property>
  <name>yarn.scheduler.capacity.resource-calculator</name>
  <!-- <value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value> -->
  <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
</property>
Also, if the TaskManager memory is, for example, 1400 MB, Flink reserves some of it for off-heap memory, so the actual heap size is smaller.
This is controlled by two settings:
containerized.heap-cutoff-min: default 600 MB
containerized.heap-cutoff-ratio: default 15% of the TM's memory
That's why your TM's heap size is limited to ~800 MB: the cut-off is max(600 MB, 15% of 1400 MB = 210 MB) = 600 MB, and 1400 - 600 = 800 MB.
#yinhua.
When you use the command ./bin/yarn-session.sh to start a session, you need to add the -s argument:
-s,--slots   Number of slots per TaskManager
Details:
https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/deployment/yarn_setup.html
https://ci.apache.org/projects/flink/flink-docs-release-1.4/ops/cli.html#usage
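For example, a session with one TaskManager and three slots could be started like this (the memory values are illustrative, and the options assume the Flink 1.4 yarn-session.sh documented above):

./bin/yarn-session.sh -n 1 -s 3 -jm 1024 -tm 1400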
I finally got the answer.
It's because YARN uses the "DefaultResourceCalculator" allocation strategy, so only memory is counted by the YARN RM; even though Flink requested 3 vcores, YARN simply ignores the CPU core count.

Adding extra device to namespace using asinfo

From what I saw in the configuration documentation, it's easy to configure multiple devices within the same namespace using:
namespace <namespace-name> {
    memory-size <SIZE>G           # Maximum memory allocation for primary
                                  # and secondary indexes.
    storage-engine device {       # Configure the storage-engine to use persistence.
        device /dev/<device>      # Raw device. Maximum size is 2 TiB.
        # device /dev/<device>    # (optional) another raw device.
        write-block-size 128K     # Adjust block size to make it efficient for SSDs.
    }
}
Is there any way I can do that without restarting the asd service, for example using the asinfo tool?
No, you cannot add devices dynamically.
User also posted here:
https://discuss.aerospike.com/t/adding-extra-device-to-namespace-using-asinfo/4525
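Since devices cannot be added dynamically, the only way to pick up an extra device is to edit the configuration and restart the daemon. A minimal sketch, assuming a systemd-based install and the default configuration path:

sudo vi /etc/aerospike/aerospike.conf    # add the extra 'device /dev/<device>' line to the namespace
sudo systemctl restart aerospike         # restart asd so the namespace picks up the new device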

Which is the correct Disk Used Size information getting from M_DISK_USAGE or M_DISKS view?

There are two system views provided by the SAP HANA database: M_DISK_USAGE and M_DISKS.
While comparing the two views, I noticed that the USED_SIZE information for the DATA, LOG, and other usage types differs between them.
Can someone please help me understand: if I want to monitor the current disk usage of all usage types, which view should I use?
The question really is what you want to know.
If you want to know how large the filesystems of the HANA volumes are and how much space is left on them, then M_DISKS is the right view:
Show free disk space in KiB:
/hana/data/SK1> df -BK .
Filesystem    1K-blocks    Used         Available    Use%  Mounted on
/dev/sda5     403469844K   134366892K   269102952K   34%   /hana
Compared to the M_DISKS view (sizes converted from bytes to KiB):
DISK_ID  DEVICE_ID  HOST      PATH                             SUBPATH   FILESYSTEM_TYPE  USAGE_TYPE   TOTAL_SIZE_KB  USED_SIZE_KB
1        113132     skullbox  /hana/data/SK1/                  mnt00001  xfs              DATA         403469844      134366892
2        113132     skullbox  /usr/sap/SK1/HDB01/backup/data/            xfs              DATA_BACKUP  403469844      134366892
3        113132     skullbox  /hana/log/SK1/                   mnt00001  xfs              LOG          403469844      134366892
4        113132     skullbox  /usr/sap/SK1/HDB01/backup/log/             xfs              LOG_BACKUP   403469844      134366892
5        113132     skullbox  /usr/sap/SK1/HDB01/skullbox/               xfs              TRACE        403469844      134366892
M_DISK_USAGE, on the other hand, shows what the HANA instance has allocated in total, grouped by usage type.
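To compare the two views directly, a quick sketch using hdbsql (host, port, user, and password are placeholders; the column names follow the standard system view definitions):

# Filesystem-level sizes per volume, as shown above
hdbsql -n <host>:<port> -u <user> -p <password> \
  "SELECT HOST, USAGE_TYPE, TOTAL_SIZE, USED_SIZE FROM M_DISKS"

# Space actually allocated by the HANA instance, grouped by usage type
hdbsql -n <host>:<port> -u <user> -p <password> \
  "SELECT HOST, USAGE_TYPE, USED_SIZE FROM M_DISK_USAGE"

Both views report sizes in bytes, so divide by 1024 to compare against df -BK.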

Aerospike cluster does not reclaim available blocks

We use Aerospike in our projects and ran into a strange problem.
We have a 3-node cluster, and after some node restarts it stopped working.
So we ran a test to illustrate our problem.
We set up a test cluster: 3 nodes, replication factor = 2.
Here is our namespace config:
namespace test {
    replication-factor 2
    memory-size 100M
    high-water-memory-pct 90
    high-water-disk-pct 90
    stop-writes-pct 95
    single-bin true
    default-ttl 0
    storage-engine device {
        cold-start-empty true
        file /tmp/test.dat
        write-block-size 1M
    }
}
We wrote 100 MB of test data, and after that we had this situation:
available pct about 66% and disk usage about 34%.
All good :)
Then we stopped one node. After migration we saw available pct = 49% and disk usage = 50%.
We returned the node to the cluster, and after migration disk usage went back to its previous value of about 32%, but available pct on the old nodes stayed at 49%.
We stopped a node one more time:
available pct = 31%.
Repeating once more, we got:
available pct = 0%.
Our cluster crashed, and clients got AerospikeException: Error Code 8: Server memory error.
So how can we reclaim available pct?
If your defrag-q is empty (and you can see whether it is by grepping the logs), then the issue is likely that your namespace is smaller than your post-write-queue. Blocks on the post-write-queue are not eligible for defragmentation, so you would see avail-pct trending down with no defragmentation to reclaim the space.
By default the post-write-queue is 256 blocks, which in your case (1 MiB write-block-size) equates to 256 MiB. If your namespace is smaller than that, you will see avail-pct continue to drop until you hit stop-writes. You can reduce the size of the post-write-queue dynamically (i.e. no restart needed) with the following command; here I suggest 8 blocks:
asinfo -v 'set-config:context=namespace;id=<NAMESPACE>;post-write-queue=8'
If you are happy with this value you should amend your aerospike.conf to include it so that it persists after a node restart.
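To check this on a running node, a rough sketch (the log path and namespace name are illustrative, and the exact stat names differ slightly between server versions):

# The defrag queue depth is reported on the per-device log ticker lines
grep defrag-q /var/log/aerospike/aerospike.log | tail -n 5

# Current post-write-queue and available percentage for the namespace
asinfo -v 'namespace/test' | tr ';' '\n' | grep -E 'post-write-queue|avail'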

Aerospike: Failed to store record. Error: (13L, 'AEROSPIKE_ERR_RECORD_TOO_BIG', 'src/main/client/put.c', 106)

I get the following error while storing data to Aerospike (client.put). I have enough space on the drive.
Aerospike: Failed to store record. Error: (13L, 'AEROSPIKE_ERR_RECORD_TOO_BIG', 'src/main/client/put.c', 106).
Here is my Aerospike server namespace configuration:
namespace test {
    replication-factor 1
    memory-size 1G
    default-ttl 30d                      # 30 days, use 0 to never expire/evict.
    storage-engine device {
        file /opt/aerospike/data/test.dat
        filesize 2G
        data-in-memory true              # Store data in memory in addition to file.
    }
}
By default, namespaces have a write-block-size of 1 MiB. This is also the maximum configurable size and limits the maximum object size the application can write to Aerospike.
If you need to go beyond 1 MiB see Large Data Types as a possible solution.
UPDATE 2019/09/06
Since Aerospike 3.16, the write-block-size limit has been increased from 1 MiB to 8 MiB.
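To see which limit a given server is actually enforcing, one quick check (a sketch assuming the 'test' namespace from the configuration above) is to read the namespace configuration back with asinfo:

# Print the effective write-block-size for the 'test' namespace
asinfo -v 'namespace/test' | tr ';' '\n' | grep write-block-size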
Yes, but unfortunately Aerospike has deprecated LDT (https://www.aerospike.com/blog/aerospike-ldt/). They now recommend using Lists or Maps instead, but as stated in their post:
"the new implementation does not solve the problem of the 1MB Aerospike database row size limit. A future key feature of the product will be an enhanced implementation that transcends the 1MB limit for a number of types"
In other words, this is still an unsolved problem when storing your data on SSD or HDD. However, you can store larger data in in-memory namespaces.