I've created an APFS volume on an MBP running High Sierra in order to use it as a Time Machine drive for the iMac I use at work (we don't have a backup solution, so this is my hacky way to keep myself safe, as the MBP itself is backed up separately).
I foolishly assumed 100GB would be enough (as I only back up certain folders); however, in less than a week the iMac has filled it and Time Machine is complaining about insufficient disk space.
I am trying to figure out a way to expand the volume (to, say, 150GB), but every help page and blog post I can find shows how to shrink a container.
I've tried using Disk Utility but it doesn't give me the option to resize individual volumes (only the overall container).
Disk Utility Screenshot
I've also tried using diskutil in Terminal, but I don't want to break things.
I'd prefer not to dump and re-create the partition if possible, but if it's the only option I will deal with it.
Cheers
Matt
Find out your container first with:
diskutil apfs list
Say you found out it's disk0; then you do:
sudo diskutil apfs resizeContainer disk0 0
In my example (see screenshots) you can see that the disk1 container grew, and this was the one line in the output that indicates a happier you:
"Growing APFS Physical Store disk0s2 from 864,999,997,440 to
1,000,345,825,280 bytes"
Before and after output of diskutil list: (note disk1)
Short answer: APFS volumes cannot be resized, only APFS containers.
Longer story:
An APFS volume exists inside an APFS container but not outside of it. The APFS container is the "thing" that actually occupies space, i.e. bytes, on a medium, not the APFS volume. Hence only APFS containers can be resized, not APFS volumes. If one thinks of APFS volumes as being "just fancy labels for virtual space", it becomes much clearer what is what. The direct effect of this is that within an APFS container the volumes coexist and compete for the remaining/free/unallocated space of the container.
The longer story is true, but it does not mention two facts about APFS volumes.
Quota: When creating a volume, the user can specify the maximum amount of the container's total space that the new volume will be allowed to use. That volume cannot grow beyond its quota.
Reserve: When creating a volume, the user can specify an amount of space in the container that will be reserved for the new volume. No other volume in the container can consume space if doing so would reduce the free space below the sum of the other volumes' reservations.
So, the next question is: can the quota or reserve be changed? Not using Disk Utility. How about using diskutil in the Terminal interface? (Answer: maybe so. But, dear reader, be very careful. And know that Apple Care says: "We do not support Terminal.")
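For what it's worth, diskutil will at least show any quota or reserve that has been set, and a new volume can be created with them specified up front. A rough sketch only (the container reference, volume name, and sizes are placeholders, and creating a replacement volume means starting the backups over, so treat it with care):
diskutil apfs list                                                            # shows containers, volumes, and any quota/reserve values
sudo diskutil apfs addVolume disk1 APFS "TM Backup" -quota 150g -reserve 50g  # new volume capped at 150GB, with 50GB guaranteed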
I'm running hypergraphql in a docker container with the Dockerfile:
FROM adoptopenjdk/openjdk8
RUN curl https://www.hypergraphql.org/resources/hypergraphql-1.0.3-exe.jar --output hypergraphql-1.0.3-exe.jar
EXPOSE 8080
CMD ["java", "-jar", "hypergraphql-1.0.3-exe.jar", "--config", "/config/config.json"]
I think I should adjust the JVM heap size inside my container in order to prevent the JVM from taking all available memory (see https://developers.redhat.com/blog/2017/03/14/java-inside-docker/).
But I don't have any idea about the default JVM heap size. How can I find it, and what could be the optimal value for it?
The default for "max heap size" is usually 25% of available RAM.
The JVM used to base this on the host's memory, but it was later fixed to respect container limits as well (the fix was backported to Java 8u191: https://merikan.com/2019/04/jvm-in-a-container/#backported-to-java-8)
Usually the easiest option to adjust the default "max heap size" is -XX:MaxRAMPercentage; for example, -XX:MaxRAMPercentage=60.0 changes the default 25% to 60%.
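As a sketch of that (the container name is a placeholder, the 60% value is only illustrative, and -XX:MaxRAMPercentage needs Java 8u191 or newer): first check which defaults the JVM actually picked inside the running container, then set the percentage in the Dockerfile's CMD.
docker exec -it <container> java -XX:+PrintFlagsFinal -version | grep -Ei 'maxheapsize|maxram'   # show the chosen defaults
CMD ["java", "-XX:MaxRAMPercentage=60.0", "-jar", "hypergraphql-1.0.3-exe.jar", "--config", "/config/config.json"]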
As apangin said, there's no "optimal heap size" - you'll need to experiment with it and see what's suitable for your application. You can try to aggressively downsize the "max heap size" to the point where your application is barely usable and then multiply that by a factor of 3-5:
Gil Tene - Really Understanding Garbage Collection (QCon SF 2019) (start at 56:05)
Start with a big heap and shrink it down until it breaks; then triple that size and go home
How to estimate memory consumption?
For the impatient ones – the answer will be to start with the memory equal to approximately 5 x [amount of memory consumed by Live Data] and start the fine-tuning from there.
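One way to approximate that live-data figure on a running JVM, assuming the JDK tools are available in the image (the <pid> below is a placeholder, and jmap -histo:live forces a full GC, so don't run it casually in production):
jps -l                                # find the JVM's PID inside the container
jmap -histo:live <pid> | tail -n 3    # the "Total" line roughly approximates the live data set
jstat -gc <pid> 1000                  # or watch old-gen usage (the OU column, in KB) settle after full GCs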
I use X-Plane 11 on Windows 10 Home for professional-style flying over airports, and sometimes, after about an hour, the software freezes and stops running while I am flying a freighter. Recovering is slow, at least 5 minutes, since the 3D world, the improved view, and the HD camera in the cockpit all have to restart, and when I take back control of the aircraft on the ground the remote-view angle parameters are wrong. Is there anything in the latest update (or an older version) that allows a quick restart, avoids the freezes, and restores the cockpit equipment settings without reloading the previously saved file, so I am safe against a new crash of this software, which I use daily and have known for years?
There is a solution: increase the virtual memory and adjust the screen/video preferences; see https://www.x-plane.com/kb/configuring-x-plane-to-use-less-virtual-memory/
Reducing X-Plane’s Virtual Memory Use:
There are a few ways you can reduce the amount of virtual memory that X-Plane needs to operate.
The first thing you can do is remove add-ons. Plugins, custom airplanes, and custom scenery can all increase the amount of virtual memory that X-Plane uses. Try removing add-ons and see if the problem goes away. Add-ons can also increase the amount of memory that X-Plane itself uses. You may have to mix and match add-ons depending on your activities.
The second thing you can do is turn down X-Plane’s settings. Here are some settings that make a difference in virtual memory usage.
AI traffic. More planes use more memory, and more complex and higher-detail planes use more memory.
Texture resolution: turn texture compression on – it saves a lot of memory. Turn down texture res as needed. In particular, do not run with extreme res and uncompressed textures.
Trees – turn down the forest density to save memory.
4x SSAA – when in HDR mode, turn off 4x SSAA if you use a large monitor or a large window size.
If needed, turn down object density.
I'm trying to work around an issue which has been bugging me for a while. In a nutshell: on what basis should one assign a max heap size for a resource-hogging application, and is there a downside to it being too large?
I have an application used to visualize huge medical data sets, which can eat up to several gigabytes of memory if several imaging volumes are opened side by side. Caching the data to be viewed is essential for a fluent workflow. The software is supported on Windows workstations and is started with a bootloader, which assigns the heap size and launches the main application. The actual memory needed by the main application is directly proportional to the data being viewed and cannot be determined by the bootloader, because that would require reading the data, which would ultimately consume too much time.
So, to ensure that the JVM has enough memory during launch, we set Xmx as large as we dare, based (by current design) on the maximum physical memory of the workstation. However, is there any downside to this? I've read (in a post from 2008) that it is possible for native processes to hog excess heap space, which can lead to memory errors during runtime. Should I maybe also sniff for free virtual memory or paging file size prior to assigning heap space? How would you deal with this situation?
Oh, and this is my first post to these forums. Nice to meet you all and be gentle! :)
Update:
Thanks for all the answers. I'm not sure if I put my words right, but my problem arose from the fact that I have zero knowledge of the hardware this software will be run on, but would nevertheless like to assign as much heap space for the software as possible.
I came to a solution of assigning a heap of 70% of physical memory IF there is sufficient amount of virtual memory available - less otherwise.
You can have heap sizes of around 28 GB with little impact on performance, especially if you have large objects (lots of small objects can increase GC pause times).
Heap sizes of 100 GB are possible but have downsides, mostly because they can have high pause times. If you use Azul Zing, it can handle much larger heap sizes significantly more gracefully.
The main limitation is the size of your memory. If your heap exceeds that, your application and your computer will run very slowly or become unusable.
A standard way around these issues with mapping software (which has to be able to map the whole world, for example) is to break your images into tiles. This way you only display the image that is on the screen (or the portions which are on the screen). If you need to be able to zoom in and out, you might need to store the data at two to four levels of scale. Using this approach you can view a map of the whole world on your phone.
It's best not to set the JVM max memory to greater than 60-70% of workstation memory, and in some cases even lower, for two main reasons. First, what the JVM consumes on the physical machine can be 20% or more greater than the heap, due to GC mechanics. Second, the representation of a particular data entity in the JVM heap may not be the only physical copy of that entity in the machine's RAM, as the OS has caches and buffers and so forth around the various I/O devices from which it grabs these objects.
I have created a Debian VM inside VirtualBox with an encrypted partition, so the OS runs on an encrypted partition. When creating the disk image (VHD) I chose dynamic allocation, but after the OS installation it looks like the disk image is consuming the entire allocated space; the image size is now 20GB. Is it possible to compress or compact it to a smaller size? I have seen the documentation on compacting a disk image in VirtualBox, but I need to know whether the same can be done for an encrypted disk image.
Your help is greatly appreciated.
Thanks.
It depends on the type of encryption being used. Since you're using Debian I assume you're using LUKS, which is inflexible. The space has to be pre-allocated and therefore the image will utilise the full space allocated to it.
Yes, there is a way to do it, but it is quite complex.
Each time you need/want to compact it, you have to go through these steps carefully.
(Maybe this is not really needed; try first without it.) Fill all free space inside the 'clear' (decrypted) mounted partitions with zeros. The free space is then zeroed from the 'clear' point of view, but it will not be zeroed from the 'encrypted' point of view, since the encryption layer encrypts those zeros.
Shut down the machine and boot from a live CD ISO that lets you mount both the virtual HDD you are using and a new, empty, dynamically allocated one.
Set up the partition scheme and encryption identically on the new one, but make sure the encryption setup does not do the 'fill' step, so it does not write to every sector. This is the most important part: this way the new virtual disk stays small in size but is still encrypted with your LUKS, etc.
At this point only the partition scheme and encryption exist on the new 'small' disk. Now mount both encrypted disks, the old and the new, so both can be seen in 'plain' (decrypted) form at the same time.
Again, this is very important: clone from the old 'plain' view to the new 'plain' view only the sectors that contain data (most partition-cloning tools do that).
As I say, the most important things (to get a smaller virtual HDD) are:
Create a new, empty, dynamically allocated virtual disk.
Partition it and encrypt it without writing to all sectors: omit the dd of random data prior to encrypting (or else the dynamic file will grow to its maximum size), and also omit filling the empty space, which would again grow the virtual disk to its maximum.
Clone the partitions from the plain view (mounted and decrypted on the fly), so the cloning tool only writes the data areas of files, etc., but not the free space.
There is one small part that cannot be reduced: files inside encrypted partitions that contain whole clusters filled with zeros (hopefully you do not have any of those). Without encryption such a cluster is all zeros, so the normal compact operation sees that it does not need to store that space; but when it is encrypted, that cluster is not all zeros inside the actual virtual disk file, so the compact method cannot reduce it.
The idea behind all of this is:
When encryption is in use, to get the smallest virtual disk size, start with an empty dynamically allocated disk and write as few clusters as possible to it when cloning the previous one.
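A rough sketch of those steps from a live CD, assuming LUKS inside the guest and an ext4 filesystem (the device names, mapper names, mount points, and size are only illustrative; boot-loader and partition-table details are omitted):
VBoxManage createmedium disk --filename new.vdi --size 20480 --variant Standard   # on the host: new empty dynamic disk (20GiB), attach it to the live-CD VM
cryptsetup luksFormat /dev/sdb1        # encrypt the new partition; do NOT dd random data over it first
cryptsetup open /dev/sda1 oldplain     # unlock the old encrypted partition
cryptsetup open /dev/sdb1 newplain     # unlock the new encrypted partition
mkfs.ext4 /dev/mapper/newplain
mkdir -p /mnt/old /mnt/new
mount /dev/mapper/oldplain /mnt/old
mount /dev/mapper/newplain /mnt/new
rsync -aHAX /mnt/old/ /mnt/new/        # copies only existing files, so only used clusters get written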
As said, it is a lot of work, and from time to time, as writes occur, the disk will start growing and growing again.
My best personal recommendation is: get a big, fast disk and use a fixed-size virtual disk. If I read correctly, your disk is only 20GiB; you gain a lot of speed by having it fixed rather than dynamic, and you will not have to worry about 'fragmentation', etc.
Remember, if you use a USB disk for it, get one able to write at 30MiB/s (if you only have USB 2.0 ports). If you are lucky (like me) and have at least one USB 3 port (better if it is USB 3.1 Gen 2 Type-C), look for a 2.5-inch 500GiB SATA III HDD (with a write speed greater than 100MiB/s; it is really cheap, less than 25 euros) and a SATA III to USB 3.1 Gen 2 Type-C enclosure (also cheap, some are under 15 euros), and avoid having to 'reduce', 'clone', etc.
I have 10 virtual machines on a 500GiB disk (with more than 50% free space), each 20GiB in size (with the Windows system inside them taking nearly 16GiB) and VeraCrypt encryption, so my case is quite close to yours. I opted to use a USB 3.1 Gen 2 Type-C enclosure to hold all the fixed-size VDI files; my experience is that encrypted fixed-size disks fly compared to non-encrypted dynamic ones.
Of course, make sure you do the necessary tests where encryption is involved: test the virtual HDD speed with no encryption, then benchmark the encryption algorithms in RAM, and choose a method that is faster than about 1.6 times the speed of the disk, so encryption will not be a bottleneck; otherwise you can end up with really bad speed caused by the encryption.
Also think about this: how many cores do you expose to the guest? That makes a big difference to encryption speed. But also consider the worst case: how much CPU will the non-encryption threads on that guest use?
Just as an example: if inside the guest you are doing LZMA2 compression (or H.264 video transcoding, for example), the CPU left free for encryption is very low, so encryption will slow things down a lot; such workloads also do a lot of disk I/O, so a lot of encrypting/decrypting per second is needed.
Maybe a better approach would be to encrypt the 'container', not the 'system'; in other words, encrypt where the VDI files are stored, not the whole guest system. Create one container per VDI if you want different passphrases, etc. That way the VDI can also stay dynamic and be compacted, etc.
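With that approach the normal compact procedure applies; a minimal sketch (the paths are just examples):
dd if=/dev/zero of=/zerofile bs=1M; rm /zerofile              # inside the guest: zero the free space so it can be reclaimed (dd stops when the disk is full)
VBoxManage modifymedium disk /path/to/guest.vdi --compact     # on the host, with the VM powered off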
Of course, I could be of more help if you said what encryption scheme (without details) you are using.
This makes a really big difference to the possible answers:
Are you encrypting the system partition with a tool that runs inside the guest? Then use the 'clone only the used clusters' trick.
Are you encrypting by turning on the VDI encryption property? Then maybe the VirtualBox console can help compact them.
Are you encrypting the container where the VDI is stored? I am quite sure this is not your case, since in that case compacting can be done as normal; the VDI itself is not encrypted at all, nor is anything inside it.
I talk about VDI, but the same applies to the other formats: VHD, VHDX, etc.
Remember: if encryption is done inside the guest and you still want to reduce (compact) the virtual HDD file, start with a new dynamic one, set up the partition scheme and encryption without filling the whole disk (at this point the virtual disk file should not be large, just a few megabytes), then clone all the used clusters from the old one to the new one, but not the unused ones.
Advice: be prepared to repeat this 'compact by cloning' every 100 hours or so of intense guest use; if the gain is less than 50% it does not compensate for the effort, and then the best you can do is use a fixed size.
Special note: with a fixed size, access speed is much better than with a dynamic size; having a dynamic disk grown to 100% of its size, as if it were fixed, is a big loss in speed. How much? You must test it on your own machine; it depends a lot on the CPU, the I/O speed (input/output operations per second) of your storage, the transfer speed (MiB/s), and other factors, so it is best to do some tests.
Since you are talking about 20GiB, it is better to test the fixed size; I am quite sure you will enjoy it a lot.
It would be another matter if we were talking about a 500GiB system partition that is only 10% full: since the space gain could be 450GiB, the clone method to compact it is worthwhile. That is why I explain how to do it, for such people, for you, and for anyone.
P.S.: If someone does not know how to do something, that does not mean it is not possible; and if someone says something is not possible, that person had better explain why or be prepared to be proven wrong. Technology improves a lot over time, and knowledge even more.
I've got a machine I'm going to be using for development, and it has two 7200 RPM 160 GB SATA HDs in it.
The information I've found on the net so far seems to be a bit conflicted about which things (OS, Swap files, Programs, Solution/Source code/Other data) I should be installing on how many partitions on which drives to get the most benefit from this situation.
Some people suggest having a separate partition for the OS and/or Swap, some don't bother. Some people say the programs should be on the same physical drive as the OS with the data on the other, some the other way around. Same with the Swap and the OS.
I'm going to be installing Vista 64 bit as my OS and regularly using Visual Studio 2008, VMWare Workstation, SQL Server management studio, etc (pretty standard dev tools).
So I'm asking you--how would you do it?
If the drives support RAID configurations in your BIOS, you should do one of the following:
RAID 1 (Mirror) - Since this is a dev machine this will give you the fault tolerance and peace of mind that your code is safe (and the environment since they are such a pain to put together). You get better performance on reads because it can read from both/either drive. You don't get any performance boost on writes though.
RAID 0 - No fault tolerance here, but this is the fastest configuration because you read and write off both drives. Great if you just want as fast as possible performance and you know your code is safe elsewhere (source control) anyway.
Don't worry about multiple partitions or OS/Data configs, because on a dev machine you sort of need it all anyway, and you shouldn't be running heavy multi-user databases or anything like that anyway (like a server).
If your BIOS doesn't support RAID configurations, however, then you might consider doing the OS/Data split over the two drives just to balance out their use (but as you mentioned, keep the programs on the system drive because it will help with caching). Up to you where to put the swap file (OS will give you dump files, but the data drive is probably less utilized).
If they're both going through the same disk controller, there's not going to be much difference performance-wise no matter which way you do it; if you're going to be doing lots of VM's, I would split one drive for OS and swap / Programs and Data, then keep all the VM's on the other drive.
Having all the VMs on an independent drive would let you move that drive to another machine seamlessly if the host fails, or if you upgrade.
Mark one drive as being your warehouse, put all of your source code, data, assets, etc. on there and back it up regularly. You'll want this to be stable and easy to recover. You can even switch My Documents to live here if wanted.
The other drive should contain the OS, drivers, and all applications. This makes it easy and secure to wipe the drive and reinstall the OS every 18-24 months as you tend to have to do with Windows.
If you want to improve performance, some say put the swap on the warehouse drive. This will increase OS performance, but will decrease the life of the drive.
In reality it all depends on your goals. If you need more performance then you even out the activity level. If you need more security then you use RAID and mirror it. My mix provides for easy maintenance with a reasonable level of data security and minimal bit rot problems.
Your most active files will be the registry, page file, and running applications. If you're doing lots of data crunching then those files will be very active as well.
I would suggest that if 160GB total capacity will cover your needs (plenty of space for the OS, applications, and source code; it just depends on what else you plan to put on it), then you should mirror the drives in a RAID 1, unless you will have a server that data is backed up to, an external hard drive, an online backup solution, or some other means of keeping a copy of your data on more than one physical drive.
If you need to use all of the drive capacity, I would suggest using the first drive for OS and Applications and second drive for data. Purely for the fact of, if you change computers at some point, the OS on the first drive doesn't do you much good and most Applications would have to be reinstalled, but you could take the entire data drive with you.
As for dividing off the OS, a big downfall of this is not giving the partition enough space and eventually you may need to use partitioning software to steal some space from the other partition on the drive. It never seems to fail that you allocate a certain amount of space for the OS partition, right after install you have several gigs free space so you think you are fine, but as time goes by, things build up on that partition and you run out of space.
With that in mind, I still typically do use an OS partition as it is useful when reloading a system, you can format that partition blowing away the OS but keep the rest of your data. Ways to keep the space build up from happening too fast is change the location of your my documents folder, change environment variables for items such as temp and tmp. However, there are some things that just refuse to put their data anywhere besides on the system partition. I used to use 10gb, these days I go for 20gb.
Dividing your swap space can be useful for keeping drive fragmentation down when letting your swap file grow and shrink as needed. Again this is an issue though of guessing how much swap you need. This will depend a lot on the amount of memory you have and how much stuff you will be running at one time.
For the posters suggesting RAID - it's probably OK at 160GB, but I'd hesitate for anything larger. Soft errors in the drives reduce the overall reliability of the RAID. See these articles for the details:
http://alumnit.ca/~apenwarr/log/?m=200809#08
http://permabit.wordpress.com/2008/08/20/are-fibre-channel-and-scsi-drives-more-reliable/
You can't believe everything you read on the internet, but the reasoning makes sense to me.
Sorry I wasn't actually able to answer your question.
I usually run a box with two drives. One for the OS, swap, typical programs and applications, and one for VMs, "big" apps (e.g., Adobe CS suite, anything that hits the disk a lot on startup, basically).
But I also run a cheap fileserver (just an old machine with a coupla hundred gigs of disk space in RAID1), that I use to store anything related to my various projects. I find this is a much nicer solution than storing everything on my main dev box, doesn't cost much, gives me somewhere to run a webserver, my personal version control, etc.
Although I admit, it really isn't doing much I couldn't do on my machine. I find it's a nice solution as it helps prevent me from spreading stuff around my workstation's filesystem at random by forcing me to keep all my work in one place where it can be easily backed up, copied elsewhere, etc. I can leave it on all night without huge power bills (it uses <50W under load) so it can back itself up to a remote site with a little script, I can connect to it from outside via SSH (so I can always SCP anything I need).
But really the most important benefit is that I store nothing of any value on my workstation box (at least nothing that isn't also on the server). That means if it breaks, or if I want to use my laptop, etc. everything is always accessible.
I would put the OS and all the applications on the first disk (1 partition). Then, put the data from the SQL server (and any other overflow data) on the second disk (1 partition). This is how I'd set up a machine without any other details about what you're building. Also make sure you have a backup so you don't lose work. It might even be worth it to mirror the two drives (if you have RAID capability) so you don't lose any progress if/when one of them fails. Also, backup to an external disk daily. The RAID won't save you when you accidentally delete the wrong thing.
In general I'd try to split up things that are going to be doing a lot of I/O (such as if you have autosave on VS going off fairly frequently). Think of it as a sort of I/O multithreading.
I've observed significant speedups by putting my virtual machines on a separate disk. Whenever Windows is doing something stupid in the VM (e.g., indexing yet again), it doesn't thrash my Mac's disk quite so badly.
Another issue is that many tools (Visual Studio comes to mind) break in frustrating ways when bits of them are on the non-primary disk.
Use your second disk for big random things.