How to extend the default partition after creating a VM instance? [closed] - virtual-machine

I created a CentOS x64 VM instance with a 12 GB disk using the FI-WARE cloud. I can access it with no problem and I have started installing software. However, the default partition /dev/vda1 is only 5 GB and I have already filled it. I would like to know how to extend the partition to use the full disk.
Thanks,

I would say you have two options: a safe one and a risky one. Let's start with the safe one:
You could use fdisk /dev/vda (or parted /dev/vda) to create a new partition. Since the new partition will live on the same virtual disk where your '/' is mounted, you'll have to reboot your VM before you can use it.
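For reference, the dialog for that looks roughly like this (a sketch only; prompts vary slightly between fdisk versions, and the sector numbers assume the layout shown further below, with /dev/vda1 ending at sector 10485759):
# fdisk /dev/vda
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 2
First sector (10485760-20971519, default 10485760):
Using default value 10485760
Last sector, +sectors or +size{K,M,G} (10485760-20971519, default 20971519):
Using default value 20971519
Command (m for help): w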
When you reboot your VM, you'll be able to format your new partition:
mkfs -t ext4 /dev/vda2
And mount your new partition wherever you want:
mount /dev/vda2 /mnt
To make this mount persistent across reboots, add a new line to /etc/fstab:
/dev/vda2 /mnt ext4 defaults 1 1
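You can check the new entry without rebooting by unmounting the partition and letting mount read it back from /etc/fstab (a quick sanity check; /mnt is just the example mount point used above):
# umount /mnt
# mount -a
# df -h /mnt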
The second way is extending your /dev/vda1 partition in place. This is risky: if you make a mistake, it is possible that your VM won't be able to boot again (on its own), so use this at your own risk. Anyway, here it goes --
Using fdisk (parted will refuse to do this), you can change the partition --
# fdisk /dev/vda
Turn off the DOS compatibility flag and change the display units to sectors:
Command (m for help): c
DOS Compatibility flag is not set
Command (m for help): u
Changing display/entry units to sectors
Let's take a look at the partition table:
Command (m for help): p
Disk /dev/vda: 10.7 GB, 10737418240 bytes
181 heads, 40 sectors/track, 2896 cylinders, total 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000c897f
Device Boot Start End Blocks Id System
/dev/vda1 * 2048 10485759 5241856 83 Linux
Delete the first partition:
Command (m for help): d
Selected partition 1
And create it again using the whole disk:
Command (m for help): n
Command action
e extended
p primary partition (1-4)
p
Partition number (1-4): 1
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):
Using default value 20971519
Next, you should set the boot flag on your first partition:
Command (m for help): a
Partition number (1-4): 1
Quit fdisk, writing your changes with the 'w' command, and reboot the VM.
Once it reboots, you should resize your filesystem:
# resize2fs /dev/vda1
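Afterwards, df should report roughly the full disk for '/' (the sizes shown here are illustrative):
# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 9.9G 4.8G 4.6G 52% /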

Related

Calculate HANA global allocation limit

How can I calculate the global_allocation_limit parameter when I have SAP NetWeaver and SAP HANA DB installed on a server, and the current database size in RAM is 300 GB?
Many thanks
As you correctly mentioned, the Global Allocation Limit is a parameter, which can be set by the administrator. If the administrator has set this to an arbitrary value, there is no way for you to "calculate" it.
However, if your question is referring to the default value, the official documentation may be helpful:
The default value is 0, in which case the global allocation limit is calculated as follows: 90% of the first 64 GB of available physical memory on the host plus 97% of each further GB. Or, in the case of small physical memory, physical memory minus 1 GB.
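To make that concrete, here is the default formula worked through for a hypothetical host with 256 GB of physical memory (the numbers are illustrative only, not taken from the question):
awk 'BEGIN {
  ram = 256                            # physical memory in GB (hypothetical)
  gal = 0.90 * 64 + 0.97 * (ram - 64)  # 90% of first 64 GB + 97% of the rest
  printf "default global allocation limit: %.2f GB\n", gal
}'
# -> default global allocation limit: 243.84 GB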

How to calculate redis memory used percentage on ElastiCache

I want to monitor my Redis cache cluster on ElastiCache. From AWS/ElastiCache I am able to get metrics like FreeableMemory and BytesUsedForCache. If I am not wrong, BytesUsedForCache is the memory used by the cluster (assuming there is only one node in the cluster). I want to calculate the percentage of memory used. Can anyone help me get the percentage of memory used in Redis?
We had the same issue since we wanted to monitor the percentage of ElastiCache Redis memory that is consumed by our data.
As you wrote correctly, you need to look at BytesUsedForCache - that is the amount of memory (in bytes) consumed by the data you've stored in Redis.
The other two important numbers are
The available RAM of the AWS instance type you use for your ElastiCache node, see https://aws.amazon.com/elasticache/pricing/
Your value for parameter reserved-memory-percent (check your ElastiCache parameter group). That's the percentage of RAM that is reserved for "nondata purposes", i.e. for the OS and whatever AWS needs to run there to manage your ElastiCache node. By default this is 25%. See https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/redis-memory-management.html#redis-memory-management-parameters
So the total available memory for your data in ElastiCache is
(100 - reserved-memory-percent) / 100 * instance-RAM-size
(In our case, we use instance type cache.r5.2xlarge with 52.82 GB RAM, and we have the default setting of reserved-memory-percent = 25%.
Checking with the info command in Redis, I see that maxmemory_human = 39.61 GB, which is equal to 75% of 52.82 GB.)
So the ratio of used memory to available memory is
BytesUsedForCache / ((100 - reserved-memory-percent) / 100 * instance-RAM-size)
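Spelled out with the numbers from above (cache.r5.2xlarge with 52.82 GB RAM, reserved-memory-percent = 25, and a hypothetical BytesUsedForCache reading of 20 GiB):
awk 'BEGIN {
  ram      = 52.82 * 1024^3            # instance RAM in bytes
  reserved = 25                        # reserved-memory-percent
  used     = 20 * 1024^3               # BytesUsedForCache (hypothetical)
  avail    = (100 - reserved) / 100 * ram
  printf "memory used: %.1f%%\n", used / avail * 100
}'
# -> memory used: 50.5%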
By comparing the FreeableMemory and BytesUsedForCache metrics, you get the available memory for ElastiCache non-cluster mode (not sure if it applies to cluster mode too).
Here is the NRQL we're using to monitor the cache:
SELECT Max(`provider.bytesUsedForCache.Sum`) / (Max(`provider.bytesUsedForCache.Sum`) + Min(`provider.freeableMemory.Sum`)) * 100 FROM DatastoreSample WHERE provider = 'ElastiCacheRedisNode'
This is based on the following:
FreeableMemory: The amount of free memory available on the host. This is derived from the RAM, buffers and cache that the OS reports as freeable. (AWS CacheMetrics: Host-Level)
BytesUsedForCache: The total number of bytes allocated by Redis for all purposes, including the dataset, buffers, etc. This is derived from the used_memory statistic in Redis INFO. (AWS CacheMetrics: Redis)
So BytesUsedForCache (the memory Redis is using) + FreeableMemory (the memory Redis can still claim) = the total memory Redis can use.
With the release of the 18 additional CloudWatch metrics, you can now use DatabaseMemoryUsagePercentage and see the percentage of memory utilization in Redis.
View more about the metric in the memory section here
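If you want to pull that metric from the command line, something like this should work (a sketch assuming the AWS CLI is configured and a node ID of my-redis-001, which is hypothetical; the date invocations use GNU date syntax):
aws cloudwatch get-metric-statistics \
  --namespace AWS/ElastiCache \
  --metric-name DatabaseMemoryUsagePercentage \
  --dimensions Name=CacheClusterId,Value=my-redis-001 \
  --start-time "$(date -u -d '15 minutes ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  --period 300 \
  --statistics Average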
You would have to calculate this based on the size of the node you have selected. See these 2 posts for more information.
Pricing doc gives you the size of your setup.
https://aws.amazon.com/elasticache/pricing/
https://forums.aws.amazon.com/thread.jspa?threadID=141154

Should I force MS SQL to consume x amount of memory?

I'm trying to get better performance out of our MS SQL database. One thing I noticed is that the instance is taking up about 20 GB of RAM, and the database in question is taking 19 GB of that 20. Why isn't the instance consuming most of the 32 GB that is on the box? Also, the size of the DB is a lot larger than 32 GB, so it being smaller than the available RAM is not the issue. I was thinking of setting the min server memory to 28 GB or something along those lines; any thoughts? I didn't find anything on the interwebs that threw up red flags on this idea. This is on a VM (VMware). I verified that the host is not overcommitting memory. Also, I do not have access to the host.
This is the query I ran to find out how much memory each database was consuming:
SELECT DB_NAME(database_id),
       COUNT(*) * 8 / 1024 AS MBUsed
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY COUNT(*) * 8 / 1024 DESC;
If data is sitting on disk, but hasn't been requested by a query since the service has started, then there would be no reason for SQL Server to put those rows into the buffer cache, thus the size on disk would be larger than the size in memory.
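If you still decide to pin memory as the question suggests, the relevant knob is set like this (a sketch via sqlcmd; 'min server memory' is an advanced option, 28672 MB = 28 GB, and you should test the effect before relying on it):
sqlcmd -S localhost -Q "
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 28672; RECONFIGURE;
"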

What is the use of FAT32 reserved sectors?

I know sector 0 is mostly used for loading the operating system. Some Windows versions have bootloaders bigger than one sector and use sectors 1 and 2 as well. Sectors 6 through 8 often hold a backup of sectors 0-2. But what is the rest for? Why is the default in many formatting tools 32 reserved sectors?
Actually, the FAT32 boot sector has a backup copy starting at sector 6, so that is 12 sectors in total, and the FAT32 boot data continues at sector 13. So the minimum is 13 sectors.
Then the formatting tool wants to align the first cluster to a 1 MB boundary, so the number of reserved sectors can vary quite a bit depending on the number of FAT sectors.
For an HDD a 4K boundary is enough, but some SSDs work faster with a 1 MB boundary. At least Windows 7 formats with 1 MB alignment.
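To see how that alignment drives the count, here is the arithmetic sketched out (512-byte sectors, so a 1 MB boundary is 2048 sectors; the FAT size and FAT count are hypothetical):
awk 'BEGIN {
  boundary = 2048                # sectors per 1 MB at 512 bytes/sector
  fat_size = 1504; nfats = 2     # sectors per FAT (hypothetical), number of FATs
  # smallest reserved count >= 32 that lands the first cluster on the boundary
  for (r = 32; (r + nfats * fat_size) % boundary != 0; r++) { }
  printf "reserved sectors: %d\n", r
}'
# -> reserved sectors: 1088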
Some file systems may want a specific alignment of data and will therefore increase the number of reserved sectors to meet that alignment.

Advertised disk space vs actual disk space [closed]

Why is it that advertised disk space is almost always higher than the disk space reported by the UI? For example, I have an "80 GB" hard drive, but the iTunes UI indicates only 74. I usually see this as well with hard disks and the amount reported under the drive letter.
There are three reasons why the amount of space you can actually use differs from what is listed for the drive, all of which work against you:
Hard drive manufacturers treat 1GB as one billion bytes, while the operating system calls it 1,073,741,824 bytes (1000 * 1000 * 1000 vs 1024 * 1024 * 1024).
You lose some space for file tables when formatting.
Disk space is divided into chunks larger than 1 byte (typically 4K). Using typical Windows defaults, a 1 byte file takes up 4K of space on disk.
Of these, the first two can influence the amount of space reported by the drive (though IIRC the 2nd one was more of an issue with FAT32 than NTFS). The last one only influences the amount of free space remaining, but will still prevent you from using the full capacity of your drive.
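The first point alone accounts for the iTunes example: converting the advertised decimal gigabytes into the binary units the OS uses gives
awk 'BEGIN {
  advertised = 80e9              # "80 GB" as the manufacturer counts it
  printf "%.1f GB as the OS counts it\n", advertised / 1024^3
}'
# -> 74.5 GB as the OS counts it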
It's down to the way the OS calculates free space vs. the way hard drive manufacturers do:
OS: 1 MB = 1024 KB
Vendor: 1 MB = 1000 KB
The vendor will always use the ×1000 definition because it makes their numbers look bigger.
The main culprit is using base 10 vs. base 2 to list the storage size. It effectively becomes a rounding error.
There is a movement to list storage sizes with base-2 values instead of base 10 to reflect the true size.
It's the difference between the standard (SI) prefixes (giga, mega, kilo, etc.) which are multiples of 1000 and the binary prefixes which are multiples of 1024.
Marketing considers 80 gigabytes to be 80,000,000,000 bytes. The OS considers 80 gigabytes to be 85,899,345,920 bytes.
http://www.google.com/search?q=80000000000+bytes+in+GB
Usually due to some partitioned space that the OS or some software takes and hides for backup or system purposes.
Some manufacturers consider a MB to be 1024 KB; others 1000 KB. Similarly for GB: some say 1024 MB; others 1000 MB.
Then, that figure refers to the unformatted size. Formatting takes up some space.
Additionally, the advertised gigabyte counts are often slightly inaccurate, which results in differences. You can see this in the disclaimer text on the outside of most hard drive boxes!