MDF File Size Not Growing - sql

I am having an issue with the MDF and LDF file sizes. The physical file paths look fine, and auto-growth is set to 64 MB. The table sizes change frequently, yet the MDF and LDF file sizes stay the same.
I am wondering whether this indicates data loss or something else that is stopping the files from growing. There are many tables in the database, and there are multiple transactions in almost every table every minute.
Can any database expert help me out in this regard? I would be very thankful.
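For what it's worth, one way to see what the files look like from inside SQL Server is to compare each file's allocated size with the space actually used inside it. The sketch below is a minimal example, assuming pyodbc, a local instance, and a hypothetical database name; it only reads sys.database_files.

```python
import pyodbc

# Hypothetical connection details -- adjust driver, server, and database name.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=MyDatabase;Trusted_Connection=yes;"
)

# For each file, compare the size SQL Server has allocated on disk with the
# space actually used inside it (both reported in 8 KB pages).
query = """
SELECT name,
       size * 8 / 1024                            AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb
FROM sys.database_files;
"""

for name, size_mb, used_mb in conn.cursor().execute(query):
    print(f"{name}: {size_mb} MB allocated, {used_mb} MB used, "
          f"{size_mb - used_mb} MB free inside the file")
```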

Related

SQL Reindex has made my database twice as big

I have reindexed all the tables in a database that was 22 GB; it is now 42 GB. Any ideas why this has happened, and is it possible to shrink it back down to its original size? This is just the .mdf file, not the .ldf file.
It can't double in size and stay like that, surely?
Cheers, Andrew
The increase in database size after an index rebuild is expected. While the index is rebuilt, a parallel copy of the index structure is created, and once it is complete it is switched in as the new clustered index, so the database needs free space to hold both copies. If you shrink the database afterwards, the effort is wasted: the file will simply grow again the next time you rebuild the index. So if index rebuilds are part of your maintenance tasks, you should allow the database enough free space for them.
Source social.msdn.microsoft.com
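To illustrate that point: after a rebuild, most of the extra 20 GB is typically free space inside the .mdf rather than data. A minimal sketch (assuming pyodbc and hypothetical connection details) that reports the total database size and how much of it is currently unallocated:

```python
import pyodbc

# Hypothetical connection details -- point it at the database that grew.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=MyDatabase;Trusted_Connection=yes;"
)

cur = conn.cursor()
# sp_spaceused with no arguments reports the total database size and how much
# of that allocation is currently unallocated -- i.e. the free space the next
# index rebuild can reuse instead of growing the file again.
cur.execute("EXEC sp_spaceused;")
db_name, db_size, unallocated = cur.fetchone()
print(f"{db_name}: total size {db_size}, unallocated (free) space {unallocated}")
```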

Best practice for creating SQL Server databases for a data warehouse

I'm about to create 2 new SQL Server databases for our data warehouse:
Datawarehouse - where the data is stored
Datawarehouse_Stage - where the ETL is done
I'm expecting both databases to be about 30 GB and to grow about 5 GB per year. They probably will not get bigger than 80 GB (at which point we'll start to archive).
I'm trying to decide what settings I should use when creating these databases:
what should the initial size be?
...and should I increase the database size straight after creating it?
what should the auto-growth settings be?
I'm after any best practice advice on creating those databases.
UPDATE: the reason I suggest increasing the database size straight after creating it is that you can't shrink a database to less than its initial size.
• what should the initial size be?
45 GB? That's 30 GB plus three years of growth, especially given that this fits on a LOW END CHEAP SSD DISC ;) Sizing is not an issue if your smallest SSD is 64 GB. (A pre-sized CREATE DATABASE sketch follows this answer.)
...and should I increase the database size straight after creating it?
That would be sort of stupid, no? I mean, why create a database with a small size just to resize it IMMEDIATELY afterwards, instead of putting the right size into the script in the first place?
what should the auto-growth settings be?
This is not a data warehouse question. NO AUTOGROW. Autogrow fragments your discs.
Make sure you format the discs according to best practice (64 KB allocation unit size, aligned partitions).
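A minimal sketch of what the advice above might look like in practice: pre-size the files when creating the database instead of relying on autogrow. File paths, sizes, and the growth increment are assumptions to adjust for your environment; autocommit is needed because CREATE DATABASE cannot run inside pyodbc's implicit transaction.

```python
import pyodbc

# Hypothetical connection; autocommit is required because CREATE DATABASE
# cannot run inside the implicit transaction pyodbc normally opens.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;Trusted_Connection=yes;",
    autocommit=True,
)

# Pre-size the files for roughly three years of growth (per the answer above).
# FILEGROWTH here is only a safety net; set it to 0 if you want autogrow
# disabled entirely, as the answer recommends. Paths are hypothetical.
conn.cursor().execute("""
CREATE DATABASE Datawarehouse
ON PRIMARY
    (NAME = Datawarehouse_data,
     FILENAME = 'D:\\SQLData\\Datawarehouse.mdf',
     SIZE = 45GB, FILEGROWTH = 1GB)
LOG ON
    (NAME = Datawarehouse_log,
     FILENAME = 'E:\\SQLLogs\\Datawarehouse.ldf',
     SIZE = 5GB, FILEGROWTH = 1GB);
""")
```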

Performance implications of storing 600,000+ images in the same folder (NTFS)

I need to store about 600,000 images on a web server that uses NTFS. Am I better off storing images in 20,000-image chunks in subfolders? (Windows Server 2008)
I'm concerned about incurring operating system overhead during image retrieval
Go for it. As long as you have an external index and a direct file path to each file, without listing the contents of the directory, you are OK.
I have a folder that is over 500 GB in size with over 4 million folders (which have more folders and files). I have somewhere in the order of 10 million files in total.
If I accidentally open this folder in Windows Explorer, it gets stuck at 100% CPU usage (for one core) until I kill the process. But as long as you refer to a file/folder directly, performance is great (meaning I can access any of those 10 million files with no overhead).
Depending on whether NTFS has directory indexes, it should be all right from the application level.
I mean that opening files by name, deleting, renaming, etc. programmatically should work nicely.
But the problem is always tools. Third-party tools (such as MS Explorer, your backup tool, etc.) are likely to suck, or at least be extremely unusable, with large numbers of files per directory.
Anything which does a directory scan is likely to be quite slow, but worse, some of these tools have poor algorithms which don't scale to even modest (10k+) numbers of files per directory.
NTFS folders store an index file with links to all their contents. With a large number of images, that index is going to grow a lot and impact your performance negatively. So yes, on that argument alone you are better off storing the images in chunks in subfolders. Fragments inside indexes are a pain.
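As a concrete illustration of the "external index / direct path" idea in these answers, here is a minimal Python sketch (names and paths are hypothetical) that buckets files into subfolders by a hash of the filename, so no single directory ever holds more than a few thousand entries and lookups never need to list a directory:

```python
import hashlib
import shutil
from pathlib import Path

def bucketed_path(root: Path, filename: str, levels: int = 2) -> Path:
    """Map a filename to a nested subfolder based on a hash prefix,
    so no single NTFS directory holds too many entries."""
    digest = hashlib.md5(filename.encode("utf-8")).hexdigest()
    parts = [digest[i * 2 : i * 2 + 2] for i in range(levels)]  # e.g. 'a3', 'f1'
    return root.joinpath(*parts, filename)

def store_image(root: Path, source: Path) -> Path:
    """Copy an image into its bucketed location; the path is fully
    derivable from the filename, so lookups never scan a directory."""
    target = bucketed_path(root, source.name)
    target.parent.mkdir(parents=True, exist_ok=True)
    shutil.copy2(source, target)
    return target

# Usage sketch (hypothetical paths):
# store_image(Path(r"D:\images"), Path(r"C:\incoming\photo_000123.jpg"))
```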

Why would two mysql files (same table, same contents) be different in size?

I took an existing MySQL database, and set up a copy on a new host.
The file size for some tables on the new host are 1-3% smaller than their counterpart files on the old host.
I am curious why that is.
My guess is, the old host's files have grown over time, and within the b-tree structure for that file, there is more fragmentation. Whereas the new host, because it was creating the file from scratch (via a binary log), avoided such fragmentation.
Does it even make sense for there to be fragmentation within the b-tree structure itself? (Speaking within the database layer, and not with regards to the OS file system layer) I originally thought "no", but then again, isn't such fragmentation the basis for the DBA task of compressing your database files?
I'm wondering if this is simply an artifact of the file system layer, i.e. the new host has a mostly empty disk drive, so there would be less fragmentation when allocating a new file. Then again, I didn't think that kind of fragmentation would show up in the reported file size (Linux OS).
There can certainly be fragmentation in MySQL data files or index files. This is common, even deliberate.
That is, the storage engine may deliberately leave some extra space here and there so when you change values, it can fit the rows in without having to reorder the whole data file. There are even server properties you can use to configure how much of this slop space to allocate.
I wouldn't even blink at a file discrepancy of 1-3%.
From what I understand of MySQL, it has a growth algorithm that kicks in as a file approaches capacity; when the copy was set up on the new host it chose a different size, probably trimming the excess storage.
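If you want to quantify how much of the difference is free space inside the tablespaces rather than row data, a minimal sketch (assuming mysql-connector-python and hypothetical credentials) is to compare data_length, index_length, and data_free per table on both hosts:

```python
import mysql.connector

# Hypothetical connection details -- run the same query on both hosts.
conn = mysql.connector.connect(
    host="localhost", user="dba", password="secret", database="mydb"
)

cur = conn.cursor()
cur.execute(
    """
    SELECT table_name, data_length, index_length, data_free
    FROM information_schema.tables
    WHERE table_schema = %s
    ORDER BY table_name
    """,
    ("mydb",),
)

# data_free is (roughly) the unused space inside each table's storage,
# which is where fragmentation / deliberate slop space shows up.
for table, data_len, index_len, free in cur:
    print(f"{table}: data={data_len} index={index_len} free={free}")
```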

Create discrepancy between size on disk and actual size in NTFS

I keep finding files which show a size of 10 kB but a size on disk of 10 GB. I'm trying to figure out how this is done; does anyone have any ideas?
You can make sparse files on NTFS, as well as on any real filesystem. :-)
Seek to (10 GB - 10 kB), write 10 kB of data. There, you have a so-called 10 GB file, which in reality is only 10 kB big. :-)
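A minimal sketch of that seek-and-write trick (the filename is hypothetical). Note that on most Linux filesystems the unwritten region is sparse automatically, whereas on NTFS you would additionally have to mark the file as sparse (e.g. with `fsutil sparse setflag`) for the size on disk to stay small:

```python
# Create a file whose logical size is ~10 GB while writing only 10 kB of data.
SIZE = 10 * 1024**3        # 10 GB logical size
TAIL = 10 * 1024           # 10 kB of real data at the end

with open("big_but_small.bin", "wb") as f:   # hypothetical filename
    f.seek(SIZE - TAIL)                      # jump almost to the 10 GB mark
    f.write(b"\x00" * TAIL)                  # only these bytes are actually written
```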
You can create streams in NTFS files. It's like a separate file, but with the same filename. See here: Alternate Data Streams
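To make the alternate-data-stream explanation concrete, here is a small Windows-only sketch (path and stream name are hypothetical): the bytes written to the named stream take up disk space but are not counted in the file's reported Size, which matches the Size vs. Size-on-disk gap described in the question.

```python
from pathlib import Path

main = Path(r"C:\temp\demo.txt")            # hypothetical path (NTFS, Windows only)
main.write_text("visible contents")         # this is all that Explorer reports as Size

# "<filename>:<streamname>" opens an NTFS alternate data stream.
with open(str(main) + ":hidden_stream", "wb") as stream:
    stream.write(b"x" * 10 * 1024 * 1024)   # 10 MB tucked into the alternate stream
```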
I'm not sure about your case (or it might be a mistake in your question), but when you create an NTFS sparse file it will show different values for these two fields.
When I create a 10MB sparse file and fill it with 1MB of data windows explorer will show:
Size: 10MB
Size on disk: 1MB
But in your case it's the opposite (or a mistake).