My server is a physical server with 250 GB of RAM, and max server memory is configured to 230 GB. When I ran sys.dm_os_buffer_descriptors joined to other DMVs, I found one table taking almost 50 GB of buffer pool space. My question is: is this an issue? If so, what's the best way to tackle it? My PLE is very high and no slowness has been reported. Thanks.
The data used most often and most recently will remain in the buffer pool cache, so it is expected that 50 GB of table data will be cached when that table and its data are used often. Since your PLE is acceptable, there may be no cause for concern for now.
You may still want to take a look at the query plans that use the table in question. It could be that large scans bring more data into the buffer pool cache than queries actually need. Query and index tuning may be in order in that case; tuning will also reduce CPU and other resource utilization, providing headroom for growth and for other queries in the workload.
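For reference, a query along these lines (a sketch; the per-object grouping is an approximation) shows which objects dominate the buffer pool in the current database:

```sql
-- Approximate buffer pool usage per object in the current database.
-- Each buffer descriptor represents one 8 KB page.
SELECT
    OBJECT_NAME(p.object_id)  AS table_name,
    COUNT(*) * 8 / 1024       AS buffer_mb
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
    ON bd.allocation_unit_id = au.allocation_unit_id
JOIN sys.partitions AS p
    ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)       -- in-row / row-overflow data
    OR (au.type = 2       AND au.container_id = p.partition_id)  -- LOB data
WHERE bd.database_id = DB_ID()
GROUP BY p.object_id
ORDER BY buffer_mb DESC;
```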
I am running spatial queries on tables with close to a billion records. However, I cannot understand why Postgres is not using the memory on the dedicated server (which has 32 GB of memory). I tuned the server based on the suggestions here, but I couldn't see any difference in the running time at all, and I see only under 100 MB of memory usage. I would expect Postgres to consume more memory by loading bigger chunks of data, thereby reducing disk reads and query time. What could I be doing wrong here?
Already looked at these posts:
https://dba.stackexchange.com/questions/18484/tuning-postgresql-for-large-amounts-of-ram
http://patshaughnessy.net/2016/1/22/is-your-postgres-query-starved-for-memory
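(For context, the settings those links focus on can be changed from SQL. The values below are only an illustrative sketch for a dedicated 32 GB machine, not a recommendation for any particular workload.)

```sql
-- Illustrative starting points for a dedicated 32 GB PostgreSQL server; adjust to taste.
ALTER SYSTEM SET shared_buffers = '8GB';          -- requires a server restart
ALTER SYSTEM SET effective_cache_size = '24GB';   -- planner hint only, no allocation
ALTER SYSTEM SET work_mem = '64MB';               -- per sort/hash operation, per query
ALTER SYSTEM SET maintenance_work_mem = '1GB';    -- index builds, VACUUM
SELECT pg_reload_conf();                          -- picks up settings that don't need a restart

-- Confirm what the running server is actually using:
SHOW shared_buffers;
SHOW work_mem;
```

Note also that Postgres deliberately leans on the OS page cache, so a small resident size for the postgres processes does not by itself mean the data is not cached.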
I recently came to know that if I delete or modify a column, SQL Server still keeps the space allocated behind the scenes, so I need to rebuild the indexes and shrink the database. I did that and my database size went down from 2.82 GB to 1.62 GB, which is good. Now I am confused, and several questions on this subject come to mind; please help me with them:
1. Is it necessary to rebuild (refresh) the indexes at a particular interval?
2. Is it necessary to shrink the database after a particular time so that performance stays up to date?
3. If the above is yes, at what interval should I shrink the database?
I also have no idea what to do about the disk space problem: I have 77,000 records taking up 2.82 GB of data space, which is not acceptable. I have two tables, and only one of them has an NVARCHAR(MAX) column, so the database should take up far less space. Can anyone help me with this? Thanks in advance.
I am going to simplify things a little for you so you might want to read up about the things I talk about in my answer.
There are two concepts you must understand: allocated space vs. free space. A database might be 2 GB in size but only be using 1 GB, so it has allocated 2 GB with 1 GB of free space. When you shrink a database it removes that free space, so free space drops to roughly zero. Don't think a smaller file size is faster. As your database grows it has to allocate space again, and when you shrink the file and it then grows every so often, it cannot allocate space contiguously. This creates fragmentation of the files, which slows you down even more.
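To see the split for yourself, something like this sketch reports allocated vs. used space per file in the current database on recent SQL Server versions:

```sql
-- Allocated vs. used space per file; size and SpaceUsed are in 8 KB pages.
SELECT
    name                                                 AS logical_name,
    size * 8 / 1024                                      AS allocated_mb,
    FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024           AS used_mb,
    (size - FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024  AS free_mb
FROM sys.database_files;
```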
With data files (.mdf) this is not so bad, but with the transaction log, shrinking the log can lead to virtual log file fragmentation issues that slow you down. So, in a nutshell, there is very little reason to shrink your database on a schedule. Go read about virtual log files in SQL Server; there are a lot of articles about it, and this is a good article about shrinking log files and why it is bad. Use it as a starting point.
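If you want to see how many virtual log files you have, a sketch like this works on SQL Server 2016 SP2 and later (on older versions, DBCC LOGINFO returns one row per VLF for the current database):

```sql
-- VLF count per database (SQL Server 2016 SP2+).
SELECT
    DB_NAME(d.database_id) AS database_name,
    COUNT(*)               AS vlf_count
FROM sys.databases AS d
CROSS APPLY sys.dm_db_log_info(d.database_id) AS li
GROUP BY d.database_id
ORDER BY vlf_count DESC;
```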
Secondly, indexes get fragmented over time. This mainly hurts the performance of SELECT queries, but it affects other queries as well, so you need to perform some index maintenance on the database. See this answer on how to defragment your indexes.
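As a rough sketch of what that maintenance looks like (the index and table names are placeholders, and the 5%/30% thresholds are common rules of thumb, not hard rules):

```sql
-- Check fragmentation for the current database.
SELECT
    OBJECT_NAME(ips.object_id)        AS table_name,
    i.name                            AS index_name,
    ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON ips.object_id = i.object_id
   AND ips.index_id  = i.index_id
WHERE ips.avg_fragmentation_in_percent > 5
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Then, per index:
ALTER INDEX IX_YourIndex ON dbo.YourTable REORGANIZE;  -- roughly 5-30% fragmentation
ALTER INDEX IX_YourIndex ON dbo.YourTable REBUILD;     -- above roughly 30%
```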
Update:
The right time to rebuild indexes is not clear-cut. Index rebuilds lock the index for the duration; essentially the index is offline while it is rebuilt. In your case it would be fast, since 77,000 rows is nothing for SQL Server, but rebuilding the indexes will still consume server resources. If you have Enterprise Edition you can do online index rebuilds, which will not lock the indexes but will consume more space.
So what you need to do is find a maintenance window. For example, if your system is used from 8:00 till 17:00, you can schedule maintenance rebuilds after hours. Schedule this with SQL Server Agent; the script in the link can be automated to run.
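If it helps, here is a hedged sketch of wiring such a job up in SQL Server Agent; the job name, the schedule and the dbo.IndexMaintenance procedure are all placeholders for whatever script you end up using:

```sql
USE msdb;
GO

-- Create the job and a single T-SQL step (procedure name is a placeholder).
EXEC dbo.sp_add_job      @job_name  = N'Nightly index maintenance';
EXEC dbo.sp_add_jobstep  @job_name  = N'Nightly index maintenance',
                         @step_name = N'Rebuild/reorganize indexes',
                         @subsystem = N'TSQL',
                         @database_name = N'YourDatabase',
                         @command   = N'EXEC dbo.IndexMaintenance;';

-- Run every day at 22:00, outside the 8:00-17:00 business window.
EXEC dbo.sp_add_schedule @schedule_name     = N'Nightly at 22:00',
                         @freq_type         = 4,       -- daily
                         @freq_interval     = 1,
                         @active_start_time = 220000;  -- HHMMSS
EXEC dbo.sp_attach_schedule @job_name      = N'Nightly index maintenance',
                            @schedule_name = N'Nightly at 22:00';
EXEC dbo.sp_add_jobserver   @job_name      = N'Nightly index maintenance';
GO
```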
Your database is not big. I have seen SQL Server handle tables of 750 GB without strain when the IO is split over several disks. The slowest part of any database server is not the CPU or the RAM but the IO path to the disks, although that is a huge topic in its own right. Back to your point: you are storing data in NVARCHAR(MAX) fields, which I assume is large text. After you shrink the database you see the size at 1.62 GB, which means each row is about 1.62 GB / 77,000, or roughly 22 KB. That seems reasonable. Export the table to a text file and check its size; you will be surprised, as it will probably be larger than 1.62 GB.
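To check the numbers yourself, sp_spaceused reports reserved, data and index space for the table (the table name here is a placeholder):

```sql
-- Reserved, data, index and unused space for the large table.
EXEC sp_spaceused N'dbo.YourLargeTable';
```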
Feel free to ask more detail if required.
I am seeing CPU spikes on the database server every day, and I found out that the indexes have not been rebuilt for quite a while. Could that be the reason for those spikes?
Fragmentation might cause a little more CPU load but it does not cause spikes. Why would it? Look elsewhere. Find out what queries are running during the spikes, and look at long-running queries with lots of CPU.
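A sketch of both angles, assuming a reasonably recent SQL Server: what is consuming CPU right now, and which cached queries have burned the most CPU overall:

```sql
-- What is running right now, ordered by CPU time (run during a spike).
SELECT
    r.session_id,
    r.cpu_time,
    r.total_elapsed_time,
    t.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id <> @@SPID
ORDER BY r.cpu_time DESC;

-- Historical heavy hitters by total CPU since their plans were cached.
SELECT TOP (20)
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    t.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS t
ORDER BY qs.total_worker_time DESC;
```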
Yes, but it depends heavily on the number of records in that particular table. Fragmentation can cause CPU spikes, and even 100% CPU utilization on the server, under load.
While searching for pages through indexes under load, a query gets a 4-millisecond quantum on the CPU. If the query running on the CPU needs additional pages that are not in memory, and fragmentation forces the storage engine to jump back and forth through a B+ tree whose pages are scattered, the query is moved off the CPU to the waiter list; once the data is available it moves to the runnable queue, where it waits for the CPU again.
Just imagine: if the table has a huge number of records, fragmentation will definitely have an impact on CPU.
Suppose I have a table that stores 100 million records, with a column holding strings of varying sizes up to 20 characters. I need to index this column, and I only have a machine with 2 GB of RAM. Is this sufficient to perform such a task? Is MySQL a recommended database engine for this kind of storage?
Databases are generally designed in a way that allows them to work with more data than you have available RAM. Giving the database more working memory will speed things up, but it should be able to build the index and perform searches on it just fine.
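For illustration only (the table and column names are made up), the index itself is a single statement, and on InnoDB the buffer pool size is the main memory setting you would raise within that 2 GB:

```sql
-- Secondary index on the string column (names are placeholders).
CREATE INDEX idx_value ON strings_table (value_column);

-- InnoDB buffer pool: leave room for the OS and other processes on a 2 GB box.
-- Dynamic from MySQL 5.7 onwards; older versions need a config change and restart.
SET GLOBAL innodb_buffer_pool_size = 1073741824;  -- 1 GB
```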
If you have 2 GB of main memory, then yes, you should be able to build the index without any problems; virtual memory is a wonderful thing, and the DBMS may well arrange to spill data to disk as it goes.
If you only have 2 GB of disk space, you don't have enough space for the data and the index.
To no-one's surprise, it is 2 GB of main memory, not 2 GB of disk (that comment was mainly in jest - but these days, if someone says 256 GB, it is not clear whether they're referring to disk space or main memory; it could be either).
Yes, if the DBMS cannot create the index within that constraint, it is not worthy of being termed a DBMS.
MySQL can probably do the job. It isn't what I'd recommend, but I'm very biased in this area as a result of being one of the developers of an alternative (commercial) DBMS. We don't have enough information about your budget etc. to be able to advise reliably.
I have a problem with a large database I am working with, which resides on a single drive. This database contains around a dozen tables; the two main ones are around 1 GB each and cannot be made smaller. My problem is that the disk queue for the database drive is around 96% to 100% even when the website that uses the DB is idle. What optimisation could be done, or what is the source of the problem? The DB on disk is 16 GB in total and almost all the data is required: transaction data, customer information and stock details.
What are the reasons why the disk queue is always high no matter the website traffic?
What can be done to help improve performance on a database this size?
Any suggestions would be appreciated!
The database is an MS SQL 2000 database running on Windows Server 2003 and, as stated, 16 GB in size (data file size on disk).
Thanks
Well, how much memory do you have on the machine? If SQL Server can't keep the pages in memory, it is going to have to go to the disk to get its information. If your memory is low, you might want to consider upgrading it.
Since the database is so big, you might want to consider adding two separate physical drives and then putting the transaction log on one drive and partitioning some of the other tables onto the other drive (you have to do some analysis to see what the best split between tables is).
In doing this, you are allowing IO accesses to occur in parallel, instead of in serial, which should give you some more performance from your DB.
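On SQL Server 2000 the usual way to relocate the log file is detach, move, re-attach; the database name and paths below are placeholders (later versions can use ALTER DATABASE ... MODIFY FILE instead):

```sql
-- Detach, physically move the .ldf to the new dedicated log drive, then re-attach.
EXEC sp_detach_db 'YourDb';

-- (copy YourDb_log.ldf to the new drive here)

EXEC sp_attach_db 'YourDb',
    'D:\SQLData\YourDb.mdf',
    'F:\SQLLogs\YourDb_log.ldf';
```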
Before buying more disks and shifting things around, you might also update statistics and check your queries - if you are doing lots of table scans and so forth you will be creating unnecessary work for the hardware.
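Updating statistics is cheap to try first; both of these work on SQL Server 2000 and later (the table name is a placeholder):

```sql
-- Refresh statistics database-wide...
EXEC sp_updatestats;

-- ...or for one table, with a full scan for maximum accuracy.
UPDATE STATISTICS dbo.Transactions WITH FULLSCAN;
```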
Your database isn't that big after all - I'd first look at tuning your queries. Have you profiled what sort of queries are hitting the database?
If your disk activity is that high while your site is idle, I would look for other processes that might be running and affecting it. For example, are you sure there aren't any scheduled backups running? Especially with a large DB, these could run for a long time.
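The backup history in msdb is an easy place to check whether anything like that overlaps the busy periods (this works on SQL Server 2000 as well):

```sql
-- Most recent backups recorded on this instance.
SELECT TOP 20
    database_name,
    type,                 -- D = full, I = differential, L = log
    backup_start_date,
    backup_finish_date
FROM msdb.dbo.backupset
ORDER BY backup_start_date DESC;
```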
As Mike W pointed out, there is usually a lot you can do with query optimization with existing hardware. Isolate your slow-running queries and find ways to optimize them first. In one of our applications, we spent literally 2 months doing this and managed to improve the performance of the application, and the hardware utilization, dramatically.