How many rows in a database are TOO MANY?

I have a MySQL InnoDB table with 1,000,000 records. Is this too much? Or can databases handle this and more? I ask because I noticed that some queries (for example, getting the last row from a table) are slower (seconds) in the table with 1 million rows than in one with 100.

I've a MySQL InnoDB table with 1000000 registers. Is this too much?
No, 1,000,000 rows (AKA records) is not too much for a database.
I ask because I noticed that some queries (for example, getting the last register of a table) are slower (seconds) in the table with 1 million registers than in one with 100.
There's a lot to account for in that statement. The usual suspects are:
Poorly written query
Not using a primary key, assuming one even exists on the table
Poorly designed data model (table structure)
Lack of indexes
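As a small illustration of the last two points (a hypothetical sketch; the table and column names are invented), a "get the latest row" query is only fast if an index supports the ordering:

    -- Without an index on created_at, MySQL must scan and sort the whole table:
    SELECT * FROM orders ORDER BY created_at DESC LIMIT 1;

    -- With an index, InnoDB can read a single entry from the end of the index:
    CREATE INDEX idx_orders_created_at ON orders (created_at);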

I have a database with more than 97,000,000 records (30 GB data file), and I'm having no problems.
Just remember to define and tune your table indexes.
So it's obvious that 1,000,000 is not MANY! (But if you don't index, yes, it is MANY.)

Use EXPLAIN to examine your query and see if there is anything wrong with the query plan.
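For example (hypothetical table and column names), you just prefix the query with EXPLAIN and look at what the optimizer intends to do:

    EXPLAIN SELECT * FROM orders ORDER BY created_at DESC LIMIT 1;
    -- type = ALL means a full table scan; type = index, range or ref means an index is used.
    -- The "rows" column is the optimizer's estimate of how many rows it must examine.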

I think this is a common misconception - size is only one part of the equation when it comes to database scalability. There are other issues that are hard (or harder):
How large is the working set (i.e. how much data needs to be loaded in memory and actively worked on). If you just insert data and then do nothing with it, it's actually an easy problem to solve.
What level of concurrency is required? Is there just one user inserting/reading, or do we have many thousands of clients operating at once?
What levels of durability and consistency of performance are required? Do we have to make sure that we can honor every commit? Is it okay if the average transaction is fast, or do we want to make sure that all transactions are reliably fast (Six Sigma-like quality control - http://www.mysqlperformanceblog.com/2010/06/07/performance-optimization-and-six-sigma/)?
Do you need to perform operational tasks, such as ALTERing the table schema? In InnoDB this is possible, but incredibly slow, since it often has to create a temporary table in the foreground (blocking all connections).
So I'm going to say the two limiting factors are going to be:
Your own skill at writing queries / having good indexes.
How much pain you can tolerate waiting on ALTER TABLE statements.
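As a rough illustration of the second point (the table and column here are hypothetical), a schema change like the one below traditionally made InnoDB copy the whole table, blocking writes while it ran; newer MySQL versions can perform many such changes online, but that depends on the version and on the specific ALTER.

    -- Historically this rebuilt the entire table row by row in InnoDB:
    ALTER TABLE orders ADD COLUMN status TINYINT NOT NULL DEFAULT 0;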

If you mean 1 million rows, then it depends on how your indexing is done and the configuration of your hardware. A million rows is not a large amount for an enterprise database, or even a dev database on decent equipment.
If you mean 1 million columns (not sure that's even possible in MySQL), then yes, this seems a bit large and will probably cause problems.

Register? Do you mean record?
One million records is not a real big deal for a database these days. If you run into any issue, it's likely not the database system itself, but rather the hardware that you're running it on. You're not going to run into a problem with the DB before you run out of hardware to throw at it, most likely.
Now, obviously some queries are slower than others, but if two very similar queries run in vastly different times, you need to figure out what the database's execution plan is and optimize for it, i.e. use correct indexes, proper normalization, etc.
Incidentally, there is no such thing as a "last" record in a table; from a logical standpoint, rows have no inherent order.
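A minimal sketch of what that means in practice (table and column names invented): "last" only has a meaning once you state an ordering, and the query stays cheap when that ordering matches an index.

    -- "Last" defined explicitly by an ordering column:
    SELECT * FROM events ORDER BY id DESC LIMIT 1;  -- fast if id is the (clustered) primary key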

I've seen non-partitioned tables with several billion (indexed) records, that self-joined for analytical work. We eventually partitioned the thing but honestly we didn't see that much difference.
That said, that was in Oracle and I have not tested that volume of data in MySQL. Indexes are your friend :)

Assuming that by "registers" you mean "records": no, it's not too much. MySQL scales really well and can hold as many records as you have space for on your hard disk.
Obviously, though, search queries will be slower. There is really no way around that except making sure that the fields are properly indexed.

The larger the table gets (as in more rows in it), the slower queries will typically run if there are no indexes. Once you add the right indexes your query performance should improve or at least not degrade as much as the table grows. However, if the query itself returns more rows as the table gets bigger, then you'll start to see degradation again.
While 1M rows are not that many, it also depends on how much memory you have on the DB server. If the table is too big to be cached in memory by the server, then queries will be slower.

The query as provided will be exceptionally slow because it uses a sort-merge to order the data.
I would recommend rethinking the design so that indexes are used to retrieve the data, or making sure it is already stored in that order so no sorting is needed.

Related

Make query run faster - IT HAS NO JOIN

I have a really huge amount of data that used to require joins all over the place just to read (because that was so slow, the team decided to gather it all into one table), but now everything is literally in one table (no join needed).
It's still very slow, though. Filtering on just a one-day range of events leads to a timeout (it takes more than 10 seconds; yes, that's how bad it is).
What should I suggest to my DBA?
What is the "selectivity"? That is, how many rows does your select expect to retrieve? 100% of the rows? 1% of the rows? 0.01% of the rows?
1. Low selectivity
If the selectivity is low (i.e. less than 5%, ideally less than 0.5%), then good indexing is the best practice.
If so, which columns in the WHERE clause (the filtering columns) have the best (lowest) selectivity? Add these columns first in the index.
Once you have decided on the best index, you can make the table a "clustered index" table using that index. That way the heap will be presorted (fast lookup) by the index columns, for improved I/O, since the disk blocks will be read sequentially.
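A minimal sketch, assuming (purely as an example) that the most selective filter columns are event_time and event_type:

    -- Composite index with the most selective column first:
    CREATE INDEX ix_events_time_type ON events (event_time, event_type);

How you then "cluster" the table on that key depends on the engine: in MySQL/InnoDB the primary key is the clustered index, SQL Server has CREATE CLUSTERED INDEX, and Oracle's closest equivalent is an index-organized table.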
2. High selectivity
If the selectivity is high (20% or more), there's not much you can do on your side (development). You could still get some improvement by:
Removing unneeded columns.
Make sure the select uses a FULL TABLE SCAN.
Ask the DBA to assign more resources (SGA, disk priority, parallelism, etc.)
3. Otherwise
The amount of data you have vastly exceeds the database resources you have. There's nothing you can do about it, except to tell the client about this reality, and:
Work together to find a way of defining smaller queries that are achievable.
4. Finally
If you don't understand the terms selectivity, full table scan, indexing, database resources, heap, and disk blocks, I would recommend you study them. I'm fairly sure you need to fully understand them right now!
As others have said, you need an index. However, if it's really huge, you can partition the data.
This allows you to drop sections of the data without using time-consuming deletes. For example, if you're working with some sort of historical data and want to keep 3 months' worth, you can partition by month, then each month drop the oldest partition.
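A hedged sketch of what that could look like in MySQL (the table, column and partition names are hypothetical, and note that MySQL requires the partitioning column to be part of every unique key on the table):

    ALTER TABLE events
      PARTITION BY RANGE (TO_DAYS(event_date)) (
        PARTITION p2024_01 VALUES LESS THAN (TO_DAYS('2024-02-01')),
        PARTITION p2024_02 VALUES LESS THAN (TO_DAYS('2024-03-01')),
        PARTITION p2024_03 VALUES LESS THAN (TO_DAYS('2024-04-01'))
      );

    -- Dropping the oldest month is then a quick metadata operation, not a slow DELETE:
    ALTER TABLE events DROP PARTITION p2024_01;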
However on a more general note, it's rarely a good idea to take a slow multi-table query and glom it all together to improve performance. What you really need is to figure out what's wrong with the slow query and fix it.
This is a job for your DBA.

SQL - split a large table according to how frequently its fields are accessed?

I have a table that has 50 fields:
10 Fields that are almost always needed.
40 Fields that are very rarely needed.
I would roughly say that the fields in (1) need to be accessed 1,000 times more frequently than the fields in (2).
Should I split them to two tables with one-to-one relation, or keep all in the same table?
The process that you are describing is sometimes referred to as "vertical partitioning". Taken to an extreme (one column per vertical partition), this is how columnar databases store data. Unfortunately (to the best of my knowledge), Postgres does not currently have direct support for vertical partitioning.
Your idea of splitting the data into two tables is fine. I would note the following:
You will need to modify queries that use the extra columns to use the second table. (You can wrap the join into a view which you use when you want the extra columns.)
If both tables have a clustered primary key that connects them, then the join should be really fast.
If you are inserting/updating/deleting data, then you need to be careful about synchronization. I think you can handle this with an INSTEAD OF trigger on a view combining the tables.
If some records do not have extra columns, this can be a big win on the space side.
If all records and all columns are going to be loaded into the cache, then this probably is not a big win.
This can be a big performance win, under some circumstances. But there is additional manual work to keep the tables synchronized.
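A minimal sketch of such a split (Postgres syntax, with invented table and column names): the frequently used columns stay in the main table, the rarely used ones move to a 1:1 side table sharing the primary key, and a view hides the join for callers that still want everything.

    CREATE TABLE items_core (
        item_id bigint PRIMARY KEY,
        name    text NOT NULL,
        price   numeric
    );

    CREATE TABLE items_detail (
        item_id          bigint PRIMARY KEY REFERENCES items_core (item_id),
        long_description text,
        internal_notes   text
    );

    -- Queries that need the rare columns can go through a joining view:
    CREATE VIEW items_full AS
        SELECT c.item_id, c.name, c.price, d.long_description, d.internal_notes
        FROM items_core c
        LEFT JOIN items_detail d USING (item_id);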
There's really not nearly enough information here to estimate (never mind actually quantify) what the benefits might be, but the costs are very clear -- more complex code, a more complex schema, probably greater overall space usage, and a performance overhead when adding and removing rows.
A performance improvement might come from scanning a smaller amount of data when performing a full table scan, or from an increased likelihood in finding data blocks in memory when required, and an overall smaller memory footprint, but without specific information on the types of operation commonly performed, and whether the server is under memory pressure, no reliable advice can be given.
Be very wary of making your system more complex as a side-effect of uncertain performance gains.

Oracle performance after removing duplicate records

I have a table in Oracle 11g R2 that contains approximately 90,000 spatial (geographic) records. Hundreds of the records are duplicated due to bad practice of the users.
Is there any way to measure the performance of the database/table before and after removing the duplicates?
A table with 90,000 records is quite a small table. Hundreds of duplicates is less than 1%, which is also quite a small amount of "garbage". That amount can't cause big performance problems (if your application has a good design). I don't think you can create tests that show any significant difference in performance between "before" and "after".
Also, you can delete the duplicates and then create a unique constraint to prevent this situation in the future.
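For example (the table and key column here are hypothetical), the classic Oracle pattern keeps one row per key value and then locks the door behind it:

    -- Keep the row with the lowest ROWID for each duplicated key:
    DELETE FROM locations a
     WHERE a.ROWID > (SELECT MIN(b.ROWID)
                        FROM locations b
                       WHERE b.site_code = a.site_code);

    -- Prevent new duplicates from being inserted:
    ALTER TABLE locations ADD CONSTRAINT uq_locations_site UNIQUE (site_code);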
One way to measure global performance of an Oracle database is via the facilities of the Grid Control (aka Enterprise Manager) that shows a number of measurements (CPU, IOs, memory, etc).
Another way is to run some typical queries in sqlplus (with SET TIMING ON) and compare their response times before the removal and after the removal. That is assuming that by "performance" you mean the elapsed time for those queries.
Like Dmitry said, 90,000 rows is a very small table, with a tiny fraction of duplicate rows. The presence or absence of those duplicates is unlikely to make any noticeable difference.
i. Create a temp table from the source table (with the indexes, of course).
ii. Then delete the duplicated rows from the temp table (or from the source, it makes no difference).
iii. Compare the explain plans of both tables and you will get the answer.

How long should a query that returns 5 million records take?

I realise the answer should probably be 'as little time as possible' but I'm trying to learn how to optimise databases and I have no idea what an acceptable time is for my hardware.
For a start, I'm using my local machine with a copy of SQL Server 2008 Express. I have a dual-core processor, 2 GB of RAM and a 64-bit OS (if that makes a difference). I'm only using a simple table with about 6 varchar fields.
At first I queried the data without any indexing. This took a ridiculously long amount of time so I cancelled and added a clustered index (using the PK) to the table. This cut the time down to 1 minute 14 sec. I have no idea if this is the best I can get or whether I'm still able to cut this down even further?
Am I limited by my hardware or is there anything else I can do to my table/database/queries to get results faster?
FYI I'm only using a standard SELECT * FROM <Table> to retrieve my results.
EDIT: Just to clarify, I'm only doing this for testing purposes. I don't NEED to pull out all the data, I'm just using that as a consistent test to see if I can cut down the query times.
I suppose what I'm asking is: Is there anything I can do to speed up the performance of my queries other than a) upgrading hardware and b) adding indexes (assuming the schema is already good)?
I think you are asking the wrong question.
First of all - why do you need so many rows at one time on the local machine? What do you want to do with them? I'm asking because I think you want to transfer this data somewhere, so you should be measuring how long it takes to transfer it.
Some advice:
Your applications should not select 5 million records at a time. Try to split your query and get the data in smaller sets.
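One hedged way to do that on SQL Server 2008 (which has no OFFSET/FETCH) is keyset paging; the table and column names below are made up:

    DECLARE @LastId int = 0;    -- highest Id returned by the previous batch

    -- Fetch the next batch after the last Id seen; assumes an indexed Id column:
    SELECT TOP (10000) *
      FROM dbo.MyTable
     WHERE Id > @LastId
     ORDER BY Id;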
UPDATE:
Because you are doing this for testing, I suggest that you
Remove * from your query - it takes SQL Server some time to resolve this.
Put your data in temporary storage; try using a VIEW or a temporary table for this.
Use plan caching on your server to improve performance.
But even if you're just testing, I still don't understand why you would need such tests if your application will never use such a query. Testing just for the sake of testing is a bad use of time.
Look at the query execution plan. If your query is doing a table scan, it will obviously take a long time. The query execution plan can help you decide what kind of indexing you would need on the table. Also, creating table partitions can help sometimes in cases where the data is partitioned by a condition (usually date and time).
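A small sketch of how you might look at that in SQL Server (the table name is just a placeholder):

    -- Show I/O and timing statistics for the test query:
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;
    SELECT * FROM dbo.MyTable;
    -- In Management Studio, "Include Actual Execution Plan" (Ctrl+M) shows whether
    -- the statement runs as a table scan or uses an index.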
I did 5.5 million in 20 seconds. That's taking over 100k schedules with different frequencies and forecasting them for the next 25 years. It was just max-scenario testing, but it proves the speed you can achieve in a scheduling system, as an example.
The best optimization approach depends on the indexing strategy you choose. Like many of the above answers, I too would say that partitioning the table can sometimes help. It's also not best practice to query billions of records in a single go; you will get much better results if you query partially, in iterations. You may check this link to clear up doubts about the minimum requirements for SQL Server 2008: Minimum H/W and S/W Requirements for Sql server 2008
When fetching 5 million rows you are almost 100% going to spool to tempdb. You should try to optimize your tempdb by adding additional files. If you have multiple drives on separate disks, you should split the table data into different .ndf files located on separate disks. Partitioning won't help when you are querying all the data on the disk.
You can also use the MAXDOP query hint to force parallelism; this will increase CPU utilization. Ensure that the columns contain as few NULLs as possible, and rebuild your indexes and statistics.

Performance benefit when SQL query is limited vs calling entire row?

How much of a performance benefit is there from selecting only the required fields in a query instead of querying the entire row? For example, if I have a row of 10 fields but only need 5 fields in the display, is it worth querying only those 5? What is the performance benefit of this limitation vs. the risk of having to go back and add fields to the SQL query later if needed?
It's not just the extra data aspect that you need to consider. Selecting all columns will negate the usefulness of covering indexes, since a bookmark lookup into the clustered index (or table) will be required.
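A hedged SQL Server example (hypothetical table and columns): the narrow query below can be answered entirely from the covering index, while SELECT * forces a key lookup back into the clustered index for the remaining columns.

    CREATE NONCLUSTERED INDEX ix_orders_customer
        ON dbo.Orders (CustomerId)
        INCLUDE (OrderDate, TotalAmount);

    -- Covered: every referenced column lives in the index, so no bookmark lookup.
    SELECT CustomerId, OrderDate, TotalAmount
      FROM dbo.Orders
     WHERE CustomerId = 42;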
It depends on how many rows are selected and how much memory those extra fields consume. It can run much slower if, for example, several text/blob fields are present or many rows are selected.
How is adding fields later a risk? Modifying queries to fit changing requirements is a natural part of the development process.
The only benefit I know of to explicitly naming your columns in your SELECT statement is that if a column your code is using gets renamed, your SELECT statement will fail before your code does. Even better, if your SELECT statement is within a proc, the proc and the DB script will not compile. This is very handy if you are using tools like VS DB Edition to compile/verify DB scripts.
Otherwise the performance difference would be negligible.
The number of fields retrieved is a second order effect on performance relative to the large overhead of the SQL request itself -- going out of process, across the network to another host, and possibly to disk on that host takes many more cycles than shoveling a few extra bytes of data.
Obviously, if the extra fields include a megabyte blob, the equation is skewed. But my experience is that the transaction overhead is of the same order as, or larger than, the actual data retrieved. I vaguely remember from many years ago that an "empty" NOP TNS request is about 100 bytes on the wire.
If the SQL server is not the same machine from which you're querying, then selecting the extra columns transfers more data over the network (which can be a bottleneck), not forgetting that it also has to read more data from the disk and allocate more memory to hold the results.
There's not one thing that would cause a problem by itself, but add things up and they all together cause performance issues. Every little bit helps when you have lots of either queries or data.
The risk, I guess, would be that you have to add the fields to the query later, which possibly means changing code, but then you generally have to add more code to handle extra fields anyway.