Optimize Postgres for insert/read-only workloads - sql

I'm working on a database with the following characteristics:
- Many inserts (in the range of 1k/second)
- Lots of indices on the data, complex joins
- NO deletes or updates, only inserts, reads and table drops
- I don't care if reads reflect a perfectly accurate state
- Data isn't critical; I'm already running fsync=off
I already know a fair bit about postgres optimization, but I was hoping there might be some additional tricks that are more suited to my particular use case.

You can effectively disable the WAL, perhaps by pointing it at /dev/null or a RAM disk. Note there is some speculation that you may not be able to restart the DB after even a clean stop, so I advise caution and testing.
Make sure you cluster your tables. Partitioning might help as well.
Certainly disable synchronous_commit.
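For illustration, a minimal postgresql.conf sketch for this kind of non-durable, insert-heavy workload (the specific values are assumptions to benchmark, not recommendations):
# postgresql.conf -- throughput over durability (data loss on crash is acceptable here)
fsync = off                         # already in use by the asker
synchronous_commit = off            # don't wait for the WAL flush at COMMIT
full_page_writes = off              # little point once fsync is off
wal_buffers = 16MB                  # absorb insert bursts before flushing WAL
checkpoint_segments = 64            # fewer, larger checkpoints (pre-9.5; later versions use max_wal_size)
checkpoint_completion_target = 0.9  # spread checkpoint I/O out over time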
http://wiki.postgresql.org/wiki/FAQ#How_do_I_tune_the_database_engine_for_better_performance.3F

You might want to use unlogged tables (available as of 9.1) for that. They are basically tables for which no WAL is written.
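A minimal sketch of what that looks like (the table and columns are made up for illustration; note that unlogged tables are truncated after a crash):
-- No WAL is written for this table, so inserts are much cheaper
CREATE UNLOGGED TABLE measurements (
    id          bigserial PRIMARY KEY,
    sensor_id   integer NOT NULL,
    recorded_at timestamptz NOT NULL DEFAULT now(),
    value       double precision
);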

Related

What happens if I restart a MariaDB server while it is Repairing or Optimizing a very large table?

What happens if I restart a MariaDB server while it is Repairing or Optimizing a very large table (like at least 20GB)? Probably because I need to use the table for other stuff and I'm just getting plain bored.
REPAIR and OPTIMIZE are designed to be crash-safe. (Or at least to a large extent.)
OPTIMIZE, for example, copies the table over to a tmp table name. When finished, it internally does a RENAME TABLE, which is fast.
OPTIMIZE is needed in only very rare cases for MyISAM. It is even less needed for InnoDB. What is your use case? I will probably counter that it is 'futile' or 'not worth the effort'.
Repair is needed only for MyISAM. I hope you are not using that antiquated engine.
More
Consider switching to InnoDB; we can discuss this further. REPAIR is much less often needed, plus is automated.
What is the schema? What is the data flow like? I may have tips on avoiding the indexes getting out of date. (The way the .MYD file is laid out is problematic for certain data flows.)
Use ANALYZE TABLE (instead) when there is no complaint about index corruption yet queries are suddenly slow.
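For reference, hedged examples of the two statements being discussed (the table name is a placeholder):
-- Refreshes index statistics only; cheap, no table rebuild
ANALYZE TABLE my_big_table;
-- Full rebuild: copies the table and swaps it in with an internal RENAME; rarely needed
OPTIMIZE TABLE my_big_table;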

Slow Simultaneous Writes to Same Table in PostgreSQL Database

I suspect this question may be better suited for the Database Administrators site, so LMK if it is and I'll move it. :)
I'm something of a database/Postgres beginner here so help me out. I have a system set up to process 10 things in parallel and write output of those things to the same table in the same Postgres database. The writes happen ok but they take forever. My log files show that I'll have results for 30,000 of these things, but only 7,000 of them are reflected in the database.
I suspect Postgres is queueing up the writes for some reason, and my guess is that this happens because that table has an auto-incrementing primary key. If I'm trying to write 10 records to the same table simultaneously, I would assume they'd have to be queued, because otherwise how is the primary key going to be set?
Do I have this right, or is my database horribly misconfigured? My sysadmin doesn't typically do databases, so if you have any tuning suggestions, even basic stuff, I'd be glad to hear them. :)
I suspect Postgres is queueing up the writes for some reason, and my guess is that this happens because that table has an auto-incrementing primary key. If I'm trying to write 10 records to the same table simultaneously, I would assume they'd have to be queued, because otherwise how is the primary key going to be set?
Nope, that's not it.
If you read the documentation on sequences you'll see that they're exempt from transactional visibility and rollback specifically for this reason. An ID generated with nextval is not re-used on rollback.
Do I have this right, or is my database horribly misconfigured? My sysadmin doesn't typically do databases, so if you have any tuning suggestions, even basic stuff, I'd be glad to hear them. :)
It's more likely that you're doing individual commits, one per insert, on a system with really slow fsync()s, like a single magnetic hard drive. You might also have your checkpoint interval set too low (see the PostgreSQL logs, where warnings about this will appear if so), have too many indexes slowing every insert down, etc.
You should look at the PostgreSQL logs.
Also, please see the primer I wrote on the topic of improving insert performance.
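If per-row commits are the cause, the usual fix is to batch many inserts into one transaction (or one multi-row INSERT) so you pay for one flush per batch instead of one per row; a rough sketch with made-up table and column names:
-- Slow: one commit (and one flush) per row under autocommit
-- INSERT INTO results (thing_id, output) VALUES (1, '...');
-- Better: one commit for the whole batch
BEGIN;
INSERT INTO results (thing_id, output) VALUES
    (1, '...'),
    (2, '...'),
    (3, '...');
COMMIT;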

Improve Log Exceptions

I am planning to use log4net in a new web project. In my experience, I've seen how big the log table can get, and I've also noticed that errors and exceptions are repeated. For instance, I just queried a log table that has more than 132,000 records; using DISTINCT I found that only about 2,500 records (~2%) are unique, and the rest (~98%) are duplicates. So I came up with this idea to improve logging.
Add a couple of new columns, counter and updated_dt, that are updated every time we try to insert the same record.
If we want to track the user that caused the exception, we need to create a user_log or log_user table to map the N-N relationship.
This model might make the system slow and inefficient if it has to compare all that long text... Here is the trick: we should also have a binary(16) or binary(32) hash column that hashes the message and the exception, and configure an index on it. We can use HASHBYTES to help us.
I am not a DB expert, but I think that is the fastest way to locate a similar record. And because hashing doesn't guarantee uniqueness, it will help to locate similar records much faster; we can then compare by message or exception directly to make sure they really are duplicates.
This is a theoretical/practical solution, but will it work, or will it just bring more complexity? What aspects am I leaving out, or what other considerations do I need to keep in mind? A trigger would do the insert-or-update job, but is a trigger the best way to do it?
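To make the idea concrete, here is a rough T-SQL sketch of the proposed schema (all names are invented; the hash would be computed by the application or a trigger, e.g. HASHBYTES('SHA2_256', message + exception)):
CREATE TABLE error_log (
    id         int IDENTITY(1,1) PRIMARY KEY,
    message    nvarchar(max) NOT NULL,
    exception  nvarchar(max) NULL,
    counter    int        NOT NULL DEFAULT 1,                 -- how many times this entry was seen
    created_dt datetime2  NOT NULL DEFAULT SYSUTCDATETIME(),
    updated_dt datetime2  NOT NULL DEFAULT SYSUTCDATETIME(),
    msg_hash   binary(32) NOT NULL                            -- hash of message + exception
);
-- The fixed-size hash can be indexed, unlike the nvarchar(max) columns
CREATE INDEX IX_error_log_msg_hash ON error_log (msg_hash);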
I wouldn't be too concerned with a log table of 132,000 records, to be honest; I have seen millions, if not billions, of records in a log table. If you are logging 132,000 records every few minutes, then you might want to tone it down a bit.
I think the idea is interesting, but here are my major concerns:
You could actually hurt the performance of your application by doing this. The Log4Net ADO.NET appender is synchronous. This means that if you make your INSERT any more complicated than it needs to be (looking up whether the data already exists, calculating hash codes, etc.) you will block the thread that is doing the logging. That's not good! You could fix this by writing to some sort of staging table and processing it out of band with a job or something, but now you've created a bunch of moving parts for something that could be much simpler.
Time could probably be better spent doing other things. Storage is cheap, developer hours aren't, and logs don't need to be extremely fast to access, so a denormalized model should be fine.
Thoughts?
Yes you can do that. It is a good idea and it will work. Watch out for concurrency issues when inserting from multiple threads or processes. You probably need to investigate locking in detail. You should look into locking hints (in your case UPDLOCK, HOLDLOCK, ROWLOCK) and the MERGE statement. They can be used to maintain the dimension table.
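A hedged sketch of the insert-or-increment step using MERGE with the hints mentioned above (table, column, and parameter names are assumptions carried over from the sketch in the question):
-- Upsert one log entry keyed by its hash; HOLDLOCK serializes concurrent writers on that key
MERGE error_log WITH (HOLDLOCK) AS target
USING (SELECT @msg_hash AS msg_hash, @message AS message, @exception AS exception) AS src
    ON target.msg_hash = src.msg_hash
WHEN MATCHED THEN
    UPDATE SET counter = target.counter + 1,
               updated_dt = SYSUTCDATETIME()
WHEN NOT MATCHED THEN
    INSERT (message, exception, msg_hash)
    VALUES (src.message, src.exception, src.msg_hash);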
As an alternative you could log to a file and compress it. Typical compression algorithms are very good at eliminating this type of exact redundancy.

How to test database performance - best practices

I need to test index performance for some tables in my database.
After I run my query, with or without indexes, I always use this code:
SELECT * FROM sys.dm_exec_query_optimizer_info;
And I receive details about my query.
My problem is that when using sys.dm_exec_query_optimizer_info, the details for my query are always changing, which makes them difficult to interpret.
What is the best solution?
Do you know any approaches or best practices?
You have to learn what the query optimizer is telling you. That the data changes is good; it means that things behave differently depending on whether you have indexes or not. However, there is no standardization on how optimizer information is presented - each DBMS does it differently. If you are going to interpret the data, you must understand it.
Looking at the query plan is important. Ultimately, so too is measuring the actual performance. It depends in part on why you are looking at the indexing at all. If there's a perceived performance problem that you are addressing, then clearly you need to ensure that the problem is resolved by the index or indexes you add. You also need to ensure that the cost the indexes add to maintenance operations (insert, delete, update) is not intolerable - that you have not added too many indexes. You may also need to consider disk space usage - is it OK to commit so much disk space to so many indexes?
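The DMV in the question looks like SQL Server's sys.dm_exec_query_optimizer_info; if so, one common way to measure the actual cost of a query with and without an index is (the query itself is just a placeholder):
SET STATISTICS IO ON;    -- reports logical/physical reads per table
SET STATISTICS TIME ON;  -- reports CPU and elapsed time
-- Run the query you are testing, then compare the numbers in the
-- Messages output with and without the index in place
SELECT COUNT(*) FROM my_table WHERE some_column = 42;
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;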
Without more specific information about your DBMS or the particular queries, it is hard to give more specific advice.

ALTER TABLE without locking the table?

When doing an ALTER TABLE statement in MySQL, the whole table is read-locked (allowing concurrent reads, but prohibiting concurrent writes) for the duration of the statement. If it's a big table, INSERT or UPDATE statements could be blocked for a looooong time. Is there a way to do a "hot alter", like adding a column in such a way that the table is still updatable throughout the process?
Mostly I'm interested in a solution for MySQL but I'd be interested in other RDBMS if MySQL can't do it.
To clarify, my purpose is simply to avoid downtime when a new feature that requires an extra table column is pushed to production. Any database schema will change over time, that's just a fact of life. I don't see why we should accept that these changes must inevitably result in downtime; that's just weak.
The only other option is to do manually what many RDBMS systems do anyway...
- Create a new table
You can then copy the contents of the old table over a chunk at a time, while always being careful to capture any INSERT/UPDATE/DELETE on the source table. (This could be managed by a trigger. Although that would cause a slowdown, it's not a lock...)
Once finished, change the name of the source table, then change the name of the new table. Preferably in a transaction.
Once finished, recompile any stored procedures, etc that use that table. The execution plans will likely no longer be valid.
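A rough sketch of the manual approach in MySQL syntax (the names are invented; the chunked copy and the triggers that replay concurrent writes are the part that takes real care):
-- 1. New table with the extra column already in place
CREATE TABLE orders_new LIKE orders;
ALTER TABLE orders_new ADD COLUMN status TINYINT NOT NULL DEFAULT 0;
-- 2. Copy in chunks (repeat with increasing id ranges) while triggers on
--    `orders` replay concurrent INSERT/UPDATE/DELETE into orders_new
INSERT INTO orders_new SELECT o.*, 0 FROM orders o WHERE o.id BETWEEN 1 AND 100000;
-- 3. Atomic swap once the copy has caught up
RENAME TABLE orders TO orders_old, orders_new TO orders;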
EDIT:
Some comments have been made about this limitation being a bit poor. So I thought I'd put a new perspective on it to show why it's how it is...
Adding a new field is like changing one field on every row.
Field Locks would be much harder than Row locks, never mind table locks.
You're actually changing the physical structure on the disk, every record moves.
This really is like an UPDATE on the Whole table, but with more impact...
Percona makes a tool called pt-online-schema-change that allows this to be done.
It essentially makes a copy of the table and modifies the new table. To keep the new table in sync with the original it uses triggers to update. This allows the original table to be accessed while the new table is prepared in the background.
This is similar to Dems' suggested method above, but it does so in an automated fashion.
Some of their tools have a learning curve, namely connecting to the database, but once you have that down, they are great tools to have.
Ex:
pt-online-schema-change --alter "ADD COLUMN c1 INT" D=db,t=numbers_are_friends
This question is from 2009. MySQL now offers a solution:
Online DDL (Data Definition Language)
A feature that improves the performance, concurrency, and availability of InnoDB tables during DDL (primarily ALTER TABLE) operations. See Section 14.11, “InnoDB and Online DDL” for details.
The details vary according to the type of operation. In some cases, the table can be modified concurrently while the ALTER TABLE is in progress. The operation might be able to be performed without doing a table copy, or using a specially optimized type of table copy. Space usage is controlled by the innodb_online_alter_log_max_size configuration option.
It lets you adjust the balance between performance and concurrency during the DDL operation, by choosing whether to block access to the table entirely (LOCK=EXCLUSIVE clause), allow queries but not DML (LOCK=SHARED clause), or allow full query and DML access to the table (LOCK=NONE clause). When you omit the LOCK clause or specify LOCK=DEFAULT, MySQL allows as much concurrency as possible depending on the type of operation.
Performing changes in-place where possible, rather than creating a new copy of the table, avoids temporary increases in disk space usage and I/O overhead associated with copying the table and reconstructing secondary indexes.
see MySQL 5.6 Reference Manual -> InnoDB and Online DDL for more info.
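For example (InnoDB on MySQL 5.6+; the table and column names are placeholders, reusing the example table from the pt-online-schema-change answer above):
-- Add a column while still allowing both reads and writes to the table;
-- fails with an error instead of falling back to a blocking copy
ALTER TABLE numbers_are_friends
    ADD COLUMN c1 INT,
    ALGORITHM=INPLACE, LOCK=NONE;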
It seems that online DDL is also available in MariaDB:
Alternatively you can use ALTER ONLINE TABLE to ensure that your ALTER TABLE does not block concurrent operations (takes no locks). It is equivalent to LOCK=NONE.
MariaDB KB about ALTER TABLE
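E.g. a hedged one-liner (placeholder names again); per the KB, this is equivalent to LOCK=NONE, so it errors out rather than silently blocking if the change cannot be done online:
ALTER ONLINE TABLE numbers_are_friends ADD COLUMN c2 INT;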
See Facebook's online schema change tool.
http://www.facebook.com/notes/mysql-at-facebook/online-schema-change-for-mysql/430801045932
Not for the faint of heart; but it will do the job.
I recommend Postgres if that's an option. With postgres there is essentially no downtime with the following procedures:
ALTER TABLE ADD COLUMN (if the column can be NULL)
ALTER TABLE DROP COLUMN
CREATE INDEX (must use CREATE INDEX CONCURRENTLY)
DROP INDEX
Another great feature is that most DDL statements are transactional, so you could do an entire migration within a single SQL transaction, and if something goes wrong, the entire thing gets rolled back.
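A quick sketch of those statements (names are placeholders; note that CREATE INDEX CONCURRENTLY is one of the few DDL statements that cannot run inside a transaction block):
-- Nearly instant when the column is nullable with no default: only the catalog is touched
ALTER TABLE users ADD COLUMN nickname text;
ALTER TABLE users DROP COLUMN legacy_flag;
-- Builds the index without blocking writes; must be run outside a transaction
CREATE INDEX CONCURRENTLY idx_users_nickname ON users (nickname);
DROP INDEX idx_users_nickname;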
I wrote this a little bit ago, perhaps it can shed some more insight on the other merits.
Since you asked about other databases, here's some information about Oracle.
Adding a NULL column to an Oracle table is a very quick operation as it only updates the data dictionary. This holds an exclusive lock on the table for a very short period of time. It will, however, invalidate any dependent stored procedures, views, triggers, etc. These will get recompiled automatically.
From there if necessary you can create index using the ONLINE clause. Again, only very short data dictionary locks. It'll read the whole table looking for things to index, but does not block anyone while doing this.
If you need to add a foreign key, you can do this and get Oracle to trust you that the data is correct. Otherwise it needs to read the whole table and validate all the values which can be slow (create your index first).
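Roughly, in Oracle syntax (all names are invented; the NOVALIDATE clause is the "trust me" part described above):
-- Dictionary-only change: quick, with only a brief exclusive lock
ALTER TABLE orders ADD (status VARCHAR2(20));
-- Builds the index without blocking DML on the table
CREATE INDEX orders_status_idx ON orders (status) ONLINE;
-- Enforce the FK for new rows only; existing data is not re-validated
ALTER TABLE orders ADD CONSTRAINT orders_status_fk
    FOREIGN KEY (status) REFERENCES statuses (code)
    ENABLE NOVALIDATE;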
If you need to put a default or calculated value into every row of the new column, you'll need to run a massive update or perhaps a little utility program that populates the new data. This can be slow, especially if the rows get a lot bigger and no longer fit in their blocks. Locking can be managed during this process. Since the old version of your application, which is still running, does not know about this column, you might need a sneaky trigger or to specify a default.
From there, you can do a switcharoo on your application servers to the new version of the code and it'll keep running. Drop your sneaky trigger.
Alternatively, you can use DBMS_REDEFINITION which is a black box designed to do this sort of thing.
All this is so much bother to test, etc that we just have an early Sunday morning outage whenever we release a major version.
If you cannot afford downtime for your database when doing application updates you should consider maintaining a two-node cluster for high availability. With a simple replication setup, you could do almost fully online structural changes like the one you suggest:
- wait for all changes to be replicated on a passive slave
- change the passive slave to be the active master
- do the structural changes to the old master
- replicate changes back from the new master to the old master
- do the master swapping again and the new app deployment simultaneously
It is not always easy, but it works, usually with zero downtime! The second node does not have to be only a passive one; it can be used for testing, statistics, or as a fallback node.
If you do not have the infrastructure, replication can be set up within a single machine (with two instances of MySQL).
Nope. If you are using MyISAM tables, to the best of my understanding they only do table locks - there are no record locks; they just try to keep everything hyperfast through simplicity. (Other MySQL table types operate differently.) In any case, you can copy the table to another table, alter it, and then switch them, updating for the differences.
This is such a massive alteration that I doubt any DBMS would support it. It's considered a benefit to be able to do it with data in the table in the first place.
Temporary solution...
Another solution could be to add a separate table with the primary key of the original table, along with your new column.
Populate the primary key and the values for the new column into that table, modify your queries to join it for SELECT operations, and INSERT and UPDATE this column's value separately.
When you are able to get downtime, you can alter the original table, modify your DML queries, and drop the extra table created earlier.
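A minimal sketch of that workaround (invented names):
-- Side table holds only the new column, keyed by the original table's PK
CREATE TABLE orders_extra (
    order_id INT PRIMARY KEY,   -- same value as orders.id
    new_col  VARCHAR(100)
);
-- Reads join the side table until the real ALTER TABLE can be done later
SELECT o.*, e.new_col
FROM orders o
LEFT JOIN orders_extra e ON e.order_id = o.id;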
Otherwise, you may go for a clustering approach, replication, or the pt-online-schema-change tool from Percona.
You should definitely try pt-online-schema-change. I have been using this tool to do migrations on AWS RDS with multiple slaves and it has worked very well for me. I wrote an elaborate blog post on how to do that which might be helpful for you.
Blog: http://mrafayaleem.com/2016/02/08/live-mysql-schema-changes-with-percona/
Using the InnoDB plugin, ALTER TABLE statements which only add or drop secondary indexes can be done "quickly", i.e. without rebuilding the table.
Generally speaking however, in MySQL, any ALTER TABLE involves rebuilding the entire table which can take a very long time (i.e. if the table has a useful amount of data in it).
You really need to design your application so that ALTER TABLE statements do not need to be done regularly; you certainly don't want any ALTER TABLE done during normal running of the application unless you're prepared to wait or you're altering tiny tables.
I would recommend one of two approaches:
Design your database tables with the potential changes in mind. For example, I've worked with Content Management Systems, which change data fields in content regularly. Instead of building the physical database structure to match the initial CMS field requirements, it is much better to build in a flexible structure. In this case, that means using a blob text field (varchar(max), for example) to hold flexible XML data, as in the sketch after this list. This makes structural changes much less frequent. Structural changes can be costly, so there is a cost benefit here as well.
Have system maintenance time. The system goes offline during changes (monthly, etc.), and the changes are scheduled during the least heavily trafficked time of the day (3-5am, for example). The changes are staged prior to the production rollout, so you will have a good fixed estimate of the downtime window.
2a. Have redundant servers, so that when the system has downtime, the whole site does not go down. This would allow you to "roll" your updates out in a staggered fashion, without taking the whole site down.
Options 2 and 2a may not be feasible; they tend to be only for larger sites/operations. They are valid options, however, and I have personally used all of the options presented here.
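As promised above, a hedged sketch of the flexible-structure idea from option 1 (names invented; the XML layout is entirely up to the application):
-- One physical column holds the fields that change often, so the schema rarely changes
CREATE TABLE content (
    id         INT PRIMARY KEY,
    title      VARCHAR(255) NOT NULL,
    fields_xml VARCHAR(MAX)   -- e.g. '<fields><author>...</author><tags>...</tags></fields>'
);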
If anyone is still reading this or happens to come here, this is the big benefit of using a NoSQL database system like mongodb. I had the same issue dealing with altering a table to add columns for additional features, or indexes, on a large table with millions of rows and heavy writes. It would end up locking for a very long time, so doing this on the LIVE database would frustrate our users. On small tables you can get away with it.
I hate the fact that we have to "design our tables to avoid altering them". I just don't think that works in today's web world. You can't predict how people will use your software; that's why you rapidly change things based on user feedback. With mongodb, you can add "columns" at will with no downtime. You don't really even add them: you just insert data with new columns and it picks them up automatically.
Worth checking out: www.mongodb.com
In general, the answer is going to be "No. You're changing the structure of the table, which potentially will require a lot of updates", and I definitely agree with that. If you expect to be doing this often, then I'll offer an alternative to "dummy" columns - use VIEWs instead of tables for SELECTing data. IIRC, changing the definition of a view is relatively lightweight, and the indirection through a view is resolved when the query plan is compiled. The expense is that you would have to add the column to a new table and make the view JOIN in the column.
Of course this only works if you can use foreign keys to perform cascading of deletes and whatnot. The other bonus is that you can create a new table containing a combination of the data and point the view to it without disturbing client usage.
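A hedged sketch of that view indirection (names made up; CREATE OR REPLACE VIEW syntax varies slightly by DBMS):
-- Clients query the view, never the physical tables
CREATE TABLE customer_base  (id INT PRIMARY KEY, name VARCHAR(100));
CREATE TABLE customer_extra (id INT PRIMARY KEY, loyalty_tier INT);  -- the "new column" lives here
CREATE OR REPLACE VIEW customer AS
SELECT b.id, b.name, e.loyalty_tier
FROM customer_base b
LEFT JOIN customer_extra e ON e.id = b.id;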
Just a thought.
The difference between Postgres and MySQL in this regard is that Postgres doesn't re-create the table; it modifies the data dictionary, which is similar to Oracle. The operation is therefore fast, although it still requires an exclusive DDL table lock for a very short time, as stated above by others.
In MySQL the operation copies the data to a new table while blocking transactions, which had been the main pain point for MySQL DBAs prior to v5.6.
The good news is that since the MySQL 5.6 release the restriction has mostly been lifted and you can now enjoy the true power of the MySQL DB.
As SeanDowney has mentioned, pt-online-schema-change is one of the best tools to do what you have described in the question here. I recently did a lot of schema changes on a live DB and it went pretty well. You can read more about it on my blog post here: http://mrafayaleem.com/2016/02/08/live-mysql-schema-changes-with-percona/.
Dummy columns are a good idea if you can predict their type (and make them nullable). Check how your storage engine handles nulls.
MyISAM will lock everything if you even mention a table name in passing, on the phone, at the airport. It just does that...
That being said, locks aren't really that big a deal; as long as you are not trying to add a default value for the new column to every row, but let it sit as null, and your storage engine is smart enough not to go writing it, you should be ok with a lock that is only held long enough to update the metadata. If you do try to write a new value, well, you are toast.
TokuDB can add/drop columns and add indexes "hot", the table is fully available throughout the process. It is available via www.tokutek.com
Not really.
You ARE altering the underlying structure of the table, after all, and that's a bit of information that's quite important to the underlying system. You're also (likely) moving much of the data around on disk.
If you plan on doing this a lot, you're better off simply padding the table with "dummy" columns that are available for future use.