I have to create a data warehouse (a star schema) and the customer wants it in SQL 2014 in-memory. My understanding is that in-memory has a lot of limitations with FK constraints, indexes, etc. These are crucial for us, because our fact table volumes are in the millions. As an alternative, I was thinking of suggesting a bunch of de-normalized tables joined in SQL for reporting, rather than a Kimball DWH. I have around 9 transaction tables and 4 master tables.
Any better suggestion or alternative to go around this?
Normally when you say in-memory with SQL Server, it means In-Memory OLTP (Hekaton), which is designed for specific situations, mainly handling bottlenecks in locking and latching. I assume that's not what you mean in this case.
Microsoft also uses the in-memory label for clustered columnstore, which to my mind makes things quite confusing. A clustered columnstore index is designed for data warehouses: instead of the normal row-based approach, it stores the data in column format. If you have Enterprise Edition, it's at least worth trying for fact tables. You should get significant space savings compared to normally compressed tables (my fact tables shrank by 75-90% compared to page-compressed row store), which of course helps a lot in terms of what fits into the cache, and performance should be better too, though of course a lot depends on your data, database structure, and queries.
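For example, in SQL 2014 a fact table gets a clustered columnstore like this (a minimal sketch; the table and column names are illustrative):

```sql
-- Illustrative fact table; in SQL Server 2014 a clustered columnstore
-- must be the only index, so no PK/FK or other indexes are declared.
CREATE TABLE dbo.FactSales (
    DateKey     INT   NOT NULL,
    CustomerKey INT   NOT NULL,
    ProductKey  INT   NOT NULL,
    Quantity    INT   NOT NULL,
    Amount      MONEY NOT NULL
);

-- The clustered columnstore index replaces the row store entirely
-- and compresses the data by column.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
    ON dbo.FactSales;
```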
There are quite a few restrictions too, the biggest probably being that you can't have unique indexes or primary/foreign keys. These restrictions will be removed in SQL Server 2016, so if you can wait for that, or possibly upgrade once it's available, this might not be such a big issue.
You mentioned that there is no support for indexes. That is true, but you don't need additional indexes with a clustered columnstore, because the data is stored in column format and is highly compressed.
I am new to SQL and have started learning Postgres. I want to create a database in which one of the columns of a table is email. Since I will be using email to uniquely identify a person for sign-up and login, I used the UNIQUE constraint when creating the table, so that if an account already exists with that email an error is raised. But will this be efficient in large databases?
Here is the screenshot of the SQL command I have used.
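Roughly, the table looks something like this (a minimal sketch; the users table and column names are illustrative):

```sql
-- Minimal sketch; Postgres backs the UNIQUE constraint
-- with a unique B-tree index automatically.
CREATE TABLE users (
    id    bigserial PRIMARY KEY,
    email text NOT NULL UNIQUE
);

-- A second insert with the same email fails:
-- INSERT INTO users (email) VALUES ('a@example.com');  -- ok
-- INSERT INTO users (email) VALUES ('a@example.com');  -- ERROR: duplicate key
```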
What do you mean by efficient? A unique constraint or index adds overhead to prevent duplicates; this overhead is incurred on inserts and updates to the affected columns.
How much overhead? Well, uniqueness is validated by using an index. On most databases, looking up a value in an index is quite fast and usually does not pose a significant performance issue. Of course, such overhead can sometimes matter, particularly on very busy databases with lots of data changes (think dozens or hundreds of transactions per second).
In general, the size of the table is going to have little impact on performance -- indexes do slow down a little (though hardly noticeably) as tables get bigger, assuming there is sufficient memory.
On the other hand, the cost of not having data integrity can be much larger -- affecting the performance and quality of queries that run on the data.
I'm not a trained DBA, but I perform some SQL tasks and have this question:
In SQL databases I've noticed the use of archive tables that mimic another table, with the exact same fields, and which accept rows from the original table when that data is deemed ready for archiving. Since I've seen examples where those tables reside in the same database and on the same drive, my assumption is that this was done to increase performance. Such tables didn't have more than about 10 million rows in them...
Why would this be done instead of using a column to designate the status of the row, such as a boolean for an in/active flag?
At what point would this improve performance?
What would be the best pattern to structure this correctly, given that the data may still need to be queried (or unioned with current data)?
What else is there to say about this?
The notion of archiving is a physical, not logical, one. Logically the archive table contains the exact same entity and ought to be the same table.
Physical concerns tend to be pragmatic. The overarching notion is that the "database is getting too big/slow". Archiving records makes it easier to do things like:
Optimize the index structure differently. Archive tables can have more indexes without affecting insert/update performance on the working table. In addition, the indexes can be rebuilt with full pages, while the working table will generally want to have pages that are 50% full and balanced.
Optimize storage media differently. You can put the archive table on slower/less expensive disk drives that maybe have more capacity.
Optimize backup strategies differently. Working tables may require hot backups or log shipping while archive tables can use snapshots.
Optimize replication differently, if you are using it. If an archive table is only updated once per day via nightly batch, you can use snapshot as opposed to transactional replication.
Different levels of access. Perhaps you want different security access levels for the archive table.
Lock contention. If your working table is very hot, you'd rather have your MIS developers access the archive table, where they are less likely to halt your operations when they run something and forget to specify dirty-read semantics.
The best practice would be not to use archive tables but to move the data from the OLTP database to an MIS database, data warehouse, or data marts with denormalized data. But some organizations have trouble justifying the cost of an additional DB system (they aren't cheap). There are far fewer hurdles to adding another table to an existing DB.
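For what it's worth, when an archive table is used anyway, the row movement is often done in a single atomic statement; a sketch in SQL Server syntax (table and column names are illustrative, and OUTPUT INTO requires the target to have no enabled triggers or foreign keys):

```sql
-- Move closed orders older than a year into the archive table,
-- deleting from the working table and inserting in one statement.
DELETE FROM dbo.Orders
OUTPUT deleted.* INTO dbo.OrdersArchive
WHERE Status = 'Closed'
  AND OrderDate < DATEADD(YEAR, -1, GETDATE());
```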
I say this frequently, but...
Multiple tables of identical structure almost never make sense.
A status flag is a much better idea. There are proper ways to increase performance (partitioning/indexing) without denormalizing data or otherwise creating redundancies. 10 million records is pretty small in the world of modern RDBMSs, so what you're seeing is the product of poor planning or a misunderstanding of databases.
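As a sketch of the flag approach (SQL Server 2008+ syntax; table and column names are illustrative), a filtered index keeps the hot rows fast without a second table:

```sql
-- Add a status flag instead of a second, identical table.
ALTER TABLE dbo.Orders
    ADD IsActive BIT NOT NULL DEFAULT (1);

-- A filtered index covers only the active rows, so queries on
-- current data stay small even as archived rows accumulate.
CREATE NONCLUSTERED INDEX IX_Orders_Active
    ON dbo.Orders (OrderDate)
    WHERE IsActive = 1;
```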
What are the best options/recommendations and optimizations when working with a large SQL Server 2005 table that contains anywhere from 100 to 200 million records?
Since you didn't state the purpose of the database, or the requirements, here are some general things, in no particular order:
Small clustered index on each table. Consider making this your primary key on each table. This will be very efficient and save on space in the main table and dependent tables.
Appropriate non-clustered indexes (covering indexes where possible)
Referential Integrity
Normalized Tables
Consistent naming on all database objects for easier maintenance
Appropriate Partitioning (table and index) if you have the Enterprise Edition of SQL Server
Appropriate check constraints on tables if you are going to allow direct data manipulation in the database.
Decide where your business rules are going to reside and don't deviate from that. In most cases they do not belong in the database.
Run your heavily used queries (at least) through Query Analyzer and look for table scans, which will kill performance.
Be prepared to deal with deadlocks. With a database of this size, especially if there will be heavy writing, deadlocks could very well be a problem.
Take advantage of views to hide join complexity, open up opportunities for query optimization, and allow flexible security implementation.
Consider using schemas to better organize data and allow flexible security implementation.
Get familiar with Profiler. With a database of this size, you will more than likely be spending some time trying to determine query bottlenecks. Profiler can help you here.
A rule of thumb is that if a table will contain more than 25 million records, you should consider table (and index) partitioning, but this feature is only available in the Enterprise Edition of SQL Server (and, correspondingly, the Developer Edition).
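A minimal sketch of such partitioning (SQL Server 2005 syntax; names and boundary dates are illustrative):

```sql
-- Partition by date: one partition per year.
CREATE PARTITION FUNCTION pfOrderDate (DATETIME)
AS RANGE RIGHT FOR VALUES ('2004-01-01', '2005-01-01');

-- Map all partitions to one filegroup for simplicity;
-- production systems often spread them across filegroups.
CREATE PARTITION SCHEME psOrderDate
AS PARTITION pfOrderDate ALL TO ([PRIMARY]);

-- The partitioning column must be part of any unique key,
-- so it is included in the primary key here.
CREATE TABLE dbo.Orders (
    OrderID   BIGINT   NOT NULL,
    OrderDate DATETIME NOT NULL,
    Amount    MONEY    NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY (OrderID, OrderDate)
) ON psOrderDate (OrderDate);
```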
I found a posting on the MySQL forums from 2005, but nothing more recent than that. Based on that, it's not possible. But a lot can change in 3-4 years.
What I'm looking for is a way to have an index over a view while the underlying table remains unindexed. Indexing hurts the writing process, and this table is written to quite frequently (to the point where indexing slows everything to a crawl). However, the lack of an index makes my queries painfully slow.
I don't think MySQL supports materialized views, which is what you would need, but they wouldn't help you in this situation anyway. Whether the index is on the view or on the underlying table, it would need to be written and updated at some point during an update of the underlying table, so it would still cause the write-speed issues.
Your best bet would probably be to create summary tables that get updated periodically.
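A sketch of that approach (MySQL syntax; table and column names are illustrative), refreshed periodically from cron or a MySQL event:

```sql
-- Summary table keyed by day; REPLACE needs the primary key
-- to overwrite existing rows on refresh.
CREATE TABLE daily_sales_summary (
    sale_date DATE          NOT NULL PRIMARY KEY,
    total     DECIMAL(12,2) NOT NULL
);

-- Periodic refresh: recompute the aggregates in bulk.
REPLACE INTO daily_sales_summary (sale_date, total)
SELECT DATE(created_at), SUM(amount)
FROM sales
GROUP BY DATE(created_at);
```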
Have you considered abstracting your transaction processing data from your analytical processing data so that they can both be specialized to meet their unique requirements?
The basic idea is that you have one version of the data that is regularly modified; this is the transaction-processing side, and it requires heavy normalization and light indexing so that write operations are fast. A second version of the data is structured for analytical processing and tends to be less normalized and more heavily indexed for fast reporting operations.
Data structured for analytical processing is generally built around the cube methodology of data warehousing, composed of fact tables that hold the measures (the cells of the cube) and dimension tables that represent its axes.
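In its simplest form, that structure looks something like this sketch (standard SQL; table names are illustrative, and some engines require an explicit FOREIGN KEY clause instead of inline REFERENCES):

```sql
-- A dimension table: one row per member of an axis of the cube.
CREATE TABLE DimDate (
    DateKey  INT  NOT NULL PRIMARY KEY,
    FullDate DATE NOT NULL
);

-- A fact table: one row per measured event, keyed by its dimensions.
CREATE TABLE FactOrder (
    DateKey    INT           NOT NULL REFERENCES DimDate (DateKey),
    ProductKey INT           NOT NULL,
    Quantity   INT           NOT NULL,
    Amount     DECIMAL(12,2) NOT NULL
);
```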
Flexviews supports materialized views in MySQL by tracking changes to the underlying tables and updating the table that functions as a materialized view. This means the SQL supported by the view is a bit restricted (as the change-logging routines have to figure out which tables to track for changes), but as far as I know this is the closest you can get to materialized views in MySQL.
Do you only want one indexed view? It's unlikely that writing to a table with only one index would be that disruptive. Is there no primary key?
If each record is large, you might improve performance by figuring out how to shorten it. Or shorten the length of the index you need.
If this is a write-only table (i.e. you don't need to do updates), it can be deadly in MySQL to start archiving it or otherwise deleting records (and index keys), since the index must start filling (reusing) slots from deleted keys rather than just appending new index values. Counterintuitive, but you're better off with a larger table in this case.
In a recent project, the "lead" developer designed a database schema where "larger" tables would be split across two separate databases, with a view on the main database unioning the two tables together. The main database is what the application was driven from, so these tables looked and felt like ordinary tables (except for some quirks around updating). This seemed like a HUGE performance problem. We do see performance problems around these tables, but nothing that has made him change his mind about his design. I'm just wondering what the best way to do this is, or whether it is even worth doing.
I don't think that you are really going to gain anything by partitioning the table across multiple databases in a single server. All you have essentially done there is increased the overhead in working with the "table" in the first place by having several instances (i.e. open in two different DBs) of it under a single SQL Server instance.
How large a dataset do you have? I have a client with a 6 million row table in SQL Server that contains 2 years' worth of sales data. They use it transactionally and for reporting without any noticeable speed problems.
Tuning the indexes and choosing the correct clustered index is crucial to performance of course.
If your dataset is really large and you are looking to partition, you will get more bang for your buck partitioning the table across physical servers.
Partitioning is not something to be undertaken lightly as there can be many subtle performance implications.
My first question is are you referring simply to placing larger table objects in separate filegroups (on separate spindles) or are you referring to data partitioning inside of a table object?
I suspect that the situation described is an attempt to put the physical storage of certain large tables on different spindles from the rest of the tables. In that case, adding the extra overhead of separate databases, losing the ability to enforce referential integrity across databases, and taking on the security implications of enabling cross-database ownership chaining provide no benefit over using multiple filegroups within a single database. And if, as is quite possible, the separate databases are not even stored on separate spindles but all sit on the same one, you negate even the slight performance benefit of physically separating your disk activity and receive absolutely no benefit at all.
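To illustrate the filegroup alternative, a sketch (SQL Server syntax; the database name, file path, and size are illustrative):

```sql
-- Add a filegroup and back it with a file on a separate spindle.
ALTER DATABASE Sales ADD FILEGROUP FG_LargeTables;

ALTER DATABASE Sales ADD FILE (
    NAME     = 'Sales_Large1',
    FILENAME = 'E:\Data\Sales_Large1.ndf',
    SIZE     = 10GB
) TO FILEGROUP FG_LargeTables;

-- Create the large table on that filegroup; it stays in the same
-- database, so referential integrity still works.
CREATE TABLE dbo.BigFacts (
    ID      BIGINT       NOT NULL PRIMARY KEY,
    Payload VARCHAR(200) NOT NULL
) ON FG_LargeTables;
```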
I would suggest that instead of using additional databases to hold large tables, you look into the Filegroups topic in SQL Server Books Online, or for a quick review see this article:
If you are interested in data partitioning (including partitioning into multiple file groups) then I recommend reading articles by Kimberly Tripp, who gave an excellent presentation at the time SQL Server 2005 came out about the improvements available there. A good place to start is this whitepaper
Which version of SQL Server are you using? SQL Server 2005 has partitioned tables, but in 2000 (or 7.0) you needed to use partition views.
Also, what was the reasoning for putting the table partitions in a separate database?
When I've had to partition tables in the past (pre-2005), it's usually by a date column or something similar, with a view over the various partitions. Books Online has a section that talks about how to do this and all of the rules around it. You need to follow the rules to make it work how it's supposed to work.
The key thing to remember is that your partitioning column must be part of the primary key and you want to try to always use that column in any access against the table so that the optimizer can ignore partitions that shouldn't be affected by the query.
Look up "partitioned table" in MSDN and you should be able to find a more complete tutorial for SQL Server 2005 partitioned tables as well as advice on how to set them up for maximum performance.
Are you asking about best practices in terms of database design, or convincing your lead to change his mind? :)
In terms of design... Back in the goode olde days, vertical partitioning was sometimes needed to work around database engine limitations, where the number of columns in a table was a hard limit (255 columns, say). These days the main benefits are purely performance: putting rarely used columns or blobs on a separate disk array. But if you're regularly pulling rows from both tables, it will likely be a loss. It sounds like your lead is suffering from a case of premature optimisation.
In terms of convincing your lead that he's wrong... that requires diplomacy. If he's aware of mutterings of discontent about performance, a benchmark is probably the best way to show the difference.
Create a new physical table somewhere with SELECT * INTO t1 FROM view1 (SQL Server's equivalent of CREATE TABLE AS SELECT) and then run some lengthy batch against both the vertically partitioned table and your new table. If it's as bad as you say, the difference should be evident.
But this too may be premature optimisation. Find out what the end-users think of the performance. If the performance is good enough, for some definition of good, then don't fix what ain't broke.
There is a definite benefit to table partitioning (regardless of whether it's on the same or different filegroups/disks). If the partition column is correctly selected, you'll find that your queries hit only the required partition. So imagine you have 100 million records (I've partitioned tables much bigger than that -- about 20+ billion rows): if, for the most part, more than 70% of your data access is for a certain category, timeline, or type of data, it helps to keep the most accessed data in a separate partition. Plus you can align the partitions with separate filegroups on various types of disks (SATA, Fibre Channel, SSDs), so that the most accessed/busy data sits on the fastest storage and the least/rarely accessed data sits on slower disks.
SQL Server's partitioning ability is limited compared to Oracle's, though. You can choose only one column for partitioning (even in SQL 2008), so you have to choose wisely a column that is also part of most of your frequent queries. For the most part, people find it easiest to partition by a date column. However logical that seems, though, if your queries do not have that column as part of their conditions, you won't gain sufficient benefit from partitioning (in other words, your queries will hit all the partitions regardless).
It's much easier to partition data warehouse/data mining databases than OLTP ones, as most DW queries are limited by time period.
That's why, given the volume of data handled by databases these days, it's wise to design the application so that every query is limited by some broader group, such as time or geographical location; when such columns are chosen for partitioning, you gain the maximum benefit.
I would disagree with the assumption that nothing can be gained by partitioning.
If the partitioned data is physically and logically aligned, the potential IO of queries should be dramatically reduced.
For example, we have a table with a batch field stored as an INT.
If we partition the data by this field and then re-run a query for a particular batch, we should be able to run SET STATISTICS IO ON before and after partitioning and see a reduction in IO.
If we have a million rows per partition and each partition is written to a separate device, the query should be able to eliminate the nonessential partitions.
I've not done a lot of partitioning on SQL Server, but I do have experience of partitioning on Sybase ASE, where this is known as partition elimination. When I have time I'm going to test out the scenario on a SQL Server 2005 machine.
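A sketch of that test (SQL Server syntax; the table name and batch value are illustrative):

```sql
-- Compare logical reads before and after partitioning.
SET STATISTICS IO ON;

SELECT COUNT(*)
FROM dbo.BatchData
WHERE BatchID = 42;   -- filter on the partitioning column

SET STATISTICS IO OFF;
-- With an aligned partition scheme, the IO statistics should show
-- reads against only the partition that contains BatchID = 42.
```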