We have a fairly large SSAS Tabular cube with many different tables (some of which contain measures, dimensions, etc.). On occasion we run into scenarios where I have to optimize the cube partitions (break them into smaller parts) or the cube structure so that less memory is consumed when it processes (daily). Occasionally we've had to increase the memory limits of the server just to make sure the job doesn't crash.
One of our SQL Server consultants asked if we had considered changing the process mode on the scripted job to 'Default' rather than 'Full' (since every table in the script has its process mode set to Full). I said I hadn't considered this, but my concern, based on my research, is that Default won't actually update the data; it only rebuilds a table's structure if the structure has changed in some way.
I need a processing mode that will just pull in any new rows (and update any rows that have changed) since the last time the partition was processed. Is there any mode which accomplishes this rather than Process Full (which obviously wipes the partition it's processing and rebuilds the entire thing = memory intensive)? Anything less memory intensive that will still pull in new rows and update outdated ones?
FYI, all the tables are based on SQL queries.
One option is doing a Process Data instead of a Process Full on the tables in your tabular model. You may also want to consider implementing partitioning in your tables in order to take advantage of SSAS's ability to process partitions in parallel. Since your tables are already based on SQL queries, you'll only need to add filters to those queries so the data is split cleanly across multiple partitions. Partitioning the tables will also allow for incremental processing, using Process Add to incrementally update a partition. Looking into other ways to reduce unnecessary memory use, such as removing unused columns and replacing calculated columns where possible (read about the cost of calculated columns here), will also help the memory issues.
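For illustration, when the partition sources are SQL queries, splitting a table into partitions usually just means giving each partition the same query with a non-overlapping filter; a rough sketch with hypothetical table and column names:

-- Historical partition, e.g. "FactSales 2023": processed rarely
SELECT *
FROM dbo.FactSales
WHERE OrderDate >= '2023-01-01' AND OrderDate < '2024-01-01';

-- Current partition, e.g. "FactSales 2024": the only one the daily job needs to touch
SELECT *
FROM dbo.FactSales
WHERE OrderDate >= '2024-01-01';

With that split in place, the nightly job can Process Add (or Process Data) only the current partition instead of reprocessing the whole table.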
I am learning Cassandra. I am thinking about the problems with SQL that NoSQL addresses, and I have a question about very large data sets.
Regarding SQL and very big data, many pages say that tables end up on different servers and queries become slow because of joins across tables on different servers; this is a problem with SQL that NoSQL addresses. But even with NoSQL, if partitions get too big, don't I need to change my data model, make smaller partitions and run multiple queries against them to get the same result? And isn't that slow? Or do you never run out of space in a partition because 2 billion cells is big enough?
I think your question is mixing several different issues.
First of all, the problem with big data and SQL is usually not that queries become slow, but that the solution cannot scale as the data grows bigger and bigger. If you choose to manually split your tables to several servers, as you suggested, what do you do when you need even more servers - redesign your data model? Also, how do you ensure consistency when an update requires modifying several tables but they are on different hosts?
Second, you mentioned joins, and this is something which NoSQL solutions like Cassandra do not support. You need to manually denormalize the data yourself (i.e., put the already joined data in a table). For some things, Cassandra's new "Materialized Views" feature can come in handy.
Third, and perhaps most importantly, you asked about huge partitions. Indeed Cassandra is not designed to handle huge partitions, and the best practice is far below the 2-billion hard limit which you mentioned: DataStax (the commercial company behind Cassandra's development) suggests in https://docs.datastax.com/en/dse-planning/doc/planning/planningPartitionSize.html that a good rule of thumb is to "keep the maximum number of rows below 100,000 items and the disk size under 100 MB".
There are several reasons why huge partitions are ill-advised in Cassandra. One of them is that the disk format (sstables and their so-called "promoted index") makes it inefficient to jump to the middle of a huge partition, which you need to do when you want to read a specific row or iterate through the rows. Some operations, such as compaction and repair, work on entire partitions and can become very slow (and in the worst case, also use a lot of memory). E.g., if a billion-row partition differs on two nodes by just one row, partition-based repair needs to send the entire partition over the network.
Scylla (https://en.wikipedia.org/wiki/Scylla_(database)), a Cassandra clone which is generally more efficient than Apache Cassandra, also has similar issues with huge partitions (as in Cassandra, moderately large partitions are fine), but these issues are actively being worked on, including re-designing the file format, so eventually Scylla should support arbitrary-sized partitions. However, we're still not there yet, and today the recommendation of not letting partitions grow too huge still applies to Scylla as well.
Finally, if you want to get around the problem of too many rows in a single partition, then yes, you need to tweak your data model to avoid these huge partitions. Sometimes you just need to fix design mistakes in your model - e.g., I have seen people sticking a lot of unrelated data into the same partition, when it could have easily (and more efficiently!) been put in separate partitions. Sometimes you need to artificially split your partitions. This is common in so-called "time-series data" modeling in Cassandra, where we (for example) get a new value of some measurement every second and add it as a row to a partition. Here, instead of having one huge partition for all data ever, the accepted practice is to create a separate partition per time window (e.g., a new partition every day, or week, or whatever). Since most queries involve just one time window anyway, they don't even become slower.
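For illustration, the per-time-window split is usually expressed in CQL by adding a time bucket to the partition key; a rough sketch with hypothetical names:

-- One partition per sensor per day, instead of one ever-growing partition per sensor
CREATE TABLE measurements (
    sensor_id uuid,
    day       date,
    ts        timestamp,
    value     double,
    PRIMARY KEY ((sensor_id, day), ts)
);

-- A query for one time window touches a single, bounded partition
SELECT ts, value
FROM measurements
WHERE sensor_id = 123e4567-e89b-12d3-a456-426614174000
  AND day = '2024-01-15';

Queries that span several days simply issue one such query per day (or use IN on the day column).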
I have a system that needs to snapshot specific tables at certain points in time.
At present, the process that takes the snapshot queries the data in the table and puts the output into a temp table (i.e., a staging table, not an in-memory table).
Many of these processes can be running at the same time in parallel (100+ per hour). And the tables to be copied can run into GBs worth of data.
I am considering the use of database snapshots, so each process can take its own snapshot and work with it.
What are the pros and cons of this approach?
Is there a better way to approach this?
I would certainly consider using snapshots, but there are a couple of things you need to consider before you decide one way or the other:
Snapshots use sparse files to store data, effectively only keeping a copy of each page of the source that changes after the snapshot is created. The more of your data that changes, the smaller the disk saving over a simple copy.
Snapshots are taken of the entire database, not just one table. Therefore, if your DB is huge and you need 100+ copies of it per hour, that's going to cost you a lot of disk space and performance. On the other hand, if you're happy with one snapshot per hour shared by all your processes and the data doesn't change too much, snapshots may be exactly what you need.
Snapshots will have a performance overhead on your original DB, as the snapshots need to be kept up to date with what data has changed. If you're already tight on performance, this may be an issue.
There are certainly some other things to consider, but that's not a bad starting point. Books Online has a fairly detailed article on using snapshots, which I would read before deciding: http://msdn.microsoft.com/en-us/library/ms175158%28v=SQL.90%29.aspx There's also a section in there on limitations: http://msdn.microsoft.com/en-us/library/ms189940%28v=SQL.90%29.aspx
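For reference, creating a snapshot is a single statement along these lines (database, logical file, and path names are placeholders):

-- Point-in-time snapshot of SourceDB; one sparse file per data file in the source
CREATE DATABASE SourceDB_Snap_0900 ON
    ( NAME = SourceDB_Data, FILENAME = 'D:\Snapshots\SourceDB_Snap_0900.ss' )
AS SNAPSHOT OF SourceDB;

-- Processes then read from the snapshot rather than the live database
SELECT COUNT(*) FROM SourceDB_Snap_0900.dbo.SomeTable;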
Hope this helps.
If you need to take a snapshot of the data for backup/recovery purposes, I suggest you utilize backup features of SQL Server (i.e., full, transaction log backups).
If your goal is to capture data from some table for historical purposes, you could implement a set of SQL jobs running something like:
select *
into archive_table
from normal_table
Make sure you switch the database to the simple recovery model right before you do this, so the SELECT ... INTO is minimally logged; it will drastically speed up the process.
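For illustration, the recovery model switch (and switching back afterwards) is just two statements; the database name is a placeholder:

-- SELECT ... INTO is minimally logged under the SIMPLE (or BULK_LOGGED) recovery model
ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;

-- Switch back and take a full backup afterwards if you normally run FULL recovery
ALTER DATABASE YourDatabase SET RECOVERY FULL;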
I'm working on designing an application with a SQL backend (PostgreSQL) and I've got some design questions. In short, the DB will store network events as they occur on the fly, so insertion speed and performance are critical, since 'real-time' actions depend on these events. The data is dumped into a speedy default format across a few tables, and I am currently using PostgreSQL triggers to put this data into some other tables used for reporting.
On a typical event, data is inserted into two different tables that share the same primary key (an event ID). I then need to move and rearrange the data into some different tables that are used by a web-based reporting interface. My primary goal/concern is to keep the load off the initial insertion tables, so they can do their thing. Reporting is secondary, but it would still be nice for this to occur on the fly via triggers as opposed to a cron job where I have to query and manage events that have already been processed. Reporting should/will never touch the initial insertion tables. Performance-wise, does this make sense or am I totally off?
Once the data is in the appropriate reporting tables, I won't need to hang on to the data in the insertion tables too long, so I'll keep those regularly pruned for insertion performance. In thinking about this scenario, which I'm sure is semi-common, I've come up with three options:
Use triggers to fire on the initial row insert and populate the reporting tables (a rough sketch of this appears after this list). This was my original plan.
Use triggers to copy the insertion data to a temporary table (same format), and then trigger or cron to populate the reporting tables. This was just a thought, but I figure that a simple copy operation to a temp table will offload any of the query-ing of the triggers in the solution above.
Modify my initial output program to dump all the data to a single table (vs across two) and then trigger on that insert to populate the reporting tables. So where solution 1 is a multi-table to multi-table trigger situation, this would be a single-table source to multi-table trigger.
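For illustration, option 1 might look roughly like this in PL/pgSQL; the table, column, and function names are all placeholders:

-- events_raw is the fast insertion table, events_report the reporting table (hypothetical)
CREATE OR REPLACE FUNCTION copy_event_to_report() RETURNS trigger AS $$
BEGIN
    INSERT INTO events_report (event_id, event_time, src_ip, dst_ip)
    VALUES (NEW.event_id, NEW.event_time, NEW.src_ip, NEW.dst_ip);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER events_raw_to_report
AFTER INSERT ON events_raw
FOR EACH ROW EXECUTE PROCEDURE copy_event_to_report();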
Am I over thinking this? I want to get this right. Any input is much appreciated!
You may see a slight performance overhead, since there are more "things" to do (although they should not affect operations in any noticeable way). But using triggers/other procedural languages is a good way to keep that overhead to a minimum, since they execute faster than code sent from your application to the DB server.
I would go with your first idea (1), since it seems to me the cleanest and most efficient way.
Option (2) is the most performance-hungry solution, since cron will issue more queries than the other approaches that use server-side functions. Option (3) would be possible but would result in an "uglier" database layout.
This is an old question, but I'm adding my answer here.
Reporting is secondary, but it would still be nice for this to occur on the fly via triggers as opposed to a cron job where I have to query and manage events that have already been processed. Reporting should/will never touch the initial insertion tables. Performance-wise, does this make sense or am I totally off?
That may be way off, I'm afraid, but in a few cases it may not be. It depends on the effects of caching on the reports. Keep in mind that disk I/O and memory are your commodities, and that writers and readers rarely block each other in PostgreSQL (unless they explicitly take stronger locks; a SELECT ... FOR UPDATE will block writers, for example). Basically, if your tables fit comfortably in RAM, you are better off reporting from them directly, since you keep disk I/O free for the WAL segment commits of your event entries. If they don't fit in RAM, then you may have cache-miss issues induced by reporting. Here, materializing your views (i.e., making trigger-maintained tables) may cut down on those misses, but it carries a significant complexity cost; this, by the way, is your option 1. So I would provisionally chalk this one up as premature optimization. Also keep in mind that you may induce cache misses and lock contention by materializing the views this way, so you might cause performance problems for inserts.
Keep in mind if you can operate from RAM with the exception of WAL commits, you will have no performance problems.
For #2: if you mean temporary tables as in CREATE TEMPORARY TABLE, that's asking for a mess, including performance issues and reports not showing what you want them to show. Don't do it. If you do, you might:
Force PostgreSQL to replan your trigger on every insert (or at least once per session). Ouch.
Add overhead creating/dropping tables
Possibilities of OID wraparound
etc.
In short I think you are overthinking it. You can get very far by bumping RAM up on your Pg box and making sure you have enough cores to handle the appropriate number of inserting sessions plus the reporting one. If you plan your hardware right, none of this should be a problem.
I am working on a project that must store very large datasets and associated reference data. I have never come across a project that required tables quite this large. I have proved that at least one development environment cannot cope at the database tier with the processing required by the complex queries against views that the application layer generates (views with multiple inner and outer joins, grouping, summing and averaging against tables with 90 million rows).
The RDBMS that I have tested against is DB2 on AIX. The dev environment that failed was loaded with 1/20th of the volume that will be processed in production. I am assured that the production hardware is superior to the dev and staging hardware but I just don't believe that it will cope with the sheer volume of data and complexity of queries.
Before the dev environment failed, it was taking in excess of 5 minutes to return a small dataset (several hundred rows) that was produced by a complex query (many joins, lots of grouping, summing and averaging) against the large tables.
My gut feeling is that the db architecture must change so that the aggregations currently provided by the views are performed as part of an off-peak batch process.
Now for my question. I am assured by people who claim to have experience of this sort of thing (which I do not) that my fears are unfounded. Are they? Can a modern RDBMS (SQL Server 2008, Oracle, DB2) cope with the volume and complexity I have described (given an appropriate amount of hardware) or are we in the realm of technologies like Google's BigTable?
I'm hoping for answers from folks who have actually had to work with this sort of volume at a non-theoretical level.
The nature of the data is financial transactions (dates, amounts, geographical locations, businesses) so almost all data types are represented. All the reference data is normalised, hence the multiple joins.
I work with a few SQL Server 2008 databases containing tables with rows numbering in the billions. The only real problems we ran into were those of disk space, backup times, etc. Queries were (and still are) always fast, generally in the < 1 sec range, never more than 15-30 secs even with heavy joins, aggregations and so on.
Relational database systems can definitely handle this kind of load, and if one server or disk starts to strain then most high-end databases have partitioning solutions.
You haven't mentioned anything in your question about how the data is indexed, and 9 times out of 10, when I hear complaints about SQL performance, inadequate/nonexistent indexing turns out to be the problem.
The very first thing you should always be doing when you see a slow query is pull up the execution plan. If you see any full index/table scans, row lookups, etc., that indicates inadequate indexing for your query, or a query that's written so as to be unable to take advantage of covering indexes. Inefficient joins (mainly nested loops) tend to be the second most common culprit and it's often possible to fix that with a query rewrite. But without being able to see the plan, this is all just speculation.
So the basic answer to your question is yes, relational database systems are completely capable of handling this scale, but if you want something more detailed/helpful then you might want to post an example schema / test script, or at least an execution plan for us to look over.
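For example, if the plan shows a full scan feeding a heavy aggregate, a covering index along these lines (SQL Server syntax, hypothetical schema) can often turn it into a much cheaper index seek or range scan:

-- Key the index on the common filter/join columns and carry the aggregated
-- columns in the INCLUDE list so the query never touches the base table
CREATE NONCLUSTERED INDEX IX_Transactions_Business_Date
    ON dbo.Transactions (BusinessId, TransactionDate)
    INCLUDE (Amount, Region);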
90 million rows should be about 90GB, thus your bottleneck is disk.
If you need these queries rarely, run them as is.
If you need these queries often, you have to split your data and precompute the grouping, summing and averaging on the part of your data that doesn't change (or hasn't changed since last time).
For example, if you process historical data for the last N years up to and including today, you could process it one month (or week, or day) at a time and store the totals and averages somewhere. Then at query time you only need to reprocess the period that includes today.
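For illustration, combining the precomputed history with a live aggregate of the current period might look roughly like this (generic SQL; the tables and columns are hypothetical):

-- Batch job: store totals for each closed month
INSERT INTO monthly_totals (month_start, business_id, total_amount, txn_count)
SELECT '2024-01-01', business_id, SUM(amount), COUNT(*)
FROM transactions
WHERE txn_date >= '2024-01-01' AND txn_date < '2024-02-01'
GROUP BY business_id;

-- Query time: precomputed months plus a live aggregate of the current month only
SELECT business_id, SUM(total_amount) AS total_amount
FROM (
    SELECT business_id, total_amount FROM monthly_totals
    UNION ALL
    SELECT business_id, SUM(amount)
    FROM transactions
    WHERE txn_date >= '2024-02-01'
    GROUP BY business_id
) combined
GROUP BY business_id;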
Some RDBMSs give you some control over when views are updated (at select, at source change, offline). If your complicated grouping, summing and averaging is in fact simple enough for the database to understand correctly, it could, in theory, update a few rows in the view on every insert/update/delete in your source tables in reasonable time.
It looks like you're calculating the same data over and over again from normalized data. One way to speed up processing in cases like this is to keep SQL, with its nice reporting, relationships and consistency, and use an OLAP cube which is recalculated every X minutes. Basically you build a big table of denormalized data on a regular basis, which allows quick lookups. The relational data is treated as the master, but the cube allows quick precalculated values to be retrieved from the database at any point.
If that is only 1/20th of your data, you almost surely need to look into more scalable and efficient solutions, such as Google's BigTable. Have a look at NoSQL.
I personally think that MongoDB is an awesome in-between of NoSQL and an RDBMS. It isn't relational, but it provides a lot more features than a simple document store.
In dimensional (Kimball methodology) models in our data warehouse on SQL Server 2005, we regularly have fact tables with that many rows just in a single month partition.
Some things are instant and some things take a while, it depends on the operation and how many stars are being combined and what's going on.
The same models perform poorly on Teradata, but it is my understanding that if we re-model in 3NF, Teradata's parallelization will work a lot better. The Teradata installation is many times more expensive than the SQL Server installation, so it just goes to show how much of a difference modeling and matching your data and processes to the underlying feature set can make.
Without knowing more about your data, how it's currently modeled and what indexing choices you've made, it's hard to say anything more.
I need to extract some management information (MI) from data which is updated in overnight batches. I will be using aggregate functions to generate the MI from tables with hundreds of thousands and potentially millions of rows. The information will be displayed on a web page.
The critical factor here is the efficiency of SQL Server's handling of aggregate functions.
I am faced with two choices for generating the data:
Write stored procs/views to generate the information from the raw data which are called every time someone accesses a page
Create tables which are refreshed daily and act as a cache for the MI
What is the best approach to take?
Cache the values during your nightly load if the data doesn't change throughout the day. It will make retrieval much faster. I'm a big fan of summary tables when necessary. In your case, they're necessary!
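For illustration, the nightly cache could be as simple as a summary table rebuilt by the overnight job; the table and column names here are hypothetical:

-- Rebuild the management-information summary after the overnight batch load
TRUNCATE TABLE dbo.mi_daily_summary;

INSERT INTO dbo.mi_daily_summary (report_date, region, total_amount, txn_count)
SELECT txn_date, region, SUM(amount), COUNT(*)
FROM dbo.transactions
GROUP BY txn_date, region;

The web page then queries dbo.mi_daily_summary directly, which stays small and fast regardless of how large the raw tables grow.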
One thing you may want to look into, since you own SQL Server, is Analysis Services. By creating a Multidimensional Database, or a cube, these aggregations all happen automagically, and you can drill down and across your data to find numbers at the speed of thought, instead of trying to write reports that capture all of those numbers. Spend 10 minutes and watch the intro video of it, and I think you'll garner a real appreciation for SSAS's power.
It sounds to me like an Analysis Services cube would actually be the best fit for your problem. The cube processing can be run after the data loads occur, aggregating the data for later use.
However, you could also possibly use an indexed view which, if designed correctly and used in conjunction with the NOEXPAND table hint, can provide a significant performance increase.
SQL 2005 Indexed Views
SQL 2008 Indexed Views
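For illustration, an indexed view over a hypothetical raw table might look like this; the unique clustered index is what materializes the aggregate, and on editions other than Enterprise the NOEXPAND hint is needed for the optimizer to use it:

-- Schema-bound view pre-aggregating the raw data (all names are placeholders)
CREATE VIEW dbo.vw_mi_summary
WITH SCHEMABINDING
AS
SELECT region,
       txn_date,
       SUM(amount)  AS total_amount,
       COUNT_BIG(*) AS txn_count
FROM dbo.transactions
GROUP BY region, txn_date;
GO

-- The unique clustered index is what materializes the view
CREATE UNIQUE CLUSTERED INDEX IX_vw_mi_summary
    ON dbo.vw_mi_summary (region, txn_date);
GO

-- Query the precomputed aggregate
SELECT region, txn_date, total_amount, txn_count
FROM dbo.vw_mi_summary WITH (NOEXPAND);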