Improving query performance of a database table with a large number of columns and rows (50 columns, 5mm rows) - sql

We are building a caching solution for our user data. The data is currently stored in Sybase and is distributed across 5-6 tables. The query service built on top of it uses Hibernate, and we are getting very poor performance: loading the data into the cache takes in the range of 10-15 hours.
So we have decided to create a denormalized table of 50-60 columns and 5mm rows in another relational database (UDB), populate that table first, and then populate the cache from the new denormalized table using JDBC so that the time to build the cache is lower. This gives us much better performance and we can now build the cache in around an hour, but it still does not meet our requirement of building the cache within 5 minutes. The denormalized table is queried using the following query:
select * from users where user_id in (...)
Here user_id is the primary key. We also tried the query
select * from user where user_location in (...)
and created a non-unique index on location as well, but that did not help either.
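For concreteness, the lookups and index described above look roughly like this (a sketch only; the exact table and column names are assumptions based on the queries shown):
-- primary-key lookup used to load the cache, binding a chunk of ids per round trip
select * from users where user_id in (?, ?, ?);
-- non-unique index on the location column, then the same pattern on user_location
create index ix_users_location on users (user_location);
select * from users where user_location in (?, ?);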
So, is there a way we can make these queries faster? If not, we are also open to considering NoSQL solutions.
Which NoSQL solution would be suited to our needs? Apart from reading the large table, we would also be making around 1mm updates to the table on a daily basis.
I have read about MongoDB and it seems that it might work, but no one has posted any experience with MongoDB at this scale - so many rows and so many daily updates.
Please let us know your thoughts.

The short answer here, relating to MongoDB, is yes - it can be used in this way to create a denormalized cache in front of an RDBMS. Others have used MongoDB to store datasets of similar (and larger) sizes to the one you described, and can keep a dataset of that size in RAM. There are some details missing here in terms of your data, but it is certainly not beyond the capabilities of MongoDB and is one of the more frequently used implementations:
http://www.mongodb.org/display/DOCS/The+Database+and+Caching
The key will be the size of your working data set and therefore your available RAM (MongoDB maps data into memory). For larger solutions, write-heavy scaling, and similar issues, there are numerous approaches (sharding, replica sets) that can be employed.
With the level of detail given it is hard to say for certain that MongoDB will meet all of your requirements, but given that others have already done similar implementations, and based on the information given, there is no reason to think it will not work.

Related

How to store millions of statistics records efficiently?

We have about 1.7 million products in our e-shop and we want to keep a record of how many views each product had over a 1-year period, recording the views at least every 2 hours. The question is: what structure should we use for this task?
Right now we keep stats for the last 30 days in records with 2 columns, classified_id,stats, where stats is a stripped-down JSON string in the format date:views,date:views... For example, a record would look like
345422,{051216:23212,051217:64233} where 051216,051217 = mm/dd/yy and 23212,64233 = number of views
This of course is rather clumsy if you want to go 1 year back, since to get the sum of views for, say, 1000 products you need to fetch something like 30 MB from the database and calculate it yourself.
The other approach we are considering is a massive table with 3 columns, classified_id,date,view, storing each recording in its own row. This will of course result in a huge table with hundreds of millions of rows; for example, if we have 1.8 million classifieds and keep records 24/7 for one year, every 2 hours, we need
1,800,000 * 365 * 12 = 7,884,000,000 (billions with a B) rows, which, while well inside the theoretical limit of Postgres, makes me imagine that queries on it (say, for updating the views), even with the correct indices, will take some time.
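For illustration, the kind of aggregate this 3-column layout has to serve (the sum of views for ~1000 products over a year) would look roughly like this - a sketch only, with the columns date and view renamed to view_date and views to avoid reserved words:
-- total views per product over the last year, straight off the big raw table
select classified_id, sum(views) as total_views
from classified_views
where classified_id in (345422, 345423 /* ... ~1000 ids ... */)
  and view_date >= now() - interval '1 year'
group by classified_id;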
Any suggestions? I can't even imagine how Google Analytics stores its stats...
This number is not as high as you think. In my current job we store metrics data for websites, and the total number of rows we have is much higher. And in a previous job I worked with a pg database which collected metrics from a mobile network - it collected ~2 billion records per day. So do not be afraid of billions of records.
You will definitely need to partition the data - most probably by day. With this amount of data you may find indexes quite useless; it depends on the plans you see in the EXPLAIN command output. For example, that telco app did not use any indexes at all because they would just slow down the whole engine.
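A minimal sketch of what daily range partitioning can look like in PostgreSQL (declarative partitioning, PostgreSQL 10+; the table and column names reuse the hypothetical classified_views layout sketched in the question):
-- parent table, partitioned by day
create table classified_views (
    classified_id bigint    not null,
    view_date     timestamp not null,
    views         integer   not null
) partition by range (view_date);
-- one child partition per day, created ahead of time (e.g. by a scheduled job)
create table classified_views_2016_05_12
    partition of classified_views
    for values from ('2016-05-12') to ('2016-05-13');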
Another question is how quickly you need responses to queries, and which steps of granularity (sums over hours/days/weeks etc.) you will allow users to query. You may even need to make some aggregations for granularities like week, month or quarter.
Addition:
Those ~2 billion records per day in that telco app took ~290 GB per day. That meant inserts of ~23,000 records per second, using bulk inserts with the COPY command. Every bulk was several thousand records. Raw data were partitioned by minute. To avoid disk waits, the DB had 4 tablespaces on 4 different disks/arrays and the partitions were distributed over them. PostgreSQL was able to handle it all without any problems. So you should think about proper HW configuration too.
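A rough sketch of the bulk-load and tablespace layout being described, in PostgreSQL terms (the paths, names and the reuse of the hypothetical classified_views partitions above are all assumptions):
-- tablespaces on separate disks/arrays to spread I/O
create tablespace metrics_ts1 location '/mnt/disk1/pgdata';
create tablespace metrics_ts2 location '/mnt/disk2/pgdata';
-- place a daily partition on one of them
create table classified_views_2016_05_13
    partition of classified_views
    for values from ('2016-05-13') to ('2016-05-14')
    tablespace metrics_ts1;
-- bulk insert several thousand rows per batch with COPY
copy classified_views (classified_id, view_date, views)
from '/var/tmp/stats_batch_0001.csv' with (format csv);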
It is also a good idea to move the pg_xlog directory to a separate disk or array - not just a different filesystem; it really must be separate HW. SSDs I can recommend only in arrays with proper error checking; lately we had problems with a corrupted database on a single SSD.
First, do not use the database for recording statistics. Or, at the very least, use a different database. The write overhead of the logs will degrade the responsiveness of your webapp. And your daily backups will take much longer because of big tables that do not need to be backed up so frequently.
The "do it yourself" solution of my choice would be to write asynchronously to log files and then process these files afterwards to construct the statistics in your analytics database. There is good code snippet of async write in this response. Or you can benchmark any of the many loggers available for Java.
Also note that there are products like Apache Kafka specifically designed to collect this kind of information.
Another possibility is to create a time series in a column-oriented database like HBase or Cassandra. In this case you'd have one row per product and as many columns as hits.
Last, if you are going to do it with the database, as #JosMac pointed out, create partitions and avoid indexes as much as you can. Set the fillfactor storage parameter to 100. You can also consider UNLOGGED tables - but read the PostgreSQL documentation thoroughly before turning off the write-ahead log.
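A sketch of those storage suggestions in DDL form (names reuse the hypothetical layout above; note that an UNLOGGED table is not crash-safe and is not replicated):
-- append-only stats table: rows are never updated, so no free space needs reserving
create unlogged table classified_views_unlogged (
    classified_id bigint    not null,
    view_date     timestamp not null,
    views         integer   not null
) with (fillfactor = 100);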
Just to raise another non-RDBMS option for you (so a little off topic): you could send text files (CSV, TSV, JSON, Parquet, ORC) to Amazon S3 and use AWS Athena to query them directly using SQL.
Since it queries plain text files, you may be able to just send it unfiltered web logs and query them through JDBC.
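As a rough sketch of that setup (Athena uses Hive-style DDL over files in S3; the bucket name, columns and delimiter here are all assumptions):
-- expose tab-separated web logs sitting in S3 as a queryable table
create external table weblogs (
    request_time  string,
    classified_id bigint,
    user_agent    string
)
row format delimited fields terminated by '\t'
location 's3://my-logs-bucket/weblogs/';
-- then query it with plain SQL from the console or over JDBC
select classified_id, count(*) as views
from weblogs
group by classified_id;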

SQL Server Time Series Modelling for Huge Data Collection

I have to implement data collection for replay of electrical parameters for 100s-1000s of devices, with at least 20 parameters to monitor. This amounts to a huge data collection, as it is essentially time series data. I have to support a resolution of 1 second. Thinking about 1 year: 365*24*60*60*1000 = 31,536,000,000 rows.
I did my research but still have a few questions:
1) As the data will be huge, is it better to keep it all in one table, or should it be split across multiple tables (the data structure is the same)? Or should I just rely on indexes?
2) Data inserts will also be very frequent, but I can batch them. What is the best approach: writing directly to the same database, or using a temporary database for writes and syncing it?
3) Does SQL Server have any specific schema recommendations for optimizing time series selects, updates and inserts? Is there any out-of-the-box help for day averages or other common aggregate functions? I can write my own, but since this is a standard problem there may be best practices and samples available out of the box.
Please let me know - any help is appreciated. Thanks in advance.
1) You probably want to explore the use of partitions. This will allow very efficient inserts (it's a metadata operation if you do the partitioning correctly), which also addresses (2). You may want to explore columnstore indexes, because the data (once collected) will never change and you will have very large data sets. Partitioning and columnstore require a learning curve but are very doable. There is lots of code on the internet describing the use of date functions in SQL Server.
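As a minimal sketch of date partitioning plus a clustered columnstore index in SQL Server (clustered columnstore needs SQL Server 2014 or later; the boundary values, filegroup and all object names are assumptions):
-- partition by day so that loads and purges become cheap metadata operations
create partition function pf_reading_day (datetime2)
    as range right for values ('2024-01-01', '2024-01-02', '2024-01-03');
create partition scheme ps_reading_day
    as partition pf_reading_day all to ([PRIMARY]);
create table dbo.readings (
    device_id    int       not null,
    reading_time datetime2 not null,
    parameter_id smallint  not null,
    reading      float     not null
) on ps_reading_day (reading_time);
-- columnstore suits data that is written once and then scanned for aggregates
create clustered columnstore index cci_readings on dbo.readings;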
That is a big number, but I would start with one table and see if it holds up. If you split it into multiple tables it is still the same amount of data.
Do you ever need to search across devices? If not you can have a separate table for each device.
I have some audit tables that are not that big, but still big, and have not had any problems. If the data is loaded in time order then make the date the first (or only) column of the clustered index.
If the PK is (date, device) then fine, but if you can get two readings in the same second you cannot do that. If that is the PK, then load the data in that sort order if you can, even if you have to stage each second and then load. You just cannot afford to fragment a table that big. If you cannot load in sorted order then use a fill factor of 50%.
If you cannot have a PK then just use date as the clustered index (but not as the PK) and put a non-clustered index on device.
I have some tables of 3,000,000,000 rows and have the luxury of loading by PK with no other indexes. There is no measurable degradation in insert speed from row 1 to row 3,000,000,000.
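A sketch of that indexing advice for the case where (date, device) cannot be the PK (the table and column names reuse the assumed dbo.readings layout from the earlier sketch, without its columnstore index):
-- cluster on the load order (time) so inserts append and the table does not fragment
create clustered index cix_readings_time on dbo.readings (reading_time);
-- separate non-clustered index for lookups by device
create nonclustered index ix_readings_device on dbo.readings (device_id);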

Large Volume Database

We are creating a database in which we will store a large number of records. We estimate millions (billions after a few years) of records in one table, and we always INSERT and rarely UPDATE or DELETE any of the records. It's a kind of archive system where we insert historic records on a daily basis. We will generate different sorts of reports on these historic records on user request, so we have some concerns and require technical input from you people:
What is the best way to manage this kind of table and database?
What impact we may see in future for very large table?
Is there any limitation on number of records in one table or size of table?
How are we supposed to INSERT bulk records from different sources (mostly from Excel sheets)?
What is the best way to index large data tables?
Which ORM (object-relational mapping) should we use in this project?
Your last question sums it up. There is no ORM that will deal nicely with this volume of data and these reporting queries: employ SQL experts to do it for you. You heard it here first.
Otherwise
On disk: filegroups, partitioning etc
Compress less-used data
Is all data required? (Data retention policies)
No limit of row numbers or table size
INSERT via staging tables or staging databases, clean/scrub/lookup keys, then flush to the main table: DO NOT load the main table directly (see the sketch after this list)
As much RAM as you can buy. Then add more.
Few, efficient indexes
Do you have parent tables or a flat data mart? Have FKs but don't use them (e.g. for update/delete in the parent table) so no indexes are needed
Use a SAN (easier to add disk space, more volumes etc)
Normalise
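A sketch of the staging-table pattern from the list above (every object name here is a placeholder, and the staging table is assumed to have already been bulk loaded, e.g. via BULK INSERT or SSIS):
-- clean/scrub and look up surrogate keys, then flush to the main table
insert into dbo.archive_records (source_key, record_date, amount)
select src.source_key,
       stg.record_date,
       cast(stg.amount_raw as decimal(18, 2))
from dbo.staging_records as stg
join dbo.sources as src
  on src.source_code = stg.source_code      -- key lookup
where stg.amount_raw is not null;           -- drop obviously bad rows
-- empty the staging table ready for the next batch
truncate table dbo.staging_records;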
Some of these points are based on our experience of pushing around 10 billion rows through one of our systems in 30 months, with peaks of 40k+ rows per second.
See this too for high volume systems: 10 lessons from 35K tps
Summary: do it properly or not at all...
What is the best way to manage this kind of table and database?
If you are planning to store billions of records then you'll need plenty of disk space. I'd recommend a 64-bit OS running SQL 2008 R2 and as much RAM and HD space as is available. Depending on what performance you need, I'd be tempted to look into SSDs.
What impact we may see in future for very large table?
If you have the right hardware, with a properly indexed and properly normalized table, the only thing you should notice is that the reports will begin to run more slowly. Inserts may slow down slightly as the index file becomes bigger, and you'll just have to keep an eye on it.
Is there any limitation on number of records in one table or size of table?
On the right setup I described above, no. It's only limited by disk space.
How are we supposed to INSERT bulk records from different sources (mostly from Excel sheets)?
I've run into problems running huge SQL queries but I've never tried to import from very large flat files.
What is the best way to index large data tables?
Index as few fields as necessary and keep them to numerical fields only.
Which ORM (object-relational mapping) should we use in this project?
Sorry can't advise here.
Billions of rows in a "few years" is not an especially large volume. SQL Server should cope perfectly well with it - assuming your design and implementation are appropriate. There is no particular limit on the size of a table. Stick to solid design principles: normalize your tables, choose keys and data types carefully, and have a suitable partitioning and indexing strategy.

What would be the most efficient method for storing/updating Interval based data in SQL?

I have a database table with about 700 million rows plus (growing exponentially) of time-based data.
Fields:
PK.ID,
PK.TimeStamp,
Value
I also have 3 other tables grouping this data into days, months and years, which contain the sum of the value for each ID in that time period. These tables are updated nightly by a SQL job. The situation has now arisen whereby the tables will need to be updated on the fly when the data in the base table is updated. This can, however, be up to 2.5 million rows at a time (not very often; typically around 200-500k, up to every 5 minutes). Is this possible without causing massive performance hits, and what would be the best method for achieving this?
N.B
The daily, monthly and yearly tables can be changed if needed; they are used to speed up queries such as 'get the monthly totals for these 5 IDs for the last 5 years'. In the raw data this is about 13 million rows; from the monthly table it's 300 rows.
I do have SSIS available to me.
I can't afford to lock any tables during the process.
700M records in 5 months means 8.4B in 5 years (assuming data inflow doesn't grow).
Welcome to the world of big data. It's exciting here and we welcome more and more new residents every day :)
I'll describe three incremental steps that you can take. The first two are just temporary - at some point you'll have too much data and will have to move on. However, each one takes more work and/or more money so it makes sense to take it a step at a time.
Step 1: Better Hardware - Scale up
Faster disks, RAID, and much more RAM will take you some of the way. Scaling up, as this is called, breaks down eventually, but if your data is growing linearly rather than exponentially it will keep you afloat for a while.
You can also use SQL Server replication to create a copy of your database on another server. Replication works by reading transaction logs and sending them to your replica. Then you can run the scripts that create your aggregate (daily, monthly, annual) tables on a secondary server that won't kill the performance of your primary one.
Step 2: OLAP
Since you have SSIS at your disposal, start looking into multidimensional data. With good design, OLAP cubes will take you a long way. They may even be enough to manage billions of records, and you'll be able to stop there for several years (been there, done that - it carried us for two years or so).
Step 3: Scale Out
Handle more data by distributing the data and its processing over multiple machines. When done right this allows you to scale almost linearly: when you have more data, add more machines to keep processing time constant.
If you have the $$$, use solutions from Vertica or Greenplum (there may be other options, these are the ones that I'm familiar with).
If you prefer open source / BYO, use Hadoop: log event data to files, use MapReduce to process them, and store the results in HBase or Hypertable. There are many different configurations and solutions here - the whole field is still in its infancy.
Indexed views.
Indexed views will allow you to store and index aggregated data. One of the most useful aspects of them is that you don't even need to directly reference the view in any of your queries. If someone queries an aggregate that's in the view, the query engine will pull data from the view instead of checking the underlying table.
You will pay some overhead to update the view as data changes, but from your scenario it sounds like this would be acceptable.
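A minimal sketch of such an indexed view for the monthly-totals case in the question (SQL Server; dbo.BaseTable is an assumed name for the raw table, and Value is assumed to be NOT NULL since SUM over a nullable column is not allowed in an indexed view):
create view dbo.v_monthly_totals
with schemabinding
as
select ID,
       year([TimeStamp])  as yr,
       month([TimeStamp]) as mo,
       sum(Value)         as total_value,
       count_big(*)       as row_cnt       -- required when an indexed view uses GROUP BY
from dbo.BaseTable
group by ID, year([TimeStamp]), month([TimeStamp]);
go
-- the unique clustered index is what materialises ("indexes") the view
create unique clustered index ix_v_monthly_totals
    on dbo.v_monthly_totals (ID, yr, mo);
Note that automatic matching of queries to indexed views is an Enterprise edition feature; on other editions you may need to reference the view directly with the NOEXPAND hint.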
Why don't you create monthly tables, just to store the info you need for those months? It'd be like simulating multidimensional tables. Or, if you have access to multidimensional systems (Oracle, DB2 or similar), just work with multidimensionality. That works fine with time-period problems like yours. At the moment I don't have enough info to give you more, but you can learn a lot about it just by googling.
Just as an idea.

What is the maximum recommended number of rows that a SQL 2008 R2 standalone server should store in a single table?

I'm designing my DB for functionality and performance for realtime AJAX web applications, and I don't currently have the resources to add DB server redundancy or load-balancing.
Unfortunately, I have a table in my DB that could potentially end up storing hundreds of millions of rows, and will need to read and write quickly to prevent lagging the web-interface.
Most, if not all, of the columns in this table are individually indexed, and I'd love to know if there are other ways to ease the burden on the server when running queries on large tables. But is there eventually a cap on the size (in rows or GB) of a table before a single unclustered SQL Server starts to choke?
My DB only has a dozen tables, with maybe a couple dozen foreign key relationships. None of my tables have more than 8 or so columns, and only one or two of these tables will end up storing a large number of rows. Hopefully the simplicity of my DB will make up for the massive amounts of data in these couple of tables...
Rows are limited strictly by the amount of disk space you have available. We have SQL Servers with hundreds of millions of rows of data in them. Of course, those servers are rather large.
In order to keep the web interface snappy you will need to think about how you access that data.
One example is to stay away from any type of aggregate queries which require processing large swaths of data. Things like SUM() can be a killer depending on how much data it's trying to process. In these situations you are much better off calculating any summary or grouped data ahead of time and letting your site query these analytic tables.
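A sketch of what "calculating summary data ahead of time" can look like (all names are placeholders; the load would run from a scheduled job rather than per web request):
-- analytic table the site queries instead of scanning the raw rows
create table dbo.daily_totals (
    item_id     int    not null,
    total_date  date   not null,
    total_value bigint not null,
    constraint pk_daily_totals primary key (item_id, total_date)
);
-- populated once a day (e.g. by a SQL Agent job) with yesterday's totals
insert into dbo.daily_totals (item_id, total_date, total_value)
select item_id, cast(event_time as date), sum(event_value)
from dbo.events
where event_time >= dateadd(day, -1, cast(getdate() as date))
  and event_time <  cast(getdate() as date)
group by item_id, cast(event_time as date);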
Next you'll need to partition the data. Split those partitions across different drive arrays. When SQL needs to go to disk it makes it easier to parallelize the reads. (#Simon touched on this).
Basically, the problem boils down to how much data you need to access at any one time. This is the main problem regardless of the amount of data you have on disk. Even small databases can be choked if the drives are slow and the amount of available RAM in the DB server isn't enough to keep enough of the DB in memory.
Usually for systems like this, large amounts of data are basically inert, meaning they are rarely accessed. For example, a PO system might maintain a history of all invoices ever created, but it really only deals with the active ones.
If your system has similar requirements, then you might have a table for active records and simply archive rows to another table as part of a nightly process. You could even have statistics like monthly averages (as an example) recomputed as part of that archival.
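A sketch of that kind of nightly archival (the names, the 'closed' status and the 3-month cut-off are all placeholders); the OUTPUT clause moves and deletes the rows in a single statement, so nothing can be deleted without landing in the archive:
-- move cold, closed invoices out of the hot table in one pass
delete from dbo.invoices
output deleted.invoice_id, deleted.customer_id, deleted.created_at, deleted.status, deleted.total
into dbo.invoices_archive (invoice_id, customer_id, created_at, status, total)
where status = 'closed'
  and created_at < dateadd(month, -3, getdate());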
Just some thoughts.
The only limit is the size of your primary key. Is it an INT or a BIGINT?
SQL Server will happily store the data without a problem. However, with hundreds of millions of rows you are best off partitioning the data. There are many good articles on this, such as this article.
With partitions, you can have one thread per partition working at the same time, parallelising the query even more than is possible without partitioning.
My gut tells me that you will probably be okay, but you'll have to deal with performance. It's going to depend on the acceptable time-to-retrieve results from queries.
For your table with the "hundreds of millions of rows", what percentage of the data is accessed regularly? Is some of the data rarely accessed? Do some users access selected data while other users select different data? You may benefit from data partitioning.