Slow queries on 'transaction' table - SQL partitioning as a solution?

I have a table with 281,433 records in it, ranging from March 2010 to the current date (Sept 2014). It's a transaction table which consists of records that determine stock which is currently in and out of the warehouse.
When making picks from the warehouse, the system needs to look over every transaction from a particular customer that was ever made (based on the AccountListID field, which identifies the customer; a customer has on average about 300 records in the table). This happens 2-3 times per request from the particular .NET application when a picking run is done.
There are times when the database seemingly locks out. Some requests complete no bother, within about 3 seconds. Others hang for 'up to 4 minutes' according to the end users.
My guess is with 4-5 requests at the same time all looking at this one transaction table things are getting locked up.
I'm thinking about partitioning this table so that the primary transaction table only contains records from the last 2 years. The end user has agreed that any records older than that are unnecessary.
But I can't just delete them, they're used elsewhere in the system. I have indexes already in place and they make a massive difference (going from >30 seconds to <2 seconds with an index on the AccountListID field). It seems partitioning is the next step.
1) Am I going down the right route as a solution to my 'locking' problem?
2) When moving a set of records (e.g. records where the field DateTimeCheckedIn is more than 2 years old) is this a manual process or does partitioning automatically do this?

Partitioning shouldn't be necessary on a table with fewer than 300,000 rows, unless each record is really big. If a record occupies more than 4 KB, then you have 300,000 pages (about 2.4 GB), and that is getting large.
Indexes are usually the solution for something like this. Taking more than a second to return 300 records in an indexed database seems like a long time (unless the records are really big and the network overhead adds to the time). Your table and index should both fit into memory. Check your memory configuration.
The next question is about the application code. If it uses cursors, then these might be the culprit by locking rows under certain circumstances. For read-only cursors, "FAST_FORWARD" or "FORWARD READ_ONLY" should be fast. It is possible that if the application code is locking all the historical records, then you might get contention. After all, this would occur when two records (for different customers) are on the same data page. The solution is to not lock the historical records as you read them, or to avoid using cursors altogether.
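If the application does read with a cursor, a read-only, forward-only declaration is the least lock-hungry option. A minimal sketch, assuming a dbo.[Transaction] table keyed by the AccountListID mentioned in the question (the other names here are invented for illustration):

DECLARE @AccountListID int = 12345;   -- hypothetical customer id

DECLARE txn_cursor CURSOR FAST_FORWARD FOR
    SELECT TransactionID, DateTimeCheckedIn
    FROM dbo.[Transaction]
    WHERE AccountListID = @AccountListID;

OPEN txn_cursor;
-- FETCH NEXT FROM txn_cursor ... in a loop; read-only, so no update locks are held
CLOSE txn_cursor;
DEALLOCATE txn_cursor;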

I don't think partitioning will be necessary here. You can probably fix this with a well-placed index: I'm thinking a single index covering (in order) company, part number, and quantity. Or, if it's an old server, possibly just add RAM. Finally, since this query reads a lot of older transaction data, and individual transactions are likely never (or at most very rarely) updated once written, you might do better with the READ UNCOMMITTED isolation level for this query.
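As a rough sketch of that index and hint (the table and column names below are stand-ins for whatever the real schema uses, not taken from the question):

CREATE NONCLUSTERED INDEX IX_Transaction_Company_Part_Qty
    ON dbo.[Transaction] (CompanyID, PartNumber, Quantity);

DECLARE @CompanyID int = 1;   -- hypothetical

SELECT PartNumber, Quantity
FROM dbo.[Transaction] WITH (NOLOCK)   -- table-level READ UNCOMMITTED for this query only
WHERE CompanyID = @CompanyID;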

Related

Deleting millions of rows without impacting transaction log

Our client database is growing at an increasing pace; specifically, entities like Auditing and Logging are growing at a much greater speed.
For instance, as of now the Auditing table has ~30 million rows and is growing at a rate of 1.5 million rows per week.
Similarly, the Logging table is growing at the rate of ~1 million rows per week. This table has ~50 million rows.
We have decided to archive these tables based on our data retention policy and delete some 'N' number of records from them whenever the archiving job runs.
I am looking for the best advice on defining the chunkSize so that it neither bloats the SQL Server transaction log nor locks the tables. I know this value cannot be derived directly; we need to run different test scenarios to come up with this magic number.
The best advice is to partition the data, presumably by date.
Then you can remove entire partitions without having to log each individual row delete.
The subject of partitioning tables is rather broad. The documentation is a good place to start.
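A minimal sketch of what that looks like in SQL Server, assuming a monthly scheme on the Auditing table's date column (every name below is invented for illustration):

CREATE PARTITION FUNCTION pfAuditByMonth (datetime2)
    AS RANGE RIGHT FOR VALUES ('2014-01-01', '2014-02-01', '2014-03-01');   -- extend as months are added

CREATE PARTITION SCHEME psAuditByMonth
    AS PARTITION pfAuditByMonth ALL TO ([PRIMARY]);

CREATE TABLE dbo.Auditing
(
    AuditID    bigint        NOT NULL,
    AuditDate  datetime2     NOT NULL,
    Detail     nvarchar(400) NULL
) ON psAuditByMonth (AuditDate);

-- Removing a month is then a metadata operation rather than millions of logged DELETEs
-- (TRUNCATE ... WITH PARTITIONS needs SQL Server 2016+; on older versions, ALTER TABLE ... SWITCH
-- the partition out to a staging table and truncate that instead).
TRUNCATE TABLE dbo.Auditing WITH (PARTITIONS (1));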

Design question: Best approach to store and retrieve deltas in an SQL table

I have a historical table which contains many price columns, and only a few columns change at a time. Currently I just insert all the data as new records, and these changes can come in 100+ times every second, so the table size is growing pretty quickly.
I am trying to find a better design for the table to keep its size to a minimum, along with the best query to retrieve the data when required. I am not too worried about retrieval performance; it only needs to be middling when used for reports. The priority is to keep the table size to a minimum.
Data from this historical table is not retrieved on a day-to-day basis. I have a transaction table like the one in 1) Current Design for that purpose.
Here are the details of my implementation.
1) Current Design
2) Planned design - 1
Question:
1) If I use the above table structure, what is the best query to get results like those shown in Current design #1?
3) Planned design - 2
Question:
1) How much of a performance hit would this be compared to Planned design #1?
2) Also, if I go down that route, what is the best query to get results like what is shown in Current design #1?
End question:
I assume Planned design #1 will take more table space than Planned design #2, but Planned design #2 will take more time to query. Is there any case where my assumptions could be wrong?
Edit: I have only inserts going into this table. No updates or deletes are ever made to it.
In fact, I think there is a better option: you can use Temporal Tables, which were introduced in SQL Server 2016.
This table type is managed by SQL Server and tracks changes to a table for you.
Visit this Link: https://learn.microsoft.com/en-us/sql/relational-databases/tables/temporal-tables?view=sql-server-2017
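A minimal sketch of a system-versioned temporal table (the table and column names are placeholders, not from the question):

CREATE TABLE dbo.Price
(
    InstrumentID int           NOT NULL PRIMARY KEY,
    BidPrice     decimal(18,6) NULL,
    AskPrice     decimal(18,6) NULL,
    ValidFrom    datetime2 GENERATED ALWAYS AS ROW START NOT NULL,
    ValidTo      datetime2 GENERATED ALWAYS AS ROW END   NOT NULL,
    PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.PriceHistory));

-- SQL Server then keeps every prior version in dbo.PriceHistory for you;
-- a point-in-time read looks like this:
SELECT InstrumentID, BidPrice, AskPrice
FROM dbo.Price FOR SYSTEM_TIME AS OF '2018-06-01T12:00:00';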
I have a similar situation where I'm loading readings from a bunch of temperature sensors every 10 seconds. As I'm using the Express edition of MSSQL I'm looking at a max database size of 10 GB, so I got creative to make it last as long as possible.
My table layout is pretty much identical to yours in that I've got 1 timestamp + 30 value columns + another 30 flag columns.
The value columns are numeric(9,2).
The value columns are marked SPARSE; if a value is identical (enough) to the value before it, I store NULL instead of repeating the value.
The flag columns are bit and indicate whether the value is 'extrapolated' or from an actual source (more on that later).
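For reference, a trimmed-down sketch of that layout with only three of the sensor columns shown (all names here are invented):

CREATE TABLE dbo.Readings
(
    ReadingTime  datetime2    NOT NULL PRIMARY KEY,
    Sensor01     numeric(9,2) SPARSE NULL,   -- NULL when the value (nearly) repeats the previous row
    Sensor02     numeric(9,2) SPARSE NULL,
    Sensor03     numeric(9,2) SPARSE NULL,
    Sensor01Flag bit          NULL,          -- 1 = extrapolated, 0 = actual source reading
    Sensor02Flag bit          NULL,
    Sensor03Flag bit          NULL
);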
I've also got another table that holds the following information for each sensor:
the last time the sensor was updated; that way, if a new value comes in, I can easily decide whether it just requires an insert at the end of the table or whether I need to go through all the logic of inserting/updating a value somewhere in between existing numbers
the value of that latest entry
the sensitivity for said sensor; this way I don't have to hardcode it and can set it on a per-sensor basis
Anyway, for now my flow of information is that I've got several programs each reading data from different sources (Arduino, web, ...) and dumping it into .csv files, and then a 'parser' program that reads these files into the database every so often. As I'm loading the values one by one instead of row-based this isn't very efficient, but I'm now doing about 3,500 values/second, so I'm not overly concerned. I'll agree that this is only true when loading the values in historical order and because I'm using the helper table.
Currently I've got almost 1 year of information in there which corresponds to
2,209,883 rows
5,799,511 values spread over 18 sensors (yes, I've still got room for 12 more without needing to change the table)
This means only about 15% of the fields are filled in; or, looking at it the other way around, if I filled in every field rather than putting NULL in case of repetition, I'd have almost 8 times as many numbers in there.
As for space requirements: I decided to reload all numbers last night 'for fun' but noticed that even though most .csv files come in historical order, they would cover a range of columns from Jan to Dec, then another couple of columns from Jan to Dec, etc. This resulted in quite a bit of fragmentation: 70% in fact! At that point the table required 282 MB of disk space.
I then rebuilt the table, bringing the fragmentation down to 0%, and the reserved space went down to 118 MB (!).
For me this is more than good enough:
it's unlikely the table will outgrow the 10GB limit anytime soon, especially if I stick to (online) rebuilding it now and then.
loading data is sufficiently fast (although reloading the entire year took a couple of hours)
reporting is sufficiently fast (for now; I haven't tried to connect any 'interactive' reporting tools to it yet, but for some simple graphs in Excel it works just fine IMHO).
FYI: for reporting I've created a rather simple stored procedure that picks a from-to range for a given set of columns, dumps it into a temp table, and then fills in the blanks by figuring out the NULL ranges and filling them in with the value that preceded each range. This works quite well, although fetching the 'first' value sometimes takes a while as I can't predict how far back in time the last value should be looked for (sometimes there is none).
To work around this I've added another process that extrapolates the values for every 'hour' timestamp. That way the report never needs to go back more than 1 hour. A flag-column in the readings table indicates whether the value on a record for a given field was extrapolated or not.
(note: this makes updating values in the past more problematic but not impossible)
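The gap-filling part of that report query can be sketched like this, finding the last non-NULL value before each gap with OUTER APPLY (names follow the hypothetical Readings table above):

DECLARE @From datetime2 = '2014-06-01', @To datetime2 = '2014-06-02';

SELECT r.ReadingTime,
       COALESCE(r.Sensor01, prev.Sensor01) AS Sensor01
FROM dbo.Readings AS r
OUTER APPLY
(
    -- last actual value recorded before this row, used to fill the NULL gap
    SELECT TOP (1) p.Sensor01
    FROM dbo.Readings AS p
    WHERE p.ReadingTime < r.ReadingTime
      AND p.Sensor01 IS NOT NULL
    ORDER BY p.ReadingTime DESC
) AS prev
WHERE r.ReadingTime BETWEEN @From AND @To;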
Hope this helps you out a bit in your endeavors, good luck!

Relation with DB size and performance

Is there any relation between DB size and performance in my case:
There is a table in my Oracle DB that is used for logging. It now has over 120 million rows and increases at a rate of 1,000 rows per minute. Each row has 6-7 columns with basic string data.
It is for our client. We never read any data from there, but we might need it in case of any issues. However, it's fine if we clean it up every month or so.
The actual issue, however, is: will it affect the performance of other transactional tables in the same DB? Assume disk space is unlimited.
If 1000 rows/minute are being inserted into this table then about 40 million rows would be added per month. If this table has indexes I'd say that the biggest issue will be that eventually index maintenance will become a burden on the system, so in that case I'd expect performance to be affected.
This table seems like a good candidate for partitioning. If it's partitioned on the date/time that each row is added, with each partition containing one month's worth of data, maintenance would be much simpler. The partitioning scheme can be set up so that partitions are created automatically as needed (assuming you're on Oracle 11 or higher), and then when you need to drop a month's worth of data you can just drop the partition containing that data, which is a quick operation which doesn't burden the system with a large number of DELETE operations.
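A rough sketch of that setup in Oracle, using interval partitioning so new monthly partitions are created automatically (table and column names here are invented):

CREATE TABLE app_log
(
    log_id   NUMBER        NOT NULL,
    log_time DATE          NOT NULL,
    message  VARCHAR2(400)
)
PARTITION BY RANGE (log_time)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(PARTITION p_initial VALUES LESS THAN (DATE '2014-01-01'));

-- Dropping a month of logs is then a quick partition operation, not millions of DELETEs
ALTER TABLE app_log DROP PARTITION FOR (DATE '2014-01-15') UPDATE INDEXES;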
Best of luck.

Postgresql: Autovacuum partitioned tables

We have a very large table that was partitioned into monthly tables. We have no autovacuum parameters set in the postgresql.conf file, so it's on by default with default parameters.
The past months' tables (table_201404, table_201403) do not get written to or updated/deleted once the month has passed; they are only read from for historical data. Why is it that we are noticing autovacuum processes running on these tables? Is it because they are part of a main partition and PostgreSQL sees those tables as one?
We are toying with the idea of setting autovacuum_enabled to off for these past tables, but I wanted to consult the wisdom of Stackoverflow first.
Thanks all...
Even read-only tables need to be vacuumed for wrap-around once every 2 billion transactions, and under the default settings are vacuumed for wrap-around once every 150 million transactions.
The transaction IDs stored with each row are 32 bits, so they wrap around eventually. To prevent this from causing problems, any very old transaction ID has to be replaced with a magic value meaning "older than all other IDs", so the table has to be scanned to do that replacement. If the table never changes, eventually every transaction ID will be replaced with the magic value and conceptually that table no longer needs to be scanned. But that fact is not stored anywhere, so the table still needs to be scanned every now and then so that the system can observe that they are all still OK. Fortunately the scan is done sequentially and only needs to read, not write, so it should be fairly efficient.
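One way to see how close each table is to triggering such a wrap-around scan is to check the age of its oldest unfrozen transaction ID against autovacuum_freeze_max_age:

SELECT relname,
       age(relfrozenxid) AS xid_age   -- autovacuum forces a scan when this exceeds autovacuum_freeze_max_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY age(relfrozenxid) DESC
LIMIT 20;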
It is possible that the whole thing will be redone in 9.5 so that tables like that would no longer need to be scanned.

T-SQL Optimize DELETE of many records

I have a table that can grow to millions of records (50 million, for example). Every 20 minutes, records that are older than 20 minutes are deleted.
The problem is that if the table has so many records, such a deletion can take a lot of time, and I want to make it faster.
I cannot do a "truncate table" because I want to remove only the records that are older than 20 minutes. I suppose that when doing the "delete" and filtering the information that needs to be deleted, the server is writing a log file or something, and this takes a lot of time?
Am I right? Is there a flag or option I can turn off to optimize the delete, and then turn back on afterwards?
To expand on the batch delete suggestion, I'd suggest you do this far more regularly (every 20 seconds, perhaps); batch deletions are easy:
WHILE 1 = 1
BEGIN
    -- delete in small batches so each statement holds its locks only briefly
    DELETE TOP (4000)
    FROM YOURTABLE
    WHERE YourIndexedDateColumn < DATEADD(MINUTE, -20, GETDATE());

    IF @@ROWCOUNT = 0
        BREAK;
END
Your inserts may lag slightly whilst they wait for the locks to release, but they should insert rather than error.
In regards to your table though, with this much traffic I'd expect to see it on a very fast RAID 10 array, perhaps even partitioned. Are your disks up to it? Are your transaction logs on different disks to your data files? They should be.
EDIT 1 - Response to your comment
To put a database into SIMPLE recovery:
ALTER DATABASE YourDatabaseName SET RECOVERY SIMPLE;
This basically turns off transaction logging on the given database, meaning that in the event of data loss you would lose all data since your last full backup. If you're OK with that, this should save a lot of time when running large transactions. (Note that while a transaction is running, logging still takes place in SIMPLE, to enable rolling back the transaction.)
If there are tables within your database where you can't afford to lose data, you'll need to leave your database in FULL recovery mode (i.e. every transaction gets logged, and hopefully flushed to *.trn files by your server's maintenance plans). As I stated though, there is nothing stopping you having two databases, one in FULL and one in SIMPLE. The FULL database would be for tables where you can't afford to lose any data (i.e. you could apply the transaction logs to restore data to a specific time), and the SIMPLE database would be for these massive high-traffic tables where you can allow data loss in the event of a failure.
All of this is relevant assuming you're creating full (*.bak) backups every night and flushing your log files to *.trn files every half hour or so.
In regards to your index question, it's imperative that your date column is indexed; if you check your execution plan and see a "TABLE SCAN", that is an indicator of a missing index.
Your date column, I presume, is DATETIME with a constraint setting the DEFAULT to GETDATE()?
You may find that you get better performance by replacing that with a BIGINT in YYYYMMDDHHMMSS form and then applying a CLUSTERED index to that column. Note, however, that you can only have one clustered index per table, so if that table already has one you'll need to use a non-clustered index. (In case you didn't know, a clustered index basically tells SQL Server to store the information in that order, meaning that when you delete rows older than 20 minutes it can literally delete them sequentially rather than hopping from page to page.)
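As a sketch, assuming a hypothetical dbo.Events table whose CreatedAt column drives the purge:

-- if the table has no clustered index yet, clustering on the purge column keeps old rows physically together
CREATE CLUSTERED INDEX CX_Events_CreatedAt ON dbo.Events (CreatedAt);

-- otherwise a non-clustered index on the date column still avoids the table scan during the DELETE
-- CREATE NONCLUSTERED INDEX IX_Events_CreatedAt ON dbo.Events (CreatedAt);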
The log problem is probably due to the number of records deleted in the transaction; to make things worse, the engine may be requesting a lock per record (or per page, which is not so bad).
The one big thing here is how you determine the records to be deleted. I'm assuming you use a datetime field; if so, make sure you have an index on that column, otherwise it's a sequential scan of the table, which will really penalize your process.
There are two things you may do, depending on the concurrency of users and the timing of the delete:
If you can guarantee that no one is going to read or write while you delete, you can lock the table in exclusive mode, delete (this takes only one lock from the engine), and release the lock.
You can use batch deletes: write a script with a cursor that provides the rows you want to delete, and begin a transaction and commit every X records (ideally 5,000), so you keep the transactions short and don't take that many locks; a sketch follows below.
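A sketch of that second option, using DELETE TOP rather than a literal cursor but keeping the commit-every-N-rows idea (table and column names are placeholders):

DECLARE @batch int = 5000, @rows int = 1;

WHILE @rows > 0
BEGIN
    BEGIN TRANSACTION;

    DELETE TOP (@batch)
    FROM dbo.BigTable
    WHERE CreatedAt < DATEADD(MINUTE, -20, GETDATE());

    SET @rows = @@ROWCOUNT;   -- capture before COMMIT resets it

    COMMIT TRANSACTION;       -- each batch is a short transaction, so locks are released quickly
END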
Take a look at the query plan for the delete process and see what it shows; a sequential scan of a big table is never good.
Unfortunately for the purpose of this question, and fortunately for the sake of consistency and recoverability of databases in SQL Server, putting a database into SIMPLE recovery mode DOES NOT disable logging.
Every transaction still gets logged before being committed to the data file(s); the only difference is that the space in the log gets released (in most cases) right after the transaction is either rolled back or committed in SIMPLE recovery mode, but this is not going to affect the performance of the DELETE statement one way or another.
I had a similar problem when I needed to delete more than 70% of the rows from a big table with 3 indexes and a lot of foreign keys.
For this scenario, I saved the rows I wanted into a temp table, truncated the original table, and reinserted the rows, something like:
SELECT * INTO #tempuser FROM [User] WHERE [Status] >= 600;
TRUNCATE TABLE [User];
INSERT [User] SELECT * FROM #tempuser;
I learned this technique from a link that explains:
DELETE is a fully logged operation, and can be rolled back if something goes wrong
TRUNCATE removes all rows from a table without logging the individual row deletions
In the article you can explore other strategies to resolve the delay in deleting many records; that one worked for me.