I am running into an issue dropping and recreating unique clustered indexes on some Sybase databases. I have been unable to reproduce the issue in my test environment.
The error that results when the concurrency issue arises is as follows:
Cannot drop or modify partition descriptor database 'xxx' (43), object xx, index xx (0), partition 'xx' as it is in use. Please retry your command later. Total reference count '4'. Task reference count '2'.
I know an exclusive lock on a table or row from an open tran will not cause this, and I don't think anything the end-users could be doing would change the sort order of the data.
The table is a single-partition, round-robin partitioned table with a clustered index.
Please advise.
Could you use "reorg" instead? That would have the same effect and should not be vulnerable to this. But I'm not sure, because, like you, I don't see how this happens - building the new clustered index shouldn't start until Sybase gets a table lock (it has to for a clustered index), so why does it appear something else is accessing the table? (DBCCs, or something system-level holding locks on the system catalogs, perhaps, so that although the index can build, something about updating the system catalogs fails?)
Before 15.0.3 ESD 4, REORG causes other queries that try to access the table being reorged to fail rather than block, which can be annoying.
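For reference, a minimal sketch of the REORG alternative, assuming a table named my_table (hypothetical name):

-- Rebuilds the table and all of its indexes in place:
reorg rebuild my_table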
After reading many articles on the internet, I am still not sure what the actual purpose of DB2 RUNSTATS is.
My understanding is that DB2 RUNSTATS "registers" the table's indexes in the DB2 catalog, so that the next time a related query runs, it will use the index to improve performance. (Please correct me if I am wrong.)
Does that mean that if DB2 RUNSTATS is not run for a long period of time, the index will be removed from the DB2 catalog?
I am creating a new index for a table. Originally that table already contained another index.
After creating the new index, I ran DB2 Runstats on the table for the old index, but I hit the following error:
SQL2314W Some statistics are in an inconsistent state. The newly collected
"INDEX" statistics are inconsistent with the existing "TABLE" statistics. SQLSTATE=01650
At first I thought it was caused by the activity of creating the new index, with the table still in a "processing" stage. I ran the DB2 RUNSTATS command again the next day but still got the same error.
Your understanding of DB2 RUNSTATS is not correct. This command collects statistics on the given table and its indexes and places them in views in the SYSSTAT schema, such as SYSSTAT.TABLES, SYSSTAT.INDEXES, etc. This information is used by the DB2 optimizer to produce better access plans for your queries. It doesn't "register" indexes, and indexes are not removed automatically if you don't collect statistics on them.
As for the warning message SQL2314W:
It is just a warning that the table and index statistics are not logically consistent (for example, the number of index keys is greater than the number of rows in the table). This can happen when you collect statistics on a table that is being actively updated, even if you collect statistics on the table and its indexes with a single RUNSTATS command. You can either ignore this message or have the RUNSTATS utility lock the table while it collects statistics on the table and its indexes in a single command (the ALLOW READ ACCESS clause).
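For example, a minimal sketch of such a single-command collection, assuming a table named MYSCHEMA.ORDERS (hypothetical name):

-- Collect table and index statistics in one pass; ALLOW READ ACCESS blocks writers for the duration:
RUNSTATS ON TABLE MYSCHEMA.ORDERS WITH DISTRIBUTION AND DETAILED INDEXES ALL ALLOW READ ACCESS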
I am using Oracle 11g.
I am trying to implement concurrent loading into a table combined with index rebuilds. I have a few flows, each of which runs this scenario:
1. load data from source,
2. transform the data,
3. turn off the index on DWH table,
4. load data into DWH,
5. rebuild index on DWH table.
I disable and rebuild the indexes for better load performance, but there are situations where one flow is rebuilding the index while another tries to disable it. What I need is to take some lock between steps 2 and 3 that would be released after step 5.
Oracle's built-in LOCK TABLE mechanism is not sufficient, because the lock is released at the end of the transaction, and any ALTER statement commits implicitly, which releases the lock.
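For illustration, a minimal sketch of why that fails, assuming a table dwh_fact and an index dwh_fact_idx (hypothetical names):

LOCK TABLE dwh_fact IN EXCLUSIVE MODE;
-- ALTER is DDL, so it issues an implicit COMMIT, ending the transaction
-- and releasing the table lock immediately:
ALTER INDEX dwh_fact_idx UNUSABLE;
-- At this point another flow is free to lock the table and start its own disable/rebuild.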
The question is how to solve the problem?
Problem solved. What I wanted to do can easily be achieved using the DBMS_LOCK package (a sketch follows after the steps below):
1. DBMS_LOCK.REQUEST(...)
2. ALTER INDEX ... UNUSABLE
3. Load data
4. ALTER INDEX ... REBUILD
5. DBMS_LOCK.RELEASE(...)
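A minimal PL/SQL sketch of those steps, assuming an index named dwh_fact_idx and an arbitrary lock name (both hypothetical):

DECLARE
  l_handle VARCHAR2(128);
  l_status INTEGER;
BEGIN
  -- Map a lock name to a handle; every flow must use the same name.
  DBMS_LOCK.ALLOCATE_UNIQUE('DWH_INDEX_MAINT', l_handle);
  -- Request the lock in exclusive mode; release_on_commit => FALSE is essential,
  -- because the ALTER statements below commit implicitly.
  l_status := DBMS_LOCK.REQUEST(lockhandle        => l_handle,
                                lockmode          => DBMS_LOCK.X_MODE,
                                timeout           => 600,
                                release_on_commit => FALSE);
  IF l_status <> 0 THEN
    RAISE_APPLICATION_ERROR(-20001, 'Could not acquire DWH lock, status = ' || l_status);
  END IF;

  EXECUTE IMMEDIATE 'ALTER INDEX dwh_fact_idx UNUSABLE';
  -- ... load the data here ...
  EXECUTE IMMEDIATE 'ALTER INDEX dwh_fact_idx REBUILD';

  l_status := DBMS_LOCK.RELEASE(l_handle);
END;
/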
You can use DBMS_SCHEDULER:
run jobs that load and transform the data
turn off indexes
run jobs that load data into DWH
rebuild indexes
Using Chains:
http://docs.oracle.com/cd/B28359_01/server.111/b28310/scheduse009.htm#CHDCFBHG
You have to range partition your table by insertion time (in minute or hour buckets) and enable the indexes on a partition only when its time window is up. Each partition should have all indexes disabled after creation, and only one process should enable the indexes (see the sketch below).
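A minimal sketch of that layout with hourly partitions (all names hypothetical):

CREATE TABLE dwh_fact (
  id          NUMBER,
  insert_time DATE,
  payload     VARCHAR2(100)
)
PARTITION BY RANGE (insert_time) (
  PARTITION p_2012010100 VALUES LESS THAN (TO_DATE('2012-01-01 01:00', 'YYYY-MM-DD HH24:MI')),
  PARTITION p_2012010101 VALUES LESS THAN (TO_DATE('2012-01-01 02:00', 'YYYY-MM-DD HH24:MI'))
);

CREATE INDEX dwh_fact_idx ON dwh_fact (id) LOCAL;

-- Right after a partition is added, mark its index piece unusable so loads into it stay cheap:
ALTER INDEX dwh_fact_idx MODIFY PARTITION p_2012010101 UNUSABLE;

-- Once the partition's time window is up, the single "enabler" process rebuilds its index piece:
ALTER INDEX dwh_fact_idx REBUILD PARTITION p_2012010101;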
I've written a SQL query for a report that creates a permanent table and then performs a bunch of inserts and updates to get all the data, according to company policy. It runs fine in SQL Server Management Studio and in Crystal Reports 2008 on my machine. However, when I schedule it to run on the server with SAP BusinessObjects Central Management Console, it fails with the error "Associated statement not prepared."
I have found that changing this permanent table to be a temp table makes the query work. Why would this be?
Some research shows that this error is sometimes reported instead of the true error. Other people reporting it mention foreign key errors and (I would also assume) duplicate key errors.
Things I would check:
Does your permanent table have any unique constraints that might be violated? Or any foreign key constraints?
Are you creating indexes on the table after it has been created?
Are you creating any views over this permanent table?
What happens if the table already exists before the job is run?
What happens to the table if the job fails?
Are there any intermediate steps (such as within a stored procedure) that might involve additional temp or permanent tables?
ETA: Also check what schema the permanent table belongs to: is it usually created with "dbo"? Are you specifying that explicitly? Is there any chance that there might be a permissions problem?
That is often a generic error. Are you able to run it on the server as the account that it is scheduled to run as? It is most likely a permission error or constraint issue.
Assuming you really need a regular table, why is it not possible to create the permanent table once, rather than creating it every time you run the query?
Recreating a regular user table each time the query runs does not seem right. But to make it work, you could try recreating the table in a separate batch or query (e.g. put GO in the script, which splits it into separate batches).
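A minimal sketch of that batch split, assuming a report table dbo.ReportStaging populated from dbo.Orders (hypothetical names and columns):

IF OBJECT_ID('dbo.ReportStaging', 'U') IS NOT NULL
    DROP TABLE dbo.ReportStaging;
GO

CREATE TABLE dbo.ReportStaging (
    OrderId   INT            NOT NULL,
    OrderDate DATETIME       NOT NULL,
    Amount    DECIMAL(18, 2) NULL
);
GO

-- The inserts/updates that build the report run in their own batch,
-- so they are compiled only after the table above already exists.
INSERT INTO dbo.ReportStaging (OrderId, OrderDate, Amount)
SELECT OrderId, OrderDate, Amount
FROM dbo.Orders;
GO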
Regarding why it happens, I suspect statement caching. The server compiles the query and stores the result for some time in case the same query has to run again. My speculation is that it tries to run the compiled query, which refers to the table you have already dropped and recreated under the same name. The name is the same, but physically it's a new table, and you could be hitting a server bug this way. Just speculation; it could be a different kind of problem.
Without seeing code it's a guess, but since you are creating a permanent table every time you run the report, I assume you must be dropping the table at some point? (Or you'd have a LOT of tables building up over time.)
I suggest a couple angles to consider:
1) Make certain to prefix table names (perhaps with a session ID or something) if you are concerned about concurrency/locking issues and the like, so each report run has a table exclusive to itself.
2) If you are dropping the table at the end, adjust your logic to leave the table be instead, and write code that drops it when you (re)start the operation. It's possible the report is still clinging to the table and you are destroying it prematurely.
Bit of a long shot here, but I have a simple query below:
begin transaction
update s
set s.SomeField = null
from someTable s (NOLOCK)
rollback transaction
This runs in ~30 seconds sitting close to the SQL Server box. Are there any tricks I can use to improve the speed? The table has 144,306 rows in it.
Thanks.
The single largest component of the performance of a large UPDATE command like this is going to be the speed of your DB log.
For best performance:
Make sure the DB log (LDF file) is on a separate physical spindle from the DB data (MDF file)
Avoid parity RAID for the log volume, such as RAID-5; RAID-1 or RAID-10 are better
Make sure that the DB log file is pre-grown, and that it's physically contiguous on disk (see the sketch after this list)
Make sure your server has enough RAM -- ideally, at least enough to hold all of the DB pages containing the modified rows
Using SSDs for your data drive may also help, because the command will create a large number of dirty buffers that are flushed to disk later by the lazy writer; this can make other operations on the DB slow while it's happening.
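For the pre-grown log point, a minimal sketch of pre-sizing the log file, assuming the database is MyDb and its log file's logical name is MyDb_log (both hypothetical; check sys.database_files for the real names):

-- Find the logical file names and current sizes (size is reported in 8 KB pages):
SELECT name, size * 8 / 1024 AS size_mb FROM sys.database_files;

-- Pre-grow the log so the UPDATE does not pay for autogrow events mid-transaction:
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_log, SIZE = 4096MB);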
If there's no constraint on it, and you really need to set all values of that column to NULL, then I would test dropping the column and re-adding it.
Not sure if that would be faster or not, but I'd investigate it.
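A minimal sketch of that test, assuming SomeField is an INT column with no index, constraint, or default depending on it (otherwise the DROP COLUMN will fail):

ALTER TABLE dbo.someTable DROP COLUMN SomeField;
-- Re-adding the column as NULLable makes every existing row report NULL without a row-by-row update:
ALTER TABLE dbo.someTable ADD SomeField INT NULL;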
Try disabling the index temporarily.
You could change the syntax of your query slightly, but I saw no difference in my testing when doing that. I was measuring with STATISTICS IO and STATISTICS TIME.
You mention the column is indexed. You could disable it / re-enable it as part of your transaction. The t-sql for that is simple, see this - http://blog.sqlauthority.com/2007/05/17/sql-server-disable-index-enable-index-alter-index/
I've had to do that in the past for similar jobs and it has worked out well for me.
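A minimal sketch of wrapping the update in a disable/rebuild, assuming the index on SomeField is named IX_someTable_SomeField (hypothetical name):

ALTER INDEX IX_someTable_SomeField ON dbo.someTable DISABLE;

UPDATE s
SET s.SomeField = NULL
FROM dbo.someTable AS s;

-- REBUILD re-enables the index and recreates it from the current table data:
ALTER INDEX IX_someTable_SomeField ON dbo.someTable REBUILD;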
Try implementing it like this:
Disable Index
Drop the column
Create the column
Rebuild index
My guess is that it will improve performance.
We have a requirement to delete rows on the order of millions from multiple tables as a batch job (note that we are not deleting all the rows; we are deleting based on a timestamp stored in an indexed column). Obviously a normal DELETE takes forever (because of logging, referential constraint checking, etc.). I know in the LUW world we have ALTER TABLE NOT LOGGED INITIALLY, but I can't seem to find an equivalent SQL statement for DB2 v8 on z/OS. Does anyone have any ideas on how to do this really fast? Also, any ideas on how to avoid the referential checks when deleting the rows? Please let me know.
In the past I have solved this kind of problem by exporting the data and re-loading it with a replace style command. For example:
-- The SELECT must return the rows you want to keep (here, rows modified within the last 30 days):
EXPORT TO myfile.ixf OF IXF
SELECT *
FROM my_table
WHERE last_modified >= CURRENT TIMESTAMP - 30 DAYS;
Then you can LOAD it back in, replacing the old stuff.
LOAD FROM myfile.ixf OF ixf
REPLACE INTO my_table
NONRECOVERABLE INDEXING MODE INCREMENTAL;
I'm not sure whether this will be faster or not for you (probably it depends on whether you're deleting more than you're keeping).
Do the foreign keys already have indexes as well?
How do you have your delete action set? CASCADE, SET NULL, or NO ACTION?
Use SET INTEGRITY to temporarily disable constraints on the batch process.
http://www.ibm.com/developerworks/data/library/techarticle/dm-0401melnyk/index.html
http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/admin/r
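A minimal sketch of that approach (DB2 LUW syntax; the table name is hypothetical):

-- Turn constraint checking off for the child table before the batch delete:
SET INTEGRITY FOR myschema.child_table OFF;

-- ... run the batch DELETE here ...

-- Turn checking back on and have DB2 verify the remaining rows:
SET INTEGRITY FOR myschema.child_table IMMEDIATE CHECKED;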
We modified the tablespace so the lock would occur at the tablespace level instead of at the page level. Once we changed that, DB2 required only one lock to do the DELETE and we didn't have any issues with locking. As for the logging, we just asked the customer to be aware of the amount of logging required (as there did not seem to be a way to get around the logging issue). As for the constraints, we just dropped them and recreated them after the delete.
Thanks all for your help.