SQL Error adding constraint to table, ORA-01652 - unable to extend temp segment - sql

I've got this table with millions of rows that I loaded via the append hint.
Now, when I go to turn the constraints back on, I get the following:
2012-03-23 01:08:53,065 ERROR [SQL] [main]: Error in executing SQL:
alter table summarydata add constraint table_pk primary key (a, b, c, d, e, f)
java.sql.SQLException: ORA-30032: the suspended (resumable) statement has timed out
ORA-01652: unable to extend temp segment by 128 in tablespace MY_TEMP_TABLESPACE
Are there any best practices to avoid this? I'm adding some more datafiles, but why would this even be a problem?

The error is related to the temporary tablespace, not the data tablespace that holds the table and/or the primary key. You need to increase the size of MY_TEMP_TABLESPACE so it has enough space to do the sort, as @Lamak indicated.
If you don't know how much space it will need then you can turn AUTOEXTEND on, as @DCookie said, and if it is already on (for the temp, not data, tablespace!) then check the MAXSIZE setting and increase it if necessary. On some platforms the maximum size of a datafile (or, for a temp tablespace, a tempfile) is constrained, so you may need to add additional tempfiles.
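A sketch of checking and enlarging the temp tablespace; the tempfile paths here are illustrative, so check v$tempfile or dba_temp_files for your actual file names first:

```sql
-- See current tempfile sizes and autoextend settings:
SELECT file_name, bytes, autoextensible, maxbytes
FROM   dba_temp_files
WHERE  tablespace_name = 'MY_TEMP_TABLESPACE';

-- Turn autoextend on with a larger cap (path is illustrative):
ALTER DATABASE TEMPFILE '/u01/oradata/mydb/my_temp01.dbf'
  AUTOEXTEND ON NEXT 128M MAXSIZE 32G;

-- Or add a second tempfile if a platform file-size limit is the issue:
ALTER TABLESPACE my_temp_tablespace
  ADD TEMPFILE '/u01/oradata/mydb/my_temp02.dbf' SIZE 4G
  AUTOEXTEND ON NEXT 128M MAXSIZE 32G;
```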
If this is a one-off task and you don't want temp to stay big, you can shrink it afterwards. You also have the option to: create a new, large temporary tablespace; modify the user to use that instead; build the constraint; modify the user back to the original temp area; and drop the new, large temp tablespace.
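The temp-swap option might look like this; the tablespace name, tempfile path, size, and user name are all illustrative assumptions:

```sql
-- Dedicated large temp tablespace for the one-off constraint build:
CREATE TEMPORARY TABLESPACE big_temp
  TEMPFILE '/u01/oradata/mydb/big_temp01.dbf' SIZE 20G;

-- Point the loading user at it:
ALTER USER loader_user TEMPORARY TABLESPACE big_temp;

-- Build the constraint (this is where the big sort happens):
ALTER TABLE summarydata
  ADD CONSTRAINT table_pk PRIMARY KEY (a, b, c, d, e, f);

-- Switch back and clean up:
ALTER USER loader_user TEMPORARY TABLESPACE my_temp_tablespace;
DROP TABLESPACE big_temp INCLUDING CONTENTS AND DATAFILES;
```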

Any reason why you can't turn AUTOEXTEND on for the tablespace?

Failed to allocate an extent of the required number of blocks for an index segment in the tablespace indicated

I tried to run the following as part of a stored procedure:
insert into process_state_archive select * from process_state
where tstamp BETWEEN trunc(ADD_MONTHS(SYSDATE, -12)) AND trunc(ADD_MONTHS(SYSDATE, -3))
I got the following error:
Error report:
SQL Error: ORA-01654: unable to extend index WEBDEV.PROCESS_STAT_TSTAMP_ACTION by 8 in tablespace USERS.
01654. 00000 - "unable to extend index %s.%s by %s in tablespace %s"
*Cause: Failed to allocate an extent of the required number of blocks for
an index segment in the tablespace indicated.
*Action: Use ALTER TABLESPACE ADD DATAFILE statement to add one or more
files to the tablespace indicated.
But yesterday I was able to run the procedure without any error.
Can anyone please tell me how to resolve this error?
Your USERS tablespace is full. You may be able to free up some space by dropping something, possibly old objects - if this is a development environment in particular, see if you've been accumulating old objects in the recycle bin and purge any you no longer require.
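If you suspect the recycle bin, a quick check and purge might look like this (note PURGE DBA_RECYCLEBIN requires SYSDBA-level privileges, so most users will stick to their own schema):

```sql
-- List dropped objects still occupying space (SPACE is in blocks):
SELECT original_name, type, droptime, space
FROM   user_recyclebin;

-- Purge everything in your own schema's recycle bin:
PURGE RECYCLEBIN;

-- Or, with sufficient privilege, the whole database's recycle bin:
-- PURGE DBA_RECYCLEBIN;
```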
If you can't free up any space, then you need to do what the error message tells you to do: add an additional data file to the tablespace, or increase the size of an existing data file, assuming you have sufficient disk space to do so.
The documentation has a section about managing data files, including adding data files and changing the size of existing data files. Which action is appropriate will depend on your circumstances, and you'll need to decide what size is appropriate. You might also want to consider creating a new dedicated tablespace for your application rather than using the USERS tablespace, but again depends on your circumstances and needs.
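Either route might be sketched like this; the file paths and sizes are illustrative, so check dba_data_files for the real names and pick sizes that suit your disk:

```sql
-- See current files, sizes, and autoextend settings for USERS:
SELECT file_name, bytes, autoextensible, maxbytes
FROM   dba_data_files
WHERE  tablespace_name = 'USERS';

-- Option 1: grow an existing file (path illustrative):
ALTER DATABASE DATAFILE '/u01/oradata/mydb/users01.dbf' RESIZE 4G;

-- Option 2: add a new data file:
ALTER TABLESPACE users
  ADD DATAFILE '/u01/oradata/mydb/users02.dbf' SIZE 2G
  AUTOEXTEND ON NEXT 256M MAXSIZE 8G;
```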

Teradata Drop Column returns with "no more room"

I am trying to drop a varchar(100) column of a 150 GB table (4.6 billion records). All the data in this column is null. I have 30GB more space in the database.
When I attempt to drop the column, it says "no more room in database XY". Why does such an action need so much space?
The ALTER TABLE statement needs temporary storage for the altered version before overwriting the original table. I guess the table that you are trying to alter occupies at least 1/3 of your total storage size.
This could happen for a variety of reasons. It's possible that one of the AMPs in your database is full; this would cause that error even with a minor table alteration.
Try running the following SQL to check space:
select VProc, CurrentPerm, MaxPerm
from dbc.DiskSpace
where DatabaseName='XY';
Also, you should check which column the primary index is on in this very large table. If the data is not distributed evenly (i.e. the table is skewed), you could also run into space issues when altering the table or running a query against it.
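A rough skew check, assuming the standard dbc.TableSize view and an illustrative table name; a big gap between the max and min per-AMP CurrentPerm suggests a poorly chosen primary index:

```sql
-- Compare per-AMP permanent space for the table:
SELECT DatabaseName,
       TableName,
       SUM(CurrentPerm) AS TotalPerm,
       MAX(CurrentPerm) AS MaxAmpPerm,
       MIN(CurrentPerm) AS MinAmpPerm
FROM   dbc.TableSize
WHERE  DatabaseName = 'XY'
AND    TableName    = 'MyBigTable'   -- illustrative name
GROUP BY 1, 2;
```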
For additional suggestions, I found a decent article on the kinds of things you may want to investigate when the "no more room in database" error occurs - Teradata SQL Tutorial. Some of the suggestions include:
dropping any intermediate work or "sandbox" tables
implementing single-value or multi-value compression
dropping unwanted/unnecessary secondary indexes
removing data from DBC tables such as the access log or DBQL tables
removing and archiving old tables that are no longer used
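As a sketch of the compression suggestion: multi-value compression in Teradata is declared per column at table-creation time, so applying it typically means creating a compressed copy and inserting into it. The table and column names below, and the chosen value list, are purely illustrative:

```sql
-- Frequent values (and NULLs) listed in COMPRESS are stored once in the
-- table header instead of in every row:
CREATE TABLE sales_compressed (
    sale_id  INTEGER,
    region   VARCHAR(20) COMPRESS ('NORTH', 'SOUTH', 'EAST', 'WEST'),
    note     VARCHAR(100) COMPRESS          -- bare COMPRESS handles NULLs
)
PRIMARY INDEX (sale_id);
```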

Deleting column doesn't reduce database size

I have a test database with 1 table having some 50 million records. The table initially had 50 columns. The table has no indexes. When I execute the "sp_spaceused" procedure I get "24733.88 MB" as the result. Now, to reduce the size of this database, I removed 15 columns (mostly int columns) and ran "sp_spaceused" again, but I still get "24733.88 MB" as the result.
Why is the database size not reducing after removing so many columns? Am I missing anything here?
Edit: I have tried database shrinking but it didn't help either
Try running the following command:
DBCC CLEANTABLE ('dbname', 'yourTable', 0)
It will free space from dropped variable-length columns in tables or indexed views. More information here: DBCC CLEANTABLE, and here: Space used does not get changed after dropping a column.
Also, as correctly pointed out in the link in the first comment on this answer: after you've executed the DBCC CLEANTABLE command, you need to REBUILD your clustered index, if the table has one, in order to get the space back.
ALTER INDEX IndexName ON YourTable REBUILD
When a variable-length column is dropped from a table, the drop does not reduce the size of the table. The table size stays the same until the indexes are reorganized or rebuilt.
There is also the DBCC CLEANTABLE command, which can be used to reclaim space previously occupied by variable-length columns. Here is the syntax:
DBCC CLEANTABLE ('MyDatabase', 'MySchema.MyTable', 0) WITH NO_INFOMSGS;
GO
Raj
The database size will not shrink simply because you have deleted objects. The database server usually holds the reclaimed space to be used for subsequent data inserts or new objects.
To reclaim the space freed, you have to shrink the database file. See How do I shrink my SQL Server Database?

Should I disable or drop indexes when doing a high volume insert on Oracle?

I'm doing a high-volume insert-into-select-from statement in Oracle, and I'm running out of undo space (ORA-30036: unable to extend segment by 8 in undo tablespace 'UNDOTBS1').
The general consensus seems to be to drop indexes on the target table and then recreate them when the insert completes.
Is just disabling the indexes/constraints on the table ok? How is it different from dropping the indexes/constraints?
Dropping the index should be faster for loading. If you are not interested in the recoverability of the table, you could also use a direct-path (APPEND) insert with NOLOGGING on the table, which generates minimal undo for the table data and minimal redo. The danger is that if the DBA needs to recover the database you will see corruption errors on the table; depending on the use of the table/database that may be okay. I would rebuild the indexes after the load anyway, to make sure everything is indexed, so I do not believe you would gain anything from merely disabling them rather than dropping them.
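A middle ground between dropping and keeping the indexes is to mark them UNUSABLE so the direct-path insert skips index maintenance, then rebuild afterwards. All object names here are illustrative:

```sql
-- Take the constraint and index out of play for the load:
ALTER TABLE target_table DISABLE CONSTRAINT target_pk;
ALTER INDEX target_ix UNUSABLE;
ALTER SESSION SET skip_unusable_indexes = TRUE;

-- Direct-path insert: minimal undo for the table data:
INSERT /*+ APPEND */ INTO target_table
SELECT * FROM source_table;
COMMIT;

-- Rebuild and re-enable once the load is done:
ALTER INDEX target_ix REBUILD;
ALTER TABLE target_table ENABLE CONSTRAINT target_pk;
```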

Bulk delete (truncate vs delete)

We have a table with 150+ million records. We need to clear/delete all rows. A delete operation would take forever because it writes to the transaction logs, and we cannot change the recovery model for the whole DB. We have tested the truncate table option.
What we realized is that truncate deallocates pages from the table and, if I am not wrong, makes them available for reuse, but doesn't shrink the DB automatically. So, if we want to reduce the DB size, we would really need to run the shrink-db command after truncating the table.
Is this normal procedure? Anything we need to be careful or aware about, or are there any better alternatives?
TRUNCATE is what you're looking for. If you need to slim down the db afterwards, run a shrink.
This MSDN reference (if you're talking T-SQL) compares the behind-the-scenes work of deleting rows versus truncating.
"Delete all rows"... wouldn't DROP TABLE (and re-creating an empty one with the same schema / indices) be preferable? (I personally like "fresh starts" ;-) )
This said TRUNCATE TABLE is quite OK too, and yes, DBCC SHRINKFILE may be required afterwards if you wish to recover the space.
Depending on the size of the full database, the shrink may take a while; I've found it to go faster if it is shrunk in smaller chunks, rather than trying to get it back all at once.
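Shrinking in chunks might look like the following; the logical file name and the target sizes (in MB) are illustrative, stepping down toward the final target rather than jumping there in one go:

```sql
-- Step the file down in stages instead of one large shrink:
DBCC SHRINKFILE (MyDb_Data, 45000);
DBCC SHRINKFILE (MyDb_Data, 40000);
DBCC SHRINKFILE (MyDb_Data, 35000);
```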
One thing to remember with Truncate Table (as well as drop table) is going forward this will not work if you ever have foreign keys referencing the table.
As pointed out, if you can't use truncate or drop:
SELECT 1;
WHILE @@ROWCOUNT <> 0
    DELETE TOP (100000) FROM MyTable;
You have a normal solution (truncate + shrink db) to remove all the records from a table.
As Irwin pointed out, the TRUNCATE command won't work while the table is referenced by a foreign key constraint. So first drop the constraints, truncate the table, and recreate the constraints.
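That drop/truncate/recreate sequence might be sketched like this, with all table, column, and constraint names being illustrative:

```sql
-- Drop the referencing foreign key first:
ALTER TABLE child_table DROP CONSTRAINT fk_child_parent;

-- Now the truncate is allowed:
TRUNCATE TABLE parent_table;

-- Recreate the constraint afterwards:
ALTER TABLE child_table
  ADD CONSTRAINT fk_child_parent
  FOREIGN KEY (parent_id) REFERENCES parent_table (id);
```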
If you're concerned about performance and this is a regular routine for your system, you might want to look into moving this table to its own data file, then running the shrink only against the target data file.