ORA-01652 error when fetching from cursor - sql

I have a stored procedure where I use a cursor to loop through items in a temporary table:
OPEN CURSOR_SCORE_ITEMS FOR
  SELECT ID_X, ID_Y
    FROM SCORE_ITEMS
   GROUP BY ID_X, ID_Y
  HAVING SUM(SCORE) > 10;

LOOP
  FETCH CURSOR_SCORE_ITEMS BULK COLLECT INTO COMPARE_ITEMS LIMIT 100;
  EXIT WHEN COMPARE_ITEMS.COUNT = 0; -- without this the loop never ends
  -- loop over items and do stuff
END LOOP;

CLOSE CURSOR_SCORE_ITEMS;
The procedure works fine for instances where the SCORE_ITEMS table is small, but for large tables (several million rows) I am receiving the error
"ORA-01652: unable to extend temp segment by 12800 in tablespace TEMP_ALL"
(translated from German).
Note that SCORE_ITEMS is a temporary table which is generated earlier in the procedure. It seems that the cursor query is exceeding the size of the temp tablespace.
I have already read some solutions that involve increasing the size of the tablespace, but I do not have any privileges on this database, so I do not think that is possible. Is there an alternative way, or some kind of preprocessing I might consider, that would reduce the overhead in the temp tablespace?

Global Temporary Tables are written to TEMPORARY tablespace (that is, not the usual tablespace for heap tables). Do you have a separate temporary tablespace for GTTs? I suspect not. Most places don't.
So (assuming No), when SCORE_ITEMS has millions of rows you've already eaten a big chunk of TEMP. Then your query kicks off with an aggregation that is big enough to spill into TEMP - because GROUP BY needs sorting.
You have already ruled out the obvious solution:
increasing the size of the tablespace but I do not have any privileges on this database so I do not think that is possible.
I don't know whether this also rules out the radical idea of talking to your DBA and seeing whether they will increase the space allocated to TEMP, or - better - create a new tablespace for Global Temporary Tables.
The other thing to consider is whether you actually need the SCORE_ITEMS GTT at all. It's not unusual for people to populate a GTT when they could just write a more efficient SELECT instead; see the sketch below. There's a lot of overhead in a GTT - all that I/O to disk, not to mention contending for shared TEMP tablespace. It's definitely an option to consider.
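As a minimal sketch of that idea - assuming SCORE_ITEMS is currently filled from some base query (SOURCE_A, SOURCE_B and the join condition below are hypothetical stand-ins for whatever that query really is) - you can inline the population query into the cursor with a WITH clause and skip the GTT entirely:
OPEN CURSOR_SCORE_ITEMS FOR
  WITH SCORE_ITEMS AS (
    -- stand-in for the query that currently populates the GTT
    SELECT A.ID_X, B.ID_Y, A.SCORE
      FROM SOURCE_A A
      JOIN SOURCE_B B ON B.ID = A.ID
  )
  SELECT ID_X, ID_Y
    FROM SCORE_ITEMS
   GROUP BY ID_X, ID_Y
  HAVING SUM(SCORE) > 10;
The GROUP BY can still spill to TEMP for very large inputs, but you no longer pay for a full staged copy of the intermediate rows on top of the sort.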


SQL Delete * causing performance issues

I have a PL/SQL file that at one point needs to delete an entire table. The challenge is:
truncate table cannot be used since the DB user in question cannot be provided rights to execute DDL commands (owing to SOX compliance)
delete * works well when the table has a small number of records. However, the table might have millions of records, in which case the redo log file size increases drastically, causing performance issues
The following solutions work:
Increasing the redo log file size
Deleting records in batch mode (within nested transactions)
Is there any better and more efficient way to address this issue?
If the redo log file size is the problem, you can delete in portions with COMMIT after each delete. For example:
BEGIN
  LOOP
    -- delete up to 1,000,000 rows per iteration
    DELETE FROM tmp_test_table
     WHERE 1 = 1 -- replace with your real WHERE condition
       AND ROWNUM <= 1000000;
    -- exit when there is nothing left to delete
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
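One caveat worth adding: committing inside the loop keeps the redo/undo footprint of each transaction small, but the overall delete is no longer atomic, and concurrent readers of the table become more likely to hit ORA-01555 (snapshot too old) while the batches run.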
There are several approaches you can use:
Use partitioning: The fastest way to do a mass delete is to drop an Oracle partition.
Tune the delete subquery: Many Oracle deletes use a WHERE clause subquery, and optimizing the subquery will improve the delete speed.
Use bulk deletes: Oracle PL/SQL has bulk operations (FORALL) that are often faster than a standard SQL delete; see the sketch after this list.
Drop indexes & constraints: If you are tuning a delete in a nighttime batch job, consider dropping the indexes and rebuilding them after the delete job has completed.
Small PCTUSED: For tuning mass deletes you can reduce freelist overhead by setting a low value for PCTUSED, so that Oracle only returns a block to the freelists when the block is almost empty.
Parallelize the delete job: You can run a massive delete in parallel with the PARALLEL hint. If you have 36 processors, the full scan can run up to 35 times faster (cpu_count - 1).
Consider NOARCHIVELOG: Take a full backup first, bounce the database into NOARCHIVELOG mode for the delete, and bounce it back into ARCHIVELOG mode afterwards.
Use CTAS: Another option is to create a new table using CTAS, where the SELECT statement filters out the rows that you want to delete. Then rename the original table, rename the new table to the old name, and transfer constraints and indexes.
Lastly, resist the temptation to do "soft" deletes - an approach that can be fatal to performance.
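To make the bulk-delete bullet concrete, here is a minimal PL/SQL sketch using BULK COLLECT and FORALL; the table and column names (tmp_test_table, id) are hypothetical placeholders:
DECLARE
  TYPE t_ids IS TABLE OF tmp_test_table.id%TYPE;
  v_ids t_ids;
  CURSOR c_doomed IS
    SELECT id FROM tmp_test_table; -- add your real WHERE condition here
BEGIN
  OPEN c_doomed;
  LOOP
    FETCH c_doomed BULK COLLECT INTO v_ids LIMIT 10000;
    EXIT WHEN v_ids.COUNT = 0;
    -- a single context switch deletes the whole batch
    FORALL i IN 1 .. v_ids.COUNT
      DELETE FROM tmp_test_table WHERE id = v_ids(i);
    COMMIT;
  END LOOP;
  CLOSE c_doomed;
END;
For a simple predicate, the batched plain DELETE shown earlier is usually just as fast; FORALL pays off when the rows to delete have to be identified procedurally.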
This method has advantages:
You can create the new table in the required tablespace, without fragmentation, on the required partition of the disk.
The physical re-creation of the table removes table fragmentation and chained rows.
If you need to remove 60-99% of the rows from a table emp, then you make a new table emp_new.
Create an empty copy of the table (a TABLESPACE clause can be added here):
create table emp_new as select * from emp where 1 = 0;
Copy the required rows to the new table:
insert into emp_new select * from emp where date_insert > sysdate - 30;
Create the indexes on the new table (the empno column is illustrative):
create index pk_emp on emp_new (empno);
Drop the old emp table:
drop table emp;
Rename the new table to the old name:
rename emp_new to emp;
In Oracle PL/SQL, DDL statements must be run via EXECUTE IMMEDIATE. Hence you would use:
execute immediate 'truncate table schema.tablename';
Alternatively, you can use
DBMS_UTILITY.EXEC_DDL_STATEMENT('TRUNCATE TABLE tablename');
(note there is no trailing semicolon inside the string). Try either one; it may work in your case.

Use of table variables, tempdb growth and sleeping db sessions

While I have gained some further insight from this post, I am still struggling to get a fuller understanding of my recent use of table variables and the associated tempdb growth.
I have recently been using table variables within both stored procedures and table-valued functions. My choice of table variables vs. local/global temporary tables is one area that might affect the larger challenge I'm experiencing.
Since using this type of temp table, tempdb has grown to around the 50+ GB region, and when inspecting the database using exec sp_spaceused @updateusage = 'true' I am seeing database_size: 51935.13 MB and unallocated_space: 51908.80 MB, while checking the contents of the DB shows no temporary or system tables present. For reference, tempdb.ldf is very small.
On inspecting my session usage using exec sp_who I am also seeing multiple rows which indicate sleeping, which I suspect may be where we are experiencing issues with connections not closing properly.
From reading various posts and SO answers, the general consensus is not to attempt shrinking tempdb and its associated files, and in truth I'd prefer to resolve the underlying issue rather than move to more fragmented data storage.
Is there any advice on why my existing approach could be affecting tempdb growth, and whether the use of local/global temporary tables would be more appropriate?
Regarding tempdb itself, while storage is cheap, I need to ensure this growth is contained, so any advice on maintenance (splitting the DB into multiple files, possible shrinking, moving the DB to a separate drive) etc. would be appreciated.
You can inspect the objects inside the tempdb database to see what is going on.
The following code lists the objects in tempdb, sorted ascending by creation date, and computes each object's age in hours:
use tempdb
go
SELECT name,
       object_id,
       SCHEMA_NAME(schema_id) AS obj_schema,
       type,
       type_desc,
       create_date,
       modify_date,
       OBJECT_NAME(parent_object_id) AS parent_obj,
       DATEDIFF(hour, create_date, GETDATE()) AS duration_hour
FROM sys.objects
WHERE name NOT LIKE 'sys%'
ORDER BY create_date;
The objects in tempdb are classified into three groups:
Internal objects
User objects
Version store
The following code shows how much tempdb disk space is allocated to each category:
SELECT SUM(user_object_reserved_page_count) / 128.0     AS UserObjectsMB,
       SUM(user_object_reserved_page_count)             AS UserObjectPages_count,
       SUM(version_store_reserved_page_count) / 128.0   AS VersionStoreMB,
       SUM(version_store_reserved_page_count)           AS VersionStorePages_count,
       SUM(internal_object_reserved_page_count) / 128.0 AS InternalObjectsMB,
       SUM(internal_object_reserved_page_count)         AS InternalObjectPages_count,
       SUM(unallocated_extent_page_count) / 128.0       AS FreeSpaceMB,
       SUM(unallocated_extent_page_count)               AS FreePages_count
FROM sys.dm_db_file_space_usage;
You can review:
local-and-global-temporary-tables-in-sql-server
The size of tempdb should be big enough for the daily and peak workload, to avoid autogrowth while the workload is running.
My advice: don't shrink tempdb during startup or at any other time, unless absolutely necessary.
Storage is cheap, and you can assign dedicated storage for tempdb (even SSD).
For more details:
Capacity Planning for tempdb
Optimizing tempdb Performance
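Since you also mention sleeping sessions in sp_who: a sketch like the following ties tempdb allocations back to individual sessions, using the standard sys.dm_db_session_space_usage and sys.dm_exec_sessions DMVs (the subtractions net out pages each session has already deallocated):
-- per-session tempdb pages still held, largest consumers first
SELECT s.session_id,
       s.login_name,
       s.status, -- 'sleeping' sessions show up here
       (u.user_objects_alloc_page_count
        - u.user_objects_dealloc_page_count) / 128.0     AS user_objects_mb,
       (u.internal_objects_alloc_page_count
        - u.internal_objects_dealloc_page_count) / 128.0 AS internal_objects_mb
FROM sys.dm_db_session_space_usage AS u
JOIN sys.dm_exec_sessions AS s
  ON s.session_id = u.session_id
ORDER BY user_objects_mb DESC;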

TempDB usage SQL Server 2012

We have a 60 GB production database in my new organization. We run close to 500 reports overnight from this DB. I notice that all the report scripts create tables in TempDB and then populate the final report. TempDB size is 6 GB. There is no dependency set up for these report scripts, which are called from PowerShell.
Is it a good practice to use TempDB extensively in this manner? Or is it better to create all the staging tables in the production database itself and drop them after the report is generated?
Thanks,
Roopesh
Temporary tables always get created in TempDb. However, the size of TempDb is not necessarily due only to temporary tables. TempDb is used in various ways:
Internal objects (sort & spool, CTEs, index rebuilds, hash joins, etc.)
User objects (temporary tables, table variables)
Version store (AFTER/INSTEAD OF triggers, MARS)
So, since it is used by various SQL operations, its size can grow for other reasons as well. In your case, however, if your TempDb has sufficient space to operate normally, then your internal process creating temporary tables in TempDb is not an issue. You can think of TempDb as a kind of public toilet for SQL Server.
You can check what is causing TempDb to grow with the query below:
SELECT SUM(user_object_reserved_page_count) * 8     AS usr_obj_kb,
       SUM(internal_object_reserved_page_count) * 8 AS internal_obj_kb,
       SUM(version_store_reserved_page_count) * 8   AS version_store_kb,
       SUM(unallocated_extent_page_count) * 8       AS freespace_kb,
       SUM(mixed_extent_page_count) * 8             AS mixedextent_kb
FROM sys.dm_db_file_space_usage;
If the above query shows:
A high number of user object pages, there is heavy usage of temp tables, cursors, or table variables.
A high number of internal object pages, query plans are making heavy use of work tables, e.g. for sorting, GROUP BY, or hash joins.
A high number of version store pages, there are long-running transactions or high transaction throughput.
You can monitor TempDb via the above script and identify the real cause of its growth first. That said, 60 GB is quite a small database, and a 6 GB TempDb for it is fairly acceptable.
Part of the above answer is copied from my other answer on SO.

Oracle performance when dropping all tables

I have the following Oracle SQL:
begin
  -- tables
  for c in (select table_name from user_tables) loop
    execute immediate 'drop table ' || c.table_name || ' cascade constraints';
  end loop;
  -- sequences
  for c in (select sequence_name from user_sequences) loop
    execute immediate 'drop sequence ' || c.sequence_name;
  end loop;
end;
It was given to me by another dev, and I have no idea how it works, but it drops all tables in our database.
It works, but it takes forever!
I don't think dropping all of my tables should take that long. What's the deal? And, can this script be improved?
Note: There are somewhere around 100 tables.
"It works, but it takes forever!"
Forever in this case meaning less than three seconds a table :)
There is more to dropping a table than just dropping the table. There are dependent objects to drop as well - constraints, indexes, triggers, LOB or nested table storage, etc. There are views, synonyms, and stored procedures to invalidate. There are grants to be revoked. The table's space (and that of its indexes, etc.) has to be de-allocated.
All of this activity generates recursive SQL: queries which select from or update the data dictionary, and which can perform badly. Even if we don't use triggers, views, or stored procs, the database still has to run the queries to establish their absence.
Unlike normal SQL, we cannot tune recursive SQL, but we can shape the environment to make it run quicker.
I'm presuming that this is a development database, in which objects get built and torn down on a regular basis, and that you're using 10g or higher.
Clear out the recycle bin.
SQL> purge recyclebin;
Gather statistics for the data dictionary (this will require DBA privileges). These may already be gathered, as that is the default behaviour in 10g and 11g.
Once you have dictionary stats, ensure you're using the cost-based optimizer. Ideally this should be set at the database level, but we can fix it at the session level (note that the old CHOOSE value is desupported from 10g onwards):
SQL> alter session set optimizer_mode=all_rows;
I would try changing the DROP TABLE statement to use the Purge keyword. Since you are dropping all tables, you don't really need to cascade the constraints at the same time. This action is probably what is causing it to be slow. I don't have an instance of Oracle to test this with though, so it may throw an error.
If it does throw an error, or not go faster, I would remove the Sequence drop commands to figure out which command is taking so much time.
Oracle's documentation on the DROP TABLE command is here.
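For concreteness, a minimal sketch of that suggestion applied to the loop from the question (untested, as noted above):
begin
  for c in (select table_name from user_tables) loop
    -- PURGE bypasses the recycle bin, so the space is released immediately;
    -- if there are foreign keys between the tables you may still need
    -- the CASCADE CONSTRAINTS clause from the original script
    execute immediate 'drop table ' || c.table_name || ' purge';
  end loop;
end;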
One alternative is to drop the user instead of the individual tables etc., and recreate it if needed. It's generally more robust, as it drops all of the tables, views, procedures, sequences etc., and would probably be faster.

Tablespace after Create Table. What does it mean?

I'm reading some Oracle scripts, and I found one with this:
Create Table XXY(...)
Tablespace SOME_TABLESPACE
....
NOCOMPRESS
NOPARALLEL
..
What does this mean? What is it for? There are many Create Table statements, but the Tablespace clause after each one is exactly the same.
As I understand it, SOME_TABLESPACE must exist when this script is executed, but that's all I could get. Is the purpose of the statement to store all the tables in the same place?
EDIT
I read this
http://www.adp-gmbh.ch/ora/concepts/tablespaces.html
"Same place" doesn't quite describe it ...
A tablespace is a grouping of common data files. With your Create statements you can define in which tablespace an object gets stored. Usually different types of Oracle objects are stored in different tablespaces, which can have different capabilities.
Some examples:
"Data" (your tables and the rows stored in those tables) is stored in a different tablespace than "system information" (such as transaction logs or "caches"). This allows you to store system information on a local drive (quick, but somewhat limited in space) and data in a storage area network (basically unlimited space, but not quite as fast).
Table data and indexes can be stored in different tablespaces, which may be on different disks. Table lookups and index lookups can then use different disks and be faster than if both were on the same disk.
Tablespaces and their different characteristics are one of the many ways of tuning an Oracle DB. Lots of capabilities, lots of complexity. If all you're running is a little development machine, there is little need to worry about it.
It creates the table in that tablespace. In the case of partitioned tables it defines that tablespace as the default for new partitions and subpartitions also.
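As a small illustration (the tablespace names USERS and INDX are assumptions; substitute tablespaces that actually exist in your database), you can place a table and its index in different tablespaces and then verify the placement in the data dictionary:
create table xxy (id number, val varchar2(50)) tablespace users;
create index xxy_idx on xxy (id) tablespace indx;
-- verify where each segment landed
select table_name, tablespace_name from user_tables where table_name = 'XXY';
select index_name, tablespace_name from user_indexes where index_name = 'XXY_IDX';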
Please read:
http://download.oracle.com/docs/cd/B10501_01/server.920/a96524/c04space.htm