Dropping all tables in SQL*Plus - sql

Whenever I run select * from tab; I see tables that I did not create.
It looks like:
TNAME TABTYPE CLUSTERID
------------------------------ ------- ----------
BIN$GGrKjbVGTVaus4568IEhUQ==$0 TABLE
BIN$H+a0o3uyTTKTOA8WMkNltg==$0 TABLE
BIN$IUNyfOwkS0WSEVjbn04mNw==$0 TABLE
BIN$K/3NJw5zRXyRqPixL3tqDA==$0 TABLE
BIN$KQw9SejEToywXlHp18FMZA==$0 TABLE
BIN$MOEfgWgsS0GkC/CpYW+cxA==$0 TABLE
BIN$QkUYVciPQpWBwqBhxH+Few==$0 TABLE
BIN$QmtbaOYiTHCGEE0PRiLzmg==$0 TABLE
BIN$QxF4/JShTxu8PYIx8g/L7Q==$0 TABLE
BIN$UtEI7RbiQvOYzKqJEibwKQ==$0 TABLE
BIN$VMG0FXp2ROCKbedj3Ge9hg==$0 TABLE
I tried spooling the output of
select 'drop table '||table_name||' cascade constraints;' from user_tables;
and executing it, but those tables were not selected. It just looks really messy and is bothering me a lot. What are they? Is there any way I can get rid of them? Or do I just have to deal with it and work around it?

Q What are they?
A Looks like tables that were dropped and preserved in the RECYCLEBIN.
Q Is there any way I can get rid of them ?
A You can use, e.g., PURGE TABLE "BIN$GGrKjbVGTVaus4568IEhUQ==$0"; to remove them. (The name has to be double-quoted because it contains characters that are invalid in an unquoted identifier.)
That will do them individually. Note that indexes and LOBs (and other out-of-line storage) may also have entries in the recycle bin. There are other statements you can use to clear out all entries from the recycle bin.
Q Or do I have to just deal with it and work with it?
A That's up to you.
There are statements to purge the recycle bin for the current user or, if you have the privileges, for the entire database. You can also disable the recycle bin, and DROP TABLE has a PURGE clause that drops a table without placing it in the recycle bin (so these entries won't be created in the future).
There's no need for me to repeat the contents of the Oracle documentation.
Refer to: http://docs.oracle.com/cd/B28359_01/server.111/b28310/tables011.htm#ADMIN01511
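To summarize the options from the documentation in one place (the BIN$ name is the example from the question; some_table is a placeholder):

```sql
-- remove a single dropped table from the recycle bin
PURGE TABLE "BIN$GGrKjbVGTVaus4568IEhUQ==$0";

-- empty the current user's recycle bin
PURGE RECYCLEBIN;

-- empty every user's recycle bin (requires elevated privileges)
PURGE DBA_RECYCLEBIN;

-- drop a table without placing it in the recycle bin at all
DROP TABLE some_table PURGE;
```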

Posting as a comment for better visibility.
Credit to @Tomás for pointing me in the right direction.
I had to do
purge recyclebin;
instead of purge dba_recyclebin, since I have insufficient privileges (I am a student logged in via ssh).
If you want to turn your recycle bin off, use ALTER SESSION SET recyclebin = OFF;

Related

The "proper" way to atomically replace all contents in a PostgreSQL table?

In the project I have been working on recently, many (PostgreSQL) database tables are used simply as big lookup arrays. We have several background worker services that periodically pull the latest data from a server and then replace the entire contents of a table with it. The replacement has to be atomic, because we don't want readers to see a partially loaded table.
I thought the simplest way to do the replacing is something like this:
BEGIN;
DELETE FROM some_table;
COPY some_table FROM 'source file';
COMMIT;
But I found a lot of production code use this method instead:
BEGIN;
CREATE TABLE some_table_tmp (LIKE some_table);
COPY some_table_tmp FROM 'source file';
DROP TABLE some_table;
ALTER TABLE some_table_tmp RENAME TO some_table;
COMMIT;
(I omit some logic, such as changing the owner of a sequence, etc.)
I just can't see any advantage to this method, especially after some reading and experimenting: statements like ALTER TABLE and DROP TABLE acquire an ACCESS EXCLUSIVE lock, which blocks even a SELECT.
Can anyone explain what problem the latter SQL pattern is trying to solve? Or is it wrong, and should we avoid using it?
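For completeness, a third pattern sometimes used (assuming no foreign keys or views depend on the table) is TRUNCATE inside the same transaction. It also takes an ACCESS EXCLUSIVE lock, but unlike DELETE it does not leave dead rows behind:

```sql
BEGIN;
TRUNCATE some_table;                  -- transactional in PostgreSQL
COPY some_table FROM 'source file';
COMMIT;
```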

Oracle: creating table from another table created partially; unable to extend temp space

We are trying to create a table from another table with:
create table tab1 as select * from tab2;
But the process failed with the error
ORA-01652: unable to extend temp segment by 8192 in tablespace
However, the table tab1 was created with partial data only; there is a row-count mismatch between tab1 and tab2. Neither table was being populated or updated by any other transaction. This happened with a couple of tables.
As far as I know, CREATE TABLE should either create the table completely or not at all; a partially created table should not be possible.
Any insight from experts is appreciated.
Putting the cause of the error aside (addressed by @Leo in his answer):
I have not found anything specific on transactions for CREATE TABLE ... AS SELECT. Any CREATE TABLE statement is a DDL operation, and DDL operations are generally non-transactional.
This is just speculation, but I'd say the table creation itself did succeed. The instruction you gave is really two operations in one: the first is the actual table creation, which works (and, not being transactional, cannot be rolled back by the second); the second is a variant of a bulk insert from a select, which breaks at some point.
This is probably not answering your question, but since the operation is apparently two-phase anyway, if you need a more transactional approach you would benefit from splitting it into two separate statements:
first:
CREATE TABLE tab1 AS SELECT * FROM tab2 WHERE 1 = 2;
second:
INSERT INTO tab1 SELECT * FROM tab2;
This way if the second part fails, you will not end up with a partial insert. You will still have the table in place though.
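After the insert commits, you can confirm the copy is complete by comparing row counts:

```sql
SELECT (SELECT COUNT(*) FROM tab1) AS tab1_rows,
       (SELECT COUNT(*) FROM tab2) AS tab2_rows
  FROM dual;
```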
Execute the following as a privileged user to determine the datafile names for the existing tablespace:
SELECT * FROM DBA_DATA_FILES;
Then extend the size of the datafile as follows (replace the filename with the one from the previous query):
ALTER DATABASE DATAFILE 'C:\ORACLEXE\ORADATA\XE\SYSTEM.DBF' RESIZE 4096M;
You can also try the command below, or ask the DBA to grant the privilege:
grant unlimited tablespace to <schema_name>;
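Since ORA-01652 concerns temp space, it can also help to check how much temporary space is available before retrying (this assumes you can query the DBA views; the view exists in Oracle 11g and later):

```sql
SELECT tablespace_name,
       tablespace_size / 1024 / 1024 AS size_mb,
       free_space      / 1024 / 1024 AS free_mb
  FROM dba_temp_free_space;
```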

SQL Delete * causing performance issues

I have a PL/SQL file that at one point needs to delete an entire table. The challenge is:
truncate table cannot be used, since the DB user in question cannot be granted rights to execute DDL commands (owing to SOX compliance)
delete works well when the table has a small number of records. However, the table might have millions of records, in which case the redo log grows drastically, causing performance issues
Following solutions work:
Increasing the redo log file size
Deleting records in batch mode (within nested transactions)
Is there any better and more efficient way to address this issue?
If redo log size is the problem, you can delete in batches with a COMMIT after each one. For example:
BEGIN
  LOOP
    -- delete up to 1,000,000 rows per iteration
    DELETE FROM tmp_test_table
     WHERE 1 = 1  -- replace with your WHERE condition
       AND ROWNUM <= 1000000;
    -- exit when there is nothing left to delete
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/
There are several approaches you can consider:
Use partitioning: The fastest way to do a mass delete is to drop an Oracle partition.
Tune the delete subquery: Many Oracle deletes use a where clause subquery and optimizing the subquery will improve the SQL delete speed.
Use bulk deletes: Oracle PL/SQL has a bulk delete operator that often is faster than a standard SQL delete.
Drop indexes & constraints: If you are tuning a delete in a nighttime batch job, consider dropping the indexes and rebuilding them after the delete job has completed.
Small pctused: For tuning mass deletes you can reduce freelist overhead by setting Oracle to only re-add a block to the freelists when the block is dead empty by setting a low value for pctused.
Parallelize the delete job: You can run massive delete in parallel with the parallel hint. If you have 36 processors, the full-scan can run 35 times faster (cpu_count-1)
Consider NOARCHIVELOG: Take a full backup first, bounce the database into NOARCHIVELOG mode for the delete, and bounce it back into ARCHIVELOG mode afterwards.
Use CTAS: Another option you can try would be to create a new table using CTAS where the select statement filters out the rows that you want to delete. Then do a rename of the original followed by a rename of the new table and transfer constraints and indexes.
Lastly, resist the temptation to do "soft" deletes, a brain-dead approach that can be fatal.
This method has advantages:
You can create the table in the required tablespace, without fragmentation, on the required partition of the disk.
The physical re-creation of the table removes table fragmentation and chained rows.
Suppose you need to remove 60-99% of the rows from a table emp. Then:
Create a new, empty table emp_new with the same structure:
create table emp_new as select * from emp where 1 = 2;
Copy the required rows into the new table:
insert into emp_new select * from emp where date_insert > sysdate - 30;
Create the indexes on the new table (the indexed column here is illustrative):
create index pk_emp on emp_new (empno);
Drop the old emp table:
drop table emp;
Rename the new table to the old name:
rename emp_new to emp;
In Oracle PL/SQL, DDL statements must be run through EXECUTE IMMEDIATE. Hence you should use:
execute immediate 'truncate table schema.tablename';
Alternatively, you can use
DBMS_UTILITY.EXEC_DDL_STATEMENT('TRUNCATE TABLE tablename');
(note there is no trailing semicolon inside the string). Try either one; it may work in your case.

How do I replace a table in Postgres?

Basically I want to do this:
begin;
lock table a;
alter table a rename to b;
alter table a1 rename to a;
drop table b;
commit;
i.e. gain control and replace my old table while no one has access to it.
Simpler:
BEGIN;
DROP TABLE a;
ALTER TABLE a1 RENAME TO a;
COMMIT;
DROP TABLE acquires an ACCESS EXCLUSIVE lock on the table anyway. An explicit LOCK command is no better. And renaming a dead guy is just a waste of time.
You may want to write-lock the old table while preparing the new, to prevent writes in between. Then you'd issue a lock like this earlier in the process:
LOCK TABLE a IN SHARE MODE;
What happens to concurrent transactions trying to access the table? It's not that simple, read this:
Best way to populate a new column in a large table?
Explains why you may have seen error messages like this:
ERROR: could not open relation with OID 123456
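Putting the two pieces together, a prepare-and-swap under a write lock might look like this (the INSERT is a placeholder for however you actually populate the new table):

```sql
BEGIN;
LOCK TABLE a IN SHARE MODE;          -- blocks writes to a, still allows reads
CREATE TABLE a1 (LIKE a INCLUDING ALL);
INSERT INTO a1 SELECT * FROM a;      -- placeholder: load the new contents here
DROP TABLE a;                        -- escalates to ACCESS EXCLUSIVE
ALTER TABLE a1 RENAME TO a;
COMMIT;
```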
Create an SQL backup, make the changes you need directly in the backup.sql file, and restore the database. I used this trick when I added INHERIT for a group of tables (PostgreSQL DBMS) to remove inherited fields from a subtable.
I would use answer #13, but I agree it will not carry over the constraints, and the DROP TABLE might fail. So:
line up the relevant constraints first (e.g. from pg_dump --schema-only),
drop the constraints,
do the swap per answer #13,
apply the constraints (SQL snippets from the schema dump).

How to efficiently remove all rows from a table in DB2

I have a table that has something like half a million rows and I'd like to remove all rows.
If I do a simple delete from tbl, the transaction log fills up. I don't care about transactions in this case; I do not want to roll back in any case. I could delete the rows in many transactions, but are there better ways to do this?
How can I efficiently remove all rows from a table in DB2? Can I disable the transaction logging for this command somehow, or is there a special command to do this (like truncate in MySQL)?
After I have deleted the rows, I will repopulate the table with a similar amount of new data.
It seems that the following command works in newer versions of DB2:
TRUNCATE TABLE someschema.sometable IMMEDIATE
To truncate a table in DB2, simply write:
alter table schema.table_name activate not logged initially with empty table
From what I've read, this deletes the table's contents without doing any kind of logging, which goes much easier on your server's I/O.
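One caveat worth adding (based on how NOT LOGGED INITIALLY generally behaves, so verify against your DB2 version): it must be activated with autocommit off, and if the unit of work is rolled back the table can be left in an unusable state, so commit right away:

```sql
-- run with autocommit disabled
ALTER TABLE someschema.sometable
  ACTIVATE NOT LOGGED INITIALLY WITH EMPTY TABLE;
COMMIT;  -- a rollback here could leave the table inaccessible
```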