Oracle: creating a table from another table created it partially; unable to extend temp space - SQL

We are trying to create a table from another table with the following statement:
create table tab1 as select * from tab2;
But the process failed with the error:
ORA-01652: unable to extend temp segment by 8192 in tablespace
However, the table tab1 was created with partial data only; there is a row-count mismatch between tab1 and tab2. Neither of the two tables was being populated or updated by any other transaction. This happened with a couple of tables.
To my knowledge, a CREATE TABLE should either create the table completely or not at all; there should be no possibility of a table being created partially.
Any insight from experts would be appreciated.

Putting the cause of the error aside (addressed by @Leo in his answer):
I have not found anything specific on transactions for CREATE TABLE ... AS SELECT. Any CREATE TABLE statement is a DDL operation, and DDL operations are generally non-transactional.
This is just speculation, but I'd say that the table creation itself did succeed. The statement you issued is basically two operations in one: the first is the actual table creation, which works (and, not being transactional, cannot be undone by the second part failing); the second is a variant of a bulk insert-from-select (with implicit commits for batches), which breaks at some point.
This is probably not answering your question, but as the operation is apparently two-phase anyway, if you need a more transactional approach, you would benefit from splitting the operation into two separate statements:
first:
CREATE TABLE tab1 AS SELECT * FROM tab2 WHERE 1 = 2;
second:
INSERT INTO tab1 SELECT * FROM tab2;
This way, if the second part fails, you will not end up with a partial insert. You will still have the (empty) table in place, though.

Execute the following as a user with DBA privileges to determine the file name for the existing tablespace:
SELECT * FROM DBA_DATA_FILES;
Then extend the size of the datafile as follows (replace the filename with the one from the previous query):
ALTER DATABASE DATAFILE 'C:\ORACLEXE\ORADATA\XE\SYSTEM.DBF' RESIZE 4096M;
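Since the ORA-01652 in the question complains about a temp segment, the file that needs growing may be a tempfile in the temporary tablespace rather than a datafile. A sketch of the equivalent steps (the file path and size here are assumptions, not taken from the question):
SELECT * FROM DBA_TEMP_FILES;
ALTER DATABASE TEMPFILE 'C:\ORACLEXE\ORADATA\XE\TEMP.DBF' RESIZE 4096M;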
You can first try the command below, or ask a DBA to grant the privilege:
grant unlimited tablespace to <schema_name>;

Related

Temp table doesn't store updated values

I've been trying to create a temp table and update it, but when I go to view the temp table, it doesn't show any of the updated rows:
declare global temporary table hierarchy (
code varchar(5),
description varchar(30)
);
INSERT INTO session.hierarchy
SELECT code, 30_description
FROM table1
WHERE code like '_....';
SELECT *
FROM session.hierarchy;
This is a frequently asked question.
When using a DGTT (declare global temporary table) with Db2, you need to know that the default is to discard all rows after a COMMIT. That is the reason the table appears to be empty after you insert: the rows get deleted if autocommit is enabled. If that is not what you want, use the ON COMMIT PRESERVE ROWS clause when declaring the table.
It is also very important to use the WITH REPLACE option when creating stored procedures; this is often the friendliest option for development and testing, and it is not the default. Otherwise, if the same session attempts to repeat the declaration of the DGTT, the second and subsequent attempts will fail because the DGTT already exists.
It can also be interesting for problem determination sometimes to use ON ROLLBACK PRESERVE ROWS, but that is less often used.
When using a DGTT, one of the main advantages is that you can arrange for the population of the table (inserts, updates) to be unlogged, which can give a great performance boost if you have millions of rows to add to the DGTT.
The suggestion is therefore:
declare global temporary table ... ( )...
not logged
on commit preserve rows
with replace;
For DPF installations, also consider using distribute by hash (...) for best performance.
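Putting these clauses together for the table from the question, the declaration might look like this (a sketch; the column definitions are taken from the question's own DDL):
declare global temporary table hierarchy (
code varchar(5),
description varchar(30)
)
not logged
on commit preserve rows
with replace;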

Why CREATE TABLE AS SELECT is faster than INSERT with SELECT

I ran a query with an INNER JOIN and the result was 12 million rows.
I'd like to put this into a table.
I did some tests, and creating the table using the AS SELECT clause was faster than creating the table first and running an INSERT with SELECT afterwards.
I don't understand why.
Can somebody explain this to me?
Thanks
If you use 'create table as select' (CTAS)
CREATE TABLE new_table AS
SELECT *
FROM old_table
you automatically do a direct-path insert of the data. If you do an
INSERT INTO new_table
SELECT *
FROM old_table
you do a conventional insert. You have to use the APPEND hint if you want to do a direct-path insert instead. So you have to do
INSERT /*+ APPEND */ INTO new_table
SELECT *
FROM old_table
to get performance similar to 'CREATE TABLE AS SELECT'.
How does the usual conventional insert work?
Oracle checks the free list of the table for an already-used block of the table segment that still has free space. If the block isn't in the buffer cache, it is read into the buffer cache. Eventually the modified block is written back to disk.
During this process, undo for the block is written (only a small amount of data is necessary here) and data structures are updated, e.g., if necessary, the free list, which resides in the segment header; all these changes are written to the redo buffer, too.
How does a direct-path insert work?
The process allocates space above the high water mark of the table, that is, beyond the already used space. It writes the data directly to disk, bypassing the buffer cache. The data is also written to the redo buffer. When the transaction is committed, the high water mark is raised beyond the newly written data, and this data becomes visible to other sessions.
How can I improve CTAS and direct-path inserts?
You can create the table in NOLOGGING mode, so that no redo information is written. If you do this, you should make a backup of the tablespace that contains the table after the insert; otherwise you cannot recover the table if you need to.
You can do the select in parallel
You can do the insert in parallel
If you have to maintain indexes, constraints, or even triggers during an insert operation, this can slow down your insert drastically. So you should avoid this: create indexes after the insert, and maybe create constraints with NOVALIDATE.
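A sketch combining several of these options (the table names and the degree of parallelism 8 are illustrative assumptions):
-- direct-path, minimal redo, parallel create
CREATE TABLE new_table
PARALLEL 8
NOLOGGING
AS
SELECT /*+ PARALLEL(old_table, 8) */ *
FROM old_table;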
With the AS SELECT approach, the table you create has no primary key, indexes, or identity column, and the columns always allow NULL.
It also does not all have to be written to the transaction log (and therefore does not roll back); the result is something like a "naked table".
With INSERT ... SELECT, the table must be created beforehand, so when you create the table you can define keys, indexes, identity columns, and so on, and the insert will use the transaction log.
When applied to large amounts of data, that is very slow.

creating postgresql temporary tables for search / reporting routine

Background Information
We have some Lua code that generates a web report. It's taking a really long time right now, so in an attempt to simplify some of the logic, I'm looking at creating a temporary table and then joining that temp table with the results of another query.
Sample Code:
I tried the following as a test on the command line:
psql -U username -d databasename
DROP TABLE IF EXISTS TEMP1;
CREATE TABLE TEMP1 AS SELECT d_id, name as group, pname as param
FROM widgets
WHERE widget_id < 50;
SELECT count(*) from TEMP1;
\q
The select against the TEMP1 table shows the correct results.
Questions:
Question 1 - How do I code this to ensure that one report request doesn't clobber another? For example, if person A requests report A and, before it's done processing, person B requests report B... will report B's creation of TEMP1 clobber the temp table created for report A?
Is this a good reason to put everything into a transaction?
Question 2 - After running my little test described above, I quit the psql command line and then logged in again. TEMP1 was still around, so it looks like I have to clean up the temp table when I'm done.
I found this post:
PostgreSQL temporary tables
which seems to indicate that temp tables are cleaned up for you when a session ends... but that doesn't seem to be working for me. Not sure what I'm doing wrong.
Thanks.
Just use:
CREATE TEMPORARY TABLE temp1 AS ...
This solves both questions #1 and #2 because:
Temporary tables live in a namespace that's private to the session, so when concurrent sessions use the same name for a temporary table, each refers to a different table: each session gets its own.
TEMP1 was still around after quitting because it was not temporary. You want to add the TEMPORARY clause (or TEMP for short) to the CREATE TABLE statement.
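Applied to the test from the question, that becomes (a sketch based on the question's own query):
CREATE TEMPORARY TABLE temp1 AS
SELECT d_id, name AS group, pname AS param
FROM widgets
WHERE widget_id < 50;
-- the table is private to this session and dropped automatically on disconnect;
-- adding ON COMMIT DROP before AS limits it to a single transaction instead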

PL/SQL embedded insert into table that may not exist

I much prefer using this 'embedded' style of insert in a PL/SQL block (as opposed to the EXECUTE IMMEDIATE style of dynamic SQL, where you have to escape quotes, etc.).
-- a contrived example
PROCEDURE CreateReport( customer IN VARCHAR2, reportdate IN DATE ) IS
BEGIN
-- drop table, create table with explicit column list
CreateReportTableForCustomer;
INSERT INTO TEMP_TABLE
VALUES ( customer, reportdate );
END;
/
The problem here is that Oracle checks whether temp_table exists and has the correct number of columns, and throws a compile error if it doesn't.
So I was wondering if there's any way around that. Essentially, I want to use a placeholder for the table name to trick Oracle into not checking whether the table exists.
EDIT:
I should have mentioned that a user is able to execute any 'report' (as above) - a mechanism that will execute an arbitrary query but always write to temp_table (in the user's schema). Thus, each time the report proc is run, it drops temp_table and recreates it with, most probably, a different column list.
You could use a dynamic SQL statement to insert into the maybe-existent temp_table, and then catch and handle the exception that occurs when the table doesn't exist.
Example:
execute immediate 'INSERT INTO '||TEMP_TABLE_NAME||' VALUES ( :customer, :reportdate )' using customer, reportdate;
Note that having the table name vary in a dynamic SQL statement is not ideal, so if you can ensure the table name stays the same, that would be best.
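A sketch of the catch-and-handle approach (the exception-handling wrapper here is illustrative; CreateReportTableForCustomer is the procedure from the question):
PROCEDURE CreateReport( customer IN VARCHAR2, reportdate IN DATE ) IS
  table_missing EXCEPTION;
  PRAGMA EXCEPTION_INIT(table_missing, -942); -- ORA-00942: table or view does not exist
BEGIN
  EXECUTE IMMEDIATE 'INSERT INTO temp_table VALUES ( :c, :d )' USING customer, reportdate;
EXCEPTION
  WHEN table_missing THEN
    CreateReportTableForCustomer; -- recreate the table, then retry the insert
    EXECUTE IMMEDIATE 'INSERT INTO temp_table VALUES ( :c, :d )' USING customer, reportdate;
END;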
Maybe you should be using a global temporary table (GTT). These are permanent table structures that hold temporary data for an Oracle session. Many different sessions can insert data into the same GTT, and each will only be able to see its own data. The data is automatically deleted either on COMMIT or when the session ends, according to the GTT's definition.
You create the GTT (once only) like this:
create global temporary table my_gtt
(customer number, report_date date)
on commit delete/preserve* rows;
* delete as applicable
Then your programs can just use it like any other table - the only difference being it always begins empty for your session.
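For example (a quick sketch using the GTT above; the values are arbitrary):
insert into my_gtt values (42, sysdate);
select * from my_gtt; -- only this session's rows are visible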
Using GTTs is much preferable to dropping/recreating tables on the fly. If your application needs a different structure for each report, I strongly suggest you work out all the different structures each report needs and create a separate GTT for each, instead of creating ordinary tables at runtime.
That said, if this is just not feasible (and I've seen good examples when it's not, e.g. in a system that supports a wide range of ad-hoc requests from users), you'll have to go with the EXECUTE IMMEDIATE approach.

SQL: Insert all records from one table to another table without specifying the columns

I want to insert all the records from the backup table foo_bk into the foo table without specifying the columns.
If I try this query:
INSERT INTO foo
SELECT *
FROM foo_bk
I'll get the error "Insert Error: Column name or number of supplied values does not match table definition."
Is it possible to do a bulk insert from one table to another without supplying the column names?
I've googled it but can't seem to find an answer; all the answers require specifying the columns.
You should not ever want to do this. SELECT * should not be used as the basis for an insert, because the columns may get moved around and break your insert (or, worse, not break your insert but mess up your data). Suppose someone adds a column to the table in the select but not to the other table: your code will break. Or suppose someone, for reasons that surpass understanding but frequently happen, decides to drop and recreate a table and move the columns around to a different order. Now last_name is in the place first_name was in originally, and SELECT * will put it in the wrong column in the other table. It is an extremely poor practice to fail to specify the columns, and specifically the mapping of each source column to the column you want in the target table.
Right now you may have one of several problems: first, the two structures don't match directly; or second, the table being inserted into has an identity column, so even though the insertable columns are a direct match, the target table has one more column than the source, and by not specifying columns you are telling the database you will insert into that column too. Or you might have the same number of columns, but one is an identity and thus can't be inserted into (although I think that would produce a different error message).
Per this other post: Insert all values of a..., you can do the following:
INSERT INTO new_table (Foo, Bar, Fizz, Buzz)
SELECT Foo, Bar, Fizz, Buzz
FROM initial_table
It's important to specify the column names as indicated by the other answers.
Use this
SELECT *
INTO new_table_name
FROM current_table_name
You need to have at least the same number of columns and each column has to be defined in exactly the same way, i.e. a varchar column can't be inserted into an int column.
For bulk transfer, check the documentation for the SQL implementation you're using. There are often tools available to bulk transfer data from one table to another. For SqlServer 2005, for example, you could use the SQL Server Import and Export Wizard. Right-click on the database you're trying to move data around in and click Export to access it.
SQL 2008 allows you to forgo specifying column names in your SELECT if you use SELECT INTO rather than INSERT INTO / SELECT:
SELECT *
INTO Foo
FROM Bar
WHERE x=y
The INTO clause does exist in SQL Server 2000-2005, but still requires specifying column names. 2008 appears to add the ability to use SELECT *.
See the MSDN articles on INTO (SQL2005), (SQL2008) for details.
The INTO clause only works if the destination table does not yet exist, however. If you're looking to add records to an existing table, this won't help.
None of the answers above, for one reason or another, worked for me on SQL Server 2012. My situation was that I accidentally deleted all rows instead of just one row. After our DBA restored the table to dbo.foo_bak, I used the query below to restore. NOTE: This only works if the backup table (represented by dbo.foo_bak) and the table that you are writing to (dbo.foo) have exactly the same column names.
This is what worked for me using a hybrid of a bunch of different answers:
USE [database_name];
GO
SET IDENTITY_INSERT dbo.foo ON;
GO
INSERT INTO [dbo].[foo]
([row0]
,[row1]
,[row2]
,[row3]
,...
,[rown])
SELECT * FROM [dbo].[foo_bak];
GO
SET IDENTITY_INSERT dbo.foo OFF;
GO
This version of my answer is helpful if you have primary and foreign keys.
As you probably understood from previous answers, you can't really do what you're after.
I think you can understand the problem SQL Server is experiencing with not knowing how to map the additional/missing columns.
That said, since you mention that the purpose of what you're trying to do here is backup, maybe we can work with SQL Server and work around the issue.
Not knowing your exact scenario makes it impossible to give exactly the right answer here, but I assume the following:
You wish to manage a backup/audit process for a table.
You probably have a few of those and wish to avoid altering dependent objects on every column addition/removal.
The backup table may contain additional columns for auditing purposes.
I wish to suggest two options for you:
The efficient practice (IMO) for this can be to detect schema changes using DDL triggers and use them to alter the backup table accordingly. This will enable you to use the 'select * from...' approach, because the column list will be consistent between the two tables.
I have used this approach successfully, and you can leverage it to have DDL triggers automatically manage your auditing tables. In my case, I used a naming convention for tables requiring audits, and the DDL trigger just managed it on the fly.
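A minimal sketch of the trigger side of that approach (the trigger name and the log table dbo.schema_change_log are assumptions; a full version would parse EVENTDATA() and issue a matching ALTER TABLE against the backup table):
create trigger trg_track_table_changes
on database
for alter_table
as
begin
    -- record the schema change; a fuller version would propagate it to the backup table
    insert into dbo.schema_change_log (event_xml, changed_at)
    values (EVENTDATA(), SYSUTCDATETIME());
end;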
Another option that might be useful for your specific scenario is to create a supporting view for the tables aligning the column list. Here's a quick example:
create table foo (id int, name varchar(50))
create table foo_bk (id int, name varchar(50), tagid int)
go
create view vw_foo as select id,name from foo
go
create view vw_foo_bk as select id,name from foo_bk
go
insert into vw_foo
select * from vw_foo_bk
go
drop view vw_foo
drop view vw_foo_bk
drop table foo
drop table foo_bk
go
I hope this helps :)
You could try this:
SELECT * INTO foo FROM foo_bk
This is a valid question, for example when you want to append newly imported rows from a CSV file of the same raw structure into an existing table that may have DB constraints set up, such as PKs and FKs.
I would simply do the following, for example:
INSERT INTO roles select * from new_imported_roles_from_csv_file
I also like that if any new rows violate uniqueness during this operation, the INSERT will fail and insert nothing, in a way 'protecting' the target table from bad inbound data.