How to insert a large number of rows in Oracle? - SQL

Can anyone tell me how to insert a large number of rows in Oracle?
Using an insert statement we can insert data into a table one row at a time:
insert into example values(1,'name','address');
Suppose I want to insert 100,000 rows. Do I need to insert them one by one as above, or is there some other way to insert a large number of rows at a time? Can anyone advise me with an example, please?
Note: I'm not asking about copying data from another table. Just suppose we have an Excel sheet consisting of 100,000 rows; how can we insert them into a particular table?
Thanks,
Sai.

If you are loading using individual insert statements from a script, using SQL*Plus, say, then one handy speed-up is to bunch sets of inserts into anonymous PL/SQL blocks ...
begin
insert into example values(1,'name','address');
insert into example values(1,'name','address');
insert into example values(1,'name','address');
...
end;
/
begin
insert into example values(1,'name','address');
insert into example values(1,'name','address');
insert into example values(1,'name','address');
...
end;
/
This reduces the client/server chatter enormously.
The original file can often be modified easily with Unix scripts or a macro in a decent text editor.
Not necessarily what you'd want to embed in a production process, but handy for the occasional job.

Use sqlldr with the direct path option.
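For instance, a minimal sketch (the file, table, and column names are assumptions): a control file example.ctl like the one below, run with direct=true on the command line.

-- example.ctl (SQL*Loader control file; names are hypothetical)
LOAD DATA
INFILE 'data.csv'
APPEND
INTO TABLE example
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(id, name, address)

-- invoked from the shell as:
-- sqlldr userid=scott/tiger control=example.ctl direct=true

The direct path load formats data blocks and writes them straight into the datafiles, bypassing most of the normal SQL processing, which is why it is so much faster for bulk volumes.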

I suspect you have the data in a CSV file.
Create a directory object.
Create an external table. You can query an external table the same way as a regular table; the difference is that its data comes from a file located in the directory object.
http://www.oracle-base.com/articles/9i/external-tables-9i.php
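A minimal sketch of the external table approach (the directory path, file name, and column definitions are assumptions):

-- requires the CREATE ANY DIRECTORY privilege; the path is hypothetical
CREATE OR REPLACE DIRECTORY ext_dir AS '/u01/app/loads';

CREATE TABLE example_ext (
  id      NUMBER,
  name    VARCHAR2(50),
  address VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('data.csv')
);

-- then load everything with one set-based statement
INSERT /*+ APPEND */ INTO example
SELECT * FROM example_ext;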

Related

SQL Server : Query using data from a file

I need to run a query in SQL Server where I have a number of values stored individually on separate lines in a text file, and I need to check whether a value in a column of the table matches any one of the values stored in the txt file.
How should I go about doing this?
I am aware of how to formulate various types of queries in SQL Server; I'm just not sure how to run a query that depends on a file for its parameters.
EDIT:
Issue 1: I am not doing this via a program because the query traverses over 7 million data points, which results in the program timing out before it can complete; the only alternative I have left is to run the query in SQL Server itself, without worrying about the timeout.
Issue 2: I do not have admin rights to the database I am accessing, which is why there is no way I could create a permanent table, dump the file into it, and then query with a join between those tables.
Thanks.
One option would be to use BULK INSERT and a temp table. Once the data is in the temp table, you can parse the values. This is likely not the exact answer you need, but based on your experience, I'm sure you could tweak it as needed.
SET NOCOUNT ON;
USE Your_DB;
GO
-- staging table for the file contents
CREATE TABLE dbo.t (
    i  int,
    n  varchar(10),
    d  decimal(18,4),
    dt datetime
);
GO
-- load the whole file in one statement
BULK INSERT dbo.t
FROM 'D:\import\data.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');
There are lots of approaches.
Mine would be to import the file into a table, do the comparison with a regular SQL query, and then delete the file-data table if you don't need it anymore.
Bulk import the data from the text file into a temporary table, then execute a query to do the comparison between your actual physical table and the temporary table.
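A minimal sketch of that approach (the table, column, and file names are assumptions):

-- one-column temp table matching the one-value-per-line file
CREATE TABLE #FileValues (Value varchar(100));

BULK INSERT #FileValues
FROM 'D:\import\values.txt'
WITH (ROWTERMINATOR = '\n');

-- the comparison: rows whose column matches any value from the file
SELECT t.*
FROM dbo.YourTable AS t
WHERE t.YourColumn IN (SELECT v.Value FROM #FileValues AS v);

DROP TABLE #FileValues;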

Need to BULK INSERT to a temporary table. It doesn't work with table variables, so how do I do it from within a function?

I have an old SQL script that is currently run by loading it into SQL Server Management Studio and executing it. I'd like to clean this up by turning it into a series of functions that are stored in the database itself.
The basic sequence of steps that the current code does is like this:
(Miles of SQL logic)
1. Create a temporary table
2. BULK INSERT from a CSV file into the temporary table
3. Massage the data
4. Merge the data into the "real" table
5. DROP the temporary table
(Miles of SQL logic)
I'd like to wrap steps 1-5 in a function, but I'm stuck at how to perform a BULK INSERT when you can't BULK INSERT into a table variable, and you're also not allowed to create temporary tables from within a function.
So what's the right way to fix this issue?
Thanks!
As already mentioned in the comments, the solution that differs least from yours is to do this in a stored procedure rather than in a function, since a stored procedure (unlike a function) is allowed to modify the contents of a table.
In the short term this is clearly the easiest option to implement, but in the long term, learning SSIS could be a good investment. A sketch of the procedure approach follows.
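A minimal sketch of the stored procedure version of steps 1-5 (the table, file, and column names are assumptions, not your actual schema):

CREATE PROCEDURE dbo.ImportDailyCsv
AS
BEGIN
    SET NOCOUNT ON;

    -- 1. create a temporary table (allowed in a proc, not in a function)
    CREATE TABLE #staging (id int, name varchar(50));

    -- 2. BULK INSERT from the CSV file
    BULK INSERT #staging
    FROM 'D:\import\data.csv'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

    -- 3. massage the data (example: trim stray whitespace)
    UPDATE #staging SET name = LTRIM(RTRIM(name));

    -- 4. merge into the "real" table
    MERGE dbo.RealTable AS tgt
    USING #staging AS src
        ON tgt.id = src.id
    WHEN MATCHED THEN
        UPDATE SET tgt.name = src.name
    WHEN NOT MATCHED THEN
        INSERT (id, name) VALUES (src.id, src.name);

    -- 5. drop the temporary table (it would also vanish at proc end)
    DROP TABLE #staging;
END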

Reading values inserted by trigger in a different table

I'm having the following issue: I have a trigger on a table A, whose purpose is to compute some values and insert them in a completely different table B.
The problem is that, somewhere in that logic, there is a loop that requires the values that would have been freshly inserted into table B.
I've noticed that SQL Server executes all the INSERT commands at once, after exiting the trigger.
ALTER TRIGGER [dbo].[InsertTrade]
ON [dbo].[Blotter]
AFTER INSERT
AS
BEGIN
/* compute #Variables */
INSERT INTO [dbo].[CompletelyUnrelatedTableWithoutTriggersOnIt]
VALUES (/* #Variables */)
END
Is there any way of COMMIT-ing that INSERT and being able to read those values while still in the trigger?
Thanks,
D.
First of all, be very careful with how you construct your trigger. If you're using INSERT...VALUES() in a trigger, it's a good indication that you're assuming there will only ever be one record in the INSERTED table. Never make that assumption. Instead, your logic should be INSERT...SELECT <computed cols> FROM INSERTED, as sketched below.
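A minimal sketch of the set-based form (the column names and the computation are assumptions, since the original trigger body isn't shown):

ALTER TRIGGER [dbo].[InsertTrade]
ON [dbo].[Blotter]
AFTER INSERT
AS
BEGIN
    -- one INSERT handles however many rows the triggering statement affected
    INSERT INTO [dbo].[CompletelyUnrelatedTableWithoutTriggersOnIt] (TradeId, ComputedValue)
    SELECT i.TradeId, i.Quantity * i.Price   -- hypothetical computation
    FROM INSERTED AS i;
END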
Second, if you want to read back the values you just inserted, you could use the OUTPUT clause, though I'm not sure that's what you mean (it's not entirely clear what you want to do with the values); it gives you access to the final values that were inserted "while still in the trigger".
If that's not what you want, perhaps it would be better to encapsulate all this functionality into a proc.

PL/SQL embedded insert into table that may not exist

I much prefer using this 'embedded' style of insert in a PL/SQL block (as opposed to execute immediate-style dynamic SQL, where you have to escape quotes, etc.).
-- a contrived example
PROCEDURE CreateReport( customer IN VARCHAR2, reportdate IN DATE )
IS
BEGIN
-- drop table, create table with explicit column list
CreateReportTableForCustomer;
INSERT INTO TEMP_TABLE
VALUES ( customer, reportdate );
END;
/
The problem here is that Oracle checks whether temp_table exists and has the correct number of columns, and throws a compile error if it doesn't exist.
So I was wondering if there's any way around that? Essentially I want to use a placeholder for the table name to trick Oracle into not checking whether the table exists.
EDIT:
I should have mentioned that a user is able to execute any 'report' (as above) - a mechanism that will execute an arbitrary query but always write to temp_table (in the user's schema). Thus each time the report proc is run, it drops temp_table and recreates it with, most probably, a different column list.
You could use a dynamic SQL statement to insert into the maybe-existent temp_table, and then catch and handle the exception that occurs when the table doesn't exist.
Example:
execute immediate 'INSERT INTO '||TEMP_TABLE_NAME||' VALUES ( :customer, :reportdate )' using customer, reportdate;
Note that having the table name vary in a dynamic SQL statement is not ideal, so if you can ensure the table name stays the same, that would be best.
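To flesh out the catch-and-handle part, here is a sketch that traps ORA-00942 ("table or view does not exist") around the dynamic insert, shown as it might appear inside CreateReport using its parameters:

DECLARE
  e_table_missing EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_table_missing, -942);  -- ORA-00942
BEGIN
  EXECUTE IMMEDIATE
    'INSERT INTO temp_table VALUES (:c, :d)'
    USING customer, reportdate;
EXCEPTION
  WHEN e_table_missing THEN
    NULL;  -- table doesn't exist yet: create it or log, as appropriate
END;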
Maybe you should be using a global temporary table (GTT). These are permanent table structures that hold temporary data for an Oracle session. Many different sessions can insert data into the same GTT, and each will only be able to see their own data. The data is automatically deleted either on COMMIT or when the session ends, according to the GTT's definition.
You create the GTT (once only) like this:
create global temporary table my_gtt
(customer number, report_date date)
on commit delete rows;  -- or ON COMMIT PRESERVE ROWS, as applicable
Then your programs can just use it like any other table - the only difference being it always begins empty for your session.
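For instance (the values are made up):

INSERT INTO my_gtt (customer, report_date) VALUES (1001, SYSDATE);

SELECT * FROM my_gtt;  -- each session sees only its own rows
-- with ON COMMIT DELETE ROWS, the data vanishes at the next COMMIT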
Using GTTs is much preferable to dropping/recreating tables on the fly. If your application needs a different structure for each report, I strongly suggest you work out all the different structures the various reports need, and create a separate GTT for each, instead of creating ordinary tables at runtime.
That said, if this is just not feasible (and I've seen good examples when it's not, e.g. in a system that supports a wide range of ad-hoc requests from users), you'll have to go with the EXECUTE IMMEDIATE approach.

Is it possible in SQL Server to create a function which could handle a sequence?

We are looking at various options for porting our persistence layer from Oracle to another database, and one we are looking at is MS SQL. However, we use Oracle sequences throughout the code, and because of this the move looks like it will be a headache. I understand IDENTITY columns, but switching to them would mean a massive overhaul of the persistence code.
Is it possible in SQL Server to create a function which could handle a sequence?
That depends on your current use of sequences in Oracle. Typically a sequence is read in an insert trigger.
From your question I guess that it is the persistence layer that generates the sequence value before inserting into the database (including the new PK).
In MSSQL, you can combine SQL statements with ';', so to retrieve the identity column of the newly created record, use INSERT INTO ... ; SELECT SCOPE_IDENTITY()
Thus the command that inserts a record returns a recordset with a single row and a single column containing the value of the identity column.
You can of course turn this approach around and create sequence tables (similar to the DUAL table in Oracle), in something like this:
DECLARE @ID int;
INSERT INTO SequenceTable (dummy) VALUES ('X');
SELECT @ID = SCOPE_IDENTITY();
INSERT INTO RealTable (ID, datacolumns) VALUES (@ID, @data1, @data2, ...)
I did this last year on a project. Basically, I just created a table with the name of the sequence, the current value, and the increment amount.
Then I created 4 procs:
GetCurrentSequence( sequenceName)
GetNextSequence( sequenceName)
CreateSequence( sequenceName, startValue, incrementAmount)
DeleteSequence( sequenceName)
But there is a limitation you may not appreciate: functions cannot have side effects. So you could create a function for GetCurrentSequence(...), but GetNextSequence(...) would need to be a proc, since you will probably want to increment the current sequence value. However, if it's a proc, you won't be able to use it directly in your insert statements.
So instead of
insert into mytable(id, ....) values( GetNextSequence('MySequence'), ....);
you will need to break it up into two statements:
declare @newID int;
exec @newID = GetNextSequence 'MySequence';
insert into mytable(id, ....) values(@newID, ....);
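For reference, a minimal sketch of what the sequence table and the GetNextSequence proc might look like (the names and types are assumptions):

CREATE TABLE dbo.Sequences (
    SequenceName    varchar(50) PRIMARY KEY,
    CurrentValue    int NOT NULL,
    IncrementAmount int NOT NULL
);
GO
CREATE PROCEDURE dbo.GetNextSequence
    @sequenceName varchar(50)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @next int;
    -- compound assignment: increment and capture in one atomic UPDATE
    UPDATE dbo.Sequences
    SET @next = CurrentValue = CurrentValue + IncrementAmount
    WHERE SequenceName = @sequenceName;
    RETURN @next;
END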
Also, SQL Server doesn't have any mechanism that can do something like
MySequence.Current
or
MySequence.Next
Hopefully, somebody will tell me I am incorrect about the above limitations, but I'm pretty sure they are accurate.
Good luck.
If you have a lot of code, you're going to want to do a massive overhaul of the code anyway; what works well in Oracle is not always going to work well in MSSQL. If you have a lot of cursors, for instance, while you could convert them line for line to MSSQL, you're not going to get good performance.
In short, this is not an easy undertaking.