Best way to insert data using Delphi into SQL Server 2008

I've always used a script like this to insert data into a table in Delphi 7:
sql := 'INSERT INTO table_foo (field1,field2,field3) VALUES ('
+quotedstr('value1')
+','+quotedstr('value2')
+','+quotedstr('value3')
+')';
adoquery1.close;
adoquery1.sql.text := sql;
adoquery1.execsql;
but a friend of mine just showed me another way that looks cleaner, like so:
sql := 'SELECT * FROM table_foo';
adoquery1.close;
adoquery1.sql.text := sql;
adoquery1.open;
adoquery1.insert;
adoquery1.fieldbyname('field1').asstring := quotedstr('value1');
adoquery1.fieldbyname('field2').asstring := quotedstr('value2');
adoquery1.fieldbyname('field3').asstring := quotedstr('value3');
adoquery1.post;
Which of the two methods is better (faster, easier to read/debug)? Especially when the data in table_foo is large, or there are many more fields to fill.

If you do use INSERT INTO statements, use parameters (for readability, to avoid SQL injection, and for SQL statement caching), e.g.:
adoquery1.SQL.Text := 'INSERT INTO table_foo (field1, field2) VALUES (:field1, :field2)';
adoquery1.Parameters.ParamByName('field1').Value := value1;
adoquery1.Parameters.ParamByName('field2').Value := value2;
adoquery1.ExecSQL;
I prefer the second way (with a small tweak, which I'll explain).
Since you are inserting a single record, the tweak is to select an empty recordset, i.e.:
SELECT * FROM table_foo where 1=0
This way you don't select all the records from the table.
Also, there is no need to use QuotedStr when assigning the values, i.e.:
adoquery1.FieldByName('field1').AsString := 'value1';
The main reason I use this method is that it's easy to read and maintain.
I don't need to bother with raw SQL queries, and I don't need to deal with Parameters, which sometimes require specifying the data type explicitly (e.g. Parameters.ParamByName('field1').DataType := ftInteger). There is no SQL to parse.
I simply use the dataset's As<Type> accessors, e.g.
FieldByName('field1').AsBoolean := True;
I would also prefer this method if I needed to insert multiple records in a single transaction.
The downside of the second method is the extra round trip to the SQL server caused by the SELECT.
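Putting the tweak and the plain AsString assignments together, a minimal sketch with the question's TADOQuery would be:
adoquery1.Close;
adoquery1.SQL.Text := 'SELECT * FROM table_foo WHERE 1=0'; // empty recordset
adoquery1.Open;
adoquery1.Insert;
adoquery1.FieldByName('field1').AsString := 'value1';
adoquery1.FieldByName('field2').AsString := 'value2';
adoquery1.FieldByName('field3').AsString := 'value3';
adoquery1.Post;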
Another option would be to create a stored procedure, pass your values to it, and keep all the SQL logic inside the SP.

The second approach demands more local resources from the dataset, since it keeps a copy of the original result set in memory and then uses that copy to decide which records should be sent to the server, and with which SQL statement. It also requires a live connection to the server and a bidirectional local cursor in the dataset. TADODataset does all of that for you: it works more so you work less, but it consumes more from the system. The decision, in my view, depends on which resource is more important, your time or the computer's.
Personally, I prefer using TClientDataset (CDS). It gives you an in-memory dataset, and by using the TDatasetProvider.BeforeUpdateRecord event on the corresponding TDatasetProvider you get the best of both worlds: absolute control over which statement is submitted to the server, plus a flexible, bidirectional dataset that works very well in GUIs.
Besides (and this is the most important point to me), with CDS you can isolate the specifics of your DBMS away from the main logic of your application, because that logic operates on a DB-independent dataset. If you have to switch from ADO to, let's say, DBX, your main code will not be hurt because it's written against the CDS.
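As a minimal sketch of the BeforeUpdateRecord idea (an ADO-backed provider is assumed, and the component and field names here are illustrative):
procedure TDataModule1.DataSetProvider1BeforeUpdateRecord(Sender: TObject;
  SourceDS: TDataSet; DeltaDS: TCustomClientDataSet;
  UpdateKind: TUpdateKind; var Applied: Boolean);
begin
  if UpdateKind = ukInsert then
  begin
    // Submit our own INSERT instead of the provider-generated SQL
    ADOQuery1.SQL.Text :=
      'INSERT INTO table_foo (field1, field2) VALUES (:field1, :field2)';
    ADOQuery1.Parameters.ParamByName('field1').Value :=
      DeltaDS.FieldByName('field1').NewValue;
    ADOQuery1.Parameters.ParamByName('field2').Value :=
      DeltaDS.FieldByName('field2').NewValue;
    ADOQuery1.ExecSQL;
    Applied := True; // tell the provider this record is already handled
  end;
end;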

Related

PL/SQL procedure with a variable number of parameters

I want to know if I can create a PL/SQL procedure where the number of parameters and their types can change.
For example procedure p1.
I can use it like this
p1 (param1, param2,......., param n);
I want to pass the table name and the data to the procedure, but the attributes change for every table:
create or replace PROCEDURE INSERTDATA(NOMT in varchar2) is
    num int;
BEGIN
    EXECUTE IMMEDIATE 'SELECT count(*) FROM user_tables WHERE table_name = :1'
        INTO num USING NOMT;
    IF (num < 1) THEN
        dbms_output.put_line('table does not exist!!!');
    ELSE
        dbms_output.put_line('');
        -- here I want to insert the parameters into the table,
        -- but the table attributes are not the same for every table!!
    END IF;
END INSERTDATA;
As far as I can tell, no, you cannot. The number and datatypes of all parameters must be fixed.
You could pass a collection as a parameter (and have a different number of values within it), but that's still a single parameter.
Where would you want to use such a procedure?
If you need to store, update and query a variable amount of information, might I recommend switching to JSON queries and objects in Oracle. Oracle has deep support for both fixed and dynamic querying of JSON data, in both SQL and PL/SQL.
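For illustration, a small sketch of that idea (Oracle 12c+; the table and column names here are made up):
CREATE TABLE flex_data (
    id  NUMBER PRIMARY KEY,
    doc CLOB CHECK (doc IS JSON)
);

INSERT INTO flex_data VALUES (1, '{"name":"widget","qty":5}');

-- Project JSON attributes into relational columns with JSON_TABLE
SELECT j.name, j.qty
  FROM flex_data f,
       JSON_TABLE(f.doc, '$'
           COLUMNS (name VARCHAR2(30) PATH '$.name',
                    qty  NUMBER       PATH '$.qty')) j;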
"I want to pass the table name and the data to the procedure, but the attributes change for every table."
The problem with such a universal procedure is that something needs to know the structure of the target table. Your approach demands that the caller discover the projection of the table and arrange the parameters correctly.
In no particular order:
This is bad practice because it forces the calling program to do the hard work of interrogating the data dictionary.
Furthermore, it breaks the Law of Demeter, because the calling program needs to understand things like primary keys (sequences, identity columns, etc.), foreign key lookups, and so on.
This approach mandates that all columns be populated; it makes no allowance for virtual columns, optional columns, etc.
To work, the procedure would have to use dynamic SQL, which is always hard work because it turns compilation errors into runtime errors, and it should be avoided if at all possible.
It is trivially simple to generate a dedicated insert procedure for each table in a schema, using dynamic SQL against the data dictionary. This is the concept of the Table API. It's not without its own issues, but it is much safer than what your question proposes; a sketch follows.
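As a minimal sketch of such a generator (assuming a hypothetical FOO table with scalar columns only; a real generator needs more careful type mapping, key handling, and error checking):
DECLARE
    v_cols  VARCHAR2(4000);
    v_args  VARCHAR2(4000);
    v_binds VARCHAR2(4000);
BEGIN
    FOR c IN (SELECT column_name, data_type
                FROM user_tab_columns
               WHERE table_name = 'FOO'
               ORDER BY column_id)
    LOOP
        v_cols  := v_cols  || c.column_name || ',';
        v_args  := v_args  || 'p_' || c.column_name || ' IN ' || c.data_type || ',';
        v_binds := v_binds || 'p_' || c.column_name || ',';
    END LOOP;
    -- trim the trailing commas, then compile a dedicated insert procedure
    EXECUTE IMMEDIATE
        'CREATE OR REPLACE PROCEDURE foo_ins(' || RTRIM(v_args, ',') || ') AS BEGIN '
        || 'INSERT INTO foo (' || RTRIM(v_cols, ',') || ') VALUES ('
        || RTRIM(v_binds, ',') || '); END;';
END;
/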

Using two updates where second update statement is using column from first update statement as input

I have a small doubt about an update code block that was written by someone else and that I will now be using in my Java program.
Is it possible to update a column first, then commit, and afterwards use that same column as input to another update statement inside the same block, as in the code below? I know the sub-query way of doing this but have never seen this approach before. It would be great if someone could confirm:
1) Whether it is correct?
2) If not, what can be changed to make it work, beyond using the sub-query format?
3) Also, bas_capital_calc_cd is a column of the same derivatives table that is being updated. Can we pass a column as input to functions, such as bas2_rwa_calc here? More generally, can we pass a column name as input to a PL/SQL function at all?
Thanks in advance for the help!
--BAS_EB_RWA_COMMT is used in the BAS_EB_TOTAL_CAPITAL calculation; similarly,
--BAS_AB_RWA_COMMT is used in the BAS_AB_TOTAL_CAPITAL calculation.
IF ID = 17 THEN
    UPDATE derivatives
       SET BAS_CAPITAL_CALC_CD = 'T',
           BAS_CATEGORY_CD = case when nvl(rec.ssfa_resecure_flag,'N') = 'Y' then 911 else 910 end,
           BAS_EB_RWA_COMMT = bas2_rwa_calc(bas_capital_calc_cd, v_SSFA_COMMT_AMT, v_BAS_CAP_FACTOR_K_COMMT, v_basel_min, v_bas_rwa_rate) + NVL(BAS_CVA_PORTFOLIO_RWA,0),
           BAS_AB_RWA_COMMT = bas2_rwa_calc(bas_capital_calc_cd, v_SSFA_COMMT_AMT, V_BAS_CAP_FACTOR_K_COMMT, v_basel_min, v_bas_rwa_rate) + NVL(BAS_CVA_PORTFOLIO_RWA,0),
           BAS_ICAAP_EB_RWA_COMMT = bas2_rwa_calc(bas_capital_calc_cd, bas_unused_commt, bas_icaap_factor_k_commt, v_basel_min, v_bas_rwa_rate)
     WHERE AS_OF_DATE = v_currect_DATE;
    COMMIT;

    UPDATE derivatives
       SET BAS_EB_TOTAL_CAPITAL = round(BAS2_MGRL_CAPITAL(v_date, BAS_EB_RWA, BAS_EB_RWA_COMMT),2),
           BAS_AB_TOTAL_CAPITAL = round(BAS2_MGRL_CAPITAL(v_date, BAS_AB_RWA, BAS_AB_RWA_COMMT),2)
     WHERE AS_OF_DATE = v_DATE
       AND ID_NUMBER = rec.ID_NUMBER
       AND IDENTITY_CODE = rec.IDENTITY_CODE;
    COMMIT;
END IF;
In DB2 and the SQL standard you would use a feature called FINAL TABLE to do this. In Oracle you use the RETURNING clause.
cf. https://blog.jooq.org/tag/final-table/
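As a small sketch of RETURNING (reusing the question's derivatives table; a single-row update is assumed here, since a multi-row update would need RETURNING ... BULK COLLECT INTO):
DECLARE
    v_rwa_commt derivatives.bas_eb_rwa_commt%TYPE;
BEGIN
    UPDATE derivatives
       SET bas_eb_rwa_commt = 42   -- illustrative value
     WHERE id_number = 1           -- assumed to match exactly one row
    RETURNING bas_eb_rwa_commt INTO v_rwa_commt;

    -- v_rwa_commt now holds the value just written, with no re-read of the row
    dbms_output.put_line('written: ' || v_rwa_commt);
END;
/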
As I understood it from your question, you need to understand how PL/SQL executes statements. I hope I got that right.
To understand the concept, let us first discuss what PL/SQL is.
Theory source: https://www.geeksforgeeks.org/plsql-introduction/
PL/SQL is a block-structured language that enables developers to combine the power of SQL with procedural statements. All the statements of a block are passed to the Oracle engine at once, which increases processing speed and decreases traffic.
Disadvantages of SQL:
SQL doesn't provide programmers with condition checking, looping, or branching.
SQL statements are passed to the Oracle engine one at a time, which increases traffic and decreases speed.
SQL has no facility for error checking during manipulation of data.
Features of PL/SQL:
PL/SQL is basically a procedural language, which provides the functionality of decision making, iteration and many more features of procedural programming languages.
PL/SQL can execute a number of queries in one block using a single command.
One can create PL/SQL units such as procedures, functions, packages, triggers, and types, which are stored in the database for reuse by applications.
PL/SQL provides a feature to handle the exceptions which occur in a PL/SQL block, known as the exception handling block.
Applications written in PL/SQL are portable to any computer hardware or operating system where Oracle is operational.
PL/SQL offers extensive error checking.
Now please note the highlighted point: PL/SQL can execute a number of queries in one block using a single command.
Let us take an example of the situation you described.
create table test as select 0 as col1, 0 as col2 from dual;
declare
    v_col1 test.col1%type;
    v_col2 test.col2%type;
begin
    update test set col1 = col1 + 1;
    commit;
    -- re-read the row so we can display the committed values
    select col1, col2 into v_col1, v_col2 from test;
    dbms_output.put_line('col1=' || v_col1);
    dbms_output.put_line('col2=' || v_col2);
    update test set col2 = col1 + 1;  -- col2 is computed from the updated col1
    commit;
    select col1, col2 into v_col1, v_col2 from test;
    dbms_output.put_line('col1=' || v_col1);
    dbms_output.put_line('col2=' || v_col2);
end;
/
Please run the code above; it is just a simple example of your situation.
Ans Point 1: (Considering Oracle as the sample database) So, according to me, yes, it is possible. However, given the way you are writing these two updates, I am not sure this is the best (or only) way to handle such situations.
Ans Point 3: You can use dynamic SQL to achieve this in Oracle (a small sketch follows the reference link).
Reference Link : https://docs.oracle.com/cd/B10500_01/appdev.920/a96590/adg09dyn.htm
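For point 3, a tiny sketch of the dynamic SQL idea (the function name is made up; DBMS_ASSERT guards the concatenated column name, while values should still be bound):
CREATE OR REPLACE FUNCTION col_total(p_col IN VARCHAR2) RETURN NUMBER IS
    v_total NUMBER;
BEGIN
    -- the column name must be concatenated into the statement text;
    -- DBMS_ASSERT.SIMPLE_SQL_NAME rejects anything that is not a plain identifier
    EXECUTE IMMEDIATE
        'SELECT SUM(' || DBMS_ASSERT.SIMPLE_SQL_NAME(p_col) || ') FROM derivatives'
        INTO v_total;
    RETURN v_total;
END;
/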

Inserting a record into a Sybase DB table using a stored procedure - Delphi programming

I am new to programming with Delphi. I am currently creating a simple notebook program and I need some help. I have a form called Contacts with 5 TEdit fields. I am thinking I could create a stored procedure in my Sybase database to insert a record into the Contacts table, so I can call it from my Delphi program. How do I call this procedure in Delphi? The values to be inserted should be taken from the user's input in these TEdit fields. Does anyone have any suggestions? Or am I thinking about this the wrong way? Thanks in advance.
You have several options here, and it will depend on what VCL controls you are using.
(1). You can insert via a TTable component. This lets you have quick, easy, low-level control. You drop the component on the form, set the component properties (TableName, etc.), then something like:
MyTable.Open;
MyTable.Insert; // or maybe Append
MyTable.FieldByName('MY_FIELD').AsString := 'Bob'; // set the field values
MyTable.Post;
(2). Use SQL. Drop a query component on the form and set its SQL property, using parameters;
for example: "Insert into table (MyField) values (:X)". My opinion is that this is easier to do in complex situations, correlated subselects, etc.
MySQL.Close;
MySQL.ParamByName('X').AsString := 'BOB';
MySQL.ExecSQL;
(3). Use stored procedures. The advantage of this is that they are usable by multiple applications and can be changed easily. If you want to update the SQL code, you update it once (in the database), versus having to change it in an app and then distribute the app to multiple users.
The code for this will be nearly identical to (2), although I don't know the specifics of your VCL library. In effect, though, you specify the routine to run, set the parameter values, and then execute the stored procedure.
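For instance, with the ADO components it might look like the sketch below (the procedure and parameter names here are hypothetical; other component sets follow the same pattern):
ADOStoredProc1.Close;
ADOStoredProc1.ProcedureName := 'sp_insert_contact';
ADOStoredProc1.Parameters.Refresh; // fetch the parameter definitions from the server
ADOStoredProc1.Parameters.ParamByName('@name').Value  := edtName.Text;
ADOStoredProc1.Parameters.ParamByName('@phone').Value := edtPhone.Text;
ADOStoredProc1.ExecProc;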
Note that all these routines will return an error code or exception code. It is best practice to always check for that...
Here is a slightly more complex example, using a SQL component called qLoader that lives on a data module. I pass a parameter, execute the SQL statement, then iterate through all the results.
try
  with dmXLate.qLoader do
  begin
    Close;
    ParamByName('DBTYPE').AsString := DBType;
    Open;
    while not dmXLate.qLoader.Eof do
    begin
      // Here is where we process each result
      UserName := dmXLate.qLoader.FieldByName('USERNAME').AsString;
      dmXLate.qLoader.Next;
    end;
  end;
except
  on E: Exception do
  begin
    ShowMessage(E.Message);
    Exit;
  end;
end;

Query performance difference pl/sql forall insert and plain SQL insert

We have been using a temporary table to store intermediate results in a PL/SQL stored procedure. Could anyone tell me if there is a performance difference between doing a bulk-collect insert through PL/SQL and a plain SQL insert?
Insert into [Table name] [Select query Returning huge amount of data]
or
Cursor for [Select query returning huge amount of data]
open cursor
fetch cursor bulk collect into collection
Use FORALL to perform insert
Which of the above two options is better for inserting a huge amount of temporary data?
Some experimental data for your problem (Oracle 9.2):
bulk collect
DECLARE
TYPE t_number_table IS TABLE OF NUMBER;
v_tab t_number_table;
BEGIN
SELECT ROWNUM
BULK COLLECT INTO v_tab
FROM dual
CONNECT BY LEVEL < 100000;
FORALL i IN 1..v_tab.COUNT
INSERT INTO test VALUES (v_tab(i));
END;
/
-- 2.6 sec
insert
-- test table
CREATE global TEMPORARY TABLE test (id number)
ON COMMIT preserve ROWS;
BEGIN
INSERT INTO test
SELECT ROWNUM FROM dual
CONNECT BY LEVEL < 100000;
END;
/
-- 1.4 sec
direct path insert
http://download.oracle.com/docs/cd/B10500_01/server.920/a96524/c21dlins.htm
BEGIN
INSERT /*+ append */ INTO test
SELECT ROWNUM FROM dual
CONNECT BY LEVEL < 100000;
END;
/
-- 1.2 sec
An INSERT INTO ... SELECT must certainly be faster; it skips the overhead of storing the data in a collection first.
It depends on the nature of the work you're doing to populate the intermediate results. If the work can be done relatively simply in the SELECT statement for the INSERT, that will generally perform better.
However, if you have some complex intermediate logic, it may be easier (from a code maintenance point of view) to fetch and insert the data in batches using bulk collects/binds. In some cases it might even be faster.
One thing to note very carefully: the query plan used by the INSERT INTO x SELECT ... will sometimes be quite different to that used when the query is run by itself (e.g. in a PL/SQL explicit cursor). When comparing performance, you need to take this into account.
Tom Kyte of AskTom fame has answered this question more firmly. If you are willing to do some searching, you can find the question and his response, which contains detailed testing results and explanations. He shows a PL/SQL cursor vs. a PL/SQL bulk collect (including the effect of periodic commits) vs. an SQL insert-as-select.
Insert-as-select wins hands down every time, and the difference on even modest datasets is dramatic.
That said, a comment was made earlier about the complexity of intermediary computations. I can think of three situations where this would be relevant.
1) If the computations require going outside of the Oracle database, then clearly a simple insert-as-select does not do the trick.
2) If the solution requires PL/SQL function calls, then context switching can potentially kill your query, and you may have better results with PL/SQL calling PL/SQL functions. PL/SQL was made to call SQL, not the other way around; calling PL/SQL from SQL is expensive.
3) If the computations make the SQL code very difficult to read, then even though it may be slower, a PL/SQL bulk-collect solution may be better for those other reasons.
Good luck.
When we declare a cursor explicitly, Oracle allocates a private SQL work area in RAM. When a select statement returns multiple rows, they are copied from the table or view into the private SQL work area as the ACTIVE SET, whose size is the number of rows that meet your search criteria. Once the cursor is opened, your pointer is placed on the first row of the ACTIVE SET. Here you can perform DML: for example, an update operation changes the rows in the work area and not in the table directly, so the table is not touched for every change. The rows are fetched once into the work area, and after the operation is performed, the update is applied once for all operations. This reduces input/output data transfer between the database and the user.
I suggest using a PL/SQL explicit cursor; you are just going to perform the DML operations in the private workspace allotted to the cursor, which will not hit the database server's performance during peak hours.

Bulk Insert into Oracle database: Which is better: FOR Cursor loop or a simple Select?

Which would be a better option for a bulk insert into an Oracle database?
A FOR Cursor loop like
DECLARE
    CURSOR C1 IS SELECT * FROM FOO;
BEGIN
    FOR C1_REC IN C1 LOOP
        INSERT INTO BAR(A,
                        B,
                        C)
        VALUES (C1_REC.A,
                C1_REC.B,
                C1_REC.C);
    END LOOP;
END;
or a simple select, like:
INSERT INTO BAR(A,
B,
C)
(SELECT A,
B,
C
FROM FOO);
Any specific reason either one would be better ?
I would recommend the SELECT option because cursors take longer.
Also, the SELECT is much easier to understand for anyone who has to modify your query.
The general rule-of-thumb is, if you can do it using a single SQL statement instead of using PL/SQL, you should. It will usually be more efficient.
However, if you need to add more procedural logic (for some reason), you might need to use PL/SQL, but you should use bulk operations instead of row-by-row processing. (Note: in Oracle 10g and later, your FOR loop will automatically use BULK COLLECT to fetch 100 rows at a time; however your insert statement will still be done row-by-row).
e.g.
DECLARE
TYPE tA IS TABLE OF FOO.A%TYPE INDEX BY PLS_INTEGER;
TYPE tB IS TABLE OF FOO.B%TYPE INDEX BY PLS_INTEGER;
TYPE tC IS TABLE OF FOO.C%TYPE INDEX BY PLS_INTEGER;
rA tA;
rB tB;
rC tC;
BEGIN
SELECT * BULK COLLECT INTO rA, rB, rC FROM FOO;
-- (do some procedural logic on the data?)
FORALL i IN rA.FIRST..rA.LAST
INSERT INTO BAR(A,
B,
C)
VALUES(rA(i),
rB(i),
rC(i));
END;
The above has the benefit of minimising context switches between SQL and PL/SQL. Oracle 11g also has better support for tables of records so that you don't have to have a separate PL/SQL table for each column.
Also, if the volume of data is very great, it is possible to change the code to process the data in batches, as sketched below.
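A minimal sketch of batching with BULK COLLECT ... LIMIT, reusing the FOO/BAR tables from the question (BAR's columns are assumed to match the cursor's row exactly):
DECLARE
    CURSOR c IS SELECT A, B, C FROM FOO;
    TYPE t_rows IS TABLE OF c%ROWTYPE;
    r t_rows;
BEGIN
    OPEN c;
    LOOP
        FETCH c BULK COLLECT INTO r LIMIT 1000; -- 1000 rows per batch
        EXIT WHEN r.COUNT = 0;
        FORALL i IN 1..r.COUNT
            INSERT INTO BAR VALUES r(i); -- whole-record insert
    END LOOP;
    CLOSE c;
END;
/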
If your rollback/undo segment can accommodate the size of the transaction, then option 2 is better. Option 1 is useful if you do not have the rollback capacity needed and have to break the large insert into smaller commits so you don't get 'rollback/undo segment too small' errors.
A simple insert/select like your 2nd option is far preferable. Each insert in the 1st option requires a context switch from PL/SQL to SQL. Run each with trace/tkprof and examine the results.
If, as Michael mentions, your rollback cannot handle the statement, then have your DBA give you more. Disk is cheap, while the partial results that come from inserting your data in multiple passes are potentially quite expensive. (There is almost no undo associated with an insert.)
I think one important piece of information is missing from this question:
How many records will you insert?
If from 1 to approx. 10,000, then you should use a single SQL statement (as others have said, it is easy to understand and easy to write).
If from approx. 10,000 to approx. 100,000, then you should use a cursor, but add logic to commit every 10,000 records.
If from approx. 100,000 up to millions, then you should use bulk collect for better performance.
As you can see from the other answers, there are a lot of options available. If you are just doing < 10k rows you should go with the second option.
In short, from approx. > 10k all the way to, say, < 100k it is kind of a gray area. A lot of old geezers will bark at big rollback segments. But honestly, hardware and software have made amazing progress, to the point where you may be able to get away with option 2 for a lot of records if you only run the code a few times. Otherwise you should probably commit every 1k-10k or so rows. Here is a snippet that I use. I like it because it is short and I don't have to declare a cursor; plus, on Oracle 10g and later, the FOR loop fetches in bulk behind the scenes.
begin
  for r in (select rownum rn, t.* from foo t) loop
    insert into bar (A, B, C) values (r.A, r.B, r.C);
    if mod(r.rn, 1000) = 0 then
      commit;
    end if;
  end loop;
  commit;
end;
I found this link on the Oracle site that illustrates the options in more detail.
You can use:
Bulk collect along with FORALL, which together are called bulk binding.
PL/SQL's FORALL operator can speed up simple table inserts by roughly 30x.
BULK COLLECT and Oracle FORALL together are known as bulk binding. Bulk binds are a PL/SQL technique where, instead of executing multiple individual SELECT, INSERT, UPDATE or DELETE statements to retrieve data from, or store data in, a table, all of the operations are carried out at once, in bulk. This avoids the context switching you get when the PL/SQL engine has to pass control to the SQL engine, then back to the PL/SQL engine, and so on, as you access rows one at a time. To do bulk binds with INSERT, UPDATE, and DELETE statements, you enclose the SQL statement within a PL/SQL FORALL statement. To do bulk binds with SELECT statements, you include the BULK COLLECT clause in the SELECT statement instead of using INTO.
It improves performance.
I do neither for a daily complete reload of data. For example, say I am loading my Denver site. (There are other strategies for near-real-time deltas.)
I use a CREATE TABLE ... AS SELECT, which I have found to be almost as fast as a bulk load.
For example, below a CREATE TABLE statement is used to stage the data, casting the columns to the data types needed:
CREATE TABLE sales_dataTemp AS
SELECT cast(column1 as Date)  AS SALES_QUARTER,
       cast(sales as number)  AS SALES_IN_MILLIONS,
       ....
FROM TABLE1;
This temporary table mirrors the structure of my target table exactly, which is list-partitioned by site.
I then do a partition swap with the DENVER partition, and I have a new data set.
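The swap itself is a single DDL statement, roughly like the sketch below (the target table name SALES_DATA is assumed here):
-- swap the staged data into the DENVER partition in one step
ALTER TABLE sales_data
    EXCHANGE PARTITION denver
    WITH TABLE sales_dataTemp
    INCLUDING INDEXES
    WITHOUT VALIDATION;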