Number of SQL executions in an Oracle 10g database

Is there a way to identify the number of SQL statements running in parallel in an Oracle 10g database?
What is the maximum number of SQL statements Oracle can run at a time?
This is needed to evaluate the database's response time. There is a use case in which nearly 1 million SQL statements need to be fired, and the tables involved hold a few billion rows.
Your help is much appreciated.
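A starting point, assuming you have access to the V$ dictionary views, is to count the sessions actively executing SQL; the instance's session and process parameters bound how many can run concurrently (verify column availability against your 10g instance):

```sql
-- Count user sessions currently executing a SQL statement
SELECT COUNT(*) AS active_sql
FROM   v$session
WHERE  type = 'USER'
AND    status = 'ACTIVE'
AND    sql_id IS NOT NULL;

-- Upper bounds on concurrent activity for this instance
SELECT name, value
FROM   v$parameter
WHERE  name IN ('sessions', 'processes');
```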

Related

SSMA - Rows counts are not matching after migration from Oracle to SQL Server

I migrated data from Oracle to SQL Server. The row count in Oracle is around 820m rows, but when SSMA finished the migration (the message said "100% successfully migrated"), the row count in SQL Server was around 745m rows.
Is there a limit on the number of rows?
There is no limit on the number of rows for SSMA.
Please keep in mind that Oracle provides read consistency, meaning that a cursor produces rows as they were at the beginning of the SELECT. Maybe when SSMA started to run there were 745m rows in Oracle.
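One way to sanity-check the read-consistency explanation, assuming flashback query is available and undo retention still covers the window, is to count the rows as of the time the migration started (the table name and timestamp below are placeholders):

```sql
-- Row count as of a past point in time, via Oracle Flashback Query
SELECT COUNT(*)
FROM   my_table
AS OF TIMESTAMP TO_TIMESTAMP('2020-01-01 08:00:00', 'YYYY-MM-DD HH24:MI:SS');
```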

very slow oracle sql fast after explain plan for

Hello, I have a SQL query with a few left joins and CASE expressions. It takes around one second on Postgres and 20 minutes on Oracle! (390 rows on Postgres, 300 rows on Oracle.) The tables are configured the same in both databases. If I first run EXPLAIN PLAN FOR for this SQL in Oracle, then run the SQL itself (without EXPLAIN PLAN FOR), it is as fast as on Postgres. I can even make some changes, add columns, etc., and it still runs fast.
Does anyone understand what is happening here, and what solution I can use in the Java code where the query is issued?
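As a diagnostic sketch (not an answer from the thread): comparing the plan Oracle actually used in the slow and fast runs is usually the first step with this kind of behavior. Immediately after running the query in the same session:

```sql
-- Show the execution plan of the last statement run in this session
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
```

Note that 'ALLSTATS LAST' only includes actual row counts if the statement ran with the GATHER_PLAN_STATISTICS hint or STATISTICS_LEVEL set to ALL.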

Performance Tuning in ODI 12c

I want to migrate more than 10 million rows from source (Oracle) to target (Oracle) in a shorter time using Oracle Data Integrator 12c. I have tried creating multiple scenarios, assigning each scenario a million records, and running a package of 10 scenarios. The time was reduced, but is there any other way to increase the performance of my ODI mapping when it handles more than 10 million records?
I expect a shorter execution time for the mapping, for better performance.
Further to add: in ODI 12c you have the option to create source-side indexes; if the query is performing badly on the source side, you will find that option under the joins section of the Physical tab. You can also apply hints in the Physical tab. Please let me know if these work.
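As a sketch of the hint approach mentioned above (the table names and parallel degree are placeholders, not values from the thread), the SQL that ODI generates can end up looking like a direct-path, parallel insert-select:

```sql
-- Direct-path, parallel insert-select of the kind produced when
-- APPEND/PARALLEL hints are set in the Physical tab
INSERT /*+ APPEND PARALLEL(t, 8) */ INTO target_table t
SELECT /*+ PARALLEL(s, 8) */ *
FROM   source_table s;
```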

SSIS performance vs OpenQuery with Linked Server from SQL Server to Oracle

We have a linked server (OraOLEDB.Oracle) defined in the SQL Server environment. Oracle 12c, SQL Server 2016. There is also an Oracle client (64 bit) installed on SQL Server.
When retrieving data from Oracle (a simple query fetching all columns from a 3M-row, fairly narrow table with varchars, dates, and integers), we are seeing the following performance numbers:
sqlplus: select from Oracle, spooled to an OS file on the SQL Server itself: less than 2k rows/sec
SSMS: insert into a SQL Server table, select from Oracle using OPENQUERY (pass-through, so remote execution): less than 2k rows/sec
SQL Export/Import tool (in essence, SSIS): insert into a SQL Server table, using the Oracle OLEDB provider for the source and the SQL Server OLEDB provider for the target: over 30k rows/sec
We are looking for ways to improve throughput using OPENQUERY/OPENROWSET, to match the SSIS throughput. There is probably some buffer or flag somewhere that allows achieving the same?
Please advise...
Thank you!
--Alex
There is probably some buffer/flag somewhere that allows to achieve the same?
You are probably looking for the FetchSize parameter:
FetchSize - specifies the number of rows the provider will fetch at a time (fetch array). It must be set on the basis of data size and the response time of the network. If the value is set too high, this could result in more wait time during the execution of the query. If the value is set too low, this could result in many more round trips to the database. Valid values are 1 to 429,496,296. The default is 100.
For example:
exec sp_addlinkedserver N'MyOracle', 'Oracle', 'ORAOLEDB.Oracle', N'//172.16.8.119/xe', N'FetchSize=2000', ''
See, e.g., https://blogs.msdn.microsoft.com/dbrowne/2013/10/02/creating-a-linked-server-for-oracle-in-64bit-sql-server/
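For completeness, a minimal sketch of pulling rows through a linked server defined that way (the table and column names are placeholders):

```sql
-- The Oracle-side SELECT runs remotely; the FetchSize from the
-- provider string controls the fetch array size
INSERT INTO dbo.LocalTable (col1, col2)
SELECT col1, col2
FROM OPENQUERY(MyOracle, 'SELECT col1, col2 FROM remote_table');
```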
I think there are many ways to enhance the performance of the INSERT query. I suggest reading the following article for more information about data-loading performance:
The Data Loading Performance Guide
One method you can try is minimizing logging by using a clustered index; check the link below for more information:
New update on minimal logging for SQL Server 2008

Oracle Performance Issues

I'm new to Oracle. I've worked with Microsoft SQL Server for years, though. I was brought into a project that was already overdue and over budget, and I need to be "the Oracle expert." I've got a table that has 14 million rows in it. And I've got a complicated query that updates the table. But I'm going to start with the simple queries. When I issue a simple update that modifies a small number of records (100 to maybe 10,000 or so) it takes no more than 2 to 5 minutes to table scan the table and update the affected records. (Less time if the query can use an index.) But if I update the entire table with:
UPDATE MyTable SET MyFlag = 1;
Then it takes 3 hours!
If the table scan completes in minutes, why should this take hours? I could certainly use some advice on how to troubleshoot this, since I don't have enough experience with Oracle to know which diagnostic queries to run. (I'm on Oracle 11g and using Oracle SQL Developer as the client.)
Thanks.
When you do an UPDATE in Oracle, the changes are sequentially appended to the redo log and later written out to the data blocks by the checkpoint process.
In addition, the old version of the data is copied into UNDO space, to support possible transaction rollbacks and to give concurrent sessions a consistent view of the old data.
All of this can take significantly more time than pure read operations, which don't modify data.
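One common workaround (a sketch, not something suggested in the thread itself) is to break the update into committed batches, so that each transaction generates a bounded amount of undo and redo. This assumes MyFlag is a numeric column that may be NULL:

```sql
-- Update in batches of 100k rows, committing between batches
BEGIN
  LOOP
    UPDATE MyTable
    SET    MyFlag = 1
    WHERE  (MyFlag IS NULL OR MyFlag <> 1)
    AND    ROWNUM <= 100000;
    EXIT WHEN SQL%ROWCOUNT = 0;
    COMMIT;
  END LOOP;
  COMMIT;
END;
/
```

The trade-off: intermediate commits make the operation non-atomic, so a failure partway through leaves the table partially updated.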