Oracle: Indexes not being used - SQL

I have a query which is not using my indexes. Can someone say why?
explain plan set statement_id = 'bad8' for
select
g1.g1id,a.a1id from atable a,
(
select
phone,address,g1id from gtable g
where
g.active = 0 and
(g.name is not null) AND
(SYSDATE - g.CTIME <= 2*365)
) g1
where
(
(a.phone.ph1 = g1.phone.ph1 and
a.phone.ph2 = g1.phone.ph2 and
a.phone.ph3 = g1.phone.ph3
)
OR
(a.address.ad1 = g1.address.ad1 and a.address.ad2 = g1.address.ad2)
)
In both tables, atable and gtable, I have these indexes:
1. On phone.ph1, phone.ph2, phone.ph3
2. On address.ad1, address.ad2
phone and address are of custom data types.
Using Oracle 11g.
Here is the explain plan query and output :
SELECT cardinality "Rows",
lpad(' ',level-1)||operation||' '||
options||' '||object_name "Plan"
FROM PLAN_TABLE
CONNECT BY prior id = parent_id
AND prior statement_id = statement_id
START WITH id = 0
AND statement_id = 'bad8'
ORDER BY id;
Result:
Rows       Plan
490191190  SELECT STATEMENT
null       CONCATENATION
490190502  HASH JOIN
511841     TABLE ACCESS FULL gtable
41332965   PARTITION LIST ALL
41332965   TABLE ACCESS FULL atable
688        HASH JOIN
376893     TABLE ACCESS FULL gtable
41332965   PARTITION LIST ALL
41332965   TABLE ACCESS FULL atable
Both atable,gtable have more than 10 million rows each.
Most values in columns phone and address don't repeat.

Which indices Oracle chooses depends on many factors, including things you haven't mentioned in your question, such as the number of rows in the tables, the frequency of values within a column, and whether you have separate or combined indices when more than one column is indexed.
Having said that, I suppose the main reasons your indices aren't used are:
You don't join directly with GTABLE / GLOBAL. Instead you join with a view that has three additional WHERE clauses that aren't part of the index, which makes the index less effective in this situation.
The JOIN condition includes an OR, which makes it difficult to use indices.
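For illustration only: the OR join behaves like the concatenation of two separate joins, which is exactly what the CONCATENATION step in your plan shows. A hand-written equivalent (glossing over duplicate handling, which Oracle's CONCATENATION solves internally with LNNVL filters) would be:
select g.g1id, a.a1id
from atable a, gtable g
where g.active = 0 and g.name is not null
and (SYSDATE - g.CTIME <= 2*365)
and a.phone.ph1 = g.phone.ph1
and a.phone.ph2 = g.phone.ph2
and a.phone.ph3 = g.phone.ph3
union
select g.g1id, a.a1id
from atable a, gtable g
where g.active = 0 and g.name is not null
and (SYSDATE - g.CTIME <= 2*365)
and a.address.ad1 = g.address.ad1
and a.address.ad2 = g.address.ad2
Each branch on its own could use one of your indices, but as the update below explains, that does not mean it would be any faster.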
Update:
If Oracle used your indices to do the join - which is already very difficult due to the OR condition - it would end up with a huge number of ROWIDs. For each ROWID, it would then have to fetch the full row. Since a full table scan can easily be up to 50 times faster than a fetch by ROWID (I don't know what value Oracle uses), it will only use the indices if it reliably knows that the join will reduce the number of rows to fetch by a factor of 50.
In your case, there are the remaining WHERE conditions (g.active = 0, g.name is not null, SYSDATE - g.CTIME <= 2*365), which aren't represented in the indices. So they have to be applied after the join and after the GTABLE rows have been fetched. This makes it even more difficult to reach a result set 50 times smaller than what a full table scan would read.
So I'm pretty sure the Oracle cost estimate is correct, i.e. using the indices would result in a more expensive query and even longer execution time.

We can say "your query does not use your indexes because it does not need them". A hash join is better here. To use your indexes, Oracle would need to full scan them (4 indexes), do two joins, combine the resulting ROWIDs for the OR, and after that read probably many blocks from the tables. If the CBO believes that the result has many rows, it chooses the full scans, because that is faster.
There are no conditions that reduce the number of rows taken from the tables, and there is no range scan, so it must do full scans.


Oracle FIRST_ROWS optimizer hint

I'm writing a query against what is currently a small table in development. In production, we expect it to grow quite large over the life of the table (the primary key is a number(10)).
My query does a selection for the top N rows of my table, filtered by specific criteria and ordered by date ascending. Essentially, we're assigning records, in bulk, to a specific user for processing. In my case, N will only be 10, 20, or 30.
I'm currently selecting my primary keys inside a subselect, using rownum to limit my results, like so:
SELECT log_number FROM (
SELECT
il2.log_number,
il2.final_date
FROM log il2
INNER JOIN agent A ON A.agent_id = il2.agent_id
INNER JOIN activity lat ON il2.activity_id = lat.activity_id
WHERE (p_criteria1 IS NULL OR A.criteria1 = p_criteria1)
AND lat.criteria2 = p_criteria2
AND lat.criteria3 = p_criteria3
AND il2.criteria3 = p_criteria4
AND il2.current_user IS NULL
GROUP BY il2.log_number, il2.final_date
ORDER BY il2.final_date ASC)
WHERE ROWNUM <= p_how_many;
Although I have a stopkey due to the rownum, I'm wondering if using an Oracle hint here (/*+ FIRST_ROWS(p_how_many) */) on the inner select will affect the query plan in the future. I'd like to know more about what the database does when this hint is specified; does it actually make a difference if you have to order the table? (Seems like it wouldn't.) Or does it only affect the select portion, after the access and join parts?
Looking at the explain plan now doesn't get me much as the table hasn't grown yet.
Thanks for your help!
Even with an ORDER BY, different execution plans could be selected when you limit the number of rows returned. It can be easier to select the top n rows by some order key, then sort those, than to sort the entire table then select the top n rows.
However, the GROUP BY is likely to restrict the benefit of this sort of optimization. Grouping (or a DISTINCT operation) generally prevents the optimizer from using a plan that can pipe individual rows into a STOPKEY operation.
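For what it's worth, here is a sketch of where the hint would go. Note that hint arguments must be integer literals, not PL/SQL variables, so /*+ FIRST_ROWS(p_how_many) */ would be treated as invalid and ignored; you would have to hard-code a representative value:
SELECT log_number FROM (
SELECT /*+ FIRST_ROWS(30) */
il2.log_number,
il2.final_date
FROM log il2
INNER JOIN agent A ON A.agent_id = il2.agent_id
INNER JOIN activity lat ON il2.activity_id = lat.activity_id
WHERE ... -- same predicates as in your query
GROUP BY il2.log_number, il2.final_date
ORDER BY il2.final_date ASC)
WHERE ROWNUM <= p_how_many;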

Performance of nested select

I know this is a common question and I have read several other posts and papers but I could not find one that takes into account indexed fields and the volume of records that both queries could return.
My question is really simple: which of the two queries below, written in an SQL-like syntax, is recommended in terms of performance?
First query:
Select *
from someTable s
where s.someTable_id in
(Select someTable_id
from otherTable o
where o.indexedField = 123)
Second query:
Select *
from someTable s
where someTable_id in
(Select someTable_id
from otherTable o
where o.someIndexedField = s.someIndexedField
and o.anotherIndexedField = 123)
My understanding is that the second query will query the database for every tuple that the outer query returns, whereas the first query will evaluate the inner select first and then apply the filter to the outer query.
Now the second query may query the database super fast considering that someIndexedField is indexed, but say we have thousands or millions of records: wouldn't it be faster to use the first query?
Note: In an Oracle database.
In MySQL, if nested selects are over the same table, the execution time of the query can be hellish.
A good way to improve performance in MySQL is to create a temporary table for the nested select and run the main select against that table.
For example:
Select *
from someTable s1
where s1.someTable_id in
(Select someTable_id
from someTable s2
where s2.Field = 123);
Can have a better performance with:
create temporary table tempTable as (
Select someTable_id
from someTable s2
where s2.Field = 123
);
Select *
from someTable s1
where s1.someTable_id in
(Select someTable_id
from tempTable s2);
I'm not sure about performance for a large amount of data.
About the first query:

"first query will evaluate the inner select first and then apply the filter to the outer query."

That is not so simple.
In SQL it is mostly NOT possible to tell what will be executed first and what later, because SQL is a declarative language.
Your "nested selects" are nested only visually, not technically.
Example 1: "someTable" has 10 rows, "otherTable" has 10,000 rows.
In most cases the database optimizer will read "someTable" first and then check "otherTable" for matches. For that it may or may not use indexes, depending on the situation; my feeling is that in this case it will use the "indexedField" index.
Example 2: "someTable" has 10,000 rows, "otherTable" has 10 rows.
In most cases the database optimizer will read all rows from "otherTable" into memory, filter them by 123, and then find the matches via the someTable PK (someTable_id) index. As a result, no indexes from "otherTable" will be used.
About the second query:
It is completely different from the first, so I don't know how to compare them:
The first query links the two tables by one pair: s.someTable_id = o.someTable_id.
The second query links the two tables by two pairs: s.someTable_id = o.someTable_id AND o.someIndexedField = s.someIndexedField.
The common practice for linking two tables is your first query.
But o.someTable_id should be indexed.
So the common rules are:
all PKs should be indexed (they are indexed by default)
all columns used for filtering (as in the WHERE part) should be indexed
all columns used to match rows between tables (including IN, JOIN, etc.) are also filtering, so they should be indexed as well.
The DB engine chooses the best order of operations (or runs them in parallel) by itself. In most cases you cannot determine this.
Use Oracle EXPLAIN PLAN (similar exists for most DBs) to compare execution plans of different queries on real data.
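In Oracle that looks like this, using the first query from the question as the statement under test:
EXPLAIN PLAN FOR
Select *
from someTable s
where s.someTable_id in
(Select someTable_id
from otherTable o
where o.indexedField = 123);

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);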
When I used directly
where not exists (select VAL_ID from #newVals where VAL_ID = OLDPAR.VAL_ID)
it cost 20 seconds. When I added the temp table it cost ~0 seconds. I don't understand why; as a C++ developer I just imagine that internally there is a loop over the values.
-- Indexed table variable gives a big speedup
declare @newValID table (VAL_ID int INDEX IX1 CLUSTERED);
insert into @newValID select VAL_ID from #newVals;

insert into #deleteValues
select OLDPAR.VAL_ID
from #oldVal AS OLDPAR
where
not exists (select VAL_ID from @newValID where VAL_ID = OLDPAR.VAL_ID)
or exists (select VAL_ID from #VaIdInternals where VAL_ID = OLDPAR.VAL_ID);

Optimize self join on millions of rows

I have a table which is a link table between objects in my SQL Server 2012 database (annonsid, annonsid2). This table is used to build chains of triangles or even rectangles, to see who can swap with whom.
This is the query I use on the table Matching_IDs, which has 1.5 million rows in it, producing 14 million possible chains:
SELECT COUNT(*)
FROM Matching_IDs AS m
INNER JOIN Matching_IDs AS m2
ON m.annonsid2 = m2.annonsid
INNER JOIN Matching_IDs AS m3
ON m2.annonsid2 = m3.annonsid
AND m.annonsid = m3.annonsid2
I must improve performance so it takes maybe 1 second or less. Is there a faster way to do this? The query takes about 1 minute on my computer. I normally use a WHERE m.annonsid = x, but it takes just the same amount of time, because it has to go through all possible combinations anyway.
Update: the latest query plan
|--Compute Scalar(DEFINE:([Expr1006]=CONVERT_IMPLICIT(int,[globalagg1011],0)))
|--Stream Aggregate(DEFINE:([globalagg1011]=SUM([partialagg1010])))
|--Parallelism(Gather Streams)
|--Stream Aggregate(DEFINE:([partialagg1010]=Count(*)))
|--Hash Match(Inner Join, HASH:([m2].[annonsid2], [m2].[annonsid])=([m3].[annonsid], [m].[annonsid2]), RESIDUAL:([MyDatabase].[dbo].[Matching_IDs].[annonsid2] as [m2].[annonsid2]=[MyDatabase].[dbo].[Matching_IDs].[annonsid] as [m3].[annonsid] AND [MyDatabase].[dbo].[Matching_IDs].[annonsid2] as [m].[annonsid2]=[MyDatabase].[dbo].[Matching_IDs].[annonsid] as [m2].[annonsid]))
|--Parallelism(Repartition Streams, Hash Partitioning, PARTITION COLUMNS:([m2].[annonsid2], [m2].[annonsid]))
| |--Index Scan(OBJECT:([MyDatabase].[dbo].[Matching_IDs].[NonClusteredIndex-20121229-133207] AS [m2]))
|--Parallelism(Repartition Streams, Hash Partitioning, PARTITION COLUMNS:([m3].[annonsid], [m].[annonsid2]))
|--Merge Join(Inner Join, MANY-TO-MANY MERGE:([m].[annonsid])=([m3].[annonsid2]), RESIDUAL:([MyDatabase].[dbo].[Matching_IDs].[annonsid] as [m].[annonsid]=[MyDatabase].[dbo].[Matching_IDs].[annonsid2] as [m3].[annonsid2]))
|--Parallelism(Repartition Streams, Hash Partitioning, PARTITION COLUMNS:([m].[annonsid]), ORDER BY:([m].[annonsid] ASC))
| |--Index Scan(OBJECT:([MyDatabase].[dbo].[Matching_IDs].[NonClusteredIndex-20121229-133152] AS [m]), ORDERED FORWARD)
|--Parallelism(Repartition Streams, Hash Partitioning, PARTITION COLUMNS:([m3].[annonsid2]), ORDER BY:([m3].[annonsid2] ASC))
|--Index Scan(OBJECT:([MyDatabase].[dbo].[Matching_IDs].[NonClusteredIndex-20121229-133207] AS [m3]), ORDERED FORWARD)
Some ideas:
Try two indexes, (annonsid, annonsid2) and (annonsid2, annonsid); see the sketch after this list.
Have you tried a column store index? It makes the table read only but it might improve performance.
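A sketch of those two indexes (the index names are illustrative):
CREATE INDEX IX_Matching_a1_a2 ON Matching_IDs (annonsid, annonsid2);
CREATE INDEX IX_Matching_a2_a1 ON Matching_IDs (annonsid2, annonsid);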
Also, some variations of the query could help. Examples:
SELECT COUNT(*)
FROM Matching_IDs AS m
INNER JOIN Matching_IDs AS m2
ON m.annonsid2 = m2.annonsid
INNER JOIN Matching_IDs AS m3
ON m2.annonsid2 = m3.annonsid
where m.annonsid = m3.annonsid2
or
SELECT COUNT(*)
FROM Matching_IDs AS m, Matching_IDs AS m2, Matching_IDs AS m3
where m2.annonsid2 = m3.annonsid
and m.annonsid2 = m2.annonsid
and m.annonsid = m3.annonsid2
Did you check the CPU/IO-Load? If IO-Load is high, then the server is not crunching numbers but swapping => more RAM solves the problem.
How fast is this query?
SELECT COUNT(*)
FROM Matching_IDs AS m
INNER JOIN Matching_IDs AS m2
ON m.annonsid2 = m2.annonsid
If this is very fast but adding the next join slows things down, then you probably need more RAM.
It seems like you have already indexed this quite well. You can try converting the hash join to a merge join by adding the right multi-column index, but it won't give you the desired 60x speedup.
I think this index would be on (annonsid, annonsid2), although I might have made a mistake here.
It would be nice to materialize all of this, but indexed views do not support self-joins. You can try to materialize this query (unaggregated) into a new table, as sketched below. Whenever you execute DML against the base table, also update the second table (using either application logic or triggers). That would allow you to query it blazingly fast.
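A minimal sketch of that one-off materialization (the target table name is made up; keeping it current via triggers or application logic is omitted):
SELECT m.annonsid, m2.annonsid AS annonsid_mid, m3.annonsid AS annonsid_end
INTO dbo.MatchingChains
FROM Matching_IDs AS m
INNER JOIN Matching_IDs AS m2
ON m.annonsid2 = m2.annonsid
INNER JOIN Matching_IDs AS m3
ON m2.annonsid2 = m3.annonsid
AND m.annonsid = m3.annonsid2;
-- Counting is then a trivial scan of the precomputed table:
SELECT COUNT(*) FROM dbo.MatchingChains;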
You should split this query into separate steps. I think you should first create a table where you can store the primary key + annonsid, annonsid2 (if annonsid is not the primary key itself).
DECLARE @AnnonsIds TABLE
(
primaryKey int,
-- if you later need more info from the original rows, like captions,
-- AND it is not (just) the annonsid
annonsid int,
annonsid2 int
)
If you declare a table variable like this and you have an index on the column, it is quite fast to get the specified rows with WHERE annonsid = @annonsid OR annonsid2 = @annonsid.
After the first step you have a much smaller (I guess) and "thinner" table to work with. Then you just have to use the joins on it, or make a temp table and a CTE on it.
I think it should be faster, depending on the selectivity of the condition in the WHERE. Of course, if 1.1 million rows fit in it, then it does not make sense, but if it is just a couple hundred or thousand, you should give it a try!
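A sketch of that first step (assuming Matching_IDs has an id primary key column and @annonsid holds the id of interest):
INSERT INTO @AnnonsIds (primaryKey, annonsid, annonsid2)
SELECT id, annonsid, annonsid2
FROM Matching_IDs
WHERE annonsid = @annonsid OR annonsid2 = @annonsid;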
1 - Change SELECT COUNT(*) to COUNT(1) or COUNT(id).
2 - Write SET NOCOUNT ON at the top of your stored procedure or query.
3 - Use an index on annonsid, annonsid2.
4 - Have your indexes after the primary key in your table.
You could denormalize the data by adding a table RelatedIds with AnnonsId, RelatedAnnonId and Distance. For every value of AnnonsId the table would contain rows for each RelatedAnnonId and the number of relations that need to be traversed to reach it, aka Distance. Triggers on the existing Matching_IDs table would maintain the new table with some configured maximum value for Distance, e.g. 3 to handle rectangular shapes. Index the table on (AnnonsId, Distance).
Edit: An index on (Distance, AnnonsId) will allow you to quickly find rows that have enough related entries to form a particular shape. Adding a column for MaxDistance may be useful if you want to be able to exclude rows based on, for example, having a triangular but not rectangular relationship.
The new query would inner join RelatedIds as RI on RI.AnnonsId = m.AnnonsId and RI.Distance <= @MaxDistance, with the desired "shape" dictating the value of @MaxDistance.
It should provide much better performance on the select. The downsides are another table with a large number of rows, and the overhead of the triggers when altering the Matching_IDs table.
Example: There are two entries in Matching_IDs: (1,2) and (2,3).
The new table would contain 3 entries:
1-> 2: distance = 1
1-> 3: distance = 2 (it takes one intermediate 'node' to go from 1 to 3)
2-> 3: distance = 1
adding one more entry to the matching ids (3,1) would result in another entry:
1-> 1: distance = 3
And voilà: you found a triangle (distance = 3).
Now, to find all triangles, simply do this:
select *
from RelatedIds
where AnnonsId=RelatedAnnonId
and Distance=3

Horrible Oracle update performance

I am performing an update with a query like this:
UPDATE (SELECT h.m_id,
m.id
FROM h
INNER JOIN m
ON h.foo = m.foo)
SET m_id = id
WHERE m_id IS NULL
Some info:
Table h is roughly ~5 million rows
All rows in table h have NULL values for m_id
Table m is roughly ~500 thousand rows
m_id on table h is an indexed foreign key pointing to id on table m
id on table m is the primary key
There are indexes on m.foo and h.foo
The EXPLAIN PLAN for this query indicated a hash join and full table scans, but I'm no DBA, so I can't really interpret it very well.
The query itself ran for several hours and did not complete. I would have expected it to complete in no more than a few minutes. I've also attempted the following query rewrite:
UPDATE h
SET m_id = (SELECT id
FROM m
WHERE m.foo = h.foo)
WHERE m_id IS NULL
The EXPLAIN PLAN for this mentioned ROWID lookups and index usage, but it also went on for several hours without completing. I've also always been under the impression that queries like this would cause the subquery to be executed for every result from the outer query's predicate, so I would expect very poor performance from this rewrite anyway.
Is there anything wrong with my approach, or is my problem related to indexes, tablespace, or some other non-query-related factor?
Edit:
I'm also having abysmal performance from simple count queries like this:
SELECT COUNT(*)
FROM h
WHERE m_id IS NULL
These queries are taking anywhere from ~30 seconds to sometimes ~30 minutes(!).
I am noticing no locks, but the tablespace for these tables is sitting at 99.5% usage (only ~6MB free) right now. I've been told that this shouldn't matter as long as indexes are being used, but I don't know...
Some points:
Oracle does not index NULL values (it will index a NULL that is part of a globally non-null tuple, but that's about it).
Oracle is going for a HASH JOIN because of the size of both h and m. This is likely the best option performance-wise.
The second UPDATE might get Oracle to use indexes, but then Oracle is usually smart about merging subqueries. And it would be a worse plan anyway.
Do you have recent, reasonable statistics for your schema? Oracle really needs decent statistics.
In your execution plan, which is the first table in the HASH JOIN? For best performance it should be the smaller table (m in your case). If you don't have good cardinality statistics, Oracle will get messed up. You can force Oracle to assume fixed cardinalities with the cardinality hint, it may help Oracle get a better plan.
For example, in your first query:
UPDATE (SELECT /*+ cardinality(h 5000000) cardinality(m 500000) */
h.m_id, m.id
FROM h
INNER JOIN m
ON h.foo = m.foo)
SET m_id = id
WHERE m_id IS NULL
In Oracle, FULL SCAN reads not only every record in the table, it basically reads all storage allocated up to the maximum used (the high water mark in Oracle documentation). So if you have had a lot of deleted rows your tables might need some cleaning up. I have seen a SELECT COUNT(*) on an empty table consume 30+ seconds because the table in question had like 250 million deleted rows. If that is the case, I suggest analyzing your specific case with a DBA, so he/she can reclaim space from deleted rows and lower the high water mark.
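If the high water mark turns out to be the problem, this is one way a DBA might reclaim the space (10g+, requires an ASSM tablespace; a sketch to review with your DBA rather than run blindly):
ALTER TABLE h ENABLE ROW MOVEMENT;
-- Compacts the segment and lowers the high water mark:
ALTER TABLE h SHRINK SPACE;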
As far as I remember, a WHERE m_id IS NULL performs a full-table scan, since NULL values cannot be indexed.
Full-table scan means, that the engine needs to read every record in the table to evaluate the WHERE condition, and cannot use an index.
You could try to add a virtual column set to a not-null value if m_id IS NULL, and index this column, and use this column in the WHERE condition.
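A sketch of that idea on 11g (the column and index names are made up). Rows where m_id is not null produce a NULL virtual value and therefore don't appear in the index at all, so the index stays tiny:
ALTER TABLE h ADD (m_id_missing AS (CASE WHEN m_id IS NULL THEN 1 END));
CREATE INDEX h_m_id_missing_ix ON h (m_id_missing);
-- The slow count can then become an index lookup:
SELECT COUNT(*) FROM h WHERE m_id_missing = 1;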
Then you could also move the WHERE condition from the UPDATE statement to the sub-select, which will probably make the statement faster.
Since JOINs are expensive, rewriting INNER JOIN m ON h.foo = m.foo as
WHERE h.foo IN (SELECT m.foo FROM m WHERE m.foo IS NOT NULL)
may also help.
For large tables, MERGE is often much faster than UPDATE. Try this (untested):
MERGE INTO h USING
(SELECT h.h_id,
m.id as new_m_id
FROM h
INNER JOIN m
ON h.foo = m.foo
WHERE h.m_id IS NULL
) new_data
ON (h.h_id = new_data.h_id)
WHEN MATCHED THEN
UPDATE SET h.m_id = new_data.new_m_id;
Try the undocumented hint /*+ BYPASS_UJVC */. If it works, add a UNIQUE/PK constraint on m.foo.
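A placement sketch (undocumented and unsupported, so use at your own risk; the hint bypasses Oracle's key-preserved check on the updatable join view):
UPDATE /*+ BYPASS_UJVC */ (SELECT h.m_id,
m.id
FROM h
INNER JOIN m
ON h.foo = m.foo)
SET m_id = id
WHERE m_id IS NULL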
I would update the table in iterations. For example, add a condition such as WHERE h.date_created > sysdate-30, and after that finishes, run the same query with the condition changed to WHERE h.date_created BETWEEN sysdate-60 AND sysdate-30, and so on. If you don't have a column like date_created, maybe there's another column you can filter by, for example: WHERE m.foo = h.foo AND m.foo BETWEEN 1 AND 10.
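A sketch of one such batch, reusing the earlier single-row-subquery rewrite (date_created is hypothetical, as noted):
UPDATE h
SET m_id = (SELECT id
FROM m
WHERE m.foo = h.foo)
WHERE m_id IS NULL
AND h.date_created > SYSDATE - 30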
Only the execution plan can really explain why the cost of this update is so high, but an educated guess is that both tables are very big and that there are many NULL values, as well as a lot of matching rows (m.foo = h.foo)...

How do I go about optimizing an Oracle query?

I was given a SQL query and told that I have to optimize it.
I came across EXPLAIN PLAN, so in SQL Developer I ran EXPLAIN PLAN FOR the query.
It divided the query into different parts and showed the cost for each of them.
How do I go about optimizing the query? What do I look for? Elements with high costs?
I am a bit new to DB, so if you need more information, please ask me, and I will try to get it.
I am trying to understand the process rather than just posting the query itself and getting the answer.
The query in question:
SELECT cr.client_app_id,
cr.personal_flg,
r.requestor_type_id
FROM credit_request cr,
requestor r,
evaluator e
WHERE cr.evaluator_id = 96 AND
cr.request_id = r.request_id AND
cr.evaluator_id = e.evaluator_id AND
cr.request_id != 143462 AND
((r.soc_sec_num_txt = 'xxxxxxxxx' AND
r.soc_sec_num_txt IS NOT NULL) OR
(lower(r.first_name_txt) = 'test' AND
lower(r.last_name_txt) = 'newprogram' AND
to_char(r.birth_dt, 'MM/DD/YYYY') = '01/02/1960' AND
r.last_name_txt IS NOT NULL AND
r.first_name_txt IS NOT NULL AND
r.birth_dt IS NOT NULL))
On running explain plan, this is the output (originally a screenshot):
OPERATION OBJECT_NAME OPTIONS COST
SELECT STATEMENT 15
NESTED LOOPS
NESTED LOOPS 15
HASH JOIN 12
Access Predicates
CR.EVALUATOR_ID=E.EVALUATOR_ID
INDEX EVALUATOR_PK UNIQUE SCAN 0
Access Predicates
E.EVALUATOR_ID=96
TABLE ACCESS CREDIT_REQUEST BY INDEX ROWID 11
INDEX CRDRQ_DONE_EVAL_TASK_REQ_NDX SKIP SCAN 10
Access Predicates
CR.EVALUATOR_ID=96
Filter Predicates
AND
CR.EVALUATOR_ID=96
CR.REQUEST_ID<>143462
INDEX REQUESTOR_PK RANGE SCAN 1
Access Predicates
CR.REQUEST_ID=R.REQUEST_ID
Filter Predicates
R.REQUEST_ID<>143462
TABLE ACCESS REQUESTOR BY INDEX ROWID 3
Filter Predicates
OR
R.SOC_SEC_NUM_TXT='XXXXXXXX'
AND
R.BIRTH_DT IS NOT NULL
R.LAST_NAME_TXT IS NOT NULL
R.FIRST_NAME_TXT IS NOT NULL
LOWER(R.FIRST_NAME_TXT)='test'
LOWER(R.LAST_NAME_TXT)='newprogram'
TO_CHAR(INTERNAL_FUNCTION(R.BIRTH_DT),'MM/DD/YYYY')='01/02/1960'
As a quick update to your query, you're going to want to refactor it to something like this:
SELECT
cr.client_app_id,
cr.personal_flg,
r.requestor_type_id
FROM
credit_request cr
inner join requestor r on
cr.request_id = r.request_id
inner join evaluator e on
cr.evaluator_id = e.evaluator_id
WHERE
cr.evaluator_id = 96
and cr.request_id != 143462
and (r.soc_sec_num_txt = 'xxxxxxxxx'
or (
lower(r.first_name_txt) = 'test'
and lower(r.last_name_txt) = 'newprogram'
and r.birth_dt = date '1960-01-02'
)
)
Firstly, joining by commas creates a cross join, which you want to avoid. Luckily, Oracle's smart enough to do it as an inner join since you specified join conditions, but you want to be explicit so you don't accidentally miss something.
Secondly, your is not null checks are pointless--if a column is null, any = check you do will return false for that row. In fact, any comparison with a null column, even null = null, returns false. You can try this with select 1 where null = null and select 1 where null is null. Only the second one returns a row.
Thirdly, Oracle's smart enough to compare dates with the ISO format (at least the last time I used it, it was). You can just do r.birth_dt = date '1960-01-02' and avoid doing a string format on that column.
That being said, your query isn't exactly poorly written in terms of egregious performance mistakes. What you want to look for are indices. Does evaluator have one on evaluator_id? Does credit_request? What types are they? Typically, evaluator will have one on the PK evaluator_id, and credit_request will have one for that column as well. The same goes for requestor and the request_id columns.
Other indices you may want to consider are all the fields you're using to filter. In this case, soc_sec_num_txt, first_name_txt, last_name_txt, birth_dt. Consider putting a multi-column index on the latter three, and a single column index on the soc_sec_num_txt column.
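Since the query compares lower(first_name_txt) and lower(last_name_txt), the multi-column index would have to be function-based to match those predicates literally. A sketch (the index names are made up):
CREATE INDEX requestor_ssn_ix ON requestor (soc_sec_num_txt);
CREATE INDEX requestor_name_dob_ix ON requestor
(lower(first_name_txt), lower(last_name_txt), birth_dt);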
After refactoring the query comes indexes, so following on from #eric's post:
credit_request:
You're joining this onto requestor on request_id, which I hope is unique. In your WHERE clause you then have a condition on evaluator_id, and you select client_app_id and personal_flg in the query. So you probably need a unique index on credit_request of (request_id, evaluator_id, client_app_id, personal_flg).
By putting the columns you're selecting into the index you avoid the BY INDEX ROWID step, which happens when you have selected your values from the index and then have to re-enter the table to pick up more information. If that information is already in the index then there's no need.
You're joining it onto evaluator on evaluator_id, which is included in the first index.
requestor:
This is being joined onto on request_id, and your WHERE clause includes soc_sec_num_txt, lower(first_name_txt), lower(last_name_txt) and birth_dt. So you need a unique, if possible, index on (request_id, soc_sec_num_txt). Because of the OR this is further complicated, since you should really have an index on as many of the conditions as possible. You're also selecting requestor_type_id.
In this case, to avoid a functional index with many columns, I'd index on (request_id, soc_sec_num_txt, birth_dt). If you have the space, time and inclination, then adding lower(first_name_txt) etc. to this may improve the speed, depending on how selective the column is. This means that if there are far more distinct values in, for instance, first_name_txt than in birth_dt, you'd be better off putting it in front of birth_dt in the index, so your query has less to scan if it's a non-unique index.
You'll notice that I haven't added the selected column to this index, as you're already going to have to go into the table anyway, so you gain nothing by adding it.
evaluator:
This is only being joined on evaluator_id, so you need a unique, if possible, index on this column.
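Pulling these suggestions together, a sketch of the indexes described above (the names are illustrative, and "unique if possible" applies only where your data actually allows it):
CREATE UNIQUE INDEX credit_request_cover_ix ON credit_request
(request_id, evaluator_id, client_app_id, personal_flg);
CREATE INDEX requestor_req_ssn_dob_ix ON requestor
(request_id, soc_sec_num_txt, birth_dt);
CREATE UNIQUE INDEX evaluator_id_ix ON evaluator (evaluator_id);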