I have a table T with some 500,000 records. It is a hierarchical table.
My goal is to update the table by self-joining it, based on a condition that expresses the parent-child relationship.
The update query is taking really long because the number of rows is so high. I have created a unique index on the columns that identify the rows to update (meaning X and Y). After creating the index the cost has dropped, but the query is still very slow.
This is my query format:
update T
set (a1, b1)
  = (select parent.a1, parent.b1
     from T parent, T child
     where parent.id = child.Parent_id
     and T.X = child.X
     and T.Y = child.Y)
After creating the index, the execution plan shows an index scan for CRS.PARENT but a full table scan for CRS.CHILD and for the update itself; as a result the query takes forever to complete.
Please suggest any tips or recommendations to solve this problem.
You are updating all 500,000 rows, so an index is a bad idea: 500,000 index lookups will take much longer than necessary.
You would be better served using a MERGE statement.
It is hard to tell exactly what your table structure is, but it would look something like this, assuming X and Y are the primary key columns in T (...could be wrong about that):
MERGE INTO T
USING ( SELECT TC.X,
TC.Y,
TP.A1,
TP.B1
FROM T TC
INNER JOIN T TP ON TP.ID = TC.PARENT_ID ) U
ON ( T.X = U.X AND T.Y = U.Y )
WHEN MATCHED THEN UPDATE SET T.A1 = U.A1,
T.B1 = U.B1;
Related
I ran the following query on the previous year's data and it took 3 hours; this year it took 13 days. I don't know why. Any help would be much appreciated.
I have just tested the queries on the old SQL Server and they complete in 3 hours, so the problem must have something to do with the new SQL Server I created. Do you have any ideas what the problem might be?
The query:
USE [ABCJan]
CREATE INDEX Link_Oct ON ABCJan2014 (Link_ref)
GO
CREATE INDEX Day_Oct ON ABCJan2014 (date_1)
GO
UPDATE ABCJan2014
SET ABCJan2014.link_id = LT.link_id
FROM ABCJan2014 MT
INNER JOIN [Central].[dbo].[LookUp_ABC_20142015] LT
ON MT.Link_ref = LT.Link_ref
UPDATE ABCJan2014
SET SumAvJT = ABCJan2014.av_jt * ABCJan2014.n
UPDATE ABCJan2014
SET ABCJan2014.DayType = LT2.DayType
FROM ABCJan2014 MT
INNER JOIN [Central].[dbo].[ABC_20142015_days] LT2
ON MT.date_1 = LT2.date1
With the following data structures:
ABCJan2014 (70 million rows - NO UNIQUE IDENTIFIER - Link_ref & date_1 together are unique)
Link_ID nvarchar (17)
Link_ref int
Date_1 smalldatetime
N int
Av_jt int
SumAvJT decimal(38,14)
DayType nvarchar (50)
LookUp_ABC_20142015
Link_ID nvarchar (17) PRIMARY KEY
Link_ref int INDEXED
Link_metres int
ABC_20142015_days
Date1 smalldatetime PRIMARY KEY & INDEXED
DayType nvarchar(50)
EXECUTION PLAN
It appears to be this part of the query that is taking such a long time.
Thanks again for any help, I'm pulling my hair out.
Create an index on the ABCJan2014 table, as it is currently a heap.
If you look at the execution plan, the time is spent in the actual update.
Look at the log file:
Is the log file on a fast disk?
Is the log file on the same physical disk as the data file?
Does the log file need to grow during the update?
Size the log file to roughly half the size of the data file.
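If you want to check the file sizes and pre-grow the log before the run, a minimal sketch (the log file name ABCJan_log and the 20000MB target are assumptions; use the name and a size that fit your database):
USE [ABCJan]
GO
-- check the current size and free space of each file (size is in 8 KB pages)
SELECT name,
       size / 128 AS size_mb,
       size / 128 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128 AS free_mb
FROM sys.database_files;

-- pre-grow the log so the big update does not have to wait on autogrowth
ALTER DATABASE [ABCJan]
MODIFY FILE (NAME = ABCJan_log, SIZE = 20000MB);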
As far as indexes go, test and tune with the queries below.
If the join columns are indexed, there is not much more to do here.
select count(*)
FROM ABCJan2014 MT
INNER JOIN [Central].[dbo].[LookUp_ABC_20142015] LT
ON MT.Link_ref = LT.Link_ref
select count(*)
FROM ABCJan2014 MT
INNER JOIN [Central].[dbo].[ABC_20142015_days] LT2
ON MT.date_1 = LT2.date1
Start with a TOP (1000) to get the update tuning working.
For grins, please give this a try
and post the resulting query plan
(do NOT add an index on ABCJan2014.link_id):
UPDATE TOP (1000) MT
SET MT.link_id = LT.link_id
FROM ABCJan2014 MT
JOIN [Central].[dbo].[LookUp_ABC_20142015] LT
ON MT.Link_ref = LT.Link_ref
AND MT.link_id <> LT.link_id
If LookUp_ABC_20142015 is not being actively updated, then add a NOLOCK hint:
JOIN [Central].[dbo].[LookUp_ABC_20142015] LT with (nolock)
nvarchar(17) for a PK seems strange to me.
Why nvarchar - do you really have Unicode data?
Why not just char(17) and let it allocate the space?
Why have 3 update statements when you can do it in one?
UPDATE MT
SET MT.link_id = CASE WHEN LT.link_id IS NULL THEN MT.link_id ELSE LT.link_id END,
MT.SumAvJT = MT.av_jt * MT.n,
MT.DayType = CASE WHEN LT2.DayType IS NULL THEN MT.DayType ELSE LT2.DayType END
FROM ABCJan2014 MT
LEFT OUTER JOIN [Central].[dbo].[LookUp_ABC_20142015] LT
ON MT.Link_ref = LT.Link_ref
LEFT OUTER JOIN [Central].[dbo].[ABC_20142015_days] LT2
ON MT.date_1 = LT2.date1
Also, I would create only one index for the join. Create the following index after the updates.
CREATE INDEX Day_Oct ON ABCJan2014 (date_1)
GO
Before you run it, compare the execution plans by putting the update query above and your 3 update statements together in one query window and choosing Display Estimated Execution Plan. It will show the estimated percentages and you'll be able to tell whether the new one is any better (it is if it comes in under 50%).
Also, it looks like the query is slow because it's doing a Hash Match. Please add a PK index on [LookUp_ABC_20142015].Link_ref.
[LookUp_ABC_20142015].Link_ID is a bad choice for PK, so drop the PK on that column.
Then add an index to [ABCJan2014].Link_ref.
See if that makes any improvement.
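A hedged sketch of those index changes (PK_LookUp_Link_ID is a made-up constraint name - look up the real one with sp_helpconstraint first - and only make Link_ref a primary key if it really is unique):
-- drop the PK on Link_ID
ALTER TABLE [Central].[dbo].[LookUp_ABC_20142015]
DROP CONSTRAINT PK_LookUp_Link_ID;

-- key the lookup table on the column actually used for the join
ALTER TABLE [Central].[dbo].[LookUp_ABC_20142015]
ADD CONSTRAINT PK_LookUp_Link_ref PRIMARY KEY CLUSTERED (Link_ref);

-- index the join column on the big table
-- (the question's Link_Oct index already covers this; shown for completeness)
CREATE INDEX IX_ABCJan2014_Link_ref ON ABCJan2014 (Link_ref);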
If you are going to update a table you need a unique identifier, so put one on ABCJan2014 ASAP, especially since it is so large. There is no reason why you can't create a unique index on the fields that together compose the unique record. In the future, never design a table without a unique index or PK; that is simply asking for trouble, both in processing time and, more importantly, in data integrity.
When you have a lot of updating to do to a large table, it is sometimes more effective to work in batches. You don't tie up the table in a lock for a long period, and sometimes it is even faster because of how the database engine works the problem internally. Consider processing 50,000 records at a time in a loop or cursor (you may need to experiment to find the sweet spot; there is generally a point past which a batch starts to take significantly longer).
UPDATE ABCJan2014
SET ABCJan2014.link_id = LT.link_id
FROM ABCJan2014 MT
JOIN [Central].[dbo].[LookUp_ABC_20142015] LT ON MT.Link_ref = LT.Link_ref
The code above will update all records produced by the join. If some of the records already have the link_id, you might save considerable time by only updating the records where link_id is null or ABCJan2014.link_id <> LT.link_id. You have a 70 million record table; you don't need to update records that do not need a change. The same of course applies to your other updates as well.
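Putting the batching idea from above and that filter together, a rough sketch (the 50,000 batch size is just a starting point to tune):
WHILE 1 = 1
BEGIN
    UPDATE TOP (50000) MT
    SET MT.link_id = LT.link_id
    FROM ABCJan2014 MT
    INNER JOIN [Central].[dbo].[LookUp_ABC_20142015] LT
        ON MT.Link_ref = LT.Link_ref
    WHERE MT.link_id IS NULL
       OR MT.link_id <> LT.link_id;   -- skip rows that already hold the right value

    IF @@ROWCOUNT = 0 BREAK;          -- nothing left to change
END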
Not knowing how much data gets added to this table or how often these numbers need updating, consider that SumAvJT might be best defined as a persisted computed column; then it gets updated automatically when one of the two values changes. This wouldn't help if the table is bulk loaded, but it might if records come in individually.
In the execution plan, it makes recommendations for indexes being added. Have you created those indexes? Also, take a look at your older server's data structure - script out the table structures including indexes - and see if there are differences between them. At some point somebody's possibly built an index on your old server's tables to make this more efficient.
That said, what volume of data are you looking at? If you're looking at significantly different volumes of data, it could be that the execution plans generated by the servers differ significantly. SQL Server doesn't always guess right, when it builds the plans.
Also, are you using prepared statements (i.e., stored procedures)? If so, it's possible that the cached data access plan is simply out of date and needs to be refreshed, or you need to update statistics on the tables and then run the procedure WITH RECOMPILE so that a new data access plan is generated.
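If you go down that route, it would look roughly like this (usp_UpdateABCJan2014 is a made-up procedure name; use your own):
-- refresh statistics so the optimizer sees current row counts and distributions
UPDATE STATISTICS ABCJan2014 WITH FULLSCAN;

-- then throw away the cached plan so the next execution compiles a fresh one
EXEC sp_recompile N'usp_UpdateABCJan2014';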
Where is the [Central] server located?
Is it possible to duplicate your [Central].[dbo].[LookUp_ABC_20142015] and [Central].[dbo].[ABC_20142015_days] tables locally?
1) Do:
select * into [ABC_20142015_days] from [Central].[dbo].[ABC_20142015_days]
select * into [LookUp_ABC_20142015] from [Central].[dbo].[LookUp_ABC_20142015]
2) Recreate the indexes on [ABC_20142015_days] and [LookUp_ABC_20142015]...
3) Rewrite your updates by removing the "[Central].[dbo]." prefix!
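For steps 2) and 3), a sketch (index names are made up):
-- 2) index the local copies on the join columns
CREATE INDEX IX_LookUp_Link_ref ON [LookUp_ABC_20142015] (Link_ref);
CREATE INDEX IX_Days_Date1 ON [ABC_20142015_days] (Date1);

-- 3) the update now only touches local tables
UPDATE MT
SET MT.link_id = LT.link_id
FROM ABCJan2014 MT
INNER JOIN [LookUp_ABC_20142015] LT
    ON MT.Link_ref = LT.Link_ref;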
Just after writing this solution, I found another possibility, but I'm not sure whether it applies to your server: add the REMOTE join hint... I have never used it, but you can find the documentation at https://msdn.microsoft.com/en-us/library/ms173815.aspx
Hoping this helps you...
All the previous answers that suggest improving the structure of the tables and the queries themselves are good to know, there is no doubt about that.
However, your question is why the SAME data/structure and the SAME queries give this huge difference.
So before you look at optimising the SQL you must find the real cause, and the real cause is hardware, software, or configuration. Start by comparing the SQL Server instance with the old one, then move on to the hardware and benchmark it. Lastly look at the software for differences.
Only when you have found the actual cause can you start improving the SQL itself.
ALTER TABLE dbo.ABCJan2014
ADD SumAvJT AS av_jt * n --PERSISTED
CREATE INDEX ix_Link_ref ON ABCJan2014 (Link_ref) INCLUDE (link_id)
GO
CREATE INDEX ix_date_1 ON ABCJan2014 (date_1) INCLUDE (DayType)
GO
UPDATE ABCJan2014
SET ABCJan2014.link_id = LT.link_id
FROM ABCJan2014 MT
JOIN [Central].[dbo].[LookUp_ABC_20142015] LT ON MT.Link_ref = LT.Link_ref
UPDATE ABCJan2014
SET ABCJan2014.DayType = LT2.DayType
FROM ABCJan2014 MT
JOIN [Central].[dbo].[ABC_20142015_days] LT2 ON MT.date_1 = LT2.date1
I guess there is a lot of page splitting. Can you try this?
SELECT
(SELECT LT.link_id FROM [Central].[dbo].[LookUp_ABC_20142015] LT
WHERE MT.Link_ref = LT.Link_ref) AS Link_ID,
Link_ref,
Date_1,
N,
Av_jt,
MT.av_jt * MT.n AS SumAvJT,
(SELECT LT2.DayType FROM [Central].[dbo].[ABC_20142015_days] LT2
WHERE MT.date_1 = LT2.date1) AS DayType
INTO ABCJan2014new
FROM ABCJan2014 MT
In addition to all the answers above:
i) Even 3 hours is a lot. I mean, even if a query takes 3 hours, I would first check the requirement and revise it, and raise the issue. Of course I would also optimise the query.
In your query, none of the updates appears to be doing anything serious.
As Devart pointed out, one of the columns could be a computed column.
ii) Try running the other queries on the new server and compare.
iii) Rebuild the indexes.
iv) Use WITH (NOLOCK) in your joins.
v) Create an index on the LookUp_ABC_20142015 column Link_ref (see the sketch after this list).
vi) A clustered index on nvarchar(17) or datetime is always a bad idea.
Joins on datetime or varchar columns always take time.
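A sketch for points iii) and v) (the index name is made up):
-- v) index the lookup table on the column used in the join
CREATE INDEX IX_LookUp_Link_ref
    ON [Central].[dbo].[LookUp_ABC_20142015] (Link_ref);

-- iii) rebuild the existing indexes on the big table
ALTER INDEX ALL ON ABCJan2014 REBUILD;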
Use the alias instead of repeating the table name in the UPDATE statements:
USE [ABCJan]
CREATE INDEX Link_Oct ON ABCJan2014 (Link_ref)
GO
CREATE INDEX Day_Oct ON ABCJan2014 (date_1)
GO
UPDATE MT
SET MT.link_id = LT.link_id
FROM ABCJan2014 MT
INNER JOIN [Central].[dbo].[LookUp_ABC_20142015] LT
ON MT.Link_ref = LT.Link_ref
UPDATE ABCJan2014
SET SumAvJT = av_jt * n
UPDATE MT
SET MT.DayType = LT2.DayType
FROM ABCJan2014 MT
INNER JOIN [Central].[dbo].[ABC_20142015_days] LT2
ON MT.date_1 = LT2.date1
Frankly, I think you've already answered your own question.
ABCJan2014 (70 million rows - NO UNIQUE IDENTIFIER - Link_ref & date_1 together are unique)
If you know the combination is unique, then by all means 'enforce' it. That way the server will know it too and can make use of it.
Query Plan showing the need for an index on [ABCJAN2014].[date_1] 3 times in a row!
You shouldn't believe everything that MSSQL tells you, but you should at least give it a try =)
Combining both I'd suggest you add a PK to the table on the fields [date_1] and [Link_ref] (in that order!). Mind: adding a Primary Key -- which is essentially a clustered unique index -- will take a while and require a lot of space as the table pretty much gets duplicated along the way.
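Something along these lines (the constraint name is an assumption, and both columns must already be NOT NULL):
ALTER TABLE ABCJan2014
ADD CONSTRAINT PK_ABCJan2014
    PRIMARY KEY CLUSTERED (date_1, Link_ref);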
As far as your query goes, you could put all 3 updates in 1 statement (similar to what joordan831 suggests) but you should take care about the fact that a JOIN might limit the number of rows affected. As such I'd rewrite it like this:
UPDATE ABCJan2014
SET ABCJan2014.link_id = (CASE WHEN LT.Link_ref IS NULL THEN ABCJan2014.link_id ELSE LT.link_id END), -- update when there is a match, otherwise re-use existing value
ABCJan2014.DayType = (CASE WHEN LT2.date1 IS NULL THEN ABCJan2014.DayType ELSE LT2.DayType END), -- update when there is a match, otherwise re-use existing value
SumAvJT = ABCJan2014.av_jt * ABCJan2014.n
FROM ABCJan2014 MT
LEFT OUTER JOIN [Central].[dbo].[LookUp_ABC_20142015] LT
ON MT.Link_ref = LT.Link_ref
LEFT OUTER JOIN [Central].[dbo].[ABC_20142015_days] LT2
ON MT.date_1 = LT2.date1
which should have the same effect as running your original 3 updates sequentially; but hopefully taking a lot less time.
PS: Going by the query plans, you already have indexes on the tables you JOIN to ([LookUp_ABC_20142015] & [ABC_20142015_days]) but they seem to be non-unique (and not always clustered). Assuming they're suffering from the 'we know it's unique but the server doesn't' illness, it would be advisable to also add a Primary Key to those tables on the fields you join to, both for data-integrity and performance reasons!
Good luck.
Update data
set
data.abcKey=surrogate.abcKey
from [MyData].[dbo].[fAAA_Stage] data with(nolock)
join [MyData].[dbo].[dBBB_Surrogate] surrogate with(nolock)
on data.MyKeyID=surrogate.MyKeyID
The surrogate table must have a non-clustered index with a unique key; myKeyID must be created as a unique non-clustered key. The performance improvements are significant.
I am using a query where one of the join columns has a clustered index and the other has a non-clustered index. The query is taking a long time. Is the reason that I am using different types of indexes?
SELECT @NoOfOldBills = COUNT(*)
FROM Billing_Detail D, Meter_Info m, Meter_Reading mr
WHERE D.Meter_Reading_ID = mr.id
AND m.id = mr.Meter_Info_ID
AND m.id = @Meter_Info_ID
AND BillType = 'Meter'
IF (@NoOfOldBills > 0) BEGIN
SELECT TOP 1 @PReadingDate = Bill_Date
FROM Billing_Detail D, Meter_Info m, Meter_Reading mr
WHERE D.Meter_Reading_ID = mr.id
AND m.id = mr.Meter_Info_ID
AND m.id = @Meter_Info_ID
AND billtype = 'Meter'
ORDER BY Bill_Date DESC
END
Without knowing more details and the context it's tricky to advise, but it looks like you are trying to find the date of the most recent bill. You could probably rewrite those two queries as one, which would improve performance significantly (assuming there are some old bills).
I would suggest something like this - which in addition to probably performing better, is a little easier to read!
SELECT COUNT(*) NoOfOldBills, MAX(d.Bill_Date) LatestBillDate FROM Billing_Detail d
INNER JOIN Meter_Reading mr ON mr.id=d.Meter_Reading_ID
INNER JOIN Meter_Info m ON m.Id=mr.Meter_Info_ID
WHERE
m.id = @Meter_Info_ID AND billtype = 'Meter'
The reason is not that you have different types of indexes. Since you said you have clustered indexes on all primary keys, you should be fine there. To support this query, you would also need an index with two columns, on BillType and Bill_Date, to cut down the time.
Hopefully they are in the same table. Otherwise, you may need a few indexes before you finally arrive at one that has both columns.
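Assuming both columns live on Billing_Detail (the query leaves BillType unqualified, so this is a guess), the index would look roughly like this:
CREATE NONCLUSTERED INDEX IX_Billing_Detail_BillType_Bill_Date
    ON Billing_Detail (BillType, Bill_Date);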
OK, there are a number of things here. First, are the indexes you have relevant (or as relevant as they can be)? There should be a clustered index on each table - a non-clustered index does not work effectively if the table has no clustered index, because non-clustered indexes use the clustered key to identify rows.
Does your index cover only one column? The order of the columns in an index matters to SQL Server: the leftmost column should (in general) be the one with the highest selectivity (the one that divides the data into the smallest groups). Does either index cover all the columns referred to in the query? (This is known as a covering index; Google it for more info.)
In general indexes should be wide: if SQL Server has an index on col1, col2, col3, col4 and another on col1, col2, the latter is redundant, as its information is fully contained in the first and SQL Server understands this.
Are your statistics up to date? SQL Server can and will choose a bad execution plan if the statistics are stale. What does the execution plan look like (in SSMS: Query > Include Actual Execution Plan)?
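To see when the statistics were last updated, something like this works (shown here for Billing_Detail; repeat for the other tables):
SELECT s.name AS stat_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('Billing_Detail');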
I am performing an update with a query like this:
UPDATE (SELECT h.m_id,
m.id
FROM h
INNER JOIN m
ON h.foo = m.foo)
SET m_id = id
WHERE m_id IS NULL
Some info:
Table h is roughly ~5 million rows
All rows in table h have NULL values for m_id
Table m is roughly ~500 thousand rows
m_id on table h is an indexed foreign key pointing to id on table m
id on table m is the primary key
There are indexes on m.foo and h.foo
The EXPLAIN PLAN for this query indicated a hash join and full table scans, but I'm no DBA, so I can't really interpret it very well.
The query itself ran for several hours and did not complete. I would have expected it to complete in no more than a few minutes. I've also attempted the following query rewrite:
UPDATE h
SET m_id = (SELECT id
FROM m
WHERE m.foo = h.foo)
WHERE m_id IS NULL
The EXPLAIN PLAN for this mentioned ROWID lookups and index usage, but it also went on for several hours without completing. I've also always been under the impression that queries like this would cause the subquery to be executed for every result from the outer query's predicate, so I would expect very poor performance from this rewrite anyway.
Is there anything wrong with my approach, or is my problem related to indexes, tablespace, or some other non-query-related factor?
Edit:
I'm also having abysmal performance from simple count queries like this:
SELECT COUNT(*)
FROM h
WHERE m_id IS NULL
These queries are taking anywhere from ~30 seconds to sometimes ~30 minutes(!).
I am noticing no locks, but the tablespace for these tables is sitting at 99.5% usage (only ~6MB free) right now. I've been told that this shouldn't matter as long as indexes are being used, but I don't know...
Some points:
Oracle does not index NULL values (it will index a NULL that is part of a globally non-null tuple, but that's about it).
Oracle is going for a HASH JOIN because of the size of both h and m. This is likely the best option performance-wise.
The second UPDATE might get Oracle to use indexes, but then Oracle is usually smart about merging subqueries. And it would be a worse plan anyway.
Do you have recent, reasonable statistics for your schema? Oracle really needs decent statistics.
In your execution plan, which is the first table in the HASH JOIN? For best performance it should be the smaller table (m in your case). If you don't have good cardinality statistics, Oracle will get messed up. You can force Oracle to assume fixed cardinalities with the cardinality hint, it may help Oracle get a better plan.
For example, in your first query:
UPDATE (SELECT /*+ cardinality(h 5000000) cardinality(m 500000) */
h.m_id, m.id
FROM h
INNER JOIN m
ON h.foo = m.foo)
SET m_id = id
WHERE m_id IS NULL
In Oracle, FULL SCAN reads not only every record in the table, it basically reads all storage allocated up to the maximum used (the high water mark in Oracle documentation). So if you have had a lot of deleted rows your tables might need some cleaning up. I have seen a SELECT COUNT(*) on an empty table consume 30+ seconds because the table in question had like 250 million deleted rows. If that is the case, I suggest analyzing your specific case with a DBA, so he/she can reclaim space from deleted rows and lower the high water mark.
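If deleted space and a high water mark do turn out to be the problem, the usual cleanup looks something like this (it needs an ASSM tablespace, and it is worth testing on a copy first):
-- shrink the segment and lower the high water mark in place
ALTER TABLE h ENABLE ROW MOVEMENT;
ALTER TABLE h SHRINK SPACE;

-- or rebuild the table (this invalidates its indexes, which must then be rebuilt)
-- ALTER TABLE h MOVE;
-- ALTER INDEX <index_name> REBUILD;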
As far as I remember, a WHERE m_id IS NULL performs a full-table scan, since NULL values cannot be indexed.
Full-table scan means, that the engine needs to read every record in the table to evaluate the WHERE condition, and cannot use an index.
You could add a virtual column that is set to a non-null value when m_id IS NULL, index that column, and use it in the WHERE condition.
Then you could also move the WHERE condition from the UPDATE statement to the sub-select, which will probably make the statement faster.
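A sketch of the virtual-column idea (this needs Oracle 11g or later; the column and index names are made up):
-- the expression is NULL for rows that already have an m_id,
-- so those rows never enter the index and the index stays small
ALTER TABLE h ADD (
  m_id_missing NUMBER
    GENERATED ALWAYS AS (CASE WHEN m_id IS NULL THEN 1 END) VIRTUAL
);

CREATE INDEX ix_h_m_id_missing ON h (m_id_missing);

-- then filter on the indexed column instead of "m_id IS NULL":
-- ... WHERE h.m_id_missing = 1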
Since JOINs are expensive, rewriting INNER JOIN m ON h.foo = m.foo as
WHERE h.foo IN (SELECT m.foo FROM m WHERE m.foo IS NOT NULL)
may also help.
For large tables, MERGE is often much faster than UPDATE. Try this (untested):
MERGE INTO h USING
(SELECT h.h_id,
m.id as new_m_id
FROM h
INNER JOIN m
ON h.foo = m.foo
WHERE h.m_id IS NULL
) new_data
ON (h.h_id = new_data.h_id)
WHEN MATCHED THEN
UPDATE SET h.m_id = new_data.new_m_id;
Try the undocumented hint /*+ BYPASS_UJVC */. If it works, add a UNIQUE/PK constraint on m.foo.
I would update the table in iterations, for example by adding a condition such as WHERE h.date_created > SYSDATE - 30; after that finishes I would run the same query with the condition changed to WHERE h.date_created BETWEEN SYSDATE - 60 AND SYSDATE - 30, and so on. If you don't have a column like date_created, maybe there's another column you can filter by, for example: WHERE m.foo = h.foo AND m.foo BETWEEN 1 AND 10.
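A sketch of what that iteration could look like (date_created is hypothetical, as noted above; adjust the window size to taste):
-- slice 1: the most recent month
UPDATE h
SET m_id = (SELECT id FROM m WHERE m.foo = h.foo)
WHERE m_id IS NULL
  AND h.date_created > SYSDATE - 30;
COMMIT;

-- slice 2: the month before that, and so on
UPDATE h
SET m_id = (SELECT id FROM m WHERE m.foo = h.foo)
WHERE m_id IS NULL
  AND h.date_created BETWEEN SYSDATE - 60 AND SYSDATE - 30;
COMMIT;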
Only the actual plan output can explain why the cost of this update is high, but an educated guess is that both tables are very big, that there are many NULL values, and that there are a lot of matches (m.foo = h.foo)...
I have two tables, one small (~ 400 rows), one large (~ 15 million rows), and I am trying to find the records from the small table that don't have an associated entry in the large table.
I am encountering massive performance issues with the query.
The query is:
SELECT * FROM small_table WHERE NOT EXISTS
(SELECT NULL FROM large_table WHERE large_table.small_id = small_table.id)
The column large_table.small_id references small_table's id field, which is its primary key.
The query plan shows that the foreign key index is used for the large_table:
PLAN (large_table (RDB$FOREIGN70))
PLAN (small_table NATURAL)
Statistics have been recalculated for indexes on both tables.
The query takes several hours to run. Is this expected?
If so, can I rewrite the query so that it will be faster?
If not, what could be wrong?
I'm not sure about Firebird, but in other DBs often a join is faster.
SELECT *
FROM small_table st
LEFT JOIN large_table lt
ON st.id = lt.small_id
WHERE lt.small_id IS NULL
Maybe give that a try?
Another option, if you're really stuck, and depending on the situation this needs to be run in, is to take the small_id column out of the large_table, possibly into a temp table, and then do a left join / EXISTS query.
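A rough sketch of that approach (the temporary-table syntax assumes Firebird 2.1 or later; a plain permanent table works just as well):
-- pull the distinct keys out of the large table once
CREATE GLOBAL TEMPORARY TABLE tmp_small_ids (small_id INTEGER)
  ON COMMIT PRESERVE ROWS;

INSERT INTO tmp_small_ids (small_id)
  SELECT DISTINCT small_id FROM large_table;

-- then anti-join against the much smaller key list
SELECT st.*
FROM small_table st
LEFT JOIN tmp_small_ids t ON t.small_id = st.id
WHERE t.small_id IS NULL;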
If the large table only has relatively few distinct values for small_id, the following might perform better:
select *
from small_table st left outer join
(select distinct small_id
from large_table
) lt
on lt.small_id = st.id
where lt.small_id is null
In this case, performance would be better with a full scan of the large table and then index lookups in the small table -- the opposite of what it is doing now. The DISTINCT could be satisfied by an index scan on the large table, followed by lookups against the primary key index on the small table.
I have a query which is not using my indexes. Can someone say why?
explain plan set statement_id = 'bad8' for
select
g1.g1id,a.a1id from atable a,
(
select
phone,address,g1id from gtable g
where
g.active = 0 and
(g.name is not null) AND
(SYSDATE - g.CTIME <= 2*365)
) g1
where
(
(a.phone.ph1 = g1.phone.ph1 and
a.phone.ph2 = g1.phone.ph2 and
a.phone.ph3 = g1.phone.ph3
)
OR
(a.address.ad1 = g1.address.ad1 and a.address.ad2 = g1.address.ad2)
)
In both the tables : atable,gtable I have these indexes :
1. On phone.ph1,phone.ph2,phone.ph3
2. On address.ad1,address.ad2
phone,address are of custom data types.
Using Oracle 11g.
Here is the explain plan query and output :
SELECT cardinality "Rows",
lpad(' ',level-1)||operation||' '||
options||' '||object_name "Plan"
FROM PLAN_TABLE
CONNECT BY prior id = parent_id
AND prior statement_id = statement_id
START WITH id = 0
AND statement_id = 'bad8'
ORDER BY id;
Result:
Rows         Plan
490191190    SELECT STATEMENT
null           CONCATENATION
490190502        HASH JOIN
511841             TABLE ACCESS FULL gtable
41332965           PARTITION LIST ALL
41332965             TABLE ACCESS FULL atable
688              HASH JOIN
376893             TABLE ACCESS FULL gtable
41332965           PARTITION LIST ALL
41332965             TABLE ACCESS FULL atable
Both atable,gtable have more than 10 million rows each.
Most values in columns phone and address don't repeat.
Which indexes Oracle chooses depends on many factors, including things you haven't mentioned in your question, such as the number of rows in the tables, the frequency of values within a column, and whether you have separate or combined indexes when more than one column is indexed.
Having said that, I suppose the main reasons your indexes aren't used are:
You don't join directly with GTABLE. Instead you join with a view over it that has three additional WHERE clauses which aren't part of the index, and that makes the index less effective in this situation.
The JOIN condition includes an OR, which makes it difficult to use indexes.
Update:
If Oracle used your indexes to do the join - which is already very difficult due to the OR condition - it would end up with a huge number of ROWIDs. For each ROWID it would then have to fetch the full row. Since a full table scan can easily be up to 50 times faster than fetching by ROWID (I don't know the exact factor Oracle uses), it will only use the indexes if it reliably knows that the join will reduce the number of rows to fetch by a factor of 50.
In your case, the remaining WHERE conditions (g.active = 0, g.name is not null, SYSDATE - g.CTIME <= 2*365) aren't represented in the indexes, so they have to be applied after the join and after the GTABLE rows have been fetched. This makes it even harder to get a result set 50 times smaller than a full table scan.
So I'm pretty sure the Oracle cost estimate is correct, i.e. using the indexes would result in a more expensive query and an even longer execution time.
We can say "your query does not use your indexes because does not need them". A hash join is better. To use your indexes, oracle need to full scan them(4 indexes), make two joins, make a rowid or, and after that read from tables probably many blocks. If he belives that the result has many rows, the CBO coose the full scans, because is faster.
There are no conditions that reduce the number of rows taken from tables. There is no range scan. It must do full scans.