Oracle SQL*Loader discarding data and showing NULL values

I have an issue with SQL*Loader. I am working with a dataset of 99 columns. Every field is loaded into a VARCHAR2(500 BYTE) column - I went through the data manually to make sure each value fits.
CSV properties:
Comma delimited
Line terminated by CRLF
Text Qualifier '"'
The issue is, when I load the data manually (I use Toad for Oracle), the data appears just fine. With SQL*Loader, however, the load reports success, and the table ends up with all the rows - but all the data is missing: every column shows NULL. Can anyone help with this?
This is the ctl file code:
OPTIONS (direct=true)
LOAD DATA
INFILE 'C:\Paths\DATA_PRCS\MarketData\FitchRatings\DBUpload\test.csv'
DISCARDFILE 'C:\Paths\DATA_PRCS\MarketData\FitchRatings\DBUploadProcess\ErrorFiles\FitchIssue_Errors.csv'
TRUNCATE INTO TABLE MARKETDATA.PRLD_FITCH_ISSUE_DATA
fields terminated by "," OPTIONALLY ENCLOSED BY '"' trailing nullcols
(
REPORT_DATE_TIME
, AGENT_COMMON_ID
, AGENT_CUSIP
, AGENT_LEI
, CUSTOMER_IDENTIFIER
, MARKET_SECTOR_ID
, COUNTRY_NAME
, ISSUER_ID
, ISSUER_NAME
, ISSUE_RECORD_CHANGE_CODE_DATE
, FITCH_ISSUE_ID_NUMBER
, COUNTRY_CODE
, STATE_PROVINCE
, CUSIP_IDENTIFIER
, ISIN_IDENTIFIER
, ISMA_IDENTIFIER
, LOANX_ID
, COMMON_NUMBER
, WERTPAPIER_IDENTIFIER
, RECORD_GROUP_TYPE_CODE
, ISSUE_DEBT_LEVEL_CODE
, CLASS_TYPE
, ISSUE_DESCRIPTION
, ISSUE_MATURITY_DATE
, ISSUE_TRANCHE_SERIES
, ISSUE_CLASS
, ISSUE_CURRENCY_CODE
, ISSUE_AMOUNT
, ISSUE_COUPON_TYPE
, ISSUE_COUPON_FIXED_RATE
, ISSUE_COUP_NON_FIXED_RATE_DESC
, ISSUE_COUPON_INDEX_DESCRIPTION
, ISSUE_COUPON_SPREAD
, ISSUE_COUPON_CAPPED_RATE
, ENHANCEMENT_TYPE
, ENHANCEMENT_PROVIDER
, PROJECT
, PRIVATE_PLACEMENT_144A_CODE
, US_FED_TAX_EXEMPT_STATUS_CODE
, ISSUE_RECORD_CHANGE_CODE
, LT_ISSUE_RATING
, LT_ISSUE_RATING_ACTION
, LT_ISSUE_RATING_EFFECTIVE_DATE
, LT_ISSUE_RATING_ALERT_CODE
, LT_ISSUE_RATING_SOL_STATUS
, ISSUE_RECOVERY_RATING
, ISSUE_DISTRESSED_RECOV_RATING
, UNENHANCED_LT_ISSUE_RATING
, UNENHANCED_LTR_ACTION
, UNENHANCED_LTR_EFFECTIVE_DATE
, UNENHANCED_LTR_ALERT_CODE
, UNENHANCED_LTR_SOL_STATUS
, LT_NATIONAL_ISSUE_RATING
, LT_NATIONAL_RATING_ACTION
, LT_NATL_RATING_EFFECTIVE_DATE
, LT_NATIONAL_RATING_ALERT_CODE
, LT_NATIONAL_RATING_SOL_STATUS
, ST_ISSUE_RATING
, ST_ISSUE_RATING_ACTION
, ST_ISSUE_RATING_EFFECTIVE_DATE
, ST_ISSUE_RATING_ALERT_CODE
, ST_ISSUE_RATING_SOL_STATUS
, UNENHANCED_ST_ISSUE_RATING
, UNENHANCED_STR_ACTION
, UNENHANCED_STR_EFFECTIVE_DATE
, UNENHANCED_STR_ALERT_CODE
, UNENHANCED_STR_SOL_STATUS
, ST_NATIONAL_ISSUE_RATING
, ST_NATIONAL_RATING_ACTION
, ST_NATL_RATING_EFFECTIVE_DATE
, ST_NATIONAL_RATING_ALERT_CODE
, ST_NATIONAL_RATING_SOL_STATUS
, ENHANCED_LTR
, ENHANCED_LTR_ACTION
, ENHANCED_LTR_EFFECTIVE_DATE
, ENHANCED_LTR_ALERT_CODE
, ENHANCED_LTR_SOL_STATUS
, ENHANCED_STR
, ENHANCED_STR_ACTION
, ENHANCED_STR_EFFECTIVE_DATE
, ENHANCED_STR_ALERT_CODE
, ENHANCED_STR_SOL_STATUS
, SECURITY_IDENTIFIER_TYPE
, ENDORSEMENT_COMPLIANCE
, RATINGS_SUFFIX
, CLO_SECTOR
, CLO_INDUSTRY
, ALTERNATE_CUSIP
, ALTERNATE_ISIN
, DATA_ENTRY_TIMESTAMP expression "(select SYSDATE from dual)"
)

Rather simple in the end. The most important thing to realize was that the file was using Unicode characters. You have to specify the character set in the ctl file - CHARACTERSET UTF16 in this example. Thanks to everyone for trying to help!
load data CHARACTERSET UTF8 TRUNCATE INTO TABLE "GLOBALIZATIONRESOURCE" FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
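For clarity, here is how that clause fits into the control file from the question - a sketch only: the clause goes after LOAD DATA and before INFILE, and UTF16 is an assumption here (use UTF8, as in the snippet above, if the file is actually UTF-8):

```
OPTIONS (direct=true)
LOAD DATA
CHARACTERSET UTF16
INFILE 'C:\Paths\DATA_PRCS\MarketData\FitchRatings\DBUpload\test.csv'
DISCARDFILE 'C:\Paths\DATA_PRCS\MarketData\FitchRatings\DBUploadProcess\ErrorFiles\FitchIssue_Errors.csv'
TRUNCATE INTO TABLE MARKETDATA.PRLD_FITCH_ISSUE_DATA
FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' TRAILING NULLCOLS
(
-- column list unchanged from the question
...
)
```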


Oracle get UNIQUE constraint violation error too late

What should I check when the Oracle server takes more than 20 seconds to return a UNIQUE constraint violation error for specific data?
One of our processes handles over 30,000 records a day across multiple processes, and it usually gets the UNIQUE constraint violation error within 1 second,
but for specific data it takes more than 20 seconds to return the error.
The query is shown below (only the table names have been modified).
MERGE
INTO TableA S
USING (
SELECT NVL(:sccm_cd , ' ') SCCM_CD
, NVL(:oder_dt , ' ') ODER_DT
, NVL(:mrkt_dstn_cd, ' ') MRKT_DSTN_CD
, NVL(:oder_no , ' ') ODER_NO
, NVL(:cncd_unpr , 0) CNCD_UNPR
, B.SLBY_FEE_GRD_CD
, B.ACCT_MNGR_EMPL_NO
, C.AO_FEE_GRD_CD
FROM DUAL A
, TableB B
, TableC C
WHERE 1 = 1
AND B.SCCM_CD = :sccm_cd
AND B.ACNO = :acno
AND C.SCCM_CD(+) = B.SCCM_CD
AND C.EMPL_NO(+) = B.ACCT_MNGR_EMPL_NO
) T
ON ( S.sccm_cd = T.sccm_cd
AND S.oder_dt = T.oder_dt
AND S.mrkt_dstn_cd = T.mrkt_dstn_cd
AND S.oder_no = T.oder_no
AND S.cncd_unpr = T.cncd_unpr
)
WHEN MATCHED THEN
UPDATE
SET S.cncd_qty = S.cncd_qty + NVL(:cncd_qty ,0)
, S.slby_fee = S.slby_fee + NVL(:slby_fee ,0)
, S.slby_fee_srtx = S.slby_fee_srtx + NVL(:slby_fee_srtx,0)
, S.idx_fee_amt = S.idx_fee_amt + NVL(:idx_fee_amt ,0)
, S.cltr_fee = S.cltr_fee + NVL(:cltr_fee ,0)
, S.trtx = S.trtx + NVL(:trtx ,0)
, S.otc_fee = S.otc_fee + NVL(:otc_fee ,0)
, S.wht_fee = S.wht_fee + NVL(:wht_fee ,0)
WHEN NOT MATCHED THEN
INSERT (
sccm_cd
, oder_dt
, mrkt_dstn_cd
, oder_no
, cncd_unpr
, acno
, item_cd
, slby_dstn_cd
, md_dstn_cd
, cncd_qty
, stlm_dt
, trtx_txtn_dstn_cd
, proc_cmpl_dstn_cd
, item_dstn_cd
, slby_fee_grd_cd
, slby_fee
, slby_fee_srtx
, idx_fee_amt
, cltr_fee
, trtx
, wht_fee
, otc_fee
, acct_mngr_empl_no
, ao_fee_grd_cd
)
VALUES
( T.sccm_cd
, T.oder_dt
, T.mrkt_dstn_cd
, T.oder_no
, T.cncd_unpr
, :acno
, :item_cd
, :slby_dstn_cd
, :md_dstn_cd
, NVL(:cncd_qty ,0)
, DECODE(:mrkt_dstn_cd, 'TN', T.oder_dt, :stlm_dt)
, :trtx_txtn_dstn_cd
, '0'
, :item_dstn_cd
, NVL(:slby_fee_grd_cd, T.SLBY_FEE_GRD_CD)
, NVL(:slby_fee ,0)
, NVL(:slby_fee_srtx ,0)
, NVL(:idx_fee_amt ,0)
, NVL(:cltr_fee ,0)
, NVL(:trtx ,0)
, NVL(:wht_fee , 0)
, NVL(:otc_fee , 0)
, T.acct_mngr_empl_no
, T.ao_fee_grd_cd
)
There could be multiple reasons for it. Here are some of the possible causes of this behavior.
Concurrency issues
Your insert might be waiting on other operations - other inserts, updates, or deletes that hold locks on the rows or index entries involved.
Network issues
It is possible that your network is overwhelmed with requests or, if the server is remote, that this is a bandwidth or latency issue.
Server load
The server might simply be overwhelmed with jobs.
Slow query
It's also possible that the SELECT in the USING clause of your MERGE is slow for certain bind values. It would make sense to test its speed in isolation, and to test the insert speed as well.
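To narrow it down, it can help to time the pieces separately - for example, run just the source query of the MERGE with the same bind values, and check whether the slow sessions are actually blocked by another session. A sketch (assumes you have SELECT access to v$session; the bind values are placeholders):

```sql
-- 1) Time the USING subquery alone, with the same binds:
SELECT B.SLBY_FEE_GRD_CD, B.ACCT_MNGR_EMPL_NO, C.AO_FEE_GRD_CD
FROM   TableB B, TableC C
WHERE  B.SCCM_CD    = :sccm_cd
AND    B.ACNO       = :acno
AND    C.SCCM_CD(+) = B.SCCM_CD
AND    C.EMPL_NO(+) = B.ACCT_MNGR_EMPL_NO;

-- 2) While a slow MERGE is running, look for blocking locks:
SELECT sid, blocking_session, event, seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL;
```

If step 1 is fast but the MERGE is still slow, lock waits (step 2) become the prime suspect.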

/*+ APPEND PARALLEL */ implementation in Oracle SQL [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 2 years ago.
I would like to understand what exactly /*+ APPEND PARALLEL(TEST,12) */ does. It improves performance, but I'm not really sure how.
--FIRST SQL
insert into TEST
select ORDER_DATE , ORDER_NO , ORDER_INV_LINE , CUSTOMER_NO , ORDER_INV_LINE_TYPE , ORDER_INV_LOC_CD , CUST_REF_NO , GROUP_ACCT_NO , SELL_STYLE , RCS_CD , GRADE
, INV_STYLE_NO , DISCOUNT_CD , CREDITED_SELL_CO , DELI_VEHICLE_CD , QUANTITY , GROSS_AMT , REBATE_NET_AMT , nvl(TERM_SAVG_AMT,TERMS_AVG_AMT) , TERMS_AMT , UNIT_PRICE , DISCOUNT_AMT
, COMM_LOAD , DELIVERED_FRT_AMT , CREDITED_DISTRICT_ID , INVOICE_NO , INVOICE_DATE , INVOICE_MONTH , SELL_COLOR , WIDTH_FT , WIDTH_IN , LENGTH_FT , LENGTH_IN , ROLL_NO
, ACTUAL_DUTY , GST_AMT , BROKERAGE_FEE , CRED_REGION_ID , TERMS_PCT , CRED_TERRITORY_ID , WHSE_UPCHARGE , OVERBILL_A_AMOUNT , OVERBILL_B_AMOUNT , OVERBILL_C_AMOUNT , OVERBILL_D_AMOUNT
, OVERBILL_E_AMOUNT , OVERBILL_F_AMOUNT , OVERBILL_G_AMOUNT , OVERBILL_H_AMOUNT , OVERBILL_I_AMOUNT , TERMS_CD , ORDER_LINE_STATUS_CD , NET_UNIT_PRICE , INV_FOB_COST , NET_SALES_AMT_CCA
, NET_UNIT_PRICE_CCA , INVOICE_PAID_FLAG , DISC_FLAG , NVL(BUILDER_NO,BUILDER_NUMBER) , BUILDER_NAME , SUB_DIVISION , BLOCK_NBR , LOT , PROJECT_NAME , INV_PRICING_UOM , PRO_ROLL_OVB , PRO_CUT_OVB , EFF_DATE
, EXP_DATE, CCA_PROGRAM, OVBG_FLAG, REBATE_NET_AMTCN, sysdate as ARCHIVE_DATE, ENDUSER_CODE, ENDUSER_NAME, SELL_BUSINESS_GRP, SALES_MIX_GRP, BUSINESS_GRP_CAT, MIX_GRP_CAT, BDF_GROUP
FROM SCHEMA.prestg_order_invoices poi
WHERE NOT EXISTS (
SELECT 1
FROM SCHEMA.TEST ar
WHERE ar.order_no = poi.order_no
and nvl(ar.invoice_no, 'XYZ') = nvl(poi.invoice_no, 'XYZ')
and ar.order_inv_line = poi.order_inv_line)
----
--SQL MODIFIED
insert /*+ APPEND PARALLEL(TEST,12) */ into TEST
select /*+ PARALLEL(poi,12) */ ORDER_DATE , ORDER_NO , ORDER_INV_LINE , CUSTOMER_NO , ORDER_INV_LINE_TYPE , ORDER_INV_LOC_CD , CUST_REF_NO , GROUP_ACCT_NO , SELL_STYLE , RCS_CD , GRADE
, INV_STYLE_NO , DISCOUNT_CD , CREDITED_SELL_CO , DELI_VEHICLE_CD , QUANTITY , GROSS_AMT , REBATE_NET_AMT , nvl(TERM_SAVG_AMT,TERMS_AVG_AMT) , TERMS_AMT , UNIT_PRICE , DISCOUNT_AMT
, COMM_LOAD , DELIVERED_FRT_AMT , CREDITED_DISTRICT_ID , INVOICE_NO , INVOICE_DATE , INVOICE_MONTH , SELL_COLOR , WIDTH_FT , WIDTH_IN , LENGTH_FT , LENGTH_IN , ROLL_NO
, ACTUAL_DUTY , GST_AMT , BROKERAGE_FEE , CRED_REGION_ID , TERMS_PCT , CRED_TERRITORY_ID , WHSE_UPCHARGE , OVERBILL_A_AMOUNT , OVERBILL_B_AMOUNT , OVERBILL_C_AMOUNT , OVERBILL_D_AMOUNT
, OVERBILL_E_AMOUNT , OVERBILL_F_AMOUNT , OVERBILL_G_AMOUNT , OVERBILL_H_AMOUNT , OVERBILL_I_AMOUNT , TERMS_CD , ORDER_LINE_STATUS_CD , NET_UNIT_PRICE , INV_FOB_COST , NET_SALES_AMT_CCA
, NET_UNIT_PRICE_CCA , INVOICE_PAID_FLAG , DISC_FLAG , NVL(BUILDER_NO,BUILDER_NUMBER) , BUILDER_NAME , SUB_DIVISION , BLOCK_NBR , LOT , PROJECT_NAME , INV_PRICING_UOM , PRO_ROLL_OVB , PRO_CUT_OVB , EFF_DATE
, EXP_DATE, CCA_PROGRAM, OVBG_FLAG, REBATE_NET_AMTCN, sysdate as ARCHIVE_DATE, ENDUSER_CODE, ENDUSER_NAME, SELL_BUSINESS_GRP, SALES_MIX_GRP, BUSINESS_GRP_CAT, MIX_GRP_CAT, BDF_GROUP
FROM SCHEMA.prestg_order_invoices poi
WHERE NOT EXISTS (
SELECT 1
FROM SCHEMA.TEST ar
WHERE ar.order_no = poi.order_no
and nvl(ar.invoice_no, 'XYZ') = nvl(poi.invoice_no, 'XYZ')
and ar.order_inv_line = poi.order_inv_line)
APPEND or PARALLEL hints invoke a direct-path load. This means blocks are allocated from above the HWM (high water mark) - that is, blocks that do not, and never have had, any rows in them. For that reason, Oracle does not generate UNDO: there's no need for a 'before image', since the 'before image' is that the block didn't exist in the segment. Redo is still generated for a direct-path load, unless NOLOGGING is also set.
It isn't necessarily faster in general, though. A direct-path load writes straight to disk, bypassing the buffer cache. There are many cases - especially with smaller sets - where a direct-path load to disk is far slower than a conventional-path load into the cache.
Also, you cannot query a table after direct-pathing into it until you commit or roll back. And consider that only one session can direct-path into a table at a time, so it serializes all modifications: no one else can insert, update, delete, or merge into the table until the transaction that performed the direct-path insert commits.
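One practical detail: for the PARALLEL hint to parallelize the insert itself, and not just the SELECT, parallel DML has to be enabled at the session level first (it is off by default). A sketch of the usual pattern, with the statement body elided:

```sql
-- Parallel DML is disabled by default; without this, the PARALLEL
-- hint on the INSERT is ignored and only the SELECT runs in parallel.
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(TEST,12) */ INTO TEST
SELECT /*+ PARALLEL(poi,12) */ ...   -- same statement as in the question
FROM   SCHEMA.prestg_order_invoices poi
WHERE  NOT EXISTS ( ... );

-- After a direct-path insert, the table cannot be queried in this
-- session until the transaction ends:
COMMIT;
```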

SQL - insert into from one table to another but manipulate data

I am trying to copy data from one table to another by using the following script:
insert into test_report
( company_id
, report_id
, brch_code
, definition
, description
, editable_flag
, executable_flag
, name
, report_type )
values
( 2420
, 'RP00002004'
, '0001'
, (select definition from test_template_report where template_id='RP00001242')
, (select description from test_template_report where template_id='RP00001242')
, (select editable_flag from test_template_report where template_id='RP00001242')
, (select executable_flag from test_template_report where template_id='RP00001242')
, (select name from test_template_report where template_id='RP00001242')
, '01' );
This is working fine, but the definition field contains XML which would need to be modified slightly.
The following is part of the definition data:
<listdef page='25'><reportId>RP00000390</reportId><name>Fund Transfer</name><description>Fund Transfer</description>
The <reportId>RP00000390</reportId> part would need to be changed to RP00002004, as per the insert script.
Like the following:
<listdef page='25'><reportId>RP00002004</reportId><name>Fund Transfer</name><description>Fund Transfer</description>
Is this possible?
You can use XMLQuery with a modify ... replace value of node:
insert into test_report (company_id,report_id,brch_code,definition,description,
editable_flag,executable_flag,name,report_type)
select 2420, 'RP00002004', '0001',
XMLQuery('copy $i := $d modify
(for $j in $i//reportId return replace value of node $j with $r)
return $i'
passing definition as "d", 'RP00002004' as "r"
returning content),
description, editable_flag, executable_flag, name, '01'
from test_template_report where template_id='RP00001242';
You don't need all the individual selects from the template table, a single insert-select will do.
The XML manipulation assumes definition is an XMLType; if it isn't you can convert it to one in the passing clause, i.e. passing XMLType(definition) as "d". The value of the reportId node (or nodes) is replaced with the string passed as "r".
As a quick static demo of that replacement happening, with the XML supplied in-line as a string literal:
select
XMLQuery('copy $i := $d modify
(for $j in $i//reportId return replace value of node $j with $r)
return $i'
passing XMLType(q'[<listdef page='25'><reportId>RP00000390</reportId><name>Fund Transfer</name><description>Fund Transfer</description></listdef>]') as "d",
'RP00002004' as "r"
returning content)
as modified_definition
from dual;
MODIFIED_DEFINITION
------------------------------------------------------------------------------------------------------------------------------
<listdef page="25"><reportId>RP00002004</reportId><name>Fund Transfer</name><description>Fund Transfer</description></listdef>
The replace function replaces one text string with another, so you could change
definition
to
replace(definition, '<reportId>RP00000390</reportId>', '<reportId>RP00002004</reportId>')
You can also get all the columns you need from test_template_report in one go:
insert into test_report
( company_id
, report_id
, brch_code
, definition
, description
, editable_flag
, executable_flag
, name
, report_type )
select 2420
, 'RP00002004'
, '0001'
, replace(tr.definition, '<reportId>RP00000390</reportId>', '<reportId>RP00002004</reportId>')
, tr.description
, tr.editable_flag
, tr.executable_flag
, tr.name
, '01'
from test_template_report tr
where tr.template_id = 'RP00001242';
If you wanted to replace any value of reportId, and not just 'RP00000390', you could use regexp_replace:
insert into test_report
( company_id
, report_id
, brch_code
, definition
, description
, editable_flag
, executable_flag
, name
, report_type )
select 2420
, 'RP00002004'
, '0001'
, regexp_replace(tr.definition,'<reportId>[^<]+</reportId>','<reportId>RP00002004</reportId>')
, tr.description
, tr.editable_flag
, tr.executable_flag
, tr.name
, '01'
from test_template_report tr
where tr.template_id = 'RP00001242';

could not create a list of fields for the query [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Want to improve this question? Update the question so it focuses on one problem only by editing this post.
Closed 9 years ago.
I am clicking on REFRESH FIELDS for my shared dataset's query, and getting this error: "could not create a list of fields for the query".
here's my query:
Select
p.AgeCodePL ,
p.BirthDate ,
p.CommunityIDY ,
p.Company ,
p.CreatedbyUser ,
p.DateCreated ,
p.DateExpired ,
p.DateUpdated ,
p.Department ,
p.EthnicityFK ,
p.FirstName firstFirstName,
p.GenderUL ,
p.LastName ,
p.MaritalStatusUL ,
p.MiddleName ,
p.NickName ,
p.PeopleIDY ,
p.PrefixUL ,
p.RaceUL ,
p.ReligionUL ,
p.Salutation ,
p.SpouseName ,
p.SSN ,
p.SuffixUL ,
p.Title ,
p.UpdatedbyUser ,
r.ACT_ID ,
r.HEA_ID ,
r.INT_ID ,
r.LIF_ID ,
r.NetworkSet ,
r.PER_ID ,
r.RES_Active ,
r.RES_Bio ,
r.RES_BioUpdate ,
r.RES_BioUpdateBy ,
r.RES_CommunityIDY ,
r.RES_CurrentUnitNumber ,
r.RES_DateStarted ,
r.RES_DiscNotes1 ,
r.RES_DiscNotes2 ,
r.RES_DiscNotes3 ,
r.RES_DiscNotes4 ,
r.RES_DiscNotes5 ,
r.RES_ExpiredDate ,
r.RES_ExpiredUser ,
r.RES_FinishedDate ,
r.RES_HasImage ,
r.RES_LastUserUpdated ,
r.RES_LobbyBio ,
r.RES_LobbyBioUpdate ,
r.RES_LobbyBioUpdateBy ,
r.RES_NoPart ,
r.RES_PeopleIDY ,
r.RES_PhyMoveInDate ,
r.RES_TasksSet ,
r.RES_UpdatedImage ,
r.STA_ID ,
r.STA_Type ,
r.TES_ID ,
rr.CommunityIDY ,
rr.CurrentUnitNumber ,
rr.Gender ,
rr.MainBirthDate ,
rr.MainFirstName ,
rr.MainLastName ,
rr.MainPeopleIDY ,
rr.Name ,
rr.ProspectIDY ,
s.RES_ID ,
s.STA_Active ,
s.STA_CreatedOn ,
s.STA_DateUpdated ,
s.STA_Details ,
s.STA_EditedDate ,
s.STA_EditedUser ,
s.STA_ID ,
s.STA_Reason ,
s.STA_Solution ,
s.STA_Type ,
s.STA_User ,
u.PRO_ID ,
u.STU_ID ,
u.TEA_ID ,
u.USR_AboutMe ,
u.USR_Active ,
u.USR_AllComm ,
u.USR_Bday ,
u.USR_BdayDay ,
u.USR_CellPhone ,
u.USR_CommLocation ,
u.USR_CommunityIDY ,
u.USR_ContactFor ,
u.USR_DirectPhone ,
u.USR_Email ,
u.USR_EntAdmin ,
u.USR_FavBooks ,
u.USR_FavMovies ,
u.USR_FavPart ,
u.USR_FavQuotes ,
u.USR_Fax ,
u.USR_First ,
u.USR_Goals ,
u.USR_HasImage ,
u.USR_HomeTown ,
u.USR_ID ,
u.USR_Interests ,
u.USR_INTL_Password ,
u.USR_INTL_UserName ,
u.USR_IsSales ,
u.USR_JoinedKisco ,
u.USR_Last ,
u.USR_LastLogin ,
u.USR_LastProUpdate ,
u.USR_Name ,
u.USR_OtherTeams ,
u.USR_Password ,
u.USR_PlacesBeen ,
u.USR_REPS_Password ,
u.USR_REPS_UserIDY ,
u.USR_REPS_UserName ,
u.USR_Role ,
u.USR_RoleDescrip
from
Status s
Inner Join Residents r
ON r.RES_ID = s.RES_ID
Left Join REPSResidents rr
ON rr.MainPeopleIDY = r.RES_PeopleIDY
Inner Join Associate u
ON s.STA_User = u.USR_ID
Inner Join KSLSQL1.[RPS-201065-000].dbo.people p
ON r.RES_PeopleIDY = p.PeopleIDY
Where
rr.CommunityIDY in (#Community)
and (s.STA_Reason is not null and s.STA_Reason <> '')
and s.STA_Active = 1
and s.STA_DateUpdated between #BegDate and dateadd(d,1,#EndDate)
Order by
s.STA_DateUpdated DESC
What am I doing wrong?
The issue is that you have output columns with the exact same name in your query - for example:
r.STA_ID ,
s.STA_ID ,
The same applies to r.STA_Type / s.STA_Type and p.CommunityIDY / rr.CommunityIDY. The output columns must have distinct names: either give them aliases to make them unique, or remove the ones you don't need.
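A minimal sketch of the aliasing fix, using one of the colliding pairs from the SELECT list (the alias names here are made up):

```sql
-- Alias one column of each colliding pair so every output
-- name in the dataset is unique:
r.STA_ID as RES_STA_ID ,
s.STA_ID as STATUS_STA_ID ,
```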

Get every row of duplicates within a table

I have code that grabs the duplicates in a SQL table and groups them by tracking number. I want to see EVERY duplicated row, not just the grouped result. The code for getting the groups of duplicates is below:
Select
CarrierID
, Mode
, TrackingNumber
, PickupID
, Reference1
, Reference2
, Quantity
, BilledWeight
, ActualWeight
, Zone
, ServiceLevel
, PickupDate
, SenderCompany
, SenderAddress
, SenderCity
, SenderState
, SenderZip
, ReceiverCompany
, ReceiverAddress
, ReceiverCity
, ReceiverState
, ReceiverZip
, FreightCharge
, Fuel
, Accessories
, TotalCharges
, WrongName
, WrongCompany
, WrongAddress
, WrongCity
, WrongState
, WrongZip
, WrongCountry
, CorrectedName
, CorrectedCompany
, CorrectedAddress
, CorrectedCity
, CorrectedState
, CorrectedZip
, CorrectedCountry
, Count(TrackingNumber) as TrackingNumberTotal
, Count(TotalCharges) as NumberofDuplicates
from Prasco_GencoShipments
group by
TrackingNumber
, TotalCharges
, CarrierID
, Mode
, TrackingNumber
, PickupID
, Reference1
, Reference2
, Quantity
, BilledWeight
, ActualWeight
, Zone
, ServiceLevel
, PickupDate
, SenderCompany
, SenderAddress
, SenderCity
, SenderState
, SenderZip
, ReceiverCompany
, ReceiverAddress
, ReceiverCity
, ReceiverState
, ReceiverZip
, FreightCharge
, Fuel
, Accessories
, TotalCharges
, WrongName
, WrongCompany
, WrongAddress
, WrongCity
, WrongState
, WrongZip
, WrongCountry
, CorrectedName
, CorrectedCompany
, CorrectedAddress
, CorrectedCity
, CorrectedState
, CorrectedZip
, CorrectedCountry
having (count(TrackingNumber) > 1 and (count(TotalCharges) > 1))
If CTEs are available (could also be done with a subselect):
WITH dups AS (
SELECT TrackingNumber, TotalCharges
FROM Prasco_GencoShipments
GROUP BY TrackingNumber, TotalCharges
HAVING COUNT(*) > 1
)
SELECT ta.*
FROM Prasco_GencoShipments ta
JOIN dups du ON du.TrackingNumber = ta.TrackingNumber AND du.TotalCharges = ta.TotalCharges
ORDER BY
TrackingNumber
, TotalCharges
;
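If the database supports analytic functions (an assumption - the question doesn't say which engine this is), the same result can be had without the join, by counting over a window:

```sql
-- Keep every row whose (TrackingNumber, TotalCharges) pair
-- occurs more than once:
SELECT *
FROM (
    SELECT t.*,
           COUNT(*) OVER (PARTITION BY TrackingNumber, TotalCharges) AS dup_count
    FROM Prasco_GencoShipments t
) x
WHERE x.dup_count > 1
ORDER BY TrackingNumber, TotalCharges;
```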
Find duplicates for field1 (and, commented out, field2 as well):
SELECT t1.*
FROM test t1
INNER JOIN test t2
ON t2.field1 = t1.field1 -- AND t2.field2 = t1.field2
WHERE t1.id <> t2.id