Array of objects not keeping set values between for each loops - c++-cli

I am doing an assignment for school and have been stuck for the last few days with a problem that I'm sure I created myself. We're supposed to make one of three different dice games, and I chose one called LCR. I have most of the logic for the program finished; I'm just having a problem with the array of objects I create when the game starts (a user-defined number of players). For some reason, when I initialize all the objects in the array with a for each loop, I can set and get all the different parts of each object, but once the loop exits, all the object information is lost. I, unfortunately, have to use C++/CLI in this program because of another requirement. Any help with this problem would be appreciated.
Here is the code that is giving me the problem.
System::Array^ players = System::Array::CreateInstance(Player::typeid, numberOfPlayers);
for each (Player^ p in players) {
    p = gcnew Player;
    tempNameAsString = tempNameAsNumber.ToString();
    p->setName(tempNameAsString);
}
con.writeStringToConsole("rolling dice \n");
for each (Player^ p in players) {
    p->setDice(rollAllDice(3, 6, 1));
    //con.writeStringToConsole("player : ");
    //con.writeStringToConsole(p->getName());
    con.writeStringToConsole("rolled : ");
    con.writeVectorToConsole(translateDice(p->getDice()), " , ");
    con.writeStringToConsole("\n");
}
for each (Player^ p in players) {
    con.writeVectorToConsole(p->getDice(), " ,");
    con.writeStringToConsole("\n");
}
If I don't comment out the line
con.writeStringToConsole(p->getName());
I get the error
An unhandled exception of type 'System.NullReferenceException' occurred in IT 312 LCR Dice Game.exe
Here is the output when I do comment out the line:
How many players are there?: 10
creating players
rolling dice
rolled : * , * , * ,
rolled : C , * , * ,
rolled : * , C , R ,
rolled : * , * , C ,
rolled : C , * , * ,
rolled : * , * , * ,
rolled : L , L , * ,
rolled : * , R , * ,
rolled : * , * , L ,
rolled : C , * , * ,
For some reason, if I try to access the player objects again, I only get the last object's rolled dice.
3 ,6 ,4 ,
3 ,6 ,4 ,
3 ,6 ,4 ,
3 ,6 ,4 ,
3 ,6 ,4 ,
3 ,6 ,4 ,
3 ,6 ,4 ,
3 ,6 ,4 ,
3 ,6 ,4 ,
3 ,6 ,4 ,
Please let me know if I'm doing something stupid.
Thanks for any help.
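For the record, the symptom points at the first `for each` loop. In C++/CLI, the iteration variable is a copy of the array element, so `p = gcnew Player` assigns a new object to the local copy and never stores it back into the array; the array's contents are untouched after the loop ends. A minimal sketch of a fix, assuming the `Player`, `rollAllDice`, and `numberOfPlayers` names from the question:

```cpp
// Sketch: use a typed array and an indexed loop when *assigning* elements.
// `for each (Player^ p in players)` makes p a copy of each element, so
// writing to p never updates the array itself.
array<Player^>^ players = gcnew array<Player^>(numberOfPlayers);
for (int i = 0; i < players->Length; i++) {
    players[i] = gcnew Player;               // handle stored in the array itself
    players[i]->setName((i + 1).ToString()); // hypothetical naming scheme
}
// Reading through `for each` afterwards is fine, since only the handle is copied
// and the call mutates the shared object:
for each (Player^ p in players) {
    p->setDice(rollAllDice(3, 6, 1));
}
```

One further thing to check: if `rollAllDice` reuses a shared vector internally, make sure `setDice` copies the values rather than storing a reference; otherwise every player reports the last roll, which would match the identical `3 ,6 ,4` lines in the output.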

Related

Hierarchy trouble

I have a set of data pulled from two joined tables that are themselves built on joined tables. The result carries the path/hierarchy the data should sit under, but the root folders aren't included because of the joins, so the recursive hierarchy doesn't build correctly. One of the tables in the first round of joins actually has the full hierarchy clearly laid out, but because entries for the root folders aren't used in the tables joined later, the structure isn't there. Is there a way to get the full hierarchy back into the data for the folder structure?
I have also tried using the results from the first query below to build a recursive hierarchy from the ElementID and ParentElementID relationships in SSRS. It works for hierarchy levels 6 and 7, but the rest still looks flat: level 8 items lose their parent folder and end up at the bottom of all the children at level 6. Similar things happen at other levels, but it always happens at level 8.
Second-level join statement; EHLevel always ends up 4-8:
SELECT ppd.tag
, ppd.descriptor
, ppd.instrumenttag
, afi.AFPath + '\' + afi.AFElement AS RelativePath
, afi.EHLevel
, afi.EHLevel + 1 AS TagLevel
, afi.PIServer
, MasterTagList.dbo.UFN_SEPARATES_COLUMNS(afi.RelativePath, 1, '\') AS [AFModel]
, MasterTagList.dbo.UFN_SEPARATES_COLUMNS(afi.RelativePath, 2, '\') AS [Plant]
, MasterTagList.dbo.UFN_SEPARATES_COLUMNS(afi.RelativePath, 3, '\') AS [Level3]
, MasterTagList.dbo.UFN_SEPARATES_COLUMNS(afi.RelativePath, 4, '\') AS [Level4]
, MasterTagList.dbo.UFN_SEPARATES_COLUMNS(afi.RelativePath, 5, '\') AS [Level5]
, MasterTagList.dbo.UFN_SEPARATES_COLUMNS(afi.RelativePath, 6, '\') AS [Level6]
, MasterTagList.dbo.UFN_SEPARATES_COLUMNS(afi.RelativePath, 7, '\') AS [Level7]
, MasterTagList.dbo.UFN_SEPARATES_COLUMNS(afi.RelativePath, 8, '\') AS [Level8]
, afi.ElementID
, afi.ParentElementID
, afi.AFPath
, afi.AFElement
, afi.AttributeName
, ppd.pointtypex
, ppd.engunits
, ppd.displaydigits
, ppd.digitalset
, ppd.zero
, ppd.span
, ppd.typicalvalue
, ppd.pointsource
, ppd.location1
, ppd.location2
, ppd.location3
, ppd.location4
, ppd.location5
, ppd.scan
, ppd.creationdate
, ppd.changedate
FROM [PIPointData] AS ppd
LEFT JOIN [AFInfo] AS afi ON afi.Tag = ppd.tag
First-level table with the full hierarchy (levels 0-8):
SELECT [Path] -- This is the same as the AFPath above
,[Name] -- This is the same as the AFElement above
,[Level] -- This is the same as the EHLevel above
,[ElementID]
,[ParentElementID]
FROM [TAGAUSWARPIDEV02D].[IOPS US Devlopment Database].[Asset].[ElementHierarchy]
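One hedged approach, sketched against the ElementHierarchy table from the query above (table and column names are taken from the question; treat this as a starting point, not a drop-in): since ElementHierarchy already holds every level 0-8 row with ElementID/ParentElementID, you can recursively walk upward from the elements that survive the joins and union the missing root ancestors back in:

```sql
-- Sketch: recover the missing ancestor rows from ElementHierarchy.
;WITH Anchor AS (
    -- elements that actually survive the later joins (AFInfo carries ElementID)
    SELECT eh.[Path], eh.[Name], eh.[Level], eh.ElementID, eh.ParentElementID
    FROM [Asset].[ElementHierarchy] AS eh
    WHERE eh.ElementID IN (SELECT ElementID FROM [AFInfo])
),
FullTree AS (
    SELECT * FROM Anchor
    UNION ALL
    -- walk up ParentElementID until the roots (level 0) are reached
    SELECT eh.[Path], eh.[Name], eh.[Level], eh.ElementID, eh.ParentElementID
    FROM [Asset].[ElementHierarchy] AS eh
    JOIN FullTree AS ft ON eh.ElementID = ft.ParentElementID
)
SELECT DISTINCT [Path], [Name], [Level], ElementID, ParentElementID
FROM FullTree;
```

Joining this back to the tag-level result (or feeding it to SSRS's recursive-parent grouping) restores the level 0-3 folder rows that the LEFT JOIN chain drops.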

Oracle get UNIQUE constraint violation error too late

What should I check when the Oracle server takes more than 20 seconds to return a UNIQUE constraint violation error for specific data?
One of our processes handles over 30,000 rows a day across multiple processes, and it usually gets the UNIQUE constraint violation error within 1 second, but for specific data it takes more than 20 seconds for the error to come back.
The query is as below (only the table names have been modified):
MERGE
INTO TableA S
USING (
SELECT NVL(:sccm_cd , ' ') SCCM_CD
, NVL(:oder_dt , ' ') ODER_DT
, NVL(:mrkt_dstn_cd, ' ') MRKT_DSTN_CD
, NVL(:oder_no , ' ') ODER_NO
, NVL(:cncd_unpr , 0) CNCD_UNPR
, B.SLBY_FEE_GRD_CD
, B.ACCT_MNGR_EMPL_NO
, C.AO_FEE_GRD_CD
FROM DUAL A
, TableB B
, TableC C
WHERE 1 = 1
AND B.SCCM_CD = :sccm_cd
AND B.ACNO = :acno
AND C.SCCM_CD(+) = B.SCCM_CD
AND C.EMPL_NO(+) = B.ACCT_MNGR_EMPL_NO
) T
ON ( S.sccm_cd = T.sccm_cd
AND S.oder_dt = T.oder_dt
AND S.mrkt_dstn_cd = T.mrkt_dstn_cd
AND S.oder_no = T.oder_no
AND S.cncd_unpr = T.cncd_unpr
)
WHEN MATCHED THEN
UPDATE
SET S.cncd_qty = S.cncd_qty + NVL(:cncd_qty ,0)
, S.slby_fee = S.slby_fee + NVL(:slby_fee ,0)
, S.slby_fee_srtx = S.slby_fee_srtx + NVL(:slby_fee_srtx,0)
, S.idx_fee_amt = S.idx_fee_amt + NVL(:idx_fee_amt ,0)
, S.cltr_fee = S.cltr_fee + NVL(:cltr_fee ,0)
, S.trtx = S.trtx + NVL(:trtx ,0)
, S.otc_fee = S.otc_fee + NVL(:otc_fee ,0)
, S.wht_fee = S.wht_fee + NVL(:wht_fee ,0)
WHEN NOT MATCHED THEN
INSERT (
sccm_cd
, oder_dt
, mrkt_dstn_cd
, oder_no
, cncd_unpr
, acno
, item_cd
, slby_dstn_cd
, md_dstn_cd
, cncd_qty
, stlm_dt
, trtx_txtn_dstn_cd
, proc_cmpl_dstn_cd
, item_dstn_cd
, slby_fee_grd_cd
, slby_fee
, slby_fee_srtx
, idx_fee_amt
, cltr_fee
, trtx
, wht_fee
, otc_fee
, acct_mngr_empl_no
, ao_fee_grd_cd
)
VALUES
( T.sccm_cd
, T.oder_dt
, T.mrkt_dstn_cd
, T.oder_no
, T.cncd_unpr
, :acno
, :item_cd
, :slby_dstn_cd
, :md_dstn_cd
, NVL(:cncd_qty ,0)
, DECODE(:mrkt_dstn_cd, 'TN', T.oder_dt, :stlm_dt)
, :trtx_txtn_dstn_cd
, '0'
, :item_dstn_cd
, NVL(:slby_fee_grd_cd, T.SLBY_FEE_GRD_CD)
, NVL(:slby_fee ,0)
, NVL(:slby_fee_srtx ,0)
, NVL(:idx_fee_amt ,0)
, NVL(:cltr_fee ,0)
, NVL(:trtx ,0)
, NVL(:wht_fee , 0)
, NVL(:otc_fee , 0)
, T.acct_mngr_empl_no
, T.ao_fee_grd_cd
)
There could be multiple reasons for this. Here are some of the possible causes of this behavior.
Concurrency issue
Your insert might be waiting on other operations, such as other inserts, updates, or deletions.
Network issues
It is possible that for some reason your network is overwhelmed with requests or, if the server is remote, this could be an internet speed issue as well.
Server load
The server might be overwhelmed with lots of jobs to do.
Slow query
It's also possible that the SELECT you use in your MERGE command is very slow. It would make sense to test its speed, and the insert speed as well.
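The concurrency case deserves a concrete check first: Oracle cannot report a UNIQUE constraint violation until the other transaction that inserted the same key commits or rolls back, so a 20-second "slow error" is often just lock waiting. A hedged diagnostic sketch using the standard dynamic performance views (requires the appropriate SELECT privileges; run it while the slow MERGE is hanging):

```sql
-- Which sessions are blocked, by whom, and on what wait event
SELECT s.sid, s.serial#, s.blocking_session, s.event,
       s.seconds_in_wait, s.sql_id
FROM   v$session s
WHERE  s.blocking_session IS NOT NULL;

-- A wait event of 'enq: TX - row lock contention' here typically means the
-- MERGE is waiting on another uncommitted transaction holding the same
-- unique key; the "20 seconds" is however long that transaction stays open.
```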

/*+ APPEND PARALLEL */ implementation in a oracle SQL [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 2 years ago.
I would like to understand what exactly /*+ APPEND PARALLEL(TEST,12) */ does. It is meant as an improvement, but I'm not really sure what it actually does.
--FIRST SQL
insert into TEST
select ORDER_DATE , ORDER_NO , ORDER_INV_LINE , CUSTOMER_NO , ORDER_INV_LINE_TYPE , ORDER_INV_LOC_CD , CUST_REF_NO , GROUP_ACCT_NO , SELL_STYLE , RCS_CD , GRADE
, INV_STYLE_NO , DISCOUNT_CD , CREDITED_SELL_CO , DELI_VEHICLE_CD , QUANTITY , GROSS_AMT , REBATE_NET_AMT , nvl(TERM_SAVG_AMT,TERMS_AVG_AMT) , TERMS_AMT , UNIT_PRICE , DISCOUNT_AMT
, COMM_LOAD , DELIVERED_FRT_AMT , CREDITED_DISTRICT_ID , INVOICE_NO , INVOICE_DATE , INVOICE_MONTH , SELL_COLOR , WIDTH_FT , WIDTH_IN , LENGTH_FT , LENGTH_IN , ROLL_NO
, ACTUAL_DUTY , GST_AMT , BROKERAGE_FEE , CRED_REGION_ID , TERMS_PCT , CRED_TERRITORY_ID , WHSE_UPCHARGE , OVERBILL_A_AMOUNT , OVERBILL_B_AMOUNT , OVERBILL_C_AMOUNT , OVERBILL_D_AMOUNT
, OVERBILL_E_AMOUNT , OVERBILL_F_AMOUNT , OVERBILL_G_AMOUNT , OVERBILL_H_AMOUNT , OVERBILL_I_AMOUNT , TERMS_CD , ORDER_LINE_STATUS_CD , NET_UNIT_PRICE , INV_FOB_COST , NET_SALES_AMT_CCA
, NET_UNIT_PRICE_CCA , INVOICE_PAID_FLAG , DISC_FLAG , NVL(BUILDER_NO,BUILDER_NUMBER) , BUILDER_NAME , SUB_DIVISION , BLOCK_NBR , LOT , PROJECT_NAME , INV_PRICING_UOM , PRO_ROLL_OVB , PRO_CUT_OVB , EFF_DATE
, EXP_DATE, CCA_PROGRAM, OVBG_FLAG, REBATE_NET_AMTCN, sysdate as ARCHIVE_DATE, ENDUSER_CODE, ENDUSER_NAME, SELL_BUSINESS_GRP, SALES_MIX_GRP, BUSINESS_GRP_CAT, MIX_GRP_CAT, BDF_GROUP
FROM SCHEMA.prestg_order_invoices poi
WHERE NOT EXISTS (
SELECT 1
FROM SCHEMA.TEST ar
WHERE ar.order_no = poi.order_no
and nvl(ar.invoice_no, 'XYZ') = nvl(poi.invoice_no, 'XYZ')
and ar.order_inv_line = poi.order_inv_line)
----
--SQL MODIFIED
insert /*+ APPEND PARALLEL(TEST,12) */ into TEST
select /*+ PARALLEL(poi,12) */ ORDER_DATE , ORDER_NO , ORDER_INV_LINE , CUSTOMER_NO , ORDER_INV_LINE_TYPE , ORDER_INV_LOC_CD , CUST_REF_NO , GROUP_ACCT_NO , SELL_STYLE , RCS_CD , GRADE
, INV_STYLE_NO , DISCOUNT_CD , CREDITED_SELL_CO , DELI_VEHICLE_CD , QUANTITY , GROSS_AMT , REBATE_NET_AMT , nvl(TERM_SAVG_AMT,TERMS_AVG_AMT) , TERMS_AMT , UNIT_PRICE , DISCOUNT_AMT
, COMM_LOAD , DELIVERED_FRT_AMT , CREDITED_DISTRICT_ID , INVOICE_NO , INVOICE_DATE , INVOICE_MONTH , SELL_COLOR , WIDTH_FT , WIDTH_IN , LENGTH_FT , LENGTH_IN , ROLL_NO
, ACTUAL_DUTY , GST_AMT , BROKERAGE_FEE , CRED_REGION_ID , TERMS_PCT , CRED_TERRITORY_ID , WHSE_UPCHARGE , OVERBILL_A_AMOUNT , OVERBILL_B_AMOUNT , OVERBILL_C_AMOUNT , OVERBILL_D_AMOUNT
, OVERBILL_E_AMOUNT , OVERBILL_F_AMOUNT , OVERBILL_G_AMOUNT , OVERBILL_H_AMOUNT , OVERBILL_I_AMOUNT , TERMS_CD , ORDER_LINE_STATUS_CD , NET_UNIT_PRICE , INV_FOB_COST , NET_SALES_AMT_CCA
, NET_UNIT_PRICE_CCA , INVOICE_PAID_FLAG , DISC_FLAG , NVL(BUILDER_NO,BUILDER_NUMBER) , BUILDER_NAME , SUB_DIVISION , BLOCK_NBR , LOT , PROJECT_NAME , INV_PRICING_UOM , PRO_ROLL_OVB , PRO_CUT_OVB , EFF_DATE
, EXP_DATE, CCA_PROGRAM, OVBG_FLAG, REBATE_NET_AMTCN, sysdate as ARCHIVE_DATE, ENDUSER_CODE, ENDUSER_NAME, SELL_BUSINESS_GRP, SALES_MIX_GRP, BUSINESS_GRP_CAT, MIX_GRP_CAT, BDF_GROUP
FROM SCHEMA.prestg_order_invoices poi
WHERE NOT EXISTS (
SELECT 1
FROM SCHEMA.TEST ar
WHERE ar.order_no = poi.order_no
and nvl(ar.invoice_no, 'XYZ') = nvl(poi.invoice_no, 'XYZ')
and ar.order_inv_line = poi.order_inv_line)
APPEND or PARALLEL hints invoke a direct-path load. This means blocks are allocated from above the HWM (high water mark), that is, blocks that do not, and never have, contained any rows. For that reason, Oracle does not generate UNDO: there's no need for a 'before image', since the before image is that the block didn't exist in the segment. Redo is still generated for a direct-path load, unless NOLOGGING is also set.
It isn't necessarily always faster, though. A direct-path load writes straight to disk, bypassing the buffer cache. There are many cases, especially with smaller sets, where a direct-path load to disk is far slower than a conventional-path load into the cache.
Also, you cannot query a table after direct-pathing into it until you commit or roll back. Consider too that one and only one session can direct-path into a table at a time, which serializes all modifications: no one else can insert, update, delete, or merge into the table until the direct-path transaction commits.
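The "cannot query until commit" restriction is easy to demonstrate; ORA-12838 is the error Oracle raises. A minimal sketch against a scratch table:

```sql
CREATE TABLE t_demo (x NUMBER);

INSERT /*+ APPEND */ INTO t_demo
SELECT level FROM dual CONNECT BY level <= 10;

-- In the same transaction this now fails with:
--   ORA-12838: cannot read/modify an object after modifying it in parallel
SELECT COUNT(*) FROM t_demo;

COMMIT;

-- After the commit the query works again
SELECT COUNT(*) FROM t_demo;
```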

Stop Recursion Once Condition Satisfied

I am using a CTE to explode out a Bill of Materials and need to locate all those materials that have recursive components.
What I was attempting was to limit the number of cycles (levels) deep by capping BOM_Level in the child node at a maximum bound:
exec pr_sys_drop_object '#BOMExploded'
;with BOM
as
(
select
Prod_Plant_CD
, Demand_Plant_CD
, material_cd
, component_cd
, component_quantity
, component_quantity AS Calculated_Component_Quantity
, BOM_Level
, Demand_Quantity
, CONVERT(float,1) AS Produced_Ratio
, Material_CD AS Demand_Material_CD
from #firstLevel a
UNION ALL
SELECT
b.Plant_CD as 'Prod_Plant_CD'
, a.Demand_Plant_CD
, b.Material_CD
, b.Component_CD
, b.component_quantity
, b.component_quantity
, a.BOM_Level + 1
, a.Demand_Quantity
, a.Produced_Ratio * a.Component_Quantity -- Produced Quantity for the current level = Produced Quantity (level -1) * Component_Quantity (level -1)
, a.Demand_Material_CD
FROM BOM a
inner join #BOM_ProdVersion_Priority b
on a.component_cd = b.material_cd
inner join #base_routes c
on a.Demand_Plant_CD = c.Recipient_Plant_CD
and b.Plant_CD = c.Source_Plant_CD
and c.Material_CD = b.Material_CD -- Need to have material_cd to link
where b.Material_CD != b.Component_CD
and b.Component_Quantity > 0
and BOM_Level < 5 -- Give the max number of levels deep we are allowed to cycle to
)
select *
into #BOMExploded
from BOM
OPTION (MAXRECURSION 20)
Using this method, however, would require a post-process to locate where the cycling on the recursive component level started, and then a back-trace.
How can a recursive CTE query be stopped given a certain condition?
i.e. when the top-level material_cd = component_cd at a deeper BOM_Level
If I understand you correctly, it isn't simply that you want to stop at a certain depth/level; you also need to stop in case you start cycling through materials repeatedly.
In the case of the following recursive path: mat_1->mat_2->mat_3->mat_1, you would want to stop before that last mat_1 starts cycling again to mat_2 and so on.
If that's correct, then your best bet is to add a Path field to your recursive query that tracks each term that it finds as it moves down the recursive path:
exec pr_sys_drop_object '#BOMExploded'
;with BOM
as
(
select
Prod_Plant_CD
, Demand_Plant_CD
, material_cd
, component_cd
, component_quantity
, component_quantity AS Calculated_Component_Quantity
, BOM_Level
, Demand_Quantity
, CONVERT(float,1) AS Produced_Ratio
, Material_CD AS Demand_Material_CD
, CAST(material_cd AS VARCHAR(100)) AS Path
from #firstLevel a
UNION ALL
SELECT
b.Plant_CD as 'Prod_Plant_CD'
, a.Demand_Plant_CD
, b.Material_CD
, b.Component_CD
, b.component_quantity
, b.component_quantity
, a.BOM_Level + 1
, a.Demand_Quantity
, a.Produced_Ratio * a.Component_Quantity -- Produced Quantity for the current level = Produced Quantity (level -1) * Component_Quantity (level -1)
, a.Demand_Material_CD
, a.Path + '|' + b.material_cd
FROM BOM a
inner join #BOM_ProdVersion_Priority b
on a.component_cd = b.material_cd
inner join #base_routes c
on a.Demand_Plant_CD = c.Recipient_Plant_CD
and b.Plant_CD = c.Source_Plant_CD
and c.Material_CD = b.Material_CD -- Need to have material_cd to link
where b.Material_CD != b.Component_CD
and b.Component_Quantity > 0
and BOM_Level < 5 -- Give the max number of levels deep we are allowed to cycle to
and a.path NOT LIKE '%' + b.material_cd + '%'
)
select *
into #BOMExploded
from BOM
OPTION (MAXRECURSION 20)
Now you have a pipe-delimited path, and you can test your current material_cd to see if it's already in the path. If it is, you end that leg of the recursion to prevent cycling. (Note that a plain LIKE test can false-match when one code is a substring of another, e.g. MAT1 inside MAT12; delimiting both sides of the comparison, '|' + Path + '|' LIKE '%|' + material_cd + '|%', avoids that.)
Update: here is a version where we capture material_cd cycles and report only those at the end of the recursion:
exec pr_sys_drop_object '#BOMExploded'
;with BOM
as
(
select
Prod_Plant_CD
, Demand_Plant_CD
, material_cd
, component_cd
, component_quantity
, component_quantity AS Calculated_Component_Quantity
, BOM_Level
, Demand_Quantity
, CONVERT(float,1) AS Produced_Ratio
, Material_CD AS Demand_Material_CD
, CAST(material_cd AS VARCHAR(100)) AS Path
, CAST(NULL AS CHAR(5)) as Cycle_Flag
, 0 as Cycle_Depth
from #firstLevel a
UNION ALL
SELECT
b.Plant_CD as 'Prod_Plant_CD'
, a.Demand_Plant_CD
, b.Material_CD
, b.Component_CD
, b.component_quantity
, b.component_quantity
, a.BOM_Level + 1
, a.Demand_Quantity
, a.Produced_Ratio * a.Component_Quantity -- Produced Quantity for the current level = Produced Quantity (level -1) * Component_Quantity (level -1)
, a.Demand_Material_CD
, a.Path + '|' + b.material_cd
, CASE WHEN a.Path LIKE '%' + b.material_cd + '%' THEN CAST('Cycle' AS CHAR(5)) END AS Cycle_Flag -- cycle: this material already appears in the path (CAST matches the anchor's CHAR(5))
, CASE WHEN a.Path LIKE '%' + b.material_cd + '%' THEN a.Cycle_Depth + 1 ELSE a.Cycle_Depth END AS Cycle_Depth
FROM BOM a
inner join #BOM_ProdVersion_Priority b
on a.component_cd = b.material_cd
inner join #base_routes c
on a.Demand_Plant_CD = c.Recipient_Plant_CD
and b.Plant_CD = c.Source_Plant_CD
and c.Material_CD = b.Material_CD -- Need to have material_cd to link
where b.Material_CD != b.Component_CD
and b.Component_Quantity > 0
and a.cycle_depth < 2 --stop the query if we start cycling, but only after we capture at least one full cycle
)
select *
into #BOMExploded
from BOM
WHERE cycle_flag IS NOT NULL
OPTION (MAXRECURSION 20)
This captures Cycle_Depth, a counter that measures how deep into a cycle we get. We stop the recursion after Cycle_Depth reaches 1, so the cycle can be captured in the final select.
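The Path-column guard generalizes to any cyclic graph walk. A minimal standalone sketch (a hypothetical #Edges table with the cycle a -> b -> c -> a) showing the pattern in isolation:

```sql
CREATE TABLE #Edges (parent VARCHAR(10), child VARCHAR(10));
INSERT INTO #Edges VALUES ('a','b'), ('b','c'), ('c','a');

;WITH Walk AS (
    SELECT child  = CAST('a' AS VARCHAR(100)),
           [path] = CAST('a' AS VARCHAR(100))
    UNION ALL
    SELECT CAST(e.child AS VARCHAR(100)),
           CAST(w.[path] + '|' + e.child AS VARCHAR(100))
    FROM Walk AS w
    JOIN #Edges AS e ON e.parent = w.child
    -- delimited comparison avoids substring false matches (e.g. mat1 in mat12)
    WHERE '|' + w.[path] + '|' NOT LIKE '%|' + e.child + '|%'
)
SELECT * FROM Walk;
-- Returns a, a|b, a|b|c and then stops instead of recursing forever.
```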

How to create a view with 14000 Columns in it?

I have two tables, Shop and Item, and a third table, Shop_Item_Mapping, which shows the availability of an item in a shop along with the cost of the item in that shop.
Some sample data of Item would be (1, Candy), where 1 is the id and Candy is the name of the item:
(2 , Chocolate)
(3 , Chair)
(4 , Mobile)
(5 , Bulb)
Some sample data of Shop table would be
(1 , Address Of Shop)
(2 , Address Of Shop)
(3 , Address of the shop)
Now my mapping table shows which item is available in which shop and at what cost.
Shop_Item_Mapping ( Shop_id , Item_Id , Cost of Item).
So my mapping table has these entries:
SID , IID , Cost
(1 , 1, 5)
(1 , 2 ,10)
(1 ,4 ,2300)
(2 ,3 ,50)
(2 , 5 ,10)
(3 , 1 , 4)
(3 , 2 , 5 )
(3 , 4 , 2500 )
(3 , 5 , 12 )
Now I have a query: I want all shops which have both Mobile (id = 4) and Chocolate (id = 2), with mobile price < 3000 and chocolate price less than 7.
I am trying to make a view with data like this:
Shop_ID , I1 , I2 , I3 , I4 , I5, where I1, I2, I3, I4, I5 are the ids of the items and the value of each is the cost of that item in the shop.
So my view would be
(1 , 5 , 10 , NULL , 2300 , NULL )
(2 , NULL , NULL , 50 , NULL , 10)
(3 , 4 , 5 , NULL , 2500 , 12 ).
I am able to do this when there are only a few items. But if I have more than 15000 items in my repository, can I create a view with that many columns?
Seriously? 14,000 columns in a view? You have a serious design issue here. However, if you want to have a go, try this dynamic pivot query; it works with the limited data you have provided:
DECLARE @ColumnList VARCHAR(MAX)
DECLARE @SQL VARCHAR(MAX)
-- Create a list of distinct Item IDs which will become column headers
SELECT @ColumnList = COALESCE(@ColumnList + ', ', '') + 'ItemID' + CAST(I.ItemID AS VARCHAR(12))
FROM (SELECT DISTINCT ItemID FROM Item) I
SET @SQL = '
SELECT
ShopID, ' + @ColumnList + '
FROM
(
SELECT
s.ShopID,
ItemID = ''ItemID'' + CAST(i.ItemID AS VARCHAR(12)),
sim.ItemCost
FROM
dbo.Shop_Item_Mapping AS sim
JOIN dbo.Shop AS s ON sim.ShopID = s.ShopID
JOIN dbo.Item AS i ON sim.ItemID = i.ItemID
) T
PIVOT
(
MIN(ItemCost)
FOR ItemID IN (' + @ColumnList + ')
) AS PVT'
EXEC (@SQL)
Edited field names as per updated question.
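Separately, the concrete query in the question (shops that stock both Mobile, id 4, under 3000 and Chocolate, id 2, under 7) doesn't need a wide view at all; a plain self-join on the mapping table answers it. A sketch using the ShopID/ItemID/ItemCost names from the pivot above:

```sql
SELECT sim4.ShopID
FROM   dbo.Shop_Item_Mapping AS sim4          -- the Mobile row
JOIN   dbo.Shop_Item_Mapping AS sim2          -- the Chocolate row
       ON sim2.ShopID = sim4.ShopID
WHERE  sim4.ItemID = 4 AND sim4.ItemCost < 3000
AND    sim2.ItemID = 2 AND sim2.ItemCost < 7;
```

With the sample data this returns shop 3 only (shop 1 fails the chocolate price test), and it scales to any number of items without ever widening a view.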