Get length from item set - ampl

I have a set of data in this format:
# Input data for items in the format (i, l, w, h)
# i for item, l for length, w for width, h for height
set itemData :=
271440 290 214 361
1504858 394 194 114
4003733 400 200 287
4012512 396 277 250
4013886 273 221 166;
I am trying to get the lengths of each item, using the following code
set IL = setof {i in items, (i,l,w,h) in itemData} (i,l); #length of item i
However, this method does not allow me to access the individual item length.
What I am trying to do is to have
display IL[271440] = 290;
How can I go about doing this?

Careful with terminology. In AMPL terms, that table isn't a "set". You have a table of parameters. Your sets are the row and column indices for that table: {"l","w","h"} for the columns, and item ID numbers for the rows.
In AMPL it would be handled something like this:
(.mod part)
set items;
set attributes := {"l","w","h"};
param itemData{items, attributes};
(.dat part)
set items :=
271440
1504858
4003733
4012512
4013886
;
param itemData: l w h :=
271440 290 214 361
1504858 394 194 114
4003733 400 200 287
4012512 396 277 250
4013886 273 221 166
;
You can then do:
ampl: display itemData[271440,"l"];
itemData[271440,'l'] = 290
I think it's possible to define set "items" at the same time as itemData and avoid the need to duplicate the ID numbers. Section 9.2 of the AMPL Book shows how to do this for a parameter that has a single index set, but I'm not sure of the syntax for doing this when you have two index sets as above. (If anybody does know, please add it!)
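Outside AMPL, the row/column-indexed parameter idea can be sketched in plain Python (illustrative names only; this just mirrors the `param itemData{items, attributes}` structure above as a dict keyed by (item, attribute)):

```python
# Rows of the original table: (item id, l, w, h)
rows = [
    (271440, 290, 214, 361),
    (1504858, 394, 194, 114),
    (4003733, 400, 200, 287),
    (4012512, 396, 277, 250),
    (4013886, 273, 221, 166),
]
attributes = ["l", "w", "h"]

# Build the lookup: itemData[(item, attr)] -> value,
# analogous to AMPL's itemData[271440, "l"].
itemData = {(i, a): v
            for (i, *vals) in rows
            for a, v in zip(attributes, vals)}

print(itemData[(271440, "l")])  # 290
```

Note that the item set falls out of the same data (`{i for (i, *_) in rows}`), which is the deduplication the answer above wishes AMPL's data syntax would do for the two-index case.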

Related

translate Dataframe using crosswalk in julia

I have a very large dataframe (original_df) with columns of codes
14 15
21 22
18 16
And a second dataframe (crosswalk) which maps 'old_codes' to 'new_codes'
14 104
15 105
16 106
18 108
21 201
22 202
Of course, the resultant df (resultant_df) that I would like would have values:
104 105
201 202
108 106
I am aware of two ways to accomplish this. First, I could iterate through each code in original_df, find the code in crosswalk, then rewrite the corresponding cell in original_df with the translated code from crosswalk. The faster and more natural option would be to leftjoin() each column of original_df on 'old_codes'. Unfortunately, it seems I have to do this separately for each column, and then delete each column after its conversion column has been created -- this feels unnecessarily complicated. Is there a simpler way to convert all of original_df at once using the crosswalk?
You can do the following (I am using column numbers as you have not provided column names):
using DataFrames

d = Dict(crosswalk[!, 1] .=> crosswalk[!, 2])
resultant_df = select(original_df, [i => ByRow(x -> d[x]) for i in 1:ncol(original_df)], renamecols=false)
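The core idea (build an old-to-new dict once, then map every cell through it) is language-independent; here is a minimal plain-Python sketch of the same transformation on the question's sample values:

```python
# Crosswalk as a dict: old code -> new code.
crosswalk = {14: 104, 15: 105, 16: 106, 18: 108, 21: 201, 22: 202}

original = [
    [14, 15],
    [21, 22],
    [18, 16],
]

# Apply the dict to every cell of every column at once.
resultant = [[crosswalk[x] for x in row] for row in original]
print(resultant)  # [[104, 105], [201, 202], [108, 106]]
```

A single dict lookup per cell avoids both the per-column `leftjoin()` bookkeeping and the row-by-row search the question describes.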

.replace only replacing first value

I have the following code:
df_demo['Age'] = df_demo['Age'].replace([23842674135270370,
23842674044440370, 23842674044420370, 23842674044430370],
['18-24', '25-34', '35-44', '45+'])
(The numbers are ad id tags, and I'm trying to replace them to the age groups they are targeting.)
The code is only reading the first number and replacing it (with 18-24). The rest of the numbers are not being read and replaced. If I flip the order of the numbers (e.g. move the 25-34 pairing to the first position), it replaces that first pairing but none of the others.
I have exactly the same construction for .replace() -- using two lists within the parentheses -- further up in my program, and it works perfectly. But this one does not, and I can't figure out why.
What works for me is converting the Age column to strings via the dtype parameter, and then replacing those strings:
df_demo = pd.read_csv('demographics - Sheet1.csv', dtype={'Age':str})
print (df_demo.tail())
Name Age Newsletter
190 191 23842674135270370 Yes
191 192 23842674135270370 Yes
192 193 23842674044420370 Yes
193 194 23842674135270370 Yes
194 195 23842674044420370 Yes
df_demo['Age'] = df_demo['Age'].replace(
['23842674135270370','23842674044440370','23842674044420370','23842674044430370'],
['18-24', '25-34', '35-44', '45+'])
print (df_demo.tail())
Name Age Newsletter
190 191 18-24 Yes
191 192 18-24 Yes
192 193 35-44 Yes
193 194 18-24 Yes
194 195 35-44 Yes
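A self-contained sketch of the string-dtype approach (with a hypothetical DataFrame standing in for the CSV, since the original file isn't available):

```python
import pandas as pd

# Hypothetical data mirroring the question's ad-id column,
# held as strings as if read with dtype={'Age': str}.
df_demo = pd.DataFrame({
    "Age": ["23842674135270370", "23842674044440370",
            "23842674044420370", "23842674044430370"],
})

# With string values, the two-list replace matches all four ids.
df_demo["Age"] = df_demo["Age"].replace(
    ["23842674135270370", "23842674044440370",
     "23842674044420370", "23842674044430370"],
    ["18-24", "25-34", "35-44", "45+"])

print(df_demo["Age"].tolist())  # ['18-24', '25-34', '35-44', '45+']
```

The lists must match the stored values exactly, which is why forcing a consistent string dtype at read time makes the replacement reliable.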

Update Field With Multiple Values Based on Multiple Options

I have a table "Master" with fields called "DFM" and "Target". I ideally need 1 UPDATE query that will populate the "Target" field based on the value of DFM as below:
DFM Target
50001 85
50009 255
50011 233
50012 290
50062 183
50063 150
50064 159.5
50142 187
50143 174
50179 284.25
50180 195.75
50286 157.25
50287 231.25
For example if the DFM value is 50142 it should UPDATE the field for that row with 187.
So can this be done with 1 query, or do I need 13?
I only know the long-winded way, i.e.
UPDATE Master SET Target = 85 WHERE DFM = 50001
I don't really want 13 queries though.
You can use a switch:
update master
set target = switch(dfm = 50001, 85,
dfm = 50009, 255,
. . .
)
where dfm in (50001, 50009, . . .);
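Since the full Switch() call gets long with 13 pairs, one option is to generate the query text from the mapping rather than typing it out. A sketch (the table and column names come from the question; the generation itself is just string building):

```python
# DFM -> Target mapping from the question's table.
targets = {
    50001: 85, 50009: 255, 50011: 233, 50012: 290, 50062: 183,
    50063: 150, 50064: 159.5, 50142: 187, 50143: 174, 50179: 284.25,
    50180: 195.75, 50286: 157.25, 50287: 231.25,
}

# Build the Switch(cond1, val1, cond2, val2, ...) argument list.
cases = ", ".join(f"DFM = {k}, {v}" for k, v in targets.items())
keys = ", ".join(str(k) for k in targets)
sql = f"UPDATE Master SET Target = Switch({cases}) WHERE DFM IN ({keys});"

print(sql)
```

This emits the single UPDATE statement, with the WHERE clause restricting it to the 13 known DFM values so unmatched rows are left untouched.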

Select minimum value from column A where column B is not in an array

I'm trying to select accesses for patients where d11.xblood is a minimum value grouped by d11.xpid - and where d11.xcaccess_type is not 288, 289, or 292. (d11.xblood is a chronological index of accesses.)
d11.xpid: Patient ID (int)
d11.xblood: Unique chronological index of patients' accesses (int)
d11.xcaccess_type: Unique identifier for accesses (int)
I want to report one row for each d11.xpid where d11.xblood is the minimum (initial access) for its respective d11.xpid . Moreover, I want to exclude the row if the initial access for a d11.xpid has a d11.xcaccess_type value of 288, 289 or 292.
I have tried several variations of this in the Select Expert:
{d11.xblood} = Minimum({d11.xblood},{d11.xpid}) and
not ({d11.xcaccess_type} in [288, 289, 292])
This correctly selects rows with the initial access but eliminates rows where the current access is not in the array. I only want to eliminate rows where the initial access is not in the array. How can I accomplish this?
Sample table:
xpid xblood xcaccess_type
---- ------ -------------
1 98 400
1 49 300
1 152 288
2 33 288
2 155 300
2 70 400
3 40 300
3 45 400
Sample desired output:
xpid xblood xcaccess_type
---- ------ -------------
1 49 300
3 40 300
See that xpid = 2 is not in the output because its minimum value of xblood had an xcaccess_type = 288 which is excluded. Also see that even though xpid = 1 has an xcaccess_type = 288, because there is a lower value of xblood for xpid = 1 where xcaccess_type not in (288,289,292) it is still included.
If you don't want to write a stored procedure or custom SQL to handle this, you could add another Group. Assuming your deepest group (the one closest to the Details section) is sorting based on xpid, you could add a group inside that one which sorts the xcaccess_type from lowest to highest.
Suppress the header and footer for the new group then add this clause to the details section:
{d11.xpid} = PREVIOUS({d11.xpid})
OR
{d11.xcaccess_type} in [288, 289, 292]
This should modify your report to only ever display the records with the lowest access value per person. And if the lowest access value is one of the three forbidden values, no records will show for that xpid.
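The selection rule itself is easy to check against the question's sample data. Here is a small Python sketch of the intended logic (take each xpid's minimum-xblood row, then drop it if that initial access has an excluded type):

```python
# Sample rows from the question: (xpid, xblood, xcaccess_type)
rows = [
    (1, 98, 400), (1, 49, 300), (1, 152, 288),
    (2, 33, 288), (2, 155, 300), (2, 70, 400),
    (3, 40, 300), (3, 45, 400),
]
excluded = {288, 289, 292}

result = []
for xpid in sorted({r[0] for r in rows}):
    # The initial access is the row with the minimum xblood for this xpid.
    first = min((r for r in rows if r[0] == xpid), key=lambda r: r[1])
    # Keep it only if the initial access type is not excluded.
    if first[2] not in excluded:
        result.append(first)

print(result)  # [(1, 49, 300), (3, 40, 300)]
```

This reproduces the desired output: xpid 2 disappears because its initial access (xblood 33) has type 288, while xpid 1 survives because its initial access (xblood 49) is type 300, regardless of its later 288 row.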

Multiply by a cell formula

I am calculating an Excel table in VBA and leaving the results as values because of the volume of data. But then I have to multiply this range by 1 or 0 depending on a flag cell for each column.
The problem is that I don't want to multiply by 0, because I would lose my data and have to recalculate it (which I want to avoid).
So, after my macro I get a next table, for example:
var.1 var.2 var.3
0 0 0
167 92 549
159 87 621
143 95 594
124 61 463
0 0 0
5 12 75
in a Range("A2:C9").
In Range("A1:C1") I am going to have 1 or 0 values that will be changing, so I need my Range("A2:C9") to be like:
var.1 var.2 var.3
=0*A$1 =0*B$1 =0*C$1
=167*A$1 =92*B$1 =549*C$1
...
Is it possible to do this with a macro? Thanks!
Okay, so what I would do here is first copy the original data to another sheet or set of columns so that it is always preserved. Then use this formula:
=IF($A$1 = 0, 0, E3)
Instead of E3, reference the corresponding cell in the data that you copied.
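The copy-then-mask idea can be illustrated outside Excel too; a small Python sketch of the same logic (flag values and data taken from the question, structure hypothetical):

```python
# The 0/1 flags in A1:C1, and the calculated values kept as a separate copy.
flags = [1, 0, 1]
original = [
    [0, 0, 0],
    [167, 92, 549],
    [159, 87, 621],
]

# Displayed value is the original where the column's flag is 1, else 0.
# `original` itself is never overwritten, so flipping a flag back to 1
# restores the data without recalculating anything.
displayed = [[v if f else 0 for v, f in zip(row, flags)]
             for row in original]

print(displayed)  # [[0, 0, 0], [167, 0, 549], [159, 0, 621]]
```

This is exactly what the IF formula does on the worksheet: the preserved copy plays the role of the second sheet, and the flag row drives what is shown.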