Alternative to Expect.all using elm-test?

I'm new to Elm and I have a question about elm-test. I'm trying to have multiple expectations in the same test, but I couldn't find out how. Here is what I've done for now, but it's not very expressive:
suite : Test
suite =
    describe "2048-elm"
        [ test "moveLeftWithZero" <|
            \_ ->
                let
                    expectedCases =
                        [ ( [ 2, 0, 0, 2 ], [ 4, 0, 0, 0 ] )
                        , ( [ 2, 2, 0, 4 ], [ 4, 4, 0, 0 ] )
                        , ( [ 0, 0, 0, 4 ], [ 4, 0, 0, 0 ] )
                        , ( [ 0, 0, 2, 4 ], [ 2, 4, 0, 0 ] )
                        , ( [ 2, 4, 2, 4 ], [ 2, 4, 2, 4 ] )
                        , ( [ 2, 2, 2, 2 ], [ 4, 4, 0, 0 ] )
                        ]

                    toTest =
                        List.map (\expected -> ( Tuple.first expected, Main.moveLeftWithZero (Tuple.first expected) )) expectedCases
                in
                Expect.equal expectedCases toTest
        ]
I tried Expect.all, but it does not seem to do what I want.
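For reference, a common elm-test pattern is to generate one test per case with List.map, so each case gets its own name and its own failure message. A sketch, assuming Main.moveLeftWithZero : List Int -> List Int and the same imports as above:

suite : Test
suite =
    describe "2048-elm"
        (List.map
            (\( input, expected ) ->
                test ("moveLeftWithZero " ++ Debug.toString input) <|
                    \_ -> Expect.equal expected (Main.moveLeftWithZero input)
            )
            [ ( [ 2, 0, 0, 2 ], [ 4, 0, 0, 0 ] )
            , ( [ 2, 2, 0, 4 ], [ 4, 4, 0, 0 ] )
            , ( [ 0, 0, 0, 4 ], [ 4, 0, 0, 0 ] )
            , ( [ 0, 0, 2, 4 ], [ 2, 4, 0, 0 ] )
            , ( [ 2, 4, 2, 4 ], [ 2, 4, 2, 4 ] )
            , ( [ 2, 2, 2, 2 ], [ 4, 4, 0, 0 ] )
            ]
        )

Expect.all, by contrast, runs several expectations against a single subject, which is why it does not fit a list of input/output pairs.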

Related

Solving ax=b with numpy linalg gives "Last 2 dimensions of the array must be square"

I have 3 arrays of shapes (157, 4), (157,) and (4,) respectively. The first (an_array) contains the quantities of the items, the second (the item matrix) contains the item names, and the third (the price matrix) contains the prices. I am trying to work out the price of the individual items, but I get the error "Last 2 dimensions of the array must be square" every time I try to do
X = np.linalg.solve(an_array,price)
I am trying to solve ax=b, but unfortunately it's not working out. Any help would be very much appreciated.
**Price matrix**
array([499.25, 381. , 59.5 , 290. , 128.5 , 305.25, 336.25, 268.5 ,
395.25, 136.5 , 194.5 , 498.75, 62.25, 312.75, 332. , 171. ,
402.5 , 144.5 , 261.5 , 242.75, 381. , 371. , 355.5 , 373. ,
65.5 , 228.75, 208.75, 204.5 , 86.5 , 143. , 70.5 , 36.5 ,
82. , 302.5 , 365.5 , 158.5 , 316.5 , 508. , 86.5 , 359.75,
25.5 , 345.5 , 304.5 , 491.25, 181.5 , 343.75, 383.5 , 283.5 ,
140.25, 426. , 386. , 337.25, 415.5 , 268.25, 406. , 149.5 ,
200. , 122. , 510.25, 280. , 406.75, 191.25, 198. , 114.5 ,
211.5 , 241.75, 195.75, 276.25, 276. , 165.25, 102. , 425. ,
195. , 132.25, 86.75, 446.5 , 318. , 290.75, 286. , 232. ,
520.5 , 382.75, 94. , 482.75, 233.25, 262. , 368.25, 438.75,
433.5 , 334.5 , 360. , 422. , 191. , 292.25, 151.75, 440.25,
370. , 105.25, 122. , 455.5 , 363. , 436. , 147.5 , 548.5 ,
365.75, 185.5 , 348. , 342.5 , 509.25, 465.5 , 380.25, 361. ,
271.25, 414.25, 366.75, 145.5 , 348. , 471.25, 254.5 , 329. ,
441. , 253.25, 448.5 , 142. , 312.5 , 350. , 94. , 333. ,
418. , 194.5 , 543. , 212.5 , 66.5 , 370. , 423. , 164. ,
393.25, 299.75, 529.5 , 166.25, 228.5 , 476. , 373. , 383.25,
409. , 241. , 107.75, 194.5 , 350. , 221.75, 633.25, 444.25,
155.25, 76. , 542. , 346. , 159.75])
**Item matrix**
array(['Cereal', 'Coffee', 'Eggs', 'Pancake'], dtype='<U7')
**an_array matrix**
array([[ 7, 6, 6, 9],
[ 6, 7, 0, 10],
[ 0, 7, 1, 0],
[ 4, 0, 10, 0],
[ 0, 0, 4, 2],
[10, 7, 0, 3],
[ 9, 1, 4, 3],
[ 9, 8, 0, 2],
[10, 0, 4, 5],
[ 6, 3, 0, 0],
[ 0, 1, 9, 0],
[ 3, 9, 9, 9],
[ 2, 0, 0, 1],
[ 6, 0, 6, 3],
[ 9, 0, 3, 4],
[ 8, 2, 0, 0],
[ 9, 0, 0, 10],
[ 5, 0, 0, 2],
[ 8, 7, 3, 0],
[ 0, 1, 6, 5],
[ 7, 0, 3, 8],
[ 0, 5, 10, 6],
[ 7, 1, 10, 0],
[ 4, 7, 10, 2],
[ 0, 0, 1, 2],
[ 2, 6, 0, 7],
[ 6, 4, 0, 3],
[ 8, 0, 0, 2],
[ 0, 0, 2, 2],
[ 3, 7, 0, 2],
[ 0, 9, 1, 0],
[ 1, 3, 0, 0],
[ 3, 4, 0, 0],
[ 4, 0, 0, 10],
[ 4, 9, 7, 4],
[ 6, 7, 0, 0],
[ 6, 9, 7, 0],
[ 6, 0, 10, 8],
[ 0, 0, 2, 2],
[ 6, 0, 4, 7],
[ 1, 1, 0, 0],
[ 3, 9, 7, 4],
[ 0, 1, 10, 4],
[ 6, 1, 10, 7],
[ 0, 2, 6, 2],
[ 4, 1, 7, 5],
[ 0, 3, 9, 8],
[ 3, 2, 8, 2],
[ 0, 10, 3, 1],
[ 4, 0, 8, 8],
[ 2, 0, 8, 8],
[ 7, 8, 2, 5],
[ 7, 2, 2, 10],
[ 0, 9, 3, 7],
[ 4, 4, 6, 8],
[ 5, 9, 0, 0],
[ 0, 2, 9, 0],
[ 4, 0, 2, 0],
[10, 9, 5, 7],
[ 4, 4, 0, 8],
[ 9, 10, 5, 3],
[ 4, 0, 0, 5],
[ 1, 0, 0, 8],
[ 1, 1, 0, 4],
[ 4, 1, 6, 0],
[ 0, 8, 2, 7],
[ 1, 5, 6, 1],
[ 4, 4, 3, 5],
[ 9, 6, 3, 0],
[ 2, 3, 2, 3],
[ 4, 4, 0, 0],
[ 8, 10, 10, 0],
[ 6, 6, 2, 0],
[ 0, 0, 1, 5],
[ 1, 0, 0, 3],
[ 7, 0, 4, 10],
[ 4, 8, 5, 4],
[ 9, 8, 0, 3],
[ 4, 6, 4, 4],
[10, 2, 1, 0],
[10, 3, 6, 8],
[ 6, 8, 3, 7],
[ 0, 9, 0, 2],
[ 8, 7, 4, 9],
[ 0, 6, 0, 9],
[ 0, 0, 4, 8],
[ 0, 0, 8, 9],
[ 8, 8, 8, 3],
[ 3, 5, 8, 8],
[ 7, 3, 0, 8],
[ 9, 6, 7, 0],
[ 8, 0, 4, 8],
[ 9, 2, 0, 0],
[ 6, 3, 0, 7],
[ 0, 4, 3, 3],
[10, 1, 8, 3],
[ 5, 10, 6, 4],
[ 0, 7, 0, 3],
[ 4, 0, 2, 0],
[10, 6, 0, 10],
[ 4, 0, 5, 8],
[10, 0, 7, 4],
[ 1, 7, 0, 4],
[10, 0, 6, 10],
[ 8, 10, 4, 3],
[ 9, 1, 0, 0],
[ 9, 0, 8, 0],
[ 6, 0, 0, 10],
[ 9, 1, 8, 7],
[ 1, 10, 8, 10],
[ 0, 6, 7, 9],
[10, 5, 0, 6],
[ 0, 10, 5, 5],
[ 8, 6, 1, 9],
[ 0, 4, 9, 7],
[ 4, 0, 1, 2],
[ 3, 9, 5, 6],
[10, 10, 5, 5],
[ 5, 0, 1, 6],
[ 9, 8, 5, 0],
[ 8, 3, 2, 10],
[ 0, 2, 2, 9],
[ 5, 9, 10, 4],
[ 6, 4, 0, 0],
[ 8, 1, 7, 0],
[ 6, 7, 7, 2],
[ 0, 9, 0, 2],
[10, 8, 0, 4],
[10, 1, 8, 2],
[ 0, 3, 0, 8],
[10, 8, 10, 4],
[ 0, 0, 8, 2],
[ 0, 4, 0, 2],
[ 8, 0, 10, 0],
[ 7, 0, 5, 8],
[ 6, 8, 0, 0],
[ 8, 6, 0, 9],
[10, 6, 0, 3],
[ 9, 4, 5, 10],
[ 0, 10, 0, 5],
[ 6, 4, 2, 2],
[ 9, 10, 3, 8],
[ 8, 3, 3, 6],
[ 6, 0, 3, 9],
[ 3, 1, 10, 6],
[ 0, 0, 3, 8],
[ 4, 1, 0, 1],
[ 5, 1, 0, 4],
[ 7, 0, 10, 0],
[ 5, 10, 0, 3],
[10, 8, 9, 9],
[10, 6, 9, 1],
[ 0, 8, 0, 5],
[ 0, 10, 1, 0],
[ 8, 7, 10, 6],
[ 0, 0, 8, 8],
[ 0, 5, 1, 5]])
You have 4 unknowns appearing in 157 equations, and you need to solve for those 4 unknowns. Such an overdetermined system generally has no exact solution and is not handled by a linear system solver; it is instead solved as a least-squares problem using np.linalg.lstsq:
X = np.linalg.lstsq(an_array, price)[0]
print(X)
Using lstsq as recommended by @Ahmed:
In [88]: X = np.linalg.lstsq(quant, price)[0]
C:\Users\paul\AppData\Local\Temp\ipykernel_6428\995094510.py:1: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`.
X = np.linalg.lstsq(quant, price)[0]
In [90]: X
Out[90]: array([20. , 5.5 , 21. , 22.25])
Multiplying the quantities by these prices and summing gives the same numbers as your price array:
In [91]: quant@X
Out[91]:
array([499.25, 381. , 59.5 , 290. , 128.5 , 305.25, 336.25, 268.5 ,
395.25, 136.5 , 194.5 , 498.75, 62.25, 312.75, 332. , 171. ,
... , 159.75])
So there's no noise in these prices; the values are exact. That means you could use solve with any 4 (linearly independent) rows of quant and the corresponding price values, which addresses the "must be square" error: quant[:4,:] has shape (4,4).
In [92]: np.linalg.solve(quant[:4,:], price[:4])
Out[92]: array([20. , 5.5 , 21. , 22.25])
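Putting it together, a minimal end-to-end sketch (assuming an_array and price are the NumPy arrays shown above; rcond=None just silences the FutureWarning):

import numpy as np

# Least-squares fit of the 4 unit prices against all 157 equations.
X, residuals, rank, _ = np.linalg.lstsq(an_array, price, rcond=None)

# The fit is exact here: the predicted totals reproduce the price array,
# and any full-rank 4x4 subsystem gives the same solution as lstsq.
assert np.allclose(an_array @ X, price)
assert np.allclose(np.linalg.solve(an_array[:4, :], price[:4]), X)
print(X)  # [20.    5.5  21.   22.25]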

Identify the space group isomorphism between the group created by AffineCrystGroup and the one given by the cryst package

I use the following code snippet to create the diamond space group in GAP with the help of the cryst package:
gap> M1:=[[0, 0, 1, 0],[1, 0, 0, 0],[0, -1, 0, 0],[1/4, 1/4, 1/4, 1]];;
gap> M2:=[[0,0,-1,0],[0,-1,0,0],[1,0,0,0],[0,0,0,1]];;
gap> S:=AffineCrystGroup([M1,M2]);
<matrix group with 2 generators>
The above code snippet comes from page 21 of the book Computer Algebra and Materials Physics, as shown below:
# As for the diamond case, in the GAP computation, the
# crystallographic group is defined as follows. (The minimal
# generating set is used for simplicity.)
gap> M1:=[[0,0,1,0],[1,0,0,0],[0,-1,0,0],[1/4,1/4,1/4,1]];;
gap> M2:=[[0,0,-1,0],[0,-1,0,0],[1,0,0,0],[0,0,0,1]];;
gap> S:=AffineCrystGroup([M1,M2]);
<matrix group with 2 generators>
gap> P:=PointGroup(S);
Group([ [ [ 0, 0, 1 ], [ 1, 0, 0 ], [ 0, -1, 0 ] ],
[ [ 0, 0, -1 ], [ 0, -1, 0 ], [ 1, 0, 0 ] ] ])
It's well known that diamond has the space group Fd-3m (No. 227). I wonder how I can verify this fact in GAP after I've created the above AffineCrystGroup.
Based on the command ConjugatorSpaceGroups provided by the cryst package, as described here, I figured out the following solution:
gap> M1OnRight:=[[0,0,1,0],[1,0,0,0],[0,-1,0,0],[1/4,1/4,1/4,1]];;
gap> M2OnRight:=[[0,0,-1,0],[0,-1,0,0],[1,0,0,0],[0,0,0,1]];;
gap> SG227OnRight:=AffineCrystGroupOnRight([M1OnRight,M2OnRight]);
<matrix group with 2 generators>
gap> ConjugatorSpaceGroups(SG227OnRight, SpaceGroupOnRightIT(3,227));
[ [ 1, 0, 0, 0 ], [ 0, 1, 0, 0 ], [ 0, 0, 1, 0 ], [ 3/8, 3/8, 7/8, 1 ] ]
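ConjugatorSpaceGroups(S1, S2) returns an affine matrix C with S1^C = S2, and fail if the two groups are not conjugate. Since it returns a conjugator here rather than fail, the group constructed above is affinely equivalent to SpaceGroupOnRightIT(3,227), i.e. it is indeed of type Fd-3m (No. 227).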

PyTorch's gather, squeeze and unsqueeze to TensorFlow Keras

I am migrating code from PyTorch to TensorFlow, and in the function that calculates the loss I have the line below that I need to migrate.
state_action_values = net(t_states_features).gather(1, actions_v.unsqueeze(-1)).squeeze(-1)
I found tf.gather and tf.gather_nd, and I am not sure which is more suitable or how it should be used; also, is tf.expand_dims the alternative to unsqueeze?
In an attempt to get a clearer view of the line's result, I split it into multiple parts with print statements.
print("net result")
state_action_values = net(t_states_features)
print(state_action_values)
print("gather result")
state_action_values = state_action_values.gather(1, actions_v.unsqueeze(-1))
print(state_action_values)
print("last squeeze")
state_action_values = state_action_values.squeeze(-1)
print(state_action_values)
net result
tensor([[ 45.6878, -14.9495, 59.3737],
[ 33.5737, -10.4617, 39.0078],
[ 67.7197, -22.8818, 85.7977],
[ 94.7701, -33.2053, 120.5519],
[ nan, nan, nan],
[ 84.7324, -29.2101, 108.0821],
[ 67.7193, -22.7702, 86.9558],
[113.6835, -38.7149, 142.6167],
[ 61.9260, -20.1968, 79.8010],
[ 51.6152, -17.7391, 66.0719],
[ 73.6565, -21.5699, 98.9463],
[ 84.0761, -26.5016, 107.6888],
[ 60.9459, -20.1257, 76.4105],
[103.2883, -35.4035, 130.4503],
[ 37.1156, -13.5180, 47.1067],
[ nan, nan, nan],
[ 55.6286, -18.5239, 71.9837],
[ 55.3858, -18.7892, 71.1197],
[ 50.2419, -17.2959, 66.7059],
[ 82.5715, -30.0302, 108.4984],
[ -0.8662, -1.1861, 1.6033],
[112.4620, -38.6416, 142.4556],
[ 57.8702, -19.8080, 74.7656],
[ 45.8418, -15.7436, 57.3367],
[ 81.6596, -27.5002, 104.6002],
[ 57.1507, -21.8001, 67.7933],
[ 35.0414, -11.8199, 47.6573],
[ 67.7085, -23.1017, 85.4623],
[ 40.6284, -12.4578, 58.9603],
[ 68.6394, -23.1481, 87.0832],
[ 27.0549, -8.6635, 34.0150],
[ 25.4071, -8.5511, 34.0285],
[ 62.9161, -22.1693, 78.7965],
[ 85.4505, -28.1487, 108.6252],
[ 67.6665, -23.2376, 85.7117],
[ 60.7806, -20.2784, 77.1022],
[ 66.5209, -21.5674, 88.5561],
[ 61.6637, -20.9891, 72.3873],
[ 45.1634, -15.4678, 61.4886],
[ 66.8119, -23.1250, 85.6189],
[ nan, nan, nan],
[ 67.8166, -24.8342, 84.6706],
[ 86.2114, -29.5941, 107.8025],
[ 66.2716, -23.3309, 83.9700],
[101.2122, -35.3554, 127.4772],
[ 61.0749, -19.4720, 78.5588],
[ 50.4058, -16.1262, 63.1010],
[ 27.7543, -9.3767, 35.7448],
[ 67.7810, -23.4962, 83.6030],
[ 35.0103, -11.7238, 44.7983],
[ 55.7402, -19.0223, 70.3627],
[ 67.9733, -22.0783, 85.1893],
[ 60.5253, -20.3157, 79.7312],
[ 67.2404, -21.5205, 81.4499],
[ 57.9502, -20.7747, 70.9109],
[ 87.6536, -31.4256, 112.6491],
[ 90.3668, -30.7755, 116.6192],
[ 59.0660, -19.6988, 75.0723],
[ 50.0969, -17.4135, 62.6556],
[ 28.8703, -9.0950, 34.5749],
[ 68.4053, -22.0715, 88.2302],
[ 69.1397, -21.4236, 84.7833],
[ 23.8506, -8.1834, 30.8318],
[ 58.4296, -20.2432, 73.8116],
[ 87.5317, -29.0606, 110.0389],
[ nan, nan, nan],
[ 88.6387, -30.6154, 112.4239],
[ 51.6089, -16.1073, 66.2757],
[ 94.3989, -32.1473, 119.0358],
[ 82.7449, -30.7778, 102.8537],
[ 74.3067, -26.6585, 98.2536],
[ 77.0881, -26.5706, 98.3553],
[ 28.5688, -9.2949, 41.1165],
[ 86.1560, -26.9364, 107.0244],
[ 41.8914, -16.9703, 57.3840],
[ 88.8886, -29.7008, 108.2697],
[ 61.1243, -20.7566, 77.2257],
[ 85.1174, -28.7558, 107.3853],
[ 81.7256, -27.9047, 104.5006],
[ 51.2663, -16.5880, 67.1428],
[ 46.9150, -12.7457, 61.3240],
[ 36.1758, -12.9769, 47.7178],
[ 85.5846, -29.4141, 107.9649],
[ 59.9424, -20.8349, 75.3359],
[ 62.6516, -22.1235, 81.6903],
[104.7664, -34.5876, 129.9478],
[ 64.4671, -23.3980, 83.9093],
[ 69.6928, -23.6567, 89.6024],
[ 60.4407, -19.6136, 75.9350],
[ 33.4921, -10.3434, 44.9537],
[ 57.9112, -19.4174, 74.3050],
[ 24.8262, -9.3637, 30.1057],
[ 85.3776, -28.9097, 110.1310],
[ 63.8175, -22.3843, 81.0308],
[ 34.6040, -12.3217, 46.0356],
[ 88.3740, -29.5049, 110.2897],
[ 66.8196, -22.5860, 85.5386],
[ 58.9767, -22.0601, 78.7086],
[ 83.2090, -26.3499, 113.5105],
[ 54.8450, -17.7980, 68.1161],
[ nan, nan, nan],
[ 85.0846, -29.2494, 107.6780],
[ 76.9251, -26.2295, 98.4755],
[ 98.2907, -32.8878, 124.9192],
[ 91.1387, -30.8262, 115.3978],
[ 73.1062, -24.9450, 90.0967],
[ 27.6564, -8.6114, 35.4470],
[ 71.8508, -25.1529, 95.5165],
[ 69.7275, -20.1357, 86.9620],
[ 67.0907, -21.9245, 84.8853],
[ 77.3163, -25.5980, 92.7700],
[ 63.0082, -21.0345, 78.7311],
[ 68.0553, -22.4280, 84.8031],
[ 5.8148, -2.3171, 8.0620],
[103.3399, -35.1769, 130.7801],
[ 54.8769, -18.6822, 70.4657],
[ 58.4446, -18.9764, 75.5509],
[ 91.0071, -31.2706, 112.6401],
[ 84.6577, -29.2644, 104.6046],
[ 45.4887, -15.8309, 59.0498],
[ 56.3384, -18.9264, 78.8834],
[ 63.5109, -21.3169, 81.5144],
[ 79.4635, -29.8681, 100.5056],
[ 27.6559, -10.0517, 35.6012],
[ 76.3909, -24.1689, 93.6133],
[ 34.3802, -11.5272, 45.8650],
[ 60.3553, -20.1693, 76.5371],
[ 56.0590, -18.6468, 69.8981]], grad_fn=<AddmmBackward0>)
gather result
tensor([[ 59.3737],
[-10.4617],
[ 67.7197],
[ 94.7701],
[ nan],
[-29.2101],
[ 67.7193],
[-38.7149],
[-20.1968],
[ 66.0719],
[ 98.9463],
[107.6888],
[-20.1257],
[-35.4035],
[ 47.1067],
[ nan],
[ 55.6286],
[-18.7892],
[ 66.7059],
[-30.0302],
[ 1.6033],
[112.4620],
[ 74.7656],
[-15.7436],
[ 81.6596],
[-21.8001],
[ 35.0414],
[-23.1017],
[ 40.6284],
[ 68.6394],
[ 34.0150],
[ 34.0285],
[ 78.7965],
[-28.1487],
[ 67.6665],
[-20.2784],
[-21.5674],
[ 72.3873],
[-15.4678],
[ 85.6189],
[ nan],
[-24.8342],
[-29.5941],
[-23.3309],
[101.2122],
[-19.4720],
[-16.1262],
[ -9.3767],
[-23.4962],
[-11.7238],
[ 70.3627],
[-22.0783],
[-20.3157],
[ 67.2404],
[-20.7747],
[112.6491],
[-30.7755],
[-19.6988],
[ 50.0969],
[ 34.5749],
[ 88.2302],
[-21.4236],
[ -8.1834],
[ 73.8116],
[110.0389],
[ nan],
[112.4239],
[-16.1073],
[-32.1473],
[-30.7778],
[ 98.2536],
[ 98.3553],
[ 28.5688],
[107.0244],
[-16.9703],
[-29.7008],
[ 77.2257],
[-28.7558],
[-27.9047],
[ 67.1428],
[-12.7457],
[ 47.7178],
[-29.4141],
[ 59.9424],
[-22.1235],
[129.9478],
[-23.3980],
[-23.6567],
[ 75.9350],
[-10.3434],
[-19.4174],
[ 30.1057],
[ 85.3776],
[ 63.8175],
[ 46.0356],
[-29.5049],
[-22.5860],
[-22.0601],
[113.5105],
[-17.7980],
[ nan],
[-29.2494],
[ 76.9251],
[-32.8878],
[115.3978],
[-24.9450],
[ 35.4470],
[ 95.5165],
[ 86.9620],
[-21.9245],
[-25.5980],
[ 78.7311],
[-22.4280],
[ 5.8148],
[103.3399],
[ 70.4657],
[ 58.4446],
[ 91.0071],
[104.6046],
[ 45.4887],
[-18.9264],
[ 63.5109],
[ 79.4635],
[-10.0517],
[ 76.3909],
[ 34.3802],
[-20.1693],
[-18.6468]], grad_fn=<GatherBackward0>)
last squeeze
tensor([ 59.3737, -10.4617, 67.7197, 94.7701, nan, -29.2101, 67.7193,
-38.7149, -20.1968, 66.0719, 98.9463, 107.6888, -20.1257, -35.4035,
47.1067, nan, 55.6286, -18.7892, 66.7059, -30.0302, 1.6033,
112.4620, 74.7656, -15.7436, 81.6596, -21.8001, 35.0414, -23.1017,
40.6284, 68.6394, 34.0150, 34.0285, 78.7965, -28.1487, 67.6665,
-20.2784, -21.5674, 72.3873, -15.4678, 85.6189, nan, -24.8342,
-29.5941, -23.3309, 101.2122, -19.4720, -16.1262, -9.3767, -23.4962,
-11.7238, 70.3627, -22.0783, -20.3157, 67.2404, -20.7747, 112.6491,
-30.7755, -19.6988, 50.0969, 34.5749, 88.2302, -21.4236, -8.1834,
73.8116, 110.0389, nan, 112.4239, -16.1073, -32.1473, -30.7778,
98.2536, 98.3553, 28.5688, 107.0244, -16.9703, -29.7008, 77.2257,
-28.7558, -27.9047, 67.1428, -12.7457, 47.7178, -29.4141, 59.9424,
-22.1235, 129.9478, -23.3980, -23.6567, 75.9350, -10.3434, -19.4174,
30.1057, 85.3776, 63.8175, 46.0356, -29.5049, -22.5860, -22.0601,
113.5105, -17.7980, nan, -29.2494, 76.9251, -32.8878, 115.3978,
-24.9450, 35.4470, 95.5165, 86.9620, -21.9245, -25.5980, 78.7311,
-22.4280, 5.8148, 103.3399, 70.4657, 58.4446, 91.0071, 104.6046,
45.4887, -18.9264, 63.5109, 79.4635, -10.0517, 76.3909, 34.3802,
-20.1693, -18.6468], grad_fn=<SqueezeBackward1>)
Edit 1: print of actions_v
actions_v
tensor([2, 0, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 2, 0, 2, 0, 1,
2, 0, 2, 1, 1, 0, 2, 1, 0, 0, 2, 1, 1, 1, 0, 1, 0, 1, 1, 2, 1, 1, 2, 1,
0, 2, 1, 2, 0, 2, 2, 0, 0, 1, 2, 0, 1, 2, 0, 0, 1, 1, 2, 0, 0, 2, 0, 0,
1, 1, 2, 0, 1, 0, 2, 0, 1, 0, 0, 0, 0, 1, 2, 0, 2, 0, 1, 1, 2, 1, 2, 2,
2, 0, 1, 1, 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 2, 1, 1, 0, 1, 0, 1, 2,
2, 1, 0, 2, 0, 0, 2, 1])
tf.gather_nd takes indices with the same rank as the input tensor and outputs a tensor of the values at those indices, which is what you want.
tf.gather outputs slices (you can give indices of whatever shape you want; the output tensor will just be a bunch of slices structured according to the shape of the indices), which is not what you want.
So you should first build indices that match the dimensions of the initial matrix:
indices = tf.transpose(tf.stack((tf.range(tf.shape(state_action_values)[0]),actions_v)))
And then gather_nd:
state_action_values = tf.gather_nd(state_action_values,indices)
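As a sanity check, here is a minimal self-contained sketch (assuming TensorFlow 2 with eager execution, and a made-up 3-row batch standing in for net(t_states_features) and actions_v) showing that pairing row numbers with actions_v and applying tf.gather_nd reproduces PyTorch's gather(1, actions_v.unsqueeze(-1)).squeeze(-1):

import tensorflow as tf

# Hypothetical stand-ins for net(t_states_features) and actions_v.
state_action_values = tf.constant([[45.7, -14.9, 59.4],
                                   [33.6, -10.5, 39.0],
                                   [67.7, -22.9, 85.8]])
actions_v = tf.constant([2, 0, 1])

# Pair each row number with its chosen column: [[0, 2], [1, 0], [2, 1]].
indices = tf.stack([tf.range(tf.shape(state_action_values)[0]), actions_v], axis=1)

# One value per row, already squeezed: [59.4, 33.6, -22.9].
print(tf.gather_nd(state_action_values, indices))

With tf.gather_nd no final squeeze is needed; for the unsqueeze/squeeze parts elsewhere in the port, tf.expand_dims and tf.squeeze are the direct equivalents. On recent TensorFlow versions, tf.gather(state_action_values, actions_v, batch_dims=1) should produce the same result in a single call.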

Reading JSON data with nested objects within arrays to SQL Server 2016

I'm stuck trying to read the JSON data below into SQL Server 2016. I can't return any values from the Values object onwards.
The SELECT statement shows NULL values for binWidth, minVal, nBins and type.
Also, I'm unsure how to deal with the result array, as the values do not have any keys assigned.
Any help much appreciated.
JSON data:
DECLARE @json NVARCHAR(MAX) =
'{
"Histograms": [
{
"Name": "20458-Z01-DWL",
"RegisterId": "0",
"Tags": [],
"UUID": "a4c5fa3f-ecb8-4635-8e94-5167e743b518",
"Values": [
{
"config": {
"binWidth": 50,
"minVal": 50,
"nBins": 18,
"type": "total wait"
},
"result": [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
]
}
]
},
{
"Name": "20458-Z02-DWL",
"RegisterId": "1",
"Tags": [],
"UUID": "95d57826-30f6-44c9-ad0d-6a24684fcaed",
"Values": [
{
"config": {
"binWidth": 50,
"minVal": 50,
"nBins": 18,
"type": "total wait"
},
"result": [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
]
}
]
},
{
"Name": "20458-Z03-DWL",
"RegisterId": "2",
"Tags": [],
"UUID": "90223a0e-3d1a-471f-a871-ee56da4799f5",
"Values": [
{
"config": {
"binWidth": 50,
"minVal": 50,
"nBins": 18,
"type": "total wait"
},
"result": [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
]
}
]
},
{
"Name": "20458-Z04-DWL",
"RegisterId": "3",
"Tags": [],
"UUID": "6c837def-feeb-48d5-8dcf-307b56ec44e9",
"Values": [
{
"config": {
"binWidth": 50,
"minVal": 100,
"nBins": 16,
"type": "total wait"
},
"result": [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
]
}
]
},
{
"Name": "20458-Z05-DWL",
"RegisterId": "4",
"Tags": [],
"UUID": "76bd5aa2-8860-4a2e-997d-3c83e940790f",
"Values": [
{
"config": {
"binWidth": 50,
"minVal": 100,
"nBins": 16,
"type": "total wait"
},
"result": [
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
]
}
]
}
],
"LogEntryId": 6593,
"StartTimestamp": "2020-07-20T16:05:00Z",
"Timestamp": "2020-07-20T16:06:00Z"
}'
Query to fetch items from data:
SELECT
[Name],
[RegisterId],
UUID,
binWidth,
minVal,
nBins,
[type]
FROM
OPENJSON (@json, '$.Histograms')
WITH
([Name] nvarchar(100),
[RegisterId] nvarchar(100),
UUID nvarchar(100),
[Values] nvarchar(max) AS json) AS Histograms
CROSS APPLY
OPENJSON (Histograms.[Values])
WITH
(binWidth int,
minVal int,
nBins int,
[type] nvarchar(100)) AS config
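The NULLs arise because binWidth, minVal, nBins and type sit under the nested config object, so the inner OPENJSON needs paths such as '$.config.binWidth' (or another CROSS APPLY level), as the working query below shows.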
Figured out the last question re: passing the result array into one field; edit below. Thanks again for your guidance, I have learnt a lot!
select
Histograms.[Name],
Histograms.[RegisterId],
Histograms.UUID,
config.binWidth,
config.minVal,
config.nBins,
config.[type],
r.[key] as 'result position',
r.[value] as 'result value'
from
openjson(@json, '$.Histograms')
with (
[Name] nvarchar(100),
LogEntryId int,
[RegisterId] nvarchar(100),
UUID nvarchar(100),
[Values] nvarchar(max) as json
) as Histograms
cross apply openjson (Histograms.[Values]) h
cross apply openjson (h.value)
with
(
binWidth int N'$.config.binWidth',
minVal int N'$.config.minVal',
nBins int N'$.config.nBins',
[type] nvarchar(100) N'$.config.type',
result nvarchar(max) as json
) as config
CROSS APPLY OPENJSON(config.result) r

Tensorflow, Reshape like a convolution

I have a matrix of shape [3, 3, 256], and my final output must be [4, 2, 2, 256]. I have to use a reshape that acts like a 'convolution' without changing the values (in this case using a 2x2 filter). Is there a method to do this using TensorFlow?
If I understand your question correctly, you want to store the original values redundantly in the new structure, like this (without the last dim of 256):
[ [ 1 2 3 ] [ [ 1 2 ] [ [ 2 3 ] [ [ 4 5 ] [ [ 5 6 ]
[ 4 5 6 ] => [ 4 5 ] ], [ 5 6 ] ], [ 7 8 ] ], [ 8 9 ] ]
[ 7 8 9 ] ]
If yes, you can use indexing, like this, with x being the original tensor, and then stack them:
x2 = []
for i in range(2):        # row of the patch's top-left corner
    for j in range(2):    # column of the patch's top-left corner
        x2.append(x[i : i + 2, j : j + 2, :])
y = tf.stack(x2, axis=0)  # shape (4, 2, 2, 256)
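For a concrete check, a minimal runnable sketch (assuming TensorFlow 2 eager mode, and using a single channel instead of 256):

import tensorflow as tf

x = tf.reshape(tf.range(1, 10), (3, 3, 1))  # the 1..9 grid with one channel

patches = [x[i:i + 2, j:j + 2, :] for i in range(2) for j in range(2)]
y = tf.stack(patches, axis=0)               # shape (4, 2, 2, 1)

# Prints the four 2x2 patches: [[1 2],[4 5]], [[2 3],[5 6]], [[4 5],[7 8]], [[5 6],[8 9]]
print(y[:, :, :, 0].numpy())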
Based on your comment, if you really want to avoid using any loops, you can use tf.extract_image_patches, as below (tested code), but you should run some tests because this might actually be much worse than the above in terms of efficiency and performance:
import tensorflow as tf
sess = tf.Session()
x = tf.constant( [ [ [ 1, -1 ], [ 2, -2 ], [ 3, -3 ] ],
[ [ 4, -4 ], [ 5, -5 ], [ 6, -6 ] ],
[ [ 7, -7 ], [ 8, -8 ], [ 9, -9 ] ] ] )
xT = tf.transpose( x, perm = [ 2, 0, 1 ] ) # have to put channel dim as batch for tf.extract_image_patches
xTE = tf.expand_dims( xT, axis = -1 ) # extend dims to have fake channel dim
xP = tf.extract_image_patches( xTE, ksizes = [ 1, 2, 2, 1 ],
strides = [ 1, 1, 1, 1 ], rates = [ 1, 1, 1, 1 ], padding = "VALID" )
y = tf.transpose( xP, perm = [ 3, 1, 2, 0 ] ) # move dims back to original and new dim up front
print( sess.run(y) )
Output:
[[[[ 1 -1]
[ 2 -2]]
[[ 4 -4]
[ 5 -5]]]
[[[ 2 -2]
[ 3 -3]]
[[ 5 -5]
[ 6 -6]]]
[[[ 4 -4]
[ 5 -5]]
[[ 7 -7]
[ 8 -8]]]
[[[ 5 -5]
[ 6 -6]]
[[ 8 -8]
[ 9 -9]]]]
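Note: in TensorFlow 2 this op is available as tf.image.extract_patches (with the ksizes argument renamed to sizes); the channel-to-batch transposition trick above should carry over unchanged.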
I had a similar problem and found that tf.contrib.kfac.utils has a function called extract_convolution_patches. Suppose you have a tensor X with shape (1, 3, 3, 256), where the initial 1 marks the batch size; you can call
Y = tf.contrib.kfac.utils.extract_convolution_patches(X, (2, 2, 256, 1), padding='VALID')
Y.shape # (1, 2, 2, 2, 2, 256)
The first two 2's are the grid of patch positions (they make up the 4 in your description). The latter two 2's are the shape of the filters. You can then call
Y = tf.reshape(Y, [4,2,2,256])
to get your final result.