astropy make_lupton_rgb returns UFuncTypeError

I have three arrays that I read from FITS files containing images (I can open them with astroimagej, so they don't seem corrupted):
band = fits.open("band.fits")[0].data
In : r
Out:
array([[ 40, 608, 829, ..., 652, 297, 306],
[ 70, 886, 786, ..., 1088, 519, 314],
[ 14, 112, 518, ..., 885, 1454, 0],
...,
[1648, 471, 0, ..., 40, 558, 68],
[1456, 536, 0, ..., 42, 257, 108],
[ 235, 858, 177, ..., 113, 203, 108]], dtype=uint16)
In : g
Out:
array([[ 916, 0, 130, ..., 339, 84, 546],
[ 0, 0, 836, ..., 0, 0, 0],
[ 0, 1726, 1712, ..., 0, 0, 505],
...,
[1025, 0, 129, ..., 1485, 2915, 0],
[ 0, 0, 1129, ..., 990, 1815, 0],
[ 659, 0, 296, ..., 0, 0, 0]], dtype=uint16)
In : b
Out:
array([[ 916, 0, 130, ..., 339, 84, 546],
[ 0, 0, 836, ..., 0, 0, 0],
[ 0, 1726, 1712, ..., 0, 0, 505],
...,
[1025, 0, 129, ..., 1485, 2915, 0],
[ 0, 0, 1129, ..., 990, 1815, 0],
[ 659, 0, 296, ..., 0, 0, 0]], dtype=uint16)
When I try to run the make_lupton_rgb function from the astropy library, I get the error message
UFuncTypeError: Cannot cast ufunc 'multiply' output from dtype('float64') to dtype('uint16') with casting rule 'same_kind'
Yet it seems to me that every array here is of the same type. What might be going wrong?
I also tried running the function with one of these arrays repeated three times, e.g. make_lupton_rgb(r, r, r), to see if there was a problem with one individual FITS file. I get the same error message in all cases.

Converting each array to float64 worked, though I don't know why.
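For what it's worth, the error message itself is the clue: somewhere inside make_lupton_rgb a float64 multiplication result is being written back into one of the uint16 arrays, which NumPy's 'same_kind' casting rule forbids, so promoting the inputs to float64 first sidesteps it. A minimal sketch of that workaround (the file names and the stretch/Q values are placeholders, and each file is assumed to hold its image in the primary HDU):
import numpy as np
from astropy.io import fits
from astropy.visualization import make_lupton_rgb

# Promote each band to float64 so the internal scaling has a float array to write into
r = fits.open("r_band.fits")[0].data.astype(np.float64)
g = fits.open("g_band.fits")[0].data.astype(np.float64)
b = fits.open("b_band.fits")[0].data.astype(np.float64)

rgb = make_lupton_rgb(r, g, b, stretch=0.5, Q=10)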

What is the default measure of uncertainty used in the gratia draw() function?

I have a data set that looks like this:
structure(list(landings = c(116, 31, 0, 0, 0,
0, 0, 0, 0, 120, 0, 241, 9, 0, 64, 326, 142, 605, 139, 410,
212, 470, 416, 309, 1269, 474, 22, 135, 395, 464, 451, 32,
2537, 210, 299, 1522, 184, 550, 666, 429, 1372, 184, 147,
1208, 159, 951, 1000, 1100, 301, 144, 244, 0, 0, 281, 0,
0, 0, 0, 0, 0, 0, 0, 0, 42, 594, 26, 747, 436, 0, 914, 182,
8, 275, 175, 766, 130, 930, 31, 177, 123, 895, 88, 107, 0,
4, 481, 909, 511, 877, 402, 295, 336, 645, 310, 301, 398,
411, 0, 205, 293, 49, 454, 162, 138, 1171, 0, 138, 0, 111,
0, 0, 36, 78, 114, 0, 0, 134, 44, 549, 0, 378, 716, 739,
393, 203, 839, 70, 454, 132, 651, 63, 1850, 217, 403, 55,
0, 408, 43, 17, 12, 26, 2, 811, 581, 1216, 154, 1059, 89,
1862, 1310, 297, 29, 680, 0, 0, 29, 0, 0, 0, 0, 0, 0, 17,
6, 0, 0, 0, 44, 909, 0, 0, 0, 194, 0, 212, 18, 46, 44, 56,
365, 37, 0, 73, 11, 16, 19, 0, 0, 0, 23, 0, 92, 0, 216, 0,
16, 0, 80, 319, 59, 35, 929, 47, 0, 0, 356, 0, 0, 33, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 13, 0, 0, 91, 362, 0,
0, 0, 0, 0, 29, 0, 0, 392, 105, 0, 94, 15, 222, 34, 44, 178,
1867, 0, 224, 241, 23, 1502, 492, 168, 0, 234, 299, 453,
0, 406, 149, 0, 39, 57, 86, 0, 28, 23, 265, 0, 0, 0, 168,
31, 20, 0, 28, 78, 244, 13, 0, 99, 168, 861, 52, 649, 0,
174, 0, 0, 2462, 64, 178, 0, 61, 0, 321, 391, 33, 17, 227,
241, 248, 294, 1119, 37, 90, 0, 85, 37, 89, 0, 0, 0), Date = c(2014,
2014.01916495551, 2014.03832991102, 2014.05749486653, 2014.07665982204,
2014.09582477755, 2014.11498973306, 2014.13415468857, 2014.15331964408,
2014.17248459959, 2014.1916495551, 2014.21081451061, 2014.22997946612,
2014.24914442163, 2014.26830937714, 2014.28747433265, 2014.30663928816,
2014.32580424367, 2014.34496919918, 2014.36413415469, 2014.3832991102,
2014.40246406571, 2014.42162902122, 2014.44079397673, 2014.45995893224,
2014.47912388775, 2014.49828884326, 2014.51745379877, 2014.53661875428,
2014.55578370979, 2014.5749486653, 2014.59411362081, 2014.61327857632,
2014.63244353183, 2014.65160848734, 2014.67077344285, 2014.68993839836,
2014.70910335387, 2014.72826830938, 2014.74743326489, 2014.7665982204,
2014.78576317591, 2014.80492813142, 2014.82409308693, 2014.84325804244,
2014.86242299795, 2014.88158795346, 2014.90075290897, 2014.91991786448,
2014.93908281999, 2014.9582477755, 2014.97741273101, 2014.99657768652,
2015.01574264203, 2015.03490759754, 2015.05407255305, 2015.07323750856,
2015.09240246407, 2015.11156741958, 2015.13073237509, 2015.1498973306,
2015.16906228611, 2015.18822724162, 2015.20739219713, 2015.22655715264,
2015.24572210815, 2015.26488706366, 2015.28405201916, 2015.30321697467,
2015.32238193018, 2015.34154688569, 2015.3607118412, 2015.37987679671,
2015.39904175222, 2015.41820670773, 2015.43737166324, 2015.45653661875,
2015.47570157426, 2015.49486652977, 2015.51403148528, 2015.53319644079,
2015.5523613963, 2015.57152635181, 2015.59069130732, 2015.60985626283,
2015.62902121834, 2015.64818617385, 2015.66735112936, 2015.68651608487,
2015.70568104038, 2015.72484599589, 2015.7440109514, 2015.76317590691,
2015.78234086242, 2015.80150581793, 2015.82067077344, 2015.83983572895,
2015.85900068446, 2015.87816563997, 2015.89733059548, 2015.91649555099,
2015.9356605065, 2015.95482546201, 2015.97399041752, 2015.99315537303,
2016.01232032854, 2016.03148528405, 2016.05065023956, 2016.06981519507,
2016.08898015058, 2016.10814510609, 2016.1273100616, 2016.14647501711,
2016.16563997262, 2016.18480492813, 2016.20396988364, 2016.22313483915,
2016.24229979466, 2016.26146475017, 2016.28062970568, 2016.29979466119,
2016.3189596167, 2016.33812457221, 2016.35728952772, 2016.37645448323,
2016.39561943874, 2016.41478439425, 2016.43394934976, 2016.45311430527,
2016.47227926078, 2016.49144421629, 2016.5106091718, 2016.52977412731,
2016.54893908282, 2016.56810403833, 2016.58726899384, 2016.60643394935,
2016.62559890486, 2016.64476386037, 2016.66392881588, 2016.68309377139,
2016.7022587269, 2016.72142368241, 2016.74058863792, 2016.75975359343,
2016.77891854894, 2016.79808350445, 2016.81724845996, 2016.83641341547,
2016.85557837098, 2016.87474332649, 2016.893908282, 2016.91307323751,
2016.93223819302, 2016.95140314853, 2016.97056810404, 2016.98973305955,
2017.00889801506, 2017.02806297057, 2017.04722792608, 2017.06639288159,
2017.0855578371, 2017.10472279261, 2017.12388774812, 2017.14305270363,
2017.16221765914, 2017.18138261465, 2017.20054757016, 2017.21971252567,
2017.23887748118, 2017.25804243669, 2017.2772073922, 2017.29637234771,
2017.31553730322, 2017.33470225873, 2017.35386721424, 2017.37303216975,
2017.39219712526, 2017.41136208077, 2017.43052703628, 2017.44969199179,
2017.4688569473, 2017.48802190281, 2017.50718685832, 2017.52635181383,
2017.54551676934, 2017.56468172485, 2017.58384668036, 2017.60301163587,
2017.62217659138, 2017.64134154689, 2017.6605065024, 2017.67967145791,
2017.69883641342, 2017.71800136893, 2017.73716632444, 2017.75633127995,
2017.77549623546, 2017.79466119097, 2017.81382614648, 2017.83299110199,
2017.85215605749, 2017.871321013, 2017.89048596851, 2017.90965092402,
2017.92881587953, 2017.94798083504, 2017.96714579055, 2017.98631074606,
2018.00547570157, 2018.02464065708, 2018.04380561259, 2018.0629705681,
2018.08213552361, 2018.12046543463, 2018.13963039014, 2018.15879534565,
2018.17796030116, 2018.19712525667, 2018.21629021218, 2018.23545516769,
2018.2546201232, 2018.27378507871, 2018.29295003422, 2018.31211498973,
2018.33127994524, 2018.35044490075, 2018.36960985626, 2018.38877481177,
2018.40793976728, 2018.42710472279, 2018.4462696783, 2018.46543463381,
2018.48459958932, 2018.50376454483, 2018.52292950034, 2018.54209445585,
2018.56125941136, 2018.58042436687, 2018.59958932238, 2018.61875427789,
2018.6379192334, 2018.65708418891, 2018.67624914442, 2018.69541409993,
2018.71457905544, 2018.73374401095, 2018.75290896646, 2018.77207392197,
2018.79123887748, 2018.81040383299, 2018.8295687885, 2018.84873374401,
2018.86789869952, 2018.88706365503, 2018.90622861054, 2018.92539356605,
2018.94455852156, 2018.96372347707, 2018.98288843258, 2019.00205338809,
2019.0212183436, 2019.04038329911, 2019.05954825462, 2019.07871321013,
2019.09787816564, 2019.11704312115, 2019.13620807666, 2019.15537303217,
2019.17453798768, 2019.19370294319, 2019.2128678987, 2019.23203285421,
2019.25119780972, 2019.27036276523, 2019.28952772074, 2019.30869267625,
2019.32785763176, 2019.34702258727, 2019.36618754278, 2019.38535249829,
2019.4045174538, 2019.42368240931, 2019.44284736482, 2019.46201232033,
2019.48117727584, 2019.50034223135, 2019.51950718686, 2019.53867214237,
2019.55783709788, 2019.57700205339, 2019.5961670089, 2019.61533196441,
2019.63449691992, 2019.65366187543, 2019.67282683094, 2019.69199178645,
2019.71115674196, 2019.73032169747, 2019.74948665298, 2019.76865160849,
2019.787816564, 2019.80698151951, 2019.82614647502, 2019.84531143053,
2019.86447638604, 2019.88364134155, 2019.90280629706, 2019.92197125257,
2019.94113620808, 2019.96030116359, 2019.9794661191))
I am running a GAM that looks like this:
gam1<-gam(landings~s(Date))
I am using draw() to plot my data:
draw(gam1)
I have been trying to figure out what measure of uncertainty draw() uses, with no success. Is the uncertainty band in this plot a 95% confidence interval or the standard error?
It's an approximate 95% credible interval (drawn at 2 * the standard error of the smooth), the same as you'd get from mgcv:::plot.gam().
I should make this clearer in the package, and allow users to control what coverage they want for the interval.

NumPy: how to assign a subarray directly from values that are step-spaced

I have two global arrays, "tab1" and "tab2", with dimensions 21x21 and 17x17 respectively.
I would like to assign the block of "tab1" indexed by [15:20,0:7] from the block of "tab2" indexed by [7:17:2,0:7] (so with a step along the first array dimension). I tried this syntax:
tab1[15:20,0:7] = tab2[7:17:2,0:7]
Unfortunately, this doesn't seem to work: it looks as if only the "diagonal" (I mean one-by-one) elements of 15:20 are taken into account, following the values of "tab2" along [7:17:2].
Is there a way to assign a subarray of "tab1" from another subarray of "tab2" whose indices are step-spaced?
If someone could see what's wrong or suggest another method, that would be nice.
UPDATE 1: indeed, from my latest tests it seems fine, but is the same true for the assignment of the block [15:20,15:20]?
tab1[15:20,15:20] = tab2[7:17:2,7:17:2]
ANSWER: it seems OK for this block assignment too, sorry.
The assignment works as I expect.
In [1]: arr = np.ones((20,10),int)
The two blocks have the same shape:
In [2]: arr[15:20, 0:7].shape
Out[2]: (5, 7)
In [3]: arr[7:17:2, 0:7].shape
Out[3]: (5, 7)
and assigning something more interesting looks right:
In [4]: arr2 = np.arange(200).reshape(20,10)
In [5]: arr[15:20, 0:7] = arr2[7:17:2, 0:7]
In [6]: arr
Out[6]:
array([[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[ 70, 71, 72, 73, 74, 75, 76, 1, 1, 1],
[ 90, 91, 92, 93, 94, 95, 96, 1, 1, 1],
[110, 111, 112, 113, 114, 115, 116, 1, 1, 1],
[130, 131, 132, 133, 134, 135, 136, 1, 1, 1],
[150, 151, 152, 153, 154, 155, 156, 1, 1, 1]])
I see a (5,7) block of values from arr2, skipping the rows that start with 80, 100, and so on.
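For completeness, a quick sketch with the question's actual shapes (tab1 and tab2 are just filled with placeholder values here):
import numpy as np

tab1 = np.zeros((21, 21), dtype=int)
tab2 = np.arange(17 * 17).reshape(17, 17)

# Both sides are (5, 7): rows 15..19 of tab1, rows 7, 9, 11, 13, 15 of tab2
tab1[15:20, 0:7] = tab2[7:17:2, 0:7]

# Same idea for the (5, 5) corner block from the update
tab1[15:20, 15:20] = tab2[7:17:2, 7:17:2]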

Preventing an animation from looping

I want my animation to only play once and not loop. My understanding is that you can do that by setting "next" to false. However, my animation is still looping. Here is my sprite sheet json file:
{
"images": [
"ressources/atlas/apparition.png"
],
"framerate": 12,
"frames": [
[1, 1, 170, 172, 0, -15, -15],
[1, 175, 164, 165, 0, -19, -18],
[1, 342, 156, 160, 0, -23, -21],
[159, 342, 147, 146, 0, -27, -28],
[167, 175, 134, 128, 0, -33, -37],
[173, 1, 122, 96, 0, -40, -52],
[173, 99, 96, 64, 0, -52, -68]
],
"animations": {
"apparition": { "frames": [6, 5, 4, 3, 2, 1, 0], "next": false }
}
}
Ideas?
Well... it seems that you must use gotoAndPlay() if you want to prevent looping. I was using play().

Filter sequence items in TensorFlow

I have a tensor of allowed items
index = tf.constant([61, 215, 23, 18, 241, 125])
and need to remove items from input sequence batches that are not in index.
seq = tf.constant(
[
[ 18, 241, 0, 0],
[125, 61, 23, 241],
[ 23, 92, 18, 0],
[ 5, 61, 215, 18],
]
)
After the calculation in this case I need
result_needed = tf.constant(
[
[ 18, 241, 0, 0],
[125, 61, 23, 241],
[ 23, 18, 0, 0],
[ 61, 215, 18, 0],
]
)
I cannot do this in plain Python because this calculation happens during prediction. Also note that while the item IDs here are small, the solution needs to deal with numbers from 1 to 2^40.
After some serious pondering time, I came up with the following:
idx_range = tf.reshape(tf.range(seq.shape[-2]), [-1, 1])   # row numbers as a column vector
idx_tile = tf.tile(idx_range, [1, seq.shape[-2].value])    # repeat each row number across the row
idx_flat = tf.reshape(idx_tile, [-1])                      # flattened row coordinates
truth_value = tf.equal(index, tf.expand_dims(seq, -1))     # (batch, seq_len, index_size) membership test
one_hot = tf.to_float(truth_value)
ones = tf.nn.top_k(tf.reduce_sum(one_hot, -1), seq.shape[-1]).indices  # allowed columns first, disallowed last
ones_flat = tf.reshape(ones, [-1])                         # flattened column coordinates
ones_idx = tf.reshape(
    tf.stack([idx_flat, ones_flat], axis=1),
    tf.concat([seq.shape, [2]], axis=0)
)
tf.gather_nd(seq, ones_idx)                                # reorder each row by the gathered coordinates
This is not exactly what I said I needed, but it actually got me close enough. Instead of replacing the disallowed items with 0, the output moves them to the end of each row. If you need them gone, I'm sure there's a way to remove them, but I'm not looking into it. Apologies.
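As a side note (not part of the original answer): if simply zeroing the disallowed items in place is acceptable, the broadcasting membership test that the code above starts from is already enough. A minimal sketch; it does not compact the surviving items to the left, so the exact desired output above would still need the top_k/gather_nd reordering:
import tensorflow as tf

index = tf.constant([61, 215, 23, 18, 241, 125], dtype=tf.int64)
seq = tf.constant(
    [[ 18, 241,   0,   0],
     [125,  61,  23, 241],
     [ 23,  92,  18,   0],
     [  5,  61, 215,  18]], dtype=tf.int64)  # int64 so IDs up to 2**40 fit

# (batch, seq_len, 1) against (index_size,) broadcasts to (batch, seq_len, index_size);
# reduce_any over the last axis marks the items that appear in index
allowed = tf.reduce_any(tf.equal(tf.expand_dims(seq, -1), index), axis=-1)

# keep allowed items, replace everything else with 0 (no left-compaction)
filtered = tf.where(allowed, seq, tf.zeros_like(seq))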

The `out` argument in `numpy.einsum` does not work as expected

I have two pieces of code. The first one is:
A = np.arange(3*4*3).reshape(3, 4, 3)
P = np.arange(1, 4)
A[:, 1:, :] = np.einsum('j, ijk->ijk', P, A[:, 1:, :])
and the resulting A is:
array([[[ 0, 1, 2],
[ 6, 8, 10],
[ 18, 21, 24],
[ 36, 40, 44]],
[[ 12, 13, 14],
[ 30, 32, 34],
[ 54, 57, 60],
[ 84, 88, 92]],
[[ 24, 25, 26],
[ 54, 56, 58],
[ 90, 93, 96],
[132, 136, 140]]])
The second one is:
A = np.arange(3*4*3).reshape(3, 4, 3)
P = np.arange(1, 4)
np.einsum('j, ijk->ijk', P, A[:, 1:, :], out=A[:,1:,:])
and the resulting A is:
array([[[ 0, 1, 2],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0]],
[[12, 13, 14],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0]],
[[24, 25, 26],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0]]])
So the results are different. I want to use out here to save memory. Is this a bug in numpy.einsum, or have I missed something?
By the way, my numpy version is 1.13.3.
I haven't used this new out parameter before, but have worked with einsum in the past, and have a general idea of how it works (or at least used to).
It looks to me like it initializes the out array to zero before the start of iteration. That would account for all the 0s in the A[:,1:,:] block. If instead I initialize a separate out array, the desired values are inserted:
In [471]: B = np.ones((3,4,3),int)
In [472]: np.einsum('j, ijk->ijk', P, A[:, 1:, :], out=B[:,1:,:])
Out[472]:
array([[[ 3, 4, 5],
[ 12, 14, 16],
[ 27, 30, 33]],
[[ 15, 16, 17],
[ 36, 38, 40],
[ 63, 66, 69]],
[[ 27, 28, 29],
[ 60, 62, 64],
[ 99, 102, 105]]])
In [473]: B
Out[473]:
array([[[ 1, 1, 1],
[ 3, 4, 5],
[ 12, 14, 16],
[ 27, 30, 33]],
[[ 1, 1, 1],
[ 15, 16, 17],
[ 36, 38, 40],
[ 63, 66, 69]],
[[ 1, 1, 1],
[ 27, 28, 29],
[ 60, 62, 64],
[ 99, 102, 105]]])
The Python portion of einsum doesn't tell me much, except how it decides to pass the out array to the C portion (as one of the list of tmp_operands):
c_einsum(einsum_str, *tmp_operands, **einsum_kwargs)
I know that it sets up a C-API equivalent of np.nditer, using the string to define the axes and iterations.
It iterates something like this section in the iteration tutorial:
https://docs.scipy.org/doc/numpy-1.13.0/reference/arrays.nditer.html#reduction-iteration
Notice in particular the it.reset() step. That sets the out buffer to 0 prior to iterating. It then iterates over the elements of the input arrays and the output array, writing the calculated values to the output element. Since it is doing a sum of products (e.g. out[:] += ...), it has to start with a clean slate.
I'm guessing a bit as to what is actually going on, but it seems logical to me that it should zero out the output buffer to start with. If that array is the same as one of the inputs, that will end up messing with the calculation.
So I don't think this approach will work and save you memory. It needs a clean buffer to accumulate the results in. Once that's done, it (or you) can write the values back into A. But given the nature of a dot-like product, you can't use the same array for both input and output.
In [476]: A[:,1:,:] = np.einsum('j, ijk->ijk', P, A[:, 1:, :])
In [477]: A
Out[477]:
array([[[ 0, 1, 2],
[ 3, 4, 5],
[ 12, 14, 16],
[ 27, 30, 33]],
....)
In the C source code for einsum, there is a section that will take the array specified by out and do some zero-setting.
But in the Python source code for example, there are execution paths that call the tensordot function before ever descending the arguments to call c_einsum.
This means that some operations might be pre-computed (thus modifying your array A on some contraction passes) with tensordot, before any sub-array is ever set to zero by the zero-setter inside the C code for einsum.
Another way to put it is: on each pass at doing the next contraction operations, NumPy has many choices available to it. To use tensordot directly without getting into the C-level einsum code just yet? Or to prepare the arguments and pass to the C level (which will involve over-writing some sub-view of the output array with all zeros)? Or to re-order the operations and repeat the check?
Depending on the order it chooses for these optimizations, you can end up with unexpected all-zeros sub-arrays.
Best bet is to not try to be this clever and use the same array for the output. You say it is because you want to save memory. Yes, in some special cases an einsum operation might be do-able in-place. But it does not currently detect if this is the case and attempt to avoid the zero-setting.
And in a huge number of cases, over-writing into one of the input arrays during the middle of the overall operation would cause many problems, much like trying to append to a list you are directly looping over, etc.
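As a closing aside (not from the original answers): for this particular scaling there is no need for einsum's out at all. A plain in-place broadcasting multiply touches each element exactly once, so aliasing input and output is safe, and it reproduces the In [476]/[477] result without any temporary:
import numpy as np

A = np.arange(3 * 4 * 3).reshape(3, 4, 3)
P = np.arange(1, 4)

# P[:, None] broadcasts along the middle axis, so row j of A[:, 1:, :] is scaled by P[j]
A[:, 1:, :] *= P[:, None]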