File Upload to Local using Arc in Elixir - file-upload

I am using Arc.Definition (https://github.com/stavro/arc) for uploading an image to local storage.
My file_service.ex is below:
defmodule MyApp.FileService do
  use Arc.Definition
  use Arc.Ecto.Definition

  @image_types ~w(.jpg .jpeg .png .gif)
  @versions [:original]
  @default_filename "image.png"
  @heights %{
    medium: 400
  }
  @widths %{
    medium: 400
  }

  def __storage, do: Arc.Storage.Local

  def upload_image(%Plug.Upload{} = image, resource_type, resource_id) do
    store({%Plug.Upload{path: image.path, filename: @default_filename},
           %{resource_type: resource_type, resource_id: resource_id}})
  end

  def upload_base64_image(base64_image, resource_type, resource_id) do
    store({%{filename: @default_filename, binary: base64_image_to_binary(base64_image)}})
  end

  def delete_file(image_url, resource) do
    delete({image_url, resource})
  end

  defp base64_image_to_binary("data:image/" <> rest) do
    rest
    |> String.replace("\n", "")
    |> String.split(",")
    |> Enum.at(1)
    |> Base.decode64!
  end

  defp base64_image_to_binary(base64_image) do
    base64_image
    |> String.replace("\n", "")
    |> Base.decode64!
  end
end
But I am getting an error saying "no function clause matching in Arc.Actions.Store.store".
The stack trace is below:
** (FunctionClauseError) no function clause matching in Arc.Actions.Store.store/2
(arc) lib/arc/actions/store.ex:8: Arc.Actions.Store.store(MyApp.FileService, {%{binary: <<255, 216,
255, 225, 3, 48, 69, 120, 105, 102, 0, 0, 73, 73, 42, 0, 8, 0, 0, 0,
58, 0, 50, 1, 2, 0, 20, 0, 0, 0, 198, 2, 0, 0, 15, 1, 2, 0, 10, 0, 0,
0, 218, 2, 0, 0, 1, 1, ...>>, filename: "image.png"}})
Can anyone please help?

Your code

def upload_base64_image(base64_image, resource_type, resource_id) do
  store({%{filename: @default_filename, binary: base64_image_to_binary(base64_image)}})
end

calls store incorrectly. It only accepts a {file, scope} tuple or a bare filepath/map, so it should be:

store(%{filename: @default_filename, binary: base64_image_to_binary(base64_image)})
See the example from Arc's GitHub README:
# Store a file from a connection body
{:ok, data, _conn} = Plug.Conn.read_body(conn)
Avatar.store(%{filename: "file.png", binary: data})
I figured this out by reading the traceback and Arc's store implementation:
def store(definition, {file, scope}) when is_binary(file) or is_map(file) do
  put(definition, {Arc.File.new(file), scope})
end

def store(definition, filepath) when is_binary(filepath) or is_map(filepath) do
  store(definition, {filepath, nil})
end

Related

astropy make_lupton_rgb returns UFuncTypeError

I have three arrays that I read from FITS files that contain images (I can open them with AstroImageJ, so they don't seem corrupted):
from astropy.io import fits

band = fits.open("band.fits")[0].data
In : r
Out:
array([[ 40, 608, 829, ..., 652, 297, 306],
[ 70, 886, 786, ..., 1088, 519, 314],
[ 14, 112, 518, ..., 885, 1454, 0],
...,
[1648, 471, 0, ..., 40, 558, 68],
[1456, 536, 0, ..., 42, 257, 108],
[ 235, 858, 177, ..., 113, 203, 108]], dtype=uint16)
In : g
Out:
array([[ 916, 0, 130, ..., 339, 84, 546],
[ 0, 0, 836, ..., 0, 0, 0],
[ 0, 1726, 1712, ..., 0, 0, 505],
...,
[1025, 0, 129, ..., 1485, 2915, 0],
[ 0, 0, 1129, ..., 990, 1815, 0],
[ 659, 0, 296, ..., 0, 0, 0]], dtype=uint16)
In : b
Out:
array([[ 916, 0, 130, ..., 339, 84, 546],
[ 0, 0, 836, ..., 0, 0, 0],
[ 0, 1726, 1712, ..., 0, 0, 505],
...,
[1025, 0, 129, ..., 1485, 2915, 0],
[ 0, 0, 1129, ..., 990, 1815, 0],
[ 659, 0, 296, ..., 0, 0, 0]], dtype=uint16)
When I try to run the make_lupton_rgb function from the astropy library, I get the error message:
UFuncTypeError: Cannot cast ufunc 'multiply' output from dtype('float64') to dtype('uint16') with casting rule 'same_kind'
Yet it seems to me like every array here is of the same type. What might be going wrong?
I also tried to run the function with one of these arrays repeated three times e.g. make_lupton_rgb(r, r, r), to see if there was a problem with one individual FITS file. I get the same error message in all cases.
Converting each array to float64 worked, though I don't know why.
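A minimal sketch of that float64 workaround, assuming make_lupton_rgb comes from astropy.visualization and using placeholder file names for the three bands:

import numpy as np
from astropy.io import fits
from astropy.visualization import make_lupton_rgb

# Load the three bands (file names are placeholders).
r = fits.open("r_band.fits")[0].data
g = fits.open("g_band.fits")[0].data
b = fits.open("b_band.fits")[0].data

# The UFuncTypeError suggests an internal float multiply cannot be written
# back into the uint16 inputs; casting to float64 first avoids it.
rgb = make_lupton_rgb(r.astype(np.float64),
                      g.astype(np.float64),
                      b.astype(np.float64))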

Numpy : How to assign directly a subarray from values when these values are step spaced

I have two global arrays, tab1 and tab2, with dimensions respectively equal to 21x21 and 17x17.
I would like to assign the block of tab1 indexed by [15:20, 0:7] from the block of tab2 indexed by [7:17:2, 0:7] (so with a step between elements of the first array dimension). I tried this syntax:
tab1[15:20,0:7] = tab2[7:17:2,0:7]
Unfortunately, this doesn't seem to work: it looks as if only the "diagonal" (I mean one-by-one) elements of 15:20 are taken into account, following the values of tab2 along [7:17:2].
Is there a way to assign a subarray of tab1 from another subarray of tab2 indexed with step-spaced values?
If someone could see what's wrong or suggest another method, this would be nice.
UPDATE 1: indeed, from my latest tests it seems fine, but is it also the same for the assignment of block [15:20, 15:20]:
tab1[15:20,15:20] = tab2[7:17:2,7:17:2]
?
ANSWER: it seems OK for this block assignment as well, sorry.
The assignment works as I expect.
In [1]: arr = np.ones((20,10),int)
The two blocks have the same shape:
In [2]: arr[15:20, 0:7].shape
Out[2]: (5, 7)
In [3]: arr[7:17:2, 0:7].shape
Out[3]: (5, 7)
and assigning something more interesting looks right:
In [4]: arr2 = np.arange(200).reshape(20,10)
In [5]: arr[15:20, 0:7] = arr2[7:17:2, 0:7]
In [6]: arr
Out[6]:
array([[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
...
[ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[ 70, 71, 72, 73, 74, 75, 76, 1, 1, 1],
[ 90, 91, 92, 93, 94, 95, 96, 1, 1, 1],
[110, 111, 112, 113, 114, 115, 116, 1, 1, 1],
[130, 131, 132, 133, 134, 135, 136, 1, 1, 1],
[150, 151, 152, 153, 154, 155, 156, 1, 1, 1]])
I see a (5,7) block of values from arr2, with the skipped rows (the ones starting with 80, 100, ...) left out.
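As a quick check of the second assignment from the update, here is a sketch using made-up tab1/tab2 arrays with the sizes given in the question (21x21 and 17x17):

import numpy as np

tab1 = np.zeros((21, 21), dtype=int)
tab2 = np.arange(17 * 17).reshape(17, 17)

# Both blocks have shape (5, 5): rows/columns 15..19 on the left,
# rows/columns 7, 9, 11, 13, 15 (step 2) on the right.
tab1[15:20, 15:20] = tab2[7:17:2, 7:17:2]
print(tab1[15:20, 15:20])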

Can we use pytorch scatter_ on GPU

I'm trying to do one-hot encoding on some data with PyTorch in GPU mode; however, it keeps giving me an exception. Can anybody help me?
Here's one example:
def char_OneHotEncoding(x):
    coded = torch.zeros(x.shape[0], x.shape[1], 101)
    for i in range(x.shape[1]):
        coded[:, i] = scatter(x[:, i])
    return coded

def scatter(x):
    return torch.zeros(x.shape[0], 101).scatter_(1, x.view(-1, 1), 1)
So if I give it a tensor on the GPU, it fails like this:
x_train = [[ 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0],
[14, 13, 83, 18, 14],
[ 0, 0, 0, 0, 0]]
print(char_OneHotEncoding(torch.tensor(x_train, dtype=torch.long).cuda()).shape)
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-62-95c0c4ade406> in <module>()
4 [14, 13, 83, 18, 14],
5 [ 0, 0, 0, 0, 0]]
----> 6 print(char_OneHotEncoding(torch.tensor(x_train, dtype=torch.long).cuda()).shape)
7 x_train[:5, maxlen:maxlen+5]
<ipython-input-53-055f1bf71306> in char_OneHotEncoding(x)
2 coded = torch.zeros(x.shape[0], x.shape[1], 101)
3 for i in range(x.shape[1]):
----> 4 coded[:,i] = scatter(x[:,i])
5 return coded
6
<ipython-input-53-055f1bf71306> in scatter(x)
7
8 def scatter(x):
----> 9 return torch.zeros(x.shape[0], 101).scatter_(1, x.view(-1,1), 1)
RuntimeError: Expected object of backend CPU but got backend CUDA for argument #3 'index'
By the way, if we simply remove the .cuda() here, everything works fine:
print(char_OneHotEncoding(torch.tensor(x_train, dtype=torch.long)).shape)
torch.Size([5, 5, 101])
Yes, it is possible. You have to pay attention that all tensors are on the GPU. In particular, by default, constructors like torch.zeros allocate on the CPU, which will lead to this kind of mismatch. Your code can be fixed by constructing with device=x.device, as below:
import torch

def char_OneHotEncoding(x):
    coded = torch.zeros(x.shape[0], x.shape[1], 101, device=x.device)
    for i in range(x.shape[1]):
        coded[:, i] = scatter(x[:, i])
    return coded

def scatter(x):
    return torch.zeros(x.shape[0], 101, device=x.device).scatter_(1, x.view(-1, 1), 1)

x_train = torch.tensor([
    [ 0,  0,  0,  0,  0],
    [ 0,  0,  0,  0,  0],
    [ 0,  0,  0,  0,  0],
    [14, 13, 83, 18, 14],
    [ 0,  0,  0,  0,  0]
], dtype=torch.long, device='cuda')
print(char_OneHotEncoding(x_train).shape)
Another alternative is the xxx_like family of constructors, for instance zeros_like, though in this case, since you need a different shape than x, I found device=x.device more readable.
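For completeness, a tiny sketch of the zeros_like alternative mentioned above (only applicable when the target shape matches the source tensor; the variables here are made up):

import torch

x = torch.ones(5, 101, device='cuda' if torch.cuda.is_available() else 'cpu')

# zeros_like inherits shape, dtype, and device from its argument,
# so the result lives on the same device as x.
z = torch.zeros_like(x)
print(z.device == x.device)  # True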

Preventing an animation from looping

I want my animation to play only once and not loop. My understanding is that you can do that by setting "next" to false. However, my animation is still looping. Here is my sprite sheet JSON file:
{
  "images": [
    "ressources/atlas/apparition.png"
  ],
  "framerate": 12,
  "frames": [
    [1, 1, 170, 172, 0, -15, -15],
    [1, 175, 164, 165, 0, -19, -18],
    [1, 342, 156, 160, 0, -23, -21],
    [159, 342, 147, 146, 0, -27, -28],
    [167, 175, 134, 128, 0, -33, -37],
    [173, 1, 122, 96, 0, -40, -52],
    [173, 99, 96, 64, 0, -52, -68]
  ],
  "animations": {
    "apparition": { "frames": [6, 5, 4, 3, 2, 1, 0], "next": false }
  }
}
Ideas?
Well... it seems that you must use gotoAndPlay() if you want to prevent looping. I was using play().

Filter sequence items in TensorFlow

I have a tensor of allowed items
index = tf.constant([61, 215, 23, 18, 241, 125])
and need to remove items from input sequence batches that are not in index.
seq = tf.constant(
[
[ 18, 241, 0, 0],
[125, 61, 23, 241],
[ 23, 92, 18, 0],
[ 5, 61, 215, 18],
]
)
After the calculation in this case I need
result_needed = tf.constant(
[
[ 18, 241, 0, 0],
[125, 61, 23, 241],
[ 23, 18, 0, 0],
[ 61, 215, 18, 0],
]
)
I cannot do this in plain Python because this calculation happens during prediction. Also note that while the item IDs here are small, the solution needs to deal with numbers from 1 to 2^40.
Answer
After some serious pondering time, I came up with the following:
# Row index for every element of seq, flattened: [0, 0, 0, 0, 1, 1, 1, 1, ...]
idx_range = tf.reshape(tf.range(seq.shape[-2]), [-1, 1])
idx_tile = tf.tile(idx_range, [1, seq.shape[-2].value])
idx_flat = tf.reshape(idx_tile, [-1])

# For each element of seq, check whether it equals any allowed item in index.
truth_value = tf.equal(index, tf.expand_dims(seq, -1))
one_hot = tf.to_float(truth_value)

# Per row, sort column positions so allowed items come first
# (top_k keeps the original order among equal values).
ones = tf.nn.top_k(tf.reduce_sum(one_hot, -1), seq.shape[-1]).indices
ones_flat = tf.reshape(ones, [-1])

# Build (row, column) index pairs and gather, which reorders each row.
ones_idx = tf.reshape(
    tf.stack([idx_flat, ones_flat], axis=1),
    tf.concat([seq.shape, [2]], axis=0)
)
tf.gather_nd(seq, ones_idx)
This is not exactly what I said I needed, but it actually got me close enough. Instead of replacing the disallowed items with 0, the output moves them to the end of each row. If you need them gone, I'm sure there's a method to remove them, but I'm not looking into it. Apologies.
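For reference, a sketch of what the snippet above should produce for the seq in the question, assuming tf.nn.top_k keeps the original column order among equal values and binding the last expression to a hypothetical result variable:

import tensorflow as tf

# result = tf.gather_nd(seq, ones_idx) from the snippet above (TF 1.x style).
with tf.Session() as sess:
    print(sess.run(result))
# Expected output: disallowed items pushed to the end of each row, not zeroed:
# [[ 18 241   0   0]
#  [125  61  23 241]
#  [ 23  18  92   0]
#  [ 61 215  18   5]]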