Constrain collection of dependent types to match between args and return - idris

I want to write a function that takes in a collection (I'm not too fussed what kind) of dependent types, and returns another collection of the same types but perhaps with different values. The elements are of the form
data Tensor : (Vect r Nat) -> Type where
For example, a function that accepts a (Tensor [2, 3, 4], Tensor [2], Tensor []) or (Tensor [3],) and returns values of the same type.
What I've tried
Using dependent pairs: accept a List (s ** Tensor s). I don't then know how to constrain the output to have the same types.
Using tuples, but I'm not sure how to fix the element type to be Tensor.

You can write a function that is indexed by the complete shape of all the Tensors, not just any one of them. The shape of each Tensor is a List Nat, so the shape of a list of them is a List (List Nat):
import Data.Vect
data Tensor : (Vect r Nat) -> Type where
data Tensors : List (List Nat) -> Type where
  Nil : Tensors []
  Cons : Tensor (fromList ns) -> Tensors nss -> Tensors (ns :: nss)
Here's an example of a shape-preserving map function:
mapTensors
  : ({0 r : Nat} -> {0 ns : Vect r Nat} -> Tensor ns -> Tensor ns) ->
    Tensors nss -> Tensors nss
mapTensors f Nil = Nil
mapTensors f (Cons t ts) = Cons (f t) (mapTensors f ts)
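At a concrete index this pins every output shape to the corresponding input shape, matching the (Tensor [2, 3, 4], Tensor [2], Tensor []) example from the question. As a minimal sketch (assuming only the declarations above; transformAll is a made-up name, and the identity lambda stands in for any real element-wise operation):
transformAll : Tensors [[2, 3, 4], [2], []] -> Tensors [[2, 3, 4], [2], []]
transformAll = mapTensors (\t => t)
Any attempt to return tensors of shapes other than the ones passed in is rejected at compile time.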

Related

How to point to the last channel dimension, i.e. (6,), of inputs with shape (100, 24, 24, 6) to be worked on?

I am trying to use tf.map_fn(), where my elems should point to the channel dimension of my inputs (shape = 100, 24, 24, 6), so elems should be a list/tuple of tensors accessing the values of the channel dimension (6) of the inputs. I am trying to do it by making a for loop like this:
@tf.function
def call(self, inputs, training=True):
    elems = []
    for b in inputs:
        for h in b:
            for w in h:
                for c in w:
                    elems.append(c)
    changed_inputs = tf.map_fn(self.do_mapping, elems)
    return changed_inputs
What I am trying to achieve in self.do_mapping is a dictionary lookup: it looks up the values of a dictionary (vmap) by key and returns them. The dictionary vmap is built by taking the output of a layer and keeping only the distinct channel-dimension values, so its keys are tuples of 6 tf.Tensor objects (the size of the channel dimension) and its values are the count which I keep. This is how the dictionary is made:
value = list(self.get_values())
vmap = {}
cnt = 0
for v0 in value:
    for v1 in v0:
        for v2 in v1:
            for v3 in v2:
                v = tuple(v3)
                if v not in vmap:
                    vmap[v] = cnt
                    cnt += 1
The do_mapping function is:
@tf.function
def do_mapping(self, pixel):
    if self._compression:
        pixel = tuple(pixel)
    enumerated_value = self._vmap.get(pixel)
    print(enumerated_value)
    print(tf.shape(pixel))
    exit()
    return enumerated_value
If I try to use tf.map_fn now, where I try to point elems at the channel dimension, I get the following error: ValueError: elements in elems must be 1+ dimensional Tensors, not scalars. Please help me understand how I can use tf.map_fn for my case. Thank you in advance.
First, instead of doing a for loop (which you should try to avoid, for efficiency), you can just reshape like this:
elems = tf.reshape(inputs,-1)
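If the goal is to map over channel vectors specifically, note that fully flattening would hand tf.map_fn scalars; reshaping to (-1, 6) keeps one length-6 vector per pixel instead. A rough sketch, not from the answer, assuming the question's self.do_mapping can operate on a tensor and returns one integer per pixel:
# Sketch: collapse batch and spatial dims so each element is one channel vector.
elems = tf.reshape(inputs, (-1, 6))                  # shape (100*24*24, 6)
mapped = tf.map_fn(self.do_mapping, elems,
                   dtype=tf.int32)                   # assumed output dtype of do_mapping
changed_inputs = tf.reshape(mapped, (100, 24, 24))   # one looked-up value per pixel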
Second, what do you want to do exactly? What do you mean by "it doesn't work"? What is the error message? What is self.do_mapping?
Best,
Keivan

Prove arbitrarily-nested Vect alias is showable

I've been trying to figure out how to implement Show for my Tensor type for ages. Tensor is a thin wrapper around either a single value, or arbitrarily-nested Vects of values:
import Data.Vect
Shape : Nat -> Type
Shape rank = Vect rank Nat
array_type: (shape: Shape rank) -> (dtype: Type) -> Type
array_type [] dtype = dtype
array_type (d :: ds) dtype = Vect d (array_type ds dtype)
data Tensor : (shape: Shape rank) -> (dtype: Type) -> Type where
  MkTensor : array_type shape dtype -> Tensor shape dtype
Show dtype => Show (Tensor shape dtype) where
  show (MkTensor x) = show x
I get
When checking right hand side of Prelude.Show.Main.Tensor shape dtype implementation of Prelude.Show.Show, method show with expected type
String
Can't find implementation for Show (array_type shape dtype)
which is understandable given array_type's not trivial. I believe that it should be showable, as I can show highly-nested Vects in the REPL as long as their elements are Show. I guess Idris just doesn't know it's an arbitrarily nested Vect.
If I pull in some implicit parameters and case split on rank/shape, I get somewhere
Show dtype => Show (Tensor {rank} shape dtype) where
  show {rank = Z} {shape = []} (MkTensor x) = show x -- works
  show {rank = (S Z)} {shape = (d :: [])} (MkTensor x) = show x -- works
  show {rank = (S k)} {shape = (d :: ds)} (MkTensor x) = show x -- doesn't work
and I can indefinitely expand this to higher and higher rank explicitly, where the RHS is always just show x, but I can't figure out how to get this to type check for all ranks. I'd guess some recursive thing is required.
EDIT: to be clear, I want to know how to do this by using Idris' implementation of Show for Vects. I want to avoid having to construct an implementation manually myself.
If you want to go via the Show (Vect n a) implementations, you can do that as well, by defining a Show implementation that requires that there is a Show for the vector:
import Data.Vect
import Data.List
Shape : Nat -> Type
Shape rank = Vect rank Nat
array_type: (shape: Shape rank) -> (dtype: Type) -> Type
array_type [] dtype = dtype
array_type (d :: ds) dtype = Vect d (array_type ds dtype)
data Tensor : (shape: Shape rank) -> (dtype: Type) -> Type where
  MkTensor : array_type shape dtype -> Tensor shape dtype
Show (array_type shape dtype) => Show (Tensor {rank} shape dtype) where
  show (MkTensor x) = show x
For all choices of shape, the Show (array_type shape dtype) constraint will reduce to Show dtype, so e.g. this works:
myTensor : Tensor [1, 2, 3] Int
myTensor = MkTensor [[[1, 2, 3], [4, 5, 6]]]
*ShowVect> show myTensor
"[[[1, 2, 3], [4, 5, 6]]]" : String
You were on the right track: you can do it by writing a recursive function over the nested vectors, and then lifting it to your Tensor type in the Show implementation:
showV : Show dtype => array_type shape dtype -> String
showV {rank = 0} {shape = []} x = show x
showV {rank = (S k)} {shape = d :: ds} xs = combine $ map showV xs
  where
    combine : Vect n String -> String
    combine ss = "[" ++ concat (intersperse ", " . toList $ ss) ++ "]"
Show dtype => Show (Tensor {rank} shape dtype) where
  show (MkTensor x) = showV x
Example:
λΠ> show $ the (Tensor [1, 2, 3] Int) $ MkTensor [[[1, 2, 3], [4, 5, 6]]]
"[[[1, 2, 3], [4, 5, 6]]]"

Destructuring Iterators.product in Julia

I wonder, if I have Iterators.product generating pairs (x, y), how do I get only the x-component out of it?
I'm trying to convert from Python/numpy to Julia using this cheatsheet. Simple thing:
xs = np.arange(0,2*np.pi,0.1)
ys = np.arange(0,2*np.pi,0.1)
Xs,Ys = np.meshgrid(xs,ys)
F = np.sin(Xs) * np.cos(Ys)
Now in Julia I gather I should use Iterators.product:
xs = 0.0 : 0.1 : 2*pi
ys = 0.0 : 0.1 : 2*pi
XYs = Iterators.product(xs,ys)
for (i, (x,y)) in enumerate( XYs ) println("$i $x $y") end
# so far so good
# Now what?
(Xs,Ys)=XYs ; println(Xs); println(Ys) # Nope
.(Xs,Ys)=XYs ; println(Xs); println(Ys) # syntax: invalid identifier name "."
Xs=XYs[:,0] ; println(Xs) # MethodError: no method matching getindex( ...
XYs_T = transpose(XYs) ; println(XYs_T) # MethodError: no method matching transpose(
# this seems to work, but how to address an index which is not "first" or "last"?
Xs = first.(XYs)
Ys = last.(XYs)
# I guess I know how to continue
F = sin.(Xs) .* cos.(Ys)
imshow(F) # yes, correct
You can avoid using the routines in Iterators if you use a comprehension:
julia> F = [sin(x) * cos(y) for x in xs, y in ys]
63×63 Array{Float64,2}:
0.0 0.0 0.0 0.0 0.0 … 0.0 0.0 0.0 0.0
0.0998334 0.0993347 0.0978434 0.0953745 0.0919527 0.0925933 0.0958571 0.098163 0.0994882
0.198669 0.197677 0.194709 0.189796 0.182987 0.184262 0.190756 0.195345 0.197982
0.29552 0.294044 0.289629 0.282321 0.272192 0.274089 0.28375 0.290576 0.294498
...
Remember that since Julia organizes arrays in memory by column first and Python row first, if you want the arrays to display the same in the REPL, you will need to reverse the order of x and y in the array reference, as in the 'ij' matrix indexing option of numpy's meshgrid:
F = [sin(x) * cos(y) for y in ys, x in xs] # BUT reference elements as e = F[y, x] now
You can use comprehensions, but I like broadcasting:
xs = range(0, 2π, step=0.1)
ys = range(0, 2π, step=0.1)
F = sin.(xs) .* cos.(ys') # notice the adjoint '
You should avoid creating the intermediate meshgrid, that's just wasteful. In fact, you should go read the manual section on broadcasting: https://docs.julialang.org/en/v1/manual/functions/#man-vectorized-1
The same thing applies in Python, by the way: there is no need for meshgrid. Just make sure that xs is a row vector and ys a column vector (or vice versa); then you can directly multiply np.sin(xs) * np.cos(ys), and they will broadcast to matrix form automatically.
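A quick sketch of that numpy version (the shapes in the comments assume the same 0.1 step used above):
import numpy as np
xs = np.arange(0, 2 * np.pi, 0.1)[np.newaxis, :]  # row vector, shape (1, 63)
ys = np.arange(0, 2 * np.pi, 0.1)[:, np.newaxis]  # column vector, shape (63, 1)
F = np.sin(xs) * np.cos(ys)                       # broadcasts to shape (63, 63), no meshgrid needed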

Failed to convert object of type <class 'function'> to Tensor

I am trying to randomize the flip augmentation using TensorFlow's left_right and up_down augmentation functions. I am getting an error when mapping the function based on the boolean condition via tf.cond().
random_number=tf.random_uniform([],seed=seed)
print_random_number=tf.print(random_number)
flip_strategy=tf.less(random_number,0.5)
Version 0.1:
image = tf.cond(
    flip_strategy,
    tf.image.flip_left_right(image),
    tf.image.flip_up_down(image),
)
Version 0.2:
image = tf.cond(
    flip_strategy,
    lambda: tf.image.flip_left_right(image),
    lambda: tf.image.flip_up_down(image),
)
ERROR
TypeError: Failed to convert object of type <class 'function'> to Tensor. Contents: . Consider casting elements to a supported type.
Let me know what I am missing or if more info is needed.
From the documentation:
tf.math.less(
    x,
    y,
    name=None
)
Args:
x: A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
y: A Tensor. Must have the same type as x.
name: A name for the operation (optional).
So tf.less expects two tensors, but one of the arguments you pass is a numpy array. You could just convert the numpy array into a tensor, like this:
random_number = tf.random_uniform([], seed=seed)
print_random_number = tf.print(random_number)
random_number = tf.convert_to_tensor(random_number, dtype=tf.float32)
flip_strategy = tf.less(random_number, 0.5)
image = tf.cond(
    flip_strategy,
    lambda: tf.image.flip_left_right(image),
    lambda: tf.image.flip_up_down(image),
)

How can oddly shaped lists be zipped in Elm?

I have two data structures I would like to combine in my Elm program. Their types are List (a, List b) and List c. This is the simplest way to store the data in the model, but I would like to write a function to transform it to a view model before displaying it.
type alias Model1 a b = List (a, List b)
type alias Model2 c = List c
type alias ViewModel a b c = List (a, List (b, c))
toViewModel : Model1 -> Model2 -> ViewModel
toViewModel model1 model2 =
    ???
The toViewModel function should zip the sub-lists in model 1 with the elements of model 2. Assume the size of model2 is the same as the sum of the sizes of the sub-lists in model 1.
For example, if model1 has length 2 and the two elements' sublists have lengths 3 and 4, assume model 2 will have length 7. Model 1's first element's sublist should be zipped with the first 3 elements of model 2, and model 1's second element's sublist should be zipped with the next 4 elements of model 2.
Here's a diagram of the previous example:
How can the toViewModel function be constructed?
Elm doesn't have a built-in zip function but it's easy to define it in terms of List.map2:
zip : List a -> List b -> List (a, b)
zip = List.map2 (,)
You can't use that on the disjointed list by itself, but you can use it to zip up the sub-lists in Model1 with the first n items from Model2. To get the list of things that need to be zipped up, you'll just need to use List.take for the zipping portion, and List.drop to get the remaining items from Model2 that need to be zipped up with the next Model1 entity. This can be done using something like the following function.
toViewModel : Model1 a b -> Model2 c -> ViewModel a b c
toViewModel m1 m2 =
    case m1 of
        [] ->
            []

        ((m1ID, m1List) :: rest) ->
            let
                len = List.length m1List
                init' = List.take len m2
                tail' = List.drop len m2
            in
                (m1ID, zip m1List init') :: toViewModel rest tail'
Notice that using zip is going to stop at whichever list ends first, so if you have lists of uneven length, items are going to get dropped.
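For instance, with made-up data of the shape described in the question (sublists of lengths 3 and 4, and a flat list of length 7):
toViewModel [ ("a", [ 1, 2, 3 ]), ("b", [ 4, 5, 6, 7 ]) ] [ "p", "q", "r", "s", "t", "u", "v" ]
--> [ ("a", [ (1, "p"), (2, "q"), (3, "r") ]), ("b", [ (4, "s"), (5, "t"), (6, "u"), (7, "v") ]) ]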
Edit: here's an additional way of doing this, where the explicit recursion is isolated so that the main functionality can be achieved through mapping.
Another way to solve this would be to first group Model2 into a list of lists, where each item contains an appropriate number of elements for the equivalent Model1 entry.
For this I'll define a function takes which splits up a list into a number of smaller lists, the sizes of these lists being passed in as the first parameter.
takes : List Int -> List a -> List (List a)
takes counts list =
    case counts of
        [] -> []
        (x :: xs) -> List.take x list :: takes xs (List.drop x list)
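For example (an illustration, not part of the original answer), splitting a seven-element list into groups of 3 and 4:
takes [ 3, 4 ] [ 1, 2, 3, 4, 5, 6, 7 ]
--> [ [ 1, 2, 3 ], [ 4, 5, 6, 7 ] ]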
Now you can use the takes function to group the Model2 list, allowing you to map it against the Model1 input so you can zip the internal lists.
toViewModel : Model1 a b -> Model2 c -> ViewModel a b c
toViewModel model1 model2 =
    let
        m2Grouped = takes (List.map (List.length << snd) model1) model2
        mapper (m1ID, m1List) m2List = (m1ID, zip m1List m2List)
    in
        List.map2 mapper model1 m2Grouped
Here is a slightly simpler take on the task:
Use List.concatMap to create a flat List (a, b) from Model1
Merge it with Model2 to get a List (a, (b, c)) with data from both lists
Group the list by a to get final List (a, List (b, c)) with List.filterMap
I've had to mock some data to run the tests here.
Please consider the following example:
import Graphics.Element exposing (show)
-- List (a, List b)
type alias Model1 = List (String, List Int)
-- List c
type alias Model2 = List Bool
-- List (a, List (b, c))
type alias ViewModel = List (String, List (Int, Bool))
toViewModel : Model1 -> Model2 -> ViewModel
toViewModel model1 model2 =
    let
        -- Merge both lists, pairing each value with its grouping key.
        mappedList =
            model1
                |> List.concatMap (\(groupBy, list) -> List.map (\el -> (groupBy, el)) list)
                |> List.map2 (\val2 (groupBy, val1) -> (groupBy, (val1, val2))) model2

        -- Extract a list for the specified grouping key.
        getListByKey search =
            List.filterMap
                (\(key, val) -> if key == search then Just val else Nothing)
                mappedList
    in
        List.map (\(search, _) -> (search, getListByKey search)) model1
initModel1 : Model1
initModel1 =
    [ ("foo", [ 1, 2, 3 ])
    , ("bar", [ 4, 5, 6, 7 ])
    ]

initModel2 : Model2
initModel2 =
    [ True
    , False
    , True
    , False
    , True
    , False
    , True
    ]
main =
    toViewModel initModel1 initModel2
        |> show