In Rust, is a vector an Iterator?

Is it accurate to state that a vector (among other collection types) is an Iterator?
For example, I can loop over a vector in the following way, because it implements the Iterator trait (as I understand it):
let v = vec![1, 2, 3, 4, 5];
for x in &v {
    println!("{}", x);
}
However, if I want to use functions that are part of the Iterator trait (such as fold, map or filter) why must I first call iter() on that vector?
Another thought I had was that maybe a vector can be converted into an Iterator, and in that case the syntax above makes more sense.

No, a vector is not an iterator.
But it implements the trait IntoIterator, which the for loop uses to convert the vector into the required iterator.
In the documentation for Vec you can see that IntoIterator is implemented in three ways:
for Vec<T>, which is moved and the iterator returns items of type T,
for a shared reference &Vec<T>, where the iterator returns shared references &T,
and for &mut Vec<T>, where mutable references are returned.
iter() is just a method on Vec that converts a Vec<T> directly into an iterator returning shared references, without first taking a reference to the vector. There is a sibling method iter_mut() for producing mutable references.
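To make the distinction concrete, here is a small sketch exercising all three IntoIterator implementations plus iter():

```rust
fn main() {
    let mut v = vec![1, 2, 3, 4, 5];

    // IntoIterator for &Vec<T>: what `for x in &v` desugars to; yields &i32.
    let sum: i32 = (&v).into_iter().sum();
    assert_eq!(sum, 15);

    // Vec::iter() produces the same shared-reference iterator directly,
    // so Iterator adapters like filter/map/fold are available.
    let evens: Vec<i32> = v.iter().filter(|&&x| x % 2 == 0).cloned().collect();
    assert_eq!(evens, vec![2, 4]);

    // IntoIterator for &mut Vec<T> (or v.iter_mut()): yields &mut i32.
    for x in &mut v {
        *x += 10;
    }
    assert_eq!(v, vec![11, 12, 13, 14, 15]);

    // IntoIterator for Vec<T>: consumes the vector; yields i32 by value.
    let total = v.into_iter().fold(0, |acc, x| acc + x);
    assert_eq!(total, 65);
    // `v` is moved and can no longer be used here.
}
```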

Related

Implications of using np.lib.stride_tricks.as_strided

I have some questions about how np.lib.stride_tricks.as_strided differs from just using np.ndarray directly, and whether as_strided can be intercepted by subclasses of np.ndarray.
Given an array:
a = np.arange(10, dtype=np.int8)
Is there any difference (effectively) between using as_strided and creating the ndarray with the appropriate strides directly?
np.lib.stride_tricks.as_strided(a, (9,2), (1,1))
np.ndarray((9,2), np.int8, buffer=a.data, strides=(1,1))
as_strided just seems to be more implicit about what it is doing.
Can either function be intercepted by a subclass of np.ndarray? If a in the example above were a custom subclass of ndarray, would some function get called before returning the strided view? I know that setting subok for as_strided will preserve any subclass, but is this actually doing anything other than casting the array?
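As for the first question, a quick experiment suggests the two calls do produce equivalent views over the same buffer (a sketch, not an authoritative answer):

```python
import numpy as np

a = np.arange(10, dtype=np.int8)

# Two ways of building the same overlapping (9, 2) view with byte strides (1, 1).
s1 = np.lib.stride_tricks.as_strided(a, (9, 2), (1, 1))
s2 = np.ndarray((9, 2), np.int8, buffer=a.data, strides=(1, 1))

print(s1.tolist() == s2.tolist())      # True: identical contents
print(s1.strides, s2.strides)          # (1, 1) (1, 1)
print(np.shares_memory(s1, a), np.shares_memory(s2, a))  # True True
```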

Cython references to slots in a numpy array

I have an object with a numpy array instance variable.
Within a function, I want to declare local references to slots within that numpy array.
E.g.,
cdef double& x1 = self.array[0]
Reason being, I don't want to spend time instantiating new variables and copying values.
Obviously the above code doesn't work; the error says something about C++-style references not being supported. How do I do what I want to do?
C++ references aren't supported as local variables (even in Cython's C++ mode) because they need to be initialized upon creation and Cython prefers to generate code like:
# start of function
double& x_ref
# ...
x_ref = something # assign
# ...
This ensures that variable scope behaves in a "Python way" rather than a "C++ way". It does mean that everything needs to be default constructible, though.
However, C++ references are usually implemented in terms of pointers, so the solution is just to use pointers yourself:
cdef double* x1 = &self.array[1]
x1[0] = 2 # use [0] to dereference pointer
Obviously C++ references make the syntax nicer (you don't have to worry about dereferences and getting addresses) but performance-wise it should be the same.

Algebraic Data types in VBA

I'm trying to make a set of functions and subs for basic algebraic operations, such as the matrix product, the vector product, and finding the inverse of a matrix.
I have been using multidimensional arrays so far, declaring them as Variants, because for some reason, when you want a function to return an array value by setting it equal to an array inside the function, it only works if both are of type Variant.
I want to declare a data type called vector which could be scalar, vector, matrix, or even something with more dimensionality, so when I declare a generic function like addition, I can say:
function addition (vect1 as vector, vect2 as vector) as vector
or maybe:
function addition (vect1() as vector, vect2() as vector) as vector()
and it works for every vector type (as long as vect1 and vect2 are the same size obviously).
I would like vector’s components to be addressed like arrays are e.g.
vect1(2,3) and not vect1.row(2).column(3)
Is it possible to create such a data type in VBA? It's basically the data type you work with in MATLAB or Octave, but I would like to create it in VBA and have it take values from MS Excel.

Find all variables that a tensorflow op depends upon

Is there a way to find all variables that a given operation (usually a loss) depends upon?
I would like to use this to then pass this collection into optimizer.minimize() or tf.gradients() using various set().intersection() combinations.
So far I have found op.op.inputs and tried a simple BFS on that, but I never chance upon Variable objects as returned by tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES) or slim.get_variables()
There does seem to be a correspondence between the corresponding `Tensor.op._id` and `Variable.op._id` fields, but I'm not sure that's something I should rely upon.
Or maybe I shouldn't want to do this in the first place?
I could of course construct my disjoint sets of variables meticulously while building my graph, but then it would be easy to miss something if I change the model.
The documentation for tf.Variable.op is not particularly clear, but it does refer to the crucial tf.Operation used in the implementation of a tf.Variable: any op that depends on a tf.Variable will be on a path from that operation. Since the tf.Operation object is hashable, you can use it as the key of a dict that maps tf.Operation objects to the corresponding tf.Variable object, and then perform the BFS as before:
import collections
import tensorflow as tf

# Map each variable's underlying operation to the variable itself.
op_to_var = {var.op: var for var in tf.trainable_variables()}

starting_op = ...
dependent_vars = []
queue = collections.deque([starting_op])
visited = set([starting_op])
while queue:
    op = queue.popleft()
    try:
        dependent_vars.append(op_to_var[op])
    except KeyError:
        # `op` is not a variable, so search its inputs (if any).
        for op_input in op.inputs:
            if op_input.op not in visited:
                queue.append(op_input.op)
                visited.add(op_input.op)
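The BFS itself is easy to check without TensorFlow. Here is a sketch using hypothetical MockOp/MockTensor classes standing in for tf.Operation and tf.Tensor (only the `.inputs` and `.op` attributes are modeled):

```python
import collections

# Minimal stand-ins (hypothetical, for illustration): each op has .inputs,
# a list of tensors, and each tensor has .op pointing at its producing op.
class MockTensor:
    def __init__(self, op):
        self.op = op

class MockOp:
    def __init__(self, name, inputs=()):
        self.name = name
        self.inputs = [MockTensor(op) for op in inputs]

def dependent_variables(starting_op, op_to_var):
    """BFS from `starting_op` back through op inputs, collecting variables."""
    dependent_vars = []
    queue = collections.deque([starting_op])
    visited = {starting_op}
    while queue:
        op = queue.popleft()
        if op in op_to_var:
            dependent_vars.append(op_to_var[op])
        else:
            for op_input in op.inputs:
                if op_input.op not in visited:
                    queue.append(op_input.op)
                    visited.add(op_input.op)
    return dependent_vars

# Tiny graph: loss <- matmul <- (w, x); only w is a "variable".
w = MockOp("w")
x = MockOp("x")
matmul = MockOp("matmul", [w, x])
loss = MockOp("loss", [matmul])

print(dependent_variables(loss, {w: "w_var"}))  # ['w_var']
```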

tensorflow add new op: could an attr accept a scalar tensor?

I can't find detailed info about this in the official docs.
Could anyone give more detailed info?
TensorFlow uses attrs as "compile-time constants" that determine the behavior and type (number of inputs and outputs) of an op.
You can define an op that has a TensorProto as one of its attrs. For example the tf.constant() op takes its value as an attr, which is defined here in the corresponding op registration.
There are a few limitations to this feature:
It is not currently possible to constrain the shape of the tensor statically. You would need to validate this in the constructor for the op (where GetAttr is typically called).
Similarly, it is not currently possible to constrain the element type of the tensor statically, so you will need to check this at runtime as well.
In the Python wrapper for your op, you will need to pass the attr's value as a TensorProto, e.g. by calling tf.contrib.util.make_tensor_proto() to do the conversion.
In general, you may find it much easier to use a simple int, float, bool, or string attr instead of a scalar TensorProto, but the TensorProto option is available if you need to encode a less common type.
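For the Python-wrapper step, here is a minimal sketch of the conversion. It assumes a recent TensorFlow, where the helper is also exposed as tf.make_tensor_proto (tf.contrib.util.make_tensor_proto in 1.x):

```python
import tensorflow as tf

# Convert a Python scalar to a TensorProto, as you would when passing
# a tensor-valued attr from a Python op wrapper.
proto = tf.make_tensor_proto(42, dtype=tf.int32)

# Small tensors are stored in the typed value fields of the proto.
print(list(proto.int_val))           # [42]
print(len(proto.tensor_shape.dim))   # 0 (a scalar has no dimensions)
```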