What is tf FlatMapDataset - tensorflow

I cannot find anything about this type of TensorFlow dataset: FlatMapDataset.
I came across it while using the Hugging Face transformers library; the glue_convert_examples_to_features function returns it.
What is it? And what do I do with it?
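For context: FlatMapDataset appears to be the concrete tf.data.Dataset subclass that Dataset.flat_map() returns (and from_generator is built on flat_map internally, which is likely why it shows up here), so it can be used like any other tf.data.Dataset. A minimal sketch, assuming TF 2.x with eager execution:
import tensorflow as tf

# flat_map maps each element to a dataset and flattens the results
ds = tf.data.Dataset.range(3)
flat = ds.flat_map(lambda x: tf.data.Dataset.from_tensors(x).repeat(2))
print(type(flat).__name__)  # FlatMapDataset (exact name may vary by TF version)

# use it like any other dataset: batch, shuffle, iterate, feed to model.fit
for item in flat.batch(2):
    print(item)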

Related

Sklearn datasets default data structure is pandas or NumPy?

I'm working through an exercise in https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/ and am finding unexpected behavior on my computer when I fetch a dataset. The following code returns
numpy.ndarray
on the author's Google Colab page, but returns
pandas.core.frame.DataFrame
in my local Jupyter notebook. As far as I know, my environment uses the exact same library versions as the author's. I can easily convert the data to a NumPy array, but since I'm using this book as a guide for novices, I'd like to know what could be causing this discrepancy.
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1)
mnist.keys()
type(mnist['data'])
The author's Google Colab notebook is at the following link; scroll down to the "MNIST" heading. Thanks!
https://colab.research.google.com/github/ageron/handson-ml2/blob/master/03_classification.ipynb#scrollTo=LjZxzwOs2Q2P
Just to close off this question: the comment by Ben Reiniger, namely to add as_frame=False, is correct. The discrepancy is most likely a scikit-learn version difference: since scikit-learn 0.24, fetch_openml defaults as_frame to 'auto', which returns a pandas DataFrame for this dataset, whereas older releases returned NumPy arrays. For example:
mnist = fetch_openml('mnist_784', version=1, as_frame=False)
The OP has already made this change to the Colab code in the link.
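If you already have the DataFrame in hand, converting it is also a one-liner; a minimal sketch that works for either return type:
X = mnist['data']
X = X.to_numpy() if hasattr(X, 'to_numpy') else X  # DataFrame -> ndarray; no-op for ndarray
type(X)  # numpy.ndarray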

Does PyTorch have a RandomState-like object for random number generation?

In NumPy I can
import numpy as np
rs = np.random.RandomState(seed=0)
and then pass that object around, e.g. for dependency injection.
Does PyTorch have a similar interface? I can't find anything in the docs, but maybe I'm missing something.
The closest thing would be torch.manual_seed, which sets the seed for generating random numbers and returns a torch.Generator object; you can also construct a standalone torch.Generator and pass it to random ops via their generator argument. This thread has more information; apparently there may be some inconsistencies depending on whether you are using a GPU or a CPU.
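A minimal sketch of the generator-passing pattern, assuming a recent PyTorch:
import torch

# standalone RNG state, analogous to np.random.RandomState(seed=0)
g = torch.Generator()
g.manual_seed(0)

# many sampling ops accept a generator keyword argument
x = torch.randn(3, generator=g)
idx = torch.randperm(10, generator=g)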

TF 2.0: how to convert 'tf.contrib.eager.num_gpus()'?

I found no way to get a num_gpus function working in TF 2.0.
I thought it should somehow be possible via compat.v1.
I used tensorflow.contrib.eager.num_gpus() within a helper function to initialize the GPU if present.
What is the intended way to get the desired info in TF 2.0?
Instead of the former tf.contrib.eager.num_gpus(), use these lines:
import tensorflow as tf
from tensorflow.python.eager import context

num_gpus = context.num_gpus()
if num_gpus > 0:
    with tf.device("/gpu:0"):
        ...
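Note that tensorflow.python.eager is a private module and may break between releases. A sketch using the public API instead, assuming TF 2.1+ (on TF 2.0 the function lives under tf.config.experimental):
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')  # tf.config.experimental.list_physical_devices on TF 2.0
if len(gpus) > 0:
    with tf.device('/gpu:0'):
        ...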

tensorflow 'module' object has no attribute 'prepare_attention'

I am using TensorFlow version 1.3, but the tutorial I am following was written for version 1.0, and I am quite new to TensorFlow. The problem that I get is:
'module' object has no attribute 'prepare_attention'
And the code is:
tf.contrib.seq2seq.prepare_attention(attention_states, attention_option = "bahdanau", num_units = decoder_cell.output_size)
I couldn't figure out what to use instead of the tf.contrib.seq2seq.prepare_attention() function. Is there anyone who can help?
Downgrade your TensorFlow and it'll work. The problem is that prepare_attention was removed in later releases, hence we use an older version of TF to work with it.
Okay, all you need to do is create a new environment with Python 3.5.4 and then install TensorFlow 1.0.0. That's it. Everything will work fine.
tf.contrib.seq2seq.prepare_attention only exists up to TensorFlow 1.0; I have version 2.3.1, where tf.contrib is gone entirely and there is no tf.compat.v1 equivalent to alias it to. The replacement is the attention-wrapper API: tf.contrib.seq2seq.BahdanauAttention plus tf.contrib.seq2seq.AttentionWrapper on TF 1.x (1.2+), or tfa.seq2seq from the TensorFlow Addons package on TF 2.x.
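A rough sketch of the replacement pattern on TF 1.x (1.2+), reusing decoder_cell and attention_states from the question (the rest is illustrative, not a drop-in port):
import tensorflow as tf

# Bahdanau-style attention over the encoder states
attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(
    num_units=decoder_cell.output_size, memory=attention_states)

# wrap the decoder cell so attention is applied at every step
attn_cell = tf.contrib.seq2seq.AttentionWrapper(decoder_cell, attention_mechanism)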

NameError: global name 'linear' is not defined

I am trying to run an implementation of an attention mechanism by Google DeepMind. However, it is based on an older version of TensorFlow, and I am getting this error:
from tensorflow.models.rnn.rnn_cell import RNNCell, linear
concat = linear([inputs, h, self.c], 4 * self._num_units, True)
NameError: global name 'linear' is not defined
I couldn't find the linear function in the new TensorFlow documentation. Can anyone help me with this? Thanks!
You need to use the tf.nn.rnn_cell._linear function to make the code work. Have a look at this tutorial for a sample usage.
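The underscore prefix means _linear is a private helper, and it moved around (and was eventually removed) across TF 1.x releases, so as a fallback it is easy to re-implement. A minimal sketch of the old helper's behavior (concatenate the inputs, apply one weight matrix, optionally add a bias), assuming TF 1.x graph mode and hypothetical variable names:
import tensorflow as tf

def linear(args, output_size, bias):
    # concatenate all inputs along the feature axis
    total = tf.concat(args, axis=1)
    input_size = total.get_shape()[1].value
    w = tf.get_variable("linear_w", [input_size, output_size])
    out = tf.matmul(total, w)
    if bias:
        b = tf.get_variable("linear_b", [output_size],
                            initializer=tf.zeros_initializer())
        out = tf.nn.bias_add(out, b)
    return out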