Tensorflow provides linspace but not logspace. Is there any reason for that?
I know I can use numpy for that but I was just curious about the reason behind it.
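For reference, it seems the same effect can be had in tensorflow with one line, by exponentiating a linspace (a sketch, assuming numpy's default base of 10):

import tensorflow as tf

def tf_logspace(start, stop, num, base=10.0):
    # numpy's logspace(a, b, n) is just base ** linspace(a, b, n)
    return tf.pow(base, tf.linspace(start, stop, num))

print(tf_logspace(0.0, 3.0, 4))  # [1., 10., 100., 1000.]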
Machine learning frameworks comprise, amongst other things, the following functions:
augmentations
metrics and losses
These functions are simple transformations of tensors and seem rather framework-independent. However, tensorflow's categorical crossentropy loss, for example, uses tensorflow-specific functions like tf.convert_to_tensor() or tf.cast(), so it cannot easily be used in pytorch. Also, to my knowledge tensorflow heavily prefers to work with tensorflow tensors instead of numpy arrays, in order to build tensorflow graphs.
Are there any existing efforts or ideas on how to write such functions so that they can be used in both frameworks? I'm thinking of pure numpy functions which can somehow be converted to either tensorflow or pytorch.
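To make the idea concrete, here is a rough sketch of the kind of thing I mean: the math itself is written once with plain operators, and the few framework-specific primitives (the log, clip, sum_ parameters are hypothetical names here) are injected per framework:

import numpy as np

def make_categorical_crossentropy(log, clip, sum_, eps=1e-7):
    # The math only uses operators that every framework overloads;
    # the framework-specific primitives come in as arguments.
    def cce(y_true, y_pred):
        y_pred = clip(y_pred, eps, 1.0 - eps)
        return -sum_(y_true * log(y_pred))
    return cce

cce_np = make_categorical_crossentropy(np.log, np.clip, np.sum)

# Hypothetically, the same factory would cover the other frameworks:
# cce_tf = make_categorical_crossentropy(tf.math.log, tf.clip_by_value, tf.reduce_sum)
# cce_torch = make_categorical_crossentropy(torch.log, torch.clamp, torch.sum)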
I'm using Sklearn for my machine learning, and my question is: how can I see the progress of my training?
If I use Tensorflow I can watch the training progress with Tensorboard, but does Sklearn have something like this?
As pointed out in the comments, you can use matplotlib; there are plenty of tutorials on how to create a plot that updates in real time during training.
However, personally I found these options pretty cumbersome. I instead chose to use the PyTorch interface to tensorboard.
That works like a charm and you can just pass in numpy loss values.
Here's how to get started: https://pytorch.org/docs/stable/tensorboard.html
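A minimal sketch of that approach, assuming torch and sklearn are both installed (note that SGDClassifier's loss is spelled "log" instead of "log_loss" in older sklearn versions):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier
from torch.utils.tensorboard import SummaryWriter

X, y = make_classification(n_samples=1000, random_state=0)
clf = SGDClassifier(loss="log_loss")
writer = SummaryWriter("runs/sklearn_demo")  # inspect with: tensorboard --logdir runs

for epoch in range(20):
    clf.partial_fit(X, y, classes=np.unique(y))
    # add_scalar happily accepts plain Python/numpy floats, no torch tensors needed
    writer.add_scalar("train/accuracy", clf.score(X, y), epoch)

writer.close()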
Could you please tell me whether it is feasible to transform a torch model (saved with torch.save) into algebraic matrices/equations that can be operated on with numpy or basic Python, without the need to install torch and other related libraries (which occupy a lot of space)? If so, could you please give me some hints or a link with explanations? Thank you very much.
I'm not aware of any way to do this without a lot of your own work. Basically you'd have to port most of the pytorch library to numpy, which would be a huge project. If space is an issue, check whether you can save some space by, e.g., using earlier torch versions or the CPU-only builds of pytorch.
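To give a feel for the work involved: for a tiny model you could export the weights once and re-implement forward() by hand. A sketch, assuming a hypothetical two-layer MLP whose linear layers are named fc1 and fc2:

import numpy as np

# Export once, on a machine where torch is available:
# import torch
# model = torch.load("model.pt")
# np.savez("weights.npz", **{k: v.detach().cpu().numpy()
#                            for k, v in model.state_dict().items()})

# Torch-free inference, re-implementing forward() by hand:
w = np.load("weights.npz")

def relu(x):
    return np.maximum(x, 0.0)

def forward(x):
    # nn.Linear stores its weight as (out_features, in_features), hence the transposes
    h = relu(x @ w["fc1.weight"].T + w["fc1.bias"])
    return h @ w["fc2.weight"].T + w["fc2.bias"]

Every extra layer type (convolutions, batch norm, attention, ...) needs the same manual treatment, which is why this only scales to small models.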
I have been looking to learn TensorFlow and I have noticed that different functions are used for the same goal. To square a variable for instance, I have seen tf.square(), tf.math.square() and tf.keras.backend.square(). This is the same for most math operations. Are all these the same or is there any difference?
Mathematically, they all produce the same result; in fact, tf.square is just an alias of tf.math.square. The functions under tf.math.* are meant for operating on Tensorflow tensors.
For example, when you write a custom loss or metric, the inputs and outputs should be Tensorflow tensors, so that Tensorflow knows how to take gradients of the function. You can also use the tf.keras.backend.* functions for custom losses etc.
Try to use the tf.math functions whenever you can; native operations are preferred because they are officially documented and guaranteed to keep backward compatibility between TF versions like TF 1.x and TF 2.x.
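For example, a quick sanity check plus a custom-loss sketch built only from tf.math ops (a minimal sketch):

import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])
# All three variants produce the same tensor:
print(tf.square(x), tf.math.square(x), tf.keras.backend.square(x))

# A custom loss built from tf.math ops stays differentiable:
def mse(y_true, y_pred):
    return tf.reduce_mean(tf.math.square(y_true - y_pred))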
My data set is a numpy array. Some tutorials say that to take advantage of the GPU, we should convert the numpy arrays to tensorflow tensors and then use the tensorflow model.
But after training, some code uses numpy functions for testing and interaction, while the code in the official tensorflow tutorial still uses the same tensorflow model and tf.data.Dataset for testing.
I want to know:
When testing, or applying the model in real time, should I use numpy or tensorflow tensors and models?
In other words, are there any drawbacks to using tensorflow tensors and functions when not training?
e.g. we use
selected_words = tf.argsort(o_j)
instead of
selected_words = np.argsort(o_j)
Since TF tensors can live on the GPU while numpy arrays live in CPU memory, converting between them requires a memory allocation and a device-to-host copy through the CUDA API (see the pycuda documentation), which causes a small delay. Such a delay can be a problem during training because of the high-throughput data stream, but I think it can be ignored in most inference use cases. Anyway, if selected_words is the desired output, we would normally prefer tf.argsort, to keep an elegant end-to-end model. However, if the output is reused in multiple places, like logits, using np.argsort in that specific situation is fine.
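A small sketch of the two variants (here o_j is just a stand-in for your model's output scores):

import numpy as np
import tensorflow as tf

o_j = tf.random.uniform((1000,))  # stand-in for the model's output scores

# Stays on the device and fits into an end-to-end TF graph:
selected_words = tf.argsort(o_j)

# Forces a device-to-host copy first; fine for one-off post-processing:
selected_words_np = np.argsort(o_j.numpy())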