Implementing Shaw's Relative Attention using Tensorflow

Is there a straightforward way to implement relative positional encoding as described in the Shaw paper using Tensorflow instead of absolute positional encoding? Thanks!
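For concreteness, here is a minimal sketch of the paper's key-side relative logits in plain TF2. It illustrates Shaw et al.'s equations rather than any official implementation, and the helper names (make_relative_embeddings, relative_attention_logits) are made up:

import tensorflow as tf

def make_relative_embeddings(seq_len, depth, max_distance):
    # One learned table of 2*max_distance + 1 vectors; relative distances
    # j - i are clipped to [-max_distance, max_distance] as in the paper.
    table = tf.Variable(tf.random.normal([2 * max_distance + 1, depth]))
    pos = tf.range(seq_len)
    rel = tf.clip_by_value(pos[None, :] - pos[:, None],
                           -max_distance, max_distance) + max_distance
    return tf.gather(table, rel)  # a^K with shape [T, T, depth]

def relative_attention_logits(q, k, rel_k):
    # q, k: [batch, heads, T, depth]; rel_k: [T, T, depth].
    d = tf.cast(tf.shape(q)[-1], tf.float32)
    content = tf.matmul(q, k, transpose_b=True)       # q_i . k_j
    position = tf.einsum('bhid,ijd->bhij', q, rel_k)  # q_i . a^K_ij
    return (content + position) / tf.sqrt(d)          # [batch, heads, T, T]

The result feeds into the usual softmax over the last axis; the paper's value-side embeddings a^V can be added analogously after the softmax.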

Related

What's the difference between Keras' AUC(curve='PR') and Scikit-learn's average_precision_score?

I am quite confused about the difference between Keras' AUC(curve='PR') and Scikit-learn's average_precision_score. My objective is to compute the Area Under the Precision-Recall Curve (AUPRC) for both Scikit-learn and Keras models. However, these two metrics yield vastly different results!
Did I miss something in the TensorFlow-Keras documentation at https://www.tensorflow.org/api_docs/python/tf/keras/metrics/AUC with regard to the use of the AUC function?
As stated in the Scikit-learn documentation, they use a different implementation method:
References [Manning2008] and [Everingham2010] present alternative variants of AP that interpolate the precision-recall curve. Currently, average_precision_score does not implement any interpolated variant. References [Davis2006] and [Flach2015] describe why a linear interpolation of points on the precision-recall curve provides an overly-optimistic measure of classifier performance. This linear interpolation is used when computing area under the curve with the trapezoidal rule in auc.
In the average_precision_score function documentation, you can also read:
This implementation is not interpolated and is different from computing the area under the precision-recall curve with the trapezoidal rule, which uses linear interpolation and can be too optimistic.
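To see the difference concretely, here is a small sketch comparing the two on toy data (the labels and scores are made up for illustration):

import numpy as np
import tensorflow as tf
from sklearn.metrics import average_precision_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.6, 0.55, 0.9, 0.3])

# scikit-learn: step-wise (non-interpolated) average precision.
print('AP:', average_precision_score(y_true, y_score))

# Keras: approximates the PR curve at a fixed number of thresholds
# (interpolated summation by default), so the value can differ.
m = tf.keras.metrics.AUC(curve='PR', num_thresholds=200)
m.update_state(y_true, y_score)
print('Keras AUC(PR):', m.result().numpy())

With more thresholds the Keras estimate stabilizes, but the two remain different estimators and need not match exactly.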
I encourage you to look in detail at the different functions and their descriptions available in the metrics module. I also highly recommend reading the related papers.
Lastly, there's also a potentially interesting thread here: [AUC] result of tf.metrics.auc doesnot match with sklearn's.

Use TF.JS model in regular TensorFlow

Is it possible to use TensorFlow.js models from regular TensorFlow? Some of them, like FaceMesh, have no direct counterparts available.
We do not have any tools to convert from TensorFlow.js format to Python compatible format.
To the best of my knowledge, we only have the ability to go from Python -> JavaScript right now (most folks want to take some research written in Python and use it in JS land, not the other way around).

Tensorflow Object Detection API - How do I implement Mask R-CNN via this?

I notice in the code for the Tensorflow Object Detection API there are several references to Mask R-CNN however no mention of it in the documentation. Is it possible to train/run Mask R-CNN through this API, and if so how?
You may not like it, but the answer (for the moment) is no. The API cannot be used to predict or recover masks.
It only uses a small part of the Mask R-CNN paper to predict boxes in a certain way; predicting the instance masks is not yet implemented.
Update: we can now implement masks with faster_rcnn_inception_v2; there are sample configs as of TensorFlow 1.8.0 (see the fragment below).
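For reference, a hedged fragment of the pieces a mask-enabled pipeline.config adds on top of a plain Faster R-CNN setup (field names follow the sample mask_rcnn configs in the object_detection repo; check the object_detection protos for the authoritative list, and note the elided parts are the usual detection settings):

model {
  faster_rcnn {
    number_of_stages: 3  # third stage produces the instance masks
    # ... usual Faster R-CNN settings ...
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        predict_instance_masks: true
        mask_height: 15
        mask_width: 15
        # ...
      }
    }
  }
}
train_input_reader {
  load_instance_masks: true
  mask_type: PNG_MASKS
  # ... tf_record_input_reader, label_map_path, etc. ...
}

Your training data also needs per-instance masks (e.g. PNG masks encoded into the TFRecords) for the mask loss to be meaningful.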

Image warping in Tensorflow

I'm working with video sequences and optical flows. I'd like to know whether Tensorflow has an operation for warping images, analogous to image.warp in Torch (https://github.com/torch/image/blob/master/doc/paramtransform.md).
If there is no such operation built in, maybe there is open source code for it, or you could give me pointers for implementing this operation in TF.
Thanks!
I haven't found a built-in function yet, but the answer to this question might help you.
It's built with standard Tensorflow ops and does bilinear interpolation, though I guess it won't be very fast compared to a truly CUDA-optimized op. You would also need to extend it for batches, color images, and the kind of padding you want; the sketch below shows the core sampling idea.
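To give a flavor of that approach, here is a minimal sketch of bilinear sampling from standard ops, assuming a single-channel image and clamping at the borders (bilinear_sample is a made-up helper, not a TF API):

import tensorflow as tf

def bilinear_sample(image, y, x):
    # image: [H, W] float tensor; y, x: float sampling coordinates of
    # identical shape. Out-of-range lookups are clamped to the edges.
    h = tf.shape(image)[0]
    w = tf.shape(image)[1]
    y0, x0 = tf.floor(y), tf.floor(x)
    y1, x1 = y0 + 1.0, x0 + 1.0
    wy1, wx1 = y - y0, x - x0        # distance past the top-left corner
    wy0, wx0 = 1.0 - wy1, 1.0 - wx1

    def gather(yy, xx):
        yy = tf.clip_by_value(tf.cast(yy, tf.int32), 0, h - 1)
        xx = tf.clip_by_value(tf.cast(xx, tf.int32), 0, w - 1)
        return tf.gather_nd(image, tf.stack([yy, xx], axis=-1))

    # Weighted sum of the four neighboring pixels.
    return (wy0 * wx0 * gather(y0, x0) + wy0 * wx1 * gather(y0, x1) +
            wy1 * wx0 * gather(y1, x0) + wy1 * wx1 * gather(y1, x1))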
Yes. In TF2 it has moved to TensorFlow Addons (here):
@tf.function
tfa.image.dense_image_warp(
    image: tfa.types.TensorLike,
    flow: tfa.types.TensorLike,
    name: Optional[str] = None
) -> tf.Tensor
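A small usage sketch (shapes as in the Addons docs; note the convention that the output at (y, x) samples the input at (y - flow_y, x - flow_x) with bilinear interpolation):

import tensorflow as tf
import tensorflow_addons as tfa

image = tf.random.uniform([1, 64, 64, 3])  # [batch, height, width, channels]
flow = tf.ones([1, 64, 64, 2])             # per-pixel (dy, dx) displacements
warped = tfa.image.dense_image_warp(image, flow)
print(warped.shape)                        # (1, 64, 64, 3)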
OpenCV has these functions: warpPerspective() and perspectiveTransform().

Would it be straightforward to implement a spatial transformer network in tensorflow?

I am interested in trying things out with a spatial transformer network, and I can't find any implementation of it in Caffe or Tensorflow, which are the only two libraries I'm interested in using. I have a pretty good grasp of Tensorflow, but was wondering whether it would be straightforward to implement with the existing building blocks Tensorflow offers, without having to do something too complicated like writing a custom C++ module.
Yes, it is very straightforward to set up the Tensorflow graph for a spatial transformer network with the existing API.
You can find an example implementation in Tensorflow here [1].
[1] https://github.com/daviddao/spatial-transformer-tensorflow
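To illustrate how little custom machinery is needed, here is a hedged sketch of the grid-generator half using only standard ops (affine_grid is a made-up name; the sampler half is the same bilinear interpolation discussed in the warping question above):

import tensorflow as tf

def affine_grid(theta, height, width):
    # theta: [B, 2, 3] affine parameters from the localization network.
    # Returns normalized sampling coordinates of shape [B, H, W, 2].
    ys = tf.linspace(-1.0, 1.0, height)
    xs = tf.linspace(-1.0, 1.0, width)
    gy, gx = tf.meshgrid(ys, xs, indexing='ij')
    ones = tf.ones_like(gx)
    grid = tf.reshape(tf.stack([gx, gy, ones], axis=-1), [-1, 3])  # [H*W, 3]
    coords = tf.einsum('bij,nj->bni', theta, grid)                 # [B, H*W, 2]
    return tf.reshape(coords, [-1, height, width, 2])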
There is an implementation in Caffe here: https://github.com/daerduoCarey/SpatialTransformerLayer
Tensorflow has an implementation of a Spatial Transformer Network in the models repository: https://github.com/tensorflow/models/tree/master/research/transformer