Does anyone know of an encoder that supports the "main444-16-stillpicture" profile (for images with 16-bit bit depth)?
I know that VideoLAN's x265 encoder does not support it!
Thanks for your answers!
I'm using an AMD GPU, and VideoFileWriter does exactly what I need; the only thing I haven't found is a way to use the h264_amf encoder instead of the usual h264 encoder.
I am currently trying to quantize a BERT classifier model but am running into an error, and I was wondering whether this is even supported at the moment. To be clear, I am asking whether quantization is supported on the BERT classifier super class in the tensorflow-model-garden. Thanks in advance for the help!
Quantizing the standard BERT classifier is probably not a good way to go if you are interested in running a BERT-like model on a resource-constrained edge device (like a mobile phone). For your specific question, I believe the answer is 'no, quantization of the standard BERT is not supported.' However, a better approach is probably to use one of the smaller BERT-type models that have been created for the edge use case, such as MobileBERT:
https://github.com/google-research/google-research/tree/master/mobilebert
The above link includes scripts for fine-tuning and then converting to TF Lite format in order to run on device.
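For the conversion step, a rough sketch of post-training quantization with the standard TF Lite converter is below. This is not the repo's own export script; the SavedModel path, sequence length, and the random representative dataset are placeholders you would replace with your fine-tuned model and real tokenized samples.

```python
# Hypothetical sketch: post-training quantization of a fine-tuned SavedModel.
# Paths, sequence length, and the representative data are placeholders; the
# exact number and ordering of inputs depends on your model's signature.
import numpy as np
import tensorflow as tf

SAVED_MODEL_DIR = "/tmp/mobilebert_finetuned"  # placeholder path
SEQ_LEN = 128                                  # assumed max sequence length

def representative_dataset():
    # Yield a few example inputs so the converter can calibrate int8 ranges.
    # Replace these random tensors with real tokenized sentences.
    for _ in range(100):
        ids = np.random.randint(0, 30522, size=(1, SEQ_LEN), dtype=np.int32)
        mask = np.ones((1, SEQ_LEN), dtype=np.int32)
        segments = np.zeros((1, SEQ_LEN), dtype=np.int32)
        yield [ids, mask, segments]

converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset

tflite_model = converter.convert()
with open("mobilebert_int8.tflite", "wb") as f:
    f.write(tflite_model)
```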
I want to use the Universal Sentence Encoder, but the problem is that Google's pretrained versions don't support my language (not even the multilingual version: https://tfhub.dev/google/universal-sentence-encoder-multilingual/3).
Is there any tutorial or way to train my own universal sentence encoder from scratch with my own corpus?
According to the issue opened here (https://github.com/tensorflow/hub/issues/36), it seems that the model was not released as open source. You need to build it yourself or fine-tune an existing model for your specific task.
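If you do want to train something USE-like from scratch on your own corpus, a common recipe is a shared ("dual") encoder trained on sentence pairs (translations, question/answer pairs, paraphrases) with an in-batch softmax loss. Below is a minimal, hypothetical Keras sketch of that idea, not Google's actual architecture; the vocabulary size, embedding width, and pooling choice are all placeholders.

```python
# Minimal dual-encoder sketch (not the released USE model). Assumes the
# corpus is tokenized into padded int sequences of sentence pairs.
import tensorflow as tf

VOCAB_SIZE = 30000   # placeholder vocabulary size
EMBED_DIM = 256      # placeholder embedding width

def make_encoder():
    # Deep-averaging-style encoder: embed tokens, mask padding, average, project.
    return tf.keras.Sequential([
        tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(EMBED_DIM, activation="tanh"),
    ])

encoder = make_encoder()  # weights shared between both sides of each pair

left = tf.keras.Input(shape=(None,), dtype=tf.int32)
right = tf.keras.Input(shape=(None,), dtype=tf.int32)
left_vec = tf.math.l2_normalize(encoder(left), axis=-1)
right_vec = tf.math.l2_normalize(encoder(right), axis=-1)

# In-batch softmax over cosine similarities: each sentence should score
# highest against its own paired sentence. A scale factor on the logits
# usually helps in practice.
logits = tf.matmul(left_vec, right_vec, transpose_b=True)

model = tf.keras.Model(inputs=[left, right], outputs=logits)
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
# Labels for a batch of size B are simply tf.range(B), i.e. the diagonal.
```

After training, the shared `encoder` on its own maps a sentence to a fixed-size embedding you can compare with cosine similarity.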
Keras has preprocessing.image.flow_from_directory() to read grayscale and RGB image formats. Is there some way I can read HDR images with 4 channels ('rgbe') using Keras or a similar library? Any ideas will be appreciated.
Thank you in advance.
The function preprocessing.image.flow_from_directory() is a very powerful one. Sadly, it has only the two modes you mentioned. Since there is no similar library that could work for you, I would suggest the following:
Go from RGBE to RGB and use preprocessing.image.flow_from_directory().
Check out this GitHub link. They talk about Keras having preprocessing with 4 channels; I suggest you update Keras.
If you want to use the E value, because you think it will have importance in your net, just build your own reader (see the sketch below). This might help.
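A custom reader doesn't have to be much code. Here is a minimal sketch assuming imageio can open your files (Radiance .hdr is decoded to float RGB, so the shared exponent is already applied); the fourth "E" channel here is recomputed from the decoded values purely for illustration, and the class/parameter names are mine, not a Keras API.

```python
# Hypothetical custom reader: yields batches of 4-channel (R, G, B, E) float
# arrays from a list of .hdr file paths. Drop the last channel and keep only
# img[..., :3] if you go with the plain-RGB option instead.
import imageio
import numpy as np
import tensorflow as tf

class RGBESequence(tf.keras.utils.Sequence):
    def __init__(self, paths, labels, batch_size=8, target_size=(256, 256)):
        self.paths = list(paths)
        self.labels = np.asarray(labels)
        self.batch_size = batch_size
        self.target_size = target_size

    def __len__(self):
        return int(np.ceil(len(self.paths) / self.batch_size))

    def _load(self, path):
        rgb = np.asarray(imageio.imread(path), dtype=np.float32)  # decoded float RGB
        rgb = tf.image.resize(rgb, self.target_size).numpy()
        # Rebuild an exponent-like channel from the brightest component.
        e = np.log2(np.maximum(rgb.max(axis=-1, keepdims=True), 1e-6))
        return np.concatenate([rgb, e], axis=-1)                  # H x W x 4

    def __getitem__(self, idx):
        sl = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        batch = np.stack([self._load(p) for p in self.paths[sl]])
        return batch, self.labels[sl]
```

You can pass an instance of this class directly to model.fit() in place of a flow_from_directory() generator.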
I want to quantize (change all the floats into INT8) an SSD-MobileNet model and then deploy it onto my Raspberry Pi. So far, I have not found anything that can help me with it. Any help would be highly appreciated.
I saw TensorFlow Lite, but it seems it only supports Android and iOS.
Any library/framework is acceptable.
Thanks in advance.
TensorFlow Lite now has support for the Raspberry Pi via Makefiles. Here's the shell script. Regarding MobileNet-SSD, you can get details on how to use it with TensorFlow Lite in this blog post (and here).
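Once you have a .tflite model, running it on the Pi only needs the interpreter. A minimal sketch with the tflite_runtime package is below; the model path and the meaning/order of the output tensors are assumptions that depend on the exact detection model you export.

```python
# Minimal on-device inference sketch with tflite_runtime on a Raspberry Pi.
# Model path and output-tensor ordering depend on the exported model.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="detect.tflite")  # placeholder path
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input frame; replace with a resized camera frame.
height, width = input_details[0]["shape"][1:3]
frame = np.zeros((1, height, width, 3), dtype=input_details[0]["dtype"])

interpreter.set_tensor(input_details[0]["index"], frame)
interpreter.invoke()

boxes = interpreter.get_tensor(output_details[0]["index"])
classes = interpreter.get_tensor(output_details[1]["index"])
scores = interpreter.get_tensor(output_details[2]["index"])
print(boxes.shape, classes.shape, scores.shape)
```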
You can try using the TensorRT library.
One of the features of the library is quantization.
In general, MobileNets are difficult to quantize (see https://arxiv.org/pdf/2004.09602.pdf), but the library should do a good job.
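For reference, the INT8 path in TensorRT's Python API looks roughly like the sketch below, assuming you have exported the model to ONNX and written an int8 calibrator that feeds representative images; details vary between TensorRT versions, so treat it as a sketch rather than a drop-in script.

```python
# Rough sketch of building an INT8 TensorRT engine from an ONNX export.
# `MyCalibrator` is a placeholder for an IInt8EntropyCalibrator2 subclass
# that feeds a few hundred representative images.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def build_int8_engine(onnx_path, calibrator):
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, TRT_LOGGER)

    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.INT8)     # quantize weights/activations
    config.int8_calibrator = calibrator       # supplies activation ranges

    return builder.build_serialized_network(network, config)

# engine_bytes = build_int8_engine("ssd_mobilenet.onnx", MyCalibrator())
```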