Google Cloud ML Engine: How to create a task receiving an image in float32 format - tensorflow

I'm new to Google Cloud ML Engine and am deploying my first model. I trained a model that receives images in float32 format. I'm following the ML Engine tutorial, but there the image is encoded in base64. Is there a way to encode it as float32 instead? Or can I create a task that sends float32 data?
python -c 'import base64, sys, json; img = base64.b64encode(open(sys.argv[1], "rb").read()); print json.dumps({"inputs": {"key":"0", "image_bytes": {"b64": img}}})' flower.jpg &> request.json

There are multiple ways to encode image data, some more efficient than others; they are outlined in this answer. You are looking for the "Raw Tensor Encoded as JSON" section, which shows how to export your model and how to construct the JSON. Please also consider the tradeoff: floats in JSON are quite inefficient, so weigh the alternative approaches as well.
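For illustration, here is a minimal client-side sketch of that raw-float approach. It assumes the exported serving signature takes a float32 tensor named "image" of shape [height, width, 3] with values scaled to [0, 1], and that one JSON instance is written per line as gcloud expects; the tensor name, size and scaling are assumptions you must match to your own export.

import json
import numpy as np
from PIL import Image

# Load and preprocess the image the same way the model was trained (size/scaling assumed).
img = np.asarray(Image.open('flower.jpg').resize((299, 299)), dtype=np.float32) / 255.0

# One JSON object per line; nested lists of floats stand in for the float32 tensor.
with open('request.json', 'w') as f:
    json.dump({'key': '0', 'image': img.tolist()}, f)

# Export side (TF 1.x estimator style, shown only for orientation): the serving input
# function would accept raw floats instead of base64 bytes, e.g.
#   image = tf.placeholder(tf.float32, shape=[None, 299, 299, 3], name='image')
#   return tf.estimator.export.ServingInputReceiver({'image': image}, {'image': image})

As the answer notes, a request built this way is much larger than the base64 version of the same image.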

Related

One hot encoding gives different result for model and deployment

I am building an AI system and I use tensorflow.keras.preprocessing.text.one_hot to encode the categorical data. I am working with text and sentence data.
vocab_length = 1000
encoded_text = one_hot(text, vocab_length)
After training, I deploy the model so it can run on user input text. I use the same one_hot method, but the encoding it generates is different, so I get wrong predictions. I also tried dumping one_hot with joblib and loading it on the server, but it still gives the wrong result. Kindly suggest how I can get the same encoding in both the model training and the server deployment.
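A hedged note on the likely cause: tf.keras.preprocessing.text.one_hot hashes words with Python's built-in hash(), which is salted per process, so the same text can map to different indices in training and in the deployed server unless PYTHONHASHSEED is pinned. A more reproducible pattern is to fit a Tokenizer once and persist it; a minimal sketch (vocab_length, train_texts, user_text and the file name are illustrative):

import pickle
from tensorflow.keras.preprocessing.text import Tokenizer

vocab_length = 1000

# At training time: fit once on the training sentences and save the fitted tokenizer.
tokenizer = Tokenizer(num_words=vocab_length)
tokenizer.fit_on_texts(train_texts)          # train_texts: list of training sentences
with open('tokenizer.pkl', 'wb') as f:
    pickle.dump(tokenizer, f)

# At serving time: load the same tokenizer so the word-to-index mapping is identical.
with open('tokenizer.pkl', 'rb') as f:
    tokenizer = pickle.load(f)
encoded_text = tokenizer.texts_to_sequences([user_text])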

How to load in a downloaded tfrecord dataset into TensorFlow?

I am quite new to TensorFlow, and have never worked with TFRecords before.
I have downloaded a dataset of images from online and the download format was TFRecord.
This is the file structure in the downloaded dataset:
(Screenshots of the folder structure omitted: the dataset contains train, validation and test folders; e.g. inside "test" there is a .tfrecord file and a .pbtxt label map.)
What I want to do is load in the training, validation and testing data into TensorFlow in a similar way to what happens when you load a built-in dataset, e.g. you might load in the MNIST dataset like this, and get arrays containing pixel data and arrays containing the corresponding image labels.
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
However, I have no idea how to do so.
I know that I can use dataset = tf.data.TFRecordDataset(filename) somehow to open the dataset, but would this act on the entire dataset folder, one of the subfolders, or the actual files? If it is the actual files, would it be on the .TFRecord file? How do I use/what do I do with the .PBTXT file which contains a label map?
And even after opening the dataset, how can I extract the data and create the necessary arrays which I can then feed into a TensorFlow model?
It's mostly archaeology, plus a few tricks.
First, I'd read the README.dataset and README.roboflow files. Can you show us what's in them?
Second, .pbtxt files are text-formatted, so we may be able to understand what that file is if you just open it with a text editor. Can you show us what's in it?
The thing to remember about a TFRecord file is that it's nothing but a sequence of binary records. tf.data.TFRecordDataset('balls.tfrecord') will give you a dataset that yields those records in order.
Number 3 is the hard part, because here you'll have binary blobs of data, but we don't have any clues yet about how they're encoded.
It's common for TFRecord files to contain serialized tf.train.Example protos.
So it would be worth a shot to try and decode it as a tf.train.Example to see if that tells us what's inside.
import tensorflow as tf

# Pull the first raw record out of the file and try to parse it as a tf.train.Example.
for record in tf.data.TFRecordDataset('balls.tfrecord'):
    break

example = tf.train.Example()
example.ParseFromString(record.numpy())
print(example)
The Example object is just a representation of a dict. If you get something other than an error there, look at the dict keys and see if you can make sense of them.
Then to make a dataset that decodes them you'll want something like:
def decode(record):
    # key_dtypes: a dict mapping each feature key you found above to its dtype.
    return tf.io.parse_example(
        record,
        {key: tf.io.RaggedFeature(dtype) for key, dtype in key_dtypes.items()})

ds = ds.map(decode)
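For example, if the printed Example showed a JPEG stored under a bytes key plus integer labels, the decode step could use an explicit feature spec with tf.io.parse_single_example instead of RaggedFeature. The key names below are hypothetical (borrowed from common object-detection exports); substitute whatever keys your records actually contain.

import tensorflow as tf

# Hypothetical keys; use the ones printed from your own records.
feature_spec = {
    'image/encoded': tf.io.FixedLenFeature([], tf.string),
    'image/object/class/label': tf.io.VarLenFeature(tf.int64),
}

def decode(record):
    parsed = tf.io.parse_single_example(record, feature_spec)
    image = tf.io.decode_jpeg(parsed['image/encoded'])           # bytes -> uint8 HxWx3 tensor
    labels = tf.sparse.to_dense(parsed['image/object/class/label'])
    return image, labels

ds = tf.data.TFRecordDataset('balls.tfrecord').map(decode)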

Convert a .npy file to wav following tacotron2 training

I am training the Tacotron2 model using TensorflowTTS for a new language.
I managed to train the model (performed pre-processing, normalization, and decoded the few generated output files)
The files in the output directory are .npy files, which makes sense as they are mel-spectrograms.
I am trying to find a way to convert these files to .wav in order to check whether my work has been fruitful.
I used this:
melspectrogram = librosa.feature.melspectrogram(
    "/content/prediction/tacotron2-0/paol_wavpaol_8-norm-feats.npy", sr=22050,
    window=scipy.signal.hanning, n_fft=1024, hop_length=256)
print('melspectrogram.shape', melspectrogram.shape)
print(melspectrogram)
audio_signal = librosa.feature.inverse.mel_to_audio(
    melspectrogram, sr=22050, n_fft=1024, hop_length=256, window=scipy.signal.hanning)
print(audio_signal, audio_signal.shape)
sf.write('test.wav', audio_signal, sample_rate)
But it gives me this error: "Audio data must be of type numpy.ndarray".
Although I am already giving it a numpy.ndarray file.
Does anyone know where the issue might be, and if anyone knows a better way to do it?
I'm not sure what your error is, but the output of a Tacotron 2 system is log Mel spectral features, and you can't just apply the inverse Fourier transform to get a waveform, because you are missing the phase information and because the features are not invertible. You can learn about why this is at places like Speech.Zone (https://speech.zone/courses/).
Instead of using librosa as you are doing, you need to use a vocoder like HiFi-GAN (https://github.com/jik876/hifi-gan) that is trained to reconstruct a waveform from log Mel spectral features. You can use a pre-trained model and most off-the-shelf vocoders, but make sure that the sample rate, Mel range, FFT size, hop size and window size are all the same between your Tacotron2 feature prediction network and whatever vocoder you choose, otherwise you'll just get noise!
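Since the question uses TensorFlowTTS, one possible route is its bundled pre-trained multi-band MelGAN vocoder. A minimal sketch, assuming the TensorFlowTTS inference API, a 22050 Hz / 80-bin LJSpeech-style feature configuration, and that the .npy file holds a normalized mel spectrogram of shape [T, 80]; a vocoder trained on a different corpus or feature setup may still produce poor audio for a new language.

import numpy as np
import soundfile as sf
from tensorflow_tts.inference import TFAutoModel

# np.load is required: a path string is not a numpy array, which is what the reported error complains about.
mel = np.load('/content/prediction/tacotron2-0/paol_wavpaol_8-norm-feats.npy').astype(np.float32)  # [T, 80]

# Pre-trained multi-band MelGAN vocoder from the TensorFlowTTS model hub.
vocoder = TFAutoModel.from_pretrained('tensorspeech/tts-mb_melgan-ljspeech-en')

# The vocoder expects a batch dimension: [1, T, 80] in, [1, T*hop, 1] out.
audio = vocoder.inference(mel[np.newaxis, ...])[0, :, 0]

sf.write('test.wav', audio.numpy(), 22050)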

What is the best way to create a custom federated image dataset for TFF in SQLite format?

I went through the source for the built-in CIFAR-100 dataset and decided to create a compatible version of the FairFace dataset, so that once I convert FairFace into a structure very similar to CIFAR-100, I can leverage the other built-in functions without many modifications everywhere.
I did search around but was unable to find how the CIFAR-100 SQLite database was created - specifically how the images were converted into BLOB for storage. After a bit of trial and error, I tried doing it this way:
sample = getDatabyIndex(train_labels, index)
example = tf.train.Example(features=tf.train.Features(feature={
    'image': bytes_feature(sample[0].tobytes()),
    'label': int64_feature(sample[1])
}))
example = example.SerializeToString()
cur.execute(
    "insert into examples('split_name','client_id','serialized_example_proto') values(?,?,?)",
    ('train', i, sqlite3.Binary(example)))
Executing this for each sample in the train data and similarly for test data. I am able to load it using this decoding method:
def parse_proto(tensor_proto):
    parse_spec = {
        'image': tf.io.FixedLenFeature(shape=(), dtype=tf.string),
        'label': tf.io.FixedLenFeature(shape=(), dtype=tf.int64),
    }
    decoded_example = tf.io.parse_example(tensor_proto, parse_spec)
    return collections.OrderedDict(
        image=tf.reshape(tf.io.decode_raw(decoded_example['image'], tf.uint8), (224, 224, 3)),
        label=decoded_example['label'])
What I noticed, however, is that the final sqlite.lzma compressed archive is 6.4 GB in size whereas the source archive for the dataset was 555 MB. I am guessing that due to the way I am storing the images, compression is not working as well as it could if they were stored in a more compatible manner. I see from the CIFAR-100 code that the images are loaded directly as FixedLenFeatures of shape (32,32,3) which means that they were stored as such but I have been unable to find a way to store my images as such. The only method that worked for me was the bytes_feature route.
What would be the best/recommended way to go about this?
Without more information about how the LZMA compression is being applied, it's hard to answer about the size increase.
To directly use the same tf.io.FixedLenFeature as the CIFAR-100 dataset from tff.simulation.datasets.cifar100.load_data, the tf.train.Example needs to be constructed using int64_feature() for the 'image' key instead of bytes. This may require casting sample[0] to a different dtype (assuming it is a np.ndarray).
During decoding:
First parse as an (N, M, 3) tensor with int64. From tensorflow_federated/python/simulation/datasets/cifar100.py#L31:
'image': tf.io.FixedLenFeature(shape=(32, 32, 3), dtype=tf.int64),
Then cast to tf.uint8. From tensorflow_federated/python/simulation/datasets/cifar100.py#L37:
image=tf.cast(parsed_features['image'], tf.uint8),
NOTE: Because of varint encoding used in protocol buffers (https://developers.google.com/protocol-buffers/docs/encoding#varints), using int64 isn't expected to add significant overhead for the serialized representation (at least less than 4x).
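Putting that together, here is a minimal sketch of the write and read sides. The helper name int64_feature, the getDatabyIndex-style sample tuple and the 224x224x3 shape follow the question; storing the image as a flattened Int64List that the fixed-shape feature spec reshapes on parse is the pattern being illustrated, not the exact TFF build script.

import collections
import numpy as np
import tensorflow as tf

def int64_feature(values):
    # Accepts a scalar or an array; the proto stores a flat list of int64 values.
    values = np.asarray(values).flatten().tolist()
    return tf.train.Feature(int64_list=tf.train.Int64List(value=values))

# Write side: store the image as int64 values instead of raw bytes.
image, label = sample[0], sample[1]                    # image: (224, 224, 3) uint8 ndarray
example = tf.train.Example(features=tf.train.Features(feature={
    'image': int64_feature(image.astype(np.int64)),
    'label': int64_feature(int(label)),
}))
serialized = example.SerializeToString()

# Read side: mirror tff's cifar100.py, with the FairFace shape and no decode_raw.
def parse_proto(tensor_proto):
    parse_spec = {
        'image': tf.io.FixedLenFeature(shape=(224, 224, 3), dtype=tf.int64),
        'label': tf.io.FixedLenFeature(shape=(), dtype=tf.int64),
    }
    parsed = tf.io.parse_example(tensor_proto, parse_spec)
    return collections.OrderedDict(
        image=tf.cast(parsed['image'], tf.uint8),
        label=parsed['label'])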

How to make up an FSNS dataset with my own images for the attention OCR tensorflow model

I want to apply attention-ocr to detect all digits on number board of cars.
I've read the README.md of attention_ocr on GitHub (https://github.com/tensorflow/models/tree/master/research/attention_ocr), and also the Stack Overflow answer on how to use my own image data to train the model (https://stackoverflow.com/a/44461910/743658).
However, I didn't get any information on how to store the annotation or label of a picture, or what format it should be in.
For an object detection model, I was able to make my dataset with LabelImg, convert it into a csv file, and finally make a .tfrecord file.
I want to make a .tfrecord file in the FSNS dataset format.
Can you give me your advice to go on this training steps?
Please reread the mentioned answer; it has a section explaining how to store the annotation. It is stored in three features: image/text, image/class and image/unpadded_class. The image/text field is used for visualization; some models support unpadded sequences and use image/unpadded_class, while the default version relies on the text, padded with null characters to a fixed length, stored in the feature image/class. Here is the excerpt that stores the text annotation:
char_ids_padded, char_ids_unpadded = encode_utf8_string(
    text, charset, length, null_char_id)
example = tf.train.Example(features=tf.train.Features(
    feature={
        'image/class': _int64_feature(char_ids_padded),
        'image/unpadded_class': _int64_feature(char_ids_unpadded),
        'image/text': _bytes_feature(text)
        ...
    }
))
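For completeness, the _int64_feature and _bytes_feature helpers used above are the usual thin wrappers around tf.train.Feature. A minimal sketch of them (the standard pattern, not necessarily the repository's exact code):

import tensorflow as tf

def _int64_feature(values):
    # values: a list of ints, e.g. padded or unpadded character ids.
    return tf.train.Feature(int64_list=tf.train.Int64List(value=values))

def _bytes_feature(value):
    # value: bytes; encode str to UTF-8 first if needed.
    if isinstance(value, str):
        value = value.encode('utf-8')
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))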
If you have worked with tensorflow object detection, then the approach should be much easier for you.
You can create the annotation file (in .csv format) using labelImg or any other annotation tool.
However, before converting it into tensorflow format (.tfrecord), you should keep in mind the annotation format. (FSNS format in this case)
The format is: files text xmin ymin xmax ymax
So while annotating, don't bother much about the class (as you would have done in object detection); some random name should suffice.
Convert it into .tfrecords.
And finally labelMap is a list of characters which you have annotated.
Hope it helps !