How to train a SyntaxNet model with my own POS data? - tensorflow

I have my own POS data in the format below.
Sentence:
I love Stack Overflow.
POS:
I/PRP love/VBP Stack/NNP Overflow/NNP ./.
So, how do I train SyntaxNet with this data?
I also want to get this output:
(ROOT
  (S
    (NP (PRP I))
    (VP (VBP love)
      (NP (NNP Stack) (NNP Overflow)))
    (. .)))
What is the format of "record_format: 'english-text'" in the SyntaxNet context.pbtxt file? What does it look like?

The output that you are interested in is a constituency parse tree. I am afraid you won't be able to use SyntaxNet to produce constituency trees without some significant code changes.
For POS tagging only, please use the CoNLL format, where you fill in only columns 1, 2, and 5:
http://ilk.uvt.nl/conll/#dataformat
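For example, the sentence above in that format, with only the ID, FORM, and POSTAG columns filled in and underscores elsewhere, would look roughly like this:

1   I          _   _   PRP   _   _   _   _   _
2   love       _   _   VBP   _   _   _   _   _
3   Stack      _   _   NNP   _   _   _   _   _
4   Overflow   _   _   NNP   _   _   _   _   _
5   .          _   _   .     _   _   _   _   _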

Related

How to find all possible paths from one RDF node to another? Any optimization for the backtracking search?

In a knowledge graph we have sample RDF triples like this:
input1 <---- f(x) ----> output1 <---- f(x) ----> output2 <---- f(x) ----> output3
My goal is to find all possible paths from input1 to output3. We start from input1, and each time we try to find f(x) and its output for the given input. SPARQL:
?function has_input input .
?function has_output ?output .
When we get some outputs, they become inputs for the next query, and so on until we find the goal output (output3).
As I am finding all possible paths, I just implemented a backtracking algorithm to consider all nodes and find all path combinations.
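A minimal sketch of that backtracking in Python, assuming a SPARQLWrapper endpoint; the endpoint URL and the urn: predicate names are placeholders for illustration:

from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://localhost:3030/kg/sparql")  # placeholder endpoint

def successors(node):
    # All outputs reachable from `node` in one f(x) step
    sparql.setQuery(f"""
        SELECT ?output WHERE {{
            ?function <urn:has_input> <{node}> .
            ?function <urn:has_output> ?output .
        }}""")
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [b["output"]["value"] for b in results["results"]["bindings"]]

def all_paths(start, goal, path=None):
    # Depth-first backtracking: yield every simple (cycle-free) path
    path = (path or []) + [start]
    if start == goal:
        yield path
        return
    for nxt in successors(start):
        if nxt not in path:  # skip nodes already on the current path
            yield from all_paths(nxt, goal, path)

for p in all_paths("urn:input1", "urn:output3"):
    print(" -> ".join(p))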
Problem:
It takes too long to find all possible paths in the worst-case scenario. Is there any solution related to the semantic web, or any optimization technique for this scenario, to reduce the backtracking time?

Convert a .npy file to .wav following Tacotron2 training

I am training the Tacotron2 model using TensorFlowTTS for a new language.
I managed to train the model (performed pre-processing and normalization, and decoded the few generated output files).
The files in the output directory are .npy files, which makes sense, as they are mel-spectrograms.
I am trying to find a way to convert these files to .wav in order to check whether my work has been fruitful.
I used this:
import scipy.signal
import librosa
import soundfile as sf

sample_rate = 22050

# Note: the first argument here is the path string itself, not a loaded array
melspectrogram = librosa.feature.melspectrogram(
    "/content/prediction/tacotron2-0/paol_wavpaol_8-norm-feats.npy", sr=22050,
    window=scipy.signal.hanning, n_fft=1024, hop_length=256)
print('melspectrogram.shape', melspectrogram.shape)
print(melspectrogram)
audio_signal = librosa.feature.inverse.mel_to_audio(
    melspectrogram, sr=22050, n_fft=1024, hop_length=256,
    window=scipy.signal.hanning)
print(audio_signal, audio_signal.shape)
sf.write('test.wav', audio_signal, sample_rate)
But it gives me this error: Audio data must be of type numpy.ndarray.
Although I am already giving it a numpy.ndarray file.
Does anyone know where the issue might be, and does anyone know a better way to do it?
I'm not sure what your error is, but the output of a Tacotron 2 system is log Mel spectral features, and you can't just apply the inverse Fourier transform to get a waveform, because you are missing the phase information and because the features are not invertible. You can learn about why this is at places like Speech.Zone (https://speech.zone/courses/).
Instead of using librosa as you are doing, you need to use a vocoder like HiFi-GAN (https://github.com/jik876/hifi-gan) that is trained to reconstruct a waveform from log Mel spectral features. You can use a pre-trained model from most off-the-shelf vocoders, but make sure that the sample rate, Mel range, FFT size, hop size, and window size are all the same between your Tacotron2 feature-prediction network and whatever vocoder you choose, otherwise you'll just get noise!
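As an illustration only, HiFi-GAN inference looks roughly like the sketch below. The names (Generator, AttrDict, the 'generator' checkpoint key) follow that repo's inference.py as I understand it, and the transpose assumes TensorFlowTTS saves features as (frames, n_mels); treat all of this as an assumption and check the repo's own scripts:

import json
import numpy as np
import torch
import soundfile as sf
from env import AttrDict      # from the hifi-gan repo
from models import Generator  # from the hifi-gan repo

with open("config.json") as f:  # config shipped alongside the checkpoint
    h = AttrDict(json.load(f))

generator = Generator(h)
state = torch.load("generator_v1", map_location="cpu")
generator.load_state_dict(state["generator"])
generator.eval()
generator.remove_weight_norm()

# Tacotron2 output; HiFi-GAN expects (batch, n_mels, frames), and the
# sample rate / mel range / FFT / hop settings must match its training config
mel = np.load("/content/prediction/tacotron2-0/paol_wavpaol_8-norm-feats.npy")
mel = torch.FloatTensor(mel.T).unsqueeze(0)

with torch.no_grad():
    audio = generator(mel).squeeze().numpy()

sf.write("test.wav", audio, h.sampling_rate)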

How to reduce the dimension of a document embedding?

Let us assume that I have a set of document embeddings, D.
Each document embedding consists of N word vectors, where each of these pre-trained vectors has 300 dimensions.
The corpus would be represented as [D, N, 300].
My question is: what would be the best way to reduce [D, N, 300] to [D, 1, 300]? How should I represent a document as a single vector instead of N vectors?
Thank you in advance.
I would say that what you are looking for is doc2vec. Using this, you can convert a whole document into a single 300-dimensional vector. You can use it like this:
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# your_documents is assumed to be a list of token lists, one list per document
documents = [TaggedDocument(doc, [i]) for i, doc in enumerate(your_documents)]
model = Doc2Vec(documents, vector_size=300, window=2, min_count=1, workers=4)
This will train the model on your data and you will be able to represent each document with only one vector as you specified in the question.
You can run inference with:
vector = model.infer_vector(doc_words)
I hope this is helpful :)
It's fairly common and fairly (perhaps surprisingly) effective to simply average the word vectors.
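If you go that route, a minimal sketch of mean pooling with NumPy (the array here is a random stand-in for your [D, N, 300] corpus):

import numpy as np

embeddings = np.random.rand(10, 50, 300)              # stand-in for [D, N, 300]
doc_vectors = embeddings.mean(axis=1, keepdims=True)  # -> [D, 1, 300]
print(doc_vectors.shape)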
Good question, but all the answers will result in some loss of information. The best way for you is to use a Bi-LSTM/GRU layer and provide your word embeddings as input to that layer, then take the output of the last time step.
The output of the last time step will contain the contextual information of the document in both the forward and backward directions. Hence, this is the best way to get what you want, as the model learns the representation.
Note that the larger the document, the greater the loss of information.
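A minimal Keras sketch of that idea (layer sizes are made up; the two directions of 150 units each concatenate to a single 300-dimensional document vector):

import tensorflow as tf

# One document = a sequence of 300-d word vectors of any length N
inputs = tf.keras.Input(shape=(None, 300))
# return_sequences defaults to False, so only the last time step is kept;
# the forward and backward halves (150 each) concatenate to a 300-d vector
encoded = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(150))(inputs)
model = tf.keras.Model(inputs, encoded)  # maps [D, N, 300] -> [D, 300]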

How to interpret the output of SyntaxNet when annotating a corpus

I annotated a corpus using a pre-trained SyntaxNet model (i.e. using Parsey McParseface). I am having a problem understanding the output. There are two metrics reported in the output. Are those for POS tagging and dependency parsing? If yes, which one is the POS tagging performance and which one is the dependency parsing performance?
Here is the output:
INFO:tensorflow:Total processed documents: 21710
INFO:tensorflow:num correct tokens: 454150
INFO:tensorflow:total tokens: 560993
INFO:tensorflow:Seconds elapsed in evaluation: 1184.63, eval metric: 80.95%
INFO:tensorflow:Processed 206 documents
INFO:tensorflow:Total processed documents: 21710
INFO:tensorflow:num correct tokens: 291851
INFO:tensorflow:total tokens: 504496
INFO:tensorflow:Seconds elapsed in evaluation: 1193.17, eval metric: 57.85%
If you're using
https://github.com/tensorflow/models/blob/master/syntaxnet/syntaxnet/demo.sh
then the first metric is POS tag accuracy and the second is UAS (unlabeled attachment score); in both cases the eval metric is just num correct tokens / total tokens (e.g. 454150 / 560993 ≈ 80.95%). They are only meaningful if the CoNLL data you input contains gold POS tags and gold dependencies.

Using tensorflow for sequence tagging : Synced sequence input and output

I would like to use TensorFlow for sequence tagging, namely part-of-speech tagging. I tried to use the same model outlined here: http://tensorflow.org/tutorials/seq2seq/index.md (which outlines a model to translate English to French).
Since in tagging the input sequence and the output sequence have exactly the same length, I configured the buckets so that input and output sequences have the same length, and tried to learn a POS tagger using this model on CoNLL 2000.
However, it seems that the decoder sometimes outputs a tagged sequence shorter than the input sequence (it seems to decide that the EOS tag should appear prematurely).
For example:
He reckons the current account deficit will narrow to only # 1.8 billion in September .
The above sentence is tokenized to have 18 tokens which gets padded to 20 (due to bucketing).
When asked to decode the above, the decoder spits out the following:
PRP VBD DT JJ JJ NN MD VB TO VB DT NN IN NN . _EOS . _EOS CD CD
So here it ends the sequence (EOS) after 15 tokens not 18.
How can I force the model to learn that the decoded sequence should be the same length as the encoded one in my scenario?
If your input and output sequences are the same length, you probably want something simpler than a seq2seq model (since handling different sequence lengths is one of its strengths).
Have you tried just training (word -> tag)?
Note that for something like POS tagging, where there is a clear signal from the tokens on either side, you'll definitely get a benefit from a bidirectional net (see the sketch below).
If you want to go all crazy, there are some fun character-level variants too, where you only emit the tag at the token boundary (the rationale being that POS tagging benefits from character-level features, e.g. for out-of-vocabulary names). So many variants to try! :D
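A minimal sketch of that (word -> tag) setup with a bidirectional net in Keras (vocabulary and tagset sizes are made up; the model emits one tag distribution per input token, so the output length always equals the input length):

import tensorflow as tf

vocab_size, n_tags = 20000, 45  # hypothetical sizes
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab_size, 128, mask_zero=True),
    # return_sequences=True: one output per token, same length as the input
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64, return_sequences=True)),
    tf.keras.layers.Dense(n_tags, activation="softmax"),  # per-token tag distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")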
There are various ways of specifying an end-of-sequence parameter. The translate demo uses an <EOS> flag to determine the end of the sequence. However, you can also specify the end of the sequence by counting the number of expected words in the output. In lines 225-227 of translate.py:
# If there is an EOS symbol in outputs, cut them at that point.
if data_utils.EOS_ID in outputs:
    outputs = outputs[:outputs.index(data_utils.EOS_ID)]
You can see that the outputs are cut off whenever <EOS> is encountered. You can easily tweak this to constrain the number of output words (see the sketch below). You might also consider getting rid of the <EOS> flag altogether during training, given your application.
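For instance, one hypothetical tweak along those lines (token_ids stands in for the encoder input of the current sentence):

input_len = len(token_ids)        # length of the encoder input
outputs = outputs[:input_len]     # never longer than the input
while len(outputs) < input_len:   # never shorter either
    outputs.append(outputs[-1])   # pad with the last tag if EOS came early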
I ran into the same problem. In the end, I found that the ptb_word_lm.py example among TensorFlow's examples is exactly what we need for tokenization, NER, and POS tagging.
If you look into the details of the language model example, you can see that it treats the input sequence as X, and X shifted right by one position as Y. That fixed-length, one-output-per-input setup is exactly what sequence labeling needs.
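A tiny illustration of that shift, with a made-up token list:

seq = ["I", "love", "Stack", "Overflow", "."]
X = seq[:-1]  # ['I', 'love', 'Stack', 'Overflow']
Y = seq[1:]   # ['love', 'Stack', 'Overflow', '.']
# For tagging, you would instead pair X with its gold tags of equal length.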