I just got into Rasa to make a chatbot and wanted to know if there is some way to train Rasa Core on datasets (preferably subreddit datasets), either on its own or using TensorFlow or something similar.
Thanks in advance.
Some ideas:
Core training data in Rasa follows a specific story format, which to date requires a predefined, closed set of intents and actions to operate. You could take utterances from a dataset, format them correctly, and use them as NLU intent examples to train a model that way. You could also use data from a dataset to define responses for retrieval actions, if you want to generate natural-sounding replies.
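As a sketch, if you pulled utterances out of a subreddit dump and labeled them by intent, an NLU training file in Rasa 2.x YAML format might look like this (the intent name and examples are invented for illustration):

```yaml
version: "2.0"
nlu:
- intent: ask_recommendation
  examples: |
    - can anyone recommend a good book on this
    - what would you suggest for a beginner
    - any recommendations?
```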
I have some data in my MySQL database that I want to make predictions on. I also have a model, developed with TensorFlow on similar data, for making these predictions.
I want to use the power of in-database machine learning to make my predictions, and I am thinking of using MindsDB for this purpose.
I have already used MindsDB for another use case, where I trained a model on the data in my database and then used it for making predictions. But is it possible to use my pre-developed model to make predictions? If so, how do I do it?
Some example code would be greatly appreciated.
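For reference, the plain (non-in-database) route looks like the sketch below. It bypasses MindsDB entirely, so check the MindsDB docs for their bring-your-own-model support if the predictions must stay in-database. The connection string, table, column names, and model path are all placeholders, and a Keras SavedModel is assumed:

```python
import pandas as pd
import sqlalchemy
import tensorflow as tf

# Placeholder connection string; requires the PyMySQL driver to be installed.
engine = sqlalchemy.create_engine("mysql+pymysql://user:pass@localhost/mydb")

# Pull the feature columns (placeholder names) out of MySQL into a DataFrame.
df = pd.read_sql("SELECT feature_a, feature_b, feature_c FROM my_table", engine)

# Load the pre-developed model (assumed to be saved in Keras SavedModel format).
model = tf.keras.models.load_model("my_pretrained_model")

# Run predictions on the rows fetched from the database.
predictions = model.predict(df.to_numpy())
```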
Based on 5 features extracted from a sample of binary files, the idea is to combine different deep learning models, each of them processing one of the features.
Or, more simply: is there a way to connect a CNN and an RNN so that the output of the CNN becomes the input of the RNN?
Any help or reference would be appreciated.
The Keras Functional API can be used to combine different deep learning models.
It is much more flexible than the Keras Sequential API, in that it supports multiple input and output pipelines.
You can also implement non-linear topologies with the Functional API.
For example:
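The sketch below is illustrative only: the input shapes, layer sizes, and the binary-classification head are assumptions, not taken from the question. It shows both patterns at once: a CNN whose output feeds an RNN directly, and five parallel branches (one per extracted feature) merged with concatenate.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# --- Pattern 1: CNN output feeding an RNN ---
# Treat one feature as a sequence of 128 vectors of size 64 (placeholder shape).
seq_in = layers.Input(shape=(128, 64), name="sequence_feature")
x = layers.Conv1D(32, kernel_size=3, activation="relu")(seq_in)  # CNN part
x = layers.MaxPooling1D(pool_size=2)(x)
x = layers.LSTM(64)(x)  # the RNN consumes the CNN's output directly

# --- Pattern 2: several branches, one per extracted feature, merged ---
flat_inputs, flat_branches = [], []
for i in range(4):  # four more feature inputs (placeholder size 100 each)
    f_in = layers.Input(shape=(100,), name=f"feature_{i}")
    flat_inputs.append(f_in)
    flat_branches.append(layers.Dense(32, activation="relu")(f_in))

merged = layers.concatenate([x] + flat_branches)      # 5 branches in total
out = layers.Dense(1, activation="sigmoid")(merged)   # e.g. malicious / benign

model = Model(inputs=[seq_in] + flat_inputs, outputs=out)
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```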
I am new to the Transformers concept, and I am going through some tutorials and writing my own code to understand question answering on the SQuAD 2.0 dataset using transformer models. On the Hugging Face website, I came across 2 different links:
https://huggingface.co/models
https://huggingface.co/transformers/pretrained_models.html
I want to know the difference between these 2 pages. Does one list just pre-trained models while the other lists both pre-trained and fine-tuned models?
Now, if I want to take, say, an ALBERT model for question answering, train it on my SQuAD 2.0 training set, and evaluate it, which of the two links should I go to?
I would formulate it like this:
The second link basically describes "community-accepted models", i.e., models that serve as the basis for the implemented Hugging Face classes, like BERT, RoBERTa, etc., plus some related models that have high acceptance or have been peer-reviewed.
That list has been around much longer, whereas the list in the first link was only recently introduced directly on the Hugging Face website, where the community can upload arbitrary checkpoints that are simply considered "compatible" with the library. Often these are additional models trained by practitioners or other volunteers, with task-specific fine-tuning. Note that all models from /pretrained_models.html are included in the /models interface as well.
If you have a very narrow use case, you might as well check whether there is already a model that has been fine-tuned on your specific task. In the worst case, you'll simply end up with the base model anyway.
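Either way, the loading code is identical; only the checkpoint name changes. A minimal sketch using the canonical albert-base-v2 checkpoint (a community fine-tuned checkpoint from /models would just swap in its "user/model" identifier):

```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# "albert-base-v2" comes from the pretrained_models.html list; hub checkpoints
# from /models load the same way, just under a "user/model" style name.
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AutoModelForQuestionAnswering.from_pretrained("albert-base-v2")
```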
I've trained a model that can mimic the day-to-day conversations occurring on Reddit. My problem is that I want it to reply to a specific use case based on the vocabulary it has learned.
Summary: I am building a chatbot, and there are many use cases. I've trained a model on a Reddit dataset, so I now have a model that can mimic Reddit conversation. I want to map its vocabulary to one of my use cases. How should I tackle this scenario?
Any ideas? Please help; I've been searching the internet for days with no answer.
From TensorFlow's documentation, there seems to be a large array of options for "running", serving, testing, and predicting using a TensorFlow model. I've made a model very similar to MNIST, which outputs a distribution from an image. For a beginner, what would be the easiest way to take one or a few images and send them through the model to get an output prediction? It is mostly for experimentation purposes. Sorry if this is redundant, but all my research has led me to so many different ways of doing this, and the documentation doesn't really weigh the pros and cons of the different methods. Thanks.
I guess you are using placeholders for your model input and then using feed_dict to feed values into your model.
If that's the case, the simplest way is to save the trained model with tf.train.Saver. Then you can have a test script where you restore the model and call sess.run on your output tensor with a feed_dict containing whatever you want your input to be.
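A minimal sketch of that flow in TF 1.x style; the tensor names ("x:0", "logits:0"), the checkpoint path, and the my_images array are assumptions you'd replace with your own:

```python
import numpy as np
import tensorflow as tf  # TF 1.x style, to match the placeholder/feed_dict setup

# --- in the training script, after training ---
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training loop ...
    saver.save(sess, "checkpoints/model.ckpt")

# --- in a separate test script ---
with tf.Session() as sess:
    loader = tf.train.import_meta_graph("checkpoints/model.ckpt.meta")
    loader.restore(sess, "checkpoints/model.ckpt")
    graph = tf.get_default_graph()
    x = graph.get_tensor_by_name("x:0")            # assumes the placeholder is named "x"
    logits = graph.get_tensor_by_name("logits:0")  # assumes the output op is named "logits"

    my_images = np.zeros((2, 28, 28, 1))  # stand-in batch; load your real images here
    preds = sess.run(logits, feed_dict={x: my_images})
    print(preds)
```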