How to store a trained machine learning model in Redis?

I have a model trained on IMDB reviews. I have to classify 10 reviews, 3 times, which means 30 reviews in total. It takes roughly 3 minutes to classify 10 reviews, so it will take around 10 minutes to classify all of them. In that time the user will just go away, so I read somewhere that I should store the model in Redis. I have researched a bit but can't find anything. Can anyone please help me store a trained model in Redis?
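A minimal sketch of the usual approach, assuming a scikit-learn-style model and the redis-py client (the key name and connection settings below are illustrative, not from the question): serialize the trained model once, cache the bytes in Redis, and have the serving process load it from the cache instead of re-loading or re-training it.

```python
# Minimal sketch (assumes a picklable, scikit-learn-style model and the
# redis-py client; the Redis key and host settings are illustrative).
import pickle
import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Serialize the already-trained model once and cache the bytes in Redis.
def save_model(model, key="imdb_sentiment_model"):
    r.set(key, pickle.dumps(model))

# Later (e.g., in the web process handling the user's request),
# load the cached bytes instead of re-training or re-loading from disk.
def load_model(key="imdb_sentiment_model"):
    payload = r.get(key)
    if payload is None:
        raise KeyError(f"No model cached under Redis key {key!r}")
    return pickle.loads(payload)
```

Note that caching only removes the model-loading cost; if the 3 minutes are spent in the classification itself, you would still need to speed up or batch the predictions.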

Related

Training data in sentiment analysis

I'm doing sentiment analysis of tweets related to the recent acquisition of Twitter by Elon Musk. I have a corpus of 10,000 tweets and I'd like to apply machine learning methods using models like SVM and Linear Regression. My question is: when I want to train the models, do I have to manually tag a big portion of those 10,000 collected tweets with either a positive or a negative class to train the model correctly, or can I use some other, already-tagged dataset of tweets unrelated to this topic to train the model for sentiment analysis? Thank you for your answers!
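For what it's worth, here is a hedged sketch of the second option mentioned in the question (training on an already-tagged, off-topic sentiment dataset and then predicting on the collected tweets), assuming scikit-learn; the file and column names are hypothetical.

```python
# Sketch only: train a sentiment classifier on an existing labeled dataset
# (e.g., a generic tweet-sentiment corpus) and apply it to the new, untagged
# tweets. File names and column names are hypothetical.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

labeled = pd.read_csv("generic_sentiment_tweets.csv")       # columns: text, label
my_tweets = pd.read_csv("twitter_acquisition_tweets.csv")   # column: text

clf = make_pipeline(TfidfVectorizer(min_df=2), LinearSVC())
clf.fit(labeled["text"], labeled["label"])

my_tweets["predicted_sentiment"] = clf.predict(my_tweets["text"])
```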

Recurrent Neural Models with Multiple Individuals/Ids

I'm a fairly new beginner in the world of machine learning and am trying to make some predictions about fantasy sports performance in baseball on any given night. Given the sequential nature of the data, recurrent neural networks seemed like a good starting point.
I understand the basic principles of RNNs, but what isn't clear to me is how to incorporate multiple time series from different individuals into a single model. For instance, we have performance for 2,000 players across each of their careers, and hence have 2,000 distinct time series. In order to make use of an RNN, would I have to build a separate model for each player, or is it possible/better to pass a player's ID into the model as a feature?
If the latter is possible, I'm still unsure how this would work mechanically, because players have time series of different lengths, and we would have many observations for any particular point in time.
Some references/examples/advice would be very helpful.
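A hedged Keras sketch of the single-model option (the player's ID fed through an embedding alongside the padded per-player time series); all sizes and names below are illustrative assumptions, not something from the question.

```python
# Sketch only: one RNN shared across all 2,000 players, with the player ID fed
# in as an extra (embedded) input. All sizes below are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers

NUM_PLAYERS = 2000
NUM_FEATURES = 12   # per-game stats per timestep (hypothetical)
MAX_GAMES = 500     # zero-pad/truncate every player's history to this length

seq_in = layers.Input(shape=(MAX_GAMES, NUM_FEATURES), name="game_history")
pid_in = layers.Input(shape=(), dtype="int32", name="player_id")

x = layers.Masking(mask_value=0.0)(seq_in)   # skip the zero-padded timesteps
x = layers.LSTM(64)(x)                       # summary of the player's history

pid_vec = layers.Embedding(NUM_PLAYERS, 16)(pid_in)   # learned vector per player
x = layers.Concatenate()([x, pid_vec])

out = layers.Dense(1, name="predicted_fantasy_points")(x)
model = tf.keras.Model([seq_in, pid_in], out)
model.compile(optimizer="adam", loss="mse")
```

Each player's history would be zero-padded to a common length before being fed in (for example with tf.keras.preprocessing.sequence.pad_sequences), so different series lengths are handled by the mask rather than by separate models.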

tensorflow: recommendation system that predicts for new users without retraining the model

I have read many recommendation system implementations in TensorFlow, for example https://keras.io/examples/structured_data/collaborative_filtering_movielens/, but most of them can only predict ratings for known users, where a known user means a user who was included when training the model.
So, for example, if the model was trained with 5 users, it can only predict ratings for those 5 users.
For a new user with some rating data, the model cannot make predictions without retraining the whole model.
Is it possible to make predictions for new users with some rating data without retraining the model?
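One common workaround, sketched here under the assumption of a plain matrix-factorization model (it is not taken from the linked Keras example), is to "fold in" a new user: keep the trained item embeddings fixed and solve a small least-squares problem for the new user's embedding from their few ratings. Variable names below are illustrative.

```python
# Sketch of "folding in" a new user against fixed, already-trained item embeddings.
import numpy as np

# item_embeddings: (num_items, k) matrix taken from the trained model
# rated_items:     indices of the items the new user has rated
# ratings:         the new user's ratings for those items
def fold_in_new_user(item_embeddings, rated_items, ratings, l2=0.1):
    V = item_embeddings[rated_items]                  # (n_rated, k)
    k = V.shape[1]
    # Ridge-regularised least squares: u = argmin ||V u - r||^2 + l2 ||u||^2
    return np.linalg.solve(V.T @ V + l2 * np.eye(k), V.T @ np.asarray(ratings))

def predict_all_items(user_embedding, item_embeddings):
    return item_embeddings @ user_embedding           # predicted score for every item
```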

Training different objects using the TensorFlow Object Detection API

I recently came across this link for learning tensorflow object detection
https://www.youtube.com/watch?v=Rgpfk6eYxJA&t=993s
However, I have a few doubts and want suggestions on how to proceed.
1) How should I train different objects using the same model? (I mean, what should my dataset contain if I want to train on cats and dogs as objects?)
2) Once I have trained it on dogs and then continue training on cars, will the model still detect dogs?
Your dataset should contain a large variety of examples for every object (class) you wish to detect. It sounds like you're misunderstanding the training process by assuming that you train on each class of objects in sequence; this is incorrect. When you train the model, you take a random batch of samples (maybe 64, for example) drawn across all classes.
Training simultaneously on all or many of the classes makes sense because you have one model that has to perform equally well on all classes. So when you train the model, you compute the error of the parameters with respect to a random selection of classes and average the error to come up with each update step, yielding a model that performs well across classes.
Notice that it's quite common to run into class imbalance issues. If you have only a few samples of cats and millions of samples of dogs, you will disproportionately penalize the network for misclassifying dogs as cats, and the network will simply always predict dog to hedge its bets. Ideally, you will have a roughly equal balance of data per class; if not, there are books and tutorials galore on strategies for dealing with this.
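As one illustration of those strategies, here is a hedged sketch of per-class loss weighting with scikit-learn and Keras; the label counts are made up to mimic the cats-vs-dogs imbalance described above.

```python
# Sketch: weight the loss so that rare classes (cats) count as much as common
# ones (dogs). The label array and counts here are illustrative only.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

y_train = np.array([0] * 1_000_000 + [1] * 500)   # 0 = dog, 1 = cat (toy imbalance)

weights = compute_class_weight(class_weight="balanced",
                               classes=np.unique(y_train),
                               y=y_train)
class_weight = dict(enumerate(weights))            # {0: ~0.5, 1: ~1000.5}

# Passed to Keras so each cat example contributes ~2000x more to the loss:
# model.fit(x_train, y_train, class_weight=class_weight, epochs=10)
```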

How to make a model of 10,000 unique items using TensorFlow? Will it scale?

I have a use case with around 100 images each of 10,000 unique items. At test time I have 10 items with me, all from the 10,000-item set, and I know which 10 items they are, but only at the time of testing on live data. I then have to match the 10 items with their names. What would be an efficient way to recognise these items? I have full control of the training environment background and the testing environment background. If I make one model for all 10,000 items, will it scale? Or should I make 10,000 different models and run the 10 items through the 10 corresponding models I have pretrained?
Your question is about something called "one-vs-all classification"; you can do a Google search for that, and the first hit is a video lecture by Andrew Ng that's almost certainly worth watching.
The question has been studied at length and in a plethora of contexts. The answer very much depends on what model you use. But I'll assume that, if you're doing image classification, you are using convolutional neural networks, because, after all, they're the state of the art for most such image classification tasks.
In the context of convolutional networks, there is something called "multi-task learning" that you should read up on. Boiled down to a single sentence, the concept is that the more you ask the network to learn, the better it becomes at each individual task. So, in this case, you're almost certain to perform better training one model on 10,000 classes than training 10,000 models, each performing a one-vs-all classification scheme.
Take, for example, the 1,000-class ImageNet dataset and CIFAR-10's 10-class dataset. It has been demonstrated in numerous papers that first training against ImageNet's 1,000-class dataset, and then simply replacing the last layer with a 10-class output and re-training on CIFAR-10, will produce a better result than just training on CIFAR-10 alone. There are admittedly multiple reasons for this result; ImageNet is, for one, a larger dataset. But the richness of class labels in the ImageNet dataset (multi-task learning) is certainly among them.
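A hedged Keras sketch of that recipe (ImageNet-pretrained backbone, last layer replaced with a 10-class CIFAR-10 head, then re-trained); the choice of ResNet50 and all hyperparameters are illustrative assumptions, not tuned values.

```python
# Sketch: load an ImageNet-pretrained backbone, drop its 1,000-class head,
# attach a fresh 10-class head for CIFAR-10, and re-train.
import tensorflow as tf
from tensorflow.keras import layers

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x_train = tf.keras.applications.resnet50.preprocess_input(x_train.astype("float32"))
x_test = tf.keras.applications.resnet50.preprocess_input(x_test.astype("float32"))

backbone = tf.keras.applications.ResNet50(
    include_top=False,        # throw away the original 1,000-class output layer
    weights="imagenet",       # keep the features learned on ImageNet
    input_shape=(32, 32, 3),
    pooling="avg",
)
backbone.trainable = False    # optionally freeze the pretrained features at first

model = tf.keras.Sequential([
    backbone,
    layers.Dense(10, activation="softmax"),   # the new CIFAR-10 head
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=5)
```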
So that was a long-winded way of saying: use one model with 10,000 classes.
An aside:
If you want to get really, really interesting and jump into the realm of research-level thinking, you might consider a one-hot vector over 10,000 classes rather sparse and start thinking about whether you could reduce the dimensionality of your output layer using an embedding. An embedding would be a dense vector, let's say of size 100 as a good starting point. Now class labels turn into clusters of points in your 100-dimensional space. I bet your network will perform even better under these conditions.
If this little aside didn't make sense, it's completely safe to ignore it; your 10,000-class output is fine. But if it did pique your interest, look up information on Word2Vec, and read this really nice post on how face recognition is achieved using embeddings: https://medium.com/@ageitgey/machine-learning-is-fun-part-4-modern-face-recognition-with-deep-learning-c3cffc121d78. You might also consider using an autoencoder to generate an embedding for the images (though I favor triplet embeddings, as typically used in face recognition, myself).
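For concreteness, here is a hedged sketch of one way the embedding idea could look in Keras: the network emits a 100-dimensional embedding per image, and classification is done by cosine similarity against a learned matrix of class embeddings. The backbone, layer sizes, and scaling factor are all assumptions for illustration, not the only (or necessarily the best) way to do this.

```python
# Sketch: instead of a 10,000-way one-hot output, the network emits a dense
# 100-d embedding and classification is done by cosine similarity against a
# learned (10,000 x 100) matrix of class embeddings. Sizes are illustrative.
import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 10_000
EMBED_DIM = 100

class CosineClassifier(layers.Layer):
    """Scores an image embedding against a learned embedding per class."""
    def __init__(self, num_classes, scale=30.0, **kwargs):
        super().__init__(**kwargs)
        self.num_classes = num_classes
        self.scale = scale

    def build(self, input_shape):
        self.class_embeddings = self.add_weight(
            name="class_embeddings",
            shape=(self.num_classes, int(input_shape[-1])),
            initializer="glorot_uniform",
            trainable=True,
        )

    def call(self, x):
        x = tf.math.l2_normalize(x, axis=-1)
        c = tf.math.l2_normalize(self.class_embeddings, axis=-1)
        return self.scale * tf.matmul(x, c, transpose_b=True)   # cosine logits

inputs = layers.Input(shape=(224, 224, 3))
x = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg")(inputs)
emb = layers.Dense(EMBED_DIM, name="image_embedding")(x)
logits = CosineClassifier(NUM_CLASSES)(emb)

model = tf.keras.Model(inputs, logits)
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=["accuracy"])
```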