I just have a question: I want the system to return -1 as an "unknown" character for new, untrained letters. For example, if I have trained on 1/2/3/4, then when I test with the character '5' or '6', TensorFlow should return -1 for an unknown character.
Is it possible?
Thanks.
I'd think for simple classifications, you're looking for anything that has less than a certain confidence/score of being a known class.
To be fair, I've only used Keras on top of TensorFlow, so YMMV.
I'd just train it on the 4 categories you know; then, when it classifies, if the top result has less than a certain raw score/weight (say it classifies an unknown 7 as a 4, but with a mediocre score), treat it as a -1.
This might not work with every loss/objective function you train your model on, but it should work with MSE or categorical cross entropy if you can get the raw final weight.
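Here is a minimal sketch of that thresholding idea in Keras, assuming model is an already-trained classifier with a softmax output over the 4 known classes; the 0.6 cutoff is an arbitrary placeholder you would tune on a validation set:

import numpy as np

CONFIDENCE_THRESHOLD = 0.6  # arbitrary placeholder; tune on held-out data

def classify_or_reject(model, x):
    probs = model.predict(x)              # softmax scores, shape (batch, num_classes)
    top_class = np.argmax(probs, axis=1)  # most likely known class
    top_score = np.max(probs, axis=1)     # its confidence
    # Return -1 ("unknown") wherever the best score falls below the cutoff
    return np.where(top_score >= CONFIDENCE_THRESHOLD, top_class, -1)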
So I have been looking at XGBoost as a place to start with this; however, I am not sure of the best way to accomplish what I want.
My data is set up something like this, where every value, whether it be input or output, is numerical. The issue I'm facing is that I only have 3 input data points per several output data points.
I have seen that XGBoost has a multi-output regression method; however, I have only really seen it used to predict around 2 outputs per input, whereas my data may have upwards of 50 output points needing to be predicted from only a handful of scalar input features.
I'd appreciate any ideas you may have.
For reference, I've been looking mainly at these two demos (they are the same idea; one uses scikit-learn and the other XGBoost):
https://machinelearningmastery.com/multi-output-regression-models-with-python/
https://xgboost.readthedocs.io/en/stable/python/examples/multioutput_regression.html
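One straightforward option, sketched below, is to wrap XGBRegressor in scikit-learn's MultiOutputRegressor, which fits one XGBoost model per output column; the random data and shapes here are stand-ins for your real dataset:

import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from xgboost import XGBRegressor

# Toy data standing in for the real dataset: 3 scalar input features,
# 50 numeric targets per row (these shapes are assumptions).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = rng.normal(size=(200, 50))

# Fits one independent XGBoost model per output column.
model = MultiOutputRegressor(XGBRegressor(n_estimators=100))
model.fit(X, y)
preds = model.predict(X)  # shape: (200, 50)

Newer XGBoost releases (2.0+) can also take a 2-D y directly, e.g. XGBRegressor(tree_method="hist", multi_strategy="multi_output_tree"), which builds trees predicting all targets jointly rather than fitting one model per target.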
In order to prevent division by zero in TensorFlow, I want to add a tiny number to my divisor. A quick search did not yield any results. In particular, I am interested in using scientific notation, e.g.
a = b/(c+1e-05)
How can this be achieved?
Assuming a, b and c are tensors, the formula you have written will work as expected: 1e-5 will be broadcast and added to the tensor c. TensorFlow automatically converts 1e-5 to tf.constant(1e-5).
TensorFlow does, however, have some limitations with non-scalar broadcasts. Take a look at my other answer.
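As a quick sketch, both the epsilon trick from the question and TensorFlow's built-in tf.math.divide_no_nan (which returns 0 wherever the denominator is exactly 0) behave as follows:

import tensorflow as tf

b = tf.constant([1.0, 2.0, 3.0])
c = tf.constant([0.0, 4.0, 5.0])

# The Python float 1e-5 is converted to a constant tensor and
# broadcast across c before the division.
a = b / (c + 1e-5)

# Built-in alternative: yields 0 wherever the denominator is
# exactly 0, with no epsilon needed.
a_safe = tf.math.divide_no_nan(b, c)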
I currently want to use TensorFlow's Object Detection API for my custom problem.
I have already created the dataset, but it's pretty unbalanced.
The dataset has 3 classes, and my main problem is that one class has about 16k samples while another has only about 2.5k samples.
So I think I have to balance the dataset. Someone told me there is something called sample/class weights (not sure if this is 100% correct), which weights the samples during training so that the biggest class has a smaller impact on training than the smallest class.
I'm not able to find this method for balancing. Can someone please give me a hint where to start?
You can do normal cross entropy, giving you a ? x 1 tensor X of losses.
If you want class number N to count T times more, you can do:
X = X * tf.reduce_sum(tf.multiply(one_hot_label, class_weight), axis = 1)
tf.multiply scales the label by whatever weight you want, and tf.reduce_sum converts the label vector to a scalar, so you end up with a ? x 1 tensor filled with the class weightings. Then you simply multiply the tensor of losses by the tensor of weightings to achieve the desired result.
Since one class is 6.4 times more common than the other, I would apply the weightings 1 and 6.4 to the more common and less common class respectively. This means that every time the less common class occurs, it has 6.4 times the effect of the more common class, so it's as if the model saw the same number of samples from each.
You might want to modify this so that the weightings add up to the number of classes, which matches the default case where all of the weightings are 1. With 2 classes that gives 2/7.4 and 12.8/7.4.
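A minimal sketch of this weighting in TensorFlow, where the logits, labels and batch size are placeholders (2 classes: common class 0 and rare class 1):

import tensorflow as tf

# Placeholder batch: 4 samples, 2 classes.
logits = tf.random.normal((4, 2))
one_hot_label = tf.one_hot([0, 1, 0, 0], depth=2)

# Weightings normalised to sum to the number of classes (2):
# 2/7.4 for the common class, 12.8/7.4 for the rare one.
class_weight = tf.constant([2.0 / 7.4, 12.8 / 7.4])

# Per-sample cross entropy: one loss value per sample.
X = tf.nn.softmax_cross_entropy_with_logits(labels=one_hot_label, logits=logits)

# Pick out each sample's class weight and scale its loss by it.
X = X * tf.reduce_sum(tf.multiply(one_hot_label, class_weight), axis=1)
loss = tf.reduce_mean(X)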
I want to predict stock price.
Normally, people would feed the input as a sequence of stock prices.
Then they would feed the output as the same sequence but shifted to the left.
When testing, they would feed the output of the prediction back in as the input for the next timestep.
I have another idea, which is to fix the sequence length, for example 50 timesteps.
The input and output are exactly the same sequence.
When training, I replace the last 3 elements of the input with zeros to let the model know that I have no input for those timesteps.
When testing, I would feed the model a sequence of 50 elements where the last 3 are zeros. The predictions I care about are the last 3 elements of the output.
Would this work or is there a flaw in this idea?
The main flaw of this idea is that it does not add anything to the model's learning, and it reduces the model's capacity, because you force it to learn an identity mapping for the first 47 steps (50 - 3). Note that providing 0 as input is equivalent to not providing input to an RNN: a zero input, after being multiplied by the weight matrix, is still zero, so the only sources of information are the bias and the output from the previous timestep, both of which are already there in the original formulation. As for the second addition, where we have outputs for the first 47 steps: there is nothing to be gained by learning the identity mapping, yet the network will have to "pay the price" for it, since it will need to use weights to encode this mapping in order not to be penalised.
So, in short: yes, your idea will work, but it is nearly impossible to get better results this way compared to the original approach. You do not provide any new information and you do not really modify the learning dynamics, yet you limit capacity by requiring an identity mapping to be learned per step; and since the identity mapping is an extremely easy thing to learn, gradient descent will discover this relation first, before even trying to "model the future".
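For contrast, here is a minimal sketch of the original shifted-target formulation, where the toy price series, window length and model are all placeholders:

import numpy as np
import tensorflow as tf

# Toy price series; the windowing parameters are arbitrary placeholders.
prices = np.sin(np.linspace(0, 20, 500)).astype("float32")
window = 50

# Standard formulation: the input is a window, the target is the same
# window shifted one step to the left (predict the next price at every step).
X = np.stack([prices[i:i + window] for i in range(len(prices) - window - 1)])
y = np.stack([prices[i + 1:i + window + 1] for i in range(len(prices) - window - 1)])
X, y = X[..., None], y[..., None]  # add a feature dimension

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, return_sequences=True, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),  # one predicted price per timestep
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, verbose=0)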
I have a game with only 10x2 pixels as input, and after one hour of training it learns to play by itself. Now I want to use a single float output from the model instead of three classifier outputs. The three classifier outputs were stop, 1-step-right, and 1-step-left. Now I want to produce one output value that tells me, e.g., -4 => 4 steps left, +2 => 2 steps right, and so on.
But after training for 1-2 hours, it only produces numbers around 0.001, when it should produce numbers between -10.0 and +10.0.
Do I need to do it in a completely different way, or can I use a classifier model to output a real value without changing much code?
Thanks for any help.
game code link
Training a classifier is much simpler than coming up with a good loss function that will give you scalar values that make sense. Much (!) simpler.
Make it a classifier with 21 classes (0 = 10 left, 1 = 9 left, 2 = 8 left, ..., 10 = stay, 11 = 1 right, ..., 20 = 10 right).
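A minimal sketch of that 21-class head, where the input shape matches the 10x2-pixel game and everything else (layer sizes, optimiser) is a placeholder:

import numpy as np
import tensorflow as tf

NUM_CLASSES = 21  # class k means (k - 10) steps: negative = left, positive = right

# Placeholder network; the input matches the 10x2-pixel game screen.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(10, 2)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

def steps_to_class(steps):
    # Map a step count in [-10, 10] to a class index in [0, 20].
    return steps + 10

def class_to_steps(class_index):
    # Map a predicted class index back to a signed step count.
    return class_index - 10

# Example: decode the prediction for one (all-zero) game frame.
frame = np.zeros((1, 10, 2), dtype="float32")
steps = class_to_steps(int(np.argmax(model.predict(frame))))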