How to correctly interpret NuPIC output vol.2 - nupic

There is a discussion about the correct interpretation of NuPIC output which I would like to extend. First I will provide a short summary and then ask another question.
Consider the following output:
step,original,prediction,anomaly score
175,0,0.0,0.32500000000000001
176,62,52.0,0.65000000000000002
177,402,0.0,1.0
178,0,0.0,0.125
179,402,0.0,1.0
180,0,0.0,0.0
181,3,402.0,0.050000000000000003
182,50,52.0,0.10000000000000001
183,68,13.0,0.90000000000000002
This is the output of one-step-ahead prediction without using the inference shifter. It basically means that the prediction made at step N is for step N+1. In other words, if the prediction is perfectly right, then the prediction value at step N should correspond to the original value at step N+1.
The anomaly score can be viewed as the confidence of the prediction. For example, NuPIC might only be 23% confident in the best prediction it gives, in which case the anomaly score could be very high. This is the case at step 179, where the prediction is 0 and the original value at step 180 is 0. Note that the anomaly score at step 179 is 1.0. It means that NuPIC was not confident in the prediction, even though the prediction was correct.
The opposite situation happens at step 180, where the prediction is 0 and the original value at step 181 is 3. Note that the anomaly score at step 180 is 0. That means that NuPIC was quite confident in the prediction, but it was not correct.
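To make the offset concrete, here is a minimal sketch (plain Python, using the rows from the output above) that aligns each prediction with the original value one step later and checks whether it was correct:

```python
# Align the prediction at step N with the original value at step N+1.
# The rows below are copied from the output shown above.
rows = [
    (175, 0,   0.0,   0.325),
    (176, 62,  52.0,  0.65),
    (177, 402, 0.0,   1.0),
    (178, 0,   0.0,   0.125),
    (179, 402, 0.0,   1.0),
    (180, 0,   0.0,   0.0),
    (181, 3,   402.0, 0.05),
    (182, 50,  52.0,  0.10),
    (183, 68,  13.0,  0.90),
]

for (step, _, prediction, anomaly), (_, next_original, _, _) in zip(rows, rows[1:]):
    correct = (prediction == next_original)
    print("step %d: predicted %.1f, actual next value %d, anomaly %.3f, correct=%s"
          % (step, prediction, next_original, anomaly, correct))
```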
Questions:
Does the anomaly score at a given step also take into account the original value at that step? For example, does the anomaly score at this step
181,3,402.0,0.050000000000000003
take into account that 3 is the original value? Or is it computed without respect to this value?
Is it possible to compute some kind of debug information regarding the prediction and the anomaly score? I mean something like this, from NuPIC's perspective: "I'm 23% sure that the next value will be 10, I'm 27% sure that the next value will be 20, I'm 50% sure that the next value will be 30." (See the sketch after these questions.)
Is it OK to predict data zero steps forward if I'm just interested in the prediction accuracy?
Does NuPIC do some kind of look-back? I mean, if NuPIC was confident at step 180 that the next value will be 0, but it later turns out that this was a mistake, does NuPIC somehow recompute the anomaly score from step 180 for further data processing? Or is this done automatically in HTM?
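Regarding the second question: if I remember the OPF API correctly, the model result exposes the full prediction distribution, not just the best prediction. A rough sketch of what reading it could look like (the exact inference keys depend on your model parameters, so treat the names below as assumptions):

```python
# Hypothetical sketch of reading the full prediction distribution from a
# NuPIC OPF model. The inference keys ("multiStepPredictions",
# "multiStepBestPredictions", "anomalyScore") are only present if the model
# is configured for multi-step / anomaly inference -- adjust to your params.
result = model.run({"value": value})

best = result.inferences["multiStepBestPredictions"][1]  # best 1-step prediction
dist = result.inferences["multiStepPredictions"][1]      # {predictedValue: probability}
anomaly = result.inferences["anomalyScore"]

for predicted_value, probability in sorted(dist.items(), key=lambda kv: -kv[1]):
    print("I'm %.0f%% sure that the next value will be %s"
          % (100 * probability, predicted_value))
print("best prediction: %s, anomaly score: %.3f" % (best, anomaly))
```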

Related

XGBoost - Help interpreting the booster behaviour. Why is the 0th iteration always coming out to be best?

I am training an XGBoost model and having trouble interpreting the model behaviour.
early_stopping_rounds = 10
num_boost_round = 100
Dataset is unbalanced with 458644 1s and 7975373 0s
evaluation metric is AUCPR
param = {'max_depth':6, 'eta':0.03, 'silent':1, 'colsample_bytree': 0.3,'objective':'binary:logistic', 'nthread':6, 'subsample':1, 'eval_metric':['aucpr']}
From my understanding of early_stopping_rounds, training is supposed to stop after no improvement is observed in the test/evaluation dataset's eval metric (aucpr) for 10 consecutive rounds. However, in my case, even when there is a clear improvement in the AUCPR of the evaluation dataset, the training still stops after the 10th boosting stage. Please see the training log below. Additionally, the best iteration comes out to be the 0th one, when clearly the 10th iteration has an AUCPR much higher than the 0th iteration.
Is this right? If not what could be going wrong? If yes then please correct my understanding about early stopping rounds and best iteration.
Very interesting!!
So it turns out that early stopping knows to minimize some metrics (RMSE, log loss, etc.) and to maximize others (MAP, NDCG, AUC) - https://xgboost.readthedocs.io/en/latest/python/python_intro.html
When you use aucpr, it is actually trying to minimize it - perhaps that's the default behaviour for metrics it doesn't recognize.
Try setting maximize=True when calling xgboost.train() - https://github.com/dmlc/xgboost/issues/3712
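A minimal sketch of what that could look like (dtrain/dvalid are assumed to be existing DMatrix objects, and `param` is the dict shown above):

```python
import xgboost as xgb

# maximize=True tells early stopping that a larger aucpr is better,
# so training won't stop while the evaluation metric keeps improving.
bst = xgb.train(
    param,
    dtrain,
    num_boost_round=100,
    evals=[(dtrain, "train"), (dvalid, "eval")],
    early_stopping_rounds=10,
    maximize=True,
)
print("best iteration:", bst.best_iteration, "best score:", bst.best_score)
```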

Initial jump in loss with TensorFlow

Suppose I have a saved model that is nearly at the minimum, but with some room for improvement. For example, the loss (as reported by model.evaluate() on a tf.keras model) might be 11.390, and I know that the model can go down to 11.300.
The problem is that attempts to refine this model (using model.fit()) consistently result in the weights receiving an initial 'jolt' during the first epoch, which sends the loss way upwards. After that, it starts to decrease, but it does not always converge on the correct minimum (and may not even get back to where it started).
It looks like this:
tf.train.RMSPropOptimizer(0.0002):
0 11.982
1 11.864
2 11.836
3 11.822
4 11.809
5 11.791
(...)
15 11.732
tf.train.AdamOptimizer(0.001):
0 14.667
1 11.483
2 11.400
3 11.380
4 11.371
5 11.365
tf.keras.optimizers.SGD(0.00001):
0 12.288
1 11.760
2 11.699
3 11.650
4 11.666
5 11.601
Dataset with 30M observations, batch size 500K in all cases.
I can mitigate this by turning the learning rate way down, but then it takes forever to converge.
Is there any way to prevent training from going "wild" at the beginning, without impacting the long-term convergence rate?
As you tried, decreasing the learning rate is the way to go.
E.g. learning rate = 0.00001
tf.train.AdamOptimizer(0.00001)
Especially with Adam this should be promising, since the learning rate is at the same time an upper bound on the step size.
On top of that you could try learning rate scheduling, where you set the learning rate according to a predefined schedule.
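For instance, a warm-up-style schedule that starts the refinement with a very small learning rate and ramps it up could avoid the initial jolt. A minimal sketch with tf.keras (the callback, the ramp shape, and the data names are just one possible choice, not something from the question):

```python
import tensorflow as tf

# Hypothetical warm-up schedule: start tiny, ramp up to the target learning
# rate over the first 5 epochs, then hold it constant.
def warmup_schedule(epoch, lr):
    target_lr = 1e-3
    warmup_epochs = 5
    if epoch < warmup_epochs:
        return target_lr * (epoch + 1) / warmup_epochs
    return target_lr

# `model` stands in for the restored model from the question;
# x_train / y_train stand in for the 30M-observation dataset.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5), loss="mse")
model.fit(x_train, y_train,
          batch_size=500_000,
          epochs=15,
          callbacks=[tf.keras.callbacks.LearningRateScheduler(warmup_schedule)])
```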
Also, judging from what you show after decreasing the learning rate, the convergence rate does not seem to be too bad.
Another hyperparameter you could tune in your case is the batch size; reducing it decreases the computation cost per update.
Note:
I find the term "not the right minimum" rather misleading. To further understand nonconvex optimization for artificial neural networks, I would like to point to the deep learning book by Ian Goodfellow et al.

Is it advisable to save the final state from training of an RNN to initialize it during testing?

After training an RNN, does it make sense to save the final state so that it becomes the initial state for testing?
I am using:
stacked_lstm = rnn.MultiRNNCell([rnn.BasicLSTMCell(n_hidden,state_is_tuple=True) for _ in range(number_of_layers)], state_is_tuple=True)
The state has a very specific meaning and purpose. This isn't a question of "advisable" or not; there's a right and a wrong answer here, and it depends on your data.
Consider each timestep in your sequence of data. At the first time step your state should be initialized to all zeros. This value has a specific meaning, it tells the network that this is the beginning of your sequence.
At each time step the RNN is computing a new state. The MultiRNNCell implementation in tensorflow is hiding this from you, but internally in that function a new hidden state is computed at each time step and passed forward.
The value of the state at the 2nd time step is the output state from the 1st time step, and so on and so forth.
So the answer to your question is yes, but only if the next batch is continuing in time from the previous batch. Let me explain this with a couple of examples where you do and don't perform this operation, respectively.
Example 1: let's say you are training a character RNN, a common tutorial example where your input is each character in the works of Shakespeare. There are millions of characters in this sequence. You can't train on a sequence that long, so you break your sequence into segments of 100 (if you don't know why, a reasonable rule of thumb is to limit your sequences to roughly 100 time steps). In this example, each training step is a sequence of 100 characters and is a continuation of the last 100 characters. So you must carry the state forward to the next training step.
Example 2: a case where this isn't used would be training an RNN to recognize MNIST handwritten digits. In this case you split your image into 28 rows of 28 pixels, and each training example has only 28 time steps, one per row in the image. Each training iteration then starts at the beginning of the sequence for that image and trains fully until the end of the sequence for that image. You would not carry the hidden state forward in this case; your hidden state must start with zeros to tell the system that this is the beginning of a new image sequence, not the continuation of the last image you trained on.
I hope those two examples illustrate the important difference there. Know that if you have sequence lengths that are very long (say over ~100 timesteps) you need to break them up and think through the process of carrying the state forward appropriately. You can't effectively train on infinitely long sequences. If your sequence lengths are under this rough threshold, then you don't need to worry about this detail and can always initialize your state to zero.
Also know that even though you only train on, say, 100 timesteps at a time, the RNN can still be expected to learn patterns that operate over longer sequences; Karpathy's fabulous paper/blog "The Unreasonable Effectiveness of Recurrent Neural Networks" demonstrates this beautifully. Those character-level RNNs can keep track of important details like whether a quote is open or not over many hundreds of characters, far more than were ever trained on in one batch, specifically because the hidden state was carried forward in the appropriate manner.
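For completeness, here is a minimal sketch of carrying the state across consecutive batches with the MultiRNNCell from the question (TensorFlow 1.x style; the data iterator, shapes and sizes are assumptions, not something from the original post):

```python
import tensorflow as tf
from tensorflow.contrib import rnn

batch_size, seq_len, n_input = 32, 100, 1
n_hidden, number_of_layers = 128, 2

inputs = tf.placeholder(tf.float32, [batch_size, seq_len, n_input])
cell = rnn.MultiRNNCell(
    [rnn.BasicLSTMCell(n_hidden, state_is_tuple=True)
     for _ in range(number_of_layers)],
    state_is_tuple=True)

# zero_state marks "beginning of a sequence"; dynamic_rnn also returns the
# final state so it can be fed back in for the next consecutive batch.
init_state = cell.zero_state(batch_size, tf.float32)
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=init_state)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    state = sess.run(init_state)             # all zeros: start of a sequence
    for batch in consecutive_batches():      # hypothetical data iterator
        # Feeding the previous final state only makes sense if `batch`
        # continues in time from the previous one (Example 1 above).
        out, state = sess.run([outputs, final_state],
                              feed_dict={inputs: batch, init_state: state})
```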

Should my seq2seq RNN idea work?

I want to predict stock price.
Normally, people would feed the input as a sequence of stock prices.
Then they would feed the output as the same sequence but shifted to the left.
When testing, they would feed the output of the prediction back in as the next input timestep.
I have another idea, which is to fix the sequence length, for example 50 timesteps.
The input and output are exactly the same sequence.
When training, I replace the last 3 elements of the input with zeros to let the model know that I have no input for those timesteps.
When testing, I would feed the model a sequence of 50 elements where the last 3 are zeros. The predictions I care about are the last 3 elements of the output.
Would this work or is there a flaw in this idea?
The main flaw of this idea is that it does not add anything to the model's learning, and it reduces its capacity, as you force your model to learn an identity mapping for the first 47 steps (50-3). Note that providing 0 as input is equivalent to not providing input to an RNN: a zero input, after multiplying by a weight matrix, is still zero, so the only sources of information are the bias and the output from the previous timestep - both of which are already there in the original formulation. As for the second add-on, where we have outputs for the first 47 steps: there is nothing to be gained by learning the identity mapping, yet the network will have to "pay the price" for it - it will need to use weights to encode this mapping in order not to be penalised.
So in short: yes, your idea will work, but it is nearly impossible to get better results this way compared to the original approach (you do not provide any new information and do not really modify the learning dynamics, yet you limit capacity by requiring an identity mapping to be learned per step; and since it is an extremely easy thing to learn, gradient descent will discover this relation first, before even trying to "model the future").
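For reference, a minimal sketch of the original formulation mentioned above - input sequence of prices, target being the same sequence shifted one step to the left (plain NumPy, with made-up array names and placeholder data):

```python
import numpy as np

# prices: 1-D array of stock prices, one value per timestep (placeholder data).
prices = np.random.rand(1000).astype(np.float32)

seq_len = 50
inputs, targets = [], []
for start in range(len(prices) - seq_len - 1):
    inputs.append(prices[start:start + seq_len])                 # x_t ... x_{t+49}
    targets.append(prices[start + 1:start + seq_len + 1])        # x_{t+1} ... x_{t+50}

inputs = np.stack(inputs)    # shape (num_windows, 50)
targets = np.stack(targets)  # shape (num_windows, 50): same sequence, shifted left
```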

Using Torch for Time Series prediction using LSTMs

My main problem is how I should pre-process my dataset, which is basically a sequence of 60 per-minute numeric input vectors that result in a single hourly output. Each input vector every minute produces some output, but unfortunately this output can't be observed until an hour has passed.
I thought about putting the 60 inputs together as one big input vector which corresponds to the single hourly output in a normal ML classifier, hence having 1 sample at a time. But I don't think it would be a time series anymore.
How can I represent this so that it is doable in an LSTM setting?
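One common way to frame this is a many-to-one sequence model: each sample is the sequence of 60 minute-vectors and the label is the single hourly output. A rough sketch in PyTorch (used here for illustration since the question mentions Torch; the shapes, sizes, and names are assumptions):

```python
import torch
import torch.nn as nn

n_features = 4   # number of values observed each minute (assumption)
seq_len = 60     # 60 minutes per hourly label

class HourlyLSTM(nn.Module):
    def __init__(self, n_features, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: (batch, 60, n_features) -- one row per minute
        out, _ = self.lstm(x)
        # use only the hidden representation after the last minute
        # to predict the hourly output
        return self.head(out[:, -1, :])

model = HourlyLSTM(n_features)
minutes = torch.randn(8, seq_len, n_features)   # a batch of 8 hours of data
hourly_prediction = model(minutes)              # shape (8, 1)
```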