Can I make CNTK detect overfitting? - cntk

CNTK only stops after maxEpochs is reached, and only then runs the test. Is there a way to make it run the test after each epoch to check whether it is severely overfitting?

How about running until the end and then validating at different epochs? You may refer to this link.

There are two issues here. The first is the technical issue of running a test on the test data after each epoch (I cannot comment on that). But this is not the important issue! The second issue is that you pollute the test dataset if you use it several times. If you use your test data as a stopping criterion, you are fitting your model to the test set! Therefore, only use your test data once, and don't use it for any kind of training; hold out a separate validation set if you need a stopping criterion.
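If you drive training from the CNTK Python API instead of relying solely on maxEpochs in the config, you can evaluate a held-out validation set (not the final test set) after every epoch yourself. A minimal sketch, assuming a toy model and toy numpy data (all names, sizes and hyperparameters here are illustrative, and the loss is reused as the evaluation criterion to keep it short):

import numpy as np
import cntk as C

# Toy two-input binary classifier; stands in for whatever model you actually train.
x = C.input_variable(2)
y = C.input_variable(1)
z = C.layers.Dense(1, activation=C.sigmoid)(x)
loss = C.binary_cross_entropy(z, y)

learner = C.sgd(z.parameters, lr=C.learning_rate_schedule(0.1, C.UnitType.minibatch))
trainer = C.Trainer(z, (loss, loss), [learner])  # the loss doubles as the eval criterion

# Illustrative random data; in practice this comes from your reader / MinibatchSource.
train_x = np.random.rand(200, 2).astype(np.float32)
train_y = np.random.randint(0, 2, (200, 1)).astype(np.float32)
val_x = np.random.rand(50, 2).astype(np.float32)
val_y = np.random.randint(0, 2, (50, 1)).astype(np.float32)

for epoch in range(20):                     # plays the role of maxEpochs
    for i in range(0, len(train_x), 25):    # minibatches of 25 samples
        trainer.train_minibatch({x: train_x[i:i + 25], y: train_y[i:i + 25]})
    val_loss = trainer.test_minibatch({x: val_x, y: val_y})
    print("epoch %d: validation loss %.3f" % (epoch, val_loss))
    # If val_loss keeps rising while the training loss falls, you are overfitting
    # and can stop early, without ever touching the real test set.

The point of the answer above still stands: whatever you monitor per epoch should be a validation split, and the test data should be evaluated exactly once at the end.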

Related

TF2 tensorflow hub retrained model - expanded my training database twofold, accuracy dropped 30%, after cleaning up data dropped 40%

I need assistance as an ML beginner. I have retrained a TF2 model using make_image_classifier through TensorFlow Hub (command-line approach).
My very first training:
I immediately retrained the model once again, as this one's predictions did not satisfy me -> using 4 classes instead, and achieved 85% val. accuracy in Epoch 4/5 and 81% val. accuracy in Epoch 5/5. Because the problem is complex, I decided to expand my database, increasing the amount of data fed to the model.
I have expanded the database more than twofold! The results are shocking and I have no clue what to do. I can't believe this.
If anything - the added data is of much better quality, relevance and diversity, and it was actually obtained by myself - a human "discriminator" of the 2nd model's performance (the 81% acc. one). If I knew how to do it, which is a second question - I would simply add this "feedback data" to my already working model. But I have no idea how to make it happen - ideally, I'd want to give feedback to the bot, allowing it to train once again on the additional data, with it understanding that this is feedback rather than a random additional set of data. However, I'd also like to try training a new model from scratch with the updated data anyway.
How do I interpret those numbers? What can be wrong with the data? Is it the data that's wrong at all, which is my assumption? Why is the training accuracy below 0.5?? What could have caused the drop between Epoch 2/5 and Epoch 3/5 that started the downfall? What are common issues and similar situations when something like this happens?
I'd love to understand what happened behind the curtain at a lower level here, but I'm having difficulty; I need guidance. This is as discouraging as the first & second trainings were encouraging.
Possible problems with the data that I can see - though I can't believe they could cause such a drop, especially because they were already there during the 1st & 2nd trainings, which went okay:
1-5% of images can be in multiple classes (2 classes, or a maximum of 3) - in my opinion this is even encouraged for the problem, as the model imitates me, and I want it to struggle with finding out what it is about certain elements that caused me to classify them as two+ simultaneous classes - finding features in them that made me think they'd satisfy both.
1-3% can be duplicates.
There's a white border around the images (since it didn't seem to affect the first trainings, I didn't bother to remove it).
The drop is 40%.
I will now try to use TensorFlow's tutorial to do it through a script rather than the command line, but seeing such a huge drop is very discouraging. I was hoping to see an increase; after all, the common suggestion is to feed a model more data, and I made sure to feed it quality additional data. I appreciate every single suggestion for fine-tuning the model, as I am a beginner and have no experience with what might work and what most probably won't. Thank you.
EDIT: I have cleaned my data by:
removing borders (now they are tiny, white, and sometimes not present at all)
removing dust
removing artifacts, of which there were quite many in the added data!
Convinced the last point was the issue, I retrained. Results are even worse!
Results (linked): 2 classes binary, 4 classes.
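On the "feedback data" part of the question above: a rough sketch of the script-based route (the TensorFlow Hub tutorial pattern rather than the make_image_classifier command line) could look like the following, where a classifier head is trained on the original images and training then simply continues on the newly collected feedback images. The module URL, directory names, class layout and hyperparameters are all illustrative assumptions, not something from the question:

import tensorflow as tf
import tensorflow_hub as hub

IMAGE_SIZE = (224, 224)

# Illustrative directory layout: one subfolder per class in each directory.
raw_train = tf.keras.preprocessing.image_dataset_from_directory(
    "data/original", image_size=IMAGE_SIZE, batch_size=32, label_mode="categorical")
raw_feedback = tf.keras.preprocessing.image_dataset_from_directory(
    "data/feedback", image_size=IMAGE_SIZE, batch_size=32, label_mode="categorical")
num_classes = len(raw_train.class_names)

# The hub module expects float inputs in [0, 1].
rescale = lambda images, labels: (images / 255.0, labels)
train_ds = raw_train.map(rescale)
feedback_ds = raw_feedback.map(rescale)

model = tf.keras.Sequential([
    hub.KerasLayer(
        "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4",
        input_shape=IMAGE_SIZE + (3,), trainable=False),   # frozen backbone
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

model.fit(train_ds, epochs=5)      # first pass on the original data
model.fit(feedback_ds, epochs=3)   # then keep training on the feedback images
model.save("retrained_model")

Keeping the backbone frozen (trainable=False) retrains only the classification head; unfreezing it would be the fine-tuning variant, which usually needs a much lower learning rate.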

Benchmark vs Solver - same data, different result

Currently, we are implementing timetable planning with OptaPlanner - overall it works great! But we are trying to make some improvements to how our solver works - trying different algorithms, etc. So we used the benchmark with a simple config: a common heuristic phase, and then HILL_CLIMBING, LATE_ACCEPTANCE and TABU_SEARCH, and these are the results:
Benchmark:
HILL: 0hard/-5medium/-5soft
LATE_ACCEPTANCE: 0hard/-5medium/-126soft
TABU: 0hard/-7medium/-4soft
This is where it starts to get tricky - I'm copying the solver configuration and using the same data set, and I get very different results:
Solver with the same dataset:
HILL: 0hard/-11medium/-7soft
LATE_ACCEPTANCE: 0hard/-5medium/-121soft
TABU: 0hard/-11medium/-18soft
So it seems that only LATE_ACCEPTANCE is close to the benchmark - the others are way off - any idea why it behaves like that?
Assuming that both the solver and benchmark use the default, REPRODUCIBLE environment mode, might it be caused by different termination conditions?
Note that even if you use the same time-based termination, it may not be fully reproducible due to context switching. To make sure every run with the same configuration ends up with exactly the same score, you can use a step-based termination.
Please check the INFO-level logging; each phase reports the best score it attained and the number of steps it took.

How do I handle variability of output in Anylogic?

I have been working on a simulation model for battery swapping in Anylogic. So far I have developed the simulation model, optimization experiment and parameters variation experiment.
There are no errors in the model, but the output values are unsatisfactory. Small changes, such as changing the step size of the decision variables, result in a drastic change in the best value obtained after every experiment. Though the objective does not change much, I am concerned about the other variables that change with each run. Even with multiple optimization runs it is difficult to come to a conclusion.
For reference, I am posting the output of a parameter variation experiment here. I ran the experiment with an optimized value, but I was getting feasible results (percentile > 95%) far off from the expected input values. Although the overall trend is correct (decreasing percentile with increasing charging time), it is difficult to understand the variability.
Can anyone help?
When building a model, this is a common problem you will have when looking at high level overall outputs. You could have a model bug, but it is just as likely (if not more likely) that there is some dynamic to your system that was not clear in simple Excel spreadsheets or mental models. The DES may be telling us something truly interesting about the system behavior, but without additional outputs, there is no way to understand what that is.
A few suggestions:
Run this as a simple single scenario, where you manually update inputs. When you run this with the low range of input values and then the high range of input values, what do you see on the animation or additional outputs that is different than you expected or could explain the overall output trend? Try running several intermediate points.
Add additional output metrics. If you look at queue sizes, resource utilizations, turn-around-times, etc; do you see anything at that level that is different than expected?
Add a "replication" log. When you run a set of inputs for multiple scenarios, does any single replication stand out as an outlier? If so, re-run the scenario with that set of inputs and that random seed (see the sketch after this list).
There is no substitute for understanding underlying system behavior, and without understanding those dynamics, looking at overall correlation with optimization or parameter variation experiments will often lead companies to make the wrong policy decisions.
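To make the "replication log" suggestion concrete, here is a minimal sketch of post-processing such a log outside AnyLogic, assuming each replication was exported as one CSV row with scenario, replication, seed and the percentile output (the file layout, column names and 3-sigma threshold are all assumptions):

import csv
import statistics
from collections import defaultdict

# One row per replication, e.g. exported from the parameter variation experiment:
# scenario,replication,seed,percentile
rows = list(csv.DictReader(open("replication_log.csv")))

by_scenario = defaultdict(list)
for r in rows:
    by_scenario[r["scenario"]].append(float(r["percentile"]))

# Flag replications far away from their scenario's mean.
for r in rows:
    values = by_scenario[r["scenario"]]
    mean = statistics.mean(values)
    spread = statistics.pstdev(values)
    if spread > 0 and abs(float(r["percentile"]) - mean) > 3 * spread:
        print("outlier: scenario=%s replication=%s seed=%s value=%s"
              % (r["scenario"], r["replication"], r["seed"], r["percentile"]))

Any replication flagged this way is a candidate for re-running with its seed fixed, as suggested above.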

tensorflow: one of 20 parameter servers is very slow

I am trying to train a DNN model using TensorFlow. My script has two variables: one is a dense feature and one is a sparse feature. Each minibatch pulls the full dense feature and pulls the specified sparse feature using embedding_lookup_sparse, and the feedforward pass can only begin after the sparse feature is ready. I run my script using 20 parameter servers, and increasing the worker count did not scale out. So I profiled my job using the TensorFlow timeline and found that one of the 20 parameter servers is very slow compared to the other 19. There is no dependency between different parts of the trainable variables. I am not sure if there is a bug or some limitation, like TensorFlow only being able to queue 40 fan-out requests. Any idea how to debug it? Thanks in advance.
It sounds like you might have exactly 2 variables, one stored at PS0 and the other at PS1. The other 18 parameter servers are not doing anything. Please take a look at variable partitioning (https://www.tensorflow.org/versions/master/api_docs/python/state_ops/variable_partitioners_for_sharding), i.e. partition a large variable into small chunks and store them on separate parameter servers.
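A rough sketch of what that partitioning could look like with the TF 1.x-era API (the cluster addresses, shapes and variable names are illustrative, and the 20 shards match the 20 ps tasks only for the sake of the example):

import tensorflow as tf

# Illustrative cluster; real host:port addresses come from your job spec.
cluster = tf.train.ClusterSpec({
    "ps": ["ps%d:2222" % i for i in range(20)],
    "worker": ["worker0:2222"],
})

with tf.device(tf.train.replica_device_setter(cluster=cluster)):
    # Split one huge embedding table into 20 shards, one per ps task,
    # instead of pinning the whole variable to a single parameter server.
    with tf.variable_scope("sparse",
                           partitioner=tf.fixed_size_partitioner(num_shards=20)):
        embeddings = tf.get_variable(
            "embeddings", shape=[50000000, 64],
            initializer=tf.truncated_normal_initializer(stddev=0.01))

    sp_ids = tf.sparse_placeholder(tf.int64)
    # embedding_lookup_sparse accepts the partitioned variable and fans the
    # lookups out across the shards (and therefore across the ps tasks).
    features = tf.nn.embedding_lookup_sparse(embeddings, sp_ids, None,
                                             combiner="sum")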
This is kind of a hacky way to log Send/Recv timings from the Timeline object for each iteration, but it works pretty well for analyzing the dumped JSON data (compared to visualizing it in chrome://trace).
The steps you have to perform are:
download TensorFlow source and checkout a correct branch (r0.12 for example)
modify the only place that calls the SetTimelineLabel method inside executor.cc
instead of only recording non-transferable nodes, you want to record Send/Recv nodes also.
be careful to call SetTimelineLabel only once inside NodeDone, as it sets the text string of a node, which will be parsed later by a Python script
build TensorFlow from the modified source
modify the model code (for example, inception_distributed_train.py) to use Timeline and graph metadata correctly (see the sketch after these steps)
Then you can run the training and retrieve a JSON file for each iteration! :)
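A minimal sketch of the Timeline/metadata usage mentioned in the last step, plus reading the dumped chrome-trace JSON directly instead of opening it in chrome://tracing (the tiny graph and file name are illustrative; Send/Recv entries only show up once the executor.cc change above is in place):

import json
import tensorflow as tf
from tensorflow.python.client import timeline

# Tiny stand-in graph; in practice this is one training step of your model.
x = tf.constant(3.0)
y = x * x

with tf.Session() as sess:
    run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
    run_metadata = tf.RunMetadata()
    sess.run(y, options=run_options, run_metadata=run_metadata)

    trace = timeline.Timeline(run_metadata.step_stats)
    with open("timeline_step.json", "w") as f:
        f.write(trace.generate_chrome_trace_format())

# Read op timings per device straight from the JSON dump.
events = json.load(open("timeline_step.json"))["traceEvents"]
for e in events:
    if e.get("ph") == "X":  # "complete" events carry name, duration (us) and pid
        print(e.get("pid"), e.get("name"), e.get("dur"))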
Some suggestions that were too big for a comment:
You can't see data transfer in the timeline because tracing of Send/Recv is currently turned off; some discussion here -- https://github.com/tensorflow/tensorflow/issues/4809
In the latest version (a nightly that is 5 days old or newer) you can turn on verbose logging by doing export TF_CPP_MIN_VLOG_LEVEL=1, which shows second-level timestamps (see here about higher granularity).
So with vlog perhaps you can use messages generated by this line to see the times at which Send ops are generated.
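If you launch training from a Python script, the same verbose logging can be switched on from the script itself, as long as the environment variable is set before TensorFlow is imported (a small sketch; only the variable name comes from the answer above):

import os
os.environ["TF_CPP_MIN_VLOG_LEVEL"] = "1"  # must be set before importing tensorflow

import tensorflow as tf  # the C++ runtime picks up the level at import time
print(tf.__version__)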

Are regression tests the entire test suite or a sample of tests?

I was taught that a regression test was a small (only enough to prove you didn't break anything with the introduction of a change or new modules) sample of the overall tests. However, this article by Ron Morrison and Grady Booch makes me think differently:
The desired strategy would be to bring each unit in one at a time, perform an extensive regression test, correct any defects and then proceed to the next unit.
The same document also says:
As soon as a small number of units are added, a test version is generated and "smoke tested," wherein a small number of tests are run to gain confidence that the integrated product will function as expected. The intent is neither to thoroughly test the new unit(s) nor to completely regression test the overall system.
When describing smoke testing, the authors say this:
It is also important that the Smoke Test perform a quick check of the entire system, not just the new component(s).
I've never seen "extensive" and "regression test" used together nor a regression test described as "completely regression test the overall system". Regression tests are supposed to be as light and quick as possible. And the definition of smoke test is what I learned a regression test was.
Did I misunderstand what I was taught? Was I taught incorrectly? Or are there multiple interpretations of "regression test"?
There are multiple interpretations. If you're only fixing a bug that affects one small part of your system then regression tests might only include a small suite of tests that exercise the class or package in question. If you're fixing a bug or adding a feature that has wider scope then your regression tests should have wider scope as well.
The "if it could possibly break, test it" rule of thumb applies here. If a change in Foo could affect Bar, then run the regressions for both.
Regression tests just check to see if a change caused a previously passed test to fail. They can be run at any level (unit, integration, system). Reference.
I always took regression testing to mean any tests whose purpose was to ensure that existing functionality is not broken by new changes. That would not imply any constraint on the size of the test suite.
Regression is generally used to refer to the whole suite of tests. It is the last thing QA does before a release. It is used to show that everything that used to work still works, to the extent that that is possible to show. In my experience, it is generally a system-wide set of tests regardless of how small the change was (although small changes may not trigger a regression test).
Where I work, regression tests are standardized for each application at the end of each release. They are intended to test all functionality, but they are not designed to catch subtle bugs. So if you have a form that has various kinds of validation done on it, for example, a regression suite for that form would be to confirm that each type of validation gets done (field level and form level) and that correct information can be submitted. It is not designed to cover every single case (i.e. what if I leave field A blank? How about field B? it will just test one of them and assume the others work).
However, on the current project I'm working on, the regression tests are much more thorough, and we have noticed a reduction in the number of defects being raised during testing. Those two are not necessarily related, but we do notice it fairly consistently.
my understanding of the term 'regression testing' is:
unit tests are written to test features when the system is created
when bugs are discovered, more unit tests are written to reproduce the bug and verify that it has been corrected
a regression test runs the entire set of tests to prove that everything still works, including that no old bugs have reappeared [i.e. to prove that the code has not "regressed"]
in practice, it is best to always run all existing unit tests when changes are made. the only time i'd bother with a subset of tests is when the full unit test suite takes "too long" to run [where "too long" is fairly subjective]
Start with what you are trying to accomplish. Then do what you need to do to accomplish that goal. And then use buzzword bingo to assign a word to what you actually do. Just like everyone else :-) Accuracy isn't all that important.
... regression test was a small (only enough to prove you didn't break anything with the introduction of a change or new modules) sample of the overall tests
If a small sample of tests is enough to prove that the system works, why do the rest of the tests even exist? And if you think you know that your change only affected a subset of functionality, then why do you need to test anything after making the change? Humans are fallible, nobody really knows if changing something breaks something else. IMO, if your tests are automated, re-run them all. And if they aren't automated, automate them. In the mean time, re-run whatever is automated.
In general, a subset of the feature tests for the new feature introduced in version X of a product becomes the basis of the regression tests for version X+1, X+2, and so on. Over time, you may reduce the time taken by the feature/regression tests of stable features which have not suffered from regressions. If a feature suffers from lots of regressions, then it may be beneficial to increase the emphasis on the feature.
I think that the article referring to 'extensive regression test' means run an extensive set of (individually simple) regression tests.