Disable eval using tf.learn.Experiment - tensorflow

What's the best way to disable evaluation if I'm using tf.learn.Experiment?
I'm running this model which constructs an experiment.
tf.contrib.learn.Experiment(
    estimator=estimator,
    train_input_fn=train_input_fn,
    train_steps=FLAGS.num_train_steps,
    train_monitors=[export_monitor],
    eval_input_fn=eval_input_fn,
    eval_steps=FLAGS.num_eval_steps,
    eval_metrics=_create_evaluation_metrics(),
    min_eval_frequency=100)
To debug an issue with evaluation, I'd like to prevent evaluation from running. Is there an easy way to do this?

The answer is going to depend on which method you invoke on Experiment. Presumably you are going to call train_and_evaluate, e.g., if TF_CONFIG's task type is set to "master" (cf. this code).
In that case, you'll want to set min_eval_frequency to 0 or None (cf. this code).
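For illustration, a minimal sketch based on the constructor call in the question (all names come from the question's code; how much evaluation is skipped also depends on which Experiment method you invoke):
# Sketch: same Experiment as above, but with min_eval_frequency made falsy so
# train_and_evaluate() does not attach the periodic evaluation monitor,
# per the answer above.
experiment = tf.contrib.learn.Experiment(
    estimator=estimator,
    train_input_fn=train_input_fn,
    train_steps=FLAGS.num_train_steps,
    train_monitors=[export_monitor],
    eval_input_fn=eval_input_fn,
    eval_steps=FLAGS.num_eval_steps,
    eval_metrics=_create_evaluation_metrics(),
    min_eval_frequency=0)  # or None

# If you only want to debug training, calling experiment.train() instead of
# experiment.train_and_evaluate() avoids evaluation altogether.
experiment.train()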

Related

Reactor - Stop source when first empty

I have a requirement like this.
Flux<Integer> s1 = .....;
s1.flatMap(value -> anotherSource.find(value));
I need a way to stop s1 when anotherSource.find returns its first empty result. How do I do that?
Note:
One possible solution is to throw an error and then catch it to stop the stream:
anotherSource.find(value).switchIfEmpty(Mono.error(..))
I am looking for a better solution than this.
You won't find a specific operator for this; you'll have to combine operators to achieve it. (Note that doesn't make it a "hack" per se: reactive frameworks are generally intended to be used by combining basic operators to achieve your use case.)
I would agree that using an error to achieve this is far from ideal, though, as it potentially disrupts the flow of real errors in the reactive chain, so that should really be a last resort.
The approach I've generally taken in cases where I want the stream to stop based on an inner publisher is to materialise the inner stream, filter out the onComplete() signals and then re-add an onComplete() wherever appropriate (in this case, if the inner publisher is empty). You can then dematerialise the outer stream and it will respond to the completed signal wherever you've injected it, stopping the stream:
s1.flatMap(
        value ->
            anotherSource
                .find(value)
                .materialize()
                .filter(s -> !s.isOnComplete())
                .defaultIfEmpty(Signal.complete()))
    .dematerialize()
This has the advantage of preserving any error signals, while also not requiring another object or special value.

G_LLL_XD function in NTL library faulty

I am trying to use the G_LLL_XD function on the NTL library. Whenever I use the function in this format:
G_LLL_XD(B, delta);
the program works.
However, when I want to change the default deep or prune variables and write the function in one of these ways:
G_LLL_XD(B, delta, deep, check, verbose);
G_LLL_XD(B, delta, prune, check, verbose);
during runtime, I get this error:
R610
- abort() has been called
and in the command prompt it says:
"sorry...deep insertions not implemented"
I find this very weird: when I use prune as a variable I get this crash even though the function shouldn't be looking for deep insertions at all, and when I do use deep as a variable, actually wanting deep insertions, I still get an error.
Can anybody help me understand what the problem is or how I can fix this? Thank you very much.
I couldn't find a prune argument for the LLL functions in NTL, but there is one for BKZ. Since both deep and prune accept positive integers, it is probably just a naming confusion: the value you pass as prune ends up in the deep parameter.
From the documentation:
NOTE: use of "deep" is obsolete, and has been "deprecated". It is
recommended to use BKZ_FP to achieve higher-quality reductions.
Moreover, the Givens versions do not support "deep", and setting
deep != 0 will raise an error in this case.
So you cannot use G_LLL_XD with deep != 0, but LLL_XD should work (though, as the note says, deep is deprecated).
But as mentioned, you should consider using BKZ_XD instead of LLL_XD.
A BKZ-reduced basis of a lattice is also LLL-reduced, so there should be no problem. BKZ is slower than LLL, but you can choose a small block size to speed the reduction up, maybe 10 or 20; even 2 or 4 will work.

Hyperopt set timeouts and modify space during execution

Could someone help with the following:
How do I set a timeout for each individual trial? And a timeout for the total experiment?
How do I set up a progressive strategy that eliminates/prunes a percentage of the worst-scoring branches of the search space at different stages of the experiment (while still using the current optimization algorithms)? E.g., at 30% of the maximum total experiment it could remove the 50% worst-scoring classifiers and their whole branches of hyperparameters from upcoming trials; then the same process again at 60%, and so on.
Thanks a lot!
Following my exchange on hyperopt's github:
there is no per-trial timeout, but hyperopt-sklearn implements its own solution by just wrapping the objective function. Look for "fn_with_timeout" at https://github.com/hyperopt/hyperopt-sklearn/ .
from issue 210: "the optimizers are stateless, and fmin stores all state of the experiment in the trials object. So if you remove some experiments from the trials object, it's as if they never happened. use fmin's "max_evals" parameter to interrupt search as often as you need to make these sorts of modifications. It should be fine to use repeated calls with e.g. max_evals increasing by 1 every time if you want really fine grained control."
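To illustrate the advice in that quote, here is a minimal sketch; the objective, search space and pruning step below are placeholders, not anything from the question.
from hyperopt import fmin, tpe, hp, Trials

def objective(x):                        # placeholder objective
    return (x - 3) ** 2

space = hp.uniform('x', -10, 10)         # placeholder search space
trials = Trials()                        # all experiment state lives here

evals_done = 0
for stage in range(10):                  # 10 stages of 10 trials each
    evals_done += 10
    best = fmin(objective, space, algo=tpe.suggest,
                max_evals=evals_done, trials=trials)
    # Between stages you can inspect trials.trials / trials.losses() and,
    # e.g., drop the worst-scoring entries or narrow the space before the
    # next fmin call resumes from the same trials object.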
Thanks for looking into this, @doxav. I've written some code that addresses question 1, taking part of fn_with_timeout from hyperopt-sklearn and adapting it for standard Hyperopt cost functions.
You can find it here:
https://gist.github.com/hunse/247d91d14aaa8f32b24533767353e35d
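For completeness, here is a rough, self-contained sketch of the same wrapping idea for question 1. This is neither hyperopt-sklearn's fn_with_timeout nor the gist above, just the principle of running each trial in a child process with a time budget; it assumes the default fork start method (nested functions don't pickle under spawn).
import multiprocessing
from hyperopt import STATUS_FAIL

def with_timeout(objective, seconds):
    # Wrap a Hyperopt objective so each trial is killed after `seconds`.
    def run(queue, params):
        queue.put(objective(params))

    def wrapped(params):
        queue = multiprocessing.Queue()
        proc = multiprocessing.Process(target=run, args=(queue, params))
        proc.start()
        proc.join(seconds)
        if proc.is_alive():              # trial exceeded its time budget
            proc.terminate()
            proc.join()
            return {'status': STATUS_FAIL}
        return queue.get() if not queue.empty() else {'status': STATUS_FAIL}
    return wrapped

# usage: fmin(with_timeout(objective, 60), space, algo=tpe.suggest, ...)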

How to use MPI_Barrier with another communicator?

I'm a bit of a newbie at MPI programming (mpich2 on Fedora).
I'm writing because I get a deadlock when I use MPI_Barrier with a communicator other than MPI_COMM_WORLD.
I create two communicators like this:
MPI_Comm_split (MPI_COMM_WORLD, color, rank, &split_comm);
If I put an MPI_Barrier that all colors can reach, everything is fine.
But if I put an MPI_Barrier that only color == 1 can reach, I get a deadlock.
How do I use MPI_Barrier with another communicator?
I was also using MPI_Bcast() (with a communicator different from MPI_COMM_WORLD), but it wasn't blocked even when nobody else called MPI_Bcast. Can a communicator other than MPI_COMM_WORLD synchronise its own processes?
It would be helpful if you could post a code snippet. It's hard to debug a deadlock from your words alone.
At any rate, you pass the communicator you want to block as an argument to MPI_Barrier:
http://mpi.deino.net/mpi_functions/mpi_barrier.html
http://www.mcs.anl.gov/research/projects/mpi/www/www3/MPI_Barrier.html
MPI_Bcast is a blocking function. So, if one or more ranks do not reach the MPI_Bcast call, then you could have a deadlock.
Remember that MPI_COMM_WORLD includes all ranks, even after the MPI_Comm_split call.
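If it helps, here is a minimal sketch of that idea using the mpi4py Python bindings (the question presumably uses C with mpich2, but the calls map one-to-one onto the C API); run it with something like mpirun -n 4 python barrier_demo.py.
from mpi4py import MPI

world = MPI.COMM_WORLD
rank = world.Get_rank()
color = 1 if rank % 2 else 0             # example split into two groups

# every rank of MPI_COMM_WORLD takes part in the split
split_comm = world.Split(color, rank)

if color == 1:
    # Only the color == 1 ranks reach this barrier, and that is fine:
    # the barrier is on split_comm, so it only waits for split_comm's members.
    split_comm.Barrier()

# This would deadlock instead, because the color == 0 ranks never call it:
# if color == 1:
#     world.Barrier()

split_comm.Free()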

Creating robust real-time monitors for variables

We can create a real-time monitor for a variable like this:
CreatePalette@Panel@Row[{"x = ", Dynamic[x]}]
(This is more interesting and useful if x happens to be something like $Assumptions. It's so easy to set a value and then forget about it.)
Unfortunately this stops working if the kernel is re-launched (Quit[], then evaluate something). The palette won't show changes in the value of x any more.
Is there a way to do this so it keeps working even across kernel sessions? I find myself restarting the kernel quite often. (If the resulting palette causes the kernel to be automatically started after Quit that's fine.)
Update: As mentioned in the comments, it turns out that the palette ceases working only if we quit by evaluating Quit[]. When using Evaluation -> Quit Kernel -> Local, it will keep working.
Link to same question on MathGroup.
I can only guess, because on my Ubuntu machine the situation seems buggy. The trick with Quit from the menu that Leonid suggested did not work here. Another observation: in a fresh Mathematica session with only one notebook open,
Dynamic[x]
x = 1
Dynamic[x]
x = 2
gives as expected
2
1
2
2
Typing Quit on the next line, evaluating it, and then typing x=3 updates only the first of the Dynamic[x] outputs.
Nevertheless, have you checked the command
Internal`GetTrackedSymbols[]
This gives not only the tracked symbols but also some kind of ID indicating which dynamic content they belong to. If you can find out what exactly these numbers are and investigate the other functions in the Internal` context, you may be able to re-register your palette's Dynamic content manually after restarting the kernel.
I thought I had something like that with
Internal`SetValueTrackExtra
but I'm currently not able to reproduce the behavior.
@halirutan's answer jarred my memory...
Have you ever come across Experimental`ValueFunction? (Experimental/ref/ValueFunction is its documentation address.)
Although the documentation contains no examples, the 'more information' section provides the following tidbit:
The assignment ValueFunction[symb] = f specifies that whenever
symb gets a new value val, the expression f[symb,val] should be
evaluated.