I'm using scipy.ndimage.zoom and I get this annoying warning:
UserWarning: From scipy 0.13.0, the output shape of zoom() is calculated with round() instead of int() - for these inputs the size of the returned array has changed.
I'm not sure what to take from it; I started using the function with SciPy 1.0.0, so I don't believe it really affects me.
I guess calling it UserWarning is a bit questionable given it's not intended for user consumption, but maybe the intended user is the developer importing the library.
I'm using multiprocessing, so I get one warning per process, which is even more annoying.
Is there a sane way to silence it?
It was easier than I thought; I'm leaving the question up for future reference in case anyone needs this.
import warnings
# ignore the zoom() warning by matching its message text
warnings.filterwarnings('ignore', '.*output shape of zoom.*')
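If, as in the question, each worker process prints its own copy of the warning, the filter also has to be installed in every worker. Here is a minimal sketch assuming a multiprocessing.Pool; _ignore_zoom_warning and work are made-up names for illustration:
import warnings
from multiprocessing import Pool
import numpy as np
import scipy.ndimage

def _ignore_zoom_warning():
    # runs once in every worker process, so the filter is active there too
    warnings.filterwarnings('ignore', '.*output shape of zoom.*')

def work(factor):
    return scipy.ndimage.zoom(np.ones((4, 4)), factor)

if __name__ == '__main__':
    with Pool(processes=2, initializer=_ignore_zoom_warning) as pool:
        results = pool.map(work, [0.5, 1.5, 2.5])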
Your proposed solution did not work for me. But what does work is:
import warnings
import scipy.ndimage
# other nice code
with warnings.catch_warnings():
    warnings.simplefilter("ignore")  # suppress all warnings raised inside this block
    x = scipy.ndimage.interpolation.zoom(...)
I am currently reading the documentation for numpy; however, to get a more thorough understanding of the library, it would be helpful if there were a way to debug its internal workflow as I call a particular function.
I have tried debugging while numpy was imported as a third-party module. However, when I try to step into it, the debugger actually steps over.
Therefore, I am building it from source locally in an attempt to run and debug it.
I find the developer documentation on the numpy website to be a bit vague for beginners like me.
I would highly appreciate any comments that would set me on the right path, as I have tried everything that I know of.
Thanks!
I am currently reading the documentation for numpy; however, to get a more thorough understanding of the library, it would be helpful if there were a way to debug its internal workflow as I call a particular function.
Unless you plan to fix a bug in Numpy, help the Numpy developers, or become a contributor, you should not need to debug Numpy directly.
I have tried debugging while numpy was imported as a third-party module. However, when I try to step into it, the debugger actually steps over.
By default, Numpy is built with compiler optimizations like -O2 or -O3, and it even uses annotations in the code to tell the compiler to apply a given optimization level (to better vectorize it, for example). Such optimizations tend to make debugging harder and unreliable. The maximum optimization level suitable for debugging is -Og and the minimum is -O0; using -O1/-O2/-O3 tends to cause issues. You also need to enable debugging information with -g.
The standard way to run and debug Numpy is to use gdb --args python runtests.py -g --python mytest.py. The -g flag compiles Numpy with the options -O0 -ggdb. Adding --debug-info may help you check that everything is built correctly. For more information see this and that. You can also find this information in the runtests.py script.
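For reference, mytest.py in that command is just an ordinary Python script that exercises the code path you want to step into; a minimal, hypothetical example:
# mytest.py - a tiny driver for the code path you want to inspect under gdb
import numpy as np

a = np.arange(10, dtype=np.float64)
b = np.add(a, a)  # break on the relevant C function in gdb before this runs
print(b)
Breakpoints on the C side are then set from the gdb prompt before typing run.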
If you still have issues with the above method, the last, desperate option is to add printf calls directly in the code (taking care to flush stdout frequently). It is not very clean and forces Numpy to be recompiled frequently, which is a bit slow, but it is a pretty good solution when gdb is unstable (i.e. crashes or gives bogus results).
Thank you for contributing to Numpy.
I don't have a specific problem at the moment, but it keeps coming up that I have a bug in my loss function and the error printouts are not sufficient to localize the problem to a specific line of code. For example, expected 'int32' but got 'float32', or something like that. Is there a way to know which line of code in the loss function is the source of the problem?
I'll note that sometimes the error comes during compilation, in which case print statements have been helpful. But I have not identified a way to find the problem (outside of guessing or commenting out sections) if it happens only during training, since printouts are not displayed.
You can use one of the following options to debug a custom loss function or any other deep learning work:
tf.print (a short sketch follows this list)
tfdbg
TensorBoard debugger
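As a rough illustration of the first option, tf.print statements survive graph compilation, so they still produce output while the loss runs inside model.fit. A minimal sketch of a hypothetical custom loss (my_loss is just a placeholder name):
import tensorflow as tf

def my_loss(y_true, y_pred):
    # tf.print runs inside the compiled graph, unlike Python's print
    tf.print("y_true:", y_true.dtype.name, tf.shape(y_true))
    tf.print("y_pred:", y_pred.dtype.name, tf.shape(y_pred))
    # cast explicitly so an int32/float32 mismatch surfaces here rather than deeper in the graph
    y_true = tf.cast(y_true, y_pred.dtype)
    return tf.reduce_mean(tf.square(y_true - y_pred))
Compile the model with loss=my_loss as usual; the printed lines appear in the console during training, which is usually enough to locate a dtype or shape mismatch.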
I am trying to run some code whose functionality cannot be altered, but it would be great if I could somehow ignore the warnings or prevent them from being printed to the console, as they congest it and make it unreadable. Thank you.
You can try tf.autograph.set_verbosity; it is used to control how much info TensorFlow logs, as indicated in their docs for TF 2.6:
tf.autograph.set_verbosity(
    level=0, alsologtostdout=False
)
Check also the answers here, which cover solutions for multiple versions of TensorFlow and Python.
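Beyond AutoGraph verbosity, the noise usually comes from several layers (the C++ backend, TensorFlow's Python logger, and ordinary Python warnings), so it is common to combine a few switches. A rough sketch of that combination; the exact effect can vary between TensorFlow versions:
import os
# must be set before TensorFlow is imported; silences the C++ backend's log output
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'

import warnings
import tensorflow as tf

tf.get_logger().setLevel('ERROR')   # TensorFlow's Python-side logger
tf.autograph.set_verbosity(0)       # AutoGraph conversion messages, as above
warnings.filterwarnings('ignore')   # remaining Python warnings (deprecations etc.)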
I am new to TensorFlow and am currently working with TensorFlow 2. I'm still having a hard time writing code because I don't have a good way to debug it.
I already tried to get further with the line:
tf.executing_eagerly()
and
tf.print()
but these are only a small help compared to "normal" debugging in Python.
Is there a better possibility to debug the code and to view the content of variables?
The only thing I currently get is this view, but that doesn't give me any insight into the actual variables either:
Rather than using tf.print, use Python's normal print if you are executing eagerly. You will be able to see the contents of the variables.
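If some of your code runs inside a tf.function (a compiled training step, for instance), Python's print and debugger breakpoints only fire during tracing; one way around that while debugging is to force eager execution globally. A minimal sketch, assuming TF 2.x (the tensors here are just placeholders):
import tensorflow as tf

# force tf.function-decorated code to run eagerly so print() and breakpoints work
tf.config.run_functions_eagerly(True)

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)

print(y)          # prints the tensor, including its values
print(y.numpy())  # plain NumPy array, easy to inspect in a debugger
Remember to remove the run_functions_eagerly call (or set it back to False) once you are done, since it disables graph compilation and slows training down.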
I'm looking for a way to "enter" a module in the REPL, so that I can access all symbols without qualification (not just the exported ones), and any function (re)defined at the REPL gets in the specified module. (Basically this is the functionality of Common Lisp's in-package macro.)
This would be useful in a REPL-oriented workflow, as I would be able to write the same code in the REPL as in the module I am developing.
The manual recommends a workflow where I qualify everything, but that seems annoying.
I started a package called REPLMods.jl for this a while back. It should probably be polished up, but I haven't had the time.
I spoke to core Julia members and there was interest in getting it merged into base once things were clean, but again, no time!
I know this isn't quite what you're asking, but just in case the 'obvious' had not occurred to you (or future visitors to the question): assuming you loaded a module with an annoyingly cumbersome name, e.g.
import LaTeXStrings
and you don't want to have to type LaTeXStrings all the time just to explore its accessible names, i.e.
LaTeXStrings.[TAB]
you can just assign the imported module as a whole to another variable, i.e.
const l = LaTeXStrings
I'm sure that, in the absence of a more appropriate built-in solution, at least typing l.[TAB] as opposed to LaTeXStrings.[TAB] is a lot more tolerable :)
(I find it odd, in fact, that julia doesn't seem to support the import LaTeXStrings as l syntax ...)
It's 2020, I'm using Julia 1.4, and I was unable to get REPLMods.jl to work. I think the following seem good enough for the time being:
ExportAll.jl - see Exporting all symbols in Julia for a discussion (just note that one shouldn't use ExportAll as a replacement for normal export)
and Revise.jl