My code goes through the pandas_udf function, yet it is not reported as covered in SonarQube. How can I get that code covered?
Background
In TensorFlow, even when using mutable variables, it looks like there is no out option as in NumPy to specify the location where the result of a calculation is stored. One of the reasons a calculation gets slower is temporary copies, as explained in From Python to Numpy, and in my understanding re-using an existing buffer would avoid such copies.
Question
I would like to understand why there is no equivalent of the out option in TensorFlow. For instance, matmul appears to have no such option to specify the output location. Is it because TensorFlow by design avoids making temporary copies, or does it always create them?
It also appears that TensorFlow has no concept of copy indexing versus view indexing as NumPy does. When an array is extracted from an existing array, is it a shallow copy (view), a deep copy, or does it depend?
Please advise where to look to understand the internal behavior, similar to From Python to Numpy, which gives good insight into NumPy's internal architecture and performance considerations.
TensorFlow produces computation graphs, which are highly optimized in terms of data flow. For example, if some of the stated computations are not needed to produce the final result, TF will not evaluate them. Moreover, TF compiles procedures to its own low-level operations. Hence the out parameter of NumPy does not make sense in this context.
Thus, TF internally optimizes all steps of the dataflow, and you do not need to provide any such instructions. You can optimize the procedure for getting the result as an algorithm, but not how the algorithm works internally.
To get familiar with what a computational graph is, consider reading this guide.
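As a minimal sketch (assuming TensorFlow 2.x, where a graph is built by tracing a tf.function), you simply return results and let the runtime manage intermediate buffers; there is no user-facing out= argument:

import tensorflow as tf

@tf.function  # traced into a graph; TF decides where intermediates are stored
def affine(x, w, b):
    y = tf.matmul(x, w)  # no out= target: the runtime allocates/reuses buffers
    return y + b         # temporaries are handled by the graph optimizer

x = tf.random.normal([4, 3])
w = tf.random.normal([3, 2])
b = tf.zeros([2])
print(affine(x, w, b).shape)  # (4, 2)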
So it turns out that when you want to use interactive plots (i.e. with zooming, panning, rotating, etc.) in JupyterLab with a Python kernel, you need to use %matplotlib widget; at least that works for me. Now the question is: how can I use that feature with a Julia kernel? I am a big fan of both matplotlib and Julia and I do not want to compromise on either. When I type the above command with a Julia kernel, I get the message
The analogue of IPython's %matplotlib in Julia is to use the PyPlot package, which gives a Julia interface to Matplotlib including inline plots in IJulia notebooks. (The equivalent of numpy is already loaded by default in Julia.) Given PyPlot, the analogue of %matplotlib inline is using PyPlot, since PyPlot defaults to inline plots in IJulia. To enable separate GUI windows in PyPlot, analogous to %matplotlib, do using PyPlot; pygui(true). To specify a particular gui backend, analogous to %matplotlib gui, you can either do using PyPlot; pygui(:gui); using PyPlot; pygui(true) (where gui is wx, qt, tk, or gtk), or you can do ENV["MPLBACKEND"]=backend; using PyPlot; pygui(true) (where backend is the name of a Matplotlib backend, like tkagg).
For more options, see the PyPlot documentation.
This of course is all true, but it does not mention how interactivity could be achieved. PyPlot works out of the box, but the plots are non-interactive (in the above sense). Any ideas?
See the PyPlot docs here: https://github.com/JuliaPy/PyPlot.jl for details on setting up interactive plots and such.
This gist may also be of use to you: https://gist.github.com/gizmaa/7214002. I would also suggest looking into Makie.jl for richer interactive plotting.
Edit:
This question may also be relevant to you: Inline Interactive Plots with Julia in jupyter notebook
I know a spectrogram can be plotted using different functions from different libraries in Python. In matplotlib, pyplot plots the spectrogram directly from the time-series audio data, but librosa first applies a short-time Fourier transform to the data before plotting the spectrogram.
But I am still confused about the difference between the two.
Please tell me the detailed difference between:
1. librosa.display.specshow()
2. matplotlib.pyplot.specgram()
I have searched the internet a lot but couldn't find any relevant information.
According to the librosa documentation, all librosa plotting functions depend on matplotlib:
All of librosa’s plotting functions rely on matplotlib. To demonstrate everything we can do, it will help to import matplotlib’s pyplot API here.
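As a rough illustration of the difference (a sketch, assuming librosa 0.8+ for the bundled example clip): pyplot.specgram takes the raw time series and computes the short-time Fourier transform itself, while librosa.display.specshow only draws a matrix you have already computed, such as a dB-scaled STFT:

import numpy as np
import matplotlib.pyplot as plt
import librosa
import librosa.display

y, sr = librosa.load(librosa.ex('trumpet'))  # any mono signal works here

# matplotlib: computes the spectrogram from the raw samples itself
plt.figure()
plt.specgram(y, Fs=sr, NFFT=2048, noverlap=1024)
plt.title('matplotlib.pyplot.specgram')

# librosa: you compute the STFT first, specshow only renders the matrix
D = librosa.amplitude_to_db(np.abs(librosa.stft(y, n_fft=2048)), ref=np.max)
plt.figure()
librosa.display.specshow(D, sr=sr, x_axis='time', y_axis='log')
plt.title('librosa.display.specshow')
plt.show()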
Are there any ways to use pandas or numpy to do transformations in Google Cloud Dataflow?
https://cloud.google.com/blog/big-data/2016/03/google-announces-cloud-dataflow-with-python-support
The above link says there is support for numpy, scipy and pandas, but there are no examples available.
Dataflow and Beam do not currently ship transforms that use NumPy or Pandas. Nonetheless, you should be able to use them without much trouble.
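For example, here is a minimal sketch (the record layout and step names are made up for illustration): any library that is importable on the workers, including numpy or pandas, can be used inside an ordinary Map or DoFn:

import apache_beam as beam
import numpy as np

def add_stats(record):
    # record is assumed to look like {'id': ..., 'values': [1.0, 2.0, ...]}
    values = np.asarray(record['values'], dtype=float)
    record['mean'] = float(values.mean())
    record['std'] = float(values.std())
    return record

with beam.Pipeline() as p:
    (p
     | 'Create' >> beam.Create([{'id': 1, 'values': [1.0, 2.0, 3.0]}])
     | 'AddStats' >> beam.Map(add_stats)
     | 'Print' >> beam.Map(print))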
If you give more info about your use case, we can help you figure it out.
I have recently upgraded my SciPy stack. IPython notebooks that previously worked now fail in the new Jupyter Notebook.
Previously I could evaluate SymPy matrices using SciPy/NumPy functions. Below is a minimal example with the eig function from SciPy applied to a SymPy matrix. It returns "object arrays are not supported". This did not happen before. During my upgrade several packages may have been updated, including SymPy.
I don't know how it worked in your previous setup, but the process of converting SymPy matrices to NumPy arrays was explicit as early as 2012, per this answer, and SymPy has a utility function matrix2numpy for this purpose. So, in your context
LA.eig(matrix2numpy(M, dtype=float))
returns the expected eigenvalues. Without the helper function, it could be
LA.eig(np.array(M.tolist(), dtype=float))
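Put together as a self-contained sketch (assuming LA is scipy.linalg, as in the question, and using a small numeric matrix for illustration):

import numpy as np
import scipy.linalg as LA
from sympy import Matrix, matrix2numpy

M = Matrix([[2, 0], [0, 3]])  # any purely numeric SymPy matrix

# explicit SymPy -> NumPy conversion before calling SciPy
vals, vecs = LA.eig(matrix2numpy(M, dtype=float))
print(vals)  # [2.+0.j 3.+0.j]

# the same conversion without the helper function
vals2, vecs2 = LA.eig(np.array(M.tolist(), dtype=float))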
If you'd like SciPy functions to accept SymPy objects, that would be an issue for their tracker, rather than a question for Stack Overflow.