Incompatibility after upgrading SciPy: cannot use SciPy function on SymPy matrix

I have recently upgraded my SciPy stack. IPython notebooks that previously worked now fail in the new Jupyter Notebook.
Previously I could evaluate SymPy matrices with SciPy/NumPy functions. Below is a minimal example with the eig function from SciPy applied to a SymPy matrix; it fails with "object arrays are not supported". This did not happen before. During my upgrade several packages may have been updated, including SymPy.
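A sketch of that example (the matrix here is illustrative; my actual one differs, but any numeric SymPy Matrix gives the same error):
import sympy as sp
import scipy.linalg as LA

M = sp.Matrix([[1, 2], [3, 4]])  # illustrative numeric SymPy matrix
LA.eig(M)                        # fails: "object arrays are not supported"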

I don't know how it worked in your previous setup, but the process of converting SymPy matrices to NumPy arrays has been explicit since at least 2012, per this answer, and SymPy has a utility function, matrix2numpy, for this purpose. So, in your context,
LA.eig(matrix2numpy(M, dtype=float))
returns the expected eigenvalues. Without the helper function, it could be
LA.eig(np.array(M.tolist(), dtype=float))
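Either way the result is a plain float64 ndarray, so eig behaves normally. For reference, a quick self-contained check with an illustrative matrix:
import numpy as np
import scipy.linalg as LA
import sympy as sp
from sympy import matrix2numpy

M = sp.Matrix([[2, 0], [0, 3]])                       # illustrative symbolic matrix
vals, vecs = LA.eig(matrix2numpy(M, dtype=float))     # via the helper
vals2, _ = LA.eig(np.array(M.tolist(), dtype=float))  # without the helper
print(vals)   # [2.+0.j 3.+0.j]
print(vals2)  # [2.+0.j 3.+0.j]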
If you'd like SciPy functions to accept SymPy objects, that would be an issue for their tracker, rather than a question for Stack Overflow.

Related

How to cover a pandas UDF using pytest

My code goes through the pandas_udf function, yet it is not reported as covered in SonarQube. How can I get that code covered?
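One common pattern (a sketch with hypothetical names, assuming PySpark 3.x's pandas_udf) is to keep the pandas logic in a plain function, wrap it separately, and unit-test the plain function so pytest and the coverage report see it without needing a Spark session:
# my_udfs.py (hypothetical module)
import pandas as pd
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import LongType

def add_one(s: pd.Series) -> pd.Series:
    # plain pandas logic, easy to unit-test directly
    return s + 1

add_one_udf = pandas_udf(add_one, returnType=LongType())  # used in the Spark job

# test_my_udfs.py (hypothetical test)
def test_add_one():
    result = add_one(pd.Series([1, 2, 3]))
    assert result.tolist() == [2, 3, 4]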

What is the difference between tf.square, tf.math.square and tf.keras.backend.square?

I have been looking to learn TensorFlow and I have noticed that different functions are used for the same goal. To square a variable for instance, I have seen tf.square(), tf.math.square() and tf.keras.backend.square(). This is the same for most math operations. Are all these the same or is there any difference?
Mathematically, they should produce the same result. However, the functions under tensorflow.math are meant for operating on TensorFlow tensors.
For example, when you write a custom loss or metric, the inputs and outputs should be TensorFlow tensors, so that TensorFlow knows how to take gradients of the functions. You can also use the tf.keras.backend.* functions for custom losses and metrics.
Try to use the tensorflow.math.* functions whenever you can; native operations are preferred because they are officially documented and guaranteed to keep backward compatibility across TF versions such as TF 1.x and TF 2.x.
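For instance, a quick check (a minimal sketch, assuming TF 2.x eager execution) shows all three produce identical values; tf.square is in fact an alias of tf.math.square:
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])
print(tf.square(x).numpy())                # [1. 4. 9.]
print(tf.math.square(x).numpy())           # [1. 4. 9.]
print(tf.keras.backend.square(x).numpy())  # [1. 4. 9.]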

Is there a logspace function in tensorflow?

TensorFlow provides linspace but not logspace. Is there a reason for that?
I know I can use NumPy for this, but I was just curious about the reason behind it.
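For reference, one way to emulate it (a sketch, not a built-in; tf_logspace is a hypothetical helper mirroring numpy.logspace's base-10 default):
import tensorflow as tf

def tf_logspace(start, stop, num, base=10.0):
    # emulate numpy.logspace: base ** linspace(start, stop, num)
    return tf.pow(base, tf.linspace(start, stop, num))

print(tf_logspace(0.0, 2.0, 5).numpy())  # approx. [1. 3.1622777 10. 31.622776 100.]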

Triangular matrix-matrix multiply `trmm` in TensorFlow

I need the fastest possible matmul operation in TF for the case where one of the matrices is lower triangular. cuBLAS and BLAS have trmm functions, but it looks like TensorFlow doesn't benefit from them.
I checked the LinearOperators implementation for the LowerTriangular case, but it is not clear whether it utilizes the BLAS implementation or not.
Can anyone confirm that the most optimized version is what LinearOperators implements?
Thanks!
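For reference, the usage in question looks like this (a sketch; whether it dispatches to a BLAS/cuBLAS trmm kernel underneath is exactly the open question):
import tensorflow as tf

tril = tf.constant([[1.0, 0.0],
                    [2.0, 3.0]])   # lower-triangular factor
x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])

op = tf.linalg.LinearOperatorLowerTriangular(tril)
print(op.matmul(x).numpy())        # same result as tf.matmul(tril, x)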

How much of sklearn can I use with PyPy?

The pypy project is currently adding support for numpy.
My impression is that sklearn library is mainly based on numpy.
Would I be able to use most of this library, or are there other requirements that are not supported yet?
Officially, none of it. If you want to do a port, go ahead (and please report results on the mailing list), but PyPy is simply not supported because scikit-learn uses many, many parts of NumPy and SciPy as well as having a lot of C, C++ and Cython extension code.
From the official scikit-learn FAQ (https://scikit-learn.org/stable/faq.html):
Do you support PyPy?
In case you didn’t know, PyPy is an alternative Python implementation with a built-in just-in-time compiler. Experimental support for PyPy3-v5.10+ has been added, which requires Numpy 1.14.0+, and scipy 1.1.0+.
Also see what PyPy has to say (https://www.pypy.org/):
Compatibility: PyPy is highly compatible with existing python code. It supports cffi, cppyy, and can run popular python libraries like twisted, and django. It can also run NumPy, Scikit-learn and more via a c-extension compatibility layer.