Calling PyArray_SearchSorted from Cython -- 3 or 4 arguments?

I'm trying to call PyArray_SearchSorted from NumPy's C API in Cython.
When I call it like PyArray_SearchSorted(values, point, NPY_SEARCHLEFT), I get a GCC error: error: too few arguments to function call, expected 4, have 3.
On the other hand, when I call it like PyArray_SearchSorted(values, point, NPY_SEARCHLEFT, NULL), Cython reports an error: Call with wrong number of arguments (expected 3, got 4).
Looking more closely, it appears there is a discrepancy between the function signature as currently defined in NumPy and as defined in Cython's includes.
I know the sorter argument for searchsorted only appeared in NumPy 1.7.0, but isn't backwards compatibility one of the guarantees with the NumPy C API? Or is this just a Cython bug?
In case it matters, I'm using Cython 0.21.1, NumPy 1.9.1 and Python 2.7 from conda on OS X.

It looks like this change occurred between releases 1.6 and 1.7, in this commit:
https://github.com/numpy/numpy/commit/313fe46046a7192cbdba2e679a104777301bc7cf#diff-70664f05e46e0882b0ebe8914bea85b4L1611
I believe this is definitely a bug, but unfortunately this particular kind of bug can easily slip in even with a high standard of diligence. Something like a rigorous ABI conformance test suite would be needed to catch these consistently.
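Until Cython's bundled numpy.pxd catches up, one possible workaround (a sketch, untested, assuming NumPy >= 1.7; the function name search_left is mine) is to declare the four-argument C prototype yourself instead of relying on the bundled declaration, and pass NULL for the sorter:

from cpython.ref cimport PyObject
cimport numpy as cnp

cnp.import_array()

cdef extern from "numpy/arrayobject.h":
    # The NumPy >= 1.7 prototype; the last argument is the optional
    # sorter permutation and may be NULL.
    object PyArray_SearchSorted(cnp.ndarray arr, object values,
                                cnp.NPY_SEARCHSIDE side, PyObject* sorter)

def search_left(cnp.ndarray values, point):
    # Equivalent to np.searchsorted(values, point, side='left')
    return PyArray_SearchSorted(values, point, cnp.NPY_SEARCHLEFT, NULL)

Because the extern block points at numpy/arrayobject.h, Cython emits a plain call against the real header rather than its own stale prototype, so GCC sees the four-argument signature it expects.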

Related

df.info() results in an error. All other methods of pandas/NumPy work fine

I cannot execute df.info(); it results in an error:
TypeError: Cannot interpret '<attribute 'dtype' of 'numpy.generic' objects>' as a data type.
I've read some solutions regarding the versions of the libraries, but I'm not sure what I should do.
I've checked the versions in Jupyter:
Numpy version: 1.21.5
pandas version: 1.0.1
Thank you for any help!

Object arrays not supported on numpy with mkl?

I recently switched from numpy compiled with OpenBLAS to numpy compiled with MKL. In pure numeric operations there was a clear speed-up for matrix multiplication. However, when I ran some code of mine that multiplies matrices containing sympy variables, I now get the error
'Object arrays are not currently supported'
Does anyone have information on why this is the case for MKL and not for OpenBLAS?
Release notes for 1.17.0
Support of object arrays in matmul
It is now possible to use matmul (or the @ operator) with object arrays. For instance, it is now possible to do:
from fractions import Fraction
a = np.array([[Fraction(1, 2), Fraction(1, 3)], [Fraction(1, 3), Fraction(1, 2)]])
b = a @ a
Are you using @ (matmul or dot)? A numpy array containing sympy objects will be object dtype. Math on object arrays depends on delegating the action to the objects' own methods. It cannot be performed by the fast compiled libraries, which only work with C types such as float and double.
As a general rule you should not be trying to mix numpy and sympy. Math is hit-or-miss, and never fast. Use sympy's own Matrix module, or lambdify the sympy expressions for numeric work.
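For illustration, a minimal sketch of both suggestions (the symbols and sample values are mine):

import sympy as sp

x, y = sp.symbols('x y')
m = sp.Matrix([[x, y], [y, x]])

# Option 1: stay inside sympy -- exact, but slow
exact = m * m

# Option 2: lambdify the expression for fast numeric work with numpy
f = sp.lambdify((x, y), m, modules='numpy')
a = f(1.0, 2.0)   # plain float64 ndarray
fast = a @ a      # now goes through the compiled BLAS path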
What's the MKL version? You may have to explore this with the creator of that build.

Does PyTorch have a RandomState-like object for random number generation?

In numpy I can do
import numpy as np
rs = np.random.RandomState(seed=0)
and then pass that object around, e.g. for dependency injection.
Does PyTorch have a similar interface? I can't find anything in the docs, but maybe I'm missing something.
The closest thing would be torch.manual_seed, which sets the seed for generating random numbers and returns a torch.Generator. This thread has more information; apparently there may be some inconsistencies depending on whether you are using a GPU or a CPU.
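For illustration, a minimal sketch of passing an explicit generator around much like a RandomState (the variable names are mine; torch.Generator and the generator= keyword are part of the public API):

import torch

g = torch.Generator()
g.manual_seed(0)

x = torch.randn(3, generator=g)        # draws tied to g, reproducible
idx = torch.randperm(10, generator=g)  # g can be handed to any sampler that accepts it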

Tensorflow: classifier.predict and predicted_classes

System information
custom code: no, it is the one in https://www.tensorflow.org/get_started/estimator
system: Apple
OS: Mac OsX 10.13
TensorFlow version: 1.3.0
Python version: 3.6.3
GPU model: AMD FirePro D700 (actually, two such GPUs)
Describe the problem
Dear all,
I am running the simple iris program:
https://www.tensorflow.org/get_started/estimator
under python 3.6.3 and tensorflow 1.3.0.
The program executes correctly, apart from the very last part, i.e. the one related to the confusion matrix.
In fact, the result I get for the confusion matrix is:
New Samples, Class Predictions: [array([b'1'], dtype=object), array([b'2'], dtype=object)]
rather than the expected output:
New Samples, Class Predictions: [1 2]
Has anything about the confusion matrix changed in the latest release?
If so, how should I modify that part of the code?
Thank you very much for your help!
Best regards
Ivan
Source code / logs
https://www.tensorflow.org/get_started/estimator
This looks like a numpy issue. array([b'1'], dtype=object) is one way numpy represents the byte string b'1'.
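If you just need plain integers back, one way (a sketch, assuming each prediction is a length-1 object array of byte strings as shown above) is:

import numpy as np

raw = [np.array([b'1'], dtype=object), np.array([b'2'], dtype=object)]
predicted_classes = [int(a[0]) for a in raw]   # -> [1, 2]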

Tensorflow error when I try to use tf.contrib.layers.convolution2d

When I invoke tf.contrib.layers.convolution2d, the TensorFlow execution terminates with an error about one of the parameters used:
got an unexpected keyword argument 'weight_init'
The parameters passed are the following:
layer_one = tf.contrib.layers.convolution2d(
    float_image_batch,
    num_output_channels=32,
    kernel_size=(5, 5),
    activation_fn=tf.nn.relu,
    weight_init=tf.random_normal,
    stride=(2, 2),
    trainable=True)
That is exactly as described in the book that I'm reading. I suspect a possible syntax problem with weight_init=tf.random_normal written directly inside the call, but I don't know how to fix it. I'm using TensorFlow 0.12.0.
The book that you are reading (you didn't mention which one) might be targeting an older version of TensorFlow, in which the initial values for the weight tensor were passed through the weight_init argument. In the version you are using (0.12.0), that argument has probably been replaced with weight_initializer. The latest (TensorFlow v0.12.0) documentation for tf.contrib.layers.convolution2d is here.
To fix your problem, you can change the following line in your code:
weight_init=tf.random_normal
to
weight_initializer=tf.random_normal_initializer()
According to the documentation, by default tf.random_normal_initializer uses a mean of 0.0, a standard deviation of 1.0, and the tf.float32 datatype. You may change the arguments as per your need using this line instead:
weight_initializer=tf.random_normal_initializer(mean=0.0, stddev=1.0, seed=None, dtype=tf.float32)
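Putting it together, the full corrected call would look like this (a sketch; every argument except weight_initializer is kept exactly as in the question):

layer_one = tf.contrib.layers.convolution2d(
    float_image_batch,
    num_output_channels=32,
    kernel_size=(5, 5),
    activation_fn=tf.nn.relu,
    # replaces weight_init=tf.random_normal, which this API version rejects
    weight_initializer=tf.random_normal_initializer(mean=0.0, stddev=1.0),
    stride=(2, 2),
    trainable=True)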