Support for np.digitize - cupy

I see the cupy version of np.digitize is not yet supported. Are there plans to make this function work in cupy?
If not, is there an easy way to implement the functionality for cupy arrays?
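One possible workaround, assuming your CuPy version provides cupy.searchsorted: for monotonically increasing bins, np.digitize is equivalent to a searchsorted call, so a small wrapper can stand in until native support lands (a sketch, not an official CuPy API):

```python
import cupy as cp

def digitize(x, bins, right=False):
    # Workaround sketch: for monotonically increasing bins, np.digitize is
    # equivalent to searchsorted with the opposite 'side' argument.
    # Assumes your CuPy version provides cupy.searchsorted.
    side = 'left' if right else 'right'
    return cp.searchsorted(bins, x, side=side)

x = cp.array([0.2, 6.4, 3.0, 1.6])
bins = cp.array([0.0, 1.0, 2.5, 4.0, 10.0])
print(digitize(x, bins))  # [1 4 3 2], matching np.digitize on the CPU
```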

Related

How to create an _Arg, _Retval, _If operation in tensorflow?

I'm trying to test all the operations available in tensorflow. For example, we can find 'Conv2d' in the tf.nn module.
There are some operations that start with an '_', e.g. '_Arg', '_ArrayToList', '_Retval'. I looked into the tensorflow source code, but I still can't find how to create an '_Arg' operation. Could someone give me some instructions on how to find these operations, or explain what these operations do?
Those operations are for internal use. They are implemented in C++, so you'll need to download the source code, write your own tests (in C++), and compile and run them, since most of those operations do not have a Python wrapper.
Here you can find the C++ API.
This tutorial may help you if you are getting started with TF operations. It does not do exactly what you want, as it works with custom public operations.
You may have a look at the tests already implemented in the TF code, for example this test file.
However, I would strongly recommend that you reconsider whether you really need to test those functions. Testing every single function in TensorFlow, even the internal ones, is going to be a hard job.
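If the goal is simply to locate these ops and inspect their signatures from Python, the op registry can be queried. Note that op_def_registry is an internal module whose interface has changed between TF versions, so this is only an illustrative sketch for TF 1.x:

```python
# Sketch: query TensorFlow's op registry from Python to locate internal ops
# such as '_Arg'. op_def_registry is an internal module whose interface has
# changed between TF 1.x and 2.x, so treat this as an illustration only.
from tensorflow.python.framework import op_def_registry

registered = op_def_registry.get_registered_ops()  # TF 1.x: dict name -> OpDef
internal = sorted(name for name in registered if name.startswith('_'))
print(internal)  # internal ops that are registered, e.g. '_Arg', '_Retval'

if '_Arg' in registered:
    print(registered['_Arg'])  # the OpDef proto describing its inputs/attrs
```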

How can I use NumPy in IronPython for Revit API?

I'm writing a script for the Revit API using Python. I'm looking to use NumPy since I'm trying to generate a lattice grid of points to place families at. However, I know NumPy is not compatible with IronPython since it is built for CPython. Is there a solution for this? If not, is there any good way to generate a lattice grid of points without using external packages like NumPy?
pyRevit has a CPython engine available.
The post I linked was the beta announcement. It is now available in the pyRevit master release.
Some people have already successfully used pandas and numpy.
pyRevit uses pythonnet.
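If NumPy turns out to be impractical under IronPython, a lattice grid can also be generated with plain Python and no external packages; a minimal sketch (the spacings and counts are placeholder values, and conversion to Revit points is left out):

```python
# Minimal pure-Python lattice grid (no NumPy), usable under IronPython.
# The spacings and counts below are placeholder values; convert the tuples
# to Revit points (Autodesk.Revit.DB.XYZ) before placing families.
def lattice_grid(n_x, n_y, spacing_x, spacing_y, origin=(0.0, 0.0)):
    ox, oy = origin
    return [(ox + i * spacing_x, oy + j * spacing_y)
            for j in range(n_y)
            for i in range(n_x)]

points = lattice_grid(n_x=5, n_y=4, spacing_x=10.0, spacing_y=10.0)
# [(0.0, 0.0), (10.0, 0.0), ..., (40.0, 30.0)]
```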

Exporting Tensorflow Models for Eigen Only Environments

Has anyone seen any work done on this? I'd think it would be a reasonably common use case: train a model in Python, export the graph, and map it to a sequence of Eigen instructions.
I don't believe anything like this is available, but it is definitely something that would be useful. There are some obstacles to overcome though:
Not all operations are implemented by Eigen.
We'd need to know how to generate code for all operations we want to support.
The glue code to allocate buffers and schedule work can get pretty gnarly.
It's still a good idea though, and it might get more attention if posted as a feature request on https://github.com/tensorflow/tensorflow/issues/
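As a starting point for the "which operations need Eigen equivalents" question, the trained graph can be frozen from Python and its nodes enumerated. A sketch using the TF 1.x tf.graph_util API; the tiny model and the 'output' node name are just placeholders:

```python
# Sketch (TF 1.x API): freeze a trained graph and count the ops that an
# Eigen-only backend would have to reimplement. The tiny model below and the
# 'output' node name are placeholders for a real trained network.
import tensorflow as tf
from collections import Counter

x = tf.placeholder(tf.float32, [None, 4], name='x')
w = tf.Variable(tf.random_normal([4, 2]))
b = tf.Variable(tf.zeros([2]))
y = tf.nn.relu(tf.matmul(x, w) + b, name='output')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=['output'])

# Each distinct node.op (MatMul, Add, Relu, Const, ...) is an operation that
# would need hand-written or generated Eigen code.
print(Counter(node.op for node in frozen.node))
```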

How to speed up matrix functions such as expm function in scipy/numpy?

I'm using scipy and numpy to compute the matrix exponential of a 6*6 matrix many times.
Compared to Matlab, it's about 10 times slower.
The function I'm using is scipy.linalg.expm. I have also tried the deprecated methods scipy.linalg.expm2 and scipy.linalg.expm3, and those are only about two times faster than expm. My questions are:
1. What's wrong with expm2 and expm3, given that they are faster than expm?
2. I'm using the wheel packages from http://www.lfd.uci.edu/~gohlke/pythonlibs/, and I found https://software.intel.com/en-us/articles/building-numpyscipy-with-intel-mkl-and-intel-fortran-on-windows. Are the wheel packages compiled with MKL? If not, could I speed up numpy and scipy by compiling them myself with MKL?
3. Are there any other ways to optimize the performance?
Well, I think I have found the answers to questions 1 and 2 myself.
1. It seems expm2 and expm3 return an array rather than a matrix, but they are about 2 times faster than expm.
2. After a whole day spent trying to compile scipy against MKL, I succeeded. It's really hard to build scipy, especially on Windows x64 with Python 3, and it turned out to be a waste of time: it's not even slightly faster than the whl package from http://www.lfd.uci.edu/~gohlke/pythonlibs/ .
I'm still hoping someone can answer question 3.
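One way to verify whether the Gohlke wheels already link against MKL (which would explain why a self-compiled build gave no speedup) is to print the build configuration; a minimal check:

```python
# Print the BLAS/LAPACK build configuration of the installed packages.
# If MKL is linked, the library names in the output typically mention 'mkl'.
import numpy as np
import scipy

np.__config__.show()
scipy.__config__.show()
```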
Your matrix is relatively small, so maybe the numerical part is not the bottleneck. You should use a profiler to make sure that the limitation is in the exponentiation.
You can also take a look at the source code of these implementations and write an equivalent function with fewer conditionals and checks.
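A minimal way to check whether expm itself is the bottleneck, as suggested above; the random 6*6 matrix and repetition counts are only placeholders:

```python
# Sketch: time scipy.linalg.expm on a 6x6 matrix and profile a loop around it
# to see how much time is spent in the numerics versus Python-level overhead.
import cProfile
import timeit
import numpy as np
from scipy.linalg import expm

A = np.random.rand(6, 6)  # stand-in for the real 6x6 matrix

n = 10000
t = timeit.timeit(lambda: expm(A), number=n)
print('expm: %.1f us per call' % (1e6 * t / n))

# Profile a representative loop; the report shows time inside expm versus
# argument checking and object creation around it.
cProfile.run('for _ in range(1000): expm(A)', sort='cumtime')
```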

Unconstrained Optimization using Gradient and (sparse) Hessian

I'm looking for a C++ optimization package that can do multivariate unconstrained optimization using gradient and Hessian information. I'm doing it now in Matlab using fminunc with the 'GradObj', 'Hessian', and 'HessPattern' options. My Hessian is very sparse so a package that takes that into account would be preferable.
Are there any alternatives to Matlab for this? Open-source or closed-source are both fine. C++ is preferable but I'm flexible.
Have you considered compiling the MATLAB library into a .dll or .exe that R can reference? MATLAB has this capability.
You could just ditch the Hessian and use a BFGS approach, like libLBFGS. These quasi-Newton methods are usually pretty good.
As I understand it, what you need is an efficient linear algebra library. Consider, for example, uBLAS.
I recently encountered the trustOptim package in R. It is useful when the Hessian is sparse. As far as I know, the core of that package is written in C++ and interfaced with R using Rcpp. It's open source as well.