Pytorch tensor.method() vs torch.method(tensor)? - oop

I've noticed that many tensor operations can be invoked either as functions from the torch module itself or as bound methods on a tensor instance.
For instance:
import torch
my_tens = torch.ones((3,2))
another_tens = torch.ones((3,2))
res_tens = my_tens==another_tens
# both are equivalent:
torch.all(res_tens, dim=1)
res_tens.all(dim=1)
Similarly, .sum() and other methods work the same way. Why is that? Are there any advantages to using one approach or the other?

The two options are equivalent and run the same implementation "under the hood".
You can use whatever is more convenient for you and makes your code more readable.
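For example, a quick check (purely illustrative) shows both spellings produce identical results:
import torch

t = torch.arange(6, dtype=torch.float32).reshape(3, 2)
# function form vs. bound-method form dispatch to the same implementation
print(torch.equal(torch.sum(t, dim=1), t.sum(dim=1)))              # True
print(torch.equal(torch.all(t > -1, dim=1), (t > -1).all(dim=1)))  # True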

Related

How to setup a batched matrix multiplication in Numba with np.dot() using contiguous arrays

I am trying to speed up a batched matrix multiplication with numba, but it keeps warning me that np.dot() is faster on contiguous arrays.
Note: I'm using numba version 0.55.1, and numpy version 1.21.5
Here's the problem:
import numpy as np
import numba as nb
@nb.njit(parallel=True)
def numbaFastMatMult(mat, vec):
    result = np.zeros_like(vec)
    for n in nb.prange(vec.shape[0]):
        result[n, :] = np.dot(vec[n, :], mat[n, :, :])
    return result

D, N = 10, 1000
mat = np.random.normal(0, 1, (N, D, D))
vec = np.random.normal(0, 1, (N, D))
result = numbaFastMatMult(mat, vec)

n = 0  # check an arbitrary batch index
print(mat.data.contiguous)
print(vec.data.contiguous)
print(mat[n, :, :].data.contiguous)
print(vec[n, :].data.contiguous)
Clearly all the relevant data is contiguous (run the snippet above and see the results of the print() calls).
But, when I run this code, I get the following warning:
NumbaPerformanceWarning: np.dot() is faster on contiguous arrays, called on (array(float64, 1d, C), array(float64, 2d, A))
result[n,:] = np.dot(vec[n,:], mat[n,:,:])
Two extra comments:
This is just a toy problem for replication; I'm actually using something with many more data points, so I'm hoping this will speed things up.
I think the "right" way to solve this is with np.tensordot (a pure-NumPy sketch follows the list of attempts below). However, I want to understand what's going on for future reference. For example, this discussion addresses a similar issue, but as far as I can tell it doesn't directly address why the warning shows up.
I've tried adding a decorator:
nb.float64[:,::1](nb.float64[:,:,::1],nb.float64[:,::1]),
I've tried reordering the arrays so the batch index is first (n in the above code)
I've tried printing whether the "mat" variable is contiguous from inside the function
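For what it's worth, a rough pure-NumPy version of the same batched product (shown with np.einsum, since as far as I know plain np.tensordot does not batch over the leading axis; just a sketch using the mat and vec arrays defined above):
# result[n, :] = vec[n, :] @ mat[n, :, :] for every n
result_numpy = np.einsum('nd,nde->ne', vec, mat)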
I'll leave this up, but I figured it out:
Outside of a numba function:
mat[n,:,:].data.contiguous==True
but inside numba, mat[n,:,:] is no longer contiguous.
Changing my code above to np.dot(vec[n], mat[n]) removed the warning.
I'm making this the "correct" answer since it solved my problem. However, according to max9111's response, this behavior may be a bug!

SkLearn - Using RegressorChain with ColumnTransformer in Pipelines?

I'm having problems using sklearn's RegressorChain (https://scikit-learn.org/stable/modules/generated/sklearn.multioutput.RegressorChain.html), and unfortunately there doesn't seem to be a lot of documentation/examples about this.
The documentation states indirectly (through the set_params method) that it can be used with Pipelines. My pipeline has:
ct = ColumnTransformer(
    transformers=[
        ('scaler', MinMaxScaler(), numerical_columns),
        ('onehot', OneHotEncoder(), ['day_of_week']),
    ],
    remainder='passthrough'
)
cv = TimeSeriesSplit(n_splits = groups.nunique()) #groups by date
pipeline = make_pipeline(ct, lgb.LGBMRegressor(random_state=42))
target_transform_output = TransformedTargetRegressor(regressor=pipeline, transformer=PowerTransformer())
and then I do:
chain_regressor = RegressorChain(base_estimator=target_transform_output, order=[1, 0, 2])
chain_regressor.fit(X, y)
In the above, both X and y are pandas DataFrames, and y has 3 target columns.
When I run the code, the fit() call produces a Python stack trace, starting in __init__.py in _get_column_indices(X, key) when doing all_columns = X.columns. The error is:
AttributeError: 'numpy.ndarray' object has no attribute 'columns'
and further down at the end of the stack trace:
ValueError: Specifying the columns using strings is only supported for pandas DataFrames
I assume this is because the ColumnTransformer returns ndarrays, a well-known problem. Does this mean that the RegressorChain can't be used with the ColumnTransformer?
After this, I removed the column transformer step from the pipeline and tried again, and without the ColumnTransformer everything works fine (even the TransformedTargetRegressor).
Any help, ideas or workaround appreciated.
You have the issue the wrong way around: it's not that ColumnTransformer outputs an array and RegressorChain expected a dataframe; rather, the RegressorChain converts your input to an array before calling your pipeline, and so your ColumnTransformer doesn't get a dataframe as input and cannot use your column-name specifications.
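A tiny illustration of that point (using a hypothetical toy frame, not your data): the same kind of ColumnTransformer works on a DataFrame but fails once it receives a bare array, which is what RegressorChain hands to your pipeline.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder

toy = pd.DataFrame({'day_of_week': [0, 1, 2], 'x1': [0.1, 0.2, 0.3]})
enc = ColumnTransformer([('onehot', OneHotEncoder(), ['day_of_week'])],
                        remainder='passthrough')

enc.fit(toy)         # fine: the string column name resolves against the DataFrame
enc.fit(toy.values)  # ValueError: specifying the columns using strings is only
                     # supported for pandas DataFrames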
You could just specify the columns by index or callable in the ColumnTransformer. But I think in this case, you have two unfortunate side-effects:
1. For each target, you are re-encoding day_of_week and re-scaling each independent variable (not wrong, just a little wasteful), and
2. you never scale the targets, even when they are used as independent variables for "later" targets' regressions (not wrong for a tree-based model like your LightGBM [in fact, for LGBM, why bother scaling at all?], but other models might suffer from not scaling those).
(1) can be fixed by doing the preprocessing as a pipeline step before RegressorChain. (2) can be fixed by changing the scaler's column specification to a callable, below using the helper make_column_selector. The fix for (2) does end up re-computing the scalings at each chain step (hurting (1) again), but I think (2) is the bigger deal in the end (if you ever wanted to use something other than a tree model).
So I would suggest instead:
import numpy as np
from sklearn.compose import make_column_selector  # new imports relative to your snippet

encoder = ColumnTransformer(
    transformers=[
        ('onehot', OneHotEncoder(), ['day_of_week']),
    ],
    remainder='passthrough',
)
scale_nums = ColumnTransformer(
    transformers=[
        ('scaler', MinMaxScaler(), make_column_selector(dtype_include=np.number)),
    ],
    remainder='passthrough',
)
modeling_pipe = make_pipeline(scale_nums, lgb.LGBMRegressor(random_state=42))
target_transform_output = TransformedTargetRegressor(
    regressor=modeling_pipe,
    transformer=PowerTransformer(),
)
final_pipeline = make_pipeline(encoder, target_transform_output)

Efficient solving of generalised eigenvalue problems in python

Given a generalised eigenvalue problem Ax = λBx, which of the two approaches shown here is the more efficient way to solve it?
import scipy as sp
import numpy as np
def geneivprob(A, B):
    # Use scipy
    lamda, eigvec = sp.linalg.eig(A, B)
    return lamda, eigvec

def geneivprob2(A, B):
    # Reduce the problem to a standard symmetric eigenvalue problem
    Linv = np.linalg.inv(np.linalg.cholesky(B))
    C = Linv @ A @ Linv.transpose()
    #C = np.asmatrix((C + C.transpose())*0.5, np.float32)
    lamda, V = np.linalg.eig(C)
    return lamda, Linv.transpose() @ V
I saw the second version in a codebase and was wondering if it was better than simply using scipy.
Well, there is no obvious advantage to using the second approach; maybe for some classes of matrices it will be better, so I would suggest testing with the problems you actually want to solve. Since you are transforming the eigenvectors, this also transforms how errors affect the solution, and maybe that is the reason for using the second method: not efficiency, but numerical accuracy or convergence.
Another thing is that the second method will only work for a symmetric positive-definite B, since it relies on a Cholesky factorisation.
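If you want to compare them yourself, here is a minimal timing/consistency sketch (assuming a symmetric A and a symmetric positive-definite B, and reusing the two functions defined in the question; note that for that symmetric-definite case scipy.linalg.eigh(A, B) also solves the generalised problem directly):
import time
import numpy as np
import scipy as sp
import scipy.linalg  # make sure the submodule is loaded

rng = np.random.default_rng(0)
n = 500
A = rng.standard_normal((n, n)); A = A + A.T                   # symmetric
B = rng.standard_normal((n, n)); B = B @ B.T + n * np.eye(n)   # symmetric positive-definite

for solver in (geneivprob, geneivprob2, sp.linalg.eigh):
    t0 = time.perf_counter()
    lamda = solver(A, B)[0]
    # smallest three eigenvalues should agree across methods
    print(np.sort(np.real(lamda))[:3], time.perf_counter() - t0)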

In Keras, is there documentation describing the string name to class mappings for initializers, optimizers, etc?

Is there any documentation describing what string names map to what objects in Keras? For example, below I create an Embedding layer from tf.keras.layers and I can use 'uniform' to map to the tf.keras.initializers.RandomUniform class.
tf.keras.layers.Embedding(1000, 64, embeddings_initializer='uniform')
But I only know that by seeing examples of that usage. I presume the supported string forms are documented somewhere, but I can't seem to find such documentation, and digging through the code got too abstract to follow easily.
Version: TF 1.13.1
There is no list of such string constants available in the Keras implementation in TF (and, I suppose, none in the original Keras either).
For the initializer case, the 'uniform' string is converted to a config and a factory method is called on that config, with a hint to create an object from the initializers namespace (it can be found here as def deserialize_keras_object):
config = {'class_name': str(identifier), 'config': {}}
deserialize_keras_object(
    config,
    module_objects=globals(),
    custom_objects=custom_objects,
    printable_module_name='initializer')
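(As a side check, you can also see what a particular string resolves to by going through the public getter, e.g. tf.keras.initializers.get; just an illustration, and the exact class path printed depends on the TF version.)
import tensorflow as tf

init = tf.keras.initializers.get('uniform')
print(type(init))  # e.g. a RandomUniform instance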
Therefore, I can not think of a better way to, for example, list all initializers than:
import tensorflow as tf
for k, v in tf.keras.initializers.__dict__.items():
    if not k[0].isupper() and not k[0] == "_":
        print(k)
And the output, albeit with a few extra entries, looks like this:
constant
glorot_normal
glorot_uniform
identity
ones
orthogonal
zeros
he_normal
he_uniform
lecun_normal
lecun_uniform
normal
random_normal
random_uniform
uniform
truncated_normal
deserialize
get
serialize

Declaring theano variables for pymc3

I am having issues replicating some pymc2 code using pymc3.
I believe this is because pymc3 uses theano-type variables, which are not compatible with the numpy operations I am using, so I am using the @theano.compile.ops.as_op decorator:
I have this function:
with pymc3.Model() as model:
    z_stars = pymc3.Uniform('z_star', self.z_min_ssp_limit, self.z_max_ssp_limit)
    Av_stars = pymc3.Uniform('Av_star', 0.0, 5.00)
    sigma_stars = pymc3.Uniform('sigma_star', 0.0, 5.0)

    # Fit observational wavelength
    ssp_fit_output = self.ssp_fit_theano(z_stars, Av_stars, sigma_stars,
                                         self.obj_data['obs_wave_resam'],
                                         self.obj_data['obs_flux_norm_masked'],
                                         self.obj_data['basesWave_resam'],
                                         self.obj_data['bases_flux_norm'],
                                         self.obj_data['int_mask'],
                                         self.obj_data['normFlux_obs'])

    # Define likelihood
    like = pymc3.Normal('ChiSq', mu=ssp_fit_output,
                        sd=self.obj_data['obs_fluxEr_norm'],
                        observed=self.obj_data['obs_fluxEr_norm'])

    # Run the sampler
    trace = pymc3.sample(iterations, step=step, start=start_conditions, trace=db)
where:
@theano.compile.ops.as_op(itypes=[t.dscalar, t.dscalar, t.dscalar, t.dvector,
                                  t.dvector, t.dvector, t.dvector, t.dvector, t.dscalar],
                          otypes=[t.dvector])
def ssp_fit_theano(self, input_z, input_sigma, input_Av, obs_wave, obs_flux_masked,
                   rest_wave, bases_flux, int_mask, obsFlux_mean):
    ...
    ...
The first three variables are scalars (from the pymc3 uniform distribution). The remaining variables are numpy arrays and the last one is a float. However, I am getting this "'numpy.ndarray' object has no attribute 'type'" error:
File "/home/user/anaconda/lib/python2.7/site-packages/theano/gof/op.py", line 615, in __call__
node = self.make_node(*inputs, **kwargs)
File "/home/user/anaconda/lib/python2.7/site-packages/theano/gof/op.py", line 963, in make_node
if not all(inp.type == it for inp, it in zip(inputs, self.itypes)):
File "/home/user/anaconda/lib/python2.7/site-packages/theano/gof/op.py", line 963, in <genexpr>
if not all(inp.type == it for inp, it in zip(inputs, self.itypes)):
AttributeError: 'numpy.ndarray' object has no attribute 'type'
Any advice pointing me in the right direction will be most welcome.
I had a bunch of time-wasting stops when I went from pymc2 to pymc3. The problem, I think, is that the documentation is quite poor; I suspect it is being neglected while the code is still evolving. Three comments/pieces of advice:
I hope you can find some help on using '@theano.compile.ops.as_op' here: failure to adapt pymc2 into pymc3, or here: how to fit a method belonging to an instance with pymc3? (a minimal usage sketch also follows this list)
The drawback of '@theano.compile.ops.as_op' is that you implicitly exclude any analysis related to the gradient of your function. To have access to the gradient, I think you need to define your function in the more involved way presented here: how to fit a method belonging to an instance with pymc3?
Warning: for the moment, using theano seems to be a source of problems if you want to distribute your code under Windows. See build a .exe for Windows from a python 3 script importing theano with pyinstaller, but I am not sure whether that is just my own clumsiness or a real problem. Personally, I had to give up theano to be able to distribute my code...
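For reference, a minimal sketch of how '@theano.compile.ops.as_op' is normally applied, to a free function rather than a bound method (the name scale_flux and the shapes are made up; as noted above, the resulting Op has no gradient):
import numpy as np
import theano
import theano.tensor as t

@theano.compile.ops.as_op(itypes=[t.dscalar, t.dvector], otypes=[t.dvector])
def scale_flux(factor, flux):
    # plain numpy inside; as_op wraps the call as a gradient-free theano Op,
    # and the return value must be a float64 array to match otypes
    return np.asarray(factor * flux, dtype=np.float64)
Inside a model you would then call it on theano variables (e.g. the pymc3 random variables, or data wrapped with theano.shared); passing raw numpy arrays straight to the wrapped Op is, as far as I can tell, what produces the "'numpy.ndarray' object has no attribute 'type'" traceback shown in the question.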