How to provide the input channel dimension for an NNabla convolution operation?

I was trying to write a convolution layer in NNabla using nnabla.parametric_functions. How do I provide the input channel dimension to it?
I looked into the NNabla docs at https://nnabla.readthedocs.io/en/latest/python/api/parametric_function.html?highlight=convolution#nnabla.parametric_functions.convolution
Here is the code snippet:
import numpy as np
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF

# Input: batch of 1, 3 channels, 224x224 spatial size
input = nn.Variable([1, 3, 224, 224])
# 768 output maps, 16x16 kernel, stride 16
projection = PF.convolution(input, 768, (16, 16), stride=(16, 16), name='projection_conv')
print("conv", projection.shape)

In NNabla, for PF.convolution() you only need to specify the output channel dimension (outmaps), not the input channel dimension. The API infers it from the input variable in the backend.
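As a quick sanity check (a sketch added here, not part of the original answer), you can print the shapes of the parameters that PF.convolution creates. The convolution weight has shape (outmaps, inmaps, *kernel), so the inferred input channel count shows up in its second dimension:

import nnabla as nn
import nnabla.parametric_functions as PF

x = nn.Variable([1, 3, 224, 224])
y = PF.convolution(x, 768, (16, 16), stride=(16, 16), name='projection_conv')

# The weight under the 'projection_conv' scope should have shape
# (768, 3, 16, 16): inmaps=3 was inferred from x automatically.
for key, param in nn.get_parameters().items():
    print(key, param.shape)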

Related

How to use a custom loss function in GPflow 2?

I am new to GPflow and I am trying to figure out how to write a custom loss function to optimize the model. For my purpose, I need to manipulate the predicted output of the GP through different data treatments, and it is the output I get after these treatments that I would like to optimise the GP model against. For that purpose I would like to use the root mean square error as the loss function.
Workflow:
Input -> GP model -> GP_output -> Data treatment -> Predicted_output -> RMSE(Predicted_output, Observations)
I hope this makes sense.
Normally models are optimised doing something like this:
import gpflow as gf
import numpy as np

# GPflow expects 2D (N, 1) arrays for inputs and targets
X = np.linspace(0, 100, num=100).reshape(-1, 1)
n = np.random.normal(scale=8, size=X.shape)
y_obs = 10 * np.sin(X) + n

model = gf.models.GPR(
    data=(X, y_obs),
    kernel=gf.kernels.SquaredExponential(),
)
# optimizer_config: dict of scipy.optimize.minimize options, defined elsewhere
gf.optimizers.Scipy().minimize(
    model.training_loss, model.trainable_variables, options=optimizer_config
)
I have figured out a workaround using the scipy minimize function to optimise with RMSE, but I would like to stay within the GPflow framework, where I can just pass model.trainable_variables as an argument and have a general function that also works if I have multiple input/output dimensions.
from scipy.optimize import minimize

def objective_func(params):
    # Write the candidate hyperparameters into the model
    model.kernel.lengthscales.assign(params[0])
    model.kernel.variance.assign(params[1])
    model.likelihood.variance.assign(params[2])
    GP_output = model.predict_y(X)[0].numpy()
    Predicted_output = data_treatment_func(GP_output)
    return np.sqrt(np.square(np.subtract(Predicted_output, y_obs)).mean())

res = minimize(objective_func, x0=(1.0, 1.0, 1.0))
I found the answer myself.
If you write your objective_func using TensorFlow instead of NumPy (e.g. tf.math.sqrt, tf.reduce_mean), you can simply pass it to gf.optimizers.Scipy().minimize(...) in place of model.training_loss:
import tensorflow as tf

def objective_func():
    GP_output = model.predict_y(X)[0]
    Predicted_output = data_treatment_func(GP_output)
    return tf.sqrt(tf.reduce_mean(tf.square(Predicted_output - y_obs)))

gf.optimizers.Scipy().minimize(
    objective_func, model.trainable_variables, options=optimizer_config
)
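One caveat worth noting: gf.optimizers.Scipy().minimize computes gradients of the objective with respect to the trainable variables, so data_treatment_func must itself be built from TensorFlow ops. A purely hypothetical placeholder, just to make the example above self-contained (the real treatment is whatever post-processing the question has in mind):

def data_treatment_func(GP_output):
    # Hypothetical placeholder: any chain of differentiable TF ops works here
    return 2.0 * GP_output + 1.0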

Convert a TensorFlow script to PyTorch (TransformedDistribution)

I am trying to rewrite a TensorFlow script in PyTorch. I am having trouble finding the equivalent in torch for the following lines from this script:
import tensorflow_probability as tfp
tfd = tfp.distributions

a_distribution = tfd.TransformedDistribution(
    distribution=tfd.Normal(loc=0.0, scale=1.0),
    bijector=tfp.bijectors.Chain([
        tfp.bijectors.AffineScalar(shift=self._means,
                                   scale=self._mags),
        tfp.bijectors.Tanh(),
        tfp.bijectors.AffineScalar(shift=mean, scale=std),
    ]),
    event_shape=[mean.shape[-1]],
    batch_shape=[mean.shape[0]])
In particular, I am having a hard time replacing the tfp.bijectors.Chain component.
I wrote the following lines in torch, but I am wondering whether they are compatible with the above TensorFlow code, and whether I can specify the batch_shape somewhere:
base_distribution = torch.normal(0.0, 1.0)
transforms = torch.distributions.transforms.ComposeTransform([
    torch.distributions.transforms.AffineTransform(
        loc=self._action_means, scale=self._action_mag, event_dim=mean.shape[-1]),
    torch.nn.Tanh(),
    torch.distributions.transforms.AffineTransform(
        loc=mean, scale=std, event_dim=mean.shape[-1]),
])
a_distribution = torch.distributions.transformed_distribution.TransformedDistribution(
    base_distribution, transforms)
Any solution?
In PyTorch, the base distribution class Distribution expects both a batch_shape and an event_shape parameter. Notice that the subclass TransformedDistribution does not take such parameters (src code): they are inferred from the base distribution provided on initialization (see here and here).
You already found AffineTransform and ComposeTransform. Keep in mind that you must stick to classes from torch.distributions.
This holds for torch.normal, which should be replaced with Normal; with this class, the shape is inferred from the provided loc and scale tensors.
It also holds for nn.Tanh, which should be replaced with TanhTransform.
Here is a minimal example using your transformation pipeline:
Imports:
import torch
from torch.distributions.normal import Normal
from torch.distributions import transforms as tT
from torch.distributions.transformed_distribution import TransformedDistribution
Parameters:
mean = torch.rand(2, 2)
std = 1
_action_means, _action_mag = 0, 1
event_dim = mean.shape[-1]
Distribution definition:
a_distribution = TransformedDistribution(
    base_distribution=Normal(loc=torch.full_like(mean, 0),
                             scale=torch.full_like(mean, 1)),
    transforms=tT.ComposeTransform([
        tT.AffineTransform(loc=_action_means, scale=_action_mag, event_dim=event_dim),
        tT.TanhTransform(),
        tT.AffineTransform(loc=mean, scale=std, event_dim=event_dim)]))
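As a quick check (a sketch added here, not part of the original answer), you can inspect the inferred shapes and draw a sample. The batch/event split comes from the base distribution and the transforms' event_dim rather than from explicit arguments, which answers the batch_shape question:

# Shapes are inferred, not passed explicitly as in TFP
print(a_distribution.batch_shape, a_distribution.event_shape)
sample = a_distribution.sample()
print(sample.shape)  # combined batch + event shape, here torch.Size([2, 2])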

How to apply word2vec to pandas data

I am trying to use word2vec.
I saved my preprocessed data as a pandas DataFrame, and I want to apply the word2vec algorithm to it.
This is my data.
http://naver.me/IFjLAHld
This is my code.
from gensim.models.word2vec import Word2Vec
import pandas as pd
import numpy as np

df = pd.read_excel('re_nlp0820.xlsx')
model = Word2Vec(df['nlp'],
                 sg=1,
                 window=3,
                 min_count=1,
                 workers=4,
                 iter=1)
model.init_sims(replace=True)
model_result1 = model.wv.most_similar('국민', topn=20)
print(model_result1)

Please help me.
First, you need to convert the data you are passing to the Word2Vec instance into a nested list, where each inner list contains the tokenized form of one text. You can do so like this:
from gensim.models.word2vec import Word2Vec
import pandas as pd
import numpy as np
import nltk

df = pd.read_excel('re_nlp0820.xlsx')
nlp = [nltk.word_tokenize(i) for i in df['nlp']]
model = Word2Vec(nlp,
                 sg=1,
                 window=3,
                 min_count=1,
                 workers=4,
                 iter=1)
model.init_sims(replace=True)
model_result1 = model.wv.most_similar('국민', topn=20)
print(model_result1)
Gensim's Word2Vec needs as its training corpus a re-iterable sequence where each item is a list of words.
Your df['nlp'] is probably just a sequence of strings, so it is not in the right format: you should make sure each of its items is broken into a Python list that has your desired words as individual strings.
(Separately: min_count=1 is almost always a bad idea with this algorithm, which gives better results when rare words with few usage examples are discarded. And you shouldn't need to call .init_sims() at all.)
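A minimal sketch of the corpus format this answer describes, using a plain whitespace split instead of nltk and min_count left at a more typical value of 5, per the advice above (the iter parameter is omitted so the snippet works in both gensim 3.x and 4.x):

# Each item is a list of word strings; the list-of-lists is re-iterable
corpus = [str(text).split() for text in df['nlp']]
model = Word2Vec(corpus, sg=1, window=3, min_count=5, workers=4)
print(model.wv.most_similar('국민', topn=20))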

TensorFlow: Failed to convert a NumPy array to a Tensor (Unsupported object type int)

I am practicing on this Kaggle dataset on car price prediction (https://www.kaggle.com/hellbuoy/car-price-prediction). I don't know why I am receiving this error.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tensorflow.keras import layers, models

cars_data = pd.read_csv('/content/CarPrice_Assignment.csv')
cars_data.head()
cars_data.info()
cars_data.describe()

# Train/test/validation split by row ranges
train_data = cars_data.iloc[:103]
train_data = train_data.drop('price', axis=1)
train_data = np.asarray(train_data.values)
train_targets = cars_data.price.iloc[:103]
train_targets = np.asarray(train_targets)

test_data = cars_data.iloc[103:165]
test_data = test_data.drop('price', axis=1)
test_data = np.asarray(test_data.values)
test_targets = cars_data.price.iloc[103:165]
test_targets = np.asarray(test_targets)

val_data = cars_data.iloc[165:]
val_data = val_data.drop('price', axis=1)
val_data = np.asarray(val_data.values)
val_targets = cars_data.price.iloc[165:]
val_targets = np.asarray(val_targets)

model = models.Sequential()
model.add(layers.Dense(10, activation='relu', input_shape=(25,)))
model.add(layers.Dense(8, activation='relu'))
model.add(layers.Dense(6, activation='relu'))
model.add(layers.Dense(1))
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
model.fit(train_data, train_targets, epochs=20, batch_size=1)
There are two things you need to address in your code.
Categorical variables
By printing the value of train_data, I can see there are still some categorical variables in the form of strings. TensorFlow cannot process that kind of data directly, so you need to deal with the categorical variables first. See the answer to Best way to deal with categorical variables in regression problem - python as a starting point.
Target shape
Your train_targets has shape (103,), which means it is a 1D array. The correct target shape for TensorFlow (for a simple regression problem) is (103, 1). Reshape the values like this:
train_targets = np.asarray(train_targets).reshape(-1, 1)
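A hedged sketch of the first fix (one-hot encoding with pandas is one simple option; the exact encoding choice is up to you, and input_shape must then match the expanded column count):

import numpy as np
import pandas as pd

cars_data = pd.read_csv('/content/CarPrice_Assignment.csv')
# One-hot encode the string-valued columns so every feature is numeric
features = pd.get_dummies(cars_data.drop('price', axis=1)).astype('float32')
train_data = np.asarray(features.iloc[:103])
train_targets = np.asarray(cars_data.price.iloc[:103], dtype='float32').reshape(-1, 1)
# In the model, input_shape=(features.shape[1],) replaces the hard-coded (25,)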

Any demo for a statsmodels regression model in a cross-validation setting?

I want to fit a multiple linear regression model to my data frame; below is demo code. Everywhere on the internet I only found code where the target variable is regressed on the other variables over the complete data set. I did not find a k-fold cross-validation setup, preferably as a direct function or parameter of the ols method.
I did, however, find a cross-validation setup in the regression methods of the scikit-learn library, but I wonder why there is nothing similar in statsmodels.
import statsmodels.formula.api as sm
import pandas as pd

# Assuming the data was in .csv format and read using pandas into a DataFrame df
mod = sm.ols(formula='Target_Var ~ var_1 + var_2 + var_3', data=df)
res = mod.fit()
print(res.summary())
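A hedged sketch of one way to do this: since ols has no cross-validation parameter (as the question notes), scikit-learn's KFold can supply the train/test splits while statsmodels does the fitting. The column names Target_Var and var_1..var_3 follow the demo formula above:

import numpy as np
import statsmodels.formula.api as smf
from sklearn.model_selection import KFold

kf = KFold(n_splits=5, shuffle=True, random_state=0)
rmse_per_fold = []
for train_idx, test_idx in kf.split(df):
    train, test = df.iloc[train_idx], df.iloc[test_idx]
    # Fit on the training fold, evaluate on the held-out fold
    res = smf.ols('Target_Var ~ var_1 + var_2 + var_3', data=train).fit()
    pred = res.predict(test)
    rmse_per_fold.append(np.sqrt(np.mean((pred - test['Target_Var']) ** 2)))
print(np.mean(rmse_per_fold))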