Error message when learning a simple model with mxnet in R

I am getting this error message:
Error in mx.model.select.layout.train(X, y) : Cannot auto select
array.layout, please specify this parameter
I am trying to create a simple model to understand how things work.
# create data
train.x <- data.matrix(sample(1:100, 1000, replace = TRUE))
colnames(train.x) <- "X"
train.y <- data.matrix(train.x^2)
colnames(train.y) <- "Y"
test.x <- data.matrix(sample(1:100, 50, replace = TRUE))
colnames(test.x) <- "X"
test.y <- data.matrix(test.x^2)
colnames(test.y) <- "Y"
# fit model
mx.set.seed(0)
model <- mx.mlp(train.x, train.y, hidden_node = 10, out_node = 2,
                out_activation = "softmax",
                num.round = 20, array.batch.size = 15,
                learning.rate = 0.07, momentum = 0.9,
                eval.metric = mx.metric.accuracy)

It looks like you're trying to learn an approximation of the function x -> x^2. Your output activation shouldn't then be "softmax"; that is more appropriate for a classification problem. You could use MSE (mean squared error) or another loss function more suited to a regression problem, with a single output node. The error itself is simply asking you to specify the array.layout parameter explicitly (for a matrix with one example per row, that is array.layout = "rowmajor").
You might also find this MXNet/R tutorial helpful:
https://mxnet.incubator.apache.org/tutorials/r/fiveMinutesNeuralNetwork.html
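A minimal regression sketch along the lines of that tutorial, reusing train.x and train.y from the question (the layer sizes and learning rate here are illustrative, not tuned):
library(mxnet)

# network: one hidden layer, a single linear output, squared-error loss
data <- mx.symbol.Variable("data")
fc1  <- mx.symbol.FullyConnected(data, num_hidden = 10)
act1 <- mx.symbol.Activation(fc1, act_type = "tanh")
fc2  <- mx.symbol.FullyConnected(act1, num_hidden = 1)
lro  <- mx.symbol.LinearRegressionOutput(fc2)

mx.set.seed(0)
model <- mx.model.FeedForward.create(
  lro, X = train.x, y = as.numeric(train.y),
  ctx = mx.cpu(), num.round = 20, array.batch.size = 15,
  learning.rate = 2e-6, momentum = 0.9,
  eval.metric = mx.metric.rmse,
  array.layout = "rowmajor"  # the parameter the error message asks for
)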

Related

Getting "ValueError: data type <class 'numpy.object_'> not inexact" error while trying to linear fit a dataset using uncertainities

I am very new to Python, so I am struggling a lot to do what I want to do, so I figured I could ask.
I have an Excel sheet with data columns like period, pdot, flux values, etc. There are also error columns associated with these. I want to plot these in Python, then do a linear fit that takes the errors into account, and obtain values like the standard deviation or p-value to judge the goodness of the fit. Then, using this fit, I will try to predict values based on a missing parameter. I managed to do it without the errors, but now I am trying to do it while propagating my errors, and that is causing me some trouble.
My working code that does not take errors into consideration is as follows:
import math
import numpy as np
import matplotlib.pyplot as plt

dist_array1 = np.multiply(3.08567758128*10**21, dist_array)  # kpc to cm
dist_array2 = np.multiply(dist_array1, dist_array1)
e1 = np.multiply(4*math.pi, dist_array2)
L_gamma = np.multiply(e1, flux_array)
Gamma_Eff = np.divide(L_gamma, edot_array)
Tau = np.divide(period_array, pdot_array)
constant = 2.94*10**8
t1 = np.power(period_array, -5)
t2 = np.multiply(t1, pdot_array)
t3 = np.power(t2, 1/2)
B_LC = np.multiply(constant, t3)
c1 = np.multiply(10**15, pdot_array)
c2 = np.log(c1)
c3 = np.log(period_array)
c4 = 1 - np.multiply(11/7, c3) + np.multiply(4/7, c2)
c5 = 3.56 - c3 - c2
Zeta1 = 1 + np.divide(c4, c5)
c6 = 0.8 - np.multiply(2/7, c3) + np.multiply(2/7, c2)
Zeta2 = 1 + np.divide(c6, 1.3)
c8 = 0.6 - np.multiply(11/14, c3) + np.multiply(2/7, c2)
Zeta3 = 1 + np.divide(c8, 1.3)
# Here I defined the variables I will work with; now I will try to fit them.
x1 = np.log(period_array)
y1 = np.log(Gamma_Eff)
coef1, V1 = np.polyfit(x1, y1, 1, cov=True)
poly1d_fn1 = np.poly1d(coef1)
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(30, 10))
fig.suptitle('Figure 1')
ax1.plot(x1, y1, 'yo', x1, poly1d_fn1(x1), '-k')
x2 = np.log(Tau)
coef2, V2 = np.polyfit(x2, y1, 1, cov=True)
poly1d_fn2 = np.poly1d(coef2)
ax2.plot(x2, y1, 'yo', x2, poly1d_fn2(x2), '-k')
x3 = np.log(B_LC)
coef3, V3 = np.polyfit(x3, y1, 1, cov=True)
poly1d_fn3 = np.poly1d(coef3)
ax3.plot(x3, y1, 'yo', x3, poly1d_fn3(x3), '-k')
ax1.set(xlabel='log P (s)', ylabel='log η')
ax2.set(xlabel='log τ (yr)', ylabel='log η')
ax3.set(xlabel='log B_LC (G)', ylabel='log η')
# And then obtain the uncertainties
sigma_period_1 = np.sqrt(V1[0][0])
sigma_period_2 = np.sqrt(V1[1][1])
sigma_Tau_1 = np.sqrt(V2[0][0])
sigma_Tau_2 = np.sqrt(V2[1][1])
sigma_B_LC_1 = np.sqrt(V3[0][0])
sigma_B_LC_2 = np.sqrt(V3[1][1])
Now this works well and I can fit the data. The problem is that I cannot get things like the p-value or standard deviation from the fit; I think I need to use statsmodels for that. I also need to put the errors into the formulas to be more accurate. What I have changed so far to achieve this is as follows:
period_array = unumpy.uarray(period_array, perioderr_array)  # combine each value with its error so uncertainties propagate
pdot_array = unumpy.uarray(pdot_array, pdoterr_array)  # same for the second quantity
flux_array = unumpy.uarray(flux_array, flux_err_array)  # and the third
c2 = unumpy.log(c1)  # unumpy.log instead of np.log, which raises an error on uarrays
c3 = unumpy.log(period_array)  # same here
Then I tried to fit using polyfit, to see if it works; after that I will try to get the same fit with statsmodels.
x1 = unumpy.log(period_array)  # the log issue again
y1 = unumpy.log(Gamma_Eff)
coef1, V1 = np.polyfit(x1, y1, 1, cov=True)
The last line gives me the error "ValueError: data type <class 'numpy.object_'> not inexact". I did some digging and understood the problem as "my values are not floats, and this is why I am getting the error, so I need to turn them into floats". To do this I tried many things, including x = list(x), but to no avail.
So what am I doing wrong?
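A likely fix, sketched under the assumption that x1 and y1 are the uarrays built above: np.polyfit only understands plain float arrays, so strip the uarray into nominal values and standard deviations with unumpy and pass the errors in as weights; statsmodels can then report p-values and standard errors for the same weighted fit:
import numpy as np
import statsmodels.api as sm
from uncertainties import unumpy

# polyfit cannot handle object arrays of ufloats, so separate the
# nominal values from the propagated standard deviations
x1_nom = unumpy.nominal_values(x1)
y1_nom = unumpy.nominal_values(y1)
y1_err = unumpy.std_devs(y1)

# weighted least squares; polyfit expects weights w ~ 1/sigma
coef1, V1 = np.polyfit(x1_nom, y1_nom, 1, w=1.0/y1_err, cov=True)

# the same fit in statsmodels, which also gives goodness-of-fit statistics
X = sm.add_constant(x1_nom)
res = sm.WLS(y1_nom, X, weights=1.0/y1_err**2).fit()
print(res.params)   # intercept and slope
print(res.bse)      # standard errors
print(res.pvalues)  # p-values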

Error in UseMethod("filter") : no applicable method for 'filter' applied to an object of class "NULL"

I am using the tidymodels package in R to study a multi-class classification problem. I have trained several models using workflow sets, and in my recipe I added a step to replace NA values with a constant. The models that I included in the workflow are:
mlp <-
  mlp(hidden_units = tune(), penalty = tune(), epochs = tune()) %>%
  set_engine('nnet') %>%
  set_mode('classification')

multinom <-
  multinom_reg(penalty = tune(), mixture = tune()) %>%
  set_engine('glmnet')

rand_forest <-
  rand_forest(mtry = tune(), min_n = tune()) %>%
  set_engine('ranger') %>%
  set_mode('classification')

tabnet <-
  tabnet(mode = "classification", batch_size = 126, virtual_batch_size = 128,
         epochs = 1, num_steps = tune(), learn_rate = tune()) %>%
  set_engine("torch", verbose = TRUE)
For some models I tried a recipe with SMOTE ("themis" package), PCA, and normalisation (all in the same workflow by adding the steps to the original recipe). Training and testing went pretty well, so I tried an ensemble of these models (using the package "stacks"):
tidymodels_prefer()

stack1 <-
  stacks() %>%
  add_candidates(res_1)

set.seed(2002)
res1_stack <-
  stack1 %>%
  blend_predictions()

ens <- fit_members(res1_stack)
When I run this last operation (fit_members), I receive this error:
Error in UseMethod("filter") :
no applicable method for 'filter' applied to an object of class "NULL"
I figured out, reading this and this on GitHub, that it was caused by the "constantimpute" step added to the recipe. However, I don't know exactly how to fix it. Can someone help me?
Thank you very much!
Before using the filter function, make sure the table you want to filter is actually loaded. Often we only have the View() function applied, and this prevents the table from being loaded into memory for use.
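For illustration, a minimal sketch (not the asker's data): the error appears whenever dplyr's filter() receives NULL instead of a data frame, for example because an earlier step silently returned nothing:
library(dplyr)

df <- NULL  # e.g. a failed read, or an object that was never created
filter(df, x > 1)
#> Error in UseMethod("filter") :
#>   no applicable method for 'filter' applied to an object of class "NULL"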

How to expand the output of GlobalAveragePooling2D() to be suitable for BiSeNet?

I am trying to build the BiSeNet shown in the figure at https://github.com/Blaizzy/BiSeNet-Implementation.
When I want to use GlobalAveragePooling2D() in Keras (TF backend) to finish the Attention Refinement Module in figure (b), I find that the output shape of GlobalAveragePooling2D() is not suitable for the next convolution.
I checked out many implementations of BiSeNet on GitHub, but most of them use AveragePooling2D(size=(1,1)) instead, which is complete nonsense.
So I defined a lambda layer to do what I want (the code is shown below). The lambda layer works, but it seems very ugly:
def samesize_globalAveragePooling2D(inputtensor):
    # inputtensor shape: (?, 28, 28, 32)
    x = GlobalAveragePooling2D()(inputtensor)     # x shape: (?, 32)
    divide = tf.divide(inputtensor, inputtensor)  # all-ones tensor, shape (?, 28, 28, 32)
    x2 = x * divide                               # x2 shape: (?, 28, 28, 32)
    return x2

global_pool = Lambda(function=samesize_globalAveragePooling2D)(conv_0)
I hope to get a suggestion to make this lambda more graceful.
Thanks!
This can be done with a Lambda layer around tf.reduce_mean, keeping the spatial dimensions:
tf.keras.layers.Lambda(lambda x: tf.reduce_mean(x, axis=[1, 2], keepdims=True))
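A minimal sketch of how that layer slots in, using the shapes from the question (the broadcast multiply via a second Lambda is an illustrative choice, not part of the original answer):
import tensorflow as tf

# dummy input matching the question's shapes: (batch, 28, 28, 32)
inputs = tf.keras.Input(shape=(28, 28, 32))

# global average pooling that keeps the spatial dims as 1x1, so the
# (batch, 1, 1, 32) result broadcasts against the input
gap = tf.keras.layers.Lambda(
    lambda x: tf.reduce_mean(x, axis=[1, 2], keepdims=True))(inputs)

# broadcast multiply, as in the attention refinement module
scaled = tf.keras.layers.Lambda(lambda t: t[0] * t[1])([inputs, gap])

model = tf.keras.Model(inputs, scaled)
print(model.output_shape)  # (None, 28, 28, 32)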

How to simulate from priors with pymc3

I'd like to simulate y from the prior (not from the posterior) with pymc3.
I first defined the model:
import pymc3 as pm

with pm.Model() as m:
    mu = pm.Normal('mu', mu=0, sd=10)
    sigma = pm.Uniform('sigma', lower=0, upper=10)
    y = pm.Normal('y', mu=mu, sd=sigma)
    trace = pm.sample(1000, tune=1000)
Then I tried to get 10 simulated values of y from the model with:
y_pred = pm.sample_ppc(trace, 10, m, size=10)
But the result comes out empty. I searched through the documentation but didn't find a relevant example. Is it possible to do this with pymc3?
The trace contains samples from the prior when no observed data is associated with the model definition. However, this can sometimes fail. We are currently working on a sample_prior function that will make this process easier and more straightforward: https://github.com/pymc-devs/pymc3/pull/2876
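For reference, a sketch of what this looks like once that work landed (it assumes PyMC3 >= 3.5, where the function from that PR was released as sample_prior_predictive):
import pymc3 as pm

with pm.Model() as m:
    mu = pm.Normal('mu', mu=0, sd=10)
    sigma = pm.Uniform('sigma', lower=0, upper=10)
    y = pm.Normal('y', mu=mu, sd=sigma)
    # draws mu, sigma and y directly from the priors; no MCMC trace needed
    prior = pm.sample_prior_predictive(samples=10)

print(prior['y'])  # 10 values of y simulated from the prior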

Unable to obtain moments using tensorflow

I want to calculate the moments of a vector x = np.random.normal(0, 1, [1, 500]). When I do mean, variance = tf.nn.moments(x, axes=[0]), it throws this error:
File "/tmp/venv/local/lib/python2.7/site-packages/tensorflow/python/ops/nn.py", line 830, in moments
y = math_ops.cast(x, dtypes.float32) if x.dtype == dtypes.float16 else x
TypeError: data type not understood
I am using tensorflow==0.11.0. What is the correct syntax?
As shown in the documentation for tf.nn.moments, the input x must be a Tensor, and the function returns the mean and variance (not the standard deviation).
You should use something like the following:
x = tf.placeholder("float", [None, 500])
mean, variance = tf.nn.moments(x, axes=[0])
sess = tf.Session()
sample_mean, sample_variance = sess.run([mean, variance],
                                        feed_dict={x: np.random.normal(0, 1, [1, 500])})
Note: this particular calculation does not make much sense, since there is only one data value along axis 0. You may want to either increase the shape to something like [32, 500], or more likely change the axes from [0] to [1].
Regardless, the calculation will complete without errors, although the computed variance will be 0, because the moments are taken along an axis with only one element.
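A sketch of the axes=[1] variant under the same graph-style API, which averages over the 500 samples in each row and therefore yields a non-trivial variance:
import numpy as np
import tensorflow as tf  # 0.x/1.x graph API, as in the question

x = tf.placeholder("float", [None, 500])
# moments across the 500 values of each row
mean, variance = tf.nn.moments(x, axes=[1])

with tf.Session() as sess:
    m, v = sess.run([mean, variance],
                    feed_dict={x: np.random.normal(0, 1, [1, 500])})
print(m, v)  # roughly 0 and 1 for standard-normal samples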