Where can I find the coherence function in R?

Excuse me for being basic, but I want to use the coherence function I found at this link to evaluate my latent Dirichlet allocation topics. It isn't working with text2vec, and I can't tell which library it is in, if it isn't that one.
coherence(x, tcm,
          metrics = c("mean_logratio", "mean_pmi", "mean_npmi", "mean_difference",
                      "mean_npmi_cosim", "mean_npmi_cosim2"),
          smooth = 1e-12, n_doc_tcm = -1)


Generating a Plot of CV vs. Degrees of Freedom

I have a dataset (n=298), and I am currently working on a generalized additive model (GAM) for it. There are three predictor variables and one response variable. I used this code to generate the GAM and perform leave-one-out cross-validation:
library(caret)
ctrl <- trainControl(method = "LOOCV")
model <- train(response ~ predictor1 + predictor2 + predictor3,
               data = data[2:5], method = "gam", trControl = ctrl)
While I think this worked in generating the model and performing cross validation, I'd like to graph the CV value over the degrees of freedom, similar to what is shown in the book image below. I'm not really sure how to go about this with my model as I am pretty new to using R.
[graph example from the book]
I tried to use plot(model), but it just outputs the graph below, which isn't very helpful and certainly isn't what I'm looking for. Any advice on how to approach this would be greatly appreciated. Thanks.
[plot(model) output]

Multiple axis scales in Lets-Plot Kotlin

I'm learning some data-science-related topics and oh boy, this is a jungle of different libraries for everything 😅
Because of things, I went with Lets-Plot, which has a nice Kotlin API that I'm using combined with the Kotlin kernel for Jupyter notebooks.
Overall, things are going pretty well. Most tutorials and docs I see online use different libraries for plotting (e.g. Seaborn, Matplotlib, Plotly), so most of the time I have to do some reading of the Lets-Plot-Kotlin reference and trial and error until I find the equivalent code for my graphs.
Currently, I'm trying to graph the distribution of differences between two values. Overall, this looks pretty good. I can just do something like
(letsPlot(df)
+ geomHistogram { x = "some-column" }
).show()
which gives a nice graph
It would be interesting to see the density estimator as well, geomDensity to the rescue!
(letsPlot(df)
+ geomDensity(color = "red") { x = "some-column" }
).show()
Nice! Now let's look at them both together
(letsPlot(df)
+ geomDensity(color = "red") { x = "some-column" }
+ geomHistogram() { x = "some-column" }
).show()
As you can see, there's a small red line at the bottom (the geomDensity!). The problem here, I would say, is that both layers use the same Y scale: the histogram works with values around 0-20 and the density with 0-0.02, so when they're plotted together the density is just a line at the bottom.
Is there any way to add several layers to the same plot, each using its own scale? I've read some blog posts claiming that you should not go for it (this seems to be pretty widely accepted by the community).
My target is to achieve something similar to what you can do with Seaborn by doing
plt.figure(figsize=(10,4),dpi=200)
sns.histplot(data=df,x='some_column',kde=True,bins=25)
(Yes, I know I took the Lets-Plot screenshot without the bins configured. Not relevant, I'd say ¯\_(ツ)_/¯)
Maybe I'm just approaching the problem with a mindset I shouldn't? As mentioned, I'm still learning, so every alternative is highly welcome 😃
Just, please, don't go with "switch to Python". I'm exploring and I'd prefer to go one topic at a time.
In order for the histogram and density layers to share the same y-scale, you need to map the variable "..density.." to the aesthetic "y" in the histogram layer (by default the histogram maps "..count.." to "y").
You will find an example of it in cell [4] in this notebook: https://nbviewer.org/github/JetBrains/lets-plot-kotlin/blob/master/docs/examples/jupyter-notebooks/distributions.ipynb
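For illustration of the mapping only, here is a minimal sketch written with the library's Python API, which mirrors the Kotlin names ("..density.." is the same stat variable in both flavours); the DataFrame and column name are stand-ins for the question's df and "some-column":
import numpy as np
import pandas as pd
from lets_plot import LetsPlot, ggplot, aes, geom_histogram, geom_density

LetsPlot.setup_html()
df = pd.DataFrame({'some-column': np.random.normal(size=500)})

(ggplot(df, aes(x='some-column'))
 + geom_histogram(aes(y='..density..'), bins=25)  # histogram now drawn on the density scale
 + geom_density(color='red'))                     # so the KDE is visible on top of the bars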
BTW, many of the pages in the Lets-Plot Kotlin API reference include links to demo notebooks in their "Examples" section: see geomHistogram().
And of course you can find a lot of info online about the R ggplot2 package, which is largely applicable to Lets-Plot as well. For example: Histogram with kernel density estimation.
Finally :), calling show() is not necessary: the Jupyter Kotlin kernel will render the plot automatically if the plot expression is the last one in the cell, which is often the case.

tf.function equivalent in PyTorch

I'm a beginner in PyTorch, and I have some functions that need to be implemented in the network.
My question is: is there anything like tf.function, or should I write a class (an nn.Module) with Variables?
For example, let X be a 10x2 matrix. In pseudo-code:
a = Variable(1.0)
b = Variable(1.0)
Y = a*X[:,0]**2 + b*X[:,1]
In PyTorch you don't need things like tf.function; you just write normal Python code (because of the dynamic graph).
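For example, here is a minimal sketch of the pseudo-code above in plain PyTorch (the shapes and the final backward call are just my illustration):
import torch

X = torch.randn(10, 2)                     # the 10x2 matrix from the example
a = torch.nn.Parameter(torch.tensor(1.0))  # learnable scalar, plays the role of Variable(1.0)
b = torch.nn.Parameter(torch.tensor(1.0))

# The "function" is just an ordinary Python expression; autograd records it on the fly.
Y = a * X[:, 0] ** 2 + b * X[:, 1]

Y.sum().backward()                         # gradients w.r.t. a and b are now available
print(a.grad, b.grad)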
Please give a more detailed example (with code) of what you're trying to do if the above doesn't answer your question.

Reinforcement learning A3C with multiple independent outputs

I am attempting to modify and implement Google's pattern for the Asynchronous Advantage Actor-Critic (A3C) model. There are plenty of examples online that have gotten me started, but I am running into issues attempting to expand the samples.
All of the examples I can find focus on Pong, which has a state-based output of left, right, or stay still. What I am trying to expand this to is a system that also has a separate on/off output. In the context of Pong, it would be a boost to your speed.
The code I am basing mine on can be found here. It is playing Doom, but it still has the same left and right, plus a fire button instead of stay still. I am looking at how I could modify this code so that fire is an action independent of movement.
I know I can easily add another separate output from the model so that the outputs would look something like this:
self.output1 = slim.fully_connected(rnn_out, a_size,
                                    activation_fn=tf.nn.softmax,
                                    weights_initializer=normalized_columns_initializer(0.01),
                                    biases_initializer=None)
self.output2 = slim.fully_connected(rnn_out, 1,
                                    activation_fn=tf.nn.sigmoid,
                                    weights_initializer=normalized_columns_initializer(0.01),
                                    biases_initializer=None)
The thing I am struggling with is how I then have to modify the value output and redefine the loss function. The value is still tied to the combination of the two outputs. Or is there a separate value output for each of the independent outputs? I feel like there should still be only one value output, but I am unsure how I then use that one value and modify the loss function to take this into account.
I was thinking of adding a separate term to the loss function so that the calculation would look something like this:
self.actions_1 = tf.placeholder(shape=[None], dtype=tf.int32)
self.actions_2 = tf.placeholder(shape=[None], dtype=tf.float32)
self.actions_onehot = tf.one_hot(self.actions_1, a_size, dtype=tf.float32)
self.target_v = tf.placeholder(shape=[None], dtype=tf.float32)
self.advantages = tf.placeholder(shape=[None], dtype=tf.float32)
self.responsible_outputs = tf.reduce_sum(self.output1 * self.actions_onehot, [1])
self.responsible_outputs_2 = tf.reduce_sum(self.output2 * self.actions_2, [1])
# Loss functions
self.value_loss = 0.5 * tf.reduce_sum(tf.square(self.target_v - tf.reshape(self.value, [-1])))
self.entropy = -tf.reduce_sum(self.output1 * tf.log(self.output1))
self.policy_loss = (-tf.reduce_sum(tf.log(self.responsible_outputs) * self.advantages)
                    - tf.reduce_sum(tf.log(self.responsible_outputs_2) * self.advantages))
self.loss = 0.5 * self.value_loss + self.policy_loss - self.entropy * 0.01
I am looking to know if I am on the right track here, or if there are resources or examples that I can expand off of.
First of all, the example you are mentioning doesn't need two output nodes: one output node with a continuous output value is enough to solve it. Also, you shouldn't use a placeholder for the advantage; use one for the discounted reward instead and compute the advantage from it:
self.discounted_reward = tf.placeholder(shape=[None],dtype=tf.float32)
self.advantages = self.discounted_reward - self.value
Also, while calculating the policy loss you have to use tf.stop_gradient to prevent the value node's gradient feedback from contributing to the policy learning.
self.policy_loss = -tf.reduce_sum(tf.log(self.responsible_outputs)*tf.stop_gradient(self.advantages))
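To tie the question and this answer together, here is a minimal, self-contained sketch (my assumptions, not the linked repo's code) of how two policy heads can share a single value output. It is written TF1-style to match the snippets above; the network heads are replaced by placeholders so only the loss wiring is visible, and treating the fire head as a Bernoulli policy is my choice:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

a_size = 3  # left / right / stay, as in the Pong example

# Stand-ins for what slim.fully_connected would produce in the real model:
output1 = tf.placeholder(shape=[None, a_size], dtype=tf.float32)  # softmax over movement actions
output2 = tf.placeholder(shape=[None], dtype=tf.float32)          # sigmoid "fire" probability
value = tf.placeholder(shape=[None], dtype=tf.float32)            # single shared value head

actions_1 = tf.placeholder(shape=[None], dtype=tf.int32)          # movement action taken
actions_2 = tf.placeholder(shape=[None], dtype=tf.float32)        # fire action taken (0 or 1)
discounted_reward = tf.placeholder(shape=[None], dtype=tf.float32)

# One advantage signal drives both policy heads; stop_gradient keeps the policy terms
# from pushing gradients back through the value estimate.
advantages = discounted_reward - value

actions_onehot = tf.one_hot(actions_1, a_size, dtype=tf.float32)
responsible_1 = tf.reduce_sum(output1 * actions_onehot, [1])
# Probability of the fire decision actually taken (Bernoulli policy: p if fired, 1 - p otherwise).
responsible_2 = actions_2 * output2 + (1.0 - actions_2) * (1.0 - output2)

value_loss = 0.5 * tf.reduce_sum(tf.square(discounted_reward - value))
entropy = -tf.reduce_sum(output1 * tf.log(output1 + 1e-8))
policy_loss = (-tf.reduce_sum(tf.log(responsible_1 + 1e-8) * tf.stop_gradient(advantages))
               - tf.reduce_sum(tf.log(responsible_2 + 1e-8) * tf.stop_gradient(advantages)))
loss = 0.5 * value_loss + policy_loss - 0.01 * entropy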

How to get scikit-learn to find a simple non-linear relationship

I have some data in a pandas DataFrame (although pandas is not the point of this question). As an experiment I made a column ZR as column Z divided by column R. As a first step using scikit-learn I wanted to see if I could predict ZR from the other columns (which should be possible, as I just made it from R and Z). My steps have been:
import numpy as np
from sklearn import preprocessing, linear_model

columns = ['R', 'T', 'V', 'X', 'Z']
for c in columns:
    results[c] = preprocessing.scale(results[c])
results['ZR'] = preprocessing.scale(results['ZR'])
labels = results["ZR"].values
features = results[columns].values
#print(labels)
#print(features)
regr = linear_model.LinearRegression()
regr.fit(features, labels)
print(regr.coef_)
print(np.mean((regr.predict(features) - labels)**2))
This gives
[ 0.36472515 -0.79579885 -0.16316067 0.67995378 0.59256197]
0.458552051342
The preprocessing seems wrong, as I think it destroys the Z/R relationship. What's the right way to preprocess in this situation?
Is there some way to get near 100% accuracy? Linear regression is the wrong tool, as the relationship is non-linear.
The five features are highly correlated in my data. Is non-negative least squares implemented in scikit-learn? (I can see it mentioned on the mailing list but not in the docs.) My aim would be to get as many coefficients set to zero as possible.
You should easily be able to get a decent fit using random forest regression, without any preprocessing, since it is a nonlinear method:
from sklearn.ensemble import RandomForestRegressor

model = RandomForestRegressor(n_estimators=10, max_features=2)
model.fit(features, labels)
You can play with the parameters to get better performance.
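As a quick sanity check of this claim, here is a hypothetical end-to-end run on synthetic data shaped like the question's (ZR is literally Z / R); the column names and sample size are taken from the question, everything else is made up:
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
columns = ['R', 'T', 'V', 'X', 'Z']
df = pd.DataFrame(rng.uniform(0.1, 1.0, size=(298, 5)), columns=columns)
df['ZR'] = df['Z'] / df['R']

model = RandomForestRegressor(n_estimators=100, max_features=2, random_state=0)
model.fit(df[columns], df['ZR'])
print(model.score(df[columns], df['ZR']))  # in-sample R^2, close to 1 without any scaling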
The solution is not as easy as that and can be heavily influenced by your data.
If your variables R and Z are bounded (for example 0 < R < 1 and -3 < Z < 2), then you should be able to get a good estimate of the output variable using a neural network.
Using a neural network you should be able to estimate the output even without preprocessing the data, using all the variables as input.
(Of course here you will have to solve a minimization problem.)
Sklearn does not implement neural networks, so you should use pybrain or fann.
If you want to preprocess the data in order to make the minimization problem easier, you can try to extract the right features from the predictor matrix.
I do not think there are a lot of tools for non-linear feature selection. I would try to estimate the important variables from your dataset, in this order:
1. Lasso (a sketch follows after this list)
2. Sparse PCA
3. Decision trees (you can actually use them for feature selection), but I would avoid this as much as possible
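For step 1, a minimal sketch of how Lasso can be used this way; the data here is a stand-in with the question's column layout and an arbitrary alpha, and coefficients driven to exactly zero mark variables the linear model is willing to drop:
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import scale

# Stand-in data shaped like the question's; replace with the real features/labels arrays.
rng = np.random.default_rng(1)
R, T, V, X, Z = (rng.uniform(0.1, 1.0, 298) for _ in range(5))
features = scale(np.column_stack([R, T, V, X, Z]))
labels = scale(Z / R)

lasso = Lasso(alpha=0.1)   # larger alpha pushes more coefficients to exactly zero
lasso.fit(features, labels)
print(dict(zip(['R', 'T', 'V', 'X', 'Z'], lasso.coef_.round(3))))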
If this is a toy problem, I would suggest you move towards something more standard.
You can find a lot of examples on Google.