Linear regression output is given as separated and NaN values for variables

I'm trying to create the best linear regression model and my code is:
Daugialype2 <- lm(TNFBL~IL-4_konc_BL+ MCP-1_konc_BL+IL-8_konc_BL+TGF-β1_konc_BL)
summary(Daugialype2) #this code is working, I get a normal output
BUT
Then I want to introduce more variables to the model, e.g.
Daugialype2 <- lm(TNFBL~IL-4_konc_BL+ MCP-1_konc_BL+IL-8_konc_BL+TGF-β1_konc_BL+MiR_181_BL)
For unknown reasons, my output now looks like this (even though without the MiR_181_BL variable the output was fine):
[screenshot of the summary() output]
I don't know where the problem is; I don't get any error message. Could it be in the variable itself?
My variable looks like this (while the others have fewer digits after the decimal point):
[screenshot of the variable's values]
It's my very first model. Thank you for your answers!


CreateML data analysis stopped

When I attempt to train a CreateML model, I get the following screen after inputting my training data:
[screenshot: Create ML error message]
I am then unable to add my test data or train the model. Any ideas on what is going on here?
[EDIT] As mentioned in my comment below, this issue went away when I removed some of my training data. Any newcomers who are running into this issue are encouraged to try some of the solutions below and comment on whether it worked for them. I'm happy to accept an answer if it seems like it's working for people.
This happens when the first picture in the dataset has no label. If you place a labeled photo first, both in the dataset and in the Create ML JSON, you shouldn't get this issue.
Correct:
[{"annotations":[{"label":"Enemy","coordinates":{"y":156,"x":302,"width":26,"height":55}}],"imagefilename":"Enemy1.png"},{"annotations":[{"label":"Enemy","coordinates":{"y":213,"x":300,"width":69,"height":171}}],"imagefilename":"Enemy7.png"},{"annotations":
Incorrect:
[{"annotations":[],"imagefilename":"Enemy_v40.png"},{"annotations":[],"imagefilename":"Enemy_v41.png"},{"annotations":[],"imagefilename":"Enemy_v42.png"},{"annotations":
At a minimum, check for these two situations, which triggered the same generic error ("data analysis stopped") for me in the context of an object detection model (a small validation sketch follows these checks):
One or more of the image names referenced in annotations.json is incorrect (e.g. typo in image name)
The first entry in annotations.json has an empty annotations array (i.e. an image that does not contain any of the objects to be detected)
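As a quick way to run those two checks before training, a small Python sketch along these lines could validate the file (the dataset folder name and layout here are assumptions, not part of the original question):

import json
import os

dataset_dir = "dataset"  # assumed layout: dataset/annotations.json next to the image files
with open(os.path.join(dataset_dir, "annotations.json")) as fh:
    entries = json.load(fh)

# Check 1: every referenced image file exists (catches typos in image names).
for entry in entries:
    name = entry["imagefilename"]
    if not os.path.isfile(os.path.join(dataset_dir, name)):
        print("missing or misspelled image:", name)

# Check 2: the first entry must not have an empty annotations array.
if not entries[0]["annotations"]:
    print("first entry has no annotations:", entries[0]["imagefilename"])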
If you are using a random split or something similar, make sure it is parsing the data correctly. You can test this easily by debugging.
I suggest you check that your training data is consistent and that all entries have all the needed values. The error is likely in the section of data you removed.
That would cause the error Nate commented he is seeing when that pop-up appears.
Getting the log would be the next step in any further evaluation.

Octave: quadprog index issue?

I am running several files of code for an assignment, and I am trying to solve an optimization problem using the "quadprog" function from the "optim" package.
quadprog is supposed to solve an optimization problem in a certain standard form and takes the inputs H, f, A, b, Aeq, beq, lb, ub.
The issue I am having involves my f, which is a column vector of constants. To clarify, f looks like c*[1,1,1,1,1,1], where c is a constant. quadprog seems to run my code just fine for certain values of c, but gives me the error:
error: index (_,49): but object has size 2x2
error: called from
quadprog at line 351 column 32
for other values of c. So, for example, 1/3 works, but 1/2 doesn't. Does anyone have any experience with this?
Sorry for not providing a working example. My code runs over several files and I seem to only be having problems with a specific value set that is very big. Thanks!
You could try Octave's native qp function instead.
You mention that f is c*[1,1,1,1,1,1], but if c is a scalar, that is a row vector, not a column vector. It also seems very odd that a scalar value would produce a dimensions error...

How to get Elemwise{tanh,no_inplace}.0 value

I am using Theano for deep learning. How can I see the content of a variable like this: Elemwise{tanh,no_inplace}.0? It is the input data of the logistic layer.
Suppose your variable is called t. Then you can evaluate it by calling t.eval(). This may fail if input data are needed; in that case you need to supply them by providing a dictionary, like this: t.eval({input_var1: value1, input_var2: value2}). This is the ad-hoc way of evaluating a Theano expression.
The way it works in real programs is to create a function that takes the necessary inputs. For example, f = theano.function([input_var1, input_var2], t) yields a function that takes two input variables, calculates t from them, and returns the result.
Right now, you don't seem to be printing values but operations. The output Elemwise{tanh,no_inplace}.0 means that you have an element-wise tanh operation that is not done in place. You still need to create a function that takes input and executes your operation, then call that function and print the result. You can read more about this in the graph-structure part of the Theano tutorial.
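Putting both approaches together, a minimal sketch might look like this (the symbolic variable below is illustrative, not the asker's actual network):

import numpy as np
import theano
import theano.tensor as T

x = T.matrix('x')   # symbolic input
t = T.tanh(x)       # printing t shows something like Elemwise{tanh,no_inplace}.0

# Ad-hoc evaluation: supply a value for every input the expression depends on.
print(t.eval({x: np.ones((2, 2), dtype=x.dtype)}))

# The usual approach: compile a function once, then call it with real data.
f = theano.function([x], t)
print(f(np.ones((2, 2), dtype=x.dtype)))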

Extracting Data from an Area file

I am trying to extract information at a specific location (lat, lon) from different satellite images. These images were given to me in the AREA format, and I cooked up a simple Jython script to extract temperature values.
While the script works, here is a small snippet from it that prints out the data value at a point.
from edu.wisc.ssec.mcidas import AreaFile as af
url="adde://localhost/imagedata?&PORT=8113&COMPRESS=gzip&USER=idv&PROJ=0& VERSION=1&DEBUG=false&TRACE=0&GROUP=FL&DESCRIPTOR=8712C574&BAND=2&LATLON=29.7276 -85.0274 E&PLACE=ULEFT&SIZE=1 1&UNIT=TEMP&MAG=1 1&SPAC=4&NAV=X&AUX=YES&DOC=X&DAY=2012002 2012002&TIME=&POS=0&TRACK=0"
a=af(url);
value=a.getData();
print value
array([[I, [array([I, [array('i', [2826, 2833, 2841, 2853])])])
So what does this mean?
Please excuse me if the question seems trivial; while I am comfortable with Python, I am really new to dealing with scientific data.
Note
Here is a link to the entire script.
After asking around, I found out that the AreaFile object returns data in multiples of four, so the very first value is what I am looking for.
Grabbing the value is as simple as :
ar[0][0][0]
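For context, tying that back to the snippet in the question (the placeholder url stands for the full ADDE request string shown above):

from edu.wisc.ssec.mcidas import AreaFile as af

url = "adde://localhost/imagedata?..."  # the full ADDE request from the question
a = af(url)
data = a.getData()         # nested int arrays, returned in multiples of four
print(data[0][0][0])       # the first value is the one the question is after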

Prediction using libsvm in Java

I'm using the libsvm (3.11) tool to implement SVM classification in my project (text classification using multi-agent). But every time I run prediction, it gives the same label to all the test documents, i.e. either +1 or -1, even though I'm using different kinds of data.
I'm using the following procedure for running libsvm classification on plain text documents:
-> There will be a set of training text documents
-> I'm converting these text documents into the libsvm-supported format using TF-IDF weights (I'm taking two folders that represent two classes; for the 1st folder I assign label -1 and for the 2nd folder +1, followed by the TF-IDF values for that text document; the input format is shown just after this list)
-> After that I put that bag of words into one plain text document, and then, using those words, I generate the test document vector with some label (I'm taking only one test document, so the IDF will always be 1 and there will be only one vector; I hope the label doesn't matter)
-> After that I apply the libsvm functions svm_train and svm_predict with the default options
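For reference, each line in the libsvm input format is a label followed by ascending index:value pairs, so a tiny two-document training file might look like this (the weights below are made up for illustration):

-1 1:0.42 3:0.17 8:0.05
+1 2:0.31 5:0.60 8:0.12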
Am I following the correct procedure? If anything in my procedure is wrong, please feel free to tell me; it would really help me.
And why is libsvm always giving only one label as the result? Is it a fault in my procedure, or a problem with the tool?
Thanks in advance.
Why are you using a new criterion to make test documents? The testing and training document sets should both be derived from your original set of "training text documents". I put these in quotes because you could take a subset of them and use it for testing. Ultimately, make sure your training and testing text document sets are distinct and both come from the original set.
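As a rough sketch of that point, here is the same train/predict flow in libsvm's bundled Python interface (svmutil), which mirrors the Java API; the tiny sparse vectors and the split are made up purely for illustration:

from svmutil import svm_train, svm_predict

# One labeled pool: -1 for documents from the first folder, +1 for the second.
labels  = [-1, -1, -1, +1, +1, +1]
vectors = [{1: 0.9, 2: 0.1}, {1: 0.8, 3: 0.2}, {1: 0.7, 4: 0.1},
           {2: 0.6, 5: 0.8}, {3: 0.5, 5: 0.9}, {4: 0.4, 5: 0.7}]

# Hold one document of each class out of the SAME pool for testing, instead of
# building a separate test vector with its own IDF statistics.
train_y = labels[:2]  + labels[3:5]
train_x = vectors[:2] + vectors[3:5]
test_y  = [labels[2],  labels[5]]
test_x  = [vectors[2], vectors[5]]

model = svm_train(train_y, train_x, '-t 0 -c 1')            # linear kernel
predicted, accuracy, values = svm_predict(test_y, test_x, model)
print(predicted)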