Insert on Weka doesn't show me the file - dataframe

I wrote this in Notepad:
  a b c d e
1 1 1 1 0 0
2 1 1 0 1 0
3 1 0 1 0 1
4 0 1 1 1 0
Then I saved it as s.ar.
I open Weka and try to load it, but the file doesn't show up.

.ar is not a file extension that Weka recognizes.
If you meant to create an ARFF file, then please see the documentation for that file format.
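For reference, a minimal ARFF version of your data could look like this (a sketch that assumes your five columns are numeric 0/1 attributes named a through e, and that the leading 1-4 are just row numbers):

@relation s

@attribute a numeric
@attribute b numeric
@attribute c numeric
@attribute d numeric
@attribute e numeric

@data
1,1,1,0,0
1,1,0,1,0
1,0,1,0,1
0,1,1,1,0

Save it with a .arff extension and Weka's Explorer will list it by default.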
Have you thought about just using CSV (comma-separated values, .csv extension) instead? You could rewrite your data like this:
ID,a,b,c,d,e
1,1,1,1,0,0
2,1,1,0,1,0
3,1,0,1,0,1
4,0,1,1,1,0
Then you can load it in Weka's Explorer (after selecting CSV data files as the file type in the file chooser dialog).

Related

Validate record order in shell script

Input file
Header
1
2
3
1
2
3
Trailer
We need to validate that the first character of each line in the input file follows the sequence 1, 2, 3 (repeating) between the header and the trailer. Can you please help me with a script?
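One way to check this is a short awk sketch (assumptions: the file starts with "Header" and ends with "Trailer", the body's first characters must repeat 1, 2, 3, and the file is named input.txt):

awk '
  NR == 1 && $0 == "Header" { next }   # skip the header line
  $0 == "Trailer"           { exit }   # stop at the trailer; END still runs
  {
    expected = (n++ % 3) + 1           # expected first character cycles 1,2,3
    if (substr($0, 1, 1) != expected) {
      print "Out of sequence at line " NR ": " $0
      bad = 1
    }
  }
  END { exit bad }                     # non-zero exit status when the order is broken
' input.txt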

Score bic may be used with discrete data only

I have a data frame with all columns in discrete format. I apply the following code to generate a Bayesian network using the bnlearn package, but I get an error saying "score 'bic' may be used with discrete data only", even though my data are essentially discrete. Here is a sample of my data:
A B C
3 2 0
0 0 5
5 1 7
0 0 2
4 6 1
And this is what I run:
> test=hc(dat, score="bic")
Error in check.score(score, x) :
score 'bic' may be used with discrete data only.
I don't get why my data is not seen as discrete.
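A likely cause (an assumption, since the column types are not shown): bnlearn treats a variable as discrete only when it is stored as a factor, so integer or numeric columns count as continuous even if they hold whole numbers. Converting every column to a factor before calling hc() should clear the error:

library(bnlearn)

# dat is the data frame from the question; its columns are assumed to be integer
dat[] <- lapply(dat, as.factor)  # store each column as a factor, i.e. as discrete

test <- hc(dat, score = "bic")   # the discrete BIC score now applies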

How to proceed with my Spark / Scala project

I am new to Spark and Scala. I am working on a Scala project where I will access data from SQL Server.
There is a table in SQL Server that has info about clothes. itemCode is the primary key, and there are several attributes with Boolean 0/1 values - Designer, Exclusive, Handloom - as well as several other columns holding product attributes.
Code  Designer  Exclusive  Handloom
A     1         0          1
B     1         0          0
C     0         0          1
D     0         1          0
E     0         1          0
F     1         0          1
G     0         1          0
H     0         0          0
I     1         1          1
J     1         1          1
K     0         0          1
L     0         1          0
M     0         1          0
N     1         1          0
O     0         1          1
P     1         1          0
and the list continues.
I have to select a collection of 32 items out of 320 items that have AT LEAST:
8 Designer, 8 Exclusive, 8 Handloom, 8 WeddingStyle, 8 PartyStyle, 8 Silk, 8 Georgette
I had solved the problem with MS Excel Solver (it uses a gradient descent algorithm) by adding an extra column and using the SUMPRODUCT function between the added column and the required columns. So the problem was solved there, and it took around 1 minute 30 seconds.
The problem can also be solved by writing an SQL query with 32 joins (that many); for example, to select 6 items out of the 16 above with at least 4 Designer, 4 Exclusive, and 4 Handloom items, the query would be like the one in my post: MYSQL - Select rows fulfilling many count conditions.
In production I have to fetch 32 rows this way, so my question is how to proceed further with the project.
I am working in the Scala IDE for Eclipse and have added Spark MLlib there. I have fetched the data via JDBC, stored it in a dataframe, and then created a temporary table:
dataFrame.registerTempTable("Data")
There is an optimizer class in MLlib's optimization package that uses gradient descent (like Excel Solver does) to solve problems, but it is meant for machine learning and takes training data as input.
I am not able to understand how to proceed with my project. Can I use MLlib, or would a simpler Spark SQL query do? I need serious help.
I'd recommend using DataFrames (https://spark.apache.org/docs/1.3.0/sql-programming-guide.html#creating-dataframes) rather than MLlib.
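For the data-access part, that looks roughly like this (a sketch against the Spark 1.3 API from the linked guide; the JDBC URL and table name are placeholders):

// Spark 1.3 style JDBC load; the "url" and "dbtable" values are made up
val df = sqlContext.load("jdbc", Map(
  "url"     -> "jdbc:sqlserver://host:1433;databaseName=shop",
  "dbtable" -> "Clothes"))
df.registerTempTable("Data")
// pull just the flag columns needed for the selection step
val flags = sqlContext.sql("SELECT itemCode, Designer, Exclusive, Handloom FROM Data")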
I solved this problem through linear programming. I used the lpsolve library for Java in my Scala project, and it gives almost the same result as Excel Solver.
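For anyone taking the same route, here is a minimal sketch of that integer program through the lp_solve Java wrapper (the lpsolve.LpSolve class), called from Scala; the six-item table and the reduced counts are illustrative stand-ins for the production 320/32/8 numbers:

import lpsolve.LpSolve

object ItemPicker {
  def main(args: Array[String]): Unit = {
    // (Designer, Exclusive, Handloom) flags for items A..F from the table above
    val items = Array(
      Array(1.0, 0.0, 1.0), // A
      Array(1.0, 0.0, 0.0), // B
      Array(0.0, 0.0, 1.0), // C
      Array(0.0, 1.0, 0.0), // D
      Array(0.0, 1.0, 0.0), // E
      Array(1.0, 0.0, 1.0)) // F
    val n = items.length

    val lp = LpSolve.makeLp(0, n)                 // one 0/1 decision variable per item
    (1 to n).foreach(j => lp.setBinary(j, true))

    // lp_solve coefficient arrays are 1-based: index 0 is ignored
    def row(attr: Int) = Array(0.0) ++ items.map(_(attr))

    lp.addConstraint(Array(0.0) ++ Array.fill(n)(1.0), LpSolve.EQ, 4) // pick 4 (32 in production)
    lp.addConstraint(row(0), LpSolve.GE, 2)       // at least 2 Designer  (8 in production)
    lp.addConstraint(row(1), LpSolve.GE, 2)       // at least 2 Exclusive
    lp.addConstraint(row(2), LpSolve.GE, 2)       // at least 2 Handloom

    if (lp.solve() == LpSolve.OPTIMAL) {
      val picked = lp.getPtrVariables.zipWithIndex
        .collect { case (v, i) if v > 0.5 => ('A' + i).toChar }
      println(picked.mkString(", "))              // one feasible pick, e.g. A, D, E, F
    }
    lp.deleteLp()
  }
}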

Load txt file, read values and sum

I am new to Octave and I would like to do the following:
1. Create a .txt file with 10 elements (10 values)
2. Load this file and read the values
3. Write a function to add all these values
4. Return the total
Any idea?
Create a text file called "in.txt" that looks something like this:
1 2 3 4 5 6 7 8 9 10
Then read it into Octave and sum the elements using something like this (assuming Octave's working directory is the same folder where you saved the file; use "pwd" to confirm):
x = load("in.txt"); sum(x)

Convert data into a specific format in Apache Pig

I want to convert data into a specific format in Apache Pig so that I can use a reporting tool on top of it.
For example:
10:00,abc
10:00,cde
10:01,abc
10:01,abc
10:02,def
10:03,efg
The output should be in the following format:
      abc cde def efg
10:00   1   1   0   0
10:01   2   0   0   0
10:02   0   0   1   0
The main problem here is that a value can occur multiple times in a row, depending on the different values available in the sample CSV file, up to 120 in total.
Any suggestions to tackle this are more than welcome.
Thanks
Gagan
Try something like the following:
A = load 'data' using PigStorage(',') as (key:chararray, value:chararray);
B = foreach A generate key,
        (value == 'abc' ? 1 : 0) as abc,
        (value == 'cde' ? 1 : 0) as cde,
        (value == 'def' ? 1 : 0) as def,
        (value == 'efg' ? 1 : 0) as efg;
C = group B by key;
D = foreach C generate group as key, SUM(B.abc) as abc, SUM(B.cde) as cde, SUM(B.def) as def, SUM(B.efg) as efg;
That should get you a count of the occurrences of each value for a particular key (SUM over the 0/1 flags does the counting here).
EDIT: I just noticed the "up to 120" part of the question. If the counts cannot go above 120, you could add the following (the counts are cast to chararray so that both branches of each conditional have the same type):
E = foreach D generate key, (abc > 120 ? 'OVER 120' : (chararray)abc) as abc,
        (cde > 120 ? 'OVER 120' : (chararray)cde) as cde, (def > 120 ? 'OVER 120' : (chararray)def) as def, (efg > 120 ? 'OVER 120' : (chararray)efg) as efg;