I am a bit new to this and I have what I believe is a basic question. For one year I have been sending the same email every two weeks. The variable I want to analyse is the language (English or French). I would like to know how significant the difference in click rate between English and French is. I already have all the data; I am just not sure which test is best for this. Any help?
Thanks,
Daniel
A t-test sounds like the most appropriate method to ascertain whether there is a statistically significant difference between the mean click rate for English-language emails and French-language emails. Depending on how your data is organised and collected, either a paired-samples t-test or an independent-samples t-test could be used. If the data does not meet parametric assumptions (i.e. is not normally distributed), you could use the Mann-Whitney U test in place of an independent-samples t-test, or the Wilcoxon signed-rank test in place of a paired-samples t-test. All four of these tests can be found under the Analyze menu in SPSS, with the t-tests under Compare Means and the non-parametric equivalents under Nonparametric Tests. This link gives some advice on t-tests in SPSS and how to interpret them. If you comment with a link to your data I can help further if necessary.
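If you want to double-check the outcome outside SPSS, here is a minimal sketch using Python and SciPy rather than SPSS itself; it assumes a hypothetical click_rates.csv with one click rate per send for each language (the file and column names are made up):

# Hedged sketch, not SPSS: the same four tests via SciPy.
# Assumes a hypothetical click_rates.csv with one row per send and
# columns "english_rate" and "french_rate".
import pandas as pd
from scipy import stats

df = pd.read_csv("click_rates.csv")
eng, fra = df["english_rate"], df["french_rate"]

# Paired tests, if each row is the same send in both languages.
print(stats.ttest_rel(eng, fra))       # paired-samples t-test
print(stats.wilcoxon(eng, fra))        # Wilcoxon signed-rank (non-parametric)

# Independent tests, if the two samples are unrelated sends.
print(stats.ttest_ind(eng, fra))       # independent-samples t-test
print(stats.mannwhitneyu(eng, fra))    # Mann-Whitney U (non-parametric)

# Quick normality check on the paired differences to decide which family to trust.
print(stats.shapiro(eng - fra))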
I am going to take part in the Instance Matching track of OAEI, and I need to convert my results to the Alignment Format. To achieve this, I have gone through the official tutorial (link: http://alignapi.gforge.inria.fr/tutorial/tutorial1/index.html).
But there are many differences between the method the tutorial teaches and what I want to do. In other words, I can't work out how to use the API.
This is my situation:
I have 2 RDF files (person11.rdf and person12.rdf respectively; the data link is http://oaei.ontologymatching.org/2010/im/index.html, the PR dataset), and each file has information about many persons. I want to find the coreferent entities, and the results must be printed in the Alignment Format. I can find the matches using SPARQL, but I don't know how to print them in the Alignment Format.
So, I have three questions:
First, if I want to generate an Alignment Format file, is the method taught in the tutorial the only way?
Second, can you share your method (code would be better) for generating the Alignment Format file? Maybe I am wrong from the beginning; can you give me some suggestions?
Third, if you have taken part in OAEI or know something about instance matching, can you give me some advice? I want to find the coreferent entities.
Thank you!
First question: I guess that the "mentioned method" is the one in tutorial1. It is not the appropriate one, since you have to write a program to output the Alignment format and that tutorial covers the command-line interface. In this case, you'd better look at http://alignapi.gforge.inria.fr/tutorial/tutorial2/index.html
Then, there are basically two ways to do this:
The advised one (for several reasons, and for participating in OAEI) is to follow these tutorials: create an empty alignment, create the correspondences from the results of your SPARQL query, and render it. Everything is covered by the tutorials except the part concerning your SPARQL queries. This assumes that you are programming in Java.
The non-advised solution (primarily non-advised because you will have to debug your own renderer) is to write, in any programming language you want, a program that outputs the format (which corresponds to what you cite).
Think about it: how would you expect the Alignment API to know the results of your SPARQL query? If you come up with a nice solution, contact the API developers; they may integrate it and others could benefit.
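For what it is worth, here is a rough Python sketch of that do-it-yourself route (it does not use the Alignment API): run a matching SPARQL query with rdflib and print the pairs in the Alignment format. The matching predicate, the file names and the RDF skeleton below are only illustrative and should be checked against the PR dataset ontologies and the official format specification.

# Sketch of the "write your own renderer" route, not the Alignment API.
# The matching predicate below is a placeholder; take the real property
# URIs from the PR dataset ontologies.
from rdflib import Graph

g = Graph()
g.parse("person11.rdf")
g.parse("person12.rdf")

query = """
SELECT ?e1 ?e2 WHERE {
  ?e1 <http://example.org/person#soc_sec_id> ?id .
  ?e2 <http://example.org/person#soc_sec_id> ?id .
  FILTER (?e1 != ?e2)
}
"""
pairs = [(str(row.e1), str(row.e2)) for row in g.query(query)]

header = """<?xml version='1.0' encoding='utf-8'?>
<rdf:RDF xmlns='http://knowledgeweb.semanticweb.org/heterogeneity/alignment'
         xmlns:rdf='http://www.w3.org/1999/02/22-rdf-syntax-ns#'
         xmlns:xsd='http://www.w3.org/2001/XMLSchema#'>
<Alignment>
  <xml>yes</xml>
  <level>0</level>
  <type>??</type>"""

cell = """  <map><Cell>
    <entity1 rdf:resource='{0}'/>
    <entity2 rdf:resource='{1}'/>
    <relation>=</relation>
    <measure rdf:datatype='http://www.w3.org/2001/XMLSchema#float'>1.0</measure>
  </Cell></map>"""

with open("result-alignment.rdf", "w") as out:
    out.write(header + "\n")
    for e1, e2 in pairs:
        out.write(cell.format(e1, e2) + "\n")
    out.write("</Alignment>\n</rdf:RDF>\n")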
Second question: I cannot do better than what is above.
Third question: too general. Read the OAEI results (http://oaei.ontologymatching.org) and look at the code of others.
Good luck!
I need a simple explanation of what an error-guessing test case is. Is it dangerous to use? I would appreciate an example.
Best regards,
Erica
Error guessing is documented here: http://en.wikipedia.org/wiki/Error_guessing
It's a name for something that's very common -- guessing where errors might occur based on your previous experience.
For example, suppose you have a routine that checks whether a value entered by a user at a terminal is a prime number.
You'd test the cases where errors tend to occur (a short sketch follows the list below):
Empty input
Values that are not integers (floating point, letters, etc)
Values that are boundary cases: 2, 3, 4
etc.
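To make it concrete, here is a small sketch in Python; parse_and_check_prime is a hypothetical stand-in for whatever your system actually does with the terminal input:

import math

def parse_and_check_prime(text):
    # Hypothetical routine standing in for the real system under test.
    n = int(text)                              # raises ValueError on empty/non-integer input
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

# Error-guessing cases: inputs where experience says defects tend to hide.
assert parse_and_check_prime("2") is True      # smallest prime (boundary)
assert parse_and_check_prime("3") is True
assert parse_and_check_prime("4") is False     # smallest composite above 2
assert parse_and_check_prime("1") is False     # classic off-by-one trap

for bad in ["", "3.5", "abc", " "]:            # empty, float, letters, whitespace
    try:
        parse_and_check_prime(bad)
        print("no error raised for:", repr(bad))
    except ValueError:
        pass                                   # expected: invalid input rejected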
I would assume that every tester/QA person would be asked questions like this during an interview. It gives you a chance to talk about what procedures you've used in the past during testing.
I think the method goes like this:
1. Do formal testing
2. Use knowledge gained during formal testing about how the system works to make a list of places where defects might be
3. Design tests to verify whether those defects exist.
By its nature this process is very ad-hoc and unstructured.
When learning a new programming language there are always a couple of traditional problems that are good for getting yourself moving. For example, Hello World and Fibonacci will show how to read input, print output and compute functions (the bread and butter that will solve basically everything), and while they are really simple, they are nontrivial enough to be worth your time (and there is always some fun to be had by calculating the factorial of a ridiculously large number in a language with bignums).
So now I'm trying to get to grips with some SQL system and all the textbook examples I can think of involve mind-numbingly boring tables like "Student" or "Employee". What nice alternate datasets could I use instead? I am looking for something that (in order of importance) ...
The data can be generated by a straightforward algorithm.
I don't want to have to enter things by hand.
I want to be able to easily increase the size of my tables to stress efficiency, etc.
Can be used to showcase as much stuff as possible. Selects, Joins, Indexing... You name it.
Can be used to get back some interesting results.
I can live with "boring" data manipulation if the data is real and has a use by itself, but I'd rather have something more interesting if I am creating the dataset from scratch.
In the worst case, I presume there is at least some sort of benchmark dataset out there that fits the first two criteria, and I would love to hear about that too.
The benchmark database in the Microsoft world is Northwind. One similar open source (EPL) one is Eclipse's Classic Models database.
You can't autogenerate either as far as I know.
However, Northwind "imports and exports specialty foods from around the world", while Classic Models sells "scale models of classic cars". Both are pretty interesting. :)
SQL is a query language, not a procedural language, so unless you will be playing with PL/SQL or something similar, your examples will be manipulating data.
So here is what was fun for me -- data mining! Go to:
http://usa.ipums.org/usa/
And download their micro-data (you will need to make an account, but it's free).
You'll need to write a little script to load the fixed-width file into your db, which in itself should be fun. And you will need to write a little script to auto-create the fields (since there are many) based on parsing their metadata file. That's fun, too.
Then, you can start asking questions. Suppose the questions are about house prices:
Say you want to look at the evolution of house values for those with incomes in the top 10% of the population over the last 40 years. Then restrict to those living in California. See if there is a correlation between income and the proportion of mortgage payments as a percentage of income. Then group this by geographic area. Then see if there is a correlation between the areas with the highest mortgage burden and the percentage of units occupied by renters. Your db will have some built-in statistical functions, but you can always program your own as well, so a correlation function might be your equivalent of Fibonacci. Then write a little script to do the same thing in R, importing data from your db, manipulating it, and storing the result.
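As a rough illustration of the "little script" part, here is a Python sketch that slices a fixed-width extract into SQLite; the column offsets, names and file name are placeholders that would really come from the codebook shipped with your IPUMS extract:

# Sketch only: load a fixed-width IPUMS-style extract into SQLite.
# The (start, end) offsets and column names below are placeholders;
# take the real ones from the codebook that accompanies your extract.
import sqlite3

LAYOUT = [("year", 0, 4), ("statefip", 4, 6), ("hhincome", 6, 13), ("valueh", 13, 20)]

conn = sqlite3.connect("ipums.db")
conn.execute("CREATE TABLE IF NOT EXISTS census (year INT, statefip INT, hhincome INT, valueh INT)")

with open("usa_00001.dat") as f:                       # hypothetical extract file
    rows = ([int(line[a:b]) for _, a, b in LAYOUT] for line in f)
    conn.executemany("INSERT INTO census VALUES (?, ?, ?, ?)", rows)
conn.commit()

# A first question of the kind described above: average house value
# for high-income households, grouped by state and year.
for row in conn.execute("""
        SELECT year, statefip, AVG(valueh)
        FROM census
        WHERE hhincome > (SELECT 0.9 * MAX(hhincome) FROM census)
        GROUP BY year, statefip
        ORDER BY year"""):
    print(row)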
The best way to learn about DBs is to use them for some other purpose.
Once you are done playing with IPUMS, take a look at geo data with (depending on your database) something like PostGIS. The only difference is that IPUMS gives you resolution in terms of tracts, whereas GIS data has latitude/longitude coordinates. Then you can plot a heat map of mortgage burdens for the U.S., and evolve this heat map over different time scales.
Perhaps you can do something with chemistry. Input the 118 elements, or extract them from an online source. Use basic rules to combine them into molecules, which you can store in the database. Combine molecules into bigger molecules and perform more complex queries upon them.
You will have a hard time finding database-agnostic tutorials. The main reason for that is that the SQL-92 standard, on which most examples are based, is plain old boring. There are updated standards, but most database-agnostic tutorials dumb it down to the lowest common denominator: SQL-92.
If you want to learn about databases as a software engineer, I would definitely recommend starting with Microsoft SQL Server. There are many reasons for that, some are facts, some are opinions. The primary reason though is that it's a lot easier to get a lot further with SQL Server.
As for sample data, Northwind has been replaced by AdventureWorks. You can get the latest versions from CodePlex. This is a much more realistic database and allows demonstrating far more than basic joins, filtering and roll-ups. The great thing, too, is that it is actually maintained for each release of SQL Server and updated to showcase some of the new features of the database.
Now, for your goal #1, I would consider the scaling an exercise in itself. After you go through the basic and boring stuff, you should gradually be able to perform efficient large-scale data manipulation, and while not really generating data, you can at least copy/paste/modify your SQL data to take it to whatever size you need.
Keep in mind, though, that benchmarking databases is not trivial. The performance and efficiency of a database depend on many aspects of your application. How it is used is just as important as how it is set up.
Good luck and do let us know if you find a viable solution outside this forum.
Implement your genealogical tree within a single table and print it. In itself this is not a very general problem, but the approach certainly is, and it should prove reasonably challenging.
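A minimal sketch of that idea, using SQLite from Python (the schema and sample data are just one possible way to set it up): a single self-referencing table plus a recursive query to print a person's ancestors.

# Sketch: genealogical tree in one self-referencing table, printed
# with a recursive CTE (SQLite 3.8.3+ supports WITH RECURSIVE).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (
    id        INTEGER PRIMARY KEY,
    name      TEXT NOT NULL,
    mother_id INTEGER REFERENCES person(id),
    father_id INTEGER REFERENCES person(id)
);
INSERT INTO person VALUES
    (1, 'Grandma', NULL, NULL),
    (2, 'Grandpa', NULL, NULL),
    (3, 'Mum',     1,    2),
    (4, 'Dad',     NULL, NULL),
    (5, 'Child',   3,    4);
""")

# Walk up the tree from 'Child', keeping track of the depth.
for name, depth in conn.execute("""
    WITH RECURSIVE ancestors(id, name, depth) AS (
        SELECT id, name, 0 FROM person WHERE name = 'Child'
        UNION ALL
        SELECT p.id, p.name, a.depth + 1
        FROM ancestors a
        JOIN person c ON c.id = a.id
        JOIN person p ON p.id IN (c.mother_id, c.father_id)
    )
    SELECT name, depth FROM ancestors"""):
    print("  " * depth + name)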
Geographic data can showcase a lot of SQL capabilities while being somewhat complicated (but not too complicated). It's also readily available from many sources online - international organizations, etc.
You could create a database with countries, cities, zip codes, etc. Mark capitals of countries (remember that some countries have more than one capital city...). Include GIS data if you want to get really fancy. Also, consider how you might model different address information. Now what if the address information had to support international addresses? You can do the same with phone numbers as well. Once you get the hang of things you could even integrate with Google Maps or something similar.
You'd likely have to do the database design and import work yourself, but really that's a pretty huge part of working with databases.
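If it helps to have a concrete starting point, here is one possible minimal schema, sketched with SQLite from Python; the tables and columns are only illustrative, and real address/phone modelling will grow well beyond this:

# Illustrative starting schema for the geographic idea above.
import sqlite3

conn = sqlite3.connect("geo.db")
conn.executescript("""
CREATE TABLE country (
    iso_code   TEXT PRIMARY KEY,           -- e.g. NL
    name       TEXT NOT NULL
);
CREATE TABLE city (
    id         INTEGER PRIMARY KEY,
    country    TEXT NOT NULL REFERENCES country(iso_code),
    name       TEXT NOT NULL,
    is_capital INTEGER NOT NULL DEFAULT 0, -- more than one capital is allowed
    latitude   REAL,                       -- rough lat/long; use PostGIS for real GIS work
    longitude  REAL
);
CREATE TABLE postal_code (
    city_id    INTEGER NOT NULL REFERENCES city(id),
    code       TEXT NOT NULL,              -- TEXT, since formats differ per country
    PRIMARY KEY (city_id, code)
);
CREATE INDEX idx_city_country ON city(country);
""")

# Typical exercise query: capitals per country.
for row in conn.execute("""
        SELECT co.name, ci.name
        FROM country co JOIN city ci ON ci.country = co.iso_code
        WHERE ci.is_capital = 1
        ORDER BY co.name"""):
    print(row)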
Eclipse's Classic Models database is the best open-source equivalent of factorial and the Fibonacci function, and Microsoft's Northwind is another powerful alternative that you can use.
I'm about to release a FOSS data generator that can generate random yet meaningful data in CSV format. Rather belatedly, I guess, I need to poll the state of the art for such products, because if there is a well-known and useful existing tool, I can write my work off to experience. I am aware of a couple of SQL Server-specific tools, but mine is not database-specific.
So, links? And if you have used such a product,
what features did you find it was missing?
Edit: To add a bit more info on my tool (Ooh, Matron!), it is intended to allow generation of any kind of random data from existing data files, and supports weighting. It is XML-based (sorry, folks) and lets you say things like:
<pick distribute="20,80" >
<datafile file="femalenames.dat"/>
<datafile file="malenames.dat"/>
</pick>
to select female names about 20% of the time and male names 80% of the time.
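In plain code, the pick above behaves roughly like this (a Python sketch; the .dat files are the same hypothetical name lists as in the XML):

# Roughly what the <pick> element above does, sketched in Python.
import random

def load(path):
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

female = load("femalenames.dat")
male = load("malenames.dat")

def pick_name(rng=random):
    # distribute="20,80": choose the female list ~20% of the time, male ~80%.
    source = rng.choices([female, male], weights=[20, 80])[0]
    return rng.choice(source)

print([pick_name() for _ in range(10)])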
But the purpose of this question is not to describe my product but to get info on other tools.
Latest: If anyone is interested, they can get the alpha of my data generator at http://code.google.com/p/csvtest
That can be a one-liner in R where I use the littler scripting front-end:
# generate the data as a one-liner from the command-line
# we set the RNG seed, and draw from a bunch of distributions
# indented just to fit the box here
edd@ron:~$ r -e'set.seed(42); write.csv(data.frame(y=runif(10), x1=rnorm(10),
x2=rt(10,4), x3=rpois(10, 0.4)), file="/tmp/neil.csv",
quote=FALSE, row.names=FALSE)'
edd@ron:~$ cat /tmp/neil.csv
y,x1,x2,x3
0.914806043496355,-0.106124516091484,0.830735621223563,0
0.937075413297862,1.51152199743894,1.6707628713402,0
0.286139534786344,-0.0946590384130976,-0.282485683052060,0
0.830447626067325,2.01842371387704,0.714442314565005,0
0.641745518893003,-0.062714099052421,-1.08008578470128,0
0.519095949130133,1.30486965422349,2.28674786332467,0
0.736588314641267,2.28664539270111,-0.73270267483628,1
0.134666597237810,-1.38886070111234,-1.45317770550920,1
0.656992290401831,-0.278788766817371,-1.01676025893376,1
0.70506478403695,-0.133321336393658,0.404860813371462,0
edd@ron:~$
You have not said anything about your data-generating process, but rest assured that R can probably cope with just about any requirement, including multivariate normal, t, skew-t, and more. The (six different) random-number generators in R are also of very high quality.
R can also write to DBs, or read parameters from it, and if it needs to be on Windoze then the Rscript front-end could be used instead of littler.
I asked a similar question some months ago:
Tools for Generating Mock Data?
I got some sincere suggestions, but most were not suitable for my needs. Either expensive (non-free) software, or else not flexible enough w.r.t. data types and database structure, or range of mock data, or way too slow (e.g. the Rails ActiveRecord solution).
Features I was looking for were:
Generate mock data to fill existing database tables
Quick to generate > 1 million rows
Produce either SQL script format or flat file suitable for importing
Scriptable command-line interface, not a GUI
Not dependent on Microsoft Windows environment
Nice-to-have features:
Extensible/configurable
Open-source, free license
Written in a dynamic language like Perl/PHP/Python
Point it at a database and let it "discover" the metadata
Integrated with testing tools (e.g. DbUnit)
Option to fill directly into the database as it generates data
The answer I accepted was Databene Benerator, though since asking the question, I admit I haven't used it very much.
I was surprised that even when asking the community, the range of tools for generating mock data was so thin. This seems like a niche waiting to be filled! I'll be interested to see what you release.
I would like to know if anybody regularly uses metrics to validate their code/design.
As examples, I think I will use:
number of lines per method (< 20)
number of variables per method (< 7)
number of parameters per method (< 8)
number of methods per class (< 20)
number of fields per class (< 20)
inheritance tree depth (< 6).
Lack of Cohesion in Methods
Most of these metrics are very simple.
What is your policy about this kind of measure? Do you use a tool to check them (e.g. NDepend)?
Imposing numerical limits on those values (as you seem to imply with the numbers) is, in my opinion, not a very good idea. The number of lines in a method could be very large if there is a significant switch statement, and yet the method may still be simple and proper. The number of fields in a class can be appropriately very large if the fields are simple. And five levels of inheritance could sometimes be way too many.
I think it is better to analyze the class cohesion (more is better) and coupling (less is better), but even then I am doubtful of the utility of such metrics. Experience is usually a better guide (though that is, admittedly, expensive).
A metric I didn't see in your list is McCabe's Cyclomatic Complexity. It measures the complexity of a given function, and has a correlation with bugginess. E.g. high complexity scores for a function indicate: 1) It is likely to be a buggy function and 2) It is likely to be hard to fix properly (e.g. fixes will introduce their own bugs).
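If you want to play with the idea without a commercial tool, a crude approximation is easy to script. The sketch below counts branching constructs in Python source with the standard ast module; it is the usual "decisions + 1" shortcut, not a faithful implementation of McCabe's definition:

# Crude cyclomatic-complexity estimate: 1 + number of branching constructs.
# An approximation of McCabe's metric, not the exact graph-based definition.
import ast

BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                ast.BoolOp, ast.IfExp, ast.comprehension)

def complexity(func_node):
    return 1 + sum(isinstance(n, BRANCH_NODES) for n in ast.walk(func_node))

def report(path):
    tree = ast.parse(open(path).read())
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            score = complexity(node)
            flag = "  <-- worth a look" if score > 10 else ""
            print(f"{node.name}: {score}{flag}")

if __name__ == "__main__":
    import sys
    report(sys.argv[1])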
Ultimately, metrics are best used at a gross level, like control charts. You look for points above and below the control limits to identify likely special cases, then you look at the details. For example, a function with a high cyclomatic complexity may cause you to look at it, only to discover that it is appropriate because it is a dispatcher method with a number of cases.
Management by metrics does not work for people or for code; no metrics or absolute values will always work. Please don't let a fascination with metrics distract you from truly evaluating the quality of the code. Metrics may appear to tell you important things about the code, but the best they can do is hint at areas to investigate.
That is not to say that metrics are not useful. Metrics are most useful when they are changing, to look for areas that may be changing in unexpected ways. For example, if you suddenly go from 3 levels of inheritance to 15, or 4 parms per method to 12, dig in and figure out why.
An example: a stored procedure to update a database table may have as many parameters as the table has columns; an object interface to this procedure may have the same, or it may have just one if there is an object to represent the data entity. But the constructor for that data entity may have all of those parameters. So what would the metrics for this tell you? Not much! And if you have enough situations like this in the code base, the target averages will be blown out of the water.
So don't rely on metrics as absolute indicators of anything; there is no substitute for reading/reviewing the code.
Personally I think it's very difficult to adhere to these types of requirements (i.e. sometimes you just really need a method with more than 20 lines), but in the spirit of your question I'll mention some of the guidelines used in an essay called Object Calisthenics (part of the Thoughtworks Anthology if you're interested).
Levels of indentation per method (<2)
Number of 'dots' per line (<2)
Number of lines per class (<50)
Number of classes per package (<10)
Number of instance variables per class (<3)
He also advocates not using the 'else' keyword nor any getters or setters, but I think that's a bit overboard.
Hard numbers don't work for every solution. Some solutions are more complex than others. I would start with these as your guidelines and see where your project(s) end up.
But regarding these numbers specifically, they seem pretty high. In my particular coding style I usually find that I have:
no more than 3 parameters per method
about 5-10 lines per method
no more than 3 levels of inheritance
That isn't to say I never go over these generalities, but I usually think more about the code when I do because most of the time I can break things down.
As others have said, keeping to a strict standard is going to be tough. I think one of the most valuable uses of these metrics is to watch how they change as the application evolves. This helps to give you an idea how good a job you're doing on getting the necessary refactoring done as functionality is added, and helps prevent making a big mess :)
OO metrics are a bit of a pet project for me (they were the subject of my master's thesis). So yes, I'm using these, and I use a tool of my own.
For years the book "Object Oriented Software Metrics" by Mark Lorenz was the best resource for OO metrics. But recently I have seen more resources.
Unfortunately I have other deadlines so no time to work on the tool. But eventually I will be adding new metrics (and new language constructs).
Update
We are using the tool now to detect possible problems in the source. Several metrics we added (not all pure OO):
use of assert
use of magic constants
use of comments, in relation to the complexity of methods
statement nesting level
class dependency
number of public fields in a class
relative number of overridden methods
use of goto statements
There are still more. We keep the ones that give a good picture of the pain spots in the code, so we get direct feedback when these are corrected.