Are there any good tools to collect Objective-C metrics?

I'm using Jenkins for CI on iOS projects and want to collect some software metrics on them. But the only tool I was able to find was CLOC, which only counts lines of code (LOC). Nevertheless, it's better than nothing.
What I really want to count are methods, classes, calls to other classes, etc. (to do the fancy cyclomatic complexity stuff).
Perhaps I'm missing some tools; let me know if I am.

OCLint?
From oclint.org:
OCLint is a static code analysis tool for improving quality and reducing defects by inspecting C, C++ and Objective-C code and looking for potential problems like:
- Possible bugs - empty if/else/try/catch/finally statements
- Unused code - unused local variables and parameters
- Complicated code - high cyclomatic complexity, NPath complexity and high NCSS
- Redundant code - redundant if statements and useless parentheses
- Code smells - long method and long parameter list
- Bad practices - inverted logic and parameter reassignment
...

Lizard will do it. Check it out at https://github.com/terryyin/lizard.

You can try XClarify, a pretty complete Objective-C code analyzer; it's free for open source contributors.

Beyond lines of code and test coverage, I'm not sure there are any such tools yet for Obj-C. I suspect we'll see some soon given the influx of devs from other platforms who use metrics, but in my 7 years as an Obj-C dev I haven't heard of anyone having a tool for collecting them. Of course it'd be good to be proved wrong :)

ProjectCodeMeter measures flow complexity (similar to McCabe cyclomatic complexity) on Objective-C code, though it doesn't count methods and classes.

I use a few tools for gathering code quality metrics:
OCLint - gathers some metrics, like cyclomatic complexity, and enforces best practices - http://oclint.org
Simian - Similarity Analyser - http://www.harukizaemon.com/simian/
Clang analyzer - the same tool as in Xcode (Product -> Analyze); it seems a bit outdated, though it's useful too. To run it on CI, see: http://clang-analyzer.llvm.org/scan-build.html
Coveralls - a nice tool for visualizing unit test coverage - https://coveralls.io
I've recently found that a free plugin for SonarQube exists - https://github.com/octo-technology/sonar-objective-c - but it's not really feature-rich. The official one is here: http://www.sonarsource.com/products/plugins/languages/objective-c/

What I really want to count are methods, classes
Not really... but you can parse the Xcode indexes or the output of nm -- or run Doxygen.
calls to other classes etc
gcov -- or run Doxygen.

I just stumbled upon Xcode Statistician (link seems to be dead), but haven't tried it yet. The zip archive can be downloaded directly.

Related

Is it feasible to use Antlr for source code completion?

I don't know if this question is valid, since I'm not very familiar with source code parsing. My goal is to write a source code completion function for an existing programming language (language "X") for learning purposes.
Is ANTLR (v4) suitable for such a task, or should the necessary AST/parse tree creation and parsing be done by hand, assuming no existing solution exists?
I haven't found much information about this specific topic, apart from a list of compiler books, but a compiler is not what I'm after.
The code completion in GoWorks is completely implemented using ANTLR 4. The following video shows the level of completion of this code completion engine. The code completion example runs from 5 minutes through the end of the video.
Intro to Tunnel Vision Labs' GoWorks IDE (Preview Release)
I have been working on code completion algorithms for many years, and strongly believe that there is no better solution (automated or manual) for producing a code completion solution for a new language that meets the requirements for what I would call highly-responsive code completion. If you are not interested in that level of performance or accuracy, other solutions may be easier for you to get involved with (I don't work with those personally, because I am too easily disappointed in the results).
Xtext uses ANTLR3 and has good autocomplete facilities. The problem is, it generates a separate parser (again using ANTLR3) for autocomplete processing, which is derived from AbstractInternalContentAssistParser. This multi-thousand-line piece of code shows that the error recovery of ANTLR3 alone was found to be insufficient by the Xtext team.
Meanwhile, ANTLR4 has a method, parser.getExpectedTokensWithinCurrentRule(), which lists the possible token types for a given position. It works when used in a ParseTreeListener. What remains is semantics, scoping etc., which is outside ANTLR's scope.
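To make that API concrete, here is a minimal Java sketch. The grammar, the generated MyLangLexer/MyLangParser classes and the compilationUnit start rule are hypothetical names invented for the example; the part that matters is querying getExpectedTokensWithinCurrentRule() from a parse listener while the parse is running.

import org.antlr.v4.runtime.*;
import org.antlr.v4.runtime.misc.IntervalSet;
import org.antlr.v4.runtime.tree.*;

public class CompletionProbe {
    public static void main(String[] args) {
        // MyLangLexer/MyLangParser are assumed to be generated by ANTLR4
        // from some grammar "MyLang" (hypothetical).
        CharStream input = CharStreams.fromString("some partial source text");
        MyLangLexer lexer = new MyLangLexer(input);
        MyLangParser parser = new MyLangParser(new CommonTokenStream(lexer));

        // A parse listener is invoked while the parse is in progress,
        // so the parser's current state can be queried for expected tokens.
        parser.addParseListener(new ParseTreeListener() {
            @Override public void enterEveryRule(ParserRuleContext ctx) {
                IntervalSet expected = parser.getExpectedTokensWithinCurrentRule();
                System.out.println("In rule '"
                        + parser.getRuleNames()[ctx.getRuleIndex()]
                        + "' the parser would accept: "
                        + expected.toString(parser.getVocabulary()));
            }
            @Override public void exitEveryRule(ParserRuleContext ctx) { }
            @Override public void visitTerminal(TerminalNode node) { }
            @Override public void visitErrorNode(ErrorNode node) { }
        });

        parser.compilationUnit(); // hypothetical start rule
    }
}

Semantics, scoping and symbol resolution would still have to be layered on top of this, as noted above.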

Have JUnit fail tests that don't actually run an assertion

My team is working on educating some of our developers about testing. They understand why to write tests and are on board that they should write tests, but are falling a little short on writing good tests.
I just saw a commit like this
public class SomeTest {
    @Test
    public void testSomething() {
        System.out.println(new MySomething().getData());
    }
}
So they were at least making sure their code gave them the expected output by looking.
It will be a bit before we can really sell the idea of code reviews. In the meantime, I was considering having JUnit fail any tests that do not have actual assertXXX or fail statements in them. I would then like to have that failure message say something like "Your tests should use assertions and actually examine the output!".
I fully expect this to lead to calls like assertTrue(1 == 1);. We're working on team buy-in for proper testing and code reviews; are there any technical mechanisms we can use to make life easier for the developers who already get it? What about technical mechanisms to help the new guys understand?
I think you should consider organizational changes: mentoring, training, code reviews.
The tools can only help you if you're using them in good faith with a base understanding of the goals. If one of these is missing they won't help you.
Humans are simply too intelligent to be stopped from doing dumb things or from working around metrics. I don't think your assessment that "they" are on board is correct if they can't write a single useful test. Automatic tools are simply not the right tools at this stage. You can't learn by having a program tell you what to do next.
You can use a static code analyzer.
I use PMD, which includes a JUnit rule set. There are a lot of IDE plugins which will mark rule violations in the IDE. You can configure the rule sets to your needs.
You will also profit from the other rule sets, which will warn you about code style / best practice violations (although you sometimes have to decide whether the tool or you is the fool :-)).
To answer the stated question for future viewers:
JUnit uses reflection to run each test method; if any Exception or Error is thrown, the test fails, otherwise it succeeds. The Assert class is just a utility class.
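A small illustration of that point in plain JUnit 4 (the class and method names are made up for the example): the first test is reported green even though it checks nothing, and the second fails only because assertEquals throws an AssertionError.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class AssertionDemoTest {

    @Test
    public void passesWithoutCheckingAnything() {
        // Nothing is thrown, so JUnit reports this test as passing,
        // even though it verifies nothing at all.
        System.out.println("looks tested, isn't");
    }

    @Test
    public void failsViaAssertionError() {
        // Assert.* methods simply throw java.lang.AssertionError on a mismatch;
        // that thrown error is what JUnit reports as a failure.
        assertEquals(4, 2 + 1);
    }
}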

Has anyone worked with TestCocoon?

I was trying out TestCocoon the other day, and everything seemed great. I compiled my code using cscl, cslib and cslink, and I was expecting this to take care of all the instrumentation. I get some .csmes files and .exe.csmes files, but when I load them into the CoverageBrowser I cannot see anything relevant. No covered/uncovered lines. All the lines are grey.
Is anything else needed in order for TestCocoon to report coverage? Do I need to modify my source files? I also posted on their forums here, but no result:
http://www.testcocoon.org/forum/viewtopic.php?f=8&t=44
I tried this tool with a few projects using Visual Studio 2008, and I found:
Pros:
- it can collect results from multiple runs, you can run your software at different machines and collect results together
- it has useful GUI for browsing results
- you can merge coverage from many modules and analyse it as a whole application
- the forum works; I submitted two problems and got fixes implemented within a few days
- it works almost without any problems (I found two minor compilation problems) with quite complicated sources, with tons of templates, boost::spirit parsers, other boost stuff (including meta-programming modules etc.), STL, Qt (everything together)
- well documented
- it's free
Cons:
- instrumentation is definitely slow
- multi-process compilation of a single project using Visual Studio 2008 doesn't work; only one file at a time is compiled, which makes building slower (you will get better performance building a whole solution with many projects)
So far I haven't tried to use this tool for continuous coverage measurement.
Either way, in my opinion it's worth trying.
BTW, Tony, PC-Lint is a static-analysis tool, isn't it? Interesting idea to compare it with a dynamic-analysis tool...
TestCocoon (now at 1.6.7) works well with the small C code bases we tend to unit test. The performance impact seems about normal for other instrumentation methods we've used.
We are able to extract coverage information in our makefiles and the coverage browser is very useful.
Don't use TestCocoon. I am currently using it, and it's shoddy as hell. Pay for something better (it will cost a lot). It is the ultimate death sentence; seriously, don't do it. Whatever you do, stay away from TestCocoon at all costs. Worst move ever. You might as well sell your kids for drug money.

How to encourage positive developer behavior with an IDE?

The goal of IDEs is to increase productivity. They do a great job at that. Refactoring, navigation, inline documentation and auto-completion help increase productivity immensely.
But: every tool is a weapon. The very same IDE helps to produce junk code. Some IDE features are an invitation to produce bad code: code generation, code formatting tools, refactoring tools.
IDE overuse tends to isolate developers from the necessary details. It is a good thing that you can start working right away, but at some point in your career you have to be able to figure out how to start a process. You can ignore this detail for some time; in the end these details are important for writing a working product (vs. bolted-together stuff that works 90% of the time).
How do you encourage positive behavior of other developers working with an IDE? This is a question as old as copy and paste.
To get the right impression: developers have to have the maximum freedom to mobilize their maximum creativity and motivation. They may use IDEs and all the related tools as they see fit. Nobody should impose draconian measures on them. I don't want to demotivate and force someone to do something. Good behavior has to be encouraged. It has to itch a little bit if you do the wrong thing, along the same lines as the SO "accept rate" metric (and reputation): you can ignore it, but life is better if you follow the rules.
(The solution should work in a given setting. You can ignore reviews, changing the staffing or more education as potential solutions.)
Train your IDE, instead of being trained by it.
Set up code formatting the way you (or your team) wants it. Heck, even disable it in cases where it makes sense. I've never seen an IDE align something like this with a sensible combination of tabs and spaces (where \t is obviously the tab character):
{
\tcout << "Hello "
\t << (some + long + expression +
\t to_produce_the_word(world))
\t << endl;
}
In languages like Java, you cannot avoid boilerplate. The best option you have is to check generated code, ensuring that it is the same as what you'd have written by hand. Modify it as necessary. Configure your IDE to generate the exact code that you need, if possible. Eclipse is pretty good at this.
Know what's going on under the hood.
Know that your IDE is actually invoking the compiler. Have some insight into the flags that it passes. Be able to invoke the compiler from the command line.
Know about the runtime system. Be aware of the flags that are used or needed to launch your program. Be able to launch the program from a command line.
I think before anyone uses a RAD tool of any type they should be able to write the application from scratch (scratch being wiring together the framework components) in Notepad, potentially on a computer that is 10 years older than current technology :P. Not knowing the ins and outs of a paradigm/framework leads to bad code from novice developers who only learn things at a mile-high view of the platforms they develop for. Perhaps they should do this in a few technologies -- e.g., GTK programming is completely different from MVC, which is in turn different from Swing and .NET.
I think the end result should be a developer who thinks about the finer details of a problem before they jump to thinking about how they will write an interface to it in a specific RAD environment.
It's an open-ended question, but...
We have an Eclipse format file that everyone shares, so that we all format the code in the same manner. (Except for the one lone IntelliJ guy we have.)
Everyone shares a dictionary file. It helps to remove all the red lines from the code. Making it look cleaner and more readable.
I run EMMA over the code to find out who isn't testing their code, and then moan at them.
The main problem we face is that most of the team doesn't know all the features/power of the IDE (Eclipse). They didn't know about Ctrl+O (twice), or auto code generation. All I can do as a 'hot key wizard' is keep sharing my knowledge with them to help them become more productive.
I look forward to the day when my problem is that they auto-generate as much as possible, rather than me finding bugs where the wrong value is returned from a getter method due to a typo.
Attempt development (at least occasionally) using only a text editor and launching the compilation, testing, etc. from the command line.
Typing the commands will get tedious very quickly so create scripts or (even better) learn rake, ant, msbuild.
If the IDE does code generation for you and that code generation is really important (such as generating classes from xsd or proxy classes from wsdl), try to find out how to run the code generation from the command line - then hook the code generation into a build (so you'll never be tempted to edit the generated code).
The idea of autoformatting code is great but it usually just turns your code into a mess. If you have less code, minor formatting inconsistencies are just not a big deal.
Adding code quality tools into your build - style checks, class and method sizes, complexity, code duplication, test coverage, etc. (complexian, Simian, flog, flay, NDepend, NCover, etc.) - will discourage IDE-generated code.

How would one go about testing an interpreter or a compiler?

I've been experimenting with creating an interpreter for Brainfuck, and while quite simple to make and get up and running, part of me wants to be able to run tests against it. I can't seem to fathom how many tests one might have to write to test all the possible instruction combinations to ensure that the implementation is proper.
Obviously, with Brainfuck, the instruction set is small, but I can't help but think that as more instructions are added, your test code would grow exponentially. More so than your typical tests at any rate.
Now, I'm about as newbie as you can get in terms of writing compilers and interpreters, so my assumptions could very well be way off base.
Basically, where do you even begin with testing on something like this?
Testing a compiler is a little different from testing some other kinds of apps, because it's OK for the compiler to produce different assembly-code versions of a program as long as they all do the right thing. However, if you're just testing an interpreter, it's pretty much the same as any other text-based application. Here is a Unix-centric view:
You will want to build up a regression test suite. Each test should have
Source code you will interpret, say test001.bf
Standard input to the program you will interpret, say test001.0
What you expect the interpreter to produce on standard output, say test001.1
What you expect the interpreter to produce on standard error, say test001.2 (you care about standard error because you want to test your interpreter's error messages)
You will need a "run test" script that does something like the following
function fail {
    echo "Unexpected differences on $1:"
    diff $2 $3
    exit 1
}

# Run each named test against its recorded expected stdout/stderr.
for testname
do
    tmp1=$(tempfile)
    tmp2=$(tempfile)
    brainfuck $testname.bf < $testname.0 > $tmp1 2> $tmp2
    cmp -s $testname.1 $tmp1 || fail "stdout" $testname.1 $tmp1
    cmp -s $testname.2 $tmp2 || fail "stderr" $testname.2 $tmp2
done
You will find it helpful to have a "create test" script that does something like
brainfuck $testname.bf < $testname.0 > $testname.1 2> $testname.2
You run this only when you're totally confident that the interpreter works for that case.
You keep your test suite under source control.
It's convenient to embellish your test script so you can leave out files that are expected to be empty.
Any time anything changes, you re-run all the tests. You probably also re-run them all nightly via a cron job.
Finally, you want to add enough tests to get good test coverage of your compiler's source code. The quality of coverage tools varies widely, but GNU Gcov is an adequate coverage tool.
Good luck with your interpreter! If you want to see a lovingly crafted but not very well documented testing infrastructure, go look at the test2 directory for the Quick C-- compiler.
I don't think there's anything 'special' about testing a compiler; in a sense it's almost easier than testing some programs, since a compiler has such a basic high-level summary - you hand in source, it gives you back (possibly) compiled code and (possibly) a set of diagnostic messages.
Like any complex software entity, there will be many code paths, but since it's all very data-oriented (text in, text and bytes out) it's straightforward to author tests.
I’ve written an article on compiler testing, the original conclusion of which (slightly toned down for publication) was: It’s morally wrong to reinvent the wheel. Unless you already know all about the preexisting solutions and have a very good reason for ignoring them, you should start by looking at the tools that already exist. The easiest place to start is Gnu C Torture, but bear in mind that it’s based on Deja Gnu, which has, shall we say, issues. (It took me six attempts even to get the maintainer to allow a critical bug report about the Hello World example onto the mailing list.)
I’ll immodestly suggest that you look at the following as a starting place for tools to investigate:
Software: Practice and Experience, April 2007. (Payware, not available to the general public---free preprint at http://pobox.com/~flash/Practical_Testing_of_C99.pdf.)
http://en.wikipedia.org/wiki/Compiler_correctness#Testing (Largely written by me.)
Compiler testing bibliography (Please let me know of any updates I’ve missed.)
In the case of brainfuck, I think testing it should be done with brainfuck scripts. I would test the following, though:
1: Are all the cells initialized to 0?
2: What happens when you decrement the data pointer when it's currently pointing to the first cell? Does it wrap? Does it point to invalid memory?
3: What happens when you increment the data pointer when it's pointing at the last cell? Does it wrap? Does it point to invalid memory?
4: Does output function correctly?
5: Does input function correctly?
6: Does the [ ] stuff work correctly?
7: What happens when you increment a byte more than 255 times? Does it wrap to 0 properly, or is it incorrectly treated as an integer or some other value?
More tests are possible too, but this is probably where I'd start. I wrote a BF compiler a few years ago, and that had a few extra tests. In particular I tested the [ ] stuff heavily, by having a lot of code inside the block, since an early version of my code generator had issues there (on x86, using a jxx instruction, I had issues when the block produced more than 128 bytes or so of code, resulting in invalid x86 asm).
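For illustration, here is a rough JUnit sketch of how a few of the checks above could look. BrainfuckInterpreter and its run(program, input) method are hypothetical names invented for the sketch; a real interpreter may expose a very different interface.

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class BrainfuckInterpreterTest {

    // Assumed contract for the sketch: run(program, stdin) returns stdout.

    @Test
    public void cellsAreInitializedToZero() {
        // "." on a fresh tape should emit the byte 0 if cells start at 0 (check 1).
        String out = new BrainfuckInterpreter().run(".", "");
        assertEquals("\0", out);
    }

    @Test
    public void inputAndOutputWork() {
        // "," reads one byte and "." writes it back out (checks 4 and 5).
        String out = new BrainfuckInterpreter().run(",.", "A");
        assertEquals("A", out);
    }

    @Test
    public void loopIsSkippedWhenCellIsZero() {
        // "[" on a zero cell must jump past the matching "]" (check 6),
        // so the "." inside the loop should never run.
        String out = new BrainfuckInterpreter().run("[.]", "");
        assertEquals("", out);
    }
}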
You can test with some already written apps.
The secret is to:
Separate the concerns
Observe the law of Demeter
Inject your dependencies
Well, software that is hard to test is a sign that the developer wrote it like it's 1985. Sorry to say that, but by utilizing the three principles I presented here, even line-numbered BASIC would be unit testable (it IS possible to inject dependencies into BASIC, because you can do "goto variable").
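To make "inject your dependencies" concrete for the interpreter case, here is a hedged Java sketch (the class name and interface are invented for illustration, and '[' / ']' are left out to keep it short): the input and output streams are passed into the constructor instead of being hard-wired to System.in/System.out, so a unit test can drive the interpreter with a StringReader and inspect a StringWriter.

import java.io.IOException;
import java.io.Reader;
import java.io.Writer;

public class InjectableBrainfuckInterpreter {
    private final Reader in;
    private final Writer out;

    // Dependencies are injected: tests pass StringReader/StringWriter,
    // production code passes wrappers around System.in/System.out.
    public InjectableBrainfuckInterpreter(Reader in, Writer out) {
        this.in = in;
        this.out = out;
    }

    // Deliberately partial: loops ('[' and ']') are omitted from the sketch.
    public void run(String program) throws IOException {
        int[] tape = new int[30000];
        int ptr = 0;
        for (int pc = 0; pc < program.length(); pc++) {
            switch (program.charAt(pc)) {
                case '>': ptr++; break;
                case '<': ptr--; break;
                case '+': tape[ptr] = (tape[ptr] + 1) & 0xFF; break;
                case '-': tape[ptr] = (tape[ptr] - 1) & 0xFF; break;
                case '.': out.write(tape[ptr]); break;
                case ',': { int c = in.read(); tape[ptr] = c < 0 ? 0 : c; break; }
                default:  break; // anything else is treated as a comment
            }
        }
        out.flush();
    }
}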