Does a tests submodule really reduce code bloat in Rust? - testing

Section 5.2 Testing in the Rust Book says
The tests module allows us to group all of our tests together, and to also define helper functions if needed, that don't become a part of the rest of our crate. The cfg attribute only compiles our test code if we're currently trying to run the tests. This can save compile time, and also ensures that our tests are entirely left out of a normal build.
I presume functions marked as #[test] do not appear in release builds, even if they appear in a module that does, right? I'd expect it's just test helper functions that might waste space. And they could be hidden individually with #[cfg(test)], right?

Yes, you can hide individual functions with #[cfg(test)], and #[test] functions will be stripped in non-test builds (note that one can test in release mode as well!). And yes, in a release build unused functions will be optimized away. However:
Adding a single #[cfg(test)] to a module is easier (and thus, is more likely to actually be done) than adding it on every single test.
The compile-time difference still applies. In release builds, when the unused functions are stripped, they have already been analyzed, type-checked, and optimized before they get removed. It's quicker to throw the function's source code away early in the compilation process.
Non-test debug builds matter, too, and there unused functions won't be removed.
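For illustration, a minimal sketch of both approaches (the function and test names are made up here): a helper gated with its own #[cfg(test)], next to the more common pattern of gating an entire tests module, helpers included.

#[cfg(test)]
fn make_fixture() -> Vec<u32> {
    // Only compiled when building tests; absent from normal builds.
    vec![1, 2, 3]
}

pub fn sum(values: &[u32]) -> u32 {
    values.iter().sum()
}

#[cfg(test)]
mod tests {
    use super::*;

    // A helper inside the gated module needs no attribute of its own.
    fn doubled() -> Vec<u32> {
        make_fixture().iter().map(|v| v * 2).collect()
    }

    #[test]
    fn sums_fixtures() {
        assert_eq!(sum(&make_fixture()), 6);
        assert_eq!(sum(&doubled()), 12);
    }
}

Either way the helpers never reach a normal build; the module-level attribute just means writing it once.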

Related

How to debug moonscript?

I am trying to write a game based on the Love2d framework, compiled from Moonscript. Every time I make a mistake in my code, my application throws an error, and the error refers to the compiled Lua code, not the Moonscript, so I have no idea where exactly the error happens. Please tell me, what is the solution in this situation? Thanks.
Moonscript does support source-mapping/error-rewriting, but it is only supported when running in the moon interpreter: https://moonscript.org/reference/command_line.html#error_rewriting
I think it could be enabled in another Lua environment, but I am not completely sure what would be involved.
It would definitely require Moonscript to hold on to the source-map tables that are created during compilation, so you couldn't use moonc; instead use the moonscript module to just-in-time compile require'd modules:
main.lua
-- attempt to require moonscript,
-- for development
pcall(require, 'moonscript')
-- load the main file
require 'init'
init.moon
love.draw = ->
  print "test"
With this code and Moonscript properly installed you can just run the project using love . as normal. The require 'moonscript' call will change require to compile Moonscript modules on the fly. The performance penalty is negligible, and after all modules have been loaded there is no difference.
Debugging is a problem for pretty much any source-to-source compilation system. The target language has no idea that the original language exists, so it can only talk about things in terms of the target language. The more divergent the target and original languages are, the more difficult debugging will be.
This is a big part of the reason why C++ compilers don't compile to C anymore.
The only real way to deal with this is to become intimately familiar with how the Moonscript compiler generates Lua from your Moonscript code. Learn Lua and carefully read the output Lua, comparing it to the given Moonscript. That will make it easier for you to map the given Lua error and source code to the actual Moonscript code that created it.

How can we get the time of individual test cases in DejaGnu

I am running the GCC testsuite and I want to know the time elapsed for each individual test case. GCC uses DejaGnu for its test suite, and I know that time can be used in scripts to get the time of a test case. I am wondering if there is any flag that I can pass to runtest that forces timing for all test cases (without changing test scripts).
I don't know of a generic way.
DejaGNU does not really have a built-in notion of the boundaries of a test. For example, it's reasonably common for a single conceptual test to call "pass" or "fail" several times. E.g., in GCC, a compilation test may check for several warnings from a given source file -- but each separate warning, and also the check for excess warnings, would be a separate pass or fail. However, these would all arise from a single invocation of GCC.
I think there are two approaches that you can take.
You can hack the .exp files you care about and use knowledge of what they are doing to track the times you are interested in.
You can run a single .exp file in isolation and time how long it takes. This is less useful in general, but it is what I did when making the GDB test suite more fully parallelizable.

Unit testing in Xcode 5

I've been asked to debug a prototype iPad app (written in Objective-C). I thought a good approach would be to write a series of unit tests (IDing bugs and helping me familiarise myself with the code). Though I have written unit tests before, I've never used Xcode or Objective-C, or a Mac for that matter.
The problem is that the code as it stands won't currently build: there are a large number of errors. I'm wondering if there is a way to unit test certain parts of the code using Xcode without having to build the entire project, or do I need to ID what's causing all of the errors and eliminate these first?
I would say it depends on how deeply linked the components are. If the error-producing components are separate enough (i.e. they only communicate with/are used by themselves), then you could simply remove them from the build.
However, if the components are also necessary for the remainder of the app (the parts you want to test), then you would need to fix the errors first, as otherwise you couldn't really test the full functionality in your unit tests.

The 'right' way to run unit tests in Clojure

Currently, I define the following function in the REPL at the start of a coding session:
(defn rt []
  (let [tns 'my.namespace-test]
    (use tns :reload-all)
    (clojure.test/test-ns tns)))
And every time I make a change I rerun the tests:
user=>(rt)
That's been working moderately well for me. When I remove a test, I have to restart the REPL and redefine the method, which is a little annoying. Also, I've heard bad rumblings about using the use function like this. So my questions are:
Is using use this way going to cause me a problem down the line?
Is there a more idiomatic workflow than what I'm currently doing?
Most people run
lein test
from a different terminal, which guarantees that what is in the files is what is tested, not what is in your memory. Using :reload-all can lead to false passes if you have changed a function name and are still calling the old name somewhere.
Calling use like that is not a problem in itself; it just constrains you to avoid name conflicts if you use more namespaces in your tests. As long as you only have the one, it's fine.
Using lein lets you specify unit and integration tests and easily run them in groups using the test-selectors feature, as sketched below.
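As a sketch of how that can look (the selector names here are illustrative), project.clj declares selectors keyed off test metadata, and lein test :integration then runs only the tagged tests:

;; in project.clj
:test-selectors {:default     (complement :integration)
                 :integration :integration}

;; a test tagged via metadata
(deftest ^:integration talks-to-the-database
  (is (= 1 1)))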
I also run tests in my REPL. I like doing this because I have more control over the tests and it's faster due to the JVM already running. However, like you said, it's easy to get in trouble. In order to clean things up, I suggest taking a look at tools.namespace.
In particular, you can use clojure.tools.namespace.repl/refresh to reload files that have changed in your live REPL. There's also refresh-all to reload all the files on the classpath.
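A minimal REPL sketch of that workflow, assuming tools.namespace is already on the classpath:

user=> (require '[clojure.tools.namespace.repl :refer [refresh refresh-all]])
nil
user=> (refresh)      ; reload only the namespaces whose files changed
:ok
user=> (refresh-all)  ; reload every namespace on the classpath
:ok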
I add tools.namespace to my :dev profile in my ~/.lein/profiles.clj so that I have it there for every project. Then when you run lein repl, it will be included on the classpath, but it won't leak into your project's proper dependencies.
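For example, following that :dev-profile approach (the version number is only illustrative), ~/.lein/profiles.clj could contain:

{:dev {:dependencies [[org.clojure/tools.namespace "0.2.11"]]}}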
Another thing I'll do when I'm working on a test is to require it into my REPL and run it manually. A test is just a no-argument function, so you can invoke it as such.
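For instance, assuming my.namespace-test from the question defines a test named adds-numbers (the name is made up here):

user=> (require 'my.namespace-test)
nil
user=> (my.namespace-test/adds-numbers)  ; a deftest var holds a no-argument function
nil                                      ; failures, if any, are reported above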
I am so far impressed with lein-midje
$ lein midje :autotest
This starts a Clojure process that watches the src and test files, reloads the associated namespaces, and runs the tests relevant to the changed file (tracking dependencies). I use it with VimShell to open a split buffer in Vim and have both the source and the test file open as well. I write a change to either one and the (relevant) tests are executed in the split pane.

Static code analysis: integrate into debug and release builds, or just one or the other?

As a best practice, do you run code analysis on both debug and release builds, or just one or the other?
If for some reason the two builds are different (and they really shouldn't be for static analysis purposes), you should ensure that your metrics are running against what's actually going out to production.
Ideally, you should have a CI server, and the commands that developers run to initiate such analysis are no different from what the CI server does.
I usually pick one, and that one is the release build. I guess it doesn't really matter, but I tend to think that when gathering information about what will run in production, it is best to test exactly what will go to production (this goes for analysis, profiling, benchmarking, etc.).
Static Code Analysis will show the same results regardless of your build type.
Debug/Release only changes the resulting assembly and the inclusion or exclusion of debugging information at runtime.
I don't have separate 'debug' and 'release' builds (see Separate 'debug' and 'release' builds?).
The LLVM folks actually recommend analyzing the DEBUG configuration:
ALWAYS analyze a project in its "debug" configuration
Most projects can be built in a "debug" mode that enables assertions. Assertions are picked up by the static analyzer to prune infeasible paths, which in some cases can greatly reduce the number of false positives (bogus error reports) emitted by the tool.
In addition, debug builds tend to be faster (no need for optimization), and in the CI world faster is always better (all else being equal).