How can I run Gcov over an installed Cocoa application? - objective-c

I have a Cocoa application which uses an installer. I want to be able to run code coverage over the code (after it has been installed).
This is not the usual unit-test scenario where a single binary will run a suite of tests. Rather, the tests in question will interact with the UI and the app back-end whilst it is running, so I ideally want to be able to start the application knowing that Gcov is profiling it and then run tests against it.
Any ideas?
Update
Thanks to mustISignUp. To clarify why I asked the question:
The mistake I made was thinking that the object files and the .gcno and .gcda files had to be installed alongside the binaries (thus making the installer difficult). As it happens, the original location of the files is hard-wired into the code along with the instrumentation code.
The solution I went with was zipping up the build tree on the build machine and putting it on disk on the test machine; lcov (or just gcov) can then be run from there. Alternatively, leave everything on the build machine: the .gcda files will be created on disk on the test machine and must be copied back to the machine containing the source code.
Either way, the source code doesn't have to be present at install and run time, but if you want to get your results back lcov-style, the coverage counter files that are produced must be reconciled with the source code.
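In command form, that reporting step looks roughly like this (a sketch, assuming lcov is installed wherever the source, .gcno and .gcda files end up together; paths are placeholders):

lcov --capture --directory /path/to/build-tree --output-file coverage.info
genhtml coverage.info --output-directory coverage-report

If the source stays on the build machine instead, copy the freshly written .gcda files back into the matching build directories there before running the same commands.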

The app needs to be compiled with the correct GCC flags, which insert the profiling instructions into the code. It isn't something you can turn on and off at run time; your code is modified at compile time to output the coverage info.
So, if you compiled the app with the correct flags it will emit coverage data; if you didn't, it won't (and you certainly wouldn't want it to for an app you were going to distribute).
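For the record, the relevant flags look roughly like this from the command line (in Xcode the equivalent build settings are GCC_INSTRUMENT_PROGRAM_FLOW_ARCS and GCC_GENERATE_TEST_COVERAGE_FILES; the file names here are placeholders):

gcc -g -O0 -fprofile-arcs -ftest-coverage -c MyController.m -o MyController.o
gcc -fprofile-arcs -ftest-coverage MyController.o main.o -framework Cocoa -o MyApp

Compiling with -fprofile-arcs/-ftest-coverage writes a .gcno file next to each object file, and the instrumented binary writes the matching .gcda counter files when it exits.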

Related

Optimizing JRE and JVM loading time from DVD

I am shipping a private JRE along with the jar program on the DVD. It takes about 3 minutes if I run the program directly from the DVD. It probably takes time to load the JRE libraries into main memory, as optical disks are slow. However, when I close the program and re-launch it, it launches instantaneously, as if the JRE were installed on the local computer (which it is not). I think the JRE stays in memory even after I close the program.
3 minutes is a long waiting time for users. Is there any way I can optimize the code (which is the only thing under my control) so that it loads only a few libraries to launch and loads the other necessary ones on demand? Right now it probably tries to load everything from the DVD before showing the program window, as suggested by
$java.exe -verbose -jar myProgram.jar
Is there any other solution to launch the program quickly even the first time? Currently the only workaround is the Launch4J .bmp splash screen, but that is very static.
Note: I know installing the JRE on the local machine would solve the problem, but the program is not for technical users, and my Launch4J launcher does not find the JRE if it is installed in a custom directory. Also, my DVD is copy-protected so that the program cannot be redistributed.
Package your app together with the private JRE into a single EXE that would self-extract into the user's temporary directory and automatically run your app. The startup time improvement will blow you away:
http://www.excelsior-usa.com/blog/excelsior-jet/java-app-as-a-single-exe/
(Download the sample packaged apps and burn them onto a DVD to quickly verify my claim.)
Let me emphasize that you can achieve the result using free tools only, and optionally improve it a bit further with Excelsior JET. Refer to our Knowledge Base article for full instructions (most of which apply whether you use Excelsior JET or not):
HOWTO: Create a single EXE from your Java application
Disclaimer: As you may have already guessed, I work for Excelsior. But again, it all works for the private JRE, and the result, in terms of startup time improvement, is almost as good.

How to automate building of third party library using cmake

What I am looking for:
Download library
Extract It
Apply custom patch
Run configure
Run build command
What library I am trying to build are:
Openssl
Boost
Thrift
C-ares
Curl
Pcre
Nginx
ICU
JsonCPP
I think I can do these things using the ExternalProject module: http://cmake.org/cmake/help/v2.8.8/cmake.html#module:ExternalProject
But I have the following questions:
I have different types of builds, each with its own directory. Is it going to build all these libraries for every different target? If so, that will be painful, as these libraries take an hour to build. Is there a way I can control it so it only builds them once, since the libraries remain the same for all these targets?
When I switch the build directory to a different name, CMake forces everything to be rebuilt. Will the same happen for the external libraries? If so, how do I solve this problem? I don't want to rebuild the libraries if I am not changing them, and I want to be able to switch between branches without rebuilding them.
Yes, you can use CMake's ExternalProject feature to accomplish what you want to do.
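For one of the libraries in your list, the wiring might look roughly like this (a sketch only; the URL, checksum and patch file name are placeholders):

include(ExternalProject)

ExternalProject_Add(openssl
  PREFIX            ${CMAKE_BINARY_DIR}/third_party
  URL               http://example.org/openssl-1.0.1c.tar.gz
  URL_MD5           00000000000000000000000000000000
  PATCH_COMMAND     patch -p1 -i ${CMAKE_SOURCE_DIR}/patches/openssl.patch
  CONFIGURE_COMMAND <SOURCE_DIR>/config --prefix=<INSTALL_DIR>
  BUILD_IN_SOURCE   1
  BUILD_COMMAND     make
  INSTALL_COMMAND   make install
)

ExternalProject_Add handles the download, extraction, patch, configure, build and install steps in that order, and records stamp files so the steps are not repeated on every build.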
When using cross-compilation in combination with external projects, the source code will be built once for each toolchain. You could avoid rebuilds if you checked in the results of the build into a source-control system, and re-checked it out on each new person's machine, but I do not recommend this. Instead, have one of your "set up new computer" tasks actually be allowing the compilation to run overnight, which will also act as a test that the machine is actually usable. That set-up task can be launched by a system administrator prior to a new hire's arrival, or you can leave it to the new hire, as circumstances require.
I'm not completely certain what you are asking in your second question, but if the library is unchanged, CMake will detect that it is unchanged and not recompile it. Typically, the source code would be in a single directory tree: each compiled version would be built in a distinct location. Thus, developers can access any compiled version at any time just by switching directories. This is particularly helpful because it allows you to mount these directories over NFS to embedded hardware, et cetera.

Studying web servers such as apache httpd and tomcat

I would like to see how everything is handled behind the scenes in web servers such as Apache httpd and Tomcat. How does one go about stepping through these applications, making changes, and then viewing the changes? Applications this complex use build scripts and, I presume, take a while to compile; it seems to me that there would be more to it than simply downloading the source code and importing it into Eclipse. Or is it actually that simple?
And how do developers who want to work on the code of these projects get around the fact that it takes a fair amount of time to compile these applications (and other non-trivial applications such as web browsers)? When I am working on smaller projects I am constantly compiling and then debugging. I imagine that is not feasible when it can take several minutes to compile?
Easy: just read.
http://tomcat.apache.org/tomcat-7.0-doc/building.html
Also, http://wiki.apache.org/tomcat/FAQ/Developing
The current Tomcat 7.0.x trunk takes about 17 seconds to build on my MacBook Pro, and that included downloading a few dependencies that I didn't already have laying around. If you want to re-compile a single .java file, you can re-run the entire build and the toolchain (really just Apache Ant) will figure out which files actually need to be recompiled.
You only modified one source file? Only one source file will be re-compiled when you run ant deploy (you don't even need the "deploy": it's the default). If you use Eclipse or some other similar IDE, it will recompile on the fly and you don't need to worry about the command line or any of that.
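In concrete terms, the whole loop is roughly (a sketch; the checkout URL is the 7.0.x trunk mentioned above, so adjust it for the version you want):

svn checkout http://svn.apache.org/repos/asf/tomcat/tc7.0.x/trunk/ tomcat
cd tomcat
ant

The first run downloads the few dependencies and builds everything into output/build; after you edit a source file, re-running ant only recompiles what changed.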
If you have further questions, please join the Tomcat users' mailing list (or the developers' list) and join the community.

Powerbuilder run

I'm using PowerBuilder to call an external function from a DLL created in C#.
If I generate an executable it works fine and calls the web service perfectly well, but when I try to run it in "development" mode it doesn't use the "application_name.exe.config" file.
I tried hard-coding the "app.config" settings in the DLL, but I was unsuccessful.
Any clues to resolving this issue?
I think you described it yourself: you're expecting behaviour that is tied to the generated EXE while you're running from development mode. When you run from development mode, no EXE is generated or used, so Windows won't be applying functionality linked to the EXE. (PB starts your application so quickly because it is only loading the application into the virtual machine and running its Open event.) If you need this, it sounds like you'll have to include deploying the EXE and running it as part of your testing cycle.
Good luck,
Terry.
When you compile and run from the exe you're using your exe. But when you run from the dev environment you're actually using pbxxx.exe (pb115.exe, pb110.exe etc.). You may be able to copy the "application_name.exe.config" into your pb directory and rename it something like pbxxx.exe.config. At least that's the way it works with manifest files -- I had two, one called appname.exe.manifest and one called pb115.exe.manifest.
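In other words, something like this might be worth trying (assuming PB 11.5, so adjust the file name for your version, and run it from the directory that contains pb115.exe):

copy application_name.exe.config pb115.exe.config

The .NET runtime looks for a config file named after the hosting EXE, which in development mode is the PowerBuilder runtime rather than your own executable.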
Just curious, but how many libraries/objects are in your application?
I have some very large applications and the longest any of them takes to do a full build is about 30 minutes. There is something odd about your application for it to take 2 hours to do a full build.
DLLs don't have config files. Only EXEs do.

How to validate deployment packages created by msbuild? (preferably using mstest or nunit)

Our msbuild process creates a variety of zip packages for deployment (mostly web sites, but other things as well). We have a variety of recurring problems that keep sneaking back - files included that shouldn't be, missing resources. This screams for automated validation. The criteria to test for are simple:
Validation of foosite package:
Resource files are present.
No test result files, obj files, or other build artifacts
And so forth.
Ideally, I could use NUnit or MSTest, which everyone is familiar with. MSBuild knows where the packages are. We have a lot of packages, and possibly concurrent builds on different branches. Ergo, the locations and names of the packages are not deterministic - so the tests don't know where the packages are.
What is the simplest way to feed MSBuild information to MSTest or NUnit? The answer to this question would be one possible answer; however, that question got architectural advice instead of an answer. I know this isn't a unit test, but the test framework is handy anyway. I could create an exe to validate the build - but why add a couple of hours to the project?
Or, do you have a better suggestion for automatically validating build packages? (MSIs, zips, whatever)?
What I've ended up doing is having a bunch of custom MSBuild tasks which spin up a virtual machine on Virtual Server, copy the MSI onto the machine, silently deploy it and then validate against it. I used PsExec to start the MSI install. You could then use the MSTest command-line runner to run your test bits.
This is probably overkill for you, but using a VM allows you to start clean and not be affected by any previous installs on your dev box.
If you want a fast fail, like a unit test, then I suggest you create unit tests against your packages. Such a test would unzip the .ZIP packages, and run some asserts against the contents.
You could even use some TDD techniques against the packages. For instance, if you have a deployment fail because a particular file is missing, then write a unit test that fails because the file is missing; change the build so that the file is present; then make sure the unit test succeeds.
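A test along those lines might look roughly like this (a sketch, assuming NUnit plus .NET 4.5's System.IO.Compression - on older frameworks a library such as DotNetZip would do the same job; the package path, environment variable and resource name are placeholders):

using System;
using System.IO.Compression;
using System.Linq;
using NUnit.Framework;

[TestFixture]
public class FoositePackageTests
{
    // The real path would be fed in by the build, e.g. via an environment variable.
    private static readonly string PackagePath =
        Environment.GetEnvironmentVariable("FOOSITE_PACKAGE") ?? "foosite.zip";

    [Test]
    public void PackageContainsResourcesAndNoBuildArtifacts()
    {
        using (var zip = ZipFile.OpenRead(PackagePath))
        {
            var names = zip.Entries.Select(e => e.FullName).ToList();

            // Resource files are present.
            Assert.IsTrue(names.Contains("Resources/strings.resx"),
                "expected resource file is missing from the package");

            // No test result files, obj files, or other build artifacts.
            Assert.IsFalse(names.Any(n => n.EndsWith(".obj") || n.EndsWith(".trx") || n.Contains("/obj/")),
                "build artifacts leaked into the package");
        }
    }
}

Feeding the real package location in from MSBuild - an environment variable, a generated settings file, or a small text file dropped next to the test assembly - is then the only glue left to write.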
But in general, deployment issues are broader than that, and I echo the suggestion from blowdart. Deploy into one or more virtual machines, then run automated tests over the deployed environments. These tests would not only check simple things like whether an error was returned during the installation itself; they would also check things like whether the IIS virtual directories were set up correctly, with the correct properties and contents, and whether the web site basically runs.
I'd use several different virtual machines to test different deployment scenarios: one for a clean deploy; one for an upgrade from version n-1; etc. It's possible that the same, or similar, IVT tests could be run against each environment.
Even if you can't do this all at once, the thought process involved in this exercise should lead to a more formal definition of what your deployment environment really is. This will be helpful when you get a chance to embody that formal definition in actual tests.