I'm working on a C++ project built with CMake and Ninja, with approximately 1200 build targets, on a 64-thread machine.
There's one translation unit that takes 10 minutes to compile; most others are comparatively fast, so a build of all the other targets together (building on all threads) takes only about 9 minutes. The slow translation unit is fairly independent of the rest, so it doesn't have to be scheduled late, but as it turns out it is scheduled late by default. A complete build therefore takes me between 15 and 20 minutes, and at the end only one thread is still working while almost all other targets are done. The build would be faster for me if the slow translation unit were scheduled first, blocking one thread for about 10 minutes while all the other threads work on the rest of the project, so that the entire project is built within 10 minutes.
Is there a way in CMake or Ninja to shift the scheduling priorities, i.e. to mark slow or "please build early" targets, without messing up dependencies?
As of February 27, 2021, the answer is no. There are some open issues (#232, #376) and an abandoned PR (#1333) on GitHub requesting this feature in base Ninja. CMake does not provide any way to prioritize a target (at least through 3.20) either.
Messing with dependencies (even order-only) doesn't help here (as you likely know) because that would just force smaller targets to start either completely before or completely after the long target. A priority hint is what's truly needed here.
The only workaround I can think of (and it's not a great one) is to split your long target out into a separate ExternalProject and create a superbuild that builds the large target and the independent portion at the same time. This would require significant restructuring and would be a non-starter for a lot of projects. It might be worth the pain if you're losing a lot of development time to this issue, though.
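For illustration, here is a rough sketch of what such a superbuild could look like (the directory and target names slow_lib and fast_part are invented for this example):

# Top-level superbuild CMakeLists.txt (sketch only).
cmake_minimum_required(VERSION 3.14)
project(superbuild NONE)

include(ExternalProject)

# Sub-project containing the slow translation unit; it has no dependency
# edge to anything else, so its build step can start right away.
ExternalProject_Add(slow_lib
  SOURCE_DIR      ${CMAKE_CURRENT_SOURCE_DIR}/slow_lib
  CMAKE_ARGS      -DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}
  INSTALL_COMMAND ""
)

# The rest of the project, built as a second external project that can run
# in parallel with slow_lib because there is no DEPENDS between them.
ExternalProject_Add(fast_part
  SOURCE_DIR      ${CMAKE_CURRENT_SOURCE_DIR}/fast_part
  CMAKE_ARGS      -DCMAKE_BUILD_TYPE=${CMAKE_BUILD_TYPE}
  INSTALL_COMMAND ""
)

Keep in mind that each external project runs its own nested build with its own job count, so you may have to tune how many jobs each sub-build gets to avoid oversubscribing the machine.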
Related
I'm a Community College instructor grading student C++ coding assignments. Been doing the same task all semester. Suddenly, this morning, CLion is building extremely slowly, perhaps even hanging, the second time I build/run a project. WTF? The projects are very small. One source file, one header, no libraries.
What changed? And why would a second build be the problem? It's usually first builds that are slow.
What changed? My hard-drive backup software. I told my auto-backup to take 2 hours off and have had no problems since.
I was recently forced to switch from the Code42 CrashPlan software I've been using for years to Carbonite. Code42 is getting out of the end-user backup business.
Note that I believe this problem is at least 95% user error, in how I configured my backup, and max 5% anything to do with Carbonite's implementation. Maybe their file locking strategy is different from CrashPlan's; I don't know.
I did think twice before I configured my CLion Projects folder to be backed up. I knew that backing up object files would be a waste of backup cycles/space. But I was in a hurry and wanted my solution source code to get backed up by Carbonite until I can check it into a repository of some sort at the end of the semester. I'm pretty sure I can go in and refine my backup strategy to NOT include the object/executable folders.
If I build my solution from VS, it takes less than a minute to do all the checks and tell me all the projects have been built. If I do the same in Team Foundation Build, it takes closer to 20 minutes.
I define as "items to build" only the solution (.sln file), and the solution has 32 projects.
Check the build definition workspace and make sure you are only pulling down the code you need for the build.
Also, TFS does a clean compile, i.e. it pulls down all of the code and then rebuilds everything from scratch. How long does a clean build take locally? Less than a minute to build 32 projects seems quick, so I suspect that your local build is incremental rather than full.
Run a build with diagnostic logging; this should give you a clue about which parts are taking the time.
I was trying out TestCocoon the other day, and everything seemed great. I compiled my code using cscl, cslib and cslink, and I was expecting this to take care of all the instrumentation. I get some .csmes files and .exe.csmes files, but when I load them into the CoverageBrowser I cannot see anything relevant. No covered/uncovered lines. All the lines are grey.
Is anything else needed in order for TestCocoon to report coverage? Do I need to modify my source files? I also posted on their forums here, but no result:
http://www.testcocoon.org/forum/viewtopic.php?f=8&t=44
I tried this tool with a few projects using Visual Studio 2008, and I found:
Pros:
- it can collect results from multiple runs; you can run your software on different machines and gather the results together
- it has a useful GUI for browsing results
- you can merge coverage from many modules and analyse it as a whole application
- the forum works; I submitted two problems and got fixes implemented within a few days
- it works almost without any problems (I found two minor compilation problems) with quite complicated sources: tons of templates, boost::spirit parsers, other Boost stuff (including meta-programming modules etc.), STL, Qt (everything together)
- well documented
- it's free
Cons:
- instrumentation is definitely slow
- multi-process compilation of a single project using Visual Studio 2008 doesn't work; only one file at a time is compiled, which makes building slower (you will get better performance building a whole solution with many projects)
At the moment I haven't tried to use this tool for continuous coverage measurement.
Either way, in my opinion it's worth a try.
BTW, Tony, PC-Lint is a static-analysis tool, isn't it? Interesting idea to compare it with a dynamic-analysis tool...
TestCocoon (now at 1.6.7) works well with the small C code bases we tend to unit test. The performance impact seems about normal compared with other instrumentation methods we've used.
We are able to extract coverage information in our makefiles and the coverage browser is very useful.
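For what it's worth, the makefile integration can be as simple as routing compilation and linking through the TestCocoon wrappers mentioned in the question. The following is only a sketch (the wrapper names are taken from the thread; the flags and file names are assumptions):

# Sketch: use the TestCocoon wrappers instead of cl/link so every object is
# instrumented and the link step also emits the .csmes instrumentation database.
CC = cscl
LD = cslink

OBJS = foo.obj bar.obj

app.exe: $(OBJS)
	$(LD) /OUT:app.exe $(OBJS)   # also writes app.exe.csmes

%.obj: %.c
	$(CC) /c $< /Fo$@

# Running the instrumented app.exe produces an execution report that
# CoverageBrowser loads together with the .csmes files.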
Don't use TestCocoon. I am currently using it, and it's shoddy as hell. Pay for something better (it will cost a lot). It is the ultimate death sentence, seriously, don't do it. Whatever you do, stay away from TestCocoon at all costs. Worst move ever. You might as well sell your kids for drug money.
I have an MSBuild task like this to sign all the output modules of our project.
<SignFile Condition="Exists('$(OutputPath)\%(FilesToSign.identity)')"
          CertificateThumbprint="$(THUMBPRINT)"
          SigningTarget="$(OutputPath)\%(FilesToSign.identity)"
          TimestampUrl="http://timestamp.verisign.com/scripts/timestamp.dll" />
It takes quite a while (10 minutes or more) when I have many files. Is it possible to run the signing in parallel, or speed it up in some other way? (I am trying to sign more than 100 files.)
Another way of speeding up the signing is to remove the TimestampUrl parameter. It may not be good enough for a release build (to not have a time stamp on the signature), but it is good enough for a development build.
And it speeds up the signing process by 80-90%.
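One way to wire that up is to keep two copies of the task, one with and one without the timestamp, selected by the build configuration. A sketch (the Release check is just an example condition):

<!-- Sketch: only timestamp signatures in Release builds; other builds skip
     the round-trip to the timestamp server entirely. -->
<SignFile Condition="'$(Configuration)' == 'Release' And Exists('$(OutputPath)\%(FilesToSign.identity)')"
          CertificateThumbprint="$(THUMBPRINT)"
          SigningTarget="$(OutputPath)\%(FilesToSign.identity)"
          TimestampUrl="http://timestamp.verisign.com/scripts/timestamp.dll" />

<SignFile Condition="'$(Configuration)' != 'Release' And Exists('$(OutputPath)\%(FilesToSign.identity)')"
          CertificateThumbprint="$(THUMBPRINT)"
          SigningTarget="$(OutputPath)\%(FilesToSign.identity)" />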
The only way to do a parallel build with MSBuild is to have different instances of MSBuild, and thus different project files; I don't think that's recommended here. You cannot run tasks or targets in parallel, but you can build projects in parallel (so you can create several project files with one target in each). You may find more details here: How to run tasks in parallel in MSBuild.
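As a sketch of that approach (the chunk project names are invented; you would split FilesToSign across them however you like), a driver target can hand the per-chunk projects to the MSBuild task with BuildInParallel:

<!-- Driver target (sketch). Each Sign_N.proj is a small project whose default
     target runs SignFile on its own subset of the output files. Note that
     BuildInParallel only has an effect when msbuild is started with /m. -->
<Target Name="SignAllInParallel">
  <ItemGroup>
    <SigningChunk Include="Sign_1.proj;Sign_2.proj;Sign_3.proj;Sign_4.proj" />
  </ItemGroup>
  <MSBuild Projects="@(SigningChunk)"
           BuildInParallel="true"
           Properties="Configuration=$(Configuration);OutputPath=$(OutputPath)" />
</Target>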
Moreover, I think you will be limited by your disk access speed and not by your memory.
I don't know the SignFile task well enough to give advice on how to optimize it, though, sorry.
About 2 months ago I took over the build process at my current company. Even though I don't have much knowledge of it, I was the only one with enough time, so I didn't have much choice.
The situation is not that good, and I would like to do the following:
Label files in SourceSafe with a version (for example, ProjectName PV 1.2)
Get files from SourceSafe to a specific directory
Build VB6/C++/C# projects (yes, there are all kinds of them)
Build InstallShield setups
For now this is partly done using batch scripts (one for labeling and getting, one for building, etc.), so when a build starts I pretty much have to babysit it.
A good part of this code could be reused.
Any recommendations on how to do this better? One big problem is the whole bunch of dependencies between projects. Also, labeling has to increment the version and, if necessary, change PV to EV.
I would like to minimize user interaction as much as possible: one click on one build script (Spolsky is god) and everything is done, with no need to increment the version, set where to get files, and similar stuff.
Is batch scripting the best way to go? Should I do some of the functionality with MSBuild? Are there any other options?
Specific code is not needed; for now I just need an idea of how to improve things, even though it wouldn't hurt.
Tnx,
Marko
Since you already have a build system (even though some of it is currently "manual"), whatever you do, don't start over from scratch.
(1) Make sure you have a test machine (or virtual machine) on which to work. That way you can make changes and improvements without having to worry about breaking anything.
(2) Put all of your build scripts and tools in version control, not just the source code. Then as you make changes, see if they work. If they do, then save them to version control. If they don't, then roll them back.
(3) Choose one area to work on at a time. Don't try to do everything at once. Going from a lot of manual work to "one-click" will take time no matter what build system you're working with.
Sounds like you want a continuous integration solution, like CC.Net. It has configuration options to do all the things you want and a great community to answer questions.
Also, batch scripting is probably not a good option. Sophisticated build and integration tools will let you feed parameters into the build and create different builds for different environments (test, production, etc.). Batch scripting will involve a lot of hand-coding and glue.
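For a rough idea of what that looks like in CC.Net, here is a sketch of a project block (the paths, names, and credentials are invented, and the exact element names vary between CC.Net versions, so treat this as an outline rather than a working config):

<cruisecontrol xmlns:cb="urn:ccnet.config.builder">
  <project name="MyProduct">
    <triggers>
      <intervalTrigger seconds="300" />
    </triggers>
    <sourcecontrol type="vss">
      <executable>C:\Program Files\Microsoft Visual SourceSafe\ss.exe</executable>
      <project>$/MyProduct</project>
      <username>builduser</username>
      <password>secret</password>
    </sourcecontrol>
    <tasks>
      <msbuild>
        <executable>C:\Windows\Microsoft.NET\Framework\v3.5\MSBuild.exe</executable>
        <projectFile>MyProduct.sln</projectFile>
        <buildArgs>/p:Configuration=Release</buildArgs>
        <targets>Build</targets>
      </msbuild>
    </tasks>
  </project>
</cruisecontrol>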