Using MSBuild Import to modularize a project

I have developed a large MSBuild project to build a portion of our solution. There's a lot going on: XML parsing/replacing, Windows services, remote copying, etc. As a result, the file has grown really difficult to manage, despite my best efforts to add decorative comments.
As a goof, I broke the main chunks of functionality out into separate files, like "XML.targets" and "Services.targets", and imported them into the main "Build.proj". The build still worked, and I immediately found it to be much more manageable.
However, all the info I have read on the Import feature of MSBuild says it should be used to import reusable targets, i.e., those that can be consumed by any MSBuild project without modification. The separate files I'm creating here are the opposite: specific to one project, and by default they will break if used with anything else unless modified.
So I guess what I'm asking is, even though I can... should I? Is there an inherent danger in using Import strictly for the purpose of organizing a large project? Is there a better way to do this?
Thanks

No, there is no inherent danger. I think it's a good decision to split a large project into several .targets files, each specific to a certain operation, since it reduces overall complexity. The idea behind creating reusable targets is that they should have as few dependencies on the other parts as possible. By analogy, you can think of separate .targets files as classes: the less coupled they are, the better, because a modification in one .targets file will then be less likely to break the whole process. You can take a piece of paper, draw your .targets files as points with your main project in the center, and draw all the connections between them. If one .targets file overrides a target from another, expects certain properties from it, or depends on it in some other way, then there is a connection. In the perfect scenario you'll get something like a star.
In short: you should if it reduces complexity.
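For illustration, here is a minimal sketch of the layout described in the question (the .targets file names come from the question; the property and target names are made up):
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <!-- Shared properties stay in the main file so the imported .targets files
       depend on as little as possible. -->
  <PropertyGroup>
    <OutputRoot>$(MSBuildThisFileDirectory)out\</OutputRoot>
  </PropertyGroup>
  <!-- Project-specific pieces pulled in purely for organization. -->
  <Import Project="XML.targets" />
  <Import Project="Services.targets" />
  <!-- The entry point only orchestrates targets defined in the imports
       (TransformXml and DeployServices are hypothetical names). -->
  <Target Name="Build" DependsOnTargets="TransformXml;DeployServices" />
</Project>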

Related

Can we run multiple feature files in the same package using Karate-gatling

I read in the documentation that we can run multiple feature files by adding new lines for different classpaths in the simulation class. Is there a way to run all the feature files belonging to the same package, just as we do with FeatureRunner files?
No, I personally think that will introduce maintainability issues. We will consider PRs, though, if anyone wants to contribute this.
If you really want this behavior, you should be able to write a small bit of Java code that scans a folder, loops over the feature files and builds the Gatling "scenarios".
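A hedged sketch of that idea in plain Java (the test-source root and package layout are assumptions; how you turn the resulting strings into scenarios follows the karate-gatling docs for your version):
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class FeatureScanner {
    // Scans a package folder under the test-source root and returns
    // "classpath:" strings, one per feature file found.
    public static List<String> scan(String packageDir) throws IOException {
        Path root = Paths.get("src/test/java"); // assumed test-source root
        try (Stream<Path> paths = Files.walk(root.resolve(packageDir))) {
            return paths
                .filter(p -> p.toString().endsWith(".feature"))
                .map(p -> "classpath:" + root.relativize(p).toString().replace('\\', '/'))
                .collect(Collectors.toList());
        }
    }
}
Each returned string can then be turned into its own Gatling scenario in the simulation class, just as you would when listing them by hand.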

Is it possible to add a whole directory of source files to CMake command add_executable?

The documentation of CMake's add_executable gives the following specification of the command:
add_executable(<name> [WIN32] [MACOSX_BUNDLE]
[EXCLUDE_FROM_ALL]
source1 [source2 ...])
I now have a rather large project with a lot of sources and was wondering whether it is possible to pass a directory to add_executable instead of specifying each source file individually. If not, are there any best practices or recommendations on how to approach this situation? I can't imagine that the only way this would work is by adding each source file individually. How would this work for (really) large projects? It doesn't seem like an elegant approach...
The best practice is indeed to list all files manually.
In particular, the CMake docs warn about using GLOB for this purpose:
We do not recommend using GLOB to collect a list of source files from your source tree. If no CMakeLists.txt file changes when a source is added or removed then the generated build system cannot know when to ask CMake to regenerate.
This point is somewhat controversial, as many developers prefer that the build system just adjusts automatically to newly added files. The price for this automation is an increase in fragility of the build scripts.
With a GLOB you will have to remember to manually re-run CMake whenever files are added or removed. You also have to ensure that the physical layout of the files on disk matches the logical layout of the projects that you want to build, and the latter point is arguably the bigger problem here. By decoupling the build system from the files on disk, the explicit approach adds an additional safety net, but you have to pay for it with increased build script maintenance costs.
The biggest disadvantage of the explicit approach is imho that if you forget to add a new file to the CMakeLists, you might puzzle over weird linker errors for a while before realizing your mistake. I personally find the maintenance overhead of this approach acceptable. Sure, you will have a lengthy file list in your build script, but you do not have to touch it that often and the changes will usually be trivial.
Since this point is somewhat controversial, I won't blame you if you want to use a GLOB for your project. Just be aware of the consequences and be prepared that all the cool kids will laugh at you if your build breaks one day because of this.
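To make the two options concrete, here is a small sketch (the target and file names are made up; the CONFIGURE_DEPENDS flag requires CMake 3.12 or later):
# Explicit listing: adding a file means editing this list, which itself
# triggers a CMake re-run.
add_executable(my_app
    src/main.cpp
    src/parser.cpp
    src/widget.cpp
)

# GLOB alternative: convenient, but CMake cannot notice new files on its own;
# CONFIGURE_DEPENDS mitigates this at some extra configure-time cost.
# file(GLOB_RECURSE MY_APP_SOURCES CONFIGURE_DEPENDS src/*.cpp)
# add_executable(my_app ${MY_APP_SOURCES})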

What is the best way to organize source code of a large Cocoa application in Xcode?

Here is what I'm looking for:
I'd like to separate pieces of functionality into modules or components of some sort to limit visibility between classes, so that not every class has access to every other class, which over time results in spaghetti code.
In Java & Eclipse, for example, I would use packages and put each package into a separate project with a clearly defined dependency structure.
Things I have considered:
Using separate folders for source files and using Groups in Xcode:
Pros: simple to do, almost no Xcode configuration needed
Cons: no compile-time separation of functionality, i.e. access to everything is only one #import statement away
Using Frameworks:
Pros: Framework code cannot access classes outside of the framework. This enforces encapsulation and keeps things separate
Cons: Code management is cumbersome if you work on multiple Frameworks at the same time. Each Framework is a separate Xcode project with a separate window
Using Plugins:
Pros: Similar to Frameworks, Plugin code can't access code of other plugins. Clean separation at compile-time. Plugin source can be part of the same Xcode project.
Cons: Not sure. This may be the way to go...
Based on your experience, what would you choose to keep things separate while being able to edit all sources in the same project?
Edit:
I'm targeting Mac OS X
I'm really looking for a solution to enforce separation at compile time
By plugins I mean Cocoa bundles (http://developer.apple.com/library/mac/#documentation/Cocoa/Conceptual/LoadingCode/Concepts/Plugins.html)
I have worked on some good-sized Mac projects (>2M SLOC in my last one in 90 xcodeproj files) and here are my thoughts on managing them:
Avoid dynamic loads like Frameworks, Bundles, or dylibs unless you are actually sharing the binaries between groups. These tend to create more complexity than they solve, in my experience. Plus they don't port easily to iOS, which means maintaining multiple approaches. Worst, having lots of dynamic libraries increases the likelihood of including the same symbols twice, leading to all kinds of crazy bugs. This happens when you include some "helper" class directly in more than one library. If it includes a global variable, the bugs are awesome, as different threads use different instances of the global.
Static libraries are the best choice in many if not most cases. They resolve everything at build time, allowing code stripping in your C/C++ and other optimizations not possible in dynamic libraries. They get rid of "hey, it loads on my system but not the customer's" (when you use the wrong value for the framework path). No need to deal with slides when computing line numbers from crash stacks. They catch duplicate symbols at build time, saving many hours of debugging pain.
Separate major components into separate xcodeproj. Really think about what "major" means here, though. My 90-project product was way too many. Just doing dependency checking can become a very non-trivial exercise. (Xcode 4 can improve this, but I left the project before we ever were able to get Xcode 4 to reliably build it, so I don't know how well it did in the end.)
Separate public from private headers. You can do this with static libs just as well as you can with Frameworks. Put the public headers in a different directory. I recommend each component have its own public include directory for this purpose.
Do not copy headers. Include them directly from the public include directory for the component. Copying headers into a shared tree seems like a great idea until you do it. Then you find that you're editing the copy rather than the real one, or you're editing the real one, but not actually copying it. In any case, it makes development a headache.
Use xcconfig files, not the build pane. The build pane will drive you crazy in these kinds of big projects. Mine tend to have lines like this:
common="../../common"
foo="$(common)/foo"
HEADER_SEARCH_PATHS = $(inherited) $(foo)/include
Within your public header path, include your own bundle name. In the example above, the path to the main header would be common/foo/include/foo/foo.h. The extra level seems a pain, but it's a real win when you import. You then always import like this: #import <foo/foo.h>. Keeps everything very clean. Don't use double-quotes to import public headers. Only use double-quotes to import private headers in your own component.
I haven't decided the best way for Xcode 4, but in Xcode 3, you should always link your own static libraries by adding the project as a subproject and dragging the ".a" target into your link step. Doing it this way ensures that you'll link the one built for the current platform and configuration. My really huge projects haven't been able to convert to Xcode 4 yet, so I don't have a strong opinion yet on the best way there.
Avoid searching for custom libraries (the -L and -l flags at the link step). If you build the library as part of the project, then use the advice above. If you pre-build it, then add the full path in LD_FLAGS. Searching for libraries includes some surprising algorithms and makes the whole thing hard to understand. Never drop a pre-built library into your link step. If you drop a pre-built libssl.a into your link step, it actually adds a -L parameter for the path and then adds -lssl. Under default search rules, even though you show libssl.a in your build pane, you'll actually link to the system libssl.so. Deleting the library will remove the -l but not the -L so you can wind up with bizarre search paths. (I hate the build pane.) Do it this way instead in xcconfig:
LD_FLAGS = "$(openssl)/lib/libssl.a"
If you have stable code that is shared between several projects, and while developing those projects you're never going to mess with this code (and don't want the source code available), then a Framework can be a reasonable approach. If you need plugins to avoid loading large amounts of unnecessary code (and you really won't load that code in most cases), then bundles may be reasonable. But in the majority of cases for application developers, one large executable linked together from static libraries is the best approach IMO. Shared libraries and frameworks only make sense if they're actually shared at runtime.
My suggestion would be:
Use Frameworks. They're the most easily reusable build artifact of the options you list, and the way you describe the structure of what you are trying to achieve sounds very much like creating a set of Frameworks.
Use a separate project for each Framework. You'll never be able to get the compiler to enforce the kind of access restrictions you want if everything is dumped into a single project. And if you can't get the compiler to enforce it, then good luck getting your developers to do so.
Upgrade to Xcode 4 (if you haven't already). This will allow you to work on multiple projects in a single window (pretty much like how Eclipse does it), without intermingling the projects. This pretty much eliminates the cons you listed under the Frameworks option.
And if you are targeting iOS, I very strongly recommend that you build real frameworks as opposed to the fake ones that you get by using the bundle-hack method, if you aren't building real frameworks already.
I've managed to keep my sanity working on my project, which has grown fairly large (in number of classes) over the past months, by forcing myself to practice Model-View-Controller (MVC) diligently, plus a healthy amount of comments and the indispensable source control (Subversion, then Git).
In general, I observe the following:
"Model" classes serialize data (it doesn't matter from where, and that includes the app's 'state') and are plain Objective-C 1.0 classes subclassed from NSObject (or custom "model" classes that inherit from NSObject). I chose Objective-C 1.0 mostly for compatibility, as it's the lowest common denominator, and I didn't want to be stuck rewriting "model" classes from scratch in the future because of a dependency on Objective-C 2.0 features.
View classes live in XIBs, with the XIB version set to support the oldest toolchain I need to support (so I can use the older Xcode 3 in addition to Xcode 4). I tend to start with the Apple-provided Cocoa Touch APIs and frameworks to benefit from any optimization/enhancement Apple may introduce as these APIs evolve.
Controller classes contain the usual code that manages the display/animation of views (programmatically as well as from XIBs) and the serialization of data from the "model" classes.
If I find myself reusing a class a few times, I explore refactoring and optimizing the code (measured using Instruments) into what I call "utility" classes, or into protocols.
Hope this helps, and good luck.
This depends largely on your situation and your own specific preferences.
If you're coding "proper" object-oriented classes, then you will have a class structure with methods and variables hidden from other classes where necessary. Unless your project is huge and built of hundreds of distinguishable modules, it's probably sufficient to just group classes and resources into folders/groups in Xcode and work with it that way.
If you've really got a huge project with easily distinguishable modules, then by all means create a framework. I would suggest, though, that this is only really necessary where you are using the same code in different applications, in which case creating a framework/extra project is a good way to effectively reuse code between projects. In practically all other cases it would probably just be overkill and much more complicated than needed.
Your last idea seems to be a mix of the first two. Plugins (as I understand you are describing them - tell me if I'm wrong) are just separate classes in the same project? This is probably the best way, and should be done (to an extent) in any case. If you are creating functionality to draw graphs (for example), you should section off a new folder/group and start your classes and functionality within that, only including those classes in your main application where necessary.
Let me put it this way. There's no reason to go over the top... but, even if just for your own sanity - or the maintainability of your code - you should always endeavour to group everything up into descriptive groups/folders.

What's the best approach to incremental compilation when building a DSL using Eclipse?

As suggested by the Eclipse documentation, I have an org.eclipse.core.resources.IncrementalProjectBuilder that compiles each source file and separately I also have a org.eclipse.ui.editors.text.TextEditor that can edit each source file. Each source file is compiled into its own compilation unit, but it can reference types from other (already compiled) source files.
Two tasks for which this is important are:
Compiling (to make sure the types we're using actually exist)
Autocomplete (to look up the type so we can see what properties/methods are present on it)
To accomplish this, I want to store a representation of all the compiled types in memory (referred to below as my "type store").
My question is two fold:
Task one above is performed by the builder and task two by the editor. So that they both have access to this type store, should I create a static store somewhere that they can both access, or does Eclipse provide a neater way to deal with this problem? Note that it is Eclipse, not me, that instantiates the builders and editors when they are needed.
When opening eclipse, I don't want to have to rebuild the whole project just so I can re-populate my type store. My best solution so far is to persist this data somewhere and then repopulate my store from that (perhaps upon project open). Is this how other incremental compilers typically do this? I believe Java's approach is to use a special parser that efficiently extracts this data from the class files.
Any insights would be really appreciated. This is my first DSL.
This is an interesting question and one that doesn't have a simple solution. I'll try to describe a potential solution and also describe in a little bit more detail how JDT accomplishes incremental compilation.
First, a bit about JDT:
Yes, JDT does read class files for some of its information, but only for libraries that don't have source code. And this information is really only used for editing assistance (content assist, navigation, etc).
JDT computes incremental compilation by keeping track of dependencies between compilation units as they are compiled. This state information is stored on disk and retrieved and updated after each compile.
As a more complete example, let's say that after a full build, JDT determines that A.java depends on B.java, which depends on C.java.
If there is a structural change in C.java (a structural change is a change that can affect outside files (e.g., adding/removing a non-private field or method)), then B.java will be recompiled. A.java will not be recompiled since there was no structural change in B.java.
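As a made-up illustration of that dependency chain (each class in its own source file):
// C.java: adding or removing this public field is a structural change
public class C {
    public int value;
}

// B.java: depends on C, so it is recompiled when C changes structurally
public class B {
    public int read(C c) {
        return c.value;
    }
}

// A.java: depends only on B; as long as B's visible shape is unchanged,
// A is not recompiled
public class A {
    public int call(B b, C c) {
        return b.read(c);
    }
}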
After this bit of clarification on how JDT works, here are some possible answers to your questions:
Yes. This must be done through statically accessible global objects. JDT does this through the JavaCore and JavaModelManager objects. If you don't want to use global singletons, then you can make your type store available through your plugin's Bundle activator instance. The e4 project does allow dependency injection, which is probably even better (but is not really a part of the core Eclipse APIs).
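A rough sketch of the activator approach (the plug-in class and the shape of the type store are hypothetical; JavaCore/JavaModelManager are the JDT equivalents mentioned above):
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.eclipse.core.runtime.Plugin;
import org.osgi.framework.BundleContext;

public class MyDslPlugin extends Plugin {
    private static MyDslPlugin instance;

    // In-memory type store, keyed by qualified type name. Object is just a
    // placeholder for whatever compiled-type representation you use.
    private final Map<String, Object> typeStore = new ConcurrentHashMap<>();

    @Override
    public void start(BundleContext context) throws Exception {
        super.start(context);
        instance = this;
    }

    public static MyDslPlugin getDefault() {
        return instance;
    }

    public Map<String, Object> getTypeStore() {
        return typeStore;
    }
}
Both the builder and the editor can then reach the store via MyDslPlugin.getDefault().getTypeStore().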
I think persisting the information on the file system is your best bet. The only real way to determine incremental compile dependencies is to do a full build, so you need to persist the information somewhere. Again, this is how JDT does it. The information is stored in your workspace's .metadata directory, somewhere in the org.eclipse.core.resources plugin. You can have a look at the org.eclipse.jdt.internal.core.builder.State class to see the implementation.
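And a matching sketch for persisting the store in the plug-in's state location (again, the MyDslPlugin class and the serializable map representation are assumptions):
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.HashMap;
import java.util.Map;

public final class TypeStorePersistence {
    private static final String FILE_NAME = "typestore.dat";

    private static File stateFile() {
        // getStateLocation() points into the workspace .metadata area for this plug-in
        return MyDslPlugin.getDefault().getStateLocation().append(FILE_NAME).toFile();
    }

    // Assumes the stored type representations are Serializable.
    public static void save(Map<String, Object> store) throws Exception {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(stateFile()))) {
            out.writeObject(new HashMap<>(store));
        }
    }

    @SuppressWarnings("unchecked")
    public static Map<String, Object> load() throws Exception {
        File f = stateFile();
        if (!f.exists()) {
            return new HashMap<>();
        }
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(f))) {
            return (Map<String, Object>) in.readObject();
        }
    }
}
On project open you could copy the loaded map back into the in-memory store instead of doing a full build.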
So, this may not be the answer you are looking for, but I think this is the most promising way to approach your problem.

What is the MSI component generation best practice?

Visual Studio Installer states that it is a best practice to install each file as its own installer component. The heat utility provided with WiX also seems to follow the practice of putting every file in its own component.
InstallShield's component wizard follows InstallShield's setup best practice of placing portable executable files in their own components but grouping all other files (e.g. unversioned files) by common destination folder.
The advantage of practice one (each file in its own component) is that each file is set up as a key file, which is important if you want these files to trigger repairs. It also makes automating component creation (e.g. with heat) easier, since you are creating a component for each file.
The disadvantages of practice one include the overhead of managing so many components and the bloating of the registry after the application is installed.
An advantage of practice two can be seen in an install that puts hundreds of graphics files into one directory. If you do not care about repair functionality, is there any reason to create hundreds of components for this install?
These two practices conflict, and I want to know which one people actually use, and why.
I always use the Microsoft approach (something similar to what InstallShield does):
http://msdn.microsoft.com/en-us/library/aa368269(VS.85).aspx
I think it's the best because:
- important files (EXE, DLL etc.) have their own component, so they can be repaired easily
- resource files are grouped together
- it gives an optimal component count (not so many that the install takes a long time, but enough to allow an easy repair)
I also noticed that most commercial setup authoring tools use this approach.
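For illustration, here is what that grouping might look like in WiX (Ids, Guids, and file names are made up):
<Directory Id="INSTALLFOLDER" Name="MyApp">
  <!-- Each portable executable gets its own component and is its own key file. -->
  <Component Id="MyApp_exe" Guid="PUT-GUID-HERE">
    <File Id="MyApp_exe" Source="MyApp.exe" KeyPath="yes" />
  </Component>
  <Component Id="Engine_dll" Guid="PUT-GUID-HERE">
    <File Id="Engine_dll" Source="Engine.dll" KeyPath="yes" />
  </Component>
  <!-- Unversioned resources for one destination folder grouped into a single component. -->
  <Component Id="Graphics_resources" Guid="PUT-GUID-HERE">
    <File Id="logo_png" Source="graphics\logo.png" KeyPath="yes" />
    <File Id="splash_png" Source="graphics\splash.png" />
  </Component>
</Directory>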
I've written about this in the past and I'll try to find a link to it. I think you already understand the question and it's just time for you to decide what is important to you.
For me, I work on installs with 15,000+ files, and we only service with major upgrades. For "Program Executables" we follow 1:1 principles (a must for COM, services, shortcuts and so on anyway), but for content/data files we actually use a one-to-many, no-key-file approach to cut down on our number of components. Sure, that means we won't be able to create an MSP that services just one or two content files here and there, but for our business needs that's simply not important to us.
Resiliency was a bit of a four-letter word to us, so having fewer key files makes us happier anyway. :-) BTW, VDPROJ also makes every registry key a key file of its own component, and that was quite painful for us, triggering unneeded repairs.
All of this aside, for anyone who doesn't fully understand all of this, I'd stick to the 1:1 pattern until you come across a situation where you don't want to anymore and you understand the impact of making that choice.