An IntelliJ Scope can be used to restrict the files analysed during "Inspect Code".
Question: Is there a way to define a scope from the files that differ between two commits or between a commit (or branch) and the current working directory?
This would simplify finding problems freshly introduced in a feature branch and would be particularly helpful for old code bases where running the inspections over the whole project generates too many unrelated findings.
Note that the "uncommitted files" Scope does not help when reviewing a feature branch that already contains several commits.
There is a feature request for this; please feel free to vote for it:
https://youtrack.jetbrains.com/issue/IDEA-145053/Run-inspections-for-the-scope-of-selected-commits
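In the meantime, a partial manual workaround (assuming the project lives in git; the branch names below are placeholders) is to list the changed files on the command line and then assemble a custom scope from that list by hand:

    # files that differ between the feature branch and its merge base with main
    git diff --name-only main...feature

    # files that differ between a commit (or branch) and the current working directory
    git diff --name-only main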
Background
I have a ClearCase file that has changes on 3 different branches. I am refactoring this file so that, within the same directory, it only exists on one branch; the versions on the other branches I will move to their own special directories.
Question
How do I remove the versions of a file on different branches?
A simple cleartool rmname, done in a view set to the relevant branch, should be enough.
See a detailed description of that command in "About cleartool rmname and checkouts"
This is far safer than a cleartool rmelem, which completely deletes one or more elements.
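A minimal sketch of that workflow, run in a view configured for the relevant branch (the file name is hypothetical, and -nc just suppresses the comment prompt):

    cleartool checkout -nc .     # check out the parent directory
    cleartool rmname foo.c       # remove the name from this branch's version of the directory
    cleartool checkin -nc .      # check the directory back in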
First, let me say that yes, this question may be subjective. However, I believe that there is probably a 'best' answer, if all the relevant factors are taken into consideration. In any case, it's worth giving it a shot and asking :)
Let's say that I've three libraries, A, B, and C.
Library B uses library A.
Library C uses library A.
I want people to be able to use A, B, and C together, or to just take any combination of A, B, and C if they wish.
I want to be able to distribute the libraries with source code, so that people can build them themselves if they wish, or just grab and use individual files.
I don't really want to distribute them together in one large monolithic lump.
Apart from the sheer issue of bulk, there's a good reason that I don't want to do this. Let's say that B has an external dependency on some other library that it's designed to work with. I don't want to force someone who just wants to use C to have to link in that other library, just because B uses it. So lumping together A, B and C in one package wouldn't be good.
I want to make it easy for someone who just wants C, to grab C and know that they've got everything they need to work with it.
What are the best ways of dealing with this, given:
the language in question is Objective-C
my preferred delivery mechanism is one or more frameworks (but I'll consider other options)
my preferred hosting mechanism is git / GitHub
I'd rather not require a package manager
This seems like a relatively straightforward question, but before you dive in and say so, can I suggest that it's actually quite subtle. To illustrate, here are some possible, and possibly flawed, solutions.
CONTAINMENT / SUBMODULES
The fact that B and C use A suggests that they should probably contain A. That's easy enough to achieve with git submodules. But then of course the person using both B and C in their own project ends up with two copies of A. If their code wants to use A as well, which one does it use? What if B and C contain slightly different revisions of A?
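For reference, the containment approach itself is only a couple of commands (repository URLs are hypothetical):

    # inside B's repository: vendor a specific revision of A as a submodule
    git submodule add https://github.com/example/libA.git modules/A
    git commit -m "Add libA as a submodule"

    # consumers of B then have to remember to populate it
    git clone https://github.com/example/libB.git
    cd libB && git submodule update --init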
RELATIVE LOCATION
An alternative is to set up B and C so that they expect a copy of A to exist in some known location relative to B and C. For example, in the same containing folder as B and C.
Like this:
libs/
libA/
libB/ -- expects A to live in ../
libC/ -- expects A to live in ../
This sounds good, but it fails the "let people grab C and have everything" test. Grabbing C in itself isn't sufficient, you also have to grab A and arrange for it to be in the correct place.
This is a pain - you even have to do this yourself if you want to set up automated tests, for example - but worse than that, which version of A? You can only test C against a given version of A, so when you release it into the wild, how do you ensure that other people can get that version? What if B and C need different versions?
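Concretely, someone who only wants C ends up doing something like this by hand, and has to guess which revision of A to use (repository URLs are hypothetical):

    mkdir libs && cd libs
    git clone https://github.com/example/libA.git libA   # which tag or commit? the consumer has to know
    git clone https://github.com/example/libC.git libC   # expects ../libA to exist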
IMPLICIT REQUIREMENT
This is a variation on the "relative location" approach above - the only difference being that you don't set C's project up to expect A to be in a given relative location; you just set it up to expect A to be somewhere in the search paths.
This is possible, particularly using workspaces in Xcode. If your project for C expects to be added to a workspace that also has A added to it, you can arrange things so that C can find A.
This doesn't address any of the problems of the "relative location" solution though. You can't even ship C with an example workspace, unless that workspace makes an assumption about the relative location of A!
LAYERED SUBMODULES
A variation on the solutions above is as follows:
A, B and C all live in their own repos
you make public "integration" repos (let's call them BI and CI) which arrange things nicely so that you can build and test (or use) B or C.
So CI might contain:
- C.xcworkspace
- modules/
- A (submodule)
- C (submodule)
This is looking a bit better. If someone just wants to use C, they can grab CI and have everything.
They will get the correct versions, thanks to them being submodules. When you publish a new version of CI you'll implicitly be saying "this version of C works with this version of A". Well, hopefully, assuming you've tested it.
The person using CI will get a workspace to build/test with. The CI repo can even contain sample code, example projects, and so on.
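From the consumer's side this reduces to a single recursive clone, which pulls the pinned revisions of both C and A (URL hypothetical):

    git clone --recursive https://github.com/example/CI.git
    open CI/C.xcworkspace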
However, someone wanting to use B and C together still has a problem. If they just take BI and CI they'll end up with two copies of A. Which might clash.
LAYERED SUBMODULES IN VARIOUS COMBINATIONS
The problem above isn't insurmountable though.
You could provide a BCI repo which looks like this:
- BC.xcworkspace
- modules/
- A (submodule)
- B (submodule)
- C (submodule)
Now you're saying "if you want to use B and C together, here's a distribution that I know works".
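Maintaining such a combination repo mostly means bumping the submodule pointers to revisions you have tested together, something like this (the tag names are hypothetical):

    cd modules/A && git fetch && git checkout v1.2.0 && cd ../..
    git add modules/A
    git commit -m "Pin A at v1.2.0 (tested with B v2.0.1 and C v0.9.3)"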
This is all sounding good, but it's getting a bit hard to maintain. I'm now potentially having to maintain, and push, various combinations of the following repos: A, B, C, BI, CI, BCI.
We're only talking about three libraries so far. This is a real problem for me, but in the real world potentially I have about ten. That's gotta hurt.
So, my question to you is:
What would you do?
Is there a better way?
Do I just have to accept that the choice between small modules and a big monolithic framework is a tradeoff between better flexibility for the users of the module, and more work for the maintainer?
Libraries are like an onion: lots of layers. And layer violations make for a nasty onion; an inner layer cannot contain an outer layer.
Create 3 separate static library projects (assuming you may be targeting iOS): A, B, and C
B can include headers from A, C can include headers from A
B and C cannot include headers from each other. A cannot include headers from B or C
Create a Workspace for each combination of libraries you want to support
Add appropriate projects to workspace
Create a new project in each workspace to contain test app and/or unit tests for that combination
The key is the workspace. With the workspace, you can combine an arbitrary set of projects and, as long as their configurations are the same (Debug vs. Release), build/run/analyze/profile will properly determine dependencies (and you can set them up manually), build everything into a single derived data / products folder, and it'll just work.
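For what it's worth, the same dependency resolution works from the command line as well, which is handy for automated tests (the workspace and scheme names here are assumptions):

    # build LibC, and implicitly its dependency A, inside the combined workspace
    xcodebuild -workspace BC.xcworkspace -scheme LibC -configuration Debug build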
If you want to build, say, the C project standalone, then A will need to be installed as expected (typically into /usr/local/, but into ~/ works, too) and exactly as it would be on a customer's system (if you were to support binary library installs).
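That "installed as expected" step is nothing more exotic than copying the built headers and static library into place, roughly (the build output paths are hypothetical):

    sudo mkdir -p /usr/local/include/libA /usr/local/lib
    sudo cp build/Release/include/libA/*.h /usr/local/include/libA/
    sudo cp build/Release/libA.a /usr/local/lib/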
This is exactly how many of us manage our projects at Apple and it works quite well. In terms of management, keep it as simple as possible. You are unlikely to have an entire team devoted to build & configuration and, thus, your configurations should be simple.
If you were to honestly assess the situation and conclude that A will only ever be used by B, then fold B into A and be done with it. Writing re-usable APIs is incredibly difficult to do well. I've seen many a project get bogged down in trying to create a fully generalized solution for what should be just one specific use, wasting huge amounts of time in the process (and sometimes failing).
While you note
I'd rather not require a package manager
I'd still suggest CocoaPods to you. It handles all the other requirements, like deep dependency management; it is very friendly to git and is overall pretty simple to install and use.
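If you did go that route, the consumer-side experience is roughly this (the pod names are hypothetical; LibC's podspec would declare its dependency on LibA, so CocoaPods pulls in a matching version automatically):

    sudo gem install cocoapods       # one-time setup
    # a hand-written Podfile would contain just:  pod 'LibC'
    pod install                      # resolves LibC plus its declared dependency on LibA
    open MyApp.xcworkspace           # build via the generated workspace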
Still, this is not an exact answer to the requirements you've set.
When I am checking out an application, I get four options: HEAD, Branches, Versions and Dates. What do they mean? What is the difference between each of them?
A code repository is a tree of versions, each of which represents the state of the code at some particular point. It's possible to create a new branch of the tree from any point. Thus…
HEAD is the tip of the main trunk of the tree.
A branch is some other route through the tree of versions (e.g., to support a particular set of releases or develop a feature). If you ask to check out a branch, you typically get the tip of that branch.
A version represents an exact state of the code. In CVS, versions are per-file. (Other source control systems have global versioning.)
A date-based checkout represents getting the state of the code at a particular moment. This can be very useful for tracking down bugs.
The other thing that you'll see is a tagged version. That's where a name is given to a particular state of the tree (e.g., to represent an exact release).
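In CVS terms, those options map onto checkout flags roughly like this (the module, branch, tag and date values are placeholders):

    cvs checkout myproject                          # HEAD: tip of the main trunk
    cvs checkout -r release-1-0-branch myproject    # a branch: tip of that branch
    cvs checkout -r RELEASE_1_0 myproject           # a tag: one exact, named state of the tree
    cvs checkout -D "2009-06-01" myproject          # the state of the code on a given date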
Please take a look at Open Source Development with CVS, especially the Branches chapter. That explains how CVS works and what the head revision is and what branches are.
Okay so here is my situation:
Project A is in solution A; let's call its output a.dll.
Project B is in solution B; let's call its output b.exe.
Project B references a.dll.
Both solutions are under source code control in different repositories.
My question is: how can I ensure that Project A's output gets redirected to Project B's "External references" folder, overwriting the previous version of a.dll, regardless of what the local developer's path structure looks like? Is there a way to do this? Alternatively, could solution A invoke the build of solution B and then copy its output locally?
To be brief, automating builds across solutions without a 'common directory structure' is possible through the use of:
commandline parameters
environment variables
I would encourage you however to consider the "Convention over Configuration" mantra and think up a convention about the relative positions of solutions A and B.
Furthermore, it's possible to build projects and solutions using the MSBuild task. The binaries can be copied to your "External references" folder using the Copy task.
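A minimal command-line sketch of the same idea (Windows cmd; the EXTERNAL_REFS environment variable and the build output path are assumptions, with EXTERNAL_REFS pointing at solution B's "External references" folder on each developer's machine):

    rem build solution A, then push a.dll into solution B's external references folder
    msbuild SolutionA.sln /p:Configuration=Release
    copy /Y ProjectA\bin\Release\a.dll "%EXTERNAL_REFS%\a.dll"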