Apache Maven: Putting it All Together

Enter, Apache Maven:
Like many things in the software world, Maven seems to be a "big picture" technology, where one has to see the whole picture in order to truly understand the roles and significance of each of its components. I am now at the beginning of my journey to see & understand the whole shebang.
I've pored through the very well-organized Maven documentation center, completing both their 5- and 30-minute tutorials and reading various other documents & articles. I am now attempting to build my first Maven project inside Eclipse, and have installed/configured the m2eclipse plug-in (from Sonatype) to help with this process. I've spent a good amount of time setting up my project's pom.xml file, and now I'm bottlenecked by so many questions that I thought it was high time to come over here and ask for some nudges in the right direction.
What's the difference between a POM and a settings.xml file? What type of info gets configured in settings.xml that POMs omit? Besides settings.xml, are there any other "standard" config files one should utilize?
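(For concreteness, a minimal sketch of the kind of thing that belongs in settings.xml rather than in a POM: user- or machine-specific configuration such as mirrors and repository credentials, which shouldn't travel with the project. The IDs, URL, and credentials below are all placeholders.)

In ~/.m2/settings.xml:

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
  <mirrors>
    <mirror>
      <id>internal-mirror</id>                     <!-- placeholder id -->
      <mirrorOf>central</mirrorOf>                 <!-- redirect requests for central -->
      <url>https://repo.example.com/maven2</url>   <!-- placeholder URL -->
    </mirror>
  </mirrors>
  <servers>
    <server>
      <id>internal-mirror</id>                     <!-- credentials for the mirror above -->
      <username>builduser</username>
      <password>secret</password>
    </server>
  </servers>
</settings>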
The Maven "Build Life Cycle" is a traditional sequence starting with validate, compile, test, ... and ending with a deploy. How do these life cycle phases map (or don't they) to goals? Does each phase have its own set of goals? Does one configure goals inside the POM? Where does one configure (add, remove, modify) life cycle phases?
I'm choking on the concept of profiles altogether. I've read the basic introduction on them [here](http://maven.apache.org/guides/introduction/introduction-to-profiles.html) and am still not "getting" them. Can someone provide a context-specific example of when/how profiles would be useful?
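(One common, concrete use of profiles is environment-specific builds. As a sketch, with made-up property names and values, a POM could carry:

<profiles>
  <profile>
    <id>production</id>
    <properties>
      <db.url>jdbc:postgresql://prod-host/app</db.url>   <!-- placeholder value -->
    </properties>
  </profile>
</profiles>

Running mvn package -P production then builds with the production db.url, while a plain mvn package uses whatever default the POM declares.)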
I do apologize for the multiple sub-questions, but I figured it would be better to make this question a one-stop shop for basic Maven comprehension than to plague the community with 20 micro-questions. Thanks in advance for any clarification on these.

Well, two years too late, but I've started to create one of those big pictures. It can be found at github.com/benjaminfoo/MavenBigPicture. It's quite a lot of content, which I collected over the last half year while using Maven at an enterprise level.

Sorry, but I cannot answer your questions in detail, because it's a lot of stuff. I also tried to learn Maven from the material you write about, but what gave me more insight was the book "Maven: The Definitive Guide" (http://shop.oreilly.com/product/9780596517335.do).
I can also only suggest starting small. Take just a simple Java project and get it running with a simple POM; get bigger with modules later. Forget about the rest at the beginning. Try to understand (and feel) what Maven is and how it ticks. Later, start trying to understand how the repositories are organized and how to bind your Maven build to your own repository, such as Artifactory. Try to learn it step by step with a real project (though not a high-priority or private one). A near-minimal POM is sketched below.
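To make that concrete: with sources under the standard src/main/java layout, a near-minimal POM like this sketch (the coordinates are placeholders) is enough for mvn package to compile and jar the project:

<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>     <!-- placeholder coordinates -->
  <artifactId>hello</artifactId>
  <version>1.0-SNAPSHOT</version>
  <packaging>jar</packaging>
</project>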
I have worked with Maven for about two years now and I still don't feel like a pro. It's quite a complex beast and very powerful; mastering it takes some time, I guess (or maybe I am not that smart ;-)).
I don't know whether it helps you or not, but maybe it's something. Sorry, but I cannot write a new tutorial here. ;-)

Related

Tiny CLion projects slow to build a second time

I'm a Community College instructor grading student C++ coding assignments. Been doing the same task all semester. Suddenly, this morning, CLion is building extremely slowly, perhaps even hanging, the second time I build/run a project. WTF? The projects are very small. One source file, one header, no libraries.
What changed? And why would a second build be the problem? It's usually first builds that are slow.
What changed? My hard-drive backup software. I told my auto-backup to take 2 hours off and have had no problems since.
I was recently forced to switch from the Code42 CrashPlan software I've been using for years to Carbonite. Code42 is getting out of the end-user backup business.
Note that I believe this problem is at least 95% user error, in how I configured my backup, and max 5% anything to do with Carbonite's implementation. Maybe their file locking strategy is different from CrashPlan's; I don't know.
I did think twice before I configured my CLion Projects folder to be backed up. I knew that backing up object files would be a waste of backup cycles/space. But I was in a hurry and wanted my solution source code to get backed up by Carbonite until I can check it into a repository of some sort at the end of the semester. I'm pretty sure I can go in and refine my backup strategy to NOT include the object/executable folders.

Minecraft won't run. Multiple items cannot be resolved to a type

I just finished skating around that infamous "cannot load main class start" thing, and I got blindsided with a sea of errors:
A friend suggests it might have something to do with com.google, but ultimately can't help me.
I haven't made a single alteration to any of the code so far. Eclipse just started up unable to run and stayed that way. I think there's a way to fix it, but it would require making precise changes at the site of every single error; work that could be wiped out by a cleanup if I'm proved wrong.
Anyone have a clue what the issue is? Thank you for the trouble.
UPDATE: Adding guava as a library relieved the error involving com.google, but threw in a handful of others. This one class file contains 3 of the most common unresolved types I've seen scattered throughout: Logger/LogManager, PropertyMap, and CrashReport.
Your general problems revolve around not having dependencies in the build path. Eclipse's error messages are pretty clear about this; e.g. if a package name is underlined in red and can't be found, then that means it can't be found, and the obvious solution is to add the library that provides it, so that it can be found.
In virtually all cases here, a Google search for the missing packages and classes will lead you to the libraries that contain them.
For each unresolved dependency, find the library, add it to the build path, then move on.
I also suggest consulting the documentation that comes with the source code you are attempting to compile, which often simply lists the dependencies, thus saving you the trouble of hunting them down as you go.
While we could do the internet searches for you and hash this out one step at a time, it's both better and faster for you to do it yourself. Better because if you're messing around with Minecraft source, having at least a basic knowledge of how your tools work is going to help you (I also suggest some of the material at http://docs.oracle.com/javase/tutorial/). Faster because the turnaround time of typing package names into the Google search box is a heck of a lot faster than constantly updating your question here and waiting for replies.
It looks like you are missing the Google Collections package, which is now called Guava.
Download the jar file, and add it as a library.
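(If the project were built with Maven rather than a hand-maintained Eclipse build path, the equivalent fix would be a dependency declaration; the version below is illustrative, use whatever the code actually needs:)

<dependency>
  <groupId>com.google.guava</groupId>
  <artifactId>guava</artifactId>
  <version>18.0</version>   <!-- illustrative version -->
</dependency>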

IntelliJ IDEA updating classes takes too much time

I really like IDEA, but when I work with a webapp running on Tomcat and I modify only a single Java class file, I have to do an "update classes and resources", and it takes much more time than in Eclipse. In Eclipse it's instant; at least I don't notice anything. In IDEA it does a make, updates caches, and I don't know what else, but it's really annoying.
Why is that and how can I solve this?
The update time depends on your project and its configuration in IDEA. Normally it should not take too long, as only the required steps are performed. Compilation is incremental and should be nearly instant. In order to understand why it takes long for your project, we'll need a sample project and the exact steps to reproduce it; please file an issue in our issue tracker.
If you want really fast updates, you may consider using JRebel; it has a plug-in for IDEA.
Not so with IntelliJ 10.x. Updates don't require a complete build and redeployment. Try the new version.
I am not sure, but you can check your Project Settings. There, in the Modules section, you can mark unnecessary folders as excluded.
This might speed up the process, as the unnecessary files are no longer indexed.

build script - how to do it

About 2 months ago I took over the build process at my current company. Even though I don't have much knowledge of it, I was the only one with enough time, so I didn't have much choice.
The situation is not that good, and I would like to do the following:
Label files in SourceSafe with a version (for example, ProjectName PV 1.2)
Get files from SourceSafe to a specific directory
Build VB6/C++/C# projects (yes, there are all kinds of them)
Build InstallShield setups
For now this is partly done using batch scripts (one for labeling and getting, one for building, etc.), so when a build starts I pretty much have to babysit it.
A good part of this code could be reused.
Any recommendations on how to do it better? One big problem is the whole bunch of dependencies between projects. Also, labeling has to increment the version and, if necessary, change PV to EV.
I would like to minimize user interaction as much as possible. One click on one build script (Spolsky is god) and all is done: no need to increment the version, set where to get files, and similar stuff.
Is batch scripting the best way to go? Should I implement some of the functionality with MSBuild? Are there any other options?
Specific code is not needed; for now I just need a way to improve the process, even though code wouldn't hurt.
Tnx,
Marko
Since you already have a build system (even though some of it is currently manual), whatever you do, don't start over from scratch.
(1) Make sure you have a test machine (or Virtual Machine) on which to work. Thus you can make changes and improvements without having to worry about breaking anything.
(2) Put all of your build scripts and tools in version control, not just the source code. Then as you make changes, see if they work. If they do, then save them to version control. If they don't, then roll them back.
(3) Choose one area to work on at a time. Don't try to do everything at once. Going from a lot of manual work to "one-click" will take time no matter what build system you're working with.
Sounds like you want a continuous integration solution, like CC.Net. It has configuration options to do all the things you want and a great community to answer questions.
Also, batch scripting is probably not a good option. Sophisticated build and integration tools will let you feed parameters into the build and create different builds for different environments (test, production, etc.). Batch scripting will involve a lot of hand-coding and glue.
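As a sketch of what "feeding parameters into the build" can look like with MSBuild (the paths, the Version property, and the labeling step below are placeholders, not a drop-in solution):

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="BuildAll">
  <PropertyGroup>
    <!-- default version if none is passed on the command line -->
    <Version Condition="'$(Version)' == ''">1.2</Version>
  </PropertyGroup>
  <Target Name="BuildAll">
    <!-- build one project in Release; repeat or batch for the others -->
    <MSBuild Projects="src\App\App.csproj" Properties="Configuration=Release" />
    <!-- stand-in for the SourceSafe labeling step -->
    <Exec Command="echo labeling build as PV $(Version)" />
  </Target>
</Project>

Invoked as msbuild build.proj /p:Version=1.3, each run can label and build with a different version without editing any script.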

Mercurial practices: use with IDEs and scalability

I am not an experienced user of SCM tools, even though I am convinced of their usefulness, of course.
I used some obscure commercial tool in a former job, Perforce in the current one, and played a bit with TortoiseSVN for my little personal projects, but I disliked having lots of .svn folders all over the place, making searches, backups and such more difficult.
Then I discovered the appeal of distributed SCM and chose to go the (apparently simpler than Git) Mercurial way, still for my personal, individual needs. I am in the process of learning to use it properly, having read part of the wiki and being in the middle of the excellent PDF book.
I often see it repeated, for example in Mercurial working practices: "don't hesitate to use multiple trees locally. Mercurial makes this fast and light-weight." and "for each feature you work on, create a new tree."
These are interesting and sensible pieces of advice, but they clash a bit with my habits from centralized SCM, where we have a "holy" central repository in which branches are carefully planned (and handled by administrators), changelists must be checked by (senior) peers and must not break the builds, etc. :-) Starting to work on a new branch takes quite some time...
So I have two questions in the light of above:
How practical is it to do lots of clones, in the context of IDEs and such? What if the project has configuration/settings files, makefiles or Ant scripts or shell scripts or whatever, needing path updates? (Yes, probably a bad idea...) For example, in Eclipse, if I want to compile and run a clone, I have to create yet another project, tweaking the Java build path, the Run/Debug targets, and so on, unless an Eclipse plugin eases that task. Am I missing some facility here?
How does that scale? I have read that Hg is OK for large code bases, but I am perplexed. At my job, we have a Java application (well, several around a big common kernel) of some 2 million lines, weighing some 110 MB for the code alone. Doing a clean compile on my old (2004) Windows workstation takes some 15 minutes to generate the 50 MB of class files! I don't see myself cloning the whole project to change 3 files. So what are the practices here?
I haven't yet seen these questions addressed in my readings, so I hope this will make a useful thread.
You raise some good points!
How practical is it to do lot of clones, in the context of IDEs and such?
You're right that it can be difficult to manage many clones when IDEs and other tools depend on absolute paths. Part of it can be solved by always using relative paths in your configuration files -- making sure that a source checkout can compile from any location is a good goal in itself, no matter what revision control system you use :-)
But when you cannot or don't want to bother with several clones, then please note that a single clone can cope with multiple branches. The "hgbook" emphasizes many clones since this is a conceptually simple and very safe way of working. When you get more experience, you'll see that you can use multiple heads in a single repository (perhaps naming them with bookmarks) to do the same.
How does that scale?
Cloning a 110 MB repository should be quite fast: it depends on how long it takes to write 110 MB to your disk. In a recent message to the Mercurial mailing list it was reported that cloning 6.3 GB took 4 minutes; scaling that down to 110 MB gives about 4 seconds. That should be fast enough that your tea is still warm :-) Part of the trick is that the history data are simply hard-linked (yes, also on Windows), so it is only a matter of writing out the files in the working copy.
PhiLo: I'm new at this, but Mercurial also has "internal branches" that you can use within a single repository instead of cloning it.
Instead of
hg clone toto toto-bug-434
you can do
cd toto
hg branch bug-434    # marks the working directory; the first commit creates the branch
...                  # edit and test
hg commit -m "work on bug 434"
hg update default    # switch back; 'hg update bug-434' returns to the branch later
to create a branch and switch back and forth. Your built files that are not under revision control won't go away when you switch branches; some of them will just go out of date as the underlying sources are modified. Your IDE will rebuild what's needed and no more. It works much like CVS or Subversion.
You should still have clean 'incoming' and 'outgoing' repositories in addition to your 'work' repository. Just that your 'work' can serve multiple purposes.
That said, you should clone your work repo before attempting anything intricate. If anything goes wrong you can throw the clone away and start over.
Question 1:
PIDA IDE has pretty good Mercurial integration. We also use Mercurial for development itself. Personally I have about 15 concurrent clones going of some projects, and the IDE copes fine. We don't have the trouble of tweaking build scripts etc, we can "clone and go".
It is so easy that in many cases I will clone to the bug number like:
hg clone http://pida.co.uk/hg pida-345
For bug #345, and I am ready to fix it.
If you have to tweak build scripts depending on the actual checkout directory of your application, I would suggest that your build scripts use some kind of project-relative path rather than hard-coded paths.