First day on a project and first day with Maven and I've already wasted a lot of time trying to get it to build.
It appears the issue is that this old project has config files, POMs, etc. with many broken URLs embedded in them; the stack traces Maven produces show lots of URLs that fail when it tries to download the project's dependencies.
I have been given only the project source, which includes the Maven config files. I have not been supplied with any existing Maven repositories, the project's dependent libraries, or a build environment.
I have been hacking away at these files but I don't get very far with each build attempt.
Am I doing something fundamentally wrong or is this Maven config really stuck in 2008?
Update:
My POM really was stuck in 2008: because of its pinned versions it is a snapshot in time, while the rest of the Java world has moved on.
Some of the dependencies were no longer in any repository; most belonged to defunct projects, so I stopped using them. I had to rewrite the entire POM and spend a lot of time tweaking versions to keep dependencies and plugins compatible with each other. After much battling, some plugins simply wouldn't coexist and kept clobbering each other.
All in all, it was many, many hours of effort...too many for a project with only one developer, and I believe I now know just enough to be dangerous.
The good ol' IDE build system would have been a better choice in this instance.
ftr's advice (in the comments) is right: Maven can't download certain dependencies, but that doesn't necessarily mean those dependencies no longer exist. It could be that the extra-repositories section of the Maven configuration is missing certain repositories, and/or there's some other connection issue (like a bad proxy config, which can leave you able to reach some repos but not others).
I've been in a similar situation and found that, while Maven initially reported errors when trying to download about 80% of the dependencies, after various tweaks to Maven's config I ended up getting it to download all of them (well, except one, which was really just a custom jar somebody had built and which was fetched directly from the local file system, but that's beside the point).
Here's what I'd do:
Of all the dependencies that Maven says it can't download, try to spot 2 or 3 that are "well known" (for example, if it says it can't download the Servlet API or some Spring library, write down the exact URLs it's trying to contact for those).
Manually check whether those URLs are actually accessible (via a browser). If so, make sure the dependencies exist for the version Maven is looking for. They may have been updated since the project was created, and the old version may no longer be kept. In that case, 90% of the time the solution is simply to update the POM to point to the new version.
If manually checking the dependency's URL shows that the dependency does exist for the version Maven is looking for, make sure there's no proxy or other "extra" connection config that is set up for your browser but not for Maven. If that's the case, just add those extra parameters (proxy host, proxy authentication, etc.) to Maven's settings.
If the dependency URL doesn't exist at all, try googling to see whether that dependency now lives in some other repo. For example, many of the JBoss dependencies (Hibernate, etc.) changed repo location somewhere around 2007-2009. If that's the case, just add the new repo to Maven's repo list and remove the old one if it no longer exists (see the sketch after this list).
Finally, the good old shameful way to fix this is to go to a colleague who has (or had) something to do with the project at some point and copy their local Maven repo to your machine :)
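To make the version bump and the extra repository concrete, here is a minimal POM sketch; the coordinates, version, and repository URL below are placeholders for illustration, not details from the question (proxy settings, if you need them, go into settings.xml under the proxies section rather than the POM):

```xml
<!-- pom.xml sketch: bump a dependency to a version that still exists,
     and declare the repository that now hosts the relocated artifacts -->
<dependencies>
  <dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-core</artifactId>
    <version>3.6.10.Final</version> <!-- placeholder: a newer, still-hosted version -->
  </dependency>
</dependencies>

<repositories>
  <repository>
    <id>jboss-public</id> <!-- placeholder for the repo the artifacts moved to -->
    <url>https://repository.jboss.org/nexus/content/groups/public/</url>
  </repository>
</repositories>
```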
Related
In our team we have a number of APIs specified using the Open API Specification (formerly Swagger). We use Maven and OpenAPI Generator to generate code, build and publish the artifact to our local nexus. We build our code on TeamCity. The artifact is given the version that is specified in the pom.xml file of Maven.
During development we use only snapshot versions, that is, versions which can be overwritten and will eventually be cleaned up. This is in contrast to release versions, which cannot be overwritten and need administrative privileges to clean up. The reason is that a developer usually changes a little bit at a time, which is much more convenient with snapshot versions. It also makes cleaning up outdated, unreleased artifacts much easier.
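For context (and with made-up coordinates), the only thing that distinguishes the two in the POM is the -SNAPSHOT suffix on the version:

```xml
<groupId>com.example.api</groupId>        <!-- placeholder coordinates -->
<artifactId>customer-api</artifactId>
<version>1.4.0-SNAPSHOT</version>         <!-- overwritable development build -->
<!-- vs. a release build, which the repository will refuse to overwrite:
<version>1.4.0</version>
-->
```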
Our problem is that from time to time a developer makes API changes but forgets to set a new version. This works fine locally, but when the code is built on TeamCity the changed API overwrites the artifact of an older version. A developer not working on that branch will then get a compile error, because the code does not match the API artifact being used.
What do others do? Is there a best practice? Preferably with standard tools. We have tried many things and nothing works well. At the same time, this issue is so basic that someone must have a good solution - or at least enough experience to point to the least bad one.
I am looking at nuget for improving automatic handling of dependencies (both internal and third party) during development.
As long as you develop through the CI Build Server, all is good:
get latest source for A and B, where B depends on A
fix bug in A
build A
check into source control
CI Build Server initiated
new nuget package is created and placed in corporate repository
build B (which will get the updated A package)
run B to verify that the bug in A was fixed
n. repeat n times
However, I'm wondering if it is possible to work locally as a single developer, without having to wait for the CI Build Server to produce a new package?
NuGet has a Package Restore feature, which downloads all dependencies automatically on build. You can also specify the order of the repositories in which Package Restore should look for packages.
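As a rough sketch (the feed names, folder path, and URLs below are placeholders, not details from the question), the search order is simply the order of the add entries in NuGet.config, so a local folder feed can be listed ahead of the corporate repository:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- searched first: a plain folder used as a local feed -->
    <add key="LocalDev" value="C:\dev\local-nuget-feed" />
    <!-- then the corporate repository, then the public gallery
         (use the gallery feed URL that matches your NuGet version) -->
    <add key="Corporate" value="https://nexus.example.com/nuget/" />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>
```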
If the workflow could become:
get latest source for A and B, where B depends on A
fix bug in A
build A
(building creates a local nuget package)
run B to test the (resolved) bug in A (should now use our local nuget package, not local repository)
...repeat n times
check into source control
CI Build Server initiated
new nuget package created in corporate repository
Is this possible using Visual Studio, MSBuild, a CI Build Server and NuGet? I'm especially interested in how to create local packages while developing locally.
Note that I have native projects; apart from the post-build generation of the NuGet package, though, I hope this workflow would work for both C# and C++ projects.
The solution I have now, though far from ideal, is the one I could get to work best. Oh, and it is a work in progress, so it WILL change in the coming weeks/months as I figure out how to work around the kinks.
I mostly have to deal with managed DLLs right now, but I do have some native code and, worse, multi-platform native code to deal with eventually.
Create a local repository: basically just a folder, configured in your list of NuGet feeds.
Then I created an MSBuild task that packages the project and outputs it in the local repository's root folder (see the sketch after these steps). Make sure the version of your package is always increasing; presently I do this manually by editing the assembly version.
Once built, update the other projects that reference it; I usually do this through the Package Manager Console (update-package).
For each project that was updated, bump its version; rinse, lather, and repeat until you get to your top-most project (the actual program).
Once everything is nice and good and you are ready to commit, the build system should do its own packaging and send it to your official repository.
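Here is a minimal sketch of the packaging task mentioned in the second step, assuming nuget.exe is on the PATH and that C:\dev\local-nuget-feed is the folder registered as the local feed (both assumptions, not details from the answer). For native (C++) projects you would typically point nuget pack at a hand-written .nuspec instead of the project file:

```xml
<!-- in the .csproj: after each Release build, pack and drop the package into the local feed -->
<Target Name="PackToLocalFeed" AfterTargets="Build" Condition="'$(Configuration)' == 'Release'">
  <Exec Command="nuget pack &quot;$(MSBuildProjectFullPath)&quot; -Properties Configuration=$(Configuration) -OutputDirectory C:\dev\local-nuget-feed" />
</Target>
```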
The Good
No clogging of the repository and build system with intermediate development versions; that garbage remains (as it should) local.
Local repos are super easy to set up; it can even be done without changes to VS, through the global NuGet config.
This is friendly to both paradigms: package restore, or checking in packages with the project. That said, I would recommend not checking in the packages you built locally, but rather ones that were published to your repository, ideally through the build system. What's built locally should remain local.
The Bad
Still much more complicated than just adding projects to a solution.
The deeper (or wider) your dependency tree the bigger the pain.
The Ugly
Makes some native NuGet behaviors quite quirky and annoying:
The update operation takes forever if your VS is connected to a version control system (Perforce for me). I hear they "solved" the problem; I'd hate to see how it was before if it was worse than it is now!
Having NuGet change non-code references back to "never copy" is a major pain.
If Only
Configure the desired state of a content dependency (copy always, never, or newer) directly from the nuspec and be done with it! (And the same story with the ClickOnce content status: include, exclude, etc.)
Make the update operation quick; 2 minutes for a dozen projects is just insane, especially if the ultimate goal is to manage 500+.
Perhaps a hybrid mode where locally we work with project inclusion, but the build system works with NuGet dependencies (and builds them if necessary).
If you are going to parse the project files, follow MSBuild parsing rules and honor the conditional statements.
There are still issues I have yet to figure out, like how to manage multiple branches of the code in the repository, and how to handle version conflicts further up the food chain. In a large project (ultimately we have to bring 500+ separate projects together into a single application executable), conflicts are expected.
I would love to bring in all the goodness of sane dependency management à la Maven, but so far I have not found NuGet to be mature enough to even think of proposing it to the dev team.
Certainly. In our solutions, NuGet parks the libraries in the "packages" directory of the solution's hierarchy, which is ultimately kept in TFS. This allows for complete solution check-outs that include the required libraries. If your intention is to update the libraries normally provided by NuGet, you'll need to update the dependent projects' references to point to the project containing the updated code normally provided by the NuGet process.
Prior to checking in your regular solution work (not the NuGet-related libs), make sure the solution's NuGet libs are up to date and that the references in the solution point back to the NuGet-installed libs. Of course, you'll check in and fetch the NuGet-related libs beforehand.
It appears that there's no ivy:unpublish task (e.g. see here).
So I suppose that unpublishing has to be performed at the filesystem level, either manually or through an Ant task that deletes subfolders of ~/.ivy2/local? (When the aim is to unpublish from the local Ivy repo.)
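A minimal Ant sketch of that cleanup, assuming the default local repository location of ~/.ivy2/local and placeholder organisation/module names:

```xml
<!-- build.xml sketch: wipe a previously published module out of the local Ivy repository -->
<target name="unpublish-local" description="Remove a module published to ~/.ivy2/local">
  <!-- placeholder org/module; adjust to whatever you published -->
  <delete dir="${user.home}/.ivy2/local/myorg/mymodule" />
</target>
```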
I'm very surprised you accepted Mark's answer, given that AFAICT it has nothing to do with your use-case, which is about your local repository - that is, your private repository on your system. His answer is relevant to shared repositories only.
There's a use case with Ivy that I suspect is very common. A developer is temporarily working on two projects, one of which is dependent on the other. While they are doing this work, they publish SNAPSHOTs from the upstream project to their local repository, so that the downstream project "sees" their changes. When the developer is done with this task, they check in their changes into source control, and then want to "rejoin the group" and get the latest SNAPSHOT dependencies for the upstream project. At this point they want to "unpublish" the upstream project from their local repository, so that they resume consuming changes from other developers in the group.
For what little it's worth, see also this Ivy Jira issue, from 2006.
As I noted in my comment there, from a couple of weeks ago, I suspect there's some best-practice that I'm not aware of, that makes this moot. Anyone?
No, Ivy does not support an unpublish action. Similarly, Maven does not support such an operation.
Not sure I understand the use-case. When one "publishes" content it would normally be a very bad idea to remove it later... Why?
You could unpredictably break other people's builds that depend on your version.
Repository caches normally assume that released artifacts never change. If there exists a possibility that they might disappear, this forces a cache to constantly "dial home" to ensure they aren't dirty.
Having said that, there are arguments for and against. I'd recommend reading the following excellent blog article from Sonatype:
http://blog.sonatype.com/people/2012/01/releases-are-forever/
I'm new to Apache Ivy and am configuring the latest-strategies element in my settings file, opting to go with the lexicographic strategy for a number of reasons. But something just dawned on me and has me worried about Ivy in general. I'm sure I'm just not seeing the "forest" through the "trees", but I absolutely need to gain clarity on this before I can proceed.
My project will use several other homegrown JARs as dependencies. Other developers may be actively working on these other JARs, and may introduce a bug at some point. If my project uses Ivy to always pull down the latest version of these other dependencies, then Ivy may inadvertently pull down a new bug when it goes to build.
What's the common solution here, or what do best practices dictate?
Is there a way to cherry-pick which versions of which JARs my project uses? That way I wouldn't be concerned with latest-strategies at all, or lexicographic order, etc. That would seem to alleviate the problem, but may violate best practices.
Any input is appreciated, as always!
In that situation we used tags on trunk. When a developer creates a tag, they must change the version number of the published Ivy module. Then, when you want a stable version of a module, you can resolve it by an exact version (1.2.3) or by the latest version from some range (1.2.+). The latest-development strategy pulls out the latest unstable trunk or branch version of a module.
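In ivy.xml terms, that cherry-picking is just a choice of revision specifier per dependency; the org/module names below are made up, and only the rev attribute matters:

```xml
<!-- ivy.xml sketch: pin an exact version, allow a range, or track the latest -->
<dependencies>
  <!-- exact, reproducible version -->
  <dependency org="com.mycompany" name="homegrown-util" rev="1.2.3" />
  <!-- latest revision within the 1.2.x line -->
  <dependency org="com.mycompany" name="homegrown-core" rev="1.2.+" />
  <!-- always the newest published revision, bugs and all -->
  <dependency org="com.mycompany" name="homegrown-api" rev="latest.integration" />
</dependencies>
```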
We have a largish standalone (i.e. not Java EE) commercial Java project (10,000+ classes, four or five SVN repositories, ten or twenty third-party libraries) that's in the process of switching over to Maven. Unfortunately only one engineer (in a team of a dozen or so distributed across three countries) has any prior Maven experience, so we're kind of figuring it out as we go.
In the old Ant way of doing things, we'd:
check out source code from three or four repositories
compile it all into a single monolithic JAR
release that (as part of a ZIP file with library JARs, an installer, various config files, etc.)
check the JAR into SVN so we had a record of what the customers had actually got.
Now, we've got a Maven repository full of artifacts, and a build process that depends on Maven having access to that repository. So if we need to replicate what we actually shipped to a customer, we need to do a build against a Maven repository that has all the proper versions of everything. This is doable, I guess, if in (some version of) the (SVN-controlled) POM files we set all the dependencies to released versions?
But it gives our release engineer the creepy-crawlies, because there doesn't seem to be any way:
to make sure that somebody doesn't clobber the copy of foo-api-1.2.3.jar on the WebDAV server by mistake (the WebDAV server has access control, but that wouldn't stop a buggy build script)
to detect it if they did
to recover afterwards
His idea is, for release builds, to use a local file system as the repository rather than the WebDAV server, and put that local repository under SVN control.
Our one Maven-experienced engineer doesn't like that -- I guess because he doesn't like putting binaries under version control? -- and suggests that maybe the professional version of the Nexus server can solve the clobbering or clobber-tracking/recovery problem.
Personally, I'm not happy (sorry, Sonatype readers) with shelling out money for a non-free build system when we haven't even seen any benefit from the free version yet, and there's no guarantee it will actually solve the problem.
So our choices seem to be:
WebDAV server
Pros: only one server, also accessible by devs, ...?
Cons: easy clobbering, no clobber-tracking/recovery
Local file system
Pros: can be placed under revision control
Cons: only works with the distribution script
Frankly, both of these seem like hacks to me, and I have to wonder if there isn't a better way to do this.
So: Is there a right thing to do here?
I'm not sure I follow everything, but I would:
Use the maven-release-plugin (which automates the release process, i.e. executes all the steps documented in release:prepare); see the sketch after this list.
Use WebDAV with an anonymous read-only and authenticated write policy (so only the release engineer can actually deploy released artifacts to the corporate repo).
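As a rough sketch of how those two pieces fit together in the POM (the server id, URL, and plugin/extension versions are placeholders, and the WebDAV wagon extension is only needed if your Maven version doesn't already provide it):

```xml
<!-- pom.xml sketch: where release builds get deployed, plus the release plugin -->
<distributionManagement>
  <repository>
    <id>corporate-releases</id>
    <url>dav:https://webdav.example.com/maven/releases/</url>
  </repository>
</distributionManagement>

<build>
  <extensions>
    <!-- lets Maven deploy over WebDAV -->
    <extension>
      <groupId>org.apache.maven.wagon</groupId>
      <artifactId>wagon-webdav-jackrabbit</artifactId>
      <version>2.4</version>
    </extension>
  </extensions>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-release-plugin</artifactId>
      <version>2.5</version>
    </plugin>
  </plugins>
</build>
```

Credentials for the corporate-releases server id would then live in the release engineer's settings.xml, and the release itself is mvn release:prepare followed by mvn release:perform.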
There is no need to put generated artifacts under version control (as long as you have the POMs under version control). I don't see the benefit of using the local file system instead of WebDAV (it doesn't provide more security; you can secure WebDAV as well), and I don't see what the commercial version of Nexus would solve here.
Nexus has a setting which prevents you from clobbering an already released artefact in a release repository.
For a team of about a dozen, the free version of Nexus should be enough.