
I'm looking for a web-based system to simply manage libraries as private packages.

From a different perspective, it seems you want to build an online repository of JavaScript libraries. I've seen people make efforts to use Maven for JavaScript library management; you would "manage the tangle" with a repository manager (like Nexus or Archiva). I dare say there are other similar offerings if Maven isn't your thing (Gradle, Grapes, Ivy ...).
(Moved from comments, as I thought this wasn't so much seeking clarification as it was suggesting a solution)

Related

Extending Squeak or Pharo

Using the Monticello package manager does not seem to guarantee that, once you have added the package(s) you are interested in, the total image is still coherent. Are there any ways to verify that? Are dependencies verified? Are there guidelines in that direction?
I think you're looking for Metacello, a package and configuration manager for Monticello.
You can check out this guide: Managing projects with Metacello, and there is also a page on Google Code.
Monticello does have a way to ensure that dependencies are met, but it is limited to the form “this Monticello version depends on exactly these other Monticello versions”. Also, specifying these dependencies is somewhat hidden in the Monticello browser and, above all, rarely used in the community.
As Uko said, Metacello is exactly intended to solve the problem of dependency management in Smalltalk systems. It is not limited to Monticello, conceptually. To my knowledge, most GemStone, Pharo, and Squeak images come with Metacello pre-installed or easily installable.
Have a look at the blog of Metacello’s author, Dale Henrichs, where he gives some introduction to using Metacello.
There is also the Metacello Repository, where most configurations (think software recipes) can be found.
Monticello's responsibility ends with loading individual packages. Coherence comes with either Metacello (see Uko's answer) or with SqueakMap.
SqueakMap stores install scripts that ensure that entire applications get loaded into your image.

Alternatives to Git Submodules?

I feel that using Git submodules is somehow troublesome for my development workflow. I've heard about Git subtree and Gitslave.
Are there other tools for multiple-repository projects, and how do they compare?
Can these tools run on Windows?
Which is best for you depends on your needs, desires, and workflow. They are in some senses semi-isomorphic; it is just that some are a lot easier to use than others for specific tasks.
gitslave is useful when you control and develop the subprojects at more or less the same time as the superproject, and furthermore when you typically want to tag, branch, push, pull, etc. all repositories at the same time. As far as I know, gitslave has never been tested on Windows. It requires Perl.
git-submodule is better when you do not control the subprojects, or more specifically when you wish to fix a subproject at a specific revision even as the subproject changes. git-submodule is a standard part of git and thus works on Windows.
git-subtree provides a front end to git's built-in subtree merge strategy. It is better when you prefer to have a single-repository, "unified" git history. Unlike the plain subtree merge strategy, it makes it easier to export changes to the different (directory) trees back out to the original project, though this is not as automatic as it is with gitslave or even git-submodule (a command sketch for the two built-in options follows this list).
repo is in theory similar to gitslave, but it is not as well documented for non-Android operations, as far as I have found. It is fairly dedicated to the Google Android development model and natively supports only a handful of git commands (though you can run arbitrary commands); the native support doesn't include, for example, a centralized repository to push to, and checking out a branch seems fairly difficult.
kitenet's mr is what you would want to use if you have multiple version control systems in use, but it is of limited use for git-only superprojects due to its lowest-common-denominator approach. There are ways to run arbitrary commands, but they are not as well integrated.
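For concreteness, here is roughly what day-to-day use of the two built-in options looks like; the URL, remote name, and paths below are placeholders:

    # git-submodule: pin an external repository at a specific revision.
    git submodule add https://example.com/lib.git libs/lib
    git submodule update --init --recursive    # after cloning, fetch the pinned revisions

    # git-subtree: keep the external code in a subdirectory with unified history.
    git remote add libremote https://example.com/lib.git
    git subtree add  --prefix=libs/lib libremote master --squash    # initial import
    git subtree pull --prefix=libs/lib libremote master --squash    # take upstream changes
    git subtree push --prefix=libs/lib libremote master             # export changes back upstream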
For some use cases, I have liked each of the following two simple approaches:
Nested repositories. If your software project has a plugin mechanism, with each plugin in its own sub-directory, it can make sense to git-ignore these plugin directories and, in your local filesystem, to make each of them into its own git repository. This way, all your files form a single directory tree, but are managed in different git repositories. It will not confuse git.
Per-package repositories. For software projects where you use some kind of source code package management system (gem / bundler, npm, pear or the like) it can make sense to put your re-used code into separate git repositories, then to make source packages from them, and then to install them with the package management tool into the parent project. Your parent project's git repository would only contain a reference to the required packages and their versions, while the actual code of these packages would be git-ignored, as is done with all other packages and external libraries as well. Compared to the nested repositories proposed above, this is a more elaborate approach, as it allows you to specify which package version is to be installed.
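A sketch of the per-package approach, using npm as the example package manager (the package name and paths are made up):

    # In the re-used code's own git repository: build a source package.
    cd ~/src/my-util-lib
    npm pack                                     # produces my-util-lib-1.2.0.tgz

    # In the parent project: install it. The installed code itself stays
    # git-ignored under node_modules/, while package.json records the
    # required package and its version.
    cd ~/src/parent-project
    npm install ../my-util-lib/my-util-lib-1.2.0.tgz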
I currently use submodules for development, not just for tracking 3rd-party libraries. There are some ways that you can make life easier with submodules, especially when they are the source of merge or rebase conflicts. Look to git ls-tree to get the two commits involved in a conflict on a submodule. This is probably the most difficult part of submodules for people to deal with. For now, scripting will make this much easier to work with. Future versions of Git should have better native support for dealing with them.
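For example, during a conflicted merge you can ask ls-tree for the submodule (gitlink) entry on each side; the submodule path below is a placeholder:

    # Show which submodule commit each side of the merge recorded.
    git ls-tree HEAD libs/mylib          # our side
    git ls-tree MERGE_HEAD libs/mylib    # their side
    # Each prints something like: 160000 commit <sha1>    libs/mylib
    # Resolve by checking out the right commit inside the submodule,
    # then "git add libs/mylib" in the superproject.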
Hope this helps.
We encountered a similar issue when using Git submodules in projects where we had dependencies in a variety of languages. To deal with them, we built and open-sourced a tool called MDLR ("Modular") that gives you declarative version-controlled Git dependencies with similar functionality to Git submodules, but without the annoying workflow. You can install it and manage your dependencies with the instructions/downloads on the GitHub repo

Using Nuget in development environment - best practices / how to

Trying to figure out the best way to use NuGet in a development environment to manage our own libraries.
We want to standardize on the NuGet way of doing things for our 3rd-party libs, but we would also like to use NuGet to manage our internal utility libraries. For developers consuming the in-house libs this is great and everyone's happy. However, for devs actively working on the utility lib it seems more problematic: their previous process of build lib, build main app, F5 and go is now slowed down by publishing and updating, and potentially lots of packages, not to mention the moaning about the additional process!
We use TDD on the internal libs, but everyone needs to be able to debug and modify the libs along with the main app. I have seen Phil Haack's demo on debug packages in 1.3 and read David Ebbo's blog, but those fit a different scenario.
So what is the best process for dev/debug cycles? If we use NuGet, do we just accept the existing constraints? Is there a hybrid practice people are using? Does 1.3 get closer to automating all this? Or do we simply avoid NuGet for internal packages, which would be a real shame?
Loving NuGet; maybe I'm wanting way too much from the little guy. Feedback appreciated.
Thanks
I'd suggest you use separate network shares or feeds (similar to what myget.org supports in the cloud) for different scenarios.
You could imagine creating a CI share, a QA share, a Releases share, ...
Make people working on the referenced library do CI builds that drop CI packages on the CI repository, for instance, and have them picked up by other projects (which just need to do a simple update; this could be automated through PowerShell in a pre-build step: check for a new version and, if there is one, update).
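A sketch of such a pre-build step, assuming nuget.exe is on the PATH and using a made-up feed path and package id:

    # Pull a newer version of the internal utility package from the CI feed, if one exists.
    nuget update MySolution.sln -Id Our.Utility.Lib -Source \\buildserver\ci-feed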
Just make sure that when products release their milestones, they also release with released dependencies (this could be as simple as switching feeds; releases will always have a higher version number than CI builds).
Hope that helps!
Cheers,
Xavier
If you're working on the source code for the lib and the main app at the same time, I'd say NuGet is probably not a good solution. I think it'll only work in situations where you work with a "stable" version of the library that doesn't need to change frequently during the development of your main app.
That said - is it possible the development on your library could be done in isolation? You already mention you're doing TDD on the lib, so why can't that work be done, then built, deployed, then the main app work done?

Archivable, replicable releases when building with Maven: is there a right way?

We have a largish standalone (i.e. not Java EE) commercial Java project (10,000+ classes, four or five SVN repositories, ten or twenty third-party libraries) that's in the process of switching over to Maven. Unfortunately only one engineer (in a team of a dozen or so distributed across three countries) has any prior Maven experience, so we're kind of figuring it out as we go.
In the old Ant way of doing things, we'd:
check out source code from three or four repositories
compile it all into a single monolithic JAR
release that (as part of a ZIP file with library JARs, an installer, various config files, etc.)
check the JAR into SVN so we had a record of what the customers had actually got.
Now, we've got a Maven repository full of artifacts, and a build process that depends on Maven having access to that repository. So if we need to replicate what we actually shipped to a customer, we need to do a build against a Maven repository that has all the proper versions of everything. This is doable, I guess, if in (some version of) the (SVN-controlled) POM files we set all the dependencies to released versions?
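For example, I imagine something like this (a sketch only, assuming the versions-maven-plugin) before cutting a release build:

    # Rewrite every -SNAPSHOT dependency in the POMs to its released version ...
    mvn versions:use-releases
    # ... and drop the pom backup files once the result looks right.
    mvn versions:commit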
But it gives our release engineer the creepy-crawlies, because there doesn't seem to be any way:
to make sure that somebody doesn't clobber the copy of foo-api-1.2.3.jar on the WebDAV server by mistake (the WebDAV server has access control, but that wouldn't stop a buggy build script)
to detect it if they did
to recover afterwards
His idea is, for release builds, to use a local file system as the repository rather than the WebDAV server, and put that local repository under SVN control.
Our one Maven-experienced engineer doesn't like that -- I guess because he doesn't like putting binaries under version control? -- and suggests that maybe the professional version of the Nexus server can solve the clobbering or clobber-tracking/recovery problem.
Personally, I'm not happy (sorry, Sonatype readers) with shelling out money for a non-free build system when we haven't even seen any benefit from the free version yet, and there's no guarantee it will actually solve the problem.
So our choices seem to be:
WebDAV server
Pros: only one server, also accessible by devs, ...?
Cons: easy clobbering, no clobber-tracking/recovery
Local file system
Pros: can be placed under revision control
Cons: only works with the distribution script
Frankly, both of these seem like hacks to me, and I have to wonder if there isn't a better way to do this.
So: Is there a right thing to do here?
I'm not sure I get everything, but I would:
Use the maven-release-plugin, which automates the release process, i.e. executes all the steps documented in release:prepare (see the sketch after this list).
Use WebDAV with an anonymous read-only and authenticated write policy (so only the release engineer can actually deploy released artifacts to the corporate repo).
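A minimal sketch of the first point; the release plugin's standard two-step flow:

    # Checks for SNAPSHOT dependencies, bumps the version, and tags the release in SCM ...
    mvn release:prepare
    # ... then checks out the tag and builds/deploys the released artifacts.
    mvn release:perform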
There is no need to put generated artifacts under version control (as long as you have the POMs under version control). I don't see the benefit of using the local file system instead of WebDAV (it does not provide more security; you can secure WebDAV as well), and I don't see what the commercial version of Nexus would solve here.
Nexus has a setting that prevents you from clobbering an already-released artifact in a release repository.
For a team of about a dozen, the free version of Nexus should be enough.

Should we use Nexus or Artifactory for a Maven Repo? [closed]

We are using Maven for a large build process (> 100 modules). We have been storing our external dependencies in source control, and using that to update a local repo.
However, we are ready to graduate to a local repo that can cache central so that we don't have to proactively download all 3rd parties (but we can still have a local repo to pull from). In addition we want to publish our internal build artifacts from a nightly build so that developers don't have to build the world.
We are considering Nexus and Artifactory. What are the reasons for preferring one over the other? Are there others we should be considering?
I'm sure that if you only talk about storing binaries from "mvn deploy" both will do fine.
We use Artifactory very extensively, with all upgrades along the way. Lots of projects, numerous snapshots deployed, and external repos proxied. Not a single problem. I find it hard to explain how other people experience issues with its DB, indexing, or anything else. Nothing like that ever happened to us. Also, Artifactory allows you to store data on disk and use a DB only for metadata; it is quite flexible (see more here).
What makes these applications very different is their approach to integration with other build tools and technologies. Nexus and Sonatype are pretty much locked onto Maven and m2eclipse. They ignore everything else and only recently started to work on their own proprietary Hudson integration (see their Maven 3 webinar).
EDIT: This is no longer true; as of 2017, Nexus has much broader support for other build tools. End of edit.
Artifactory provides awesome Hudson, TeamCity, and Bamboo integrations, and Gradle / Ivy support. So while Nexus gives you nothing once you step out of the Sonatype "comfort zone" (Maven, m2eclipse), Artifactory embraces and collaborates with all major build tools.
In fact, being able to deploy build artifacts from Hudson when a job has finished, rather than via "mvn deploy", is a huge difference: the Artifactory Hudson plugin makes an atomic-like deploy of all artifacts at once, and only when a build job has finished successfully. "mvn deploy" runs after each module and can deploy a partial set of artifacts if a build job fails in the middle. Deploying from Maven on module completion, instead of from a build server on job completion, is really a bad thing to do.
As you see, Artifactory thinks "outside the box" while Nexus thinks "inside the box" and only cares about Maven and Maven artifacts.
Something else that makes Artifactory more accessible is their cloud-based Artifactory Online solution. For about $80 a month you have your own Artifactory instance, with no need to dedicate any server to it.
Artifactory has a simple and straightforward REST API; I don't know how this works for Nexus.
Edit: Nexus also has a REST API that you can use easily as well.
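As an illustration only (the host, repository names, and coordinates below are made up), an Artifactory deploy is a plain HTTP PUT, and Nexus of that era exposed a redirect service for retrieval:

    # Artifactory: deploy an artifact with a single PUT.
    curl -u admin:password -T foo-1.0.jar \
      "http://myhost/artifactory/libs-release-local/com/example/foo/1.0/foo-1.0.jar"

    # Nexus 1.x/2.x: resolve an artifact through the redirect service.
    curl -L -u admin:password \
      "http://myhost/nexus/service/local/artifact/maven/redirect?r=releases&g=com.example&a=foo&v=1.0"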
To summarize: for basic storage of Maven artifacts, I think both are fine. But while Nexus stops there, being strictly a "Maven repository manager", Artifactory goes on and on, serving as a general "binaries storage" for binaries of any kind, from any build tool and CI server.
I don't know about Artifactory but here are my reasons for using Nexus:
Dead simple install (and since 1.2, dead simple upgrade, too)
Very good web UI
Easy to maintain, almost no administrative overhead
Provides you with RSS feeds of recently installed artifacts, broken artifacts, and errors
It can group several repositories so you can mirror several sources but need only one or two entries in your settings.xml
Deploying from Maven works out of the box (no need for WebDAV hacks, etc).
It's free
You can redirect access paths (i.e. some broken pom.xml requires "a.b.c" from "xxx"). Instead of patching the POM, you can fix the bug in Nexus and redirect the request to the place where the artifact really is.
Artifactory supports both file-system and database storage backends. Storage is checksum-based: identical binaries are stored only once, no matter how many times they appear in the repo, which makes Artifactory more efficient storage-wise. Move and copy are also very cheap because of this architecture (in Nexus there's no REST call for move/copy; you have to move things on the file system, then run corrective actions on the repo to let it know the content has changed).
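For instance, the move mentioned above is a single REST call in Artifactory (the repository keys and path here are made up):

    # Move an artifact (or folder) between repositories server-side.
    curl -u admin:password -X POST \
      "http://myhost/artifactory/api/move/libs-release-local/com/example/foo/1.0?to=/libs-archived-local/com/example/foo/1.0"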
Another important differentiator is that Artifactory has unique integration with Hudson and TeamCity for capturing information about deployed artifacts, resolved dependencies, and environment data associated with build runs, which provides full build traceability.
Artifactory stores the artifacts in a database, which means that if something goes wrong, all your artifacts are gone. Nexus uses a flat file for your precious artifacts so you don't have to worry about them all getting lost.
If you need the "Pro" features of either (e.g. staging repos, artifact promotion, NuGet), then you need to consider the different pricing models, which are displayed on their websites.
http://www.jfrog.com/home/v_pricing
http://www.sonatype.com/nexus/purchase
In summary:
Artifactory Pro
you pay per server
you can pay more for increased service hours
Nexus Pro
you pay per seat, i.e. how many developers downloading artifacts
support service is Mon-Fri 0800-2000 ET only, no matter what you pay
No matter how many users you have, Nexus Pro offers a support service that's broadly equivalent to Artifactory's $7,450/year "Silver Value Pack".
$7,450/year will buy you approximately 67 Nexus Pro seats (1-50 @ $108 each, the rest @ $120 each).
On price and support alone then, Nexus Pro makes sense until you get to 67 users, at which point Artifactory becomes the cheaper option.
If you're doing all the support in-house, however, that magic point is about 23 users (Artifactory's most basic support offering is $2,750/year).
I did some research recently on Artifactory 2 and Nexus 1.3. I'll list here the main differences I found:
Artifactory stores metadata, and optionally the files themselves, in a DB; Nexus writes directly to the file system. There are pros and cons to each approach: a DB supports transactions, while files stored on the FS can be accessed directly.
Artifactory has higher system requirements especially for disk space.
The most complete comparison: http://binary-repositories-comparison.github.io/
You should use Artifactory.
Its latest version was a real jump.
You can back up your repositories incrementally, which means you can have all your artifacts saved and maintained.
It has an easy-to-use web UI and is really easy to set up.
I enjoyed it a lot; check out its new version 2.0.
From a learner's point of view, I note some specific differences between the two.
Sonatype's .war deployment is not supported on the JBoss application server at this time, although it does run under Tomcat.
Sonatype does not offer me an Amazon Machine Image (AMI), at present, that I could quickly stand up and test.
An Artifactory AMI is provided by Bitnami and takes only a few minutes to stand up, plus a few more minutes to configure, or maybe several tens of minutes depending on what you're trying to achieve.
Artifactory also offers a SaaS version of Artifactory in the cloud, so you can focus on getting things done rather than on infrastructure.
I've no experience with Nexus but I've found Artifactory very intuitive and easy to configure, at least initially.
Added: I do note that the Artifactory User Guide, which may be fine for a seasoned pro, is a bit light on in-depth explanations. For instance, starting out, one unzips the distribution and then adds a repository, say RedHat's JBoss EAP Enterprise repo. All goes fine, but when I tried to view the artifacts that were imported, Artifactory reported zero artifacts, with no errors or warnings, so I'm now looking for an explanation. Is this normal or not? A simple explanation in the docs could quickly point one in the right direction. Being a good contributor, I'm adding these comments to the project for the benefit of other starters.
All politics/religion aside, licensing makes a difference for some organizations.
Nexus was GPL, then AGPLv3, and is now Eclipse Public License (EPL) licensed.
Artifactory was Apache licensed, and is LGPLv3 licensed as of version 2.1 of the product.
You may also want to consider Archiva, just for comparison's sake. It's Apache 2.0 licensed.
I see that Nexus usage is growing, while Artifactory usage is generally staying flat.
The picture is taken from here: http://blog.sonatype.com/2014/11/42000-nexus-repository-managers-and-growing/
There is also a feature-matrix comparison: http://docs.codehaus.org/display/MAVENUSER/Maven+Repository+Manager+Feature+Matrix
Both Artifactory and Nexus have more or less similar feature sets, but Artifactory's LDAP support makes it more attractive than Nexus. Nexus also has LDAP support, but only in the paid version :-(
Hmmm... my experience with Artifactory is awful, but I'm a relative newbie, so take it with a grain of salt. My overall complaint is that jar files recently uploaded to Artifactory do not seem to get indexed right away (as in, for hours), and there does not seem to be a good way to force indexing. I've tried various things that looked as if they should have worked, but didn't. I have been working with m2eclipse, adding dependencies to a project that I'm converting from Ant. When I try to add a jar that I have just uploaded to Artifactory, I expect it to show up as a choice in the selector, but it does not.
A coworker told me that they had installed Nexus and so far they like it, but I can't vouch for it yet. I'm about to install it on a Linux box as soon as IT can find me one.