Can I tell Gofer to fall back to the local package cache when no internet is available?
For example, so that I can use

    Gofer it
        squeaksource: 'CodePhoo';
        addPackage: 'CodePhoo';
        load

to set up an image when offline on the train? (In that case we can be sure that the packages are in fact available locally from a previous image setup.)
Theoretically yes: Gofer's model would support this.
Practically no, because of missing support from the Monticello side.
Even though Monticello provides a MCRepositoryGroup, this code unfortunately throws all kinds of different errors when one of the repositories is not reachable. That probably makes sense in the context of the Monticello tools, but for Gofer that would need to be reimplemented.
I have just recently come across graph databases and Tinkerpop.
I am somewhat confused about how/what to install to use TinkerPop 2.5.0/2.6.0. Does it have to be installed on each database separately (as you would a plugin), or can I set it up once and then use it to access the different supported databases?
My goal is to use it to try out two (possibly more) different databases (mainly Neo4j and OrientDB, or perhaps Titan) and be able to query them using Gremlin.
How you use TinkerPop depends entirely on what you intend to do with it. If you are just getting started, I suggest you simply download the Gremlin distribution, unpack it, and start the console with bin/gremlin.sh. Working in the REPL will help you learn quickly, as the feedback time for trying things out is basically instantaneous. Even as your Gremlin code makes its way to production, you will find the Gremlin Console to be a good friend, as it provides a way to try out ideas before committing them to code. It also provides a mechanism for maintaining/administering your database with Gremlin.
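For instance, a first session looks something like this (the archive name assumes the 2.6.0 download; the toy graph and traversal are the standard TinkerPop samples):

    $ unzip gremlin-groovy-2.6.0.zip
    $ cd gremlin-groovy-2.6.0
    $ bin/gremlin.sh
    gremlin> g = TinkerGraphFactory.createTinkerGraph()
    ==>tinkergraph[vertices:6 edges:6]
    gremlin> g.v(1).out('knows').name
    ==>vadas
    ==>josh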
If you intend to use TinkerPop in a JVM-based application, then you will want to use a dependency management tool like Maven and reference the appropriate TinkerPop dependencies. Alternatively, I suppose you could try to manage the dependencies manually by downloading them individually from Maven Central and adding them to your path (though I wouldn't recommend that, for obvious reasons). My point in mentioning that option is just to make it clear that the TinkerPop library is simply a set of jars that can be included in your JVM development tools like any other.
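As a sketch, pulling Gremlin into a Maven project is a single dependency entry (the version shown is just an example; use whichever 2.x release you want):

    <dependency>
      <groupId>com.tinkerpop.gremlin</groupId>
      <artifactId>gremlin-groovy</artifactId>
      <version>2.6.0</version>
    </dependency>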
How you work with a particular database depends on the one you choose, but again the process differs little from what I described above. Neo4j is packaged with the Gremlin Console, so you can work with it right away in there. For OrientDB, you will want to copy its dependencies into the Gremlin Console path (i.e. the /lib directory). If you are building an application, then Maven is again your friend: you simply reference the Neo4j or OrientDB Maven coordinates and all required dependencies will come with them.
Some implementations, like Titan, have separate prerequisites (e.g. installing Cassandra or HBase). In those cases, you will need to refer to their documentation for specifics on how to set them up.
All that said, if you are just getting started, I recommend that you look into TinkerPop3. It is the next major line of development for TinkerPop and quite different from its previous incarnations. It does not yet have all of the implementations in place, but database vendors are at work bringing them online. Everything I wrote about TinkerPop 2.x "installation" above generally applies to TinkerPop3; however, the TinkerPop3 Gremlin Console has a plugin system that can make it a little easier to bring in external dependencies, so you don't have to deal with them manually.
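As a sketch of that plugin workflow (the coordinates and plugin name here are illustrative; check the TinkerPop3 docs for the current ones), bringing a database into the console looks like:

    gremlin> :install com.tinkerpop neo4j-gremlin 3.0.0.M1
    gremlin> :plugin use tinkerpop.neo4j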
Using the Monticello package manager does not seem to guarantee that, once you have added the package(s) you are interested in, the image as a whole is still coherent. Are there any ways to verify that? Are dependencies verified? Are there guidelines in that direction?
I think you're looking for Metacello, a package and configuration manager for Monticello.
You can check out this guide: Managing projects with Metacello, and there is also a page on Google Code.
While Monticello does have a way to ensure that dependencies are met, it is limited to the form "this Monticello version depends on exactly these other Monticello versions". Also, specifying these dependencies is somewhat hidden in the Monticello browser and, above all, scarcely used in the community.
As Uko said, Metacello is exactly intended to solve the problem of dependency management in Smalltalk systems. Conceptually, it is not limited to Monticello. To my knowledge, most GemStone, Pharo, and Squeak images come with Metacello pre-installed or make it easy to install.
Have a look at the blog of Metacello’s author, Dale Henrichs, where he gives some introduction to using Metacello.
There is also the Metacello Repository, where most configurations (think software recipes) can be found.
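A typical Metacello load, evaluated in a workspace, looks something like this (the project name is hypothetical; substitute the configuration you actually want):

    Gofer it
        squeaksource: 'MetacelloRepository';
        package: 'ConfigurationOfMyProject';
        load.
    ((Smalltalk at: #ConfigurationOfMyProject) project version: '1.0') load.

The configuration declares the packages and their dependencies, so the second expression pulls in everything that version '1.0' needs.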
Monticello's responsibility ends with loading individual packages. Coherence comes with either Metacello (see Uko's answer) or with SqueakMap.
SqueakMap stores install scripts that ensure that entire applications get loaded into your image.
I want to play around and check out Apache's couchdb as a possible back-end for a web-app that I am designing. Therefore I want to have an instance of couchdb, but also to be able to throw it away when the testing is done. The development computer is an Ubuntu laptop (not server). The problems are:
The Ubuntu repository has couchdb 1.0, but the couchdb website strongly recommends to install 1.1, built from source.
I have Erlang built and installed from source, because the Erlang distro from the repository is defective. I don't see the point in installing another Erlang aside it.
couchdb has a lot of dependencies, including a whole bunch of perl libs, that I really don't need, and prefer to throw away when I'm done.
So I am looking for a way to either:
Install couchdb 1.1 as a package that can be easily uninstalled, or
Build couchdb from source, with as few as possible installed dependencies, so when I'm done I can just delete it. Preferably, do this without building another Erlang distro, but configuring it to use the existing one.
Are any of these possible, and how? Thanks in advance.
Btw, I am aware of the build-couchdb project, but from what I read, it requires installing all the build dependencies in advance, which is undesirable, because it will leave a whole bunch of dangling packages in my system, without being a dependency of a couchdb package. It also fetches a copy of Erlang, which is redundant for me.
(Dear moderators: this question combines issues that relate not only to programming, but also to server administration, Unix software, and, particularly, Ubuntu Linux. It might therefore be suitable for a few other Stack Exchange sites. I reckon it is most likely to be answered here, since this kind of hackery is often done by programmers. However, if I am wrong, feel free to migrate it, and I apologize in advance for your trouble.)
You could install CouchDB into a chroot jail.
A chroot is a way of isolating applications from the rest of your computer by putting them in a jail. This is particularly useful if you are testing an application which could potentially alter important system files.
(From the Ubuntu instructions on creating a chroot jail.)
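A minimal sketch of setting one up with debootstrap (the target directory is just an example):

    $ sudo apt-get install debootstrap
    $ sudo debootstrap $(lsb_release -sc) /srv/couchdb-chroot http://archive.ubuntu.com/ubuntu/
    $ sudo chroot /srv/couchdb-chroot

When you are done testing, removing the whole directory (sudo rm -rf /srv/couchdb-chroot) throws away CouchDB and every dependency you installed inside it.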
Another option, assuming your laptop has the appropriate hardware virtualization support, is to use KVM.
The KVM option might be more helpful in the long run as you could move the VM's disk image onto a server.
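A rough sketch of creating such a throwaway VM with virt-install (the package names, sizes, and ISO below are illustrative):

    $ sudo apt-get install qemu-kvm virtinst
    $ sudo virt-install --name couchdb-test --ram 1024 \
        --disk path=/var/lib/libvirt/images/couchdb-test.img,size=8 \
        --cdrom ubuntu-11.04-server-amd64.iso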
We have a largish standalone (i.e. not Java EE) commercial Java project (10,000+ classes, four or five SVN repositories, ten or twenty third-party libraries) that's in the process of switching over to Maven. Unfortunately only one engineer (in a team of a dozen or so distributed across three countries) has any prior Maven experience, so we're kind of figuring it out as we go.
In the old Ant way of doing things, we'd:
check out source code from three or four repositories
compile it all into a single monolithic JAR
release that (as part of a ZIP file with library JARs, an installer, various config files, etc.)
check the JAR into SVN so we had a record of what the customers had actually got.
Now, we've got a Maven repository full of artifacts, and a build process that depends on Maven having access to that repository. So if we need to replicate what we actually shipped to a customer, we need to do a build against a Maven repository that has all the proper versions of everything. This is doable, I guess, if in (some version of) the (SVN-controlled) POM files we set all the dependencies to released versions?
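Something like this in the POM is what I have in mind (coordinates hypothetical):

    <dependency>
      <groupId>com.example</groupId>
      <artifactId>foo-api</artifactId>
      <!-- a fixed release, never 1.2.3-SNAPSHOT -->
      <version>1.2.3</version>
    </dependency>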
But it gives our release engineer the creepy-crawlies, because there doesn't seem to be any way:
to make sure that somebody doesn't clobber the copy of foo-api-1.2.3.jar on the WebDAV server by mistake (the WebDAV server has access control, but that wouldn't stop a buggy build script)
to detect it if they did
to recover afterwards
His idea is, for release builds, to use a local file system as the repository rather than the WebDAV server, and put that local repository under SVN control.
Our one Maven-experienced engineer doesn't like that -- I guess because he doesn't like putting binaries under version control? -- and suggests that maybe the professional version of the Nexus server can solve the clobbering or clobber-tracking/recovery problem.
Personally, I'm not happy (sorry, Sonatype readers) with shelling out money for a non-free build system when we haven't even seen any benefit from the free version yet, and there's no guarantee it will actually solve the problem.
So our choices seem to be:
WebDAV server
Pros: only one server, also accessible by devs, ...?
Cons: easy clobbering, no clobber-tracking/recovery
Local file system
Pros: can be placed under revision control
Cons: only works with the distribution script
Frankly, both of these seem like hacks to me, and I have to wonder if there isn't a better way to do this.
So: Is there a right thing to do here?
I'm not sure I get everything, but I would:
Use the maven-release-plugin, which automates the release process, i.e. executes all the steps documented in release:prepare (see the sketch after this list).
Use WebDAV with anonymous read-only and authenticated write policy (so only release engineer can actually deploy released artifacts to the corporate repo).
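For reference, the sketch mentioned above: a release with the plugin normally boils down to two commands run from a clean checkout (flags and repository settings omitted):

    $ mvn release:prepare   # verifies there are no SNAPSHOT deps, tags SVN, bumps the POM versions
    $ mvn release:perform   # builds from the tag and deploys the artifacts to the repository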
There is no need to put generated artifacts under version control (if you have the POMs under version control). I don't see the benefit of using the local file system instead of WebDAV (it does not provide more security; you can secure WebDAV as well). And I don't see what the commercial version of Nexus would solve here.
Nexus has a setting which prevents you from clobbering an already-released artifact in a release repository.
For a team of about a dozen, the free version of Nexus should be enough.
I want to bundle JRE 6.0 together with my Java application. All my source code resides in CVS. My client will check out the code and build it themselves. Should I store the JRE in CVS?
I normally advocate putting almost everything in source control, but this seems a little excessive. Why?
the JRE is readily available from http://java.sun.com
it doesn't change that often. I'd expect you to specify a minimum version for your code to run against (e.g. 1.5, 1.6 etc.)
I would not put a JDK or JRE into a source code repository:
It is bad practice to put externally versioned things into your version control, because it usually leads to over-constraining, obscuring and/or hard-wiring your app's external dependencies. (Maven or Ivy are good solutions for dealing with external dependencies, though not in this case.)
Putting binaries into version control is a bad idea for some version control systems.
But I think your real problem (actually, your user's organization's problem) is the IT folks who refuse to contemplate upgrading the JRE:
They need to be made aware of the fact that they can install multiple JRE versions on the one machine, and configure apps to launch with the JRE version they require. (It is trivial on Linux ...)
They need to be made aware of the fact that their policy is an impediment to progress.
They need to be made aware of the fact that their policy is a potential security issue. If they force users to deploy their own copies of JDKs / JREs in random places, it will be difficult to ensure that JRE security patches get applied. (Besides, 1.4.2 is due to be end-of-life'd soonish, and security patches for it will cease.)
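On Linux, for example, parallel JREs can live under /opt and be selected per app or via the alternatives system (paths and priorities below are illustrative):

    $ sudo update-alternatives --install /usr/bin/java java /opt/jre1.6.0/bin/java 100
    $ sudo update-alternatives --config java
    $ /opt/jre1.4.2/bin/java -jar legacy-app.jar   # or pin one app to an old JRE directly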
EDIT: and there is also the legal question of whether "redistributing" a JRE out of your source code repository is a violation of Sun's click-through JRE/JDK download license. (I don't know ...)
As a best practice, you shouldn't keep any binary files in the source control system. For Java developers there is Maven, which does a better job of versioning jar files. The reason is that we want to keep our source repository as small as possible, so it is faster for those who check out our code for the first time.
But if you still want to keep binary files in source control, it would be best to avoid using CVS, because CVS is bad at versioning binary files. You can search Google for why. If you use SVN, that is still okay, because SVN handles binary files much better than CVS.
I see nothing wrong with storing the JRE in CVS.
However, it's not so important whether you do or not as long as your script can pull it as part of the build. For example, if you want to host a downloadable jre.zip on an HTTP server, or point to it in a Maven repo, that's just as good.
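For example, the build script could fetch and unpack it on demand (the URL is hypothetical):

    $ curl -O http://build.example.com/artifacts/jre-6.zip
    $ unzip -q jre-6.zip -d build/jre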
Well, won't your client already have the JRE if you expect them to compile the code before running it? The JDK contains the JRE.
Depends a lot on what you use to handle dependencies. If you use Maven, then create a maven package with the stuff you need, and host it on a local repository.
If you just have CVS (like we do), then it is fine to create big binary packages (since you will need them) which you can then put in CVS. Just be aware that they should stay unchanged for best CVS performance.
Also note that the JSmooth package can create an EXE file from your jar with a JRE embedded in it. This might solve your deployment problem.
For remote compilation, Eclipse can work with a plain JRE. You just need to tell Eclipse where the JRE you prepared above is located on disk. There is also a folder inside the Eclipse distribution where the launcher looks automatically.
I'm wondering about the client building the application themselves. It will require some kind of Java compiler, most probably javac, which is part of the JDK. So your client will not only need a JRE, but a JDK as well (unless they will be using Jikes or another alternative compiler).
javac is capable of generating bytecode for previous versions of Java, so using a newer compiler should not pose any problems.
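For example, a newer javac can still produce 1.5-compatible classes; pointing -bootclasspath at the older runtime's rt.jar avoids accidentally linking against newer APIs (the path is illustrative):

    $ javac -source 1.5 -target 1.5 \
        -bootclasspath /opt/jre1.5.0/lib/rt.jar \
        Main.java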
Personally, I would not include large binaries like a JRE as part of my own repository. The JRE can be considered very stable and just listing the minimum version required should be enough. Installing a JRE is also something quite different than installing a single Java application. The two activities should not be mixed.