Cache Credentials During SVN Merge

A merge from a feature branch to trunk took over 45 minutes to complete.
The merge included a whole lot of jars (~250 MB); however, when I ran the same merge on the server using the file:// protocol, the process took less than 30 seconds.
SVN is being served up by Apache over https.
The version of SVN on the server is
svn, version 1.6.12 (r955767)
compiled Sep 3 2013, 17:49:49
My local version is
svn, version 1.7.7 (r1393599)
compiled Oct 8 2012, 20:42:17
Checking the Apache logs, I saw that I had made over 10k requests, and apparently each of these requests went through an authentication layer.
Is there a way to configure the server so that it caches the credentials for a period and doesn't make so many authentication requests?
I guess the tricky part is making sure the credentials are only cached for the life of a single svn 'request'. If svn merge makes lots of individual https requests, how would you determine how long to store the credentials for without adding potential security holes?

First of all, I'd strongly suggest you upgrade the server to a 1.7 or 1.8 version, since 1.7 and newer servers support an updated version of the protocol that requires fewer requests for many actions.
Second, if you're using path-based authorization you probably want SVNPathAuthz short_circuit in your configuration. Without it, when authorization runs for secondary paths (i.e. paths not in the request URI), as happens for many recursive requests (especially log), each check goes back through the entire Apache httpd authentication/authorization infrastructure. With the setting, mod_authz_svn is simply asked to authorize the action against the path. Running through the entire httpd infrastructure can be especially painful if you're using LDAP and it needs to go back to the LDAP server to check credentials. The only reason not to use short_circuit is if you have some other authentication module that depends on the path; I've yet to see an actual setup like that in the wild, though.
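For reference, here is a minimal sketch of what that might look like in a mod_dav_svn location block (the paths, realm, and auth files are placeholders for whatever you already have):
<Location /svn>
  DAV svn
  SVNParentPath /var/svn/repos
  AuthType Basic
  AuthName "Subversion repository"
  AuthUserFile /etc/apache2/dav_svn.passwd
  Require valid-user
  # Path-based rules are checked directly by mod_authz_svn, skipping a
  # second pass through the full httpd authentication/authorization chain.
  AuthzSVNAccessFile /etc/apache2/dav_svn.authz
  SVNPathAuthz short_circuit
</Location>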
Finally, if you are using LDAP, then I suggest you configure credential caching, since this can greatly speed up authentication. Apache httpd provides the mod_ldap module for this, and I suggest you read its documentation.
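As a hedged example, the relevant mod_ldap cache directives look like this (the values shown are illustrative; tune them for your environment):
LoadModule ldap_module modules/mod_ldap.so
# Shared-memory cache size in bytes
LDAPSharedCacheSize 500000
# Search/bind cache: number of entries and TTL in seconds
LDAPCacheEntries 1024
LDAPCacheTTL 600
# Operation (compare) cache: number of entries and TTL in seconds
LDAPOpCacheEntries 1024
LDAPOpCacheTTL 600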
If you provide more details of the server side setup I might be able to give more tailored suggestions.
The comments suggesting that you not put jars in the repository are valuable, but with some configuration improvements you can still reduce some of the slowness.

The merge included a whole lot of jars (~250MB)
That's your problem! If you go through your network via http://, you have to send those jars over http://, and that can be painfully slow. You can increase the cache size of Apache httpd, or you can set up a parallel svn:// server, but you're still sending a quarter of a gigabyte of jars through the network. That's why file:// was so much faster.
You should not be storing jars in your Subversion repository. Here's why:
Version control gives you a lot of power:
It helps you merge differences between branches
It helps you follow the changes taking place.
It helps identify a particular change and why a particular change took place.
Storing binary files like jars provides you none of that. You can't merge binary files, and you can't track their changes.
Not only that, but version control systems usually use diffs to track changes, which saves a lot of space. Imagine a 1-kilobyte text file that goes through five revisions, with six lines changed in total. Instead of storing six full 1K copies (6K), only the original 1K plus those six changed lines are stored.
When you store a jar and then a new version of that jar, you can't easily do a diff, and since the jar format is already zip-compressed, you can't really compress it further either. Store five versions of a jar in Subversion and you store pretty close to five times the size of that jar: if a jar file is 10K, you're using about 50K of space for it.
So not only do jar files take up a lot of space while giving you none of the power of version control, they can quickly take over your repository. I've seen sites where over 90% of an 8-gigabyte repository is nothing but compiled code and third-party jars. And the useful life of these binary files is really quite limited, too. So, in these places, 80% of their Subversion repository is wasted space.
Even worse, you tend to lose track of where you got that jar and what is in it. When users check in a jar called commons-beans.jar, I don't know what version that jar is, whether it was built by someone in-house, or whether it was somehow munged by that person. I've seen users merge two separate jars into a single jar for ease of use. And if someone calls a jar commons-beanutils-1.5.jar because it was version 1.5, it's very likely that someone will later update it to version 1.7 without changing the name (renaming would affect the build, you'd have to add and delete, there is always some reason).
So, there's a massive amount of wasted space with little benefit and almost no information. Storing jars is just plain bad news.
But your build needs jars! What should you do?
Get a jar repository like Nexus or Artifactory. Both of these repository managers are free and open source.
Once you store your jars in there, you can fetch the revision of the jar you want through Maven, Gradle, or, if you use Ant and want to keep your Ant build system, Ivy. You can also, if you don't feel like being that fancy, fetch the jars via an Ant <get/> task. If you use Jenkins, it can easily deploy the jars it builds to your Maven repository for other projects to use.
So, get rid of the jars. Merging will then be a simple diff between text files, merging branches will be much quicker, and less information has to be sent over the network. If you don't want to switch to Maven, then use Ivy, or simply update your builds with the <get/> task to fetch the jars and the versions you need.
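As a rough illustration of the <get/> approach (the repository URL, jar coordinates, and lib directory below are placeholders, not anything from your build), an Ant target that pulls an approved jar could look like this:
<target name="fetch-deps">
  <mkdir dir="lib"/>
  <!-- usetimestamp avoids re-downloading a jar that hasn't changed -->
  <get src="https://nexus.example.com/repository/approved/commons-beanutils/1.7.0/commons-beanutils-1.7.0.jar"
       dest="lib/commons-beanutils-1.7.0.jar"
       usetimestamp="true"/>
</target>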

Related

What's the purpose of libgit2-backends?

I'm trying to build a website that supports git version control, using libgit2 for the backend. But I think the filesystem is not easy to scale, nor does it make it easy to guarantee data integrity. I noticed that libgit2 has custom backend support (https://github.com/libgit2/libgit2-backends) where I can use a database for some of the storage.
Initially I hoped that I could completely get rid of the filesystem by saving everything git-related into a database. But after I tried the sqlite backend, it seems that libgit2 still needs to generate a .git folder on my filesystem. Can I remove the .git folder when I use a database as the backend?
There are a few limitations to having a fully working, "in-memory" repository with libgit2, even though there is support for some in-memory-ness. As you found out, there are API endpoints for a custom object database (.git/objects/) and for the refdb (.git/refs/), and the config subsystem can work in-memory. But that is not the case for everything else that can go in .git/, because there's no customization point at the repository level. IMHO it's not really git we're talking about anymore once you're completely fs-less, and work isn't really going in that direction.
Feature-request: https://github.com/libgit2/libgit2/issues/4671
Discussion on a fix + some branches with "preparatory design work": https://github.com/libgit2/libgit2/pull/4967
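To make the split concrete, here is a hedged C sketch of plugging a custom object database into an existing repository. It assumes the sqlite backend from libgit2-backends exposes git_odb_backend_sqlite() (the extern declaration below is an assumption about that project, not a documented libgit2 API), and error handling is abbreviated:
#include <git2.h>

/* Assumed to be provided by the sqlite backend in libgit2-backends. */
extern int git_odb_backend_sqlite(git_odb_backend **out, const char *sqlite_db);

int open_repo_with_sqlite_objects(git_repository **repo_out, const char *repo_path)
{
    git_odb *odb = NULL;
    git_odb_backend *backend = NULL;
    int error;

    git_libgit2_init();

    /* The repository is still opened from disk: refs, config, index, HEAD,
     * hooks, etc. continue to live under .git/. */
    if ((error = git_repository_open(repo_out, repo_path)) < 0)
        return error;

    /* Route object storage to sqlite instead of .git/objects. */
    if ((error = git_odb_new(&odb)) < 0)
        return error;
    if ((error = git_odb_backend_sqlite(&backend, "objects.sqlite")) < 0)
        return error;
    if ((error = git_odb_add_backend(odb, backend, 1)) < 0)
        return error;

    git_repository_set_odb(*repo_out, odb);
    git_odb_free(odb);
    return 0;
}
Only the object store moves into the database here; everything else in .git/ stays on the filesystem, which is why the directory can't simply be deleted.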

Perl6: rakudobrew cannot build moar

I'd like to upgrade to the newest version of Perl6,
rakudobrew build moar
Update git reference: rakudo
Cloning into 'rakudo'...
fatal: unable to connect to github.com:
github.com[0: 140.82.114.4]: errno=Connection timed out
Failed running git clone git://github.com/rakudo/rakudo.git rakudo at /home/con/.rakudobrew/bin/rakudobrew line 57.
main::run("git clone git://github.com/rakudo/rakudo.git rakudo") called at /home/con/.rakudobrew/bin/rakudobrew line 397
main::update_git_reference("rakudo") called at /home/con/.rakudobrew/bin/rakudobrew line 368
main::build_impl("moar", undef, "") called at /home/con/.rakudobrew/bin/rakudobrew line 115
This is just a simple connection failure, but how do I fix it?
Your connection problem doesn't really have anything to do with any P6-related software, or in fact any software you're using. It is, as you say, "just a simple connection failure". Most such failures are transient and "fix themselves". As JJ notes, in such scenarios you just wait and then things start working again.
So by the time you read this it'll probably be working for you again without you having fixed anything. But I'm writing an answer anyway with these sections:
Consider not using rakudobrew
Connection problems that "fix themselves"
Connection problems you investigate or fix yourself
Getting around single points of failure
Consider not using rakudobrew
The main purpose of rakudobrew is to support installation of many versions of Rakudo simultaneously and the main audience for the tool is folk hacking on the Rakudo compiler, not those merely using it.
If you're just a regular user, not someone developing the Rakudo compiler, and/or you don't need to have multiple versions of Rakudo, with complete source code, installed simultaneously, then consider just downloading and installing Rakudo files directly, e.g. via rakudo.org/files, rather than via rakudobrew.
Connection problems that "fix themselves"
rakudobrew failed because a git clone ... command failed: the connection to the github.com server timed out.
A server timing out when doing something that usually works using a connection that usually works is likely a transient problem, aka a "please try later" problem.
Transient problems typically "fix themselves" a few seconds, minutes or hours later.
If there's still a problem when you try again, and you want to spend time trying to find out what's going on officially, then look for a status page for that server.
Here are two status pages I know of for github.com:
https://www.githubstatus.com/
https://twitter.com/githubstatus?lang=en-gb
And for unofficial scuttlebutt I suggest reading the twitter feed.
For me, right now, github.com is working fine and the status page says all systems are go.
So it should now be working for you too.
If it's not, then you can wait longer, or investigate. If you want to investigate, start by looking at the status pages above.
Connection problems you investigate or fix yourself
If github claims it's working fine then there's presumably a problem with your local internet "on-ramp" (your system or your internet service provider's) or somewhere further afield between your on-ramp and the server you're failing to connect to. (You can only know approximately where the server is based on which region of the world administers the IP address the server is associated with at any given moment.)
The next place to look is a site like the Internet Traffic Report, which indicates traffic jams and the like across the planet. (Ignore the visual display, which is broken on some browsers, and click on the links in the table to drill down.)
If it's all green between you and the region that administers the IP address of the server you're failing to connect to, then the next place to turn would be your system's administrator and/or ISP.
Failing that, then perhaps you can ask a question at a sister stackexchange site like serverfault.com or superuser.com.
Getting around single points of failure
Perhaps you were thinking there might be some system redundancy and/or you're interested in that aspect.
P5's ecosystem and its tools are generally mature and limit single points of failure (SPOFs). This is unlike the ecosystems and tools of most other languages out there; so if you've gotten used to the remarkable reliability/availability of CPAN due to its avoidance of SPOFs, and by extension perlbrew, well, you've been spoiled by P5.
The P6 ecosystem/tool combinations are evolving in the P5 tradition.
For example, the zef package manager automatically connects to CPAN alongside github, and is built to be able to connect to other repos. The ecosystem is partway there to take advantage of this zef capability in that many modules are redundantly hosted on both CPAN and github.
rakudobrew ignores CPAN and assumes use of git repos. It is impressively configurable via its Variables.pm file, which includes a %git_repos variable that could be re-targeted to an alternative git repo site like GitLab. But no one has, to my knowledge, arranged to redundantly copy and update the relevant Rakudo files to another git repo site, so this SPOF-avoidance ability apparently inherent in rakudobrew's code is, AFAIK, moot for now.

how to minimize JBOSS AS 7 configuration to fit needs

I need to be able to configure JBoss AS 7.2 to start up only the services required by the project.
What is the best approach to customizing the JBoss AS 7.2 configuration and reducing it to a user-defined configuration?
I'm intending to use JAAS, EJB, JSF, etc.
If I understand your request correctly, what you want is to remove the unnecessary subsystems.
In standalone mode, you can maintain multiple XML configurations and decide which one to use at startup, which allows you to adapt to various needs quickly.
In domain mode, it's even easier: you can define various profiles, each with specific subsystems present or not, and then assign the appropriate profile to your server group.
Removing subsystems through XML modification or the CLI is simple. Adding them back through the CLI sometimes requires figuring out some of the "default" expected entries needed to recreate the subsystem, but once figured out, that is easy as well.
The key element in your case is to make sure you do not remove too many subsystems, as they sometimes have dependencies on one another.
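As a rough sketch of the standalone workflow (the configuration file name and the webservices subsystem below are only illustrative examples):
# Start the server with an alternative, trimmed-down configuration
./bin/standalone.sh --server-config=standalone-minimal.xml

# Remove a subsystem (and its extension) through the management CLI,
# then reload; the change is persisted back into the XML
./bin/jboss-cli.sh --connect
/subsystem=webservices:remove
/extension=org.jboss.as.webservices:remove
:reload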

Apache Ivy: Where do I put all these JARs?

I'm trying to convince the higher-ups at my workplace to migrate to Apache Ivy. I've managed to get a few sandbox projects working using Ivy to power the build, and now I have a green light to put together a migration proposal.
We all agree on one thing: we don't want to trust JARs that are located in public directories! I know, I know, a bit paranoid, yes. But we'd like to have a setup where we pull a JAR from a trusted source (either downloading it from the open source project itself, or most likely, gulp, a public repo), and use it for some time before we "certify" it (give it our blessing as a safe artifact to use).
Then we want to have a common repository for all JARs used by our many projects.
My original thinking was to put this repository in version control (we have an SVN server), but I wasn't sure what best practices dictate. It might make more sense to put our JARs on a file server and FTP to them in the Ivy script.
Either way, SVN (HTTPS) or FTP, all of our servers are authenticated. So, a small number of questions:
Where should we be publishing all of our "certified" JARs (everything from `log4j` to any homegrown JARs we produce)? What do best practices dictate?
The "ivyrep" resolver-type does not take username or passwd atrributes. If our "JAR server" (FTP, SVN, etc.) is authenticated, how do I configure the Ivy scripts to login?
I must echo Brian's recommendation to use a repository manager like Nexus. It's a lot less work in the long run. You'll also discover that the professional version of Nexus enables you to create approval processes around repositories which you plan to use in your build. See the procurement suite functionality.
If, on the other hand, you are determined to build your own repository, then ivy has the tools for the job. You need to become very familiar with the ivy settings file and how it declares and uses resolvers.
If the repository is accessible via HTTPS, then the url resolver should be able to access it. The resolver assumes that each version of an artifact is in a different directory, and you'll need to specify the URL pattern that Ivy should use when accessing the repository:
<url name="two-patterns-example">
<ivy pattern="http://ivyrep.mycompany.com/[module]/[revision]/ivy-[revision].xml" />
<artifact pattern="http://ivyrep.mycompany.com/[module]/[revision]/[artifact]-[revision].[ext]" />
</url>
The pattern is fully flexible with respect to how you store the artifacts.
Authentication is also handled in the settings file using the credentials tag.
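In the ivysettings.xml that looks something like the following (the host, realm, and account values are placeholders):
<credentials host="ivyrep.mycompany.com"
             realm="Subversion repository"
             username="builduser"
             passwd="secret"/>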
Finally, the FTP protocol is also supported. It's hard to find in the doco, but it's supported by the vfs resolver.
I think that's enough information on an option I don't recommend :-) Having said that, I once created an FTP-based repository for managing releases to clients. It's useful to have a tool this powerful :-)
Why not use something like Sonatype's Nexus? I've seen it used for Maven, and I believe it'll work for Ivy.
You can set it up to download from remote repositories into (say) a 'test' repository. You can then evaluate those .jars, and if they're good, upload them into an 'approved' repository for general consumption. There's some authentication surrounding this, but you'd have to evaluate that in greater depth. Certainly you can restrict the uploading into repositories via a username/password pair.

How do I backup a nexus repository manager

The Nexus book (http://www.sonatype.com/books/nexus-book/reference/) does not seem to spend any time on how one should go about backing up a Nexus repository. If I am installing my snapshots and releases into this local repository, it seems that it would behoove me to back it up. However, I'm not really interested in backing up anything that can easily be downloaded from a remote repository.
Some google searches do not seem to reveal the canonical answer either, so perhaps for posterity it can be recorded here.
When you install Nexus, you'll end up with two directories:
nexus-webapp-1.3.1.1/
sonatype-work/
We've separated the application from the data and configuration: the Nexus application is in nexus-webapp-1.3.1.1/ and the data and configuration are in sonatype-work/nexus. This was mainly done to facilitate easier upgrades, but it also has the side effect of making it very easy to back up a Nexus installation.
The Simple Answer
Nexus doesn't store repositories in a database or do anything else that would preclude a simple backup of the filesystem under sonatype-work/nexus. If you need a complete backup, just archive the contents of the sonatype-work/nexus directory.
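For example (the paths are illustrative; adjust them to where Nexus is installed):
tar -czf nexus-backup-$(date +%Y%m%d).tar.gz sonatype-work/nexus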
Better Answer
If you want a more selective approach to backing up a Nexus installation, you will certainly want to back up everything under sonatype-work/nexus/conf, sonatype-work/nexus/storage, and sonatype-work/nexus/template-store. If you want to back up the metadata and file attributes that Nexus keeps for proxy repositories, back up sonatype-work/nexus/proxy, although this isn't required, as the information about a proxy repository will be regenerated on demand as attributes are requested.
You don't need to back up sonatype-work/nexus/logs, and you don't need to back up the Lucene indexes in sonatype-work/nexus/indexer.
Nexus Pro Answer
There is a Nexus Professional plugin which can automate the process of creating a backup of the Nexus configuration data. This plugin addresses the contents of the sonatype-work/nexus/conf directory. If you need to back up the sonatype-work/nexus/storage directory, you will need to configure a backup system for that filesystem. Once again, as with Nexus Open Source, there is currently no real benefit in backing up the contents of sonatype-work/nexus/indexer or sonatype-work/nexus/logs.
Excluding Storage for Remote Repositories
In your question you mention that you want to exclude the storage devoted to the local cache of remote repositories. If you are interested in doing this, you'll have to go down a further level of granularity and exclude the directories under sonatype-work/nexus/storage that correspond to the proxied remote repositories.
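A hedged example, assuming a proxy repository named "central" and reusing the simple tar approach above:
tar -czf nexus-backup-$(date +%Y%m%d).tar.gz \
    --exclude='sonatype-work/nexus/storage/central' \
    --exclude='sonatype-work/nexus/indexer' \
    --exclude='sonatype-work/nexus/logs' \
    sonatype-work/nexus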
Do you need to shut Nexus down for a backup?
Brian Fox told me no; the only real chance of file contention is with the files in the indexer/ directory. You shouldn't have a problem backing up the sonatype-work filesystem with a running instance of Nexus.
BTW, thanks for the question; this answer will likely be incorporated into the next version of the Nexus book.
AFAIK Nexus (the free version) does not have any built-in backup features, but it should be as simple as knowing your company's groupIds and grabbing them from the storage directories in Nexus.
But I would schedule a complete repository backup too; you never know when the remote repositories will be down, right when you need them the most.