How long does it take for the CloudBees repository to sync with Maven Central? - cloudbees

I deployed a war application to Maven Central through oss.sonatype.org. The war can be accessed at http://repo.maven.apache.org/maven2/br/com/thiagomoreira/liferay/plugins/bootstrap-jumbotron-theme/1.0.0.0/bootstrap-jumbotron-theme-1.0.0.0.war; it was deployed at least 4 hours ago and it is still not available at http://repo.cloudbees.com/content/repositories/central/. I have another project that relies on this war being available, but I'm stuck on this issue. I have two questions: why is Central proxied, and how long does it take for repo.cloudbees.com to get in sync with Central? Shouldn't this be transparent, like a real proxy?
I'm postponing the release of my project to next weekend because of this roadblock. #fail

repo.cloudbees.com is a (Nexus) proxy for accessing Central; it caches an artifact the first time it is requested from a build:
http://repo.cloudbees.com/content/repositories/central/br/com/thiagomoreira/liferay/plugins/bootstrap-jumbotron-theme/1.0.0.0/bootstrap-jumbotron-theme-1.0.0.0.war
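For instance, a build that resolves Central through that proxy will pull the war and populate the cache on first use. A minimal settings.xml sketch (the mirror id is just an example, and this assumes the proxy allows anonymous reads):

    <mirrors>
      <mirror>
        <id>cloudbees-central</id>
        <mirrorOf>central</mirrorOf>
        <url>http://repo.cloudbees.com/content/repositories/central/</url>
      </mirror>
    </mirrors>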

Everything you said is right in theory, and I knew that before posting my question. But the reality was different; look at the screenshot I took when I was trying to release my project. I'm pretty sure there is some misconfiguration between Central and CloudBees...


JHipster 3: migrate from monolithic to microservices

Currently I have a JHipster 3.3 monolithic application and I would like to migrate to a microservices architecture. I've already created the registry, the gateway, and the UAA service. Now I need to migrate the core business of my application into a microservice. Is there a facility to do this? Can it be done automatically?
You could either convert your monolith into a service, or re-generate it from your entity definitions.
The first approach requires a good understanding of Spring Cloud: you'd start by annotating your app with @EnableEurekaClient, adding the missing Spring Cloud dependencies to your pom.xml, adding the missing properties to your application*.yml files, and creating bootstrap*.yml files. Then you would move your client part to your gateway. This is not easy, especially if you're new to Spring Cloud.
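As a rough illustration of that first step, a minimal sketch of the annotation (assuming the Spring Cloud Netflix Eureka dependency is already on the classpath; the package and class names here are made up):

    // Hypothetical application class; only the annotation is the point here.
    package com.example.myapp;

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.cloud.netflix.eureka.EnableEurekaClient;

    @SpringBootApplication
    @EnableEurekaClient   // registers this application with the Eureka registry
    public class MyApp {
        public static void main(String[] args) {
            SpringApplication.run(MyApp.class, args);
        }
    }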
The second approach requires you to generate a microservice app with the same options as your monolith, then copy your .jhipster folder (which contains your entity definitions) into it and re-generate the entities by running yo jhipster:entity <entityName> for each entity, in the same order you created them initially, and then generate them on the gateway as well to produce the client part.
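A rough command-line sketch of that second approach (the paths and entity names below are examples, not taken from your project):

    # copy the entity definitions from the old monolith (example path)
    cp -r ../my-monolith/.jhipster .
    # re-generate each entity, in the order they were originally created
    yo jhipster:entity Customer
    yo jhipster:entity Order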
You should also take time to think about why you're migrating. If you turn your monolithic app into a single service, it might be a bad idea, as you'll only add complexity; it makes sense only if you are planning to add more services and/or split your monolith into several services. There is a good free ebook and video at O'Reilly: "Microservices AntiPatterns and Pitfalls".
To start, I also want to second the last part of Gaël's answer:
think about why you're migrating?
Personally, I am in the middle of a migration right now. In 2015 I started a JHipster monolithic app (at that time it was the only option :) ) which I still develop and add new features to. I decided to migrate my monolith to microservices because we are going to grow the team and want to move towards DDD in the future. I must admit there is some overhead at the beginning and the learning curve is quite steep, but in the end the results are very rewarding, especially if you believe in CI (y)
This is how I migrated my monolith:
be sure that you have all your sources committed and synced with your VCS (I use git as a DVCS)
without any changes, just run the JHipster generator and overwrite all the old sources
make a git diff to get an overview of the files that are generated by JHipster versus the ones you have modified (see the sketch after this list)
if you have not changed the format of the files that JHipster generates, it should just be some files in the webapp folder and the configuration files
if you have differences only because of formatting, I recommend checking the code and then updating the base code of your monolithic app
the target is to have as few differences as possible when regenerating the monolith app with the JHipster generator (it's better to have fewer files to check when migrating to microservices)
at this point I assume that you are on a clean workspace (i.e. all your changes are synced with the VCS) and that if you run yo jhipster you will have as few files as possible to recheck manually
in the root folder of the app there is a .yo-rc.json file
in that file you should change the applicationType from monolith to gateway and the authenticationType from whatever you have to jwt, e.g.:
.yo-rc.json (excerpt of the "generator-jhipster" section):

    "jhipsterVersion": "3.5.1",
    "serverPort": "8080",
    "applicationType": "gateway",
    "authenticationType": "jwt",
    "jhiPrefix": "jhi",
after merging the newly generated files you should now have the gateway of the microservices setup (you may need to delete some classes, depending on which authenticationType your monolith used to have)
personally, I am now working on moving some of the responsibilities (all the stuff the old monolith did) out of the gateway into standalone microservices
the migration of the services mentioned in 6.1 goes in parallel with adding new features to the app, and those will be added as new microservices
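As promised in the git diff step above, the regenerate-and-review loop can look roughly like this (a sketch, assuming git and the JHipster 3.x generator):

    # make sure everything is committed before touching anything
    git status
    # re-run the generator over the existing sources and accept the overwrites
    yo jhipster
    # review what the generator changed versus your own modifications,
    # then keep or revert hunks as needed
    git diff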
My recommendation is to go in small steps/increments, and it would be nice if you have CI so that you can get feedback about your migration as soon as possible ;)
Good luck.
Cheers, duderoot

Cache Credentials During SVN Merge

A merge from a feature branch to trunk took over 45 minutes to complete.
The merge included a whole lot of jars (~250MB); however, when I did it on the server with the file:// protocol the process took less than 30 seconds.
SVN is being served up by Apache over https.
The version of SVN on the server is
svn, version 1.6.12 (r955767)
compiled Sep 3 2013, 17:49:49
My local version is
svn, version 1.7.7 (r1393599)
compiled Oct 8 2012, 20:42:17
On checking the Apache logs, I saw that I made over 10k requests, and apparently each of these requests went through an authentication layer.
Is there a way to configure the server so that it caches the credentials for a period and doesn't make so many authentication requests?
I guess the tricky part is making sure the credentials are only cached for the life of a single svn 'request'. If svn merge makes lots of individual https requests, how would you determine how long to store the credentials for without adding potential security holes?
First of all, I'd strongly suggest you upgrade the server to a 1.7 or 1.8 version, since 1.7 and newer servers support an updated version of the protocol that requires fewer requests for many operations.
Second, if you're using path-based authorization you probably want SVNPathAuthz short_circuit in your configuration. Without it, secondary paths (i.e. paths not in the request URI), which come up for many recursive requests (especially log), are authorized by running back through the entire Apache httpd authentication infrastructure. With the setting, instead of running the whole httpd authentication/authorization stack, we simply ask mod_authz_svn to authorize the action against the path. Running through the entire httpd infrastructure can be especially painful if you're using LDAP and it needs to go back to the LDAP server to check credentials. The only reason not to use the short_circuit setting is if you have some other authentication module that depends on the path; I've yet to see such a setup in the wild, though.
Finally, if you are using LDAP, then I suggest you configure credential caching, since this can greatly speed up authentication. Apache httpd provides the mod_ldap module for this, and I suggest you read its documentation.
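For reference, a hedged sketch of what those two suggestions can look like in the httpd configuration (paths, realm, cache sizes/TTLs, and the LDAP URL are placeholders; check the mod_authz_svn and mod_ldap documentation for your versions):

    # mod_ldap caching (server-wide); these values are just examples
    LDAPSharedCacheSize 500000
    LDAPCacheEntries    1024
    LDAPCacheTTL        600

    <Location /svn>
        DAV svn
        SVNParentPath /var/svn/repos
        AuthzSVNAccessFile /etc/svn/authz
        # authorize secondary paths directly in mod_authz_svn instead of
        # re-running the whole httpd auth stack for each one
        SVNPathAuthz short_circuit

        AuthType Basic
        AuthName "Subversion"
        AuthBasicProvider ldap
        AuthLDAPURL "ldap://ldap.example.com/ou=people,dc=example,dc=com?uid"
        Require valid-user
    </Location>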
If you provide more details of the server side setup I might be able to give more tailored suggestions.
The comments suggesting that you not put jars in the repository are valuable, but with some configuration improvements you can help resolve some of your slowness anyway.
The merge included a whole lot of jars (~250MB)
That's your problem! If you go through your network via http://, you have to send those jars via http://, and that can be painfully slow. You can increase the cache size of Apache httpd, or you can set up a parallel svn:// server, but you're still sending a quarter of a gigabyte of jars over the network. That's why file:// was so much faster.
You should not be storing jars in your Subversion repository. Here's why:
Version control gives you a lot of power:
It helps you merge differences between branches
It helps you follow the changes taking place.
It helps identify a particular change and why a particular change took place.
Storing binary files like jars gives you none of that. You can't merge binary files, and you can't track their changes.
Not only that, but version control systems usually use diffs to track changes, which saves a lot of space. Imagine a 1 kilobyte text file. Over 5 revisions, six lines are changed. Instead of taking up 6K of space, only the original 1K plus those six changed lines are stored.
When you store a jar, and then a new version of that jar, you can't easily diff them, and since the jar format is zip, Subversion can't really compress them either. Store five versions of a jar in Subversion, and you store pretty close to five times the size of that jar: if a jar file is 10K, you're using about 50K of space for it.
So, not only do jar files take up a lot of space while giving you none of the benefits of version control, they can quickly take over your repository. I've seen sites where over 90% of an 8 gigabyte repository was nothing but compiled code and third-party jars. And the useful life of these binary files is really quite limited, too. So, in those places, 80% of the Subversion repository was wasted space.
Even worse, you tend to lose track of where you got that jar and what is in it. When users check in a jar called commons-beans.jar, I don't know what version that jar is, whether it was built by someone in-house, or whether it was somehow munged by that person. I've seen users merge two separate jars into a single jar for ease of use. And if someone calls a jar commons-beanutils-1.5.jar because it was version 1.5, it's very likely that someone will later update it to version 1.7 but not change the name. (Renaming would affect the build, you'd have to add and delete, there is always some reason.)
So, there's a massive amount of wasted space with little benefit and almost no information. Storing jars is just plain bad news.
But your build needs jars! What should you do?
Get a jar repository like Nexus or Artifactory. Both of these repository managers are free and open source.
Once you store your jars in there, you can fetch the revision of the jar you want either through Maven, Gradle, or, if you use Ant and want to keep your Ant build system, Ivy. You can also, if you don't feel like being that fancy, fetch the jars via an Ant <get/> task. If you use Jenkins, Jenkins can easily deploy the built jars to your Maven repository for other projects to use.
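For the Ant route, a minimal sketch of the <get/> task (the repository URL and artifact coordinates are placeholders, not your actual setup):

    <target name="fetch-deps">
      <!-- download a released jar from the repository manager into lib/ -->
      <get src="http://nexus.example.com/content/repositories/releases/com/example/app/1.0/app-1.0.jar"
           dest="lib/app-1.0.jar"
           usetimestamp="true"/>
    </target>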
So, get rid of the jars. Merging will then be a simple diff between text files. Merging branches will be much quicker, and less information has to be sent over the network. If you don't want to switch to Maven, then use Ivy, or simply update your builds with the <get/> task to fetch the jars and the versions you need.

Need advice regarding deployment on multiple remote machines

Currently I am using MSDeploy to build and deploy to several machines using TeamCity. In my scenario, I need to build, package, and deploy to Dev. After this I need to deploy the same package to the Test and Live servers (which are on a different domain). I understand how we do it, but the problem is that web.config transformation only occurs for the Test and Live configs if we build a separate package for each. That means the package created for Dev cannot be reused, as the transformation was only applied to the Dev web.config. I also know that we can change the web.config when un-packaging, but those parameters are very limited. We have a lot of changes, not just connection string or DB changes.
Another solution is to add a step that builds packages for Test and Live as part of the Dev deployment, but that means a lot of copying to remote servers, once for Test and once for Live, which is very time consuming because of the different domains.
Can you please advise what the best solution is in this scenario, so that I can use TeamCity to publish to Dev, Test, and Live using the same package and different web configs in one go?
To configure items at deployment time which are not automatically created for you, you can add a file named parameters.xml to your project and extend what you want to make available at deployment time.
Here's some documentation on the approach Using Deployment Parameters for Web.Config File Settings.
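As a rough illustration (the parameter name, default value, and XPath below are examples, not something MSDeploy creates for you), a parameters.xml entry that exposes an app setting might look like:

    <parameters>
      <parameter name="ServiceUrl"
                 description="Endpoint used by the application"
                 defaultValue="https://dev.example.com/api">
        <!-- rewrite the matching node in web.config at deploy time -->
        <parameterEntry kind="XmlFile"
                        scope="\\web.config$"
                        match="/configuration/appSettings/add[@key='ServiceUrl']/@value" />
      </parameter>
    </parameters>

At deploy time you then supply a SetParameters.xml file (or -setParam arguments) per environment, so the same package can be pushed to Dev, Test, and Live with different values.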

How do I back up a Nexus repository manager?

The Nexus book (http://www.sonatype.com/books/nexus-book/reference/) does not seem to spend any time on how one should go about backing up a Nexus repository. If I am installing my snapshots and releases into this local repository, it seems it would behoove me to back it up. However, I'm not really interested in backing up anything that can easily be downloaded again from a remote repository.
Some google searches do not seem to reveal the canonical answer either, so perhaps for posterity it can be recorded here.
Thanks,
Nathan
When you install Nexus, you'll end up with two directories:
nexus-webapp-1.3.1.1/
sonatype-work/
We've separated the application from the data and configuration. The Nexus application is in nexus-webapp-1.3.1.1/ and the data and configuration are in sonatype-work/nexus. This was mainly done to facilitate easier upgrades, but it also has the side effect of making it very easy to back up a Nexus installation.
The Simple Answer
Nexus doesn't store repositories in a database or do anything that would preclude a simple backup of the file system under sonatype-work/nexus. If you need to create a complete backup, just archive the contents of sonatype-work/nexus.
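For example (a sketch, assuming a default install layout and enough disk space for the archive; Nexus can keep running, see the note on shutting down below):

    tar czf nexus-backup-$(date +%Y%m%d).tar.gz sonatype-work/nexus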
Better Answer
If you want a more selective approach to backing up a Nexus installation, you will certainly want to back up everything under sonatype-work/nexus/conf, sonatype-work/nexus/storage, and sonatype-work/nexus/template-store. If you want to back up the metadata and file attributes that Nexus keeps for proxy repositories, back up sonatype-work/nexus/proxy, although this isn't required, as the information about a proxy repository will be regenerated on demand as attributes are requested.
You don't need to back up sonatype-work/nexus/logs, and you don't need to back up the Lucene indexes in sonatype-work/nexus/indexer.
Nexus Pro Answer
There is a Nexus Professional plugin which can automate the process of creating a backup of the Nexus configuration data. This plugin is going to address the contents of the sonatype-work/nexus/conf directory. If you need to backup the sonatype-work/nexus/storage directory, you will need to configure some backup system to backup the contents of that filesystem. Once again, as with Nexus Open Source, there is currently no real benefit in backing up the contents of sonatype-work/nexus/indexer or sonatype-work/nexus/logs.
Excluding Storage for Remote Repositories
In your question you mention that you want to exclude the storage devoted to the local cache of remote repositories. If you are interested in doing this, you'll have to go one level of granularity further and exclude the directories under sonatype-work/nexus/storage that correspond to the proxied remote repositories.
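A hedged sketch of that more selective backup with rsync (the repository id "central" and the destination path are examples; substitute the ids of your own proxy repositories):

    rsync -a \
      --exclude 'storage/central/' \
      --exclude 'indexer/' \
      --exclude 'logs/' \
      sonatype-work/nexus/ /backups/nexus/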
Do you need to shut Nexus down for a backup?
Brian Fox told me no, the only real chance for file contention is going to be the files in the indexer/ directory. You shouldn't have a problem backing up the sonatype-work filesystem with a running instance of Nexus.
BTW, thanks for the question, this answer will likely be incorporated into the next version of the Nexus book.
AFAIK Nexus (the free version) does not have any backup features, but it should be as simple as knowing your company's groupId and grabbing it from the storage directories in Nexus.
But I would schedule a complete repository backup too; you never know when the remote repositories will be down, right when you need them the most.

Should I use an FTP server as a Maven host?

I would like to host a Maven repository for a framework we're working on, along with its dependencies. Can I just deploy my artifacts to my FTP host using mvn deploy, or should I deploy manually and/or set up some things before being able to deploy artifacts? I only have FTP access to the server I want to host the Maven repo on.
The online repository I want to use is not hosted by me. As I said, I only have FTP access, so if possible I would like to use that FTP space as a Maven repository. The tools mentioned seem to work when you have full control over the host machine, or at least more than just FTP access, since you need to configure the local directories where the repositories will be placed. Is this possible?
You might want to have a look at Nexus, a Maven repository manager. We've replaced our local Maven repository with a Nexus-based one and find it tremendously useful.
I've successfully used Archiva as my repository for several years ... see http://archiva.apache.org/. It's easy to administer and allows you to configure as many repositories as you need (SNAPSHOT, internal, external, etc).
According to the book "Better Builds with Maven", the most common type of repository is HTTP; this paragraph describes what I think you need:
This chapter will assume the repositories are running from http://localhost:8081/ and that artifacts are deployed to the repositories using the file system. However, it is possible to use a repository on another server with any combination of supported protocols including http, ftp, scp, sftp and more. For more information, refer to Chapter 3.
A Maven 2 repository is simply a specific directory structure, so once you get the transport and server specifications right for the repository and deployment portion of your POMs, it should be completely transparent to your users.
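For the FTP case specifically, a minimal pom.xml sketch (the repository id, URL, and wagon-ftp version below are placeholders) that lets mvn deploy publish over FTP:

    <distributionManagement>
      <repository>
        <id>ftp-repo</id>
        <url>ftp://ftp.example.com/htdocs/maven2</url>
      </repository>
    </distributionManagement>

    <build>
      <extensions>
        <!-- the FTP wagon is not bundled with Maven, so it must be declared -->
        <extension>
          <groupId>org.apache.maven.wagon</groupId>
          <artifactId>wagon-ftp</artifactId>
          <version>2.10</version>
        </extension>
      </extensions>
    </build>

The FTP credentials would then go in your settings.xml under a <server> element whose id matches ftp-repo.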
You can even use Dropbox. All that you need is a public address to access the files generated with mvn deploy, with any of the protocols in the accepted answer.
I guess there are more services that can work in the same way, but I'm not certain about the URL schemes that alternatives to Dropbox may use.
https://maven.apache.org/wagon/wagon-providers/wagon-ftp/ will tell you that you can use ftp to read from an existing repository, but not to create a new one. I don't think that it is impossible in principle, but no one has cared to write all the fiddly code to do the directory management via ftp.