I wanted to know if we can do a mass upload of artifacts to a repository in Nexus.
You can do it in a variety of ways:
Use the Nexus artifact upload page (note this only works for multiple artifacts with the same groupId and artifactId).
Set up a script with multiple invocations of the maven-deploy-plugin's deploy-file goal, one for each artifact (see the sketch after this list).
If you have access to the file system, you can copy the files directly into [sonatype-work]/storage/[repository-name]. If you do this, set up scheduled tasks to rebuild the metadata and reindex the repository.
Use the Nexus Repository Conversion Tool to create Release and Snapshot folders based on your local .m2 folder and then move the contents of those folders into [sonatype-work]/storage/[repository-name].
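For the scripted option above, a minimal shell sketch might look like the following; the repository URL, repository id, and coordinates are placeholders, and the repositoryId needs to match a server entry in your settings.xml:

#!/bin/sh
# Sketch only: upload each artifact with one deploy-file invocation.
# REPO_URL, REPO_ID, and the coordinates below are placeholders for your own values.
REPO_URL=http://nexus.example.com/content/repositories/releases
REPO_ID=example-releases

mvn deploy:deploy-file \
  -Durl="$REPO_URL" \
  -DrepositoryId="$REPO_ID" \
  -DgroupId=com.example \
  -DartifactId=example-lib \
  -Dversion=1.0.0 \
  -Dpackaging=jar \
  -Dfile=libs/example-lib-1.0.0.jar

# Repeat (or loop over) the mvn invocation for each artifact you need to upload.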
We have a Terraform module developed and kept inside a repo, and people access it by putting the following in their main.tf:
module "standard_ingress" {
source = "git::https://xxx.xx.xx/scm/xxxx/xxxx-terraform-eks-ingress-module.git?ref=master"
When they run terraform init, the whole repo is cloned into the module folder (~/.terraform/modules/standard_ingress).
We have some non-module (non-Terraform) folders in the same repo and the same branch as well.
Is there a way we can make terraform init exclude those folders from being cloned?
Thanks.
The Git transfer protocols all work by transferring batches of commits associated with a particular remote ref (branch or tag), so there is no way for a Git client to fetch only a subset of the directories or files in the selected commit.
Terraform can only use the Git protocol as it's already defined, and so it cannot provide any capabilities that the underlying protocol lacks.
If your concern is the amount of time taken to clone the entire repository, you may be able to optimize by excluding anything except the most recent commit rather than by ignoring files within that commit. You can do that by setting the depth argument to 1:
source = "git::https://xxx.xx.xx/scm/xxxx/xxxx-terraform-eks-ingress-module.git?ref=master&depth=1"
If even that isn't sufficient then I think your only further option would be to add a separate build and release step for your modules where you capture the subset of files that are relevant to the Terraform modules into a .zip or .tar.gz archive, publish that archive somewhere that Terraform can fetch it over HTTP, and then use fetching archives over HTTP as the source type. In this case Terraform will download only the contents of the archive, allowing you to curate exactly what's included. (It would also be equivalent to put the archive into one of the supported cloud storage services, such as Amazon S3.)
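For illustration, the module source for that archive approach might end up looking something like this (the URL is hypothetical; Terraform downloads and extracts only the archive itself):

module "standard_ingress" {
  # Hypothetical location of the published module archive
  source = "https://artifacts.example.com/terraform/eks-ingress-module.zip"
}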
I would like to know the specs for the Maven repository structure.
I know I could use Archiva or Nexus to create a repository; I am not interested in that information.
I have tried searching Apache's Maven website and Google with the phrase "maven artifact repository structure specs", and I mostly get the development directory structure expected on the Maven client.
I would like someone to explain the structure here, or point me to a single comprehensive document (i.e. one that does not explain the directory structure by making me follow a never-ending trail of links) that explains, for example:
How to create a simple Maven repository using a static file system directory, deployed to an Apache HTTP server,
or how to create a directory structure on Google Code that would appear as a Maven repository.
Here is the specification (long overdue for documenting in Maven itself):
https://cwiki.apache.org/confluence/display/MAVENOLD/Repository+Layout+-+Final
That said, I wouldn't refer to that for creating the repository in the way you've described. The best thing to do is to use mvn deploy:deploy-file with appropriate parameters to upload the files you want, as it can write the appropriate metadata and structure for you. This can be done against a filesystem location and then synced to wherever you want to host it.
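For example, a single artifact could be deployed into a plain directory like this (the path and coordinates are placeholders); the resulting tree is a valid repository layout that Apache httpd can then serve as static files:

mvn deploy:deploy-file \
  -Durl=file:///var/www/maven-repo \
  -DgroupId=com.example \
  -DartifactId=example-lib \
  -Dversion=1.0.0 \
  -Dpackaging=jar \
  -Dfile=example-lib-1.0.0.jar \
  -DgeneratePom=true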
Another alternative is to run a repository manager, like Archiva, and upload using the web interface until you are happy, then sync the directory structure to the location that you want to.
I have a repository directory under .m2 that I want to use as a remote repository (-Dmaven.repo.remote=http://remotehostname/repo) for other hosts. I tried exposing the directory .m2/repository/ under Apache as http://remotehostname/repo; the directory is fully visible via HTTP, but Maven doesn't seem to be reading from the exposed repo. For various reasons, I do not want to add this new remote repo to settings.xml; I want to limit it to -Dmaven.repo.remote.
What do I need to do to turn a local repo under .m2 into a remote repo?
Further to @Perception's answer, you can look at the Nexus Command Line tool, which can help convert your local repo to a Nexus repo.
I haven't heard of any Maven repository manager that can simply mirror your .m2 folder (which is just a local repository). You need to install and configure repository management software ... there are many free, open-source ones; I recommend Artifactory or Nexus.
Nexus does fit your requirements very closely, since it uses a file-based repository and the file layout matches the local repo layout.
I have two TeamCity configurations: one builds my common helpers and reusable components, and the other builds a website which uses the common project.
I use a third configuration to publish to a test environment.
When the third configuration is run, I would like it to get the artifacts from the common project, merge them with the website output, and deploy. Am I asking for too much?
This ought to be pretty straightforward.
On ThirdConfig add two artifact dependencies: one whose source is CommonProject, and another whose source is WebProject. When configuring an artifact dependency you can specify which artifact files are actually pulled from CommonProject and WebProject into ThirdConfig via the 'Artifact paths'. The artifact files can then be placed into some new folder hierarchy specific to ThirdConfig by using the 'Destination path'. These two options ought to be enough to create a directory structure that is the merging of CommonProject and WebProject. That takes care of the merge part.
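As a sketch, assuming CommonProject publishes its build output under bin and WebProject under site (both paths hypothetical), the two dependencies on ThirdConfig might be configured roughly like this:

CommonProject dependency:
  Artifact paths:    bin/**
  Destination path:  merged/common

WebProject dependency:
  Artifact paths:    site/**
  Destination path:  merged/site

ThirdConfig then starts its build with a merged folder in its working directory containing both sets of files.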
The deploy is a bit more tricky. To my knowledge TeamCity does not support any sort of 'copy or upload to external location' function out of the box. For this bit you'll need to create an MSBuild script (or batch file, or anything that can be run from the command line). The script can expect the file/directory structure you've created via artifact dependencies, where the root of the structure is the initial working directory of the script, and it need only push those files out to your deploy location. That 'push' is of course going to be specific to your environment: FTP, UNC share, etc.
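A minimal sketch of such a script, assuming the merged output from the artifact dependencies lands in a merged folder and the test environment is reachable as a UNC share (both assumptions), could be a plain batch file:

rem deploy.bat - sketch only; the share name and paths are placeholders
rem /E copies subdirectories (including empty ones), /I treats the target as a directory, /Y suppresses prompts
xcopy merged \\testserver\wwwroot\mysite\ /E /I /Y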
I accidentally removed a hosted snapshot repository from Nexus containing a few artifacts needed by other developers on my team. Fine, I'll be able to recreate it fairly easily, but when I tried to add the repository again with the same name as the one I removed, the "Upload Artifact" tab did not show up. I tried cleaning the cache and reindexing the public and public snapshot repos, but that didn't help. I also tried setting an alternative storage path by entering an alternative path under "Override Local Storage Location", with the same result.
Will I have to create a brand new repository with a different name and change all the repository references in my projects?
Thanks,
David
You should be able to create the new repo without any problem. It's possible you were inheriting permissions to this repo via a group, and when you made the new repo you didn't add it to the group.
Also, all delete operations in Nexus simply move files to sonatype-work/nexus/trash, so you could have just copied those files back directly on disk after recreating the repo.
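If the files are still in the trash, restoring them is just a copy on the Nexus host, followed by the rebuild metadata and reindex scheduled tasks on that repository. A sketch, assuming a default sonatype-work layout and a hypothetical repository name:

# Sketch only: 'my-snapshots' and the sonatype-work location are placeholders for your install
cp -r sonatype-work/nexus/trash/my-snapshots/. sonatype-work/nexus/storage/my-snapshots/
# Then run the rebuild metadata and reindex scheduled tasks against the repository in the Nexus UI.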
I think I noticed that snapshot repositories do not have the Artifact Upload tab, so possibly you created it as a snapshot repo?