Download artifacts archive from an Artifactory repository

I'm testing Artifactory 4.2.0 PRO.
The really nice feature is the possibility to download an archive of all files produced by the build by executing something like:
curl -XPOST -u admin:password -H "Content-Type: application/json" http://localhost:8081/artifactory/api/archive/buildArtifacts -d '
{
"buildName": "Vehicle Routing Problem",
"buildNumber": "3",
"archiveType": "zip"
}' > build.zip
However, I'm unable to find out whether it is possible to do the same (download an archive) when specifying exact properties using AQL. I have been trying to upload other artifacts with properties exactly the same as those pushed by the build, but they were not fetched by the snippet above (I assume some sort of metadata is stored somewhere).
What are the options for fetching multiple artifacts without issuing many HTTP requests?
Regards.

The Retrieve Folder or Repository Archive API allows you to download an archive file (zip/tar/tar.gz/tgz are supported) containing all the artifacts that reside under the specified path (a folder or the repository root). However, it does not support filtering by properties.
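For illustration, assuming a repository named libs-release-local and a folder path of my/folder (both are placeholders), a call along these lines downloads the folder as an archive; check the REST API documentation for your Artifactory version for the exact parameters:
curl -XGET -u admin:password "http://localhost:8081/artifactory/api/archive/download/libs-release-local/my/folder?archiveType=zip" > folder.zip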
The Artifactory CLI supports concurrently downloading multiple files. It also supports downloading files that match a set of property values. The CLI, however, will use multiple HTTP requests to do so.
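As a rough sketch (the repository name is a placeholder, and the exact flag names may differ between CLI versions), a property-filtered download looks something like:
jfrog rt download "libs-release-local/*" --props "build.name=Vehicle Routing Problem;build.number=3"
The CLI resolves the matching artifacts and then downloads each file with its own request, which is why multiple HTTP calls are still involved.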
A third option would be developing a custom user plugin which allows downloading an archive of artifacts matching a set of properties. An execution user plugin can be invoked via a REST API call. There is a sample plugin in the JFrogDev GitHub account which can serve as a good starting point: it allows downloading the content of a directory as an archive.
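Once deployed, such an execution plugin can be invoked with a single REST call. The plugin name (downloadArchive) and its parameter below are purely hypothetical and depend on how the plugin is implemented:
curl -XGET -u admin:password "http://localhost:8081/artifactory/api/plugins/execute/downloadArchive?params=path=my-repo/my/folder" > artifacts.zip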

Related

terraform module from git repo - how to exclude directories from being cloned by terraform init?

We have a Terraform module developed and kept inside a repo, and people access it by putting the following in their main.tf:
module "standard_ingress" {
source = "git::https://xxx.xx.xx/scm/xxxx/xxxx-terraform-eks-ingress-module.git?ref=master"
When they run terraform init, the whole repo is cloned into a folder (~/.terraform/modules/standard_ingress).
We also have some non-module (non-Terraform) folders in the same repo and branch.
Is there a way to make terraform init exclude those folders from being cloned?
Thanks.
The Git transfer protocols all work by transferring batches of commits associated with a particular remote ref (branch or tag), so there is no way for a Git client to fetch only a subset of the directories or files in the selected commit.
Terraform can only use the Git protocol as it's already defined, and so it cannot provide any capabilities that the underlying protocol lacks.
If your concern is the amount of time taken to clone the entire repository, you may be able to optimize by excluding anything except the most recent commit rather than by ignoring files within that commit. You can do that by setting the depth argument to 1:
source = "git::https://xxx.xx.xx/scm/xxxx/xxxx-terraform-eks-ingress-module.git?ref=master&depth=1"
If even that isn't sufficient then I think your only further option would be to add a separate build and release step for your modules where you capture the subset of files that are relevant to the Terraform modules into a .zip or .tar.gz archive, publish that archive somewhere that Terraform can fetch it over HTTP, and then use fetching archives over HTTP as the source type. In this case Terraform will download only the contents of the archive, allowing you to curate exactly what's included. (It would also be equivalent to put the archive into one of the supported cloud storage services, such as Amazon S3.)
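A minimal sketch of that approach, with a placeholder URL for wherever the archive gets published, would be:
module "standard_ingress" {
  source = "https://artifacts.example.com/terraform-modules/eks-ingress-module-1.0.0.zip"
}
Because the URL ends with a recognized archive extension, Terraform downloads and extracts just that archive instead of cloning the repository.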

Are there any CDNs that can connect to custom Google NPM Artifact Registries?

I have a public custom repo and I can access the tar file using curl -L -v "https://us-central1-npm.pkg.dev/my-project/npm-public/@my-scope/bucky-barnes/-/@pure-pm/bucky-barnes-0.0.1.tgz". The problem is I need to host the contents of the tgz, similar to what unpkg does (unpkg.com/react@16.7.0/umd/react.production.min.js). I know I could create a server that streams the tar, decompresses it and caches it, but that seems like it should be a solved problem.
Are there any CDNs that support custom registries like Google's Artifact Registry?
I see ones like https://cdn.jsdelivr.net that seem to support GitHub packages, but I can't find any that support Google.

Are phantomjs binary artifacts hosted on some repository?

I would like to download phantomjs binary within a Gradle build script.
Where are they hosted?
There is an official download repository for PhantomJS: https://bitbucket.org/ariya/phantomjs/downloads/
No.
They are only hosted on Bitbucket: https://bitbucket.org/ariya/phantomjs/downloads/
To use them with Gradle, the only option I have found to work so far is the gradle-download-task plugin. To implement caching, you can copy-paste this code.
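A minimal sketch of that setup, assuming the de.undercouch.download plugin and an example PhantomJS version (both the plugin version and the download URL are illustrative), might look like this in build.gradle:
plugins {
    id 'de.undercouch.download' version '4.1.2'
}

task downloadPhantomJs(type: de.undercouch.gradle.tasks.download.Download) {
    src 'https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.1.1-linux-x86_64.tar.bz2'
    dest new File(buildDir, 'phantomjs-2.1.1-linux-x86_64.tar.bz2')
    // Simple caching: skip the download if the file is already present.
    overwrite false
}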
Otherwise, another potential option is to declare a custom Ivy dependency, but so far I haven't found a way to get it to work. The issue is that Bitbucket redirects to an Amazon S3 location, and the HEAD request that Gradle first issues to get the file size fails, probably because of a request signature problem.

Show HTML artifacts in bamboo without downloading

I've successfully created a small demo HTML report of test results from a build. Simply put, I'm doing numerical computations, and would like to give more detailed information on test results than a binary pass/fail. The HTML report consists of multiple HTML files with relative links between them.
However, linking from one file to another sometimes leads to the file being opened in the browser, and sometimes a "download file" dialog opens. Any idea what the rules are, so I can look at the whole report in the browser without resorting to downloading a zip file of the whole report, unzipping it, and so on?
Just a quick note here, if anyone should need it - as this was where I ended up in my search.
After upgrading our Bamboo to 6.8.1 build 60805 our code coverage artifacts started downloading, instead of being displayed inline.
This can be fixed by enabling the "Allow artifacts to be embedded in Bamboo pages" option under Security and permissions.
Be aware of the note about cross-site scripting vulnerabilities if you enable it.
On our project we use this simple solution:
1. In the stage, configure the final task script to copy the reports to some folder:
echo "Copy artifact report"
rm -rf ../artifacts
mkdir ../artifacts
cp -r functionalTests/build/html/behat/* ../artifacts/
2. On the Artifacts tab, edit the artifact definition and set the Copy pattern to artifacts/**
Then, when you navigate to the build artifact, the folder with the reports will open in the browser.
To have an embedded HTML page in Bamboo showing the coverage results, this page partially helped me make Bamboo cooperate with Python coverage:
Troubleshooting
The Clover tab shows the directory listing instead of the HTML report.
Please check which artifact handler you use. The Amazon S3 Artifact Handler serves files on a one-by-one basis, instead of exposing all files as a static website. To change this, open Configure plan and on the Miscellaneous tab select the Use custom artifact handler settings check-box. Then select Server-Local Artifact Handler for shared and non-shared artifacts and finally re-run the build.
In my setup, though, the "Server-Local Artifact Handler" failed completely, but choosing "Bamboo remote handler" did the job.

Using secret api keys on travis-ci

I'd like to use travis-ci for one of my projects.
The project is an API wrapper, so many of the tests rely on the use of secret API keys. To test locally, I just store them as environment variables. What's a safe way to use those keys on Travis?
Travis has a feature to encrypt environment variables ("Encrypting environment variables"). This can be used to protect your secret API keys. I've successfully used this for my Heroku API key.
All you have to do is install the travis gem, encrypt the string you want, and add the encrypted string to your .travis.yml. The encryption is only valid for one repository. The travis command fetches your repository's public key to perform the encryption, and Travis decrypts the string with the matching private key during the build.
gem install --user-install travis
travis encrypt MY_SECRET_ENV=super_secret -r my_username/my_repo
This gives you the following output:
Please add the following to your .travis.yml file:
secure: "OrEeqU0z6GJdC6Sx/XI7AMiQ8NM9GwPpZkVDq6cBHcD6OlSppkSwm6JvopTR\newLDTdtbk/dxKurUzwTeRbplIEe9DiyVDCzEiJGfgfq7woh+GRo+q6+UIWLE\n3nowpI9AzXt7iBhoKhV9lJ1MROrnn4DnlKxAEUlHTDi4Wk8Ei/g="
According to the Travis CI documentation:
If you have both the Heroku and Travis CI command line clients installed, you can get your key, encrypt it and add it to your .travis.yml by running the following command from your project directory:
travis encrypt $(heroku auth:token) --add deploy.api_key
Refer to the following tutorial to install the Heroku client for your OS.
You can also define secret variables in repository settings:
Variables defined in repository settings are the same for all builds, and when you restart an old build, it uses the latest values. These variables are not automatically available to forks.
Define variables in the Repository Settings that:
differ per repository.
contain sensitive data, such as third-party credentials.
To define variables in Repository Settings, make sure you’re logged in, navigate to the repository in question, choose “Settings” from the cog menu, and click on “Add new variable” in the “Environment Variables” section.
Use a different set of API keys and do it the same way. Your Travis box gets set up for your build run and then completely torn down again after your build has finished. You have root access to the box during the build, so you can do whatever you want with it.