Nextflow Channel create with condition - nextflow

[Kasumi_H3K36, Kasumi_IgG, /mnt/Data/cut_and_tag/work/d0/3db2bde9eb1bdb0578073fb128bc4c/Kasumi_H3K36.no0.bedgraph]
[Kasumi_JMJD1C, Kasumi_IgG, /mnt/Data/cut_and_tag/work/b1/dffe2120acda5b05860e1a3bb0c1bf/Kasumi_JMJD1C.no0.bedgraph]
[Kasumi_NCOR1, Kasumi_IgG, /mnt/Data/cut_and_tag/work/9f/7c3680a1ff0ae0a5a27f42e1a27225/Kasumi_NCOR1.no0.bedgraph]
[Kasumi_IgG, Kasumi_IgG, /mnt/Data/cut_and_tag/work/21/1038cd4ecbc5b3f88da23ad1ee3147/Kasumi_IgG.no0.bedgraph]
[Kasumi_H4K5, Kasumi_IgG, /mnt/Data/cut_and_tag/work/3d/7b5239ab9dc83b00f992fea8926630/Kasumi_H4K5.no0.bedgraph]
This is the view of one of my channels. I am trying to make a new control channel from the entries where the first and second IDs are the same, and put the rest into a sample channel.

Here's one way using the branch operator. I made some assumptions about what the fields should be named, but hopefully the pattern is close to what you're looking for:
nextflow.enable.dsl=2
Channel
.of(
['Kasumi_H3K36', 'Kasumi_IgG', file('/mnt/Data/cut_and_tag/work/d0/3db2bde9eb1bdb0578073fb128bc4c/Kasumi_H3K36.no0.bedgraph')],
['Kasumi_JMJD1C', 'Kasumi_IgG', file('/mnt/Data/cut_and_tag/work/b1/dffe2120acda5b05860e1a3bb0c1bf/Kasumi_JMJD1C.no0.bedgraph')],
['Kasumi_NCOR1', 'Kasumi_IgG', file('/mnt/Data/cut_and_tag/work/9f/7c3680a1ff0ae0a5a27f42e1a27225/Kasumi_NCOR1.no0.bedgraph')],
['Kasumi_IgG', 'Kasumi_IgG', file('/mnt/Data/cut_and_tag/work/21/1038cd4ecbc5b3f88da23ad1ee3147/Kasumi_IgG.no0.bedgraph')],
['Kasumi_H4K5', 'Kasumi_IgG', file('/mnt/Data/cut_and_tag/work/3d/7b5239ab9dc83b00f992fea8926630/Kasumi_H4K5.no0.bedgraph')],
) \
.branch { sample1, sample2, bedgraph ->
controls: sample1 == sample2
return tuple( sample1, sample2, bedgraph )
others: true
return tuple( sample1, sample2, bedgraph )
} \
.set { inputs }
inputs.controls.view { "controls: $it" }
inputs.others.view { "others: $it" }
Results:
others: [Kasumi_H3K36, Kasumi_IgG, /mnt/Data/cut_and_tag/work/d0/3db2bde9eb1bdb0578073fb128bc4c/Kasumi_H3K36.no0.bedgraph]
controls: [Kasumi_IgG, Kasumi_IgG, /mnt/Data/cut_and_tag/work/21/1038cd4ecbc5b3f88da23ad1ee3147/Kasumi_IgG.no0.bedgraph]
others: [Kasumi_JMJD1C, Kasumi_IgG, /mnt/Data/cut_and_tag/work/b1/dffe2120acda5b05860e1a3bb0c1bf/Kasumi_JMJD1C.no0.bedgraph]
others: [Kasumi_NCOR1, Kasumi_IgG, /mnt/Data/cut_and_tag/work/9f/7c3680a1ff0ae0a5a27f42e1a27225/Kasumi_NCOR1.no0.bedgraph]
others: [Kasumi_H4K5, Kasumi_IgG, /mnt/Data/cut_and_tag/work/3d/7b5239ab9dc83b00f992fea8926630/Kasumi_H4K5.no0.bedgraph]
Update from comments:
Channel
.of(
['Kasumi_H3K36', 'Kasumi_IgG', file('/path/to/Kasumi_H3K36.no0.bedgraph')],
['Kasumi_JMJD1C', 'Kasumi_IgG', file('/path/to/Kasumi_JMJD1C.no0.bedgraph')],
['Kasumi_NCOR1', 'Kasumi_IgG', file('/path/to/Kasumi_NCOR1.no0.bedgraph')],
['Kasumi_IgG', 'Kasumi_IgG', file('/path/to/Kasumi_IgG.no0.bedgraph')],
['Kasumi_H4K5', 'Kasumi_IgG', file('/path/to/Kasumi_H4K5.no0.bedgraph')],
['NB4_H3K36', 'NB4_IgG', file('/path/to/NB4_H3K36.no0.bedgraph')],
['NB4_JMJD1C', 'NB4_IgG', file('/path/to/NB4_JMJD1C.no0.bedgraph')],
['NB4_NCOR1', 'NB4_IgG', file('/path/to/NB4_NCOR1.no0.bedgraph')],
['NB4_IgG', 'NB4_IgG', file('/path/to/NB4_IgG.no0.bedgraph')],
['NB4_H4K5', 'NB4_IgG', file('/path/to/NB4_H4K5.no0.bedgraph')],
) \
.branch { test_sample, control_sample, bedgraph ->
control_samples: test_sample == control_sample
return tuple( control_sample, tuple( test_sample, bedgraph ) )
test_samples: true
return tuple( control_sample, tuple( test_sample, bedgraph ) )
} \
.set { inputs }
inputs.test_samples
.combine( inputs.control_samples, by: 0 ) \
.map { group, test_tuple, control_tuple ->
tuple( *test_tuple, *control_tuple )
} \
.view()
Results:
[Kasumi_H3K36, /path/to/Kasumi_H3K36.no0.bedgraph, Kasumi_IgG, /path/to/Kasumi_IgG.no0.bedgraph]
[Kasumi_JMJD1C, /path/to/Kasumi_JMJD1C.no0.bedgraph, Kasumi_IgG, /path/to/Kasumi_IgG.no0.bedgraph]
[Kasumi_NCOR1, /path/to/Kasumi_NCOR1.no0.bedgraph, Kasumi_IgG, /path/to/Kasumi_IgG.no0.bedgraph]
[Kasumi_H4K5, /path/to/Kasumi_H4K5.no0.bedgraph, Kasumi_IgG, /path/to/Kasumi_IgG.no0.bedgraph]
[NB4_H3K36, /path/to/NB4_H3K36.no0.bedgraph, NB4_IgG, /path/to/NB4_IgG.no0.bedgraph]
[NB4_JMJD1C, /path/to/NB4_JMJD1C.no0.bedgraph, NB4_IgG, /path/to/NB4_IgG.no0.bedgraph]
[NB4_NCOR1, /path/to/NB4_NCOR1.no0.bedgraph, NB4_IgG, /path/to/NB4_IgG.no0.bedgraph]
[NB4_H4K5, /path/to/NB4_H4K5.no0.bedgraph, NB4_IgG, /path/to/NB4_IgG.no0.bedgraph]
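If it helps, here is a minimal sketch of a downstream DSL2 process that could consume the combined tuples above. This is my assumption about the next step, not part of the question; the process name and the command in the script block are placeholders:
process compare_to_control {

    input:
    tuple val(test_sample), path(test_bedgraph), val(control_sample), path(control_bedgraph)

    output:
    path "${test_sample}_vs_${control_sample}.txt"

    script:
    """
    # placeholder command; substitute the real comparison step here
    echo "comparing ${test_bedgraph} against ${control_bedgraph}" > "${test_sample}_vs_${control_sample}.txt"
    """
}
In a workflow block you would then pass it the channel produced by the map above, e.g. compare_to_control( samples_with_controls ), where samples_with_controls is whatever name you .set the mapped channel to.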

Related

Why is this duckmap blocking?

I'm trying to list all files in a directory with this function:
sub list-directory($dir = '.') {
my @todo = $dir.IO.dir;
@todo = @todo.duckmap( -> $_ where $_.d { @todo.push($_.IO.dir); } );
@todo = @todo.duckmap( -> $_ where IO {.Str} );
return @todo;
}
The first duckmap is to list all subdirectories and the second one (this doesn't finish) is to convert the IO objects to Str.
Does anyone know why the second one isn't stopping?
As Hakon has said, it was an infinite loop. Here is the fixed code:
sub list-directory($dir = '.') {
my @todo = $dir.IO.dir;
@todo = @todo.duckmap( -> $_ where $_.d { @todo.push($_.IO.dir); $_; } );
grep { !.IO.d }, @todo.List.flat;
@todo.map({.Str});
}

Recursive generator - manual zip vs operator

Here's exercise 5.F.2 from 'A Book of Abstract Algebra' by Charles C Pinter:
Let G be the group {e, a, b, b^2, b^3, ab, ab^2, ab^3} whose
generators satisfy a^2 = e, b^4 = e, ba = ab^3. Write the table
of G. (G is called the dihedral group D4.)
Here's a little Perl 6 program which presents a solution:
sub generate(%eqs, $s)
{
my @results = ();
for %eqs.kv -> $key, $val {
if $s ~~ /$key/ { @results.push($s.subst(/$key/, $val)); }
if $s ~~ /$val/ { @results.push($s.subst(/$val/, $key)); }
}
for @results -> $result { take $result; }
my @arrs = @results.map({ gather generate(%eqs, $_) });
my $i = 0;
while (1)
{
for @arrs -> @arr { take @arr[$i]; }
$i++;
}
}
sub table(@G, %eqs)
{
printf " |"; for @G -> $y { printf "%-5s|", $y; }; say '';
printf "-----|"; for @G -> $y { printf "-----|"; }; say '';
for @G -> $x {
printf "%-5s|", $x;
for @G -> $y {
my $result = (gather generate(%eqs, "$x$y")).first(* (elem) @G);
printf "%-5s|", $result;
}
say ''
}
}
# ----------------------------------------------------------------------
# Pinter 5.F.2
my @G = <e a b bb bbb ab abb abbb>;
my %eqs = <aa e bbbb e ba abbb>; %eqs<e> = '';
table #G, %eqs;
Here's what the resulting table looks like:
Let's focus on these particular lines from generate:
my @arrs = @results.map({ gather generate(%eqs, $_) });
my $i = 0;
while (1)
{
for @arrs -> @arr { take @arr[$i]; }
$i++;
}
A recursive call to generate is made for each of the items in @results. Then we're effectively performing a manual 'zip' on the resulting sequences. However, Perl 6 has zip and the Z operator.
Instead of the above lines, I'd like to do something like this:
for ([Z] @results.map({ gather generate(%eqs, $_) })).flat -> $elt { take $elt; }
So here's the full generate using Z:
sub generate(%eqs, $s)
{
my @results = ();
for %eqs.kv -> $key, $val {
if $s ~~ /$key/ { @results.push($s.subst(/$key/, $val)); }
if $s ~~ /$val/ { @results.push($s.subst(/$val/, $key)); }
}
for @results -> $result { take $result; }
for ([Z] @results.map({ gather generate(%eqs, $_) })).flat -> $elt { take $elt; }
}
The issue with the Z version of generate is that it hangs...
So, my question is, is there a way to write generate in terms of Z?
Besides this core question, feel free to share alternative solutions to the exercise which explore and showcase Perl 6.
As another example, here's exercise 5.F.3 from the same book:
Let G be the group {e, a, b, b^2, b^3, ab, ab^2, ab^3} whose
generators satisfy a^4 = e, a^2 = b^2, ba = ab^3. Write the
table of G. (G is called the quaternion group.)
And the program above displaying the table:
As an aside, this program was converted from a version in C#. Here's how generate looks there using LINQ and a version of ZipMany courtesy of Eric Lippert.
static IEnumerable<string> generate(Dictionary<string,string> eqs, string s)
{
var results = new List<string>();
foreach (var elt in eqs)
{
if (new Regex(elt.Key).IsMatch(s))
results.Add(new Regex(elt.Key).Replace(s, elt.Value, 1));
if (new Regex(elt.Value).IsMatch(s))
results.Add(new Regex(elt.Value).Replace(s, elt.Key, 1));
}
foreach (var result in results) yield return result;
foreach (var elt in ZipMany(results.Select(elt => generate(eqs, elt)), elts => elts).SelectMany(elts => elts))
yield return elt;
}
The entire C# program: link.
[2022 update by @raiph. I just tested the first block of code in a recent Rakudo. The fourth example returned one result, 'abc', rather than none. This may be due to a new Raku design decision / roast improvement / trap introduced since this answer was last edited (in 2017), or a Rakudo bug. I'm not going to investigate; I just wanted to let readers know.]
Why your use of zip doesn't work
Your code assumes that [Z] ("reducing with the zip operator") can be used to get the transpose of a list-of-lists.
Unfortunately, this doesn't work in the general case.
It 'usually' works, but breaks on one edge case: Namely, when the list-of-lists is a list of exactly one list. Observe:
my @a = <a b c>, <1 2 3>, <X Y Z>; put [Z~] @a; # a1X b2Y c3Z
my @a = <a b c>, <1 2 3>; put [Z~] @a; # a1 b2 c3
my @a = <a b c>,; put [Z~] @a; # abc
my @a; put [Z~] @a; # abc <-- 2022 update
In the first two examples (3 and 2 sub-lists), you can see that the transpose of @a was returned just fine. The fourth example (0 sub-lists) does the right thing as well.
But the third example (1 sub-list) didn't print a b c as one would expect, i.e. it didn't return the transpose of @a in that case, but rather (it seems) the transpose of @a[0].
Sadly, this is not a Rakudo bug (in which case it could simply be fixed), but an unforeseen interaction of two Perl 6 design decisions, namely:
The reduce meta-operator [ ] handles an input list with a single element by calling the operator it's applied to with one argument (said element).
In case you're wondering, an infix operator can be called with only one argument by invoking its function object: &infix:<Z>( <a b c>, ).
The zip operator Z and function zip (like other built-ins that accept nested lists), follows the so-called "single-argument rule" – i.e. its signature uses a single-argument slurpy parameter. This means that when it is called with a single argument, it will descend into it and consider its elements the actual arguments to use. (See also Slurpy conventions.)
So zip(<a b c>,) is treated as zip("a", "b", "c").
Both features provide some nice convenience in many other cases, but in this case their interaction regrettably poses a trap.
How to make it work with zip
You could check the number of elements of @arrs, and special-case the "exactly 1 sub-list" case:
my @arrs = @results.map({ gather generate(%eqs, $_) });
if @arrs.elems == 1 {
.take for @arrs[0][];
}
else {
.take for flat [Z] @arrs
}
The [] is a "zen slice" - it returns the list unchanged, but without the item container that the parent Array wrapped it in. This is needed because the for loop would consider anything wrapped in an item container as a single item and only do one iteration.
Of course, this if-else solution is not very elegant, which probably negates your reason for trying to use zip in the first place.
How to write the code more elegantly without zip
Refer to Christoph's answer.
It might be possible with a Z, but for my poor little brain, zipping recursively generated lazy lists is too much.
Instead, I did some other simplifications:
sub generate($s, %eqs) {
take $s;
# the given equations normalize the string, ie there's no need to apply
# the inverse relation
for %eqs.kv -> $k, $v {
# make copy of $s so we can use s/// instead of .subst
my $t = $s;
generate $t, %eqs
if $t ~~ s/$k/$v/;
}
}
sub table(@G, %eqs) {
# compute the set only once instead of implicitly on each call to (elem)
my $G = set #G;
# some code golfing
put ['', |@G]>>.fmt('%-5s|').join;
put '-----|' x @G + 1;
for @G -> $x {
printf '%-5s|', $x;
for @G -> $y {
printf '%-5s|', (gather generate("$x$y", %eqs)).first(* (elem) $G);
}
put '';
}
}
my @G = <e a b bb bbb ab abb abbb>;
# use double brackets so we can have empty strings
my %eqs = <<aa e bbbb e ba abbb e ''>>;
table #G, %eqs;
Here is a compact rewrite of generate that does bidirectional substitution, still without an explicit zip:
sub generate($s, %eqs) {
my @results = do for |%eqs.pairs, |%eqs.antipairs -> (:$key, :$value) {
take $s.subst($key, $value) if $s ~~ /$key/;
}
my @seqs = @results.map: { gather generate($_, %eqs) }
for 0..* -> $i { take .[$i] for @seqs }
}
Here's a version of generate that uses the approach demonstrated by smls:
sub generate(%eqs, $s)
{
my @results = ();
for %eqs.kv -> $key, $val {
if $s ~~ /$key/ { @results.push($s.subst(/$key/, $val)); }
if $s ~~ /$val/ { @results.push($s.subst(/$val/, $key)); }
}
for @results -> $result { take $result; }
my @arrs = @results.map({ gather generate(%eqs, $_) });
if @arrs.elems == 1 { .take for @arrs[0][]; }
else { .take for flat [Z] @arrs; }
}
I've tested it and it works on exercises 2 and 3.
As smls mentions in his answer, zip doesn't do what we were expecting when the given array of arrays only contains a single array. So, let's make a version of zip which does work with one or more arrays:
sub zip-many (@arrs)
{
if @arrs.elems == 1 { .take for @arrs[0][]; }
else { .take for flat [Z] @arrs; }
}
And now, generate in terms of zip-many:
sub generate(%eqs, $s)
{
my @results = ();
for %eqs.kv -> $key, $val {
if $s ~~ /$key/ { @results.push($s.subst(/$key/, $val)); }
if $s ~~ /$val/ { @results.push($s.subst(/$val/, $key)); }
}
for @results -> $result { take $result; }
zip-many @results.map({ gather generate(%eqs, $_) });
}
That looks pretty good.
Thanks smls!
smls suggests in a comment below that zip-many not invoke take, leaving that to generate. Let's also move flat from zip-many to generate.
The slimmed down zip-many:
sub zip-many (@arrs) { @arrs == 1 ?? @arrs[0][] !! [Z] @arrs }
And the generate to go along with it:
sub generate(%eqs, $s)
{
my @results;
for %eqs.kv -> $key, $val {
if $s ~~ /$key/ { @results.push($s.subst(/$key/, $val)); }
if $s ~~ /$val/ { @results.push($s.subst(/$val/, $key)); }
}
.take for @results;
.take for flat zip-many @results.map({ gather generate(%eqs, $_) });
}
Testing the keys and values separately seems a bit silly; your strings aren't really regexes, so there's no need for // anywhere in your code.
sub generate($s, @eqs) {
my @results = do for @eqs.kv -> $i, $equation {
take $s.subst($equation, @eqs[ $i +^ 1 ]) if $s.index: $equation
}
my @seqs = @results.map: { gather generate($_, @eqs) }
for 0..* -> $i { take .[$i] for @seqs }
}
Obviously with this version of generate you'll have to rewrite table to use @eqs instead of %eqs.

Find Index of all values of an array in another array and collect the value in that index from a third Array

I would like to find the indexes of matches from the descendentList in the parentIdList, then add the value at each of those indexes in the idList to the descendentList, and then once again check the parentIdList for the indexes of all the matching values.
I am essentially trying to create a looping structure whose result would look like this:
This seems to work, but only if you can allow descendentList to be a Set. If not, I am not sure what the terminating condition would be; it would just keep adding the values at the same indexes over and over. I think a Set is appropriate considering what you said in your comment above: "I would like to loop through this until no more matches are added to descendentList"
Set descendentList = [2]
def parentIdList = [0,1,2,3,2]
def idList = [1,2,3,4,5]
/**
* First: I would like to find the index of matches from the descendentList in the
* parentIdList
*/
def findIndexMatches(Set descendentList, List parentIdList, List idList) {
List indexes = []
def size = descendentList.size()
descendentList.each { descendent ->
indexes.addAll(parentIdList.findIndexValues { it == descendent })
}
addExistingValuesToFromIdListToDecendentList(descendentList, idList, indexes)
// Then once again check the parentIdList for the index of all the matching values.
if(size != descendentList.size()) { // new values were added to descendentList, so check again
findIndexMatches(descendentList, parentIdList, idList)
}
}
/**
* and then add the value which exists in that index from the
* idList to the descendentList
*/
def addExistingValuesToFromIdListToDecendentList(Set descendentList, List idList, List indexes) {
indexes.each {
descendentList << idList[it as int]
}
}
findIndexMatches(descendentList, parentIdList, idList)
println descendentList // outputs [2,3,4,5]
Something like the following seems to work. I haven't written any tests though, so it may fail with different use cases; it's just a simple, idiomatic recursive solution.
def descendentList = [2]
def parentIdList = [0,1,2,3,2]
def idList = [1,2,3,4,5]
def solve( List descendentList, List parentIdList, List idList ){
List matchedIds = descendentList.inject( [] ){ result, desc ->
result + idList[ parentIdList.findIndexValues{ it == desc } ]
}
if ( matchedIds ){
descendentList + solve( matchedIds, parentIdList, idList )
} else {
descendentList
}
}
println solve( descendentList, parentIdList, idList )
You can also do this without recursion, using an iterator:
class DescendantIterator<T> implements Iterator<T> {
private final List<T> parents
private List<T> output
private final List<T> lookup
private List<T> next
DescendantIterator(List<T> output, List<T> parents, List<T> lookup) {
this.output = output
this.parents = parents
this.lookup = lookup
}
boolean hasNext() { output }
Integer next() {
def ret = output.head()
parents.findIndexValues { it == ret }.with { v ->
if(v) { output += lookup[v] }
}
output = output.drop(1)
ret
}
void remove() {}
}
def descendentList = [2]
def parentIdList = [0,1,2,3,2]
def idList = [1,2,3,4,5]
def values = new DescendantIterator<Integer>(descendentList, parentIdList, idList).collect()
After this, values == [2, 3, 5, 4]
First, build a map from each parent id to all of its child ids. Next, find the results for the input and keep iterating over the newly found results until no new ones turn up.
def parentIdList = [0,1,2,3,2]
def idList = [1,2,3,4,5]
tree = [parentIdList, idList].transpose().groupBy{it[0]}.collectEntries{ [it.key, it.value*.get(1)] }
def childs(l) {
l.collect{ tree.get(it) }.findAll().flatten().toSet()
}
def descendants(descendentList) {
def newresults = childs(descendentList)
def results = [].toSet() + descendentList
while (newresults.size()) {
results.addAll(newresults)
newresults = childs(newresults) - results
}
return results
}
assert descendants([2]) == [2,3,4,5].toSet()
assert descendants([2,1]) == [1,2,3,4,5].toSet()
assert descendants([3]) == [3,4].toSet()
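To make the intermediate lookup concrete, here is what tree works out to for the sample data above (traced by hand, so treat it as illustrative only):
def parentIdList = [0, 1, 2, 3, 2]
def idList = [1, 2, 3, 4, 5]
// transpose() pairs each parent id with the corresponding id; groupBy/collectEntries
// then turn those pairs into a parent -> [child ids] lookup
def tree = [parentIdList, idList].transpose().groupBy { it[0] }.collectEntries { [it.key, it.value*.get(1)] }
assert tree == [0: [1], 1: [2], 2: [3, 5], 3: [4]]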

Delete Artifactory build artifacts using REST API

I have the following build artifacts on the Artifactory server.
http://artifactory.company.com:8081/artifactory/libs-snapshot-local/com/mycompany/projectA/service_y/2.75.0.1/service_y-2.75.0.1.jar
http://artifactory.company.com:8081/artifactory/libs-snapshot-local/com/mycompany/projectA/service_y/2.75.0.2/service_y-2.75.0.2.jar
http://artifactory.company.com:8081/artifactory/libs-snapshot-local/com/mycompany/projectA/service_y/2.75.0.3/service_y-2.75.0.3.jar
http://artifactory.company.com:8081/artifactory/libs-snapshot-local/com/mycompany/projectA/service_y/2.75.0.4/service_y-2.75.0.4.jar
Questions:
I want a Groovy script to delete the above artifacts except 2.75.0.3.jar (the script should use the Artifactory REST API). Does anyone have a sample script to do that, or at least one that deletes all the .jars in this case?
How can I use the following call within a Groovy script?
For example, using the following line in Groovy:
DELETE /api/build/{buildName}[?buildNumbers=n1[,n2]][&artifacts=0/1][&deleteAll=0/1]
or
curl -X POST -v -u admin:password "http://artifactory.company.com:8081/artifactory/api/build/service_y?buildNumbers=129,130,131&artifacts=1&deleteAll=1"
Using the above curl command (in a PuTTY session on the same Linux server where Artifactory is installed) didn't work; it gave an error:
* About to connect() to sagrdev3sb12 port 8081
* Trying 10.123.321.123... Connection refused
* couldn't connect to host
* Closing connection #0
curl: (7) couldn't connect to host
http://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API#ArtifactoryRESTAPI-DeleteBuilds
or
http://www.jfrog.com/confluence/display/RTF/Artifactory+REST+API#ArtifactoryRESTAPI-DeleteItem
The above links show their usage samples and outputs, but they confuse me.
The following link might be the answer if we can tweak this script to retain one build and delete all other builds for "projectA" (group id), "service_y" (artifact id), and for release "2.75.0.x".
https://github.com/jettro/small-scripts/blob/master/groovy/artifactory/Artifactory.groovy
I might need to use either restClient or httpBuilder within Groovy (as mentioned in the above example link and the following link).
Using Artifactory's REST API to deploy jar file
Final answer: this Scriptler/Groovy script deletes builds from BOTH Jenkins (using Groovy's it.delete()) and Artifactory (using an Artifactory REST API call).
Scriptler Catalog link: http://scriptlerweb.appspot.com/script/show/103001
Enjoy!
/*** BEGIN META {
"name" : "Bulk Delete Builds except the given build number",
"comment" : "For a given job and a given build numnber, delete all builds of a given release version (M.m.interim) only and except the user provided one. Sometimes a Jenkins job use Build Name setter plugin and same job generates 2.75.0.1 and 2.76.0.43",
"parameters" : [ 'jobName', 'releaseVersion', 'buildNumber' ],
"core": "1.409",
"authors" : [
{ name : "Arun Sangal - Maddys Version" }
]
} END META **/
import groovy.json.*
import jenkins.model.*;
import hudson.model.Fingerprint.RangeSet;
import hudson.model.Job;
import hudson.model.Fingerprint;
//these should be passed in as arguments to the script
if(!artifactoryURL) throw new Exception("artifactoryURL not provided")
if(!artifactoryUser) throw new Exception("artifactoryUser not provided")
if(!artifactoryPassword) throw new Exception("artifactoryPassword not provided")
def authString = "${artifactoryUser}:${artifactoryPassword}".getBytes().encodeBase64().toString()
def artifactorySettings = [artifactoryURL: artifactoryURL, authString: authString]
if(!jobName) throw new Exception("jobName not provided")
if(!buildNumber) throw new Exception("buildNumber not provided")
def lastBuildNumber = buildNumber.toInteger() - 1;
def nextBuildNumber = buildNumber.toInteger() + 1;
def jij = jenkins.model.Jenkins.instance.getItem(jobName);
def promotedBuildRange = new Fingerprint.RangeSet()
promotedBuildRange.add(buildNumber.toInteger())
def promoteBuildsList = jij.getBuilds(promotedBuildRange)
assert promoteBuildsList.size() == 1
def promotedBuild = promoteBuildsList[0]
// The release / version of a Jenkins job - i.e. in case you use "Build name" setter plugin in Jenkins for getting builds like 2.75.0.1, 2.75.0.2, .. , 2.75.0.15 etc.
// and over the time, change the release/version value (2.75.0) to a newer value i.e. 2.75.1 or 2.76.0 and start builds of this new release/version from #1 onwards.
def releaseVersion = promotedBuild.getDisplayName().split("\\.")[0..2].join(".")
println ""
println("- Jenkins Job_Name: ${jobName} -- Version: ${releaseVersion} -- Keep Build Number: ${buildNumber}");
println ""
/** delete the indicated build and its artifacts from artifactory */
def deleteBuildFromArtifactory(String jobName, int deleteBuildNumber, Map<String, String> artifactorySettings){
println " ## Deleting >>>>>>>>>: - ${jobName}:${deleteBuildNumber} from artifactory"
def artifactSearchUri = "api/build/${jobName}?buildNumbers=${deleteBuildNumber}&artifacts=1"
def conn = "${artifactorySettings['artifactoryURL']}/${artifactSearchUri}".toURL().openConnection()
conn.setRequestProperty("Authorization", "Basic " + artifactorySettings['authString']);
conn.setRequestMethod("DELETE")
if( conn.responseCode != 200 ) {
println "Failed to delete the build artifacts from artifactory for ${jobName}/${deleteBuildNumber}: ${conn.responseCode} - ${conn.responseMessage}"
}
}
/** delete all builds in the indicated range that match the releaseVersion */
def deleteBuildsInRange(String buildRange, String releaseVersion, Job theJob, Map<String, String> artifactorySettings){
def range = RangeSet.fromString(buildRange, true);
theJob.getBuilds(range).each {
if ( it.getDisplayName().find(/${releaseVersion}.*/)) {
println " ## Deleting >>>>>>>>>: " + it.getDisplayName();
deleteBuildFromArtifactory(theJob.name, it.number, artifactorySettings)
it.delete();
}
}
}
//delete all the matching builds before the promoted build number
deleteBuildsInRange("1-${lastBuildNumber}", releaseVersion, jij, artifactorySettings)
//delete all the matching builds after the promoted build number
deleteBuildsInRange("${nextBuildNumber}-${jij.nextBuildNumber}", releaseVersion, jij, artifactorySettings)
println ""
println("- Builds have been successfully deleted for the above mentioned release: ${releaseVersion}")
println ""
I had the same question, and found this site. I took the idea and simplified it into a python script. You can find the script at: clean_artifactory.py on github
If you have any questions, please let me know. I just now cleaned up more than 10,000 snapshot artifacts!
Some of the features:
DryRun
Can dictate a time_delay; builds last_updated within that time frame will not be removed.
Specify a targeted group such as com/foo/bar, and all snapshots within it.
Hope this helps!
Wondering if the following will help:
Blog:
http://browse.feedreader.com/c/Gridshore/11546011
and
Script: https://github.com/jettro/small-scripts/blob/master/groovy/artifactory/Artifactory.groovy
package artifactory
import groovy.text.SimpleTemplateEngine
import groovyx.net.http.RESTClient
import net.sf.json.JSON
/**
* This groovy class is meant to be used to clean up your Atifactory server or get more information about it's
* contents. The api of artifactory is documented very well at the following location
* {@see http://wiki.jfrog.org/confluence/display/RTF/Artifactory%27s+REST+API}
*
* At the moment there is one major use of this class, cleaning your repository.
*
* Reading data about the repositories is done against /api/repository, if you want to remove items you need to use
* '/api/storage'
*
* Artifactory returns a strange Content Type in the response. We want to use a generic JSON library. Therefore we need
* to map the incoming type to the standard application/json. An example of the mapping is below, all the other
* mappings can be found in the obtainServerConnection method.
* 'application/vnd.org.jfrog.artifactory.storage.FolderInfo+json' => server.parser.'application/json'
*
* The class makes use of a config object. The config object is a map with a minimum of the following fields:
* def config = [
* server: 'http://localhost:8080',
* repository: 'libs-release-local',
* versionsToRemove: ['/3.2.0-build-'],
* dryRun: true]
*
* The versionsToRemove is an array of strings that are the start of builds that should be removed. To give an idea of
* the build numbers we use: 3.2.0-build-1 or 2011.10-build-1. The -build- is important for the solution. This is how
* we identify an artifact instead of a group folder.
*
* The final option to notice is the dryRun option. This way you can get an overview of what will be deleted. If set
* to false, it will delete the selected artifacts.
*
* Usage example
* -------------
* def config = [
* server: 'http://localhost:8080',
* repository: 'libs-release-local',
* versionsToRemove: ['/3.2.0-build-'],
* dryRun: false]
*
* def artifactory = new Artifactory(config)
*
* def numberRemoved = artifactory.cleanArtifactsRecursive('nl/gridshore/toberemoved')
*
* if (config.dryRun) {
*     println "$numberRemoved folders would have been removed."
* } else {
*     println "$numberRemoved folders were removed."
* }
*
* @author Jettro Coenradie
*/
private class Artifactory {
def engine = new SimpleTemplateEngine()
def config
def Artifactory(config) {
this.config = config
}
/**
* Print information about all the available repositories in the configured Artifactory
*/
def printRepositories() {
def server = obtainServerConnection()
def resp = server.get(path: '/artifactory/api/repositories')
if (resp.status != 200) {
println "ERROR: problem with the call: " + resp.status
System.exit(-1)
}
JSON json = resp.data
json.each {
println "key :" + it.key
println "type : " + it.type
println "descritpion : " + it.description
println "url : " + it.url
println ""
}
}
/**
* Return information about the provided path for the configured artifactory and server.
*
* @param path String representing the path to obtain information for
*
* @return JSON object containing information about the specified folder
*/
def JSON folderInfo(path) {
def binding = [repository: config.repository, path: path]
def template = engine.createTemplate('''/artifactory/api/storage/$repository/$path''').make(binding)
def query = template.toString()
def server = obtainServerConnection()
def resp = server.get(path: query)
if (resp.status != 200) {
println "ERROR: problem obtaining folder info: " + resp.status
println query
System.exit(-1)
}
return resp.data
}
/**
* Recursively removes all folders containing builds that start with the configured paths.
*
* @param path String containing the folder to check and use the childs to recursively check as well.
* @return Number with the amount of folders that were removed.
*/
def cleanArtifactsRecursive(path) {
def deleteCounter = 0
JSON json = folderInfo(path)
json.children.each {child ->
if (child.folder) {
if (isArtifactFolder(child)) {
config.versionsToRemove.each {toRemove ->
if (child.uri.startsWith(toRemove)) {
removeItem(path, child)
deleteCounter++
}
}
} else {
if (!child.uri.contains("ro-scripts")) {
deleteCounter += cleanArtifactsRecursive(path + child.uri)
}
}
}
}
return deleteCounter
}
private RESTClient obtainServerConnection() {
def server = new RESTClient(config.server)
server.parser.'application/vnd.org.jfrog.artifactory.storage.FolderInfo+json' = server.parser.'application/json'
server.parser.'application/vnd.org.jfrog.artifactory.repositories.RepositoryDetailsList+json' = server.parser.'application/json'
return server
}
private def isArtifactFolder(child) {
child.uri.contains("-build-")
}
private def removeItem(path, child) {
println "folder: " + path + child.uri + " DELETE"
def binding = [repository: config.repository, path: path + child.uri]
def template = engine.createTemplate('''/artifactory/$repository/$path''').make(binding)
def query = template.toString()
if (!config.dryRun) {
def server = new RESTClient(config.server)
server.delete(path: query)
}
}
}
The Artifactory REST API call would be something like the following (I'm not sure):
Note this line: def artifactSearchUri = "api/build/${jobName}/${buildNumber}"
import groovy.json.*
def artifactoryURL= properties["jenkins.ARTIFACTORY_URL"]
def artifactoryUser = properties["artifactoryUser"]
def artifactoryPassword = properties["artifactoryPassword"]
def authString = "${artifactoryUser}:${artifactoryPassword}".getBytes().encodeBase64().toString()
def jobName = properties["jobName"]
def buildNumber = properties["buildNumber"]
def artifactSearchUri = "api/build/${jobName}/${buildNumber}"
def conn = "${artifactoryURL}/${artifactSearchUri}".toURL().openConnection()
conn.setRequestProperty("Authorization", "Basic " + authString);
println "Searching artifactory with: ${artifactSearchUri}"
def searchResults
if( conn.responseCode == 200 ) {
searchResults = new JsonSlurper().parseText(conn.content.text)
} else {
throw new Exception ("Failed to find the build info for ${jobName}/${buildNumber}: ${conn.responseCode} - ${conn.responseMessage}")
}
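For the delete case asked about in the question, the same connection pattern should work against the DELETE endpoint. This is an untested sketch of mine, not part of the original answer; buildNumbersToDelete is a hypothetical property holding a comma-separated list such as "129,130,131":
def buildNumbersToDelete = properties["buildNumbersToDelete"]
def deleteUri = "api/build/${jobName}?buildNumbers=${buildNumbersToDelete}&artifacts=1"
def deleteConn = "${artifactoryURL}/${deleteUri}".toURL().openConnection()
deleteConn.setRequestProperty("Authorization", "Basic " + authString)
deleteConn.setRequestMethod("DELETE")
// the scriptler script above treats anything other than 200 as a failure
if( deleteConn.responseCode != 200 ) {
    println "Failed to delete builds ${buildNumbersToDelete} of ${jobName}: ${deleteConn.responseCode} - ${deleteConn.responseMessage}"
}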
I had similar needs and used the script by Jettro (above) as a starting point to manage our artifacts: it marks their state as a property (e.g. tested, releasable, production) and then deletes old artifacts based on the number of artifacts in each state.
The script file itself and useful adjunct files can be obtained from:
https://github.com/brianpcarr/ArtifactoryCurator
The readme is:
Invoke with:
groovy ArtifactoryProcess.groovy [--dry-run] [--full-log] --function <func> --value <val> --web-server http://YourWebServer --repository yourRepoName --domain <com/YourOrg> Version1 ...
where:
--domain domain : Name of the domain to scan.
--dry-run : Don't change anything; just list what would be done
--full-log : Log miscellaneous steps for processing artifacts
--function function : function to perform on artifacts
--maxInState maxInState : name of csv file with states and max counts, optional
--must-have mustHave : property required before applying delete, mark,
download or clear, optional
--password password : Password to use to access Artifactory server.
--repository repoName : Name of the repository to scan.
--targetDir targetDir : target directory for downloaded artifacts
--userName userName : userName to use to access Artifactory server
--value value : value to use with function above, often required
--web-server webServer : URL to use to access Artifactory server
Example: groovy ArtifactoryCleanup.groovy --domain domain --dry-run --full-log --function function --maxInState maxInState.csv --must-have mustHave --password password --repository repoName --targetDir targetDir --userName userName --value value --web-server webServer 1.0.1 1.0.2
Supported functions include [clear, delete, config, download, mark, repoPrint]
Columns in config csv files can be [repoName, targetDir, maxInState, domain, value, userName, mustHave, webServer, password, function]
The ArtifactoryProcess script can be used in a couple of main modes as well as
a sort of hyper mode.
The first mode is to mark sets of artifacts as being in a particular state, e.g.
groovy.bat ArtifactoryProcess.groovy --function mark --value production --web-server http://YourWebServer/ --repository yourRepoName --domain <com.YourOrg> --userName fill-in-userID --password fill-in-password 1.0.45-zdf
would mark all artifacts in yourRepoName with a version of 1.0.45-zdf as
being in production.
The other mode is to cleanup old artifacts. This could be done in two stages
where previously marked artifacts are deleted and then additional artifacts
would be marked for deletion on the next run.
Whether the cleanup is done in two stages or one, the different states in which
an artifact can be are defined in a comma-separated value (csv) file which has
the name of the state and the number of artifacts to retain in that state. The
last entry in the MaxInState.csv file is an unnamed state and is the maximum
number of otherwise unmarked artifacts should be retained.
There is also a hyper-mode (function config) where each step of a clean up is
read from a comma separated value (csv) file. In this case the first line will
name the parameters which are to be specified and the values for each step will
be in the following lines. It is recommended that parameters like user ID and
password be passed on the command line, not in the config file.
The script can be run from the git root as suggested above. However, this
requires that Groovy 2.3 or higher be installed (support for parameters to
closures was added then and is required for the closure implementation used),
which in turn requires that your JVM be at least 1.7. If you do not wish to
install Groovy on your server, you can comment out the @Grapes sections at the
top (oh, for the conditional compilation of the C days gone by) and build the
required jar file with 'gradlew build'. You can then run the utility with
something like:
java -jar build/libs/artifactoryProcess-run.jar --dry-run --full-log --function mark --value tested --web-server http://YourWebServer/ --repository yourRepoName --domain <com.YourOrg> --userName fill-in-userID --password fill-in-password 1.0.45-zdf
The main script is:
package artifactoryProcess
import com.xlson.groovycsv.CsvIterator
import groovy.util.logging.Log
import org.jfrog.artifactory.client.Artifactory
import org.jfrog.artifactory.client.Repositories
import org.jfrog.artifactory.client.model.impl.RepositoryTypeImpl
import org.jfrog.artifactory.client.DownloadableArtifact;
import org.jfrog.artifactory.client.ItemHandle;
import org.jfrog.artifactory.client.PropertiesHandler;
import org.jfrog.artifactory.client.model.Folder
import org.kohsuke.args4j.Argument;
import org.kohsuke.args4j.CmdLineParser;
import org.kohsuke.args4j.CmdLineException;
import org.kohsuke.args4j.ExampleMode;
import org.kohsuke.args4j.Option;
import groovy.transform.stc.ClosureParams;
import groovy.transform.stc.SimpleType;
import org.jfrog.artifactory.client.ArtifactoryClient;
import org.jfrog.artifactory.client.RepositoryHandle;
import com.xlson.groovycsv.CsvParser;
@Grapes([
@GrabResolver(name='jcenter', root='http://jcenter.bintray.com/', m2Compatible=true),
@Grab(group='net.sf.json-lib', module='json-lib', version='2.4', classifier='jdk15' ),
@Grab(group='org.codehaus.groovy.modules.http-builder', module='http-builder', version='0.7'),
@Grab( group='com.xlson.groovycsv', module='groovycsv', version='1.0' ),
@GrabExclude(group='org.codehaus.groovy', module='groovy-xml')
])
@Grapes( [
@Grab(group='org.kohsuke.args4j', module='args4j-maven-plugin', version='2.0.22'),
@GrabExclude(group='org.codehaus.groovy', module='groovy-xml')
])
@Grapes([
@Grab(group='org.jfrog.artifactory.client', module='artifactory-java-client-services', version='0.13'),
@GrabExclude(group='org.codehaus.groovy', module='groovy-xml')
])
/**
* This groovy class is meant to mark artifacts for release and clean up old versions.
*
* The first mode is to mark artifacts with properties such as tested, releasable and production.
*
* The second mode could be to mark artifacts for removal based on FIFO counts of artifacts in the
* states defined in the maxInState.csv file provided. If you say want at most 5 versions of an
* artifact in the production state and there are more than that, then the oldest versions could be
* marked for removal.
*
* The third mode could be the actual deletion of any artifacts which were marked for removal.
* The delay is to allow human intervention before wholesale deletion.
*
* The versionsToUse is an array of strings that are the start of builds that should be processed.
*
* There are two additional options. The first is the dryRun option. This way you can get
* an overview of what will be processed. If specified, no artifacts will be altered.
*
* Usage example
* groovy ArtifactoryProcess.groovy --dry-run --function mark --value production --must-have releasable --web-server http://yourWebServer/artifactory/ --domain <com.YourOrg> --repository libs-release-prod 1.0.1 1.0.2
*
* @author Brian Carr (snippets from Jettro Coenradie, David Carr and others)
*/
class ArtifactoryProcess {
public static final Set< String > validFunctions = [ 'mark', 'delete', 'clear', 'download', 'config', 'repoPrint' ];
public static final Set< String > validParameters = [ 'function', 'value', 'mustHave', 'targetDir', 'maxInState', 'webServer', 'repoName', 'domain', 'userName', 'password' ];
@Option(name='--dry-run', usage='Don\'t change anything; just list what would be done')
boolean dryRun;
@Option(name='--full-log', usage='Log miscellaneous steps for processing artifacts')
boolean fullLog;
// eg --function mark
@Option(name='--function', metaVar='function', usage="function to perform on artifacts")
String function;
// eg --value production
@Option(name='--value', metaVar='value', usage="value to use with function above, often required")
String value;
// eg --must-have releasable
@Option(name='--must-have', metaVar='mustHave', usage="property required before applying delete, mark, download or clear, optional")
String mustHave;
// eg --targetDir d:/temp/bin
@Option(name='--targetDir', metaVar='targetDir', usage="target directory for downloaded artifacts")
String targetDir;
// eg --maxInState MaxInState.csv
@Option(name='--maxInState', metaVar='maxInState', usage="name of csv file with states and max counts, optional")
String maxInState;
// eg --web-server 'http://artifactory01/artifactory/'
@Option(name='--web-server', metaVar='webServer', usage='URL to use to access Artifactory server')
String webServer;
// eg --repository 'libs-release-prod'
@Option(name='--repository', metaVar='repoName', usage='Name of the repository to scan.')
String repoName;
// eg --domain 'org/apache'
@Option(name='--domain', metaVar='domain', usage='Name of the domain to scan.')
String domain;
// eg --userName cleaner
@Option(name='--userName', metaVar='userName', usage='userName to use to access Artifactory server')
String userName;
// eg --password SomePswd
@Option(name='--password', metaVar='password', usage='Password to use to access Artifactory server.')
String password;
@Argument
ArrayList<String> versionsToUse = new ArrayList<String>();
class PathAndDate{
String path;
Date dtCreated;
}
class StateRecord {
String state;
int cnt;
List< PathAndDate > pathAndDate;
}
@SuppressWarnings(["SystemExit", "CatchThrowable"])
static void main( String[] args ) {
try {
new ArtifactoryProcess().doMain( args );
} catch (Throwable throwable) {
// Java returns exit code 0 if it terminates because of an uncaught Throwable.
// That's bad if we have a process like Bamboo depending on errors being non-zero.
// Thus, we catch all Throwables and explicitly set the exit code.
println( "Unexpected error: ${throwable}" )
System.exit(1)
}
System.exit(0);
}
List< StateRecord > stateSet = [];
Artifactory srvr;
RepositoryHandle repo;
private int numProcessed = 0;
String firstFunction;
String lastConfig;
void doMain( String[] args ) {
CmdLineParser parser = new CmdLineParser( this );
try {
parser.parseArgument(args);
if( function == 'config' && value == null ) {
throw new CmdLineException("You must provide a config.csv file as the value if you specify the config function.");
}
firstFunction = function; // Flag in case we recurse into config files, where did we start
if( function == 'config' ) {
processConfig();
return;
} else {
checkParms();
}
} catch(CmdLineException ex) {
System.err.println(ex.getMessage());
System.err.println();
System.err.println("groovy ArtifactoryProcess.groovy [--dry-run] [--full-log] --function <func> --value <val> --web-server http://YourWebServer --repository libs-release-prod --domain <com/YourOrg> Version1 ...");
parser.printUsage(System.err);
System.err.println();
System.err.println(" Example: groovy ArtifactoryProcess.groovy"+parser.printExample(ExampleMode.ALL)+" 1.0.1 1.0.2");
System.err.println();
System.err.println(" Supported functions include ${validFunctions}" );
System.err.println();
System.err.println(" Columns in config csv files can be ${validParameters}" );
return;
}
String stateLims;
if( maxInState != null && maxInState.size() > 0 ) stateLims = "(using stateLims)" else stateLims = "(no stateLims)"
println( "Started processing of $function with ${(value==null)?mustHave:value} $stateLims on $webServer in $repoName/$domain with $versionsToUse." );
withClient { newClient ->
srvr = newClient;
if( function == 'repoPrint' ) printRepositories();
else {
processRepo();
}
}
}
def processRepo() {
numProcessed = 0; // Reset count from last repo.
repo = srvr.repository( repoName );
processArtifactsRecursive( domain );
if( dryRun ) {
println "$numProcessed folders would have been $function[ed] with $value.";
} else {
println "$numProcessed folders were $function[ed] with $value.";
}
}
def processConfig() {
File configCSV = new File( value );
lastConfig = value; // Record which csv file we have last dived into.
Artifactory mySrvr = srvr; // Each line of config could have a different web server, preserve connection in case recursing
configCSV.withReader {
CsvIterator csvIt = CsvParser.parseCsv( it );
for( csvRec in csvIt ) {
if (fullLog) println("Step is ${csvRec}");
Map cols = csvRec.properties.columns;
String func = csvRec.function;
def hasFunc = cols.containsKey( 'function' );
def has = cols.containsKey( 'targetDir' );
if( cols.containsKey( 'function' ) && !noValue( csvRec.function ) ) function = csvRec.function ;
if( cols.containsKey( 'value' ) && !noValue( csvRec.value ) ) value = csvRec.value ;
if( cols.containsKey( 'targetDir' ) && !noValue( csvRec.targetDir ) ) targetDir = csvRec.targetDir ;
if( cols.containsKey( 'maxInState' ) && !noValue( csvRec.maxInState ) ) maxInState = csvRec.maxInState;
if( cols.containsKey( 'webServer' ) && !noValue( csvRec.webServer ) ) webServer = csvRec.webServer ;
if( cols.containsKey( 'repoName' ) && !noValue( csvRec.repoName ) ) repoName = csvRec.repoName ;
if( cols.containsKey( 'domain' ) && !noValue( csvRec.domain ) ) repoName = csvRec.domain ;
if( cols.containsKey( 'userName' ) && !noValue( csvRec.userName ) ) userName = csvRec.userName ;
if( cols.containsKey( 'password' ) && !noValue( csvRec.password ) ) password = csvRec.password ;
if( cols.containsKey( 'mustHave' ) ) mustHave = csvRec.mustHave; // Can clear out mustHave value
checkParms();
withClient { newClient ->
srvr = newClient;
processRepo();
}
srvr = mySrvr; // Restore previous web server connection
}
}
}
def checkParms() {
if( !noValue( maxInState ) ) {
stateSet.clear(); // Throw away any previous states from last step
File stateFile = new File( maxInState );
def RC = stateFile.withReader {
CsvIterator csvFile = CsvParser.parseCsv( it );
for( csvRec in csvFile ) {
String state = csvRec.properties.values[ 0 ];
String strCnt = csvRec.properties.values[ 1 ];
if( fullLog ) println( "State ${state} allowed ${strCnt}" );
int count = 0;
if( strCnt.integer ) count = strCnt.toInteger();
if( count < 0 ) count = 0;
// Iterator lies and claims there is a next when there isn't. Force break on empty state.
stateSet.add( new StateRecord( state: state, cnt: count, pathAndDate: [] ) );
}
}
}
String prefix;
if( firstFunction == 'config' && function != 'config' ) {
prefix = "While processing ${lastConfig} encountered, ";
} else prefix = '';
if( !validFunctions.contains( function ) ) {
throw new CmdLineException( "${prefix}Unrecognized function ${function}, function is required and must be one of ${validFunctions}." );
}
if( function == 'mark' && noValue( value ) ) {
throw new CmdLineException( "${prefix}You must provide a value to mark with if you specify the mark function." );
}
if( function == 'clear' && noValue( value ) ) {
throw new CmdLineException( "${prefix}You must provide a value to clear with if you specify the clear function." );
}
if( function != 'repoPrint' && noValue( domain ) ) {
throw new CmdLineException( "${prefix}You must provide a domain to use with the ${function} function." );
}
if( function == 'download' ) {
if( noValue( targetDir ) ) targetDir = '.';
}
if( noValue( webServer ) || noValue( userName ) || noValue( password ) || noValue( repoName ) ) {
throw new CmdLineException( "${prefix}You must provide the webServer, userName, password and repository name values to use." );
}
if( versionsToUse.size() == 0 && stateSet.size() == 0 && function != 'repoPrint' ) {
throw new CmdLineException( "${prefix}You must provide maxInState or a list of artifacts / versions to act upon." );
}
}
Boolean noValue( var ) {
return var == null || var == '';
}
/**
* Print information about all the available repositories in the configured Artifactory
*/
def printRepositories() {
Repositories repos = srvr.repositories();
List repoList = repos.list( RepositoryTypeImpl.LOCAL );
for( it in repoList ) {
println "key :" + it.key
println "type : " + it.type
println "description : " + it.description
println "url : " + it.url
println ""
};
}
/**
* Recursively removes all folders containing builds that start with the configured paths.
*
* @param path String containing the folder to check and use the childs to recursively check as well.
* @return Number with the amount of folders that were processed.
*/
private int processArtifactsRecursive( String path ) {
ItemHandle item = repo.folder( path );
// def RC = item.isFolder(); This lies, always returns true even for a file, go figure!
// def RC = path.endsWith('.xml'); // item.info() fails for simple files, go figure!
if( !path.endsWith( '.xml' ) &&
!path.endsWith( '.jar' ) &&
item.isFolder() ) {
Folder fldr;
try{
fldr = item.info()
} catch( Exception e ) {
println( "Error accessing $webServer/$repoName/$path" );
throw( e );
};
for( kid in fldr.children ) {
boolean processed = false;
if( stateSet.size() > 0 ) {
if( isEndNode( kid.uri )) {
processed = groupFolders( path + kid.uri );
}
} else {
versionsToUse.find { version ->
if( kid.uri.startsWith( '/' + version ) ) {
numProcessed += processItem( path + kid.uri );
return true; // Once we find a match, no others are interesting, we are outta here
} else return false; // Just formalize the on to next iterator
}
}
if( !processed ) {
processArtifactsRecursive( path + kid.uri );
}
}
}
/* If we are counting number in each state, our lists should be all set now */
if( stateSet.size() > 0 ) {
processSet();
}
return numProcessed;
}
private boolean processedThis( String vrsn, kid ) {
if( kid.uri.startsWith('/' + vrsn )) {
numProcessed += processItem( vrsn + kid.uri );
return true; // Once we find a match, no others are interesting, we are outta here
} else return false; // Just formalize the on to next iterator
}
// True if nodeName is of form int.int.other, could be one line, but how would you debug it.
private boolean isEndNode( String nodeName ){
int firstDot = nodeName.indexOf( '.' );
if( firstDot <= 1 ) return false; // nodeName starts with '/' which is ignored
int secondDot = nodeName.indexOf( '.', firstDot + 1 );
if( secondDot <= 0 ) return false;
String firstInt = nodeName.substring( 1, firstDot ); // nodeName starts with '/' which is ignored
if( !firstInt.isInteger() ) return false;
String secondInt = nodeName.substring( firstDot + 1, secondDot );
if( secondInt.isInteger() ) return true;
return false;
}
private boolean groupFolders( String path ) {
Map<String, List<String>> props;
stateSet.find { rec ->
ItemHandle folder = repo.folder( path );
if( rec.state.size() > 0 ) {
props = folder.getProperties( rec.state );
}
if( rec.state.size() <= 0 || props.size() > 0 ) {
PathAndDate nodePathDate = new PathAndDate();
nodePathDate.path = path;
nodePathDate.dtCreated = folder.info().lastModified;
rec.pathAndDate.add( nodePathDate ); // process this one
return true; // No others are interest, break out of iterator
} else return false; // On to next iterator
}
return true; // We always process all nodes which are end nodes
}
private boolean processSet() {
for( set in stateSet ) {
int del = set.cnt;
if( set.pathAndDate.size() < del ) {
del = set.pathAndDate.size() }
else {
set.pathAndDate.sort() { a,b -> b.dtCreated <=> a.dtCreated }; // Sort newest first to preserve newest
}
while( del > 0 ) {
set.pathAndDate.remove( 0 );
del--;
}
while( set.pathAndDate.size() > 0 ) {
numProcessed += processItem( set.pathAndDate[ 0 ].path );
set.pathAndDate.remove( 0 );
}
}
return true;
}
private int processItem( String path ) {
int retVal = 0;
if( fullLog ) println "Processing folder: ${path}, ${function} with ${value}.";
def RC;
ItemHandle folder = repo.folder( path );
Map<String, List<String>> props;
boolean hasRqrd = true;
if( !noValue( mustHave ) ) {
props = folder.getProperties( mustHave );
if( props.size() > 0 ) hasRqrd = true; else hasRqrd = false; // I like this better than ternary operator, you?
}
if( !hasRqrd ) return retVal;
switch( function ) {
case 'delete':
if( !dryRun ) RC = repo.delete( path );
retVal++;
break;
case 'download':
if( folder.isFolder() ) {
Folder item = folder.info();
item.children.find() { kid ->
if( kid.uri.endsWith('.jar') ) {
DownloadableArtifact DA = repo.download( path + kid.uri );
InputStream dlJar = DA.doDownload(); // Open Source
FileWriter lclJar = new FileWriter( targetDir + kid.uri, false ); // Open Dest
for( id in dlJar ) { lclJar.write( id ); } // Copy contents
if( fullLog ) println( "Downloaded ${path + kid.uri} to ${targetDir + kid.uri}." );
retVal++;
return true;
}
}
}
break;
case 'mark':
props = folder.getProperties( value );
if( props.size() == 0 ) {
PropertiesHandler item = folder.properties();
PropertiesHandler PH = item.addProperty( value, 'true' );
if( !dryRun ) RC = PH.doSet();
retVal++;
}
break;
case 'clear':
props = folder.getProperties( value );
if( props.size() == 1 ) {
if( !dryRun ) RC = folder.deleteProperty( value ); // Null return is success, go figure!
retVal++;
}
break;
default:
println( "Unknown function $function with $value encountered on ${path}.")
}
if( retVal > 0 ) println "Completed $function on $path with ${(value==null)?mustHave:value}.";
return retVal;
}
private <T> T withClient( @ClosureParams( value = SimpleType, options = "org.jfrog.artifactory.client.Artifactory" ) Closure<T> closure ) {
def client = ArtifactoryClient.create( "${webServer}artifactory", userName, password )
try {
return closure( client )
} finally {
client.close()
}
}
}

Get variable dynamically

Is there any way to reference variables dynamically like methods? Here is the Groovy example of referencing a method dynamically:
class Dog {
def bark() { println "woof!" }
def sit() { println "(sitting)" }
def jump() { println "boing!" }
}
def doAction( animal, action ) {
animal."$action"() //action name is passed at invocation
}
def rex = new Dog()
doAction( rex, "bark" ) //prints 'woof!'
doAction( rex, "jump" ) //prints 'boing!'
... But doing something like this doesn't work:
class Cat {
def cat__PurrSound__List = ['a', 'b', 'c']
def cat__MeowSound__List = [1, 2, 3]
def someMethod(def sound) {
assert sound == "PurrSound"
def catSoundListStr = "cat__${obj.domainClass.name}__List"
assert catSoundListStr = "cat__PurrSound__List"
def catSoundList = "$catSoundListStr"
assert catSoundList == cat__PurrSound__List // fail
}
}
Yeah, so you can do:
def methodName = 'someMethod'
assert foo."$methodName"() == "asdf" // This works
and to get the object by name, you can do (in a Script at least):
// Cannot use `def` in a groovy-script when trying to do this
foo = new Foo()
def objectName = 'foo'
assert this."$objectName".someMethod() == "asdf"
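The same GString trick works for instance properties, which is what the Cat example seems to be after. Here is a small sketch; the property names come from the question, but the method body is my guess at the intent:
class Cat {
    def cat__PurrSound__List = ['a', 'b', 'c']
    def cat__MeowSound__List = [1, 2, 3]

    def someMethod(String sound) {
        // build the property name, then resolve it dynamically on `this`
        def catSoundListStr = "cat__${sound}__List"
        return this."$catSoundListStr"
    }
}

def cat = new Cat()
assert cat.someMethod("PurrSound") == ['a', 'b', 'c']
assert cat.someMethod("MeowSound") == [1, 2, 3]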